How to Use Cisco Packet Tracer Internet Cloud for Network Simulation
-
Cisco Packet Tracer is a network simulation and visualization tool that allows you to create and test various network scenarios. One of its features is the Internet Cloud, which can be used to emulate the Internet or other networks that are not directly accessible from your local network. In this article, we will show you how to use the Cisco Packet Tracer Internet Cloud for network simulation and explain the benefits and limitations of this feature.
Cisco Packet Tracer Internet Cloud is a device that can be added to your network topology in Cisco Packet Tracer. It has two main functions: DSL and PT-Cloud.
-
-
The DSL function allows you to connect your network devices to a DSL modem, which can then communicate with the Internet Cloud. You can configure the DSL settings, such as username, password, and encapsulation type, on the Internet Cloud device.
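On the router that connects through the DSL modem, the real-world counterpart of these settings is a PPPoE client configuration. The following is a minimal, hypothetical Cisco IOS-style sketch rather than anything taken from the Packet Tracer dialogs; the interface names, dialer number, and the user1/pass1 credentials are placeholders:

```
! Minimal PPPoE client sketch on the customer router (hypothetical values)
interface FastEthernet0/0
 pppoe enable
 pppoe-client dial-pool-number 1
 no shutdown
!
interface Dialer1
 encapsulation ppp
 ip address negotiated
 dialer pool 1
 ppp chap hostname user1
 ppp chap password pass1
```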
-
The PT-Cloud function allows you to create custom routes between different network segments that are connected to the Internet Cloud. You can specify the source and destination IP addresses and subnet masks for each route on the Internet Cloud device.
-
-
By using these functions, you can simulate various network scenarios that involve the Internet or other networks that are not directly connected to your local network. For example, you can create a VPN tunnel between two routers that are separated by the Internet Cloud, or you can test the connectivity and performance of your network devices over different network paths.
-
How to Use Cisco Packet Tracer Internet Cloud?
-
To use Cisco Packet Tracer Internet Cloud for network simulation, you need to follow these steps:
-
-
Open Cisco Packet Tracer and create a new network topology or open an existing one.
-
Drag and drop the Internet Cloud device from the WAN Emulation section of the device toolbar to your workspace.
-
Connect your network devices to the Internet Cloud device using copper straight-through cables or fiber optic cables. You can use any of the eight ports on the Internet Cloud device.
-
Double-click on the Internet Cloud device to open its configuration window.
-
Select the DSL tab and configure the DSL settings for each port that is connected to a DSL modem. You can specify the username, password, encapsulation type, and service name for each port. You can also enable or disable NAT on each port.
-
Select the PT-Cloud tab and configure the custom routes for each network segment that is connected to the Internet Cloud. You can specify the source and destination IP addresses and subnet masks for each route. You can also enable or disable ICMP on each route.
-
Click OK to save your configuration and close the window.
-
Test your network simulation by using ping, traceroute, or other commands on your network devices (sample commands are shown after this list). You should be able to communicate with other devices that are connected to the Internet Cloud according to your configuration.
-
-
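For the final step, a quick way to run these tests is from a PC's Desktop > Command Prompt in Packet Tracer. A minimal sketch, assuming 203.0.113.10 as a placeholder for whatever address you assigned on the far side of the Internet Cloud:

```
PC> ping 203.0.113.10
PC> tracert 203.0.113.10
```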
What are the Benefits and Limitations of Cisco Packet Tracer Internet Cloud?
-
Cisco Packet Tracer Internet Cloud has some benefits and limitations that you should be aware of before using it for network simulation. Here are some of them:
-
-
-
The benefits of Cisco Packet Tracer Internet Cloud are:
-
-
It allows you to simulate various network scenarios that involve the Internet or other networks that are not directly accessible from your local network.
-
It gives you more control over the network parameters and conditions that affect your network simulation.
-
It helps you to learn and practice networking concepts and skills in a realistic and interactive way.
-
-
-
The limitations of Cisco Packet Tracer Internet Cloud are:
-
-
It only emulates the Internet: the Internet Cloud cannot give your topology real connectivity to outside networks or live services, so everything you reach through it still lives inside the simulation.
-
It supports only a limited set of WAN technologies and configuration options compared with real carrier equipment, so some real-world scenarios cannot be reproduced exactly.
-
It does not model real-world conditions such as actual latency, congestion, or provider policies, so connectivity and performance results are only approximations of real network behavior.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FS2004 - Wilco Feelthere CRJ Retail CD - SERIAL Needed ! TOP Download.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FS2004 - Wilco Feelthere CRJ Retail CD - SERIAL Needed ! TOP Download.md
deleted file mode 100644
index 51e6d586c696ccd32e3f7582dae442c766281560..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/FS2004 - Wilco Feelthere CRJ Retail CD - SERIAL Needed ! TOP Download.md
+++ /dev/null
@@ -1,244 +0,0 @@
-
-
-
-
FS2004 - Wilco Feelthere CRJ Retail CD - SERIAL Needed ! Download
-
-
-
If you are a fan of flight simulation games, you probably know about FS2004 or Microsoft Flight Simulator 2004: A Century of Flight. It is one of the most popular and realistic flight simulators ever created. But did you know that you can enhance your flying experience with add-ons that provide new aircraft models, scenery, sounds, and more? One of the best add-ons for FS2004 is the CRJ Nextgen by Wilco Publishing and FeelThere. It is a package that includes three variants of the CRJ regional jet: CRJ-700, CRJ-900, and CRJ-1000. In this article, we will tell you everything you need to know about this add-on, why you need a serial number to use it, and where you can download it from. Let's get started!
-
What is FS2004?
FS2004, or Microsoft Flight Simulator 2004: A Century of Flight, is a flight simulation game developed by Microsoft and released in 2003. It is the ninth major release in the Microsoft Flight Simulator series and the last one to run on Windows 98 and Windows Me. It features a dynamic weather system, interactive air traffic control, and 3D virtual cockpits for some aircraft.
-
FS2004 covers the entire world with over 24,000 airports, 33 cities, and 45 detailed regions. It also features over 70 aircraft, ranging from historical planes like the Wright Flyer and the Spirit of St. Louis, to modern jets like the Boeing 747 and the Concorde. It also allows users to create and share their own custom aircraft, scenery, missions, and more.
-
FS2004 is widely regarded as one of the best and most realistic flight simulators ever made. It has received many awards and accolades from critics and fans alike. It has also spawned a large and active community of flight simulation enthusiasts who continue to enjoy and improve the game with various add-ons and modifications.
-
-
-
-
What is Wilco Feelthere CRJ?
-
-
-
Wilco Feelthere CRJ or CRJ Nextgen is an add-on for FS2004 that provides three variants of the CRJ regional jet: CRJ-700, CRJ-900, and CRJ-1000. The CRJ or Canadair Regional Jet is a family of twin-engine, single-aisle jet airliners designed and manufactured by Bombardier Aerospace. It is one of the most successful and widely used regional jets in the world, with over 2,000 units delivered to more than 100 operators in over 50 countries.
-
The add-on was developed by Wilco Publishing and FeelThere, two leading companies in the flight simulation industry. Wilco Publishing is a French company that specializes in creating high-quality add-ons for various flight simulators, such as Airbus Series, Boeing Series, ERJ Series, etc. FeelThere is a Hungarian company that focuses on developing realistic and complex aircraft systems, such as Embraer Phenom 100, Embraer E-Jets Series, etc.
-
The add-on is compatible with FS2004 and offers a high level of realism and immersion for users who want to fly the CRJ aircraft. It features high-definition models, interactive virtual cockpits, realistic flight management computers, immersive audio experience, and more.
-
-
-
Features of Wilco Feelthere CRJ
-
-
-
The add-on offers many features that enhance the flying experience of the CRJ aircraft. Some of the main features are:
-
-
High-definition models: The add-on includes three highly detailed models of the CRJ aircraft: CRJ-700 (70 seats), CRJ-900 (90 seats), and CRJ-1000 (100 seats). Each model has accurate dimensions, shapes, textures, liveries, animations, lighting effects, etc.
-
Interactive virtual cockpit: The add-on provides a fully functional virtual cockpit for each model of the CRJ aircraft. The virtual cockpit has realistic gauges, displays, switches, buttons, knobs, levers, etc. that can be operated with the mouse or keyboard. The virtual cockpit also has a head-up display (HUD), a weather radar (WX), a traffic collision avoidance system (TCAS), etc.
-
Realistic flight management computer: The add-on includes a realistic flight management computer (FMC) for each model of the CRJ aircraft. The FMC is a device that helps pilots plan and execute flights by providing information such as route data, fuel calculations, performance data, etc. The FMC can be programmed with waypoints, airways, SIDs, STARs, etc. The FMC can also be updated with real-time data from online sources, such as Navigraph or NavDataPro.
-
Immersive audio experience: The add-on delivers a high-quality audio experience for each model of the CRJ aircraft. The audio includes realistic engine sounds, cockpit sounds, cabin sounds, environmental sounds, etc. The audio also supports 3D sound positioning and spatialization, as well as dynamic sound effects based on speed, altitude, weather, etc.
-
-
-
-
Specifications of Wilco Feelthere CRJ
-
-
-
The add-on provides accurate and detailed specifications for each model of the CRJ aircraft. The specifications include dimensions, weights, capacities, performance, range, etc. The specifications are based on the official data from Bombardier Aerospace and can be compared in the following table:
-
| Specification | CRJ-700 | CRJ-900 | CRJ-1000 |
|---|---|---|---|
| Length | 32.51 m (106 ft 8 in) | 36.40 m (119 ft 4 in) | 39.13 m (128 ft 4 in) |
| Wingspan | 23.24 m (76 ft 3 in) | 24.85 m (81 ft 6 in) | 26.16 m (85 ft 10 in) |
| Height | 7.57 m (24 ft 10 in) | 7.51 m (24 ft 7 in) | 7.51 m (24 ft 7 in) |
| Maximum takeoff weight | 32,999 kg (72,750 lb) | 38,330 kg (84,500 lb) | 41,640 kg (91,800 lb) |
| Fuel capacity | 9,480 L (2,504 US gal) | 9,480 L (2,504 US gal) | 9,480 L (2,504 US gal) |
| Passengers | 70 (standard), 78 (maximum) | 90 (standard), 100 (maximum) | 100 (standard), 104 (maximum) |
| Cruise speed | Mach 0.78 (829 km/h; 515 mph) | Mach 0.78 (829 km/h; 515 mph) | Mach 0.78 (829 km/h; 515 mph) |
| Range | 3,148 km (1,700 nmi) | 3,385 km (1,828 nmi) | 3,057 km (1,650 nmi) |
| Engines | 2 × General Electric CF34-8C1 | 2 × General Electric CF34-8C5 | 2 × General Electric CF34-8C5A1 |
| Thrust | 56.4 kN (12,670 lbf) each | 62.3 kN (14,000 lbf) each | 63.4 kN (14,255 lbf) each |
-
-
Compatibility of Wilco Feelthere CRJ
-
-
-
The add-on is compatible with FS2004 and can be installed and run on any computer that meets the minimum system requirements for the game. The add-on is also compatible with other third-party software and hardware that enhance the flight simulation experience, such as:
-
-
VRinsight modules: The add-on supports the use of VRinsight modules, such as the CDU II panel, the MCP Combo panel, the Flight Master Yoke II, etc. These modules are hardware devices that provide realistic controls and displays for the CRJ aircraft.
-
Go Flight modules: The add-on supports the use of Go Flight modules, such as the GF-MCP Pro panel, the GF-P8 push button module, the GF-T8 toggle switch module, etc. These modules are hardware devices that provide additional switches and buttons for the CRJ aircraft.
-
Track IR: The add-on supports the use of Track IR, a device that tracks the head movements of the user and translates them into corresponding movements of the virtual camera in the game. This allows the user to look around the cockpit and outside the aircraft in a natural and intuitive way.
-
-
-
-
Why do you need a serial for Wilco Feelthere CRJ?
-
-
-
A serial number is a unique code that is used to activate and register the add-on. The serial number is usually provided by the seller or distributor of the add-on when you purchase it. The serial number is required for two reasons:
-
-
To verify your purchase: The serial number is used to verify that you have purchased a legitimate copy of the add-on from an authorized source. This helps to prevent piracy and fraud.
-
To unlock all features: The serial number is used to unlock all features and functions of the add-on. Without a valid serial number, you will not be able to use some features of the add-on, such as online activation, updates, support, etc.
-
-
If you do not have a valid serial number for Wilco Feelthere CRJ, you will not be able to enjoy the full potential of the add-on. You will also risk violating the terms and conditions of use and facing legal consequences.
-
-
Where can you download Wilco Feelthere CRJ?
-
-
-
There are different sources and methods for downloading Wilco Feelthere CRJ for FS2004. Some of them are official and legal, while others are unofficial and illegal. The choice is yours, but we recommend that you always download from a trusted and authorized source to avoid any problems or risks. Here are some of the options for downloading Wilco Feelthere CRJ:
-
-
-
Official website
-
-
-
The best and safest way to download Wilco Feelthere CRJ is from the official website of Wilco Publishing or FeelThere. You can find the add-on on their online catalog and purchase it with a secure payment method, such as credit card, PayPal, etc. The price of the add-on is €29.95 (about $34) for the download version or €34.95 (about $40) for the boxed version.
-
After purchasing the add-on, you will receive an email with a download link and a serial number. You can then download the add-on as a ZIP file (about 500 MB) and extract it to your FS2004 folder. You will also need to activate the add-on with your serial number using an online or offline method.
-
The advantages of downloading from the official website are:
-
-
Quality and reliability: You can be sure that you are getting a high-quality and reliable product that has been tested and approved by the developers.
-
Support and updates: You can get access to technical support and customer service from the developers in case you have any issues or questions. You can also get free updates and patches for the add-on when they are available.
-
Legality and ethics: You can respect the intellectual property rights and hard work of the developers by paying for their product. You can also avoid any legal troubles or penalties that may arise from using pirated or illegal copies of the add-on.
-
-
-
-
Online stores
-
-
-
Another way to download Wilco Feelthere CRJ is from other online stores that sell flight simulation products, such as SimMarket, FlightSim.com, Aerosoft, etc. These online stores are authorized resellers of Wilco Publishing and FeelThere products and offer similar prices and payment methods as the official website.
-
After purchasing the add-on from an online store, you will receive an email with a download link and a serial number. You can then download the add-on as a ZIP file (about 500 MB) and extract it to your FS2004 folder. You will also need to activate the add-on with your serial number using an online or offline method.
-
The advantages of downloading from an online store are:
-
-
Variety and convenience: You can choose from a wide range of flight simulation products and compare prices and features among different online stores. You can also find discounts and deals on some products.
-
Security and trust: You can trust that you are getting a legitimate and safe product from a reputable and verified online store. You can also use secure payment methods and encryption technologies to protect your personal and financial information.
-
Legality and ethics: You can respect the intellectual property rights and hard work of the developers by paying for their product. You can also avoid any legal troubles or penalties that may arise from using pirated or illegal copies of the add-on.
-
-
-
Torrent sites
-
-
-
A third way to download Wilco Feelthere CRJ is from torrent sites that offer free or pirated copies of flight simulation products, such as The Pirate Bay, Kickass Torrents, RARBG, etc. These torrent sites are not authorized or endorsed by Wilco Publishing or FeelThere and offer illegal downloads of their products.
-
After downloading the add-on from a torrent site, you will get a ZIP file (about 500 MB) that contains the add-on files and a crack or keygen program. You will need to extract the add-on files to your FS2004 folder and run the crack or keygen program to generate a serial number and activate the add-on.
-
The disadvantages of downloading from a torrent site are:
-
-
Quality and reliability: You cannot be sure that you are getting a high-quality and reliable product that has not been tampered with or infected with malware. You may also encounter errors, bugs, or crashes while using the add-on.
-
Support and updates: You cannot get access to technical support and customer service from the developers in case you have any issues or questions. You also cannot get free updates and patches for the add-on when they are available.
-
Legality and ethics: You are violating the intellectual property rights and hard work of the developers by downloading their product without paying for it. You are also risking legal troubles or penalties that may arise from using pirated or illegal copies of the add-on.
-
-
-
-
How to install and activate Wilco Feelthere CRJ?
-
-
-
After downloading Wilco Feelthere CRJ from any source, you will need to install and activate it before you can use it. The installation and activation process is simple and straightforward, but it may vary depending on the source of your download. Here are the steps for installing and activating Wilco Feelthere CRJ:
-
-
-
How to install Wilco Feelthere CRJ?
-
-
-
The installation process depends on whether you have downloaded the add-on as an installation program or a ZIP file. Here are the steps for both methods:
-
-
Installation program: If you have downloaded the add-on as an installation program (usually named Setup.exe), you just need to double-click on it and follow the instructions on the screen. You will need to select your FS2004 folder as the destination folder for the add-on files. You will also need to agree to the terms and conditions of use and enter your name and email address.
-
ZIP file: If you have downloaded the add-on as a ZIP file (usually named CRJ_NextGen_FS2004.zip), you will need to extract it using a ZIP file extractor, such as WinZip, WinRAR, 7-Zip, etc. Extract the add-on files to your FS2004 folder (a scripted alternative is sketched after this list). You will also need to agree to the terms and conditions of use and enter your name and email address.
-
-
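If you prefer to script the ZIP extraction, a minimal Python sketch is shown below. The archive name comes from this article; the FS2004 installation path is an assumption and will differ on your machine:

```python
import zipfile

# Hypothetical paths: point the destination at your own FS2004 folder.
archive = "CRJ_NextGen_FS2004.zip"
fs2004_dir = r"C:\Program Files\Microsoft Games\Flight Simulator 9"

# Unpack all add-on files into the FS2004 folder.
with zipfile.ZipFile(archive) as zf:
    zf.extractall(fs2004_dir)
```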
After installing the add-on, you will see a new folder named "FeelThere" in your FS2004 folder. This folder contains all the files and folders related to the add-on, such as aircraft, gauges, manuals, sounds, etc.
-
-
-
How to activate Wilco Feelthere CRJ?
-
-
-
The activation process depends on whether you have downloaded the add-on from an official or unofficial source. Here are the steps for both methods:
-
-
Official source: If you have downloaded the add-on from an official source, such as the official website or an online store, you will need to activate it with your serial number using an online or offline method. Here are the steps for both methods:
-
-
Online method: If you have an internet connection, you can activate the add-on online by running the "Wilco Activation Tool" program that is located in your FS2004 folder. You will need to enter your serial number and click on "Activate". The program will connect to the activation server and verify your serial number. If your serial number is valid, you will see a message saying "Activation successful". You can then close the program and start FS2004.
-
Offline method: If you do not have an internet connection, you can activate the add-on offline by running the "Wilco Activation Tool" program that is located in your FS2004 folder. You will need to enter your serial number and click on "Generate". The program will generate an activation code that you will need to write down or copy. You will then need to go to the activation website (https://www.wilcopub.com/activation) on another device that has an internet connection. You will need to enter your serial number and the activation code and click on "Activate". The website will verify your serial number and activation code. If they are valid, you will see a message saying "Activation successful". You can then close the website and start FS2004.
-
-
Unofficial source: If you have downloaded the add-on from an unofficial source, such as a torrent site, you will need to activate it with a crack or keygen program that is included in the download. Here are the steps for using the crack or keygen program:
-
-
Crack program: If you have a crack program (usually named CRJ_NextGen_FS2004_Crack.exe), you just need to run it and click on "Crack". The program will automatically copy and replace some files in your FS2004 folder. You will see a message saying "Crack successful". You can then close the program and start FS2004.
-
Keygen program: If you have a keygen program (usually named CRJ_NextGen_FS2004_Keygen.exe), you just need to run it and click on "Generate". The program will generate a serial number that you will need to write down or copy. You will then need to run the "Wilco Activation Tool" program that is located in your FS2004 folder. You will need to enter the serial number and click on "Activate". The program will connect to the activation server and verify your serial number. If your serial number is valid, you will see a message saying "Activation successful". You can then close the program and start FS2004.
-
-
-
After activating the add-on, you will be able to use all features and functions of Wilco Feelthere CRJ for FS2004.
-
-
-
Conclusion
-
-
-
Wilco Feelthere CRJ is an amazing add-on for FS2004 that provides three variants of the CRJ regional jet: CRJ-700, CRJ-900, and CRJ-1000. It offers a high level of realism and immersion for users who want to fly the CRJ aircraft. It features high-definition models, interactive virtual cockpits, realistic flight management computers, immersive audio experience, and more. It also supports other third-party software and hardware that enhance the flight simulation experience, such as VRinsight modules, Go Flight modules, Track IR, etc.
-
To download Wilco Feelthere CRJ, you have different options: official website, online stores, or torrent sites. We recommend that you always download from a trusted and authorized source to avoid any problems or risks. To install and activate Wilco Feelthere CRJ, you just need to follow some simple steps depending on the source of your download.
-
We hope that this article has helped you learn more about Wilco Feelthere CRJ for FS2004 and how to download, install, and activate it. If you have any questions or comments, please feel free to contact us or leave a comment below. Happy flying!
-
-
-
FAQs
-
-
-
Here are some frequently asked questions and answers about Wilco Feelthere CRJ for FS2004:
-
-
Q: Can I use Wilco Feelthere CRJ with other flight simulators?
-
A: No, Wilco Feelthere CRJ is only compatible with FS2004. However, there are other versions of Wilco Feelthere CRJ for other flight simulators, such as FSX, P3D, etc.
-
Q: Can I use Wilco Feelthere CRJ with other add-ons?
-
A: Yes, Wilco Feelthere CRJ is compatible with most other add-ons for FS2004, such as scenery, weather, traffic, etc. However, some add-ons may cause conflicts or errors with Wilco Feelthere CRJ. In that case, you may need to adjust some settings or disable some add-ons.
-
Q: How can I update Wilco Feelthere CRJ?
-
A: If you have downloaded Wilco Feelthere CRJ from an official source, you can get free updates and patches for the add-on when they are available. You can check for updates on the official website of Wilco Publishing or FeelThere, or on the online store where you purchased the add-on. You can then download and install the updates following the instructions provided.
-
Q: How can I get support for Wilco Feelthere CRJ?
-
A: If you have downloaded Wilco Feelthere CRJ from an official source, you can get technical support and customer service from the developers. You can contact them by email, phone, or online form. You can also visit their forums and FAQs for more information and help.
-
Q: How can I uninstall Wilco Feelthere CRJ?
-
A: If you want to uninstall Wilco Feelthere CRJ, you can use the uninstall program that is located in your FS2004 folder. You just need to run the program and follow the instructions on the screen. You will also need to deactivate the add-on with your serial number using the "Wilco Activation Tool" program.
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Stumble Guys APK Mod 0.39 and Enjoy Unlimited Money and Unlocked Features.md b/spaces/1phancelerku/anime-remove-background/Download Stumble Guys APK Mod 0.39 and Enjoy Unlimited Money and Unlocked Features.md
deleted file mode 100644
index a5d996253c438784f665fed3e47ec57a0a5262bd..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Stumble Guys APK Mod 0.39 and Enjoy Unlimited Money and Unlocked Features.md
+++ /dev/null
@@ -1,78 +0,0 @@
-
-
Download Stumble Guys APK Mod 0.39: A Fun and Wacky Multiplayer Game
-
If you are looking for a fun and wacky multiplayer game that will make you laugh and scream, then you should try Stumble Guys. Stumble Guys is a hilarious online game where you have to compete with up to 32 players in various obstacle courses and challenges. You have to run, jump, slide, and dodge your way to the finish line, while avoiding being eliminated by other players or the environment. Sounds easy, right? Well, not so fast. The game is full of surprises and twists that will keep you on your toes and test your skills and reflexes.
Stumble Guys is a multiplayer game developed by Kitka Games and released in August 2020. It is inspired by popular TV shows like Wipeout and Takeshi's Castle, where contestants have to go through crazy and funny obstacle courses. The game has a colorful and cartoonish graphics style, with cute and customizable characters that you can dress up with different outfits and accessories. The game also has a catchy and upbeat soundtrack that matches the mood of the game.
-
Features of Stumble Guys
-
Stumble Guys has many features that make it a fun and addictive game to play with your friends or strangers online. Some of these features are:
-
-
Online multiplayer mode: You can join or create a room with up to 32 players and compete in various rounds of obstacle courses and mini-games. You can also chat with other players and make new friends.
-
Random and dynamic levels: The game has over 20 different levels that are randomly selected and change every time you play. You will never get bored or know what to expect next.
-
Creative and challenging obstacles: The game has a variety of obstacles that will challenge your skills and reflexes. You will have to deal with spinning platforms, swinging hammers, slippery slides, bouncing balls, flying fruits, and more.
-
Cute and customizable characters: You can choose from different characters and customize them with different outfits and accessories. You can also unlock more items as you play and level up.
-
-
How to play Stumble Guys
-
The gameplay of Stumble Guys is simple and intuitive. You just have to use the virtual joystick to move your character and the jump button to jump over obstacles or gaps. You have to reach the finish line before the time runs out or before you get eliminated by other players or the environment. You can also push or grab other players to slow them down or knock them off the course. The last player standing wins the game.
-
Why download Stumble Guys APK Mod 0.39?
-
If you want to enjoy Stumble Guys even more, then you should download the APK mod version 0.39 of the game. This version has some advantages over the original version that will make your gaming experience more fun and satisfying.
-
Benefits of Stumble Guys APK Mod 0.39
-
The benefits of downloading Stumble Guys APK mod 0.39 are:
-
-
All skins unlocked: You can access all the skins in the game without having to spend coins or gems. You can dress up your character with any outfit or accessory you want.
-
No ads: You can play the game without being interrupted by annoying ads that pop up every time you finish a round or level up.
-
No root required: You don't need to root your device to install the APK mod version of the game. You just need to enable unknown sources in your settings and follow the installation steps below.
-
-
How to download and install Stumble Guys APK Mod 0.39
-
If you want to download and install Stumble Guys APK mod 0.39 on your Android device, you just have to follow these simple steps:
-
-
Click on the download button below to download the APK file of the game.
-
Go to your device settings and enable unknown sources. This will allow you to install apps from sources other than the Google Play Store.
-
Locate the downloaded APK file in your file manager and tap on it to start the installation process (or sideload it from a computer, as sketched after this list).
-
Follow the instructions on the screen and wait for the installation to complete.
-
Launch the game and enjoy playing Stumble Guys with all skins unlocked and no ads.
-
-
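If the APK is on a computer and USB debugging is enabled on your phone, you can also sideload it with Android's adb tool instead of tapping the file on the device; the filename here is a hypothetical stand-in for whatever you downloaded:

```
adb install stumble-guys-mod-0.39.apk
```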
Download Stumble Guys APK Mod 0.39
-
Conclusion
-
Stumble Guys is a fun and wacky multiplayer game that will make you laugh and scream as you compete with other players in various obstacle courses and challenges. You can customize your character with different skins and accessories, and play with up to 32 players online. You can also download the APK mod version of the game to unlock all skins, remove ads, and install it without root. If you are looking for a game that will keep you entertained and amused, then you should try Stumble Guys today.
-
FAQs
-
Here are some frequently asked questions about Stumble Guys and its APK mod version:
-
-
Q: Is Stumble Guys free to play?
-
A: Yes, Stumble Guys is free to play. However, it has some in-app purchases that allow you to buy coins or gems to unlock more skins or items in the game.
-
Q: Is Stumble Guys safe to play?
-
A: Yes, Stumble Guys is safe to play. It does not contain any viruses or malware that can harm your device or data. However, you should always download the game from a trusted source like the Google Play Store or our website.
-
Q: Is Stumble Guys compatible with my device?
-
A: Stumble Guys is compatible with most Android devices that have Android 5.0 or higher. However, some older devices may experience some lag or performance issues due to the high graphics and animation of the game.
-
Q: How can I contact the developers of Stumble Guys?
-
A: You can contact the developers of Stumble Guys by sending them an email at support@kitkagames.com or by visiting their website at https://www.kitkagames.com/.
-
Q: How can I update Stumble Guys APK Mod 0.39?
-
A: You can update Stumble Guys APK Mod 0.39 by visiting our website regularly and downloading the latest version of the game. You can also enable notifications on our website to get notified when a new update is available.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download the Coolest and Trendiest mp3 Ringtones with Ringtone Download 3.md b/spaces/1phancelerku/anime-remove-background/Download the Coolest and Trendiest mp3 Ringtones with Ringtone Download 3.md
deleted file mode 100644
index 1540ca88c35ce6021d89ecf536558a7baddfec96..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download the Coolest and Trendiest mp3 Ringtones with Ringtone Download 3.md
+++ /dev/null
@@ -1,94 +0,0 @@
-
-
Ringtone Download 3: How to Get the Best Ringtones for Your Phone
-
Do you want to spice up your phone with some cool and trendy ringtones? Do you want to express your mood and personality with your ringtone? Do you want to have access to thousands of ringtones in different categories and genres? If you answered yes to any of these questions, then you need to check out ringtone download 3, the ultimate destination for all your ringtone needs.
-
Introduction
-
What are ringtones and why do they matter?
-
Ringtones are the sounds that your phone makes when someone calls you or when you receive a notification. They are an important part of your phone's customization and personalization, as they can make your phone stand out from the crowd and reflect your taste and preferences. Ringtones can also help you identify who is calling you without looking at your phone, or set different tones for different contacts or groups. Ringtones can also be a fun way to express yourself and have some fun with your phone.
Ringtone download 3 is a website and an app that allows you to download free mp3 ringtones for your mobile phones. It has a huge collection of ringtones uploaded by users and shared by other users. You can choose from over 52900 ringtones uploaded under various categories, such as Hindi, Tamil, Devotional, Music, Name, iPhone, etc. You can also upload your own ringtones and share them with others. Ringtone download 3 works by letting you listen to the preview of the ringtones and then download them with a simple click. You don't need to sign up or register to use ringtone download 3, and you can download as many ringtones as you want.
-
Benefits of using ringtone download 3
-
Access to a wide range of ringtones in different categories and genres
-
One of the main benefits of using ringtone download 3 is that you can access a wide range of ringtones in different categories and genres. Whether you are looking for Bollywood songs, Hollywood movies, pop music, classical music, instrumental music, devotional songs, baby sounds, animal sounds, funny sounds, or anything else, you can find it on ringtone download 3. You can also search for ringtones by keywords or browse through the popular or recent categories. You can also find ringtones that suit your mood, occasion, or personality.
-
Easy and fast download process with no sign up or registration required
-
Another benefit of using ringtone download 3 is that it has an easy and fast download process with no sign up or registration required. You don't need to create an account or provide any personal information to use ringtone download 3. You just need to visit the website or app, select the ringtones you want, and click on the download button. The ringtones will be saved to your phone in mp3 format, which is compatible with all types of phones. The download process is fast and smooth, and you can get your ringtones in seconds.
-
High-quality sound and compatibility with all types of phones
-
A third benefit of using ringtone download 3 is that it offers high-quality sound and compatibility with all types of phones. The ringtones on ringtone download 3 are in mp3 format, which is a common and widely used audio format that delivers clear and crisp sound. The ringtones are also compatible with all types of phones, whether they are Android, iOS, Windows, or any other operating system. You don't need to worry about the format or the size of the ringtones, as they will work on any phone.
-
Tips for choosing the best ringtone for your phone
-
Set a ringtone that reflects your personality and style
-
One of the tips for choosing the best ringtone for your phone is to set a ringtone that reflects your personality and style. Your ringtone is a way of expressing yourself and showing your taste and preferences. You can choose a ringtone that matches your mood, your hobbies, your interests, or your favorite things. For example, if you are a fan of sports, you can choose a ringtone that plays the theme song of your favorite team or player. If you are a fan of movies, you can choose a ringtone that plays a famous dialogue or a catchy tune from your favorite movie. If you are a fan of music, you can choose a ringtone that plays a song or a melody from your favorite artist or genre.
-
Avoid ringtones that are irritating or inappropriate for your surroundings
-
Another tip for choosing the best ringtone for your phone is to avoid ringtones that are irritating or inappropriate for your surroundings. You don't want to annoy or offend other people with your ringtone, especially in public places or professional settings. You should avoid ringtones that are too loud, too long, too vulgar, too violent, or too controversial. You should also avoid ringtones that are similar to emergency sounds, such as sirens, alarms, or horns. You should also consider the context and the occasion when choosing your ringtone. For example, if you are in a meeting, you should choose a ringtone that is subtle and discreet. If you are in a party, you should choose a ringtone that is fun and upbeat.
-
Pick a song or music that you like and enjoy
-
A third tip for choosing the best ringtone for your phone is to pick a song or music that you like and enjoy. Your ringtone should be something that makes you happy and relaxed when you hear it. You should choose a song or music that you know well and can sing along to. You should also choose a song or music that has a catchy and memorable hook or chorus that can easily be recognized by others. You should also choose a song or music that has a good quality and clarity of sound.
-
How to use ringtone download 3 to get your favorite ringtones
-
Visit the website or app of ringtone download 3 and browse through the categories
-
The first step to use ringtone download 3 to get your favorite ringtones is to visit the website or app of ringtone download 3 and browse through the categories. You can access the website by typing www.ringtone-download-3.com on your browser or by scanning the QR code on the homepage. You can also download the app from Google Play Store or Apple App Store by searching for "ringtone download 3". Once you open the website or app, you will see various categories of ringtones, such as Hindi, Tamil, Devotional, Music, Name, iPhone, etc. You can click on any category to see the list of ringtones available under it.
-
-
Listen to the preview of the ringtones and select the ones you want
-
The second step to use ringtone download 3 to get your favorite ringtones is to listen to the preview of the ringtones and select the ones you want. You can listen to the preview of any ringtone by clicking on the play button next to it. You can also see the name, duration, size, and rating of each ringtone. You can select as many ringtones as you want by clicking on the checkbox next to them.
-
Click on the download button and save the ringtones to your phone
-
The third step to use ringtone download 3 to get your favorite ringtones is to click on the download button and save the ringtones to your phone. Once you have selected all the ringtones you want, you can click on the download button at the bottom of the page. You will see a pop-up window that asks you to choose the location where you want to save the ringtones. You can choose any folder or directory on your phone or SD card. You can also rename the ringtones if you want. After you have chosen the location, click on the save button and wait for the download to complete. You will see a confirmation message that says "Download successful". You can then go to your phone's settings and set the ringtones as your default or custom ringtones.
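If you already know a ringtone's direct file URL, you can also fetch it with a few lines of Python; the URL below is a made-up placeholder, since the real link comes from the site's download button:

```python
from urllib.request import urlretrieve

# Hypothetical direct link; replace it with the URL behind the download button.
url = "https://www.ringtone-download-3.com/files/example.mp3"
urlretrieve(url, "example.mp3")  # saves the mp3 next to the script
```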
-
Conclusion
-
Ringtone download 3 is a great way to get the best ringtones for your phone. It offers a wide range of ringtones in different categories and genres, an easy and fast download process with no sign up or registration required, and high-quality sound and compatibility with all types of phones. You can also use some tips to choose the best ringtone for your phone, such as setting a ringtone that reflects your personality and style, avoiding ringtones that are irritating or inappropriate for your surroundings, and picking a song or music that you like and enjoy. You can also use ringtone download 3 to get your favorite ringtones by visiting the website or app, listening to the preview of the ringtones, and clicking on the download button. So, what are you waiting for? Visit ringtone download 3 today and get ready to rock your phone with some awesome ringtones.
-
FAQs
-
Q: Is ringtone download 3 free?
-
A: Yes, ringtone download 3 is completely free and does not charge any fees or subscriptions for downloading ringtones.
-
Q: How many ringtones can I download from ringtone download 3?
-
A: You can download as many ringtones as you want from ringtone download 3. There is no limit or restriction on the number of downloads.
-
Q: Can I upload my own ringtones to ringtone download 3?
-
A: Yes, you can upload your own ringtones to ringtone download 3 and share them with other users. You just need to click on the upload button on the homepage and follow the instructions.
-
Q: Can I rate and review the ringtones on ringtone download 3?
-
A: Yes, you can rate and review the ringtones on ringtone download 3 by clicking on the star icon and the comment icon next to each ringtone. You can also see the ratings and reviews of other users.
-
Q: Can I request a specific ringtone on ringtone download 3?
-
A: Yes, you can request a specific ringtone on ringtone download 3 by clicking on the request button on the homepage and filling out the form. You can also see the requests of other users and vote for them.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy Pixel Demolish Mod APK with Unlimited Money and Gear - No Root Required.md b/spaces/1phancelerku/anime-remove-background/Enjoy Pixel Demolish Mod APK with Unlimited Money and Gear - No Root Required.md
deleted file mode 100644
index 40dec9679c7d525d973fe6f58232d5820579e26e..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Enjoy Pixel Demolish Mod APK with Unlimited Money and Gear - No Root Required.md
+++ /dev/null
@@ -1,101 +0,0 @@
-
-
Pixel Demolish Mod APK Unlimited Money: A Fun and Addictive Game for Android Users
-
If you are looking for a simple yet challenging game that will keep you entertained for hours, then you should try Pixel Demolish Mod APK Unlimited Money. This is a modified version of the original Pixel Demolish game that gives you unlimited money to upgrade your towers and win. In this article, we will tell you what Pixel Demolish Mod APK is, why you should download it, and how to install it on your Android device.
-
What is Pixel Demolish Mod APK?
-
Pixel Demolish is a casual game developed by Dalak Games that involves placing towers and tapping on the falling blocks to demolish them. The game has pixelated graphics and retro sound effects that give it a nostalgic feel. The game is easy to play but hard to master, as you have to balance your tower placement, timing, and strategy to grind all the falling pixels and collect gold coins.
The gameplay of Pixel Demolish is simple and fun. You have to place towers on the ground and tap on the falling blocks to destroy them. The blocks come in different shapes, sizes, colors, and speeds, and you have to match the tower color with the block color to demolish it. If you miss a block or hit a wrong color, you will lose a life. You have three lives in each level, and if you lose them all, you will have to start over.
-
The game has 100 levels with increasing difficulty and variety. You will encounter different types of blocks, such as bombs, spikes, shields, magnets, and more, that will challenge your skills and reflexes. You will also unlock new towers with different abilities, such as lasers, rockets, cannons, and more, that will help you clear the levels faster and easier.
-
The features of Pixel Demolish Mod APK
-
Pixel Demolish Mod APK is a modified version of the original Pixel Demolish game that gives you unlimited money to upgrade your towers and win. With this mod apk, you can enjoy the following features:
-
-
Unlimited money: You can get unlimited gold coins by destroying the blocks and use them to buy new towers and upgrade them. You can also use the money to buy power-ups, such as extra lives, bombs, magnets, and more, that will help you in the game.
-
All towers unlocked: You can access all the towers in the game without having to complete the levels or spend money. You can choose from 12 different towers with unique abilities and effects.
-
No ads: You can play the game without any interruptions or distractions from annoying ads. You can enjoy the game without any lag or glitches.
-
-
Why should you download Pixel Demolish Mod APK Unlimited Money?
-
If you are a fan of pixel art games and tower defense games, then you should download Pixel Demolish Mod APK Unlimited Money. This mod apk will give you a lot of advantages over the original version of the game. Here are some reasons why you should download this mod apk:
-
The benefits of having unlimited money in Pixel Demolish
-
Having unlimited money in Pixel Demolish will make the game more fun and easy for you. You can buy any tower you want and upgrade it to its maximum level. You can also buy power-ups that will help you clear the levels faster and easier. You can experiment with different tower combinations and strategies and have more fun with the game. You can also save your money for other things, such as buying apps, games, or subscriptions.
-
The drawbacks of the original version of Pixel Demolish
-
The original version of Pixel Demolish has some drawbacks that can make the game frustrating and boring for some players. Here are some of the drawbacks of the original version:
-
-
Limited money: You can only get a limited amount of gold coins by destroying the blocks, and you have to spend them wisely to buy and upgrade your towers. You may not have enough money to buy the tower you want or to upgrade it to its full potential. You may also run out of money to buy power-ups that can help you in the game.
-
Locked towers: You can only unlock new towers by completing the levels or by spending money. You may not be able to access some of the towers that you like or that suit your playstyle. You may also miss out on some of the cool abilities and effects that the towers have.
-
Ads: You have to watch ads to get extra lives, coins, or power-ups in the game. The ads can be annoying and distracting, and they can also cause lag or glitches in the game. You may also accidentally click on the ads and be redirected to other websites or apps.
-
-
How to download and install Pixel Demolish Mod APK Unlimited Money on your Android device?
-
If you want to download and install Pixel Demolish Mod APK Unlimited Money on your Android device, you have to follow some simple steps. Here are the steps to download and install Pixel Demolish Mod APK:
-
The steps to download and install Pixel Demolish Mod APK
-
-
Download the mod apk file: You can download the mod apk file from a reliable source, such as [this link]. The file size is about 30 MB, so make sure you have enough space on your device.
-
Enable unknown sources: You have to enable unknown sources on your device settings to allow the installation of apps from sources other than Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
Install the mod apk file: You have to locate the downloaded mod apk file on your device storage and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish.
-
Launch the game: You can now launch the game from your app drawer or home screen and enjoy playing Pixel Demolish Mod APK Unlimited Money.
-
-
The precautions to take before installing Pixel Demolish Mod APK
-
Before installing Pixel Demolish Mod APK Unlimited Money on your device, you should take some precautions to avoid any problems or risks. Here are some of the precautions you should take:
-
-
Backup your data: You should backup your data, such as contacts, photos, videos, messages, etc., before installing any mod apk on your device. This will help you restore your data in case something goes wrong or you lose your data.
-
Scan the mod apk file: You should scan the mod apk file with a trusted antivirus or malware scanner before installing it on your device (a signature check is sketched after this list). This will help you detect any viruses or malware that may harm your device or steal your information.
-
Uninstall the original version of Pixel Demolish: You should uninstall the original version of Pixel Demolish from your device before installing the mod apk version. This will prevent any conflicts or errors between the two versions of the game.
-
-
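In addition to a malware scan, you can check who signed the APK with the apksigner tool from the Android SDK build-tools; the filename is a hypothetical stand-in:

```
apksigner verify --print-certs pixel-demolish-mod-0.39.apk
```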
Conclusion
-
Pixel Demolish Mod APK Unlimited Money is a fun and addictive game that will keep you entertained for hours. You can enjoy destroying pixelated blocks with different towers and power-ups, and you can also get unlimited money to buy and upgrade anything you want in the game. You can download and install Pixel Demolish Mod APK Unlimited Money on your Android device by following some simple steps and taking some precautions. If you are looking for a simple yet challenging game that will give you a nostalgic feel, then you should try Pixel Demolish Mod APK Unlimited Money.
-
FAQs
-
Here are some frequently asked questions about Pixel Demolish Mod APK Unlimited Money:
-
-
Is Pixel Demolish Mod APK Unlimited Money safe to use?
-
Yes, Pixel Demolish Mod APK Unlimited Money is safe to use if you download it from a reliable source and scan it with a trusted antivirus or malware scanner. You should also take some precautions before installing it on your device, such as backing up your data, uninstalling the original version of the game, and enabling unknown sources.
-
What are the requirements to play Pixel Demolish Mod APK Unlimited Money?
-
To play Pixel Demolish Mod APK Unlimited Money, you need an Android device with Android 4.4 or higher and at least 30 MB of free space. You also need an internet connection to download and install the mod apk file.
-
Can I play Pixel Demolish Mod APK Unlimited Money offline?
-
Yes, you can play Pixel Demolish Mod APK Unlimited Money offline once you have installed it on your device. You do not need an internet connection to play the game, unless you want to update it or access some online features.
-
Can I play Pixel Demolish Mod APK Unlimited Money with my friends?
-
No, Pixel Demolish Mod APK Unlimited Money does not have a multiplayer mode or a social feature. You can only play the game solo and compete with yourself or with the global leaderboard.
-
How can I contact the developer of Pixel Demolish Mod APK Unlimited Money?
-
If you have any questions, feedback, or suggestions about Pixel Demolish Mod APK Unlimited Money, you can contact the developer of the game by emailing them at dalakgames@gmail.com. You can also follow them on Facebook and Twitter for more updates and news about the game.
"""
-INTRODUCTION_TEXT = """
-
-ANGO is A Novel Generation-Oriented Chinese LLM evaluation benchmark.
-
-We introduce the single-question, multiple-keypoints dataset format for the first time; the benchmark includes 171 keypoints organized into 4 hierarchical levels and 9 difficulty categories.
-
-
-The data were exclusively obtained from the Administrative Proficiency Test,
- which serves as a significant component of the Chinese civil service examination.
-
-
-We will apply a seasonal system for the leaderboard, updating it every two months.
-The corresponding test dataset will be announced at the beginning of each season,
-and some questions will be eliminated at the end of the season.
-
-
-Read more details on the "About" page!
-"""
-QUESTION_TEXT = r"""
-For details about Wrong Hit & Wrong Value, please go to the "About" page
-"""
-
-KEYPOINT_TEXT = """
-Because a single question may contain more than one keypoint, the total keypoint count is higher than the question count
-"""
-KEYPOINT_DISTRIBUTION = """{"data":[{"branchvalues":"total","insidetextorientation":"radial","labels":["关联词-转折","关联词-因果","关联词-对策","关联词-并列","主题词","程度词","行文脉络-总分","行文脉络-分总","行文脉络-分总分","特殊问法","实词","代词","首句特征","非首句特征","确定捆绑","确定顺序","尾句特征","开头","中间","结尾","词的辨析-词义侧重","词的辨析-固定搭配","词的辨析-感情色彩","词的辨析-程度轻重","关联关系-转折关系","关联关系-因果关系","关联关系-并列关系","对应关系-解释类对应","对应关系-重点词句对应","给完工时间型","给效率比例型","给具体单位型","工程问题-其他","非典型最值问题","构造数列","最不利构造","多集合反向构造","周期相遇问题","周期余数问题","周期问题-其他","火车过桥","平均速度","普通行程","相遇追及","流水行船","行程问题-其他","平面几何","立体几何","两集合","三集合","基础排列组合","相邻问题","不相邻问题","同素分堆问题","环形排列问题","错位排列","排列组合问题-其他","给情况求概率","给概率求概率","概率问题-其他","普通不定方程","不定方程组","主客体","大前提","方式目的","原因结果","单定义-其他句式","故事类","拆词","常规问法","搭桥","必要条件","补充论据","加强选非题","加强-其他","削弱论点","拆桥","他因削弱","削弱选非题","削弱论据","因果倒置","削弱-其他","常规翻译","集合推理","推理形式","翻译推理-其他","语义关系-近义关系","语义关系-反义关系","语义-其他","逻辑关系-全同关系","逻辑关系-并列关系","逻辑关系-交叉关系","逻辑关系-包容关系","逻辑关系-对应关系","中心理解题","细节判断题","词句理解题","标题填入题","语句排序题","语句填空题","接语选择题","实词填空","成语填空","混搭填空","词的辨析","语境分析","工程问题","最值问题","年龄问题","和差倍比问题","周期问题","数列问题","行程问题","几何问题","容斥原理问题","排列组合问题","概率问题","经济利润问题","不定方程问题","统筹规划问题","数学运算-其他","公倍数与公约数问题","单定义","多定义","加强题型","削弱题型","翻译推理","组合排列-材料","原因解释","语义关系","逻辑关系","拆分思维","直接找数","简单加减计算","排序类","基期计算","现期计算","基期比较","间隔基期","基期和差","现期追赶","一般增长率","混合增长率","间隔增长率","年均增长率","增长量计算","增长量比较","间隔增长量","年均增长量","现期比重","基期比重","两期比重","混合比重","基期平均数","现期平均数","平均数的增长率","平均数的增长量","两期平均数比较","基期倍数","现期倍数","比值计算","比值比较","时政","中国特色社会主义建设","宏观经济与调控政策","物理常识","化学常识","生物常识","科技理论与成就","生活常识","中国历史","世界历史","文学常识","文化常识","自然常识","国情社情","宪法","行政法","民法","刑法","劳动法","其他法律法规","民事诉讼法","经济法","阅读理解","语句表达","逻辑填空","数学运算","定义判断","逻辑判断","类比推理","文字资料","综合资料","简单计算","基期与现期","增长率","增长量","比重问题","平均数问题","倍数与比值相关","综合分析","政治常识","经济常识","科技常识","人文常识","地理国情","法律常识","未分类","言语理解与表达","数量关系","判断推理","资料分析","常识判断"],"marker":{"colors":["#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#B22222","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC6600","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#CC9900","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#0077BE","#0077BE","#0077BE","#0077BE","#0077BE","#0077BE","#0077BE","#0077BE","#0077BE","#0077BE","#0077BE","#0077BE","
#0077BE","#0077BE","#0077BE","#0077BE","#0077BE","#0077BE","#0077BE","#0077BE","#0077BE","#0077BE","#B22222","#B22222","#B22222","#CC6600","#CC9900","#CC9900","#CC9900","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#228B22","#0077BE","#0077BE","#0077BE","#0077BE","#0077BE","#0077BE","#9400D3","#B22222","#CC6600","#CC9900","#228B22","#0077BE"]},"parents":["中心理解题","中心理解题","中心理解题","中心理解题","中心理解题","中心理解题","中心理解题","中心理解题","中心理解题","中心理解题","词句理解题","词句理解题","语句排序题","语句排序题","语句排序题","语句排序题","语句排序题","语句填空题","语句填空题","语句填空题","词的辨析","词的辨析","词的辨析","词的辨析","语境分析","语境分析","语境分析","语境分析","语境分析","工程问题","工程问题","工程问题","工程问题","最值问题","最值问题","最值问题","最值问题","周期问题","周期问题","周期问题","行程问题","行程问题","行程问题","行程问题","行程问题","行程问题","几何问题","几何问题","容斥原理问题","容斥原理问题","排列组合问题","排列组合问题","排列组合问题","排列组合问题","排列组合问题","排列组合问题","排列组合问题","概率问题","概率问题","概率问题","不定方程问题","不定方程问题","单定义","单定义","单定义","单定义","单定义","单定义","单定义","多定义","加强题型","加强题型","加强题型","加强题型","加强题型","削弱题型","削弱题型","削弱题型","削弱题型","削弱题型","削弱题型","削弱题型","翻译推理","翻译推理","翻译推理","翻译推理","语义关系","语义关系","语义关系","逻辑关系","逻辑关系","逻辑关系","逻辑关系","逻辑关系","阅读理解","阅读理解","阅读理解","阅读理解","语句表达","语句表达","语句表达","逻辑填空","逻辑填空","逻辑填空","逻辑填空","逻辑填空","数学运算","数学运算","数学运算","数学运算","数学运算","数学运算","数学运算","数学运算","数学运算","数学运算","数学运算","数学运算","数学运算","数学运算","数学运算","数学运算","定义判断","定义判断","逻辑判断","逻辑判断","逻辑判断","逻辑判断","逻辑判断","类比推理","类比推理","类比推理","简单计算","简单计算","简单计算","基期与现期","基期与现期","基期与现期","基期与现期","基期与现期","基期与现期","增长率","增长率","增长率","增长率","增长量","增长量","增长量","增长量","比重问题","比重问题","比重问题","比重问题","平均数问题","平均数问题","平均数问题","平均数问题","平均数问题","倍数与比值相关","倍数与比值相关","倍数与比值相关","倍数与比值相关","政治常识","政治常识","经济常识","科技常识","科技常识","科技常识","科技常识","科技常识","人文常识","人文常识","人文常识","人文常识","地理国情","地理国情","法律常识","法律常识","法律常识","法律常识","法律常识","法律常识","法律常识","法律常识","言语理解与表达","言语理解与表达","言语理解与表达","数量关系","判断推理","判断推理","判断推理","资料分析","资料分析","资料分析","资料分析","资料分析","资料分析","资料分析","资料分析","资料分析","资料分析","常识判断","常识判断","常识判断","常识判断","常识判断","常识判断","","","","","",""],"values":[892,340,1028,634,1029,211,649,1130,409,629,193,153,110,139,659,560,38,234,417,295,1116,3837,801,808,662,378,1371,2173,4832,162,203,149,51,339,154,111,20,80,103,32,22,38,211,322,75,14,230,183,124,157,373,51,41,29,16,18,23,304,108,36,125,126,266,433,1148,521,1300,118,209,525,582,308,598,220,8,708,226,110,155,90,81,5,708,133,325,36,210,178,117,113,761,278,873,2087,6957,2221,346,465,1506,946,750,3340,2396,2474,6562,9416,565,624,169,1063,215,216,682,413,281,551,448,565,251,163,19,63,3995,525,1716,1375,1202,708,525,505,4112,240,105,118,52,152,24,18,22,61,7,147,50,41,2,113,34,4,2,244,120,91,2,35,94,53,7,3,50,64,32,1,3751,247,433,614,362,687,627,631,737,124,916,1087,568,629,347,669,513,309,75,641,69,105,9989,3202,24188,6288,4520,5526,4857,2168,1,275,284,240,153,457,192,147,441,3999,435,2921,2866,1198,2728,15907,37379,6288,14903,4358,14147],"type":"sunburst"}],"layout":{"template":{"data":{"histogram2dcontour":[{"type":"histogram2dcontour","colorbar":{"outlinewidth":0,"ticks":""},"colorscale":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]]}],"choropleth":[{"type":"choropleth","colorbar":{"outlinewidth":0,"ticks":""}}],"histogram2d":[{"type":"histogram2d","colorbar":{"outlinewidth":0,"ticks":""},"colorscale":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8
576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]]}],"heatmap":[{"type":"heatmap","colorbar":{"outlinewidth":0,"ticks":""},"colorscale":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]]}],"heatmapgl":[{"type":"heatmapgl","colorbar":{"outlinewidth":0,"ticks":""},"colorscale":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]]}],"contourcarpet":[{"type":"contourcarpet","colorbar":{"outlinewidth":0,"ticks":""}}],"contour":[{"type":"contour","colorbar":{"outlinewidth":0,"ticks":""},"colorscale":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]]}],"surface":[{"type":"surface","colorbar":{"outlinewidth":0,"ticks":""},"colorscale":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]]}],"mesh3d":[{"type":"mesh3d","colorbar":{"outlinewidth":0,"ticks":""}}],"scatter":[{"fillpattern":{"fillmode":"overlay","size":10,"solidity":0.2},"type":"scatter"}],"parcoords":[{"type":"parcoords","line":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scatterpolargl":[{"type":"scatterpolargl","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"bar":[{"error_x":{"color":"#2a3f5f"},"error_y":{"color":"#2a3f5f"},"marker":{"line":{"color":"#E5ECF6","width":0.5},"pattern":{"fillmode":"overlay","size":10,"solidity":0.2}},"type":"bar"}],"scattergeo":[{"type":"scattergeo","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scatterpolar":[{"type":"scatterpolar","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"histogram":[{"marker":{"pattern":{"fillmode":"overlay","size":10,"solidity":0.2}},"type":"histogram"}],"scattergl":[{"type":"scattergl","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scatter3d":[{"type":"scatter3d","line":{"colorbar":{"outlinewidth":0,"ticks":""}},"marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scattermapbox":[{"type":"scattermapbox","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scatterternary":[{"type":"scatterternary","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scattercarpet":[{"type":"scattercarpet","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"carpet":[{"aaxis":{"endlinecolor":"#2a3f5f","gridcolor":"white","linecolor":"white","minorgridcolor":"white","startlinecolor":"#2a3f5f"},"baxis":{"endlinecolor":"#2a3f5f","gridcolor":"white","linecolor":"white","minorgridcolor":"white","startlinecolor":"#2a3f5f"},"type":"carpet"}],"table":[{"cells":{"fill":{"color":"#EBF0F8"},"line":{"color":"white"}},"header":{"fill":{"color":"#C8D4E3"},"line":{"color":"white"}},"type":"table"}],"barpolar":[{"marker":{"line":{"color":"#E5ECF6","width":0.5},"pattern":{"fillmode":"overlay","size":10,"solidit
y":0.2}},"type":"barpolar"}],"pie":[{"automargin":true,"type":"pie"}]},"layout":{"autotypenumbers":"strict","colorway":["#636efa","#EF553B","#00cc96","#ab63fa","#FFA15A","#19d3f3","#FF6692","#B6E880","#FF97FF","#FECB52"],"font":{"color":"#2a3f5f"},"hovermode":"closest","hoverlabel":{"align":"left"},"paper_bgcolor":"white","plot_bgcolor":"#E5ECF6","polar":{"bgcolor":"#E5ECF6","angularaxis":{"gridcolor":"white","linecolor":"white","ticks":""},"radialaxis":{"gridcolor":"white","linecolor":"white","ticks":""}},"ternary":{"bgcolor":"#E5ECF6","aaxis":{"gridcolor":"white","linecolor":"white","ticks":""},"baxis":{"gridcolor":"white","linecolor":"white","ticks":""},"caxis":{"gridcolor":"white","linecolor":"white","ticks":""}},"coloraxis":{"colorbar":{"outlinewidth":0,"ticks":""}},"colorscale":{"sequential":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]],"sequentialminus":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]],"diverging":[[0,"#8e0152"],[0.1,"#c51b7d"],[0.2,"#de77ae"],[0.3,"#f1b6da"],[0.4,"#fde0ef"],[0.5,"#f7f7f7"],[0.6,"#e6f5d0"],[0.7,"#b8e186"],[0.8,"#7fbc41"],[0.9,"#4d9221"],[1,"#276419"]]},"xaxis":{"gridcolor":"white","linecolor":"white","ticks":"","title":{"standoff":15},"zerolinecolor":"white","automargin":true,"zerolinewidth":2},"yaxis":{"gridcolor":"white","linecolor":"white","ticks":"","title":{"standoff":15},"zerolinecolor":"white","automargin":true,"zerolinewidth":2},"scene":{"xaxis":{"backgroundcolor":"#E5ECF6","gridcolor":"white","linecolor":"white","showbackground":true,"ticks":"","zerolinecolor":"white","gridwidth":2},"yaxis":{"backgroundcolor":"#E5ECF6","gridcolor":"white","linecolor":"white","showbackground":true,"ticks":"","zerolinecolor":"white","gridwidth":2},"zaxis":{"backgroundcolor":"#E5ECF6","gridcolor":"white","linecolor":"white","showbackground":true,"ticks":"","zerolinecolor":"white","gridwidth":2}},"shapedefaults":{"line":{"color":"#2a3f5f"}},"annotationdefaults":{"arrowcolor":"#2a3f5f","arrowhead":0,"arrowwidth":1},"geo":{"bgcolor":"white","landcolor":"#E5ECF6","subunitcolor":"white","showland":true,"showlakes":true,"lakecolor":"white"},"title":{"x":0.05},"mapbox":{"style":"light"}}}}}"""
-DIFFICULTY_DISTRIBUTION = """{"data":[{"marker":{"color":[24,130,9283,18231,23734,10120,9546,69,12],"colorbar":{"title":{"text":"Total"}},"colorscale":[[0.0,"#440154"],[0.1111111111111111,"#482878"],[0.2222222222222222,"#3e4989"],[0.3333333333333333,"#31688e"],[0.4444444444444444,"#26828e"],[0.5555555555555556,"#1f9e89"],[0.6666666666666666,"#35b779"],[0.7777777777777778,"#6ece58"],[0.8888888888888888,"#b5de2b"],[1.0,"#fde725"]]},"x":[1,2,3,4,5,6,7,8,9],"y":[24,130,9283,18231,23734,10120,9546,69,12],"type":"bar"}],"layout":{"template":{"data":{"histogram2dcontour":[{"type":"histogram2dcontour","colorbar":{"outlinewidth":0,"ticks":""},"colorscale":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]]}],"choropleth":[{"type":"choropleth","colorbar":{"outlinewidth":0,"ticks":""}}],"histogram2d":[{"type":"histogram2d","colorbar":{"outlinewidth":0,"ticks":""},"colorscale":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]]}],"heatmap":[{"type":"heatmap","colorbar":{"outlinewidth":0,"ticks":""},"colorscale":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]]}],"heatmapgl":[{"type":"heatmapgl","colorbar":{"outlinewidth":0,"ticks":""},"colorscale":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]]}],"contourcarpet":[{"type":"contourcarpet","colorbar":{"outlinewidth":0,"ticks":""}}],"contour":[{"type":"contour","colorbar":{"outlinewidth":0,"ticks":""},"colorscale":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]]}],"surface":[{"type":"surface","colorbar":{"outlinewidth":0,"ticks":""},"colorscale":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]]}],"mesh3d":[{"type":"mesh3d","colorbar":{"outlinewidth":0,"ticks":""}}],"scatter":[{"fillpattern":{"fillmode":"overlay","size":10,"solidity":0.2},"type":"scatter"}],"parcoords":[{"type":"parcoords","line":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scatterpolargl":[{"type":"scatterpolargl","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"bar":[{"error_x":{"color":"#2a3f5f"},"error_y":{"color":"#2a3f5f"},"marker":{"line":{"color":"#E5ECF6","width":0.5},"pattern":{"fillmode":"overlay","size":10,"solidity":0.2}},"type":"bar"}],"scattergeo":[{"type":"scattergeo","marker":{"colorbar":{"outlinewidth
":0,"ticks":""}}}],"scatterpolar":[{"type":"scatterpolar","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"histogram":[{"marker":{"pattern":{"fillmode":"overlay","size":10,"solidity":0.2}},"type":"histogram"}],"scattergl":[{"type":"scattergl","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scatter3d":[{"type":"scatter3d","line":{"colorbar":{"outlinewidth":0,"ticks":""}},"marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scattermapbox":[{"type":"scattermapbox","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scatterternary":[{"type":"scatterternary","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"scattercarpet":[{"type":"scattercarpet","marker":{"colorbar":{"outlinewidth":0,"ticks":""}}}],"carpet":[{"aaxis":{"endlinecolor":"#2a3f5f","gridcolor":"white","linecolor":"white","minorgridcolor":"white","startlinecolor":"#2a3f5f"},"baxis":{"endlinecolor":"#2a3f5f","gridcolor":"white","linecolor":"white","minorgridcolor":"white","startlinecolor":"#2a3f5f"},"type":"carpet"}],"table":[{"cells":{"fill":{"color":"#EBF0F8"},"line":{"color":"white"}},"header":{"fill":{"color":"#C8D4E3"},"line":{"color":"white"}},"type":"table"}],"barpolar":[{"marker":{"line":{"color":"#E5ECF6","width":0.5},"pattern":{"fillmode":"overlay","size":10,"solidity":0.2}},"type":"barpolar"}],"pie":[{"automargin":true,"type":"pie"}]},"layout":{"autotypenumbers":"strict","colorway":["#636efa","#EF553B","#00cc96","#ab63fa","#FFA15A","#19d3f3","#FF6692","#B6E880","#FF97FF","#FECB52"],"font":{"color":"#2a3f5f"},"hovermode":"closest","hoverlabel":{"align":"left"},"paper_bgcolor":"white","plot_bgcolor":"#E5ECF6","polar":{"bgcolor":"#E5ECF6","angularaxis":{"gridcolor":"white","linecolor":"white","ticks":""},"radialaxis":{"gridcolor":"white","linecolor":"white","ticks":""}},"ternary":{"bgcolor":"#E5ECF6","aaxis":{"gridcolor":"white","linecolor":"white","ticks":""},"baxis":{"gridcolor":"white","linecolor":"white","ticks":""},"caxis":{"gridcolor":"white","linecolor":"white","ticks":""}},"coloraxis":{"colorbar":{"outlinewidth":0,"ticks":""}},"colorscale":{"sequential":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]],"sequentialminus":[[0.0,"#0d0887"],[0.1111111111111111,"#46039f"],[0.2222222222222222,"#7201a8"],[0.3333333333333333,"#9c179e"],[0.4444444444444444,"#bd3786"],[0.5555555555555556,"#d8576b"],[0.6666666666666666,"#ed7953"],[0.7777777777777778,"#fb9f3a"],[0.8888888888888888,"#fdca26"],[1.0,"#f0f921"]],"diverging":[[0,"#8e0152"],[0.1,"#c51b7d"],[0.2,"#de77ae"],[0.3,"#f1b6da"],[0.4,"#fde0ef"],[0.5,"#f7f7f7"],[0.6,"#e6f5d0"],[0.7,"#b8e186"],[0.8,"#7fbc41"],[0.9,"#4d9221"],[1,"#276419"]]},"xaxis":{"gridcolor":"white","linecolor":"white","ticks":"","title":{"standoff":15},"zerolinecolor":"white","automargin":true,"zerolinewidth":2},"yaxis":{"gridcolor":"white","linecolor":"white","ticks":"","title":{"standoff":15},"zerolinecolor":"white","automargin":true,"zerolinewidth":2},"scene":{"xaxis":{"backgroundcolor":"#E5ECF6","gridcolor":"white","linecolor":"white","showbackground":true,"ticks":"","zerolinecolor":"white","gridwidth":2},"yaxis":{"backgroundcolor":"#E5ECF6","gridcolor":"white","linecolor":"white","showbackground":true,"ticks":"","zerolinecolor":"white","gridwidth":2},"zaxis":{"backgroundcolor":"#E5ECF6","gridcolor":"white","linecolor":"white","showbackground":true,"ticks"
:"","zerolinecolor":"white","gridwidth":2}},"shapedefaults":{"line":{"color":"#2a3f5f"}},"annotationdefaults":{"arrowcolor":"#2a3f5f","arrowhead":0,"arrowwidth":1},"geo":{"bgcolor":"white","landcolor":"#E5ECF6","subunitcolor":"white","showland":true,"showlakes":true,"lakecolor":"white"},"title":{"x":0.05},"mapbox":{"style":"light"}}},"yaxis":{"type":"log"}}}"""
-TEST_SET_TEXT = """
-The test set comprises a total of 1768 records.
-
-Among these records, there are 988 distinct combinations of Keypoints, which means an additional 988 * 5 = 4,940 few-shot examples are provided.
-
-The test set encompasses all 171 Keypoint categories.
-
-If you want to use the HuggingFace dataset, go to [ANGO Dataset] https://huggingface.co/datasets/AngoHF/ANGO-S1
-For more details, please refer to the "About" page.
-"""
-TEST_SCRIPT_TEXT = """
-
-The evaluation script requires three mandatory arguments, while the others should remain unchanged.
-
---model_path: specifies the location where the model parameters are saved.
---dataset_path: indicates the directory where the ANGO test set data is stored.
-
---save_path: denotes the path where the evaluation results will be saved.
-
-You can modify the specific functions to adapt them to your model.
-
-
-Upon completion of the evaluation, the script will generate three files:
-
-acc_result: This file contains the predicted results for each record, along with statistical data at the question level.
-
-category_result: This file provides statistical data at the Keypoint level.
-
-difficulty_result: This file includes statistical data categorized by difficulty level.
-"""
-SUBMIT_TEXT = """
-You can raise a PR in this Space to submit your result, and we will update the leaderboard manually after checking it.
-"""
-
-ABOUT_HTML = """
-
What is ANGO
-
We introduce a novel Chinese LLM benchmark dataset called ANGO, aiming to provide more in-depth guidance for model training and evaluation. We introduce the single-question multiple-keypoints dataset format for the first time, which provides the most complete description of each question and enables the test results to showcase a model's performance comprehensively from multiple perspectives. Based on this format, we design a more detailed and refined model capability classification system, the Keypoint Tree, which reflects the relationships between different keypoints and covers a total of 171 specific model capabilities organized across 4 hierarchical levels. With the help of the Keypoint Tree, a model's performance on multiple levels of capability can be measured quickly, and corresponding adjustments can be made. ANGO also introduces two new question attributes: human accuracy and human error-prone options. Based on human accuracy, we propose a more detailed difficulty classification than previous benchmarks: by combining the human accuracy of the question itself, the human accuracy of the involved keypoints, and the actual score of the question, all questions are divided into 9 difficulty levels, providing a quantifiable reference for evaluating models on questions of different difficulty.
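To make the hierarchy concrete, here is a sketch of one branch of the four-level Keypoint Tree, using a real path taken from the keypoint distribution data above (labels kept in Chinese as in the source); the dict layout itself is illustrative, not the actual storage format:

# One branch of the Keypoint Tree; each nesting level corresponds to one
# of the 4 hierarchical levels, with leaf dicts as specific keypoints.
keypoint_tree = {
    "言语理解与表达": {          # level 1: section
        "阅读理解": {            # level 2: question family
            "中心理解题": {      # level 3: question type
                "主题词": {},    # level 4: leaf keypoint
                "程度词": {},
            },
        },
    },
}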
-
-
In addition to the innovative data, we propose a complete set of verification processes tailored for ANGO, which can provide fairer results compared to the current leaderboards. This includes conducting multiple experiments with option shuffling to mitigate the issue of data leakage, designing test set sampling strategies that fully utilize the characteristics of ANGO, and implementing elimination mechanisms for high-accuracy questions. Based on these, we establish a dynamic updating system for the test set, resembling a seasonal system. Thanks to these methods, ANGO can continually update the test results, ensuring the fairness and effectiveness of the leaderboard. By preserving the test results from multiple seasons, it can provide researchers with an overview of the current trends in optimizing models within the community.
-
-
Data Source
-
The data utilized in our study were exclusively obtained from the Administrative Proficiency Test, which serves as a significant component of the Chinese civil service examination.
-
The Administrative Proficiency Test is entirely composed of multiple-choice questions and aims to evaluate the abilities and skills necessary for practical administrative work. This test covers a wide range of knowledge areas, including Expression & Comprehension, Data Analysis, Quantitative Relations, Judgement & Inference, and Common Knowledge. As a comprehensive assessment tool, it requires candidates to respond to a series of questions related to administrative work within a limited timeframe. These questions may involve policy formulation, problem-solving, personnel and resource management, as well as handling emergency situations. By formulating these questions, the test facilitates the evaluation of candidates' analytical thinking, Judgement & Inference, problem-solving abilities, and language proficiency.
-
The nature of the Administrative Proficiency Test requires candidates to tackle complex questions within a specified timeframe, making it an ideal testing environment for assessing the language capabilities of language models. Language models typically perform well at generating and comprehending text, and this test provides concrete and intricate contexts that simulate real-world language communication and decision-making processes. By employing language models to answer these questions, we can evaluate their understanding of complex problems and their Judgement & Inference abilities, as well as the accuracy and fluency of their language expression.
-
Furthermore, the Administrative Proficiency Test encompasses a broad coverage and diversity. It includes questions and scenarios from various administrative domains, such as government administration, social affairs, and economic development. This diversity aids in evaluating the language processing abilities of language models across different fields, thereby providing a more comprehensive understanding of their potential strengths and limitations in practical applications. Moreover, it offers valuable insights for future model improvements and applications.
-
ANGO's data covers all 34 provinces in China and includes three different types of examinations conducted between 2008 and 2023, including formal and mock exams.
-
Data Processing
-
In order to enhance the quality of our data, we employed a simple yet efficient preprocessing approach.
-
Duplicate Removal
-
Given that mock exams often include previous exam questions, our data contained numerous duplicates. To address this issue, we employed a straightforward strategy of removing duplicates based on the record ID obtained from the data source. As a result of this step, the size of our data was reduced to 88,799 instances.
-
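As a rough illustration of this step (not the authors' actual pipeline), deduplication by record ID is a one-liner in pandas; the column name "record_id" is a hypothetical stand-in:

import pandas as pd

def drop_duplicate_records(df: pd.DataFrame) -> pd.DataFrame:
    # Keep the first occurrence of each source record ID, drop the rest.
    return df.drop_duplicates(subset="record_id", keep="first")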
-
Image Removal
-
The data consisted of two types of images: formula pictures and other types (such as images containing graphics). However, since our primary focus was on Chinese Natural Language Processing (NLP) evaluation rather than the multi-modal domain, we opted to remove all records containing pure images. This resulted in the removal of 17,650 records.
-
-
Formula Replacement
-
As mentioned earlier, our data still contained formula pictures, and we recognized the importance of including formulae to ensure diversity in our data. To address this, we extracted 8,144 unique formula images from the 34,062 LaTeX formulas appearing in 5,574 questions. These images were then processed using a formula OCR (Optical Character Recognition) model, followed by manual verification to ensure formula accuracy. Ultimately, we obtained a clean dataset consisting of 71,149 instances.
-
Data Format
-
-
-
Question: The content of the question.
-
Material: Some questions require additional information from a given material.
-
Type: The classification of the question, encompassing single-choice and multiple-choice formats.
-
Options: The candidate answers, presented in a line-separated format.
-
Choice: The correct answer to the question.
-
Keypoints: All the keypoints involved in the question.
-
Human Accuracy: The accuracy of humans on this question.
-
Human Count: The number of times this question has been completed by humans.
-
Most Wrong: The option that humans are most likely to choose incorrectly.
-
Difficulty: The level of difficulty of the question, assigned according to our standard.
-
Solution: A concise explanation of the methodology to arrive at the correct answer.
-
Source: The original index and examination source of the question.
-
Formulas: The count of formulas present in the material, question, and options.
-
-
-
Here is an example record:
-
-
-
- Question: Forward: Backward
- Material: Please select the option that best resembles the relationship between the given words or phrases in the question stem.
- Type: Single Choice
- Options:
- A. Urge: Advise
- B. Ocean: Land
- C. Vibration: Quiet
- D. Extend: Compress
- Choice: D
- Difficulty: 4
- KeyPoints: Semantic Relationship - Antonym
- Human Accuracy: 79.564999
- Human Count: 183494
- Most Wrong: C
- Solution: Step 1: Determine the logical relationship between the words in the question stem. The two words in the question stem are antonyms. Step 2: Determine the logical relationship between the options. The option that has the same logical relationship as the question stem is option D. Option A is a synonym relationship, option B is a parallel relationship, and in option C, the antonym of "quiet" should be "noisy" instead of "vibration". Therefore, the correct answer is D.
- Source: 2011 Jiangsu Province Civil Service Recruitment Examination 'Administrative Aptitude Test' (Category A), Question 41
- Formulas: 0
-
-
-
-
Wrong Hit & Wrong Value
-
There are two special attributes in ANGO:
-
-
-
- Human Acc: Refers to the accuracy of humans on this question.
-
-
- Most Wrong: Represents the option that humans are prone to get wrong.
-
-
-
-
Based on these two attributes, we have derived two new metrics for evaluation:
-
-
-
- Wrong Hit: Refers to the number of times the model's incorrect predictions match the options that humans are prone to get wrong.
-
-
- Wrong Value: Calculated as 1 minus the average human accuracy over all the questions counted in Wrong Hit.
-
-
-
-
Wrong Value and Wrong Hit do not measure a model's ability to solve the problem; rather, they indicate, to some extent, how similar the model is to real humans. Due to intentional misdirection or design errors in the questions, humans often make the same widespread mistakes. In such cases, if the model's predicted answer matches the widespread human error tendency, its way of thinking is closer to that of the majority of ordinary humans.
-
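Both metrics are simple enough to state in code. A minimal sketch, assuming per-question fields named prediction, choice, most_wrong, and human_acc (hypothetical names, not the evaluation script's real schema):

def wrong_hit_and_value(records):
    wrong_hit_accs = []
    for r in records:
        # A "wrong hit": the model answers incorrectly AND picks the
        # option that humans most often get wrong.
        if r["prediction"] != r["choice"] and r["prediction"] == r["most_wrong"]:
            # human_acc is assumed normalized to [0, 1]; the raw data
            # stores percentages, so divide by 100 first if needed.
            wrong_hit_accs.append(r["human_acc"])
    wrong_hit = len(wrong_hit_accs)
    if wrong_hit == 0:
        return 0, 0.0
    # Wrong Value = 1 - mean human accuracy over the wrong-hit questions.
    wrong_value = 1 - sum(wrong_hit_accs) / wrong_hit
    return wrong_hit, wrong_value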
-
Evaluation (Not Implemented Yet)
-
To mitigate the impact of data leakage during model pretraining on benchmark evaluations, we employ several evaluation techniques that enhance the fairness and timeliness of the benchmark.
-
-
Option Order Shuffling
-
Sometimes, a model's correct answer to a specific question may not be due to mastering a certain ability or understanding the question, but rather because it has recognized patterns of token order in the training data. By shuffling the order of options in multiple-choice questions and making multiple predictions with the correct answer placed in different positions, we can average the results to reduce the model's reliance on token order, as sketched below.
-
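A minimal sketch of the shuffling procedure; model_predict is a hypothetical stand-in for whatever scoring call the real harness makes, and averaging over all orderings (rather than a fixed subset) is an assumption:

import itertools

def shuffled_accuracy(question, options, answer_idx, model_predict):
    labels = ["A", "B", "C", "D"][:len(options)]
    perms = list(itertools.permutations(range(len(options))))
    correct = 0
    for perm in perms:
        shuffled = [options[i] for i in perm]
        predicted_label = model_predict(question, shuffled)  # e.g. "B"
        # Map the predicted position back to the original option index.
        predicted_idx = perm[labels.index(predicted_label)]
        if predicted_idx == answer_idx:
            correct += 1
    # Averaging over orderings reduces reliance on option position.
    return correct / len(perms)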
-
Season For Dynamic Evaluation
-
Thanks to sampling strategies optimized for ANGO, we can periodically sample the test set and update the leaderboard. This prevents certain institutions or individuals from maliciously hacking ANGO to inflate their model's performance. However, due to the limited number of questions in some key areas, dynamic iteration may not be feasible for all questions.
-
-
Question Elimination Mechanism
-
In addition to the aforementioned dynamic seasonal updating, a question elimination mechanism has been proposed, sketched below. For each iteration, this mechanism calculates the average accuracy of each question across all models; questions whose accuracy exceeds a threshold are temporarily removed so that the questions remaining in ANGO retain reliable discriminative power.
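A minimal sketch of this mechanism, assuming per-model boolean results keyed by question id (the 0.9 threshold is an assumption; the text does not specify one):

from collections import defaultdict

def questions_to_eliminate(per_model_results, threshold=0.9):
    # per_model_results: {model_name: {question_id: answered_correctly}}
    totals, corrects = defaultdict(int), defaultdict(int)
    for results in per_model_results.values():
        for qid, is_correct in results.items():
            totals[qid] += 1
            corrects[qid] += int(is_correct)
    # Temporarily remove questions whose cross-model accuracy is too high.
    return {qid for qid in totals if corrects[qid] / totals[qid] > threshold}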
-"""
diff --git a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/guided_diffusion/guided_diffusion/nn.py b/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/guided_diffusion/guided_diffusion/nn.py
deleted file mode 100644
index a4cd59c2324b003626b8cf4c7581effd334908d3..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/guided_diffusion/guided_diffusion/nn.py
+++ /dev/null
@@ -1,170 +0,0 @@
-"""
-Various utilities for neural networks.
-"""
-
-import math
-
-import torch as th
-import torch.nn as nn
-
-
-# PyTorch 1.7 has SiLU, but we support PyTorch 1.5.
-class SiLU(nn.Module):
- def forward(self, x):
- return x * th.sigmoid(x)
-
-
-class GroupNorm32(nn.GroupNorm):
- def forward(self, x):
- return super().forward(x.float()).type(x.dtype)
-
-
-def conv_nd(dims, *args, **kwargs):
- """
- Create a 1D, 2D, or 3D convolution module.
- """
- if dims == 1:
- return nn.Conv1d(*args, **kwargs)
- elif dims == 2:
- return nn.Conv2d(*args, **kwargs)
- elif dims == 3:
- return nn.Conv3d(*args, **kwargs)
- raise ValueError(f"unsupported dimensions: {dims}")
-
-
-def linear(*args, **kwargs):
- """
- Create a linear module.
- """
- return nn.Linear(*args, **kwargs)
-
-
-def avg_pool_nd(dims, *args, **kwargs):
- """
- Create a 1D, 2D, or 3D average pooling module.
- """
- if dims == 1:
- return nn.AvgPool1d(*args, **kwargs)
- elif dims == 2:
- return nn.AvgPool2d(*args, **kwargs)
- elif dims == 3:
- return nn.AvgPool3d(*args, **kwargs)
- raise ValueError(f"unsupported dimensions: {dims}")
-
-
-def update_ema(target_params, source_params, rate=0.99):
- """
- Update target parameters to be closer to those of source parameters using
- an exponential moving average.
-
- :param target_params: the target parameter sequence.
- :param source_params: the source parameter sequence.
- :param rate: the EMA rate (closer to 1 means slower).
- """
- for targ, src in zip(target_params, source_params):
- targ.detach().mul_(rate).add_(src, alpha=1 - rate)
-
-
-def zero_module(module):
- """
- Zero out the parameters of a module and return it.
- """
- for p in module.parameters():
- p.detach().zero_()
- return module
-
-
-def scale_module(module, scale):
- """
- Scale the parameters of a module and return it.
- """
- for p in module.parameters():
- p.detach().mul_(scale)
- return module
-
-
-def mean_flat(tensor):
- """
- Take the mean over all non-batch dimensions.
- """
- return tensor.mean(dim=list(range(1, len(tensor.shape))))
-
-
-def normalization(channels):
- """
- Make a standard normalization layer.
-
- :param channels: number of input channels.
- :return: an nn.Module for normalization.
- """
- return GroupNorm32(32, channels)
-
-
-def timestep_embedding(timesteps, dim, max_period=10000):
- """
- Create sinusoidal timestep embeddings.
-
- :param timesteps: a 1-D Tensor of N indices, one per batch element.
- These may be fractional.
- :param dim: the dimension of the output.
- :param max_period: controls the minimum frequency of the embeddings.
- :return: an [N x dim] Tensor of positional embeddings.
- """
- half = dim // 2
- freqs = th.exp(
- -math.log(max_period) * th.arange(start=0, end=half, dtype=th.float32) / half
- ).to(device=timesteps.device)
- args = timesteps[:, None].float() * freqs[None]
- embedding = th.cat([th.cos(args), th.sin(args)], dim=-1)
- if dim % 2:
- embedding = th.cat([embedding, th.zeros_like(embedding[:, :1])], dim=-1)
- return embedding
-
-
-def checkpoint(func, inputs, params, flag):
- """
- Evaluate a function without caching intermediate activations, allowing for
- reduced memory at the expense of extra compute in the backward pass.
-
- :param func: the function to evaluate.
- :param inputs: the argument sequence to pass to `func`.
- :param params: a sequence of parameters `func` depends on but does not
- explicitly take as arguments.
- :param flag: if False, disable gradient checkpointing.
- """
- if flag:
- args = tuple(inputs) + tuple(params)
- return CheckpointFunction.apply(func, len(inputs), *args)
- else:
- return func(*inputs)
-
-
-class CheckpointFunction(th.autograd.Function):
- @staticmethod
- def forward(ctx, run_function, length, *args):
- ctx.run_function = run_function
- ctx.input_tensors = list(args[:length])
- ctx.input_params = list(args[length:])
- with th.no_grad():
- output_tensors = ctx.run_function(*ctx.input_tensors)
- return output_tensors
-
- @staticmethod
- def backward(ctx, *output_grads):
- ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors]
- with th.enable_grad():
- # Fixes a bug where the first op in run_function modifies the
- # Tensor storage in place, which is not allowed for detach()'d
- # Tensors.
- shallow_copies = [x.view_as(x) for x in ctx.input_tensors]
- output_tensors = ctx.run_function(*shallow_copies)
- input_grads = th.autograd.grad(
- output_tensors,
- ctx.input_tensors + ctx.input_params,
- output_grads,
- allow_unused=True,
- )
- del ctx.input_tensors
- del ctx.input_params
- del output_tensors
- return (None, None) + input_grads
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/midas/api.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/midas/api.py
deleted file mode 100644
index 1ab9f15bf96bbaffcee0e3e29fc9d3979d6c32e8..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/midas/api.py
+++ /dev/null
@@ -1,169 +0,0 @@
-# based on https://github.com/isl-org/MiDaS
-
-import cv2
-import os
-import torch
-import torch.nn as nn
-from torchvision.transforms import Compose
-
-from .midas.dpt_depth import DPTDepthModel
-from .midas.midas_net import MidasNet
-from .midas.midas_net_custom import MidasNet_small
-from .midas.transforms import Resize, NormalizeImage, PrepareForNet
-from annotator.util import annotator_ckpts_path
-
-
-ISL_PATHS = {
- "dpt_large": os.path.join(annotator_ckpts_path, "dpt_large-midas-2f21e586.pt"),
- "dpt_hybrid": os.path.join(annotator_ckpts_path, "dpt_hybrid-midas-501f0c75.pt"),
- "midas_v21": "",
- "midas_v21_small": "",
-}
-
-remote_model_path = "https://huggingface.co/lllyasviel/ControlNet/resolve/main/annotator/ckpts/dpt_hybrid-midas-501f0c75.pt"
-
-
-def disabled_train(self, mode=True):
- """Overwrite model.train with this function to make sure train/eval mode
- does not change anymore."""
- return self
-
-
-def load_midas_transform(model_type):
- # https://github.com/isl-org/MiDaS/blob/master/run.py
- # load transform only
- if model_type == "dpt_large": # DPT-Large
- net_w, net_h = 384, 384
- resize_mode = "minimal"
- normalization = NormalizeImage(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
-
- elif model_type == "dpt_hybrid": # DPT-Hybrid
- net_w, net_h = 384, 384
- resize_mode = "minimal"
- normalization = NormalizeImage(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
-
- elif model_type == "midas_v21":
- net_w, net_h = 384, 384
- resize_mode = "upper_bound"
- normalization = NormalizeImage(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
-
- elif model_type == "midas_v21_small":
- net_w, net_h = 256, 256
- resize_mode = "upper_bound"
- normalization = NormalizeImage(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
-
- else:
- assert False, f"model_type '{model_type}' not implemented, use: --model_type large"
-
- transform = Compose(
- [
- Resize(
- net_w,
- net_h,
- resize_target=None,
- keep_aspect_ratio=True,
- ensure_multiple_of=32,
- resize_method=resize_mode,
- image_interpolation_method=cv2.INTER_CUBIC,
- ),
- normalization,
- PrepareForNet(),
- ]
- )
-
- return transform
-
-
-def load_model(model_type):
- # https://github.com/isl-org/MiDaS/blob/master/run.py
- # load network
- model_path = ISL_PATHS[model_type]
- if model_type == "dpt_large": # DPT-Large
- model = DPTDepthModel(
- path=model_path,
- backbone="vitl16_384",
- non_negative=True,
- )
- net_w, net_h = 384, 384
- resize_mode = "minimal"
- normalization = NormalizeImage(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
-
- elif model_type == "dpt_hybrid": # DPT-Hybrid
- if not os.path.exists(model_path):
- from basicsr.utils.download_util import load_file_from_url
- load_file_from_url(remote_model_path, model_dir=annotator_ckpts_path)
-
- model = DPTDepthModel(
- path=model_path,
- backbone="vitb_rn50_384",
- non_negative=True,
- )
- net_w, net_h = 384, 384
- resize_mode = "minimal"
- normalization = NormalizeImage(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
-
- elif model_type == "midas_v21":
- model = MidasNet(model_path, non_negative=True)
- net_w, net_h = 384, 384
- resize_mode = "upper_bound"
- normalization = NormalizeImage(
- mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]
- )
-
- elif model_type == "midas_v21_small":
- model = MidasNet_small(model_path, features=64, backbone="efficientnet_lite3", exportable=True,
- non_negative=True, blocks={'expand': True})
- net_w, net_h = 256, 256
- resize_mode = "upper_bound"
- normalization = NormalizeImage(
- mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]
- )
-
- else:
- print(f"model_type '{model_type}' not implemented, use: --model_type large")
- assert False
-
- transform = Compose(
- [
- Resize(
- net_w,
- net_h,
- resize_target=None,
- keep_aspect_ratio=True,
- ensure_multiple_of=32,
- resize_method=resize_mode,
- image_interpolation_method=cv2.INTER_CUBIC,
- ),
- normalization,
- PrepareForNet(),
- ]
- )
-
- return model.eval(), transform
-
-
-class MiDaSInference(nn.Module):
- MODEL_TYPES_TORCH_HUB = [
- "DPT_Large",
- "DPT_Hybrid",
- "MiDaS_small"
- ]
- MODEL_TYPES_ISL = [
- "dpt_large",
- "dpt_hybrid",
- "midas_v21",
- "midas_v21_small",
- ]
-
- def __init__(self, model_type):
- super().__init__()
- assert (model_type in self.MODEL_TYPES_ISL)
- model, _ = load_model(model_type)
- self.model = model
- self.model.train = disabled_train
-
- def forward(self, x):
- with torch.no_grad():
- prediction = self.model(x)
- return prediction
-
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/conv2d_adaptive_padding.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/conv2d_adaptive_padding.py
deleted file mode 100644
index b45e758ac6cf8dfb0382d072fe09125bc7e9b888..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/conv2d_adaptive_padding.py
+++ /dev/null
@@ -1,62 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import math
-
-from torch import nn
-from torch.nn import functional as F
-
-from .registry import CONV_LAYERS
-
-
-@CONV_LAYERS.register_module()
-class Conv2dAdaptivePadding(nn.Conv2d):
- """Implementation of 2D convolution in tensorflow with `padding` as "same",
- which applies padding to input (if needed) so that input image gets fully
- covered by filter and stride you specified. For stride 1, this will ensure
- that output image size is same as input. For stride of 2, output dimensions
- will be half, for example.
-
- Args:
- in_channels (int): Number of channels in the input image
- out_channels (int): Number of channels produced by the convolution
- kernel_size (int or tuple): Size of the convolving kernel
- stride (int or tuple, optional): Stride of the convolution. Default: 1
- padding (int or tuple, optional): Zero-padding added to both sides of
- the input. Default: 0
- dilation (int or tuple, optional): Spacing between kernel elements.
- Default: 1
- groups (int, optional): Number of blocked connections from input
- channels to output channels. Default: 1
- bias (bool, optional): If ``True``, adds a learnable bias to the
- output. Default: ``True``
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- kernel_size,
- stride=1,
- padding=0,
- dilation=1,
- groups=1,
- bias=True):
- super().__init__(in_channels, out_channels, kernel_size, stride, 0,
- dilation, groups, bias)
-
- def forward(self, x):
- img_h, img_w = x.size()[-2:]
- kernel_h, kernel_w = self.weight.size()[-2:]
- stride_h, stride_w = self.stride
- output_h = math.ceil(img_h / stride_h)
- output_w = math.ceil(img_w / stride_w)
- pad_h = (
- max((output_h - 1) * self.stride[0] +
- (kernel_h - 1) * self.dilation[0] + 1 - img_h, 0))
- pad_w = (
- max((output_w - 1) * self.stride[1] +
- (kernel_w - 1) * self.dilation[1] + 1 - img_w, 0))
- if pad_h > 0 or pad_w > 0:
- x = F.pad(x, [
- pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2
- ])
- return F.conv2d(x, self.weight, self.bias, self.stride, self.padding,
- self.dilation, self.groups)
diff --git a/spaces/AsakuraMizu/moe-tts/models.py b/spaces/AsakuraMizu/moe-tts/models.py
deleted file mode 100644
index c214bbb0476ba4777093d8bcf032961f09e59496..0000000000000000000000000000000000000000
--- a/spaces/AsakuraMizu/moe-tts/models.py
+++ /dev/null
@@ -1,549 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import modules
-import attentions
-import monotonic_align
-
-from torch.nn import Conv1d, ConvTranspose1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from commons import init_weights, get_padding
-
-
-class StochasticDurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0):
- super().__init__()
- filter_channels = in_channels # it needs to be removed from future version.
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.log_flow = modules.Log()
- self.flows = nn.ModuleList()
- self.flows.append(modules.ElementwiseAffine(2))
- for i in range(n_flows):
- self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.flows.append(modules.Flip())
-
- self.post_pre = nn.Conv1d(1, filter_channels, 1)
- self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- self.post_flows = nn.ModuleList()
- self.post_flows.append(modules.ElementwiseAffine(2))
- for i in range(4):
- self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.post_flows.append(modules.Flip())
-
- self.pre = nn.Conv1d(in_channels, filter_channels, 1)
- self.proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, filter_channels, 1)
-
- def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0):
- x = torch.detach(x)
- x = self.pre(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.convs(x, x_mask)
- x = self.proj(x) * x_mask
-
- if not reverse:
- flows = self.flows
- assert w is not None
-
- logdet_tot_q = 0
- h_w = self.post_pre(w)
- h_w = self.post_convs(h_w, x_mask)
- h_w = self.post_proj(h_w) * x_mask
- e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask
- z_q = e_q
- for flow in self.post_flows:
- z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
- logdet_tot_q += logdet_q
- z_u, z1 = torch.split(z_q, [1, 1], 1)
- u = torch.sigmoid(z_u) * x_mask
- z0 = (w - u) * x_mask
- logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2])
- logq = torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q ** 2)) * x_mask, [1, 2]) - logdet_tot_q
-
- logdet_tot = 0
- z0, logdet = self.log_flow(z0, x_mask)
- logdet_tot += logdet
- z = torch.cat([z0, z1], 1)
- for flow in flows:
- z, logdet = flow(z, x_mask, g=x, reverse=reverse)
- logdet_tot = logdet_tot + logdet
- nll = torch.sum(0.5 * (math.log(2 * math.pi) + (z ** 2)) * x_mask, [1, 2]) - logdet_tot
- return nll + logq # [b]
- else:
- flows = list(reversed(self.flows))
- flows = flows[:-2] + [flows[-1]] # remove a useless vflow
- z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale
- for flow in flows:
- z = flow(z, x_mask, g=x, reverse=reverse)
- z0, z1 = torch.split(z, [1, 1], 1)
- logw = z0
- return logw
-
-
-class DurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0):
- super().__init__()
-
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.gin_channels = gin_channels
-
- self.drop = nn.Dropout(p_dropout)
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2)
- self.norm_1 = modules.LayerNorm(filter_channels)
- self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size // 2)
- self.norm_2 = modules.LayerNorm(filter_channels)
- self.proj = nn.Conv1d(filter_channels, 1, 1)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, in_channels, 1)
-
- def forward(self, x, x_mask, g=None):
- x = torch.detach(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.conv_1(x * x_mask)
- x = torch.relu(x)
- x = self.norm_1(x)
- x = self.drop(x)
- x = self.conv_2(x * x_mask)
- x = torch.relu(x)
- x = self.norm_2(x)
- x = self.drop(x)
- x = self.proj(x * x_mask)
- return x * x_mask
-
-
-class TextEncoder(nn.Module):
- def __init__(self,
- n_vocab,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- emotion_embedding):
- super().__init__()
- self.n_vocab = n_vocab
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emotion_embedding = emotion_embedding
-
- if self.n_vocab != 0:
- self.emb = nn.Embedding(n_vocab, hidden_channels)
- if emotion_embedding:
- self.emo_proj = nn.Linear(1024, hidden_channels)
- nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5)
-
- self.encoder = attentions.Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, emotion_embedding=None):
- if self.n_vocab != 0:
- x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h]
- if emotion_embedding is not None:
- x = x + self.emo_proj(emotion_embedding.unsqueeze(1))
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return x, m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers,
- gin_channels=gin_channels, mean_only=True))
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
-
-class Generator(torch.nn.Module):
- def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates,
- upsample_initial_channel, upsample_kernel_sizes, gin_channels=0):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)
- resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(weight_norm(
- ConvTranspose1d(upsample_initial_channel // (2 ** i), upsample_initial_channel // (2 ** (i + 1)),
- k, u, padding=(k - u) // 2)))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- print('Removing weight norm...')
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
- ])
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList([
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ])
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class SynthesizerTrn(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(self,
- n_vocab,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- n_speakers=0,
- gin_channels=0,
- use_sdp=True,
- emotion_embedding=False,
- **kwargs):
-
- super().__init__()
- self.n_vocab = n_vocab
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.n_speakers = n_speakers
- self.gin_channels = gin_channels
-
- self.use_sdp = use_sdp
-
- self.enc_p = TextEncoder(n_vocab,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- emotion_embedding)
- self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates,
- upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels)
- self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16,
- gin_channels=gin_channels)
- self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels)
-
- if use_sdp:
- self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels)
- else:
- self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels)
-
- if n_speakers > 1:
- self.emb_g = nn.Embedding(n_speakers, gin_channels)
-
- def forward(self, x, x_lengths, y, y_lengths, sid=None, emotion_embedding=None):
-
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, emotion_embedding)
- if self.n_speakers > 1:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = None
-
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
-
- with torch.no_grad():
- # negative cross-entropy
- s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t]
- neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s]
- neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2),
- s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
- neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
- neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s]
- neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4
-
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach()
-
- w = attn.sum(2)
- if self.use_sdp:
- l_length = self.dp(x, x_mask, w, g=g)
- l_length = l_length / torch.sum(x_mask)
- else:
- logw_ = torch.log(w + 1e-6) * x_mask
- logw = self.dp(x, x_mask, g=g)
- l_length = torch.sum((logw - logw_) ** 2, [1, 2]) / torch.sum(x_mask) # for averaging
-
- # expand prior
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2)
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)
-
- z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size)
- o = self.dec(z_slice, g=g)
- return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None,
- emotion_embedding=None):
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, emotion_embedding)
- if self.n_speakers > 1:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = None
-
- if self.use_sdp:
- logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w)
- else:
- logw = self.dp(x, x_mask, g=g)
- w = torch.exp(logw) * x_mask * length_scale
- w_ceil = torch.ceil(w)
- y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
- y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype)
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = commons.generate_path(w_ceil, attn_mask)
-
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1,
- 2) # [b, t', t], [b, t, d] -> [b, d, t']
-
- z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
- z = self.flow(z_p, y_mask, g=g, reverse=True)
- o = self.dec((z * y_mask)[:, :, :max_len], g=g)
- return o, attn, y_mask, (z, z_p, m_p, logs_p)
-
- def voice_conversion(self, y, y_lengths, sid_src, sid_tgt):
- assert self.n_speakers > 1, "n_speakers must be larger than 1."
- g_src = self.emb_g(sid_src).unsqueeze(-1)
- g_tgt = self.emb_g(sid_tgt).unsqueeze(-1)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src)
- z_p = self.flow(z, y_mask, g=g_src)
- z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True)
- o_hat = self.dec(z_hat * y_mask, g=g_tgt)
- return o_hat, y_mask, (z, z_p, z_hat)
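
For orientation, here is a minimal inference sketch for the SynthesizerTrn class deleted above. It is illustrative only: utils.get_hparams_from_file, utils.load_checkpoint, the symbols table, text_to_sequence, and the config/checkpoint paths are assumptions drawn from common VITS-style repositories, not from this diff.

import torch

# Assumed helpers from a typical VITS repo layout (not part of this diff).
hps = utils.get_hparams_from_file("configs/config.json")
net_g = SynthesizerTrn(
    len(symbols),                                   # n_vocab
    hps.data.filter_length // 2 + 1,                # spec_channels
    hps.train.segment_size // hps.data.hop_length,  # segment_size
    **hps.model).eval()
utils.load_checkpoint("G_latest.pth", net_g, None)

seq = torch.LongTensor(text_to_sequence("hello world", hps.data.text_cleaners))
x, x_lengths = seq.unsqueeze(0), torch.LongTensor([seq.size(0)])
with torch.no_grad():
    # infer() returns (o, attn, y_mask, ...); o is the waveform [b, 1, T].
    audio = net_g.infer(x, x_lengths, noise_scale=0.667,
                        noise_scale_w=0.8, length_scale=1.0)[0][0, 0]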
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/operations/build/wheel_legacy.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/operations/build/wheel_legacy.py
deleted file mode 100644
index c5f0492ccbe9c727c835c12c84a1d8340366fa1e..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/operations/build/wheel_legacy.py
+++ /dev/null
@@ -1,102 +0,0 @@
-import logging
-import os.path
-from typing import List, Optional
-
-from pip._internal.cli.spinners import open_spinner
-from pip._internal.utils.setuptools_build import make_setuptools_bdist_wheel_args
-from pip._internal.utils.subprocess import call_subprocess, format_command_args
-
-logger = logging.getLogger(__name__)
-
-
-def format_command_result(
- command_args: List[str],
- command_output: str,
-) -> str:
- """Format command information for logging."""
- command_desc = format_command_args(command_args)
- text = f"Command arguments: {command_desc}\n"
-
- if not command_output:
- text += "Command output: None"
- elif logger.getEffectiveLevel() > logging.DEBUG:
- text += "Command output: [use --verbose to show]"
- else:
- if not command_output.endswith("\n"):
- command_output += "\n"
- text += f"Command output:\n{command_output}"
-
- return text
-
-
-def get_legacy_build_wheel_path(
- names: List[str],
- temp_dir: str,
- name: str,
- command_args: List[str],
- command_output: str,
-) -> Optional[str]:
- """Return the path to the wheel in the temporary build directory."""
- # Sort for determinism.
- names = sorted(names)
- if not names:
- msg = ("Legacy build of wheel for {!r} created no files.\n").format(name)
- msg += format_command_result(command_args, command_output)
- logger.warning(msg)
- return None
-
- if len(names) > 1:
- msg = (
- "Legacy build of wheel for {!r} created more than one file.\n"
- "Filenames (choosing first): {}\n"
- ).format(name, names)
- msg += format_command_result(command_args, command_output)
- logger.warning(msg)
-
- return os.path.join(temp_dir, names[0])
-
-
-def build_wheel_legacy(
- name: str,
- setup_py_path: str,
- source_dir: str,
- global_options: List[str],
- build_options: List[str],
- tempd: str,
-) -> Optional[str]:
- """Build one unpacked package using the "legacy" build process.
-
- Returns path to wheel if successfully built. Otherwise, returns None.
- """
- wheel_args = make_setuptools_bdist_wheel_args(
- setup_py_path,
- global_options=global_options,
- build_options=build_options,
- destination_dir=tempd,
- )
-
- spin_message = f"Building wheel for {name} (setup.py)"
- with open_spinner(spin_message) as spinner:
- logger.debug("Destination directory: %s", tempd)
-
- try:
- output = call_subprocess(
- wheel_args,
- command_desc="python setup.py bdist_wheel",
- cwd=source_dir,
- spinner=spinner,
- )
- except Exception:
- spinner.finish("error")
- logger.error("Failed building wheel for %s", name)
- return None
-
- names = os.listdir(tempd)
- wheel_path = get_legacy_build_wheel_path(
- names=names,
- temp_dir=tempd,
- name=name,
- command_args=wheel_args,
- command_output=output,
- )
- return wheel_path
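
Since build_wheel_legacy is this module's entry point, a hedged usage sketch may help. pip's internal APIs are explicitly unsupported for direct use, so treat this strictly as an illustration; the project name and paths below are hypothetical.

import tempfile

# Hypothetical source tree; pip normally drives this helper itself.
with tempfile.TemporaryDirectory() as tempd:
    wheel_path = build_wheel_legacy(
        name="example-pkg",
        setup_py_path="/src/example-pkg/setup.py",
        source_dir="/src/example-pkg",
        global_options=[],
        build_options=[],
        tempd=tempd,
    )
    print(wheel_path)  # path to the built .whl, or None if the build failed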
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/idna/core.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/idna/core.py
deleted file mode 100644
index 4f3003711020eac05ef5a19ab29ba5670d89f642..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/idna/core.py
+++ /dev/null
@@ -1,400 +0,0 @@
-from . import idnadata
-import bisect
-import unicodedata
-import re
-from typing import Union, Optional
-from .intranges import intranges_contain
-
-_virama_combining_class = 9
-_alabel_prefix = b'xn--'
-_unicode_dots_re = re.compile('[\u002e\u3002\uff0e\uff61]')
-
-class IDNAError(UnicodeError):
- """ Base exception for all IDNA-encoding related problems """
- pass
-
-
-class IDNABidiError(IDNAError):
- """ Exception when bidirectional requirements are not satisfied """
- pass
-
-
-class InvalidCodepoint(IDNAError):
- """ Exception when a disallowed or unallocated codepoint is used """
- pass
-
-
-class InvalidCodepointContext(IDNAError):
- """ Exception when the codepoint is not valid in the context it is used """
- pass
-
-
-def _combining_class(cp: int) -> int:
- v = unicodedata.combining(chr(cp))
- if v == 0:
- if not unicodedata.name(chr(cp)):
- raise ValueError('Unknown character in unicodedata')
- return v
-
-def _is_script(cp: str, script: str) -> bool:
- return intranges_contain(ord(cp), idnadata.scripts[script])
-
-def _punycode(s: str) -> bytes:
- return s.encode('punycode')
-
-def _unot(s: int) -> str:
- return 'U+{:04X}'.format(s)
-
-
-def valid_label_length(label: Union[bytes, str]) -> bool:
- if len(label) > 63:
- return False
- return True
-
-
-def valid_string_length(label: Union[bytes, str], trailing_dot: bool) -> bool:
- if len(label) > (254 if trailing_dot else 253):
- return False
- return True
-
-
-def check_bidi(label: str, check_ltr: bool = False) -> bool:
- # Bidi rules should only be applied if string contains RTL characters
- bidi_label = False
- for (idx, cp) in enumerate(label, 1):
- direction = unicodedata.bidirectional(cp)
- if direction == '':
- # String likely comes from a newer version of Unicode
- raise IDNABidiError('Unknown directionality in label {} at position {}'.format(repr(label), idx))
- if direction in ['R', 'AL', 'AN']:
- bidi_label = True
- if not bidi_label and not check_ltr:
- return True
-
- # Bidi rule 1
- direction = unicodedata.bidirectional(label[0])
- if direction in ['R', 'AL']:
- rtl = True
- elif direction == 'L':
- rtl = False
- else:
- raise IDNABidiError('First codepoint in label {} must be directionality L, R or AL'.format(repr(label)))
-
- valid_ending = False
- number_type = None # type: Optional[str]
- for (idx, cp) in enumerate(label, 1):
- direction = unicodedata.bidirectional(cp)
-
- if rtl:
- # Bidi rule 2
- if direction not in ['R', 'AL', 'AN', 'EN', 'ES', 'CS', 'ET', 'ON', 'BN', 'NSM']:
- raise IDNABidiError('Invalid direction for codepoint at position {} in a right-to-left label'.format(idx))
- # Bidi rule 3
- if direction in ['R', 'AL', 'EN', 'AN']:
- valid_ending = True
- elif direction != 'NSM':
- valid_ending = False
- # Bidi rule 4
- if direction in ['AN', 'EN']:
- if not number_type:
- number_type = direction
- else:
- if number_type != direction:
- raise IDNABidiError('Can not mix numeral types in a right-to-left label')
- else:
- # Bidi rule 5
- if direction not in ['L', 'EN', 'ES', 'CS', 'ET', 'ON', 'BN', 'NSM']:
- raise IDNABidiError('Invalid direction for codepoint at position {} in a left-to-right label'.format(idx))
- # Bidi rule 6
- if direction in ['L', 'EN']:
- valid_ending = True
- elif direction != 'NSM':
- valid_ending = False
-
- if not valid_ending:
- raise IDNABidiError('Label ends with illegal codepoint directionality')
-
- return True
-
-
-def check_initial_combiner(label: str) -> bool:
- if unicodedata.category(label[0])[0] == 'M':
- raise IDNAError('Label begins with an illegal combining character')
- return True
-
-
-def check_hyphen_ok(label: str) -> bool:
- if label[2:4] == '--':
- raise IDNAError('Label has disallowed hyphens in 3rd and 4th position')
- if label[0] == '-' or label[-1] == '-':
- raise IDNAError('Label must not start or end with a hyphen')
- return True
-
-
-def check_nfc(label: str) -> None:
- if unicodedata.normalize('NFC', label) != label:
- raise IDNAError('Label must be in Normalization Form C')
-
-
-def valid_contextj(label: str, pos: int) -> bool:
- cp_value = ord(label[pos])
-
- if cp_value == 0x200c:
-
- if pos > 0:
- if _combining_class(ord(label[pos - 1])) == _virama_combining_class:
- return True
-
- ok = False
- for i in range(pos-1, -1, -1):
- joining_type = idnadata.joining_types.get(ord(label[i]))
- if joining_type == ord('T'):
- continue
- if joining_type in [ord('L'), ord('D')]:
- ok = True
- break
-
- if not ok:
- return False
-
- ok = False
- for i in range(pos+1, len(label)):
- joining_type = idnadata.joining_types.get(ord(label[i]))
- if joining_type == ord('T'):
- continue
- if joining_type in [ord('R'), ord('D')]:
- ok = True
- break
- return ok
-
- if cp_value == 0x200d:
-
- if pos > 0:
- if _combining_class(ord(label[pos - 1])) == _virama_combining_class:
- return True
- return False
-
- else:
-
- return False
-
-
-def valid_contexto(label: str, pos: int, exception: bool = False) -> bool:
- cp_value = ord(label[pos])
-
- if cp_value == 0x00b7:
- if 0 < pos < len(label)-1:
- if ord(label[pos - 1]) == 0x006c and ord(label[pos + 1]) == 0x006c:
- return True
- return False
-
- elif cp_value == 0x0375:
- if pos < len(label)-1 and len(label) > 1:
- return _is_script(label[pos + 1], 'Greek')
- return False
-
- elif cp_value == 0x05f3 or cp_value == 0x05f4:
- if pos > 0:
- return _is_script(label[pos - 1], 'Hebrew')
- return False
-
- elif cp_value == 0x30fb:
- for cp in label:
- if cp == '\u30fb':
- continue
- if _is_script(cp, 'Hiragana') or _is_script(cp, 'Katakana') or _is_script(cp, 'Han'):
- return True
- return False
-
- elif 0x660 <= cp_value <= 0x669:
- for cp in label:
- if 0x6f0 <= ord(cp) <= 0x06f9:
- return False
- return True
-
- elif 0x6f0 <= cp_value <= 0x6f9:
- for cp in label:
- if 0x660 <= ord(cp) <= 0x0669:
- return False
- return True
-
- return False
-
-
-def check_label(label: Union[str, bytes, bytearray]) -> None:
- if isinstance(label, (bytes, bytearray)):
- label = label.decode('utf-8')
- if len(label) == 0:
- raise IDNAError('Empty Label')
-
- check_nfc(label)
- check_hyphen_ok(label)
- check_initial_combiner(label)
-
- for (pos, cp) in enumerate(label):
- cp_value = ord(cp)
- if intranges_contain(cp_value, idnadata.codepoint_classes['PVALID']):
- continue
- elif intranges_contain(cp_value, idnadata.codepoint_classes['CONTEXTJ']):
- try:
- if not valid_contextj(label, pos):
- raise InvalidCodepointContext('Joiner {} not allowed at position {} in {}'.format(
- _unot(cp_value), pos+1, repr(label)))
- except ValueError:
- raise IDNAError('Unknown codepoint adjacent to joiner {} at position {} in {}'.format(
- _unot(cp_value), pos+1, repr(label)))
- elif intranges_contain(cp_value, idnadata.codepoint_classes['CONTEXTO']):
- if not valid_contexto(label, pos):
- raise InvalidCodepointContext('Codepoint {} not allowed at position {} in {}'.format(_unot(cp_value), pos+1, repr(label)))
- else:
- raise InvalidCodepoint('Codepoint {} at position {} of {} not allowed'.format(_unot(cp_value), pos+1, repr(label)))
-
- check_bidi(label)
-
-
-def alabel(label: str) -> bytes:
- try:
- label_bytes = label.encode('ascii')
- ulabel(label_bytes)
- if not valid_label_length(label_bytes):
- raise IDNAError('Label too long')
- return label_bytes
- except UnicodeEncodeError:
- pass
-
- if not label:
- raise IDNAError('No Input')
-
- label = str(label)
- check_label(label)
- label_bytes = _punycode(label)
- label_bytes = _alabel_prefix + label_bytes
-
- if not valid_label_length(label_bytes):
- raise IDNAError('Label too long')
-
- return label_bytes
-
-
-def ulabel(label: Union[str, bytes, bytearray]) -> str:
- if not isinstance(label, (bytes, bytearray)):
- try:
- label_bytes = label.encode('ascii')
- except UnicodeEncodeError:
- check_label(label)
- return label
- else:
- label_bytes = label
-
- label_bytes = label_bytes.lower()
- if label_bytes.startswith(_alabel_prefix):
- label_bytes = label_bytes[len(_alabel_prefix):]
- if not label_bytes:
- raise IDNAError('Malformed A-label, no Punycode eligible content found')
- if label_bytes.decode('ascii')[-1] == '-':
- raise IDNAError('A-label must not end with a hyphen')
- else:
- check_label(label_bytes)
- return label_bytes.decode('ascii')
-
- try:
- label = label_bytes.decode('punycode')
- except UnicodeError:
- raise IDNAError('Invalid A-label')
- check_label(label)
- return label
-
-
-def uts46_remap(domain: str, std3_rules: bool = True, transitional: bool = False) -> str:
- """Re-map the characters in the string according to UTS46 processing."""
- from .uts46data import uts46data
- output = ''
-
- for pos, char in enumerate(domain):
- code_point = ord(char)
- try:
- uts46row = uts46data[code_point if code_point < 256 else
- bisect.bisect_left(uts46data, (code_point, 'Z')) - 1]
- status = uts46row[1]
- replacement = None # type: Optional[str]
- if len(uts46row) == 3:
- replacement = uts46row[2] # type: ignore
- if (status == 'V' or
- (status == 'D' and not transitional) or
- (status == '3' and not std3_rules and replacement is None)):
- output += char
- elif replacement is not None and (status == 'M' or
- (status == '3' and not std3_rules) or
- (status == 'D' and transitional)):
- output += replacement
- elif status != 'I':
- raise IndexError()
- except IndexError:
- raise InvalidCodepoint(
- 'Codepoint {} not allowed at position {} in {}'.format(
- _unot(code_point), pos + 1, repr(domain)))
-
- return unicodedata.normalize('NFC', output)
-
-
-def encode(s: Union[str, bytes, bytearray], strict: bool = False, uts46: bool = False, std3_rules: bool = False, transitional: bool = False) -> bytes:
- if isinstance(s, (bytes, bytearray)):
- try:
- s = s.decode('ascii')
- except UnicodeDecodeError:
- raise IDNAError('should pass a unicode string to the function rather than a byte string.')
- if uts46:
- s = uts46_remap(s, std3_rules, transitional)
- trailing_dot = False
- result = []
- if strict:
- labels = s.split('.')
- else:
- labels = _unicode_dots_re.split(s)
- if not labels or labels == ['']:
- raise IDNAError('Empty domain')
- if labels[-1] == '':
- del labels[-1]
- trailing_dot = True
- for label in labels:
- s = alabel(label)
- if s:
- result.append(s)
- else:
- raise IDNAError('Empty label')
- if trailing_dot:
- result.append(b'')
- s = b'.'.join(result)
- if not valid_string_length(s, trailing_dot):
- raise IDNAError('Domain too long')
- return s
-
-
-def decode(s: Union[str, bytes, bytearray], strict: bool = False, uts46: bool = False, std3_rules: bool = False) -> str:
- try:
- if isinstance(s, (bytes, bytearray)):
- s = s.decode('ascii')
- except UnicodeDecodeError:
- raise IDNAError('Invalid ASCII in A-label')
- if uts46:
- s = uts46_remap(s, std3_rules, False)
- trailing_dot = False
- result = []
- if not strict:
- labels = _unicode_dots_re.split(s)
- else:
- labels = s.split('.')
- if not labels or labels == ['']:
- raise IDNAError('Empty domain')
- if not labels[-1]:
- del labels[-1]
- trailing_dot = True
- for label in labels:
- s = ulabel(label)
- if s:
- result.append(s)
- else:
- raise IDNAError('Empty label')
- if trailing_dot:
- result.append('')
- return '.'.join(result)
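
As a quick sanity check of the encode()/decode() pair defined above, the round trip below uses the example from the idna project's own documentation (assuming the vendored module is importable):

from pip._vendor import idna

# Round-trip an internationalized domain name through its A-label form.
print(idna.encode('ドメイン.テスト'))            # b'xn--eckwd4c7c.xn--zckzah'
print(idna.decode(b'xn--eckwd4c7c.xn--zckzah'))  # 'ドメイン.テスト'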
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/ansi.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/ansi.py
deleted file mode 100644
index 66365e6536080bd9372d2a7a58b8ffa3447fec34..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/ansi.py
+++ /dev/null
@@ -1,240 +0,0 @@
-import re
-import sys
-from contextlib import suppress
-from typing import Iterable, NamedTuple, Optional
-
-from .color import Color
-from .style import Style
-from .text import Text
-
-re_ansi = re.compile(
- r"""
-(?:\x1b\](.*?)\x1b\\)|
-(?:\x1b([(@-Z\\-_]|\[[0-?]*[ -/]*[@-~]))
-""",
- re.VERBOSE,
-)
-
-
-class _AnsiToken(NamedTuple):
- """Result of ansi tokenized string."""
-
- plain: str = ""
- sgr: Optional[str] = ""
- osc: Optional[str] = ""
-
-
-def _ansi_tokenize(ansi_text: str) -> Iterable[_AnsiToken]:
- """Tokenize a string into plain text and ANSI codes.
-
- Args:
- ansi_text (str): A String containing ANSI codes.
-
- Yields:
- AnsiToken: A named tuple of (plain, sgr, osc)
- """
-
- position = 0
- sgr: Optional[str]
- osc: Optional[str]
- for match in re_ansi.finditer(ansi_text):
- start, end = match.span(0)
- osc, sgr = match.groups()
- if start > position:
- yield _AnsiToken(ansi_text[position:start])
- if sgr:
- if sgr == "(":
- position = end + 1
- continue
- if sgr.endswith("m"):
- yield _AnsiToken("", sgr[1:-1], osc)
- else:
- yield _AnsiToken("", sgr, osc)
- position = end
- if position < len(ansi_text):
- yield _AnsiToken(ansi_text[position:])
-
-
-SGR_STYLE_MAP = {
- 1: "bold",
- 2: "dim",
- 3: "italic",
- 4: "underline",
- 5: "blink",
- 6: "blink2",
- 7: "reverse",
- 8: "conceal",
- 9: "strike",
- 21: "underline2",
- 22: "not dim not bold",
- 23: "not italic",
- 24: "not underline",
- 25: "not blink",
- 26: "not blink2",
- 27: "not reverse",
- 28: "not conceal",
- 29: "not strike",
- 30: "color(0)",
- 31: "color(1)",
- 32: "color(2)",
- 33: "color(3)",
- 34: "color(4)",
- 35: "color(5)",
- 36: "color(6)",
- 37: "color(7)",
- 39: "default",
- 40: "on color(0)",
- 41: "on color(1)",
- 42: "on color(2)",
- 43: "on color(3)",
- 44: "on color(4)",
- 45: "on color(5)",
- 46: "on color(6)",
- 47: "on color(7)",
- 49: "on default",
- 51: "frame",
- 52: "encircle",
- 53: "overline",
- 54: "not frame not encircle",
- 55: "not overline",
- 90: "color(8)",
- 91: "color(9)",
- 92: "color(10)",
- 93: "color(11)",
- 94: "color(12)",
- 95: "color(13)",
- 96: "color(14)",
- 97: "color(15)",
- 100: "on color(8)",
- 101: "on color(9)",
- 102: "on color(10)",
- 103: "on color(11)",
- 104: "on color(12)",
- 105: "on color(13)",
- 106: "on color(14)",
- 107: "on color(15)",
-}
-
-
-class AnsiDecoder:
- """Translate ANSI codes into styled Text."""
-
- def __init__(self) -> None:
- self.style = Style.null()
-
- def decode(self, terminal_text: str) -> Iterable[Text]:
- """Decode ANSI codes in an iterable of lines.
-
- Args:
- lines (Iterable[str]): An iterable of lines of terminal output.
-
- Yields:
- Text: Marked up Text.
- """
- for line in terminal_text.splitlines():
- yield self.decode_line(line)
-
- def decode_line(self, line: str) -> Text:
- """Decode a line containing ansi codes.
-
- Args:
- line (str): A line of terminal output.
-
- Returns:
- Text: A Text instance marked up according to ansi codes.
- """
- from_ansi = Color.from_ansi
- from_rgb = Color.from_rgb
- _Style = Style
- text = Text()
- append = text.append
- line = line.rsplit("\r", 1)[-1]
- for plain_text, sgr, osc in _ansi_tokenize(line):
- if plain_text:
- append(plain_text, self.style or None)
- elif osc is not None:
- if osc.startswith("8;"):
- _params, semicolon, link = osc[2:].partition(";")
- if semicolon:
- self.style = self.style.update_link(link or None)
- elif sgr is not None:
- # Translate into semi-colon separated codes
- # Ignore invalid codes, because we want to be lenient
- codes = [
- min(255, int(_code) if _code else 0)
- for _code in sgr.split(";")
- if _code.isdigit() or _code == ""
- ]
- iter_codes = iter(codes)
- for code in iter_codes:
- if code == 0:
- # reset
- self.style = _Style.null()
- elif code in SGR_STYLE_MAP:
- # styles
- self.style += _Style.parse(SGR_STYLE_MAP[code])
- elif code == 38:
- # Foreground
- with suppress(StopIteration):
- color_type = next(iter_codes)
- if color_type == 5:
- self.style += _Style.from_color(
- from_ansi(next(iter_codes))
- )
- elif color_type == 2:
- self.style += _Style.from_color(
- from_rgb(
- next(iter_codes),
- next(iter_codes),
- next(iter_codes),
- )
- )
- elif code == 48:
- # Background
- with suppress(StopIteration):
- color_type = next(iter_codes)
- if color_type == 5:
- self.style += _Style.from_color(
- None, from_ansi(next(iter_codes))
- )
- elif color_type == 2:
- self.style += _Style.from_color(
- None,
- from_rgb(
- next(iter_codes),
- next(iter_codes),
- next(iter_codes),
- ),
- )
-
- return text
-
-
-if sys.platform != "win32" and __name__ == "__main__": # pragma: no cover
- import io
- import os
- import pty
- import sys
-
- decoder = AnsiDecoder()
-
- stdout = io.BytesIO()
-
- def read(fd: int) -> bytes:
- data = os.read(fd, 1024)
- stdout.write(data)
- return data
-
- pty.spawn(sys.argv[1:], read)
-
- from .console import Console
-
- console = Console(record=True)
-
- stdout_result = stdout.getvalue().decode("utf-8")
- print(stdout_result)
-
- for line in decoder.decode(stdout_result):
- console.print(line)
-
- console.save_html("stdout.html")
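
A short usage sketch for the AnsiDecoder class above (assuming the vendored module is importable): decode_line() turns a string containing SGR escape sequences into a rich Text object whose style spans mirror the ANSI codes.

from pip._vendor.rich.ansi import AnsiDecoder

decoder = AnsiDecoder()
text = decoder.decode_line("\x1b[1;31mbold red\x1b[0m plain")
print(text.plain)  # 'bold red plain' (styling is kept as spans, not codes)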
diff --git a/spaces/AutoLLM/AutoAgents/autoagents/agents/__init__.py b/spaces/AutoLLM/AutoAgents/autoagents/agents/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Awesimo/jojogan/e4e/utils/model_utils.py b/spaces/Awesimo/jojogan/e4e/utils/model_utils.py
deleted file mode 100644
index e51e95578f72b3218d6d832e3b604193cb68c1d7..0000000000000000000000000000000000000000
--- a/spaces/Awesimo/jojogan/e4e/utils/model_utils.py
+++ /dev/null
@@ -1,35 +0,0 @@
-import torch
-import argparse
-from models.psp import pSp
-from models.encoders.psp_encoders import Encoder4Editing
-
-
-def setup_model(checkpoint_path, device='cuda'):
- ckpt = torch.load(checkpoint_path, map_location='cpu')
- opts = ckpt['opts']
-
- opts['checkpoint_path'] = checkpoint_path
- opts['device'] = device
- opts = argparse.Namespace(**opts)
-
- net = pSp(opts)
- net.eval()
- net = net.to(device)
- return net, opts
-
-
-def load_e4e_standalone(checkpoint_path, device='cuda'):
- ckpt = torch.load(checkpoint_path, map_location='cpu')
- opts = argparse.Namespace(**ckpt['opts'])
- e4e = Encoder4Editing(50, 'ir_se', opts)
- e4e_dict = {k.replace('encoder.', ''): v for k, v in ckpt['state_dict'].items() if k.startswith('encoder.')}
- e4e.load_state_dict(e4e_dict)
- e4e.eval()
- e4e = e4e.to(device)
- latent_avg = ckpt['latent_avg'].to(device)
-
- def add_latent_avg(model, inputs, outputs):
- return outputs + latent_avg.repeat(outputs.shape[0], 1, 1)
-
- e4e.register_forward_hook(add_latent_avg)
- return e4e
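
A hedged usage sketch for the two helpers above; the checkpoint filename is an assumption (it matches the name the e4e project distributes), and the e4e repo's models package must be on the import path.

# Load the full pSp network and, separately, a standalone e4e encoder.
net, opts = setup_model("pretrained_models/e4e_ffhq_encode.pt", device="cuda")
e4e = load_e4e_standalone("pretrained_models/e4e_ffhq_encode.pt")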
diff --git a/spaces/BLACKHOST/Banner/README.md b/spaces/BLACKHOST/Banner/README.md
deleted file mode 100644
index 9dbf6828ca8cd0ac62275c21ef1a3f82fa0f56aa..0000000000000000000000000000000000000000
--- a/spaces/BLACKHOST/Banner/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Banner
-emoji: 🚀
-colorFrom: purple
-colorTo: purple
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Benson/text-generation/Examples/Aethersx2 2023 Apk.md b/spaces/Benson/text-generation/Examples/Aethersx2 2023 Apk.md
deleted file mode 100644
index 33213010b45ff1298b62fe098cd4430f2a600b20..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Aethersx2 2023 Apk.md
+++ /dev/null
@@ -1,188 +0,0 @@
-
-AetherSX2 2023 APK: Play PS2 Games on Your Android Device
-
-Do you miss playing your favorite PS2 games but no longer have the console? Do you want to relive the nostalgia of classic titles like Final Fantasy X, God of War, Grand Theft Auto, Metal Gear Solid, and more on your smartphone or tablet? If so, you may be interested in AetherSX2, a PS2 emulator for Android that lets you run PS2 games on your device with high performance and quality. In this article, we will tell you everything you need to know about the AetherSX2 2023 APK, including what it is, how to download and install it, how to play PS2 games on it, and what its pros and cons are.
-
-What is AetherSX2?
-
-A PS2 emulator for Android
-
-AetherSX2 is an emulator of the PS Two console for the Android platform. You can play games that you have dumped from disc on your portable device. A BIOS image is required to play and is not optional. This image must be dumped from your own console using a homebrew app. We recommend biosdrain.
-AetherSX2 has many features that make it one of the best PS2 emulators for Android, such as:
-
-System simulation
-OpenGL, Vulkan, and software rendering
-Upscaling games to 1080p and beyond
-Widescreen patches for games without native support
-Save states
-Touchscreen and Bluetooth controller support
-Games can be loaded from iso/chd/cso disc images
-Per-game settings
-
-However, AetherSX2 also has some requirements that you must meet for it to run smoothly. You need a high-end device to achieve good performance. We recommend at least a device equivalent to a Snapdragon 845. This means 4 big cores (Cortex-A75 level, a single-core Geekbench 5 score of 500 or more).
-
-How to download and install the AetherSX2 APK?
-
-Download from APKCombo
-
-Once you have downloaded the AetherSX2 APK file, you need to install it on your device. To do so, you must enable the installation of apps from unknown sources in your device settings. This lets you install apps that do not come from the Google Play Store. To enable this option, follow these steps:
-
-Go to your device settings and tap Security or Privacy.
-Find the option that says Unknown sources or Install unknown apps and toggle it on.
-Confirm your choice by tapping OK or Allow.
-
-You can now install the AetherSX2 APK file by following these steps:
-
-Locate the AetherSX2 APK file in your device storage using a file manager app.
-Tap the file and select Install.
-Wait for the installation to finish and tap Open or Done.
-
-Grant permissions and run the app
-
-The first time you launch the AetherSX2 app, you will need to grant some permissions for it to work properly. These permissions include:
-
-Storage: to access game files and save states.
-Camera: to scan QR codes for downloading games.
-Microphone: to use voice chat in online multiplayer games.
-
-To grant these permissions, follow these steps:
-
-Tap the AetherSX2 app icon on your home screen or in the app drawer.
-You will see a pop-up asking for permissions. Tap Allow or OK for each one.
-If you do not see the pop-up, go to your device settings and tap Apps or Applications.
-Find and tap AetherSX2, then tap Permissions.
-Toggle on the permissions you want to grant.
-
-You are now ready to use the AetherSX2 app and play PS2 games on your Android device.
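
If you prefer to sideload the APK from a computer rather than through the on-device file manager, the same installation can be scripted. This is a sketch only: it assumes adb is installed and on your PATH, USB debugging is enabled on the phone, and the filename matches your download.

import subprocess

# "-r" reinstalls over an existing copy; the APK filename is hypothetical.
subprocess.run(["adb", "install", "-r", "AetherSX2.apk"], check=True)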
-That is why you should download Truckers of Europe 3 Mod APK Unlimited Money IOS, a modified version of the game that gives you unlimited money and access to all features. With this modded version, you can enjoy the following benefits and advantages:
-
-You can buy any truck, trailer, upgrade, customization, etc. without worrying about the cost.
-You can drive as fast as you want without worrying about speed limits or fines.
-You can ignore traffic rules and drive recklessly without worrying about penalties or accidents.
-You can refuel your truck at any time without worrying about the fuel level or cost.
-You can repair your truck at any time without worrying about the damage level or cost.
-You can rest at any time without worrying about the fatigue level or the clock.
-You can unlock all achievements and trophies without any effort.
-
-How to download and install Truckers of Europe 3 Mod APK Unlimited Money IOS?
-
-Downloading and installing Truckers of Europe 3 Mod APK Unlimited Money IOS is very easy and simple. Just follow these steps:
-
-Click this link to download the modded version of the game: [Download Truckers of Europe 3 Mod APK Unlimited Money IOS].
-Follow the on-screen instructions and allow the necessary permissions.
-Wait for the installation to complete and then launch the game.
-Enjoy playing Truckers of Europe 3 Mod APK Unlimited Money IOS with unlimited money and access to all features.
-
-Note: You may need to enable the installation of apps from unknown sources in your device settings before installing the modded version. You may also need to uninstall the original version of the game if you have it installed on your device.
-
-Tips and tricks for playing Truckers of Europe 3 Mod APK Unlimited Money IOS
-
-Truckers of Europe 3 Mod APK Unlimited Money IOS is a fun and addictive game that will keep you entertained for hours. However, it can also be challenging and difficult at times. That is why we have prepared some tips and tricks to help you improve your skills and enjoy the game more. Here are some of them:
-
-Use the GPS navigation and the map to plan your route and avoid getting lost or stuck.
-Check the weather forecast and adjust your driving accordingly. Avoid driving in bad weather such as rain, snow, fog, etc.
-Use your mirrors and indicators to check your surroundings and signal your intentions. Be careful when changing lanes, overtaking, turning, parking, etc.
-Follow the traffic rules and speed limits to avoid fines and penalties. However, you can also break them if you just want to have fun.
-Keep an eye on your fuel level, damage level, fatigue level, etc. Refuel, repair, and rest when necessary. However, you can also ignore them if you want to play without limitations.
-Customize and upgrade your truck with various paint jobs, accessories, lights, horns, etc. Make your truck look unique and impressive.
-Try different trucks, trailers, cargoes, countries, game modes, difficulty levels, etc. Explore the variety and diversity of the game.
-
-Conclusion
-
-or time. You can also unlock all achievements and trophies without any effort. You can also enjoy playing the online multiplayer mode with other players from around the world. Truckers of Europe 3 Mod APK Unlimited Money IOS is a fun and exciting game that will make you feel like a real truck driver in Europe. Download it now and enjoy the ride!
-
-Frequently asked questions
-
-Here are some frequently asked questions and answers about Truckers of Europe 3 Mod APK Unlimited Money IOS:
-
-Q: Is Truckers of Europe 3 Mod APK Unlimited Money IOS safe to download and install?
-A: Yes, Truckers of Europe 3 Mod APK Unlimited Money IOS is safe to download and install. It does not contain any viruses, malware, or spyware that could harm your device or data. However, you should always download it from a trusted source and scan it with an antivirus program before installing it.
-
-Q: Is Truckers of Europe 3 Mod APK Unlimited Money IOS compatible with my device?
-A: Truckers of Europe 3 Mod APK Unlimited Money IOS is compatible with most IOS devices running IOS 9.0 or higher. However, some older devices may experience performance issues or crashes due to the game's demanding graphics and physics. You can check your device's compatibility on the download page or the game's official website.
-
-Q: How can I update Truckers of Europe 3 Mod APK Unlimited Money IOS?
-A: Truckers of Europe 3 Mod APK Unlimited Money IOS is updated regularly by the developers to fix bugs, improve features, and add new content. You can check for updates on the download page or the game's official website. You can also enable automatic updates in your device settings to get the latest version of the game as soon as it is available.
-
-Q: How can I contact the developers of Truckers of Europe 3 Mod APK Unlimited Money IOS?
-
-Q: How can I support the developers of Truckers of Europe 3 Mod APK Unlimited Money IOS?
-A: You can support the developers of Truckers of Europe 3 Mod APK Unlimited Money IOS by rating and reviewing the game on the download page or the game's official website. You can also share the game with your friends and family on social media or other platforms. You can also buy some in-game items or premium features to support their work and development.
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Fifa Mobile Ftbol Mod Apk Dinero Ilimitado.md b/spaces/Benson/text-generation/Examples/Descargar Fifa Mobile Ftbol Mod Apk Dinero Ilimitado.md
deleted file mode 100644
index b4ac8279f27033090686a7da39b64804ef2fe984..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Fifa Mobile Ftbol Mod Apk Dinero Ilimitado.md
+++ /dev/null
@@ -1,52 +0,0 @@
-
-How to Download FIFA Mobile Soccer Mod APK Unlimited Money
-
-If you are a fan of soccer games, you must have heard of FIFA Mobile Soccer, one of the most popular and realistic soccer games on mobile devices. Developed by EA Sports, FIFA Mobile Soccer lets you build your ultimate team of soccer stars, compete in various modes, and experience the thrill of the FIFA World Cup.
-
-However, as much as you enjoy playing FIFA Mobile Soccer, you may also be frustrated by the limited amount of money and coins you have in the game. Money and coins are essential resources that let you buy players, upgrade your team, unlock new features, and much more. Without enough money and coins, you may not be able to enjoy the game's full potential.
-
-That is why many players are looking for ways to download FIFA Mobile Soccer Mod APK Unlimited Money, a modified version of the game that gives you access to unlimited money and coins, as well as other features that enhance your gaming experience. In this article, we will show you how to download FIFA Mobile Soccer Mod APK Unlimited Money, what its features are, and how to install it on your device.
-
-Features of FIFA Mobile Soccer Mod APK
-
-FIFA Mobile Soccer Mod APK is not just the regular version of the game with unlimited money and coins. It also has many other features that make it more fun and exciting to play. Here are some of the features of FIFA Mobile Soccer Mod APK:
-
-All players and teams unlocked
-
-With FIFA Mobile Soccer Mod APK, you can unlock all the players and teams in the game, including those that are exclusive to certain events or seasons. You can choose from more than 15,000 authentic soccer stars from over 600 teams, including Chelsea, Paris SG, Real Madrid, Liverpool, Juventus, and more. You can also create your own custom team with your favorite players.
-
-Unlimited money and coins
-
-Menu mod with customization options
-
-FIFA Mobile Soccer Mod APK also comes with a mod menu that gives you more control over the game. You can access the mod menu by tapping a floating icon on the screen. From there, you can customize various aspects of the game, such as:
-
-The game's difficulty level
-The game speed
-The size of the players
-The camera angle
-The weather conditions
-The sound effects
-The graphics quality
-
-High-quality graphics and sound effects
-
-FIFA Mobile Soccer Mod APK also improves the game's graphics and sound effects, making it more realistic and immersive. You can enjoy the stunning visuals of the stadiums, the players, the ball, and the animations. You can also hear the cheers of the crowd, the announcers' commentary, and the sound of the ball hitting the net.
-
-How to download and install FIFA Mobile Soccer Mod APK
-
-Now that you know the features of FIFA Mobile Soccer Mod APK, you may be wondering how to download and install it on your device. Don't worry, it is very easy and simple. Just follow these steps:
-
-Step 1: Enable unknown sources on your device
-
-Before you can install FIFA Mobile Soccer Mod APK, you need to enable unknown sources on your device. This lets you install apps that are not from the official Google Play Store. To do this, go to your device settings, then Security, then Unknown sources. Toggle the option on to allow unknown sources.
-
-Step 2: Download the FIFA Mobile Soccer Mod APK file from a trusted source
-
-Step 3: Locate and install the APK file on your device
-
-After downloading the FIFA Mobile Soccer Mod APK file, you need to locate and install it on your device. To do this, open your file manager app and look for the folder where you saved the APK file. Tap the file to start the installation process. You may see a pop-up asking for your permission to install the app. Just tap Install and wait a few seconds until the installation completes.
-
-Step 4: Launch the game and enjoy unlimited money
-
-Congratulations! You have successfully installed FIFA Mobile Soccer Mod APK on your device. You can now launch the game and enjoy unlimited money and other features. You can access the mod menu by tapping a floating icon on the screen. From there, you can customize various aspects of the game as you wish.
-
-Conclusion
-
-FIFA Mobile Soccer is one of the best soccer games on mobile devices. It offers you a realistic and exciting soccer experience with high-quality graphics and sound effects. However, if you want to enjoy the game even more, you should download FIFA Mobile Soccer Mod APK Unlimited Money. This modified version of the game gives you access to unlimited money and coins, as well as other features that enhance your gaming experience. You can unlock all players and teams, customize the game settings, and have more fun playing FIFA Mobile Soccer.
-
-If you want to download FIFA Mobile Soccer Mod APK Unlimited Money, just follow the steps we have provided in this article. It is very easy and simple. Just make sure to download FIFA Mobile Soccer Mod APK from a trusted source such as [this link]. This will ensure that you get a safe and working version of FIFA Mobile Soccer Mod APK.
-
-So, what are you waiting for? Download FIFA Mobile Soccer Mod APK Unlimited Money now and enjoy playing soccer like never before!
-
-Frequently asked questions
-
-Is FIFA Mobile Soccer Mod APK safe to download and use?
-
-Do I need to root my device to use FIFA Mobile Soccer Mod APK?
-
-No, you do not need to root your device to use FIFA Mobile Soccer Mod APK. The modded version of the game works fine on both rooted and non-rooted devices.
-
-Can I play online with FIFA Mobile Soccer Mod APK?
-
-Yes, you can play online with FIFA Mobile Soccer Mod APK. However, you may run into issues or errors when playing online against players who are using the original version of the game. To avoid this, we suggest playing offline or with other players who are also using FIFA Mobile Soccer Mod APK.
-
-How can I update FIFA Mobile Soccer Mod APK?
-
-To update FIFA Mobile Soccer Mod APK, you need to download the latest version of the modded game from the same source where you downloaded the previous version. You can check for updates by visiting [this link] regularly. Once you have downloaded the latest version of FIFA Mobile Soccer Mod APK, you need to uninstall the previous version and install the new one by following the same steps we have provided in this article.
-
-Where can I find more modded games like FIFA Mobile Soccer?
-
-If you are looking for more modded games like FIFA Mobile Soccer, you can visit [this website]. This is a reliable and trustworthy source that offers a wide range of modded games for various genres and platforms. You can find action, adventure, racing, sports, simulation, strategy games, and more. You can also request modded games that are not available on the website.
-
-
\ No newline at end of file
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/cli/main.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/cli/main.py
deleted file mode 100644
index 7e061f5b39081f39e9f4fa2a0e88aec0e0a3da79..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/cli/main.py
+++ /dev/null
@@ -1,79 +0,0 @@
-"""Primary application entrypoint.
-"""
-import locale
-import logging
-import os
-import sys
-import warnings
-from typing import List, Optional
-
-from pip._internal.cli.autocompletion import autocomplete
-from pip._internal.cli.main_parser import parse_command
-from pip._internal.commands import create_command
-from pip._internal.exceptions import PipError
-from pip._internal.utils import deprecation
-
-logger = logging.getLogger(__name__)
-
-
-# Do not import and use main() directly! Using it directly is actively
-# discouraged by pip's maintainers. The name, location and behavior of
-# this function is subject to change, so calling it directly is not
-# portable across different pip versions.
-
-# In addition, running pip in-process is unsupported and unsafe. This is
-# elaborated in detail at
-# https://pip.pypa.io/en/stable/user_guide/#using-pip-from-your-program.
-# That document also provides suggestions that should work for nearly
-# all users that are considering importing and using main() directly.
-
-# However, we know that certain users will still want to invoke pip
-# in-process. If you understand and accept the implications of using pip
-# in an unsupported manner, the best approach is to use runpy to avoid
-# depending on the exact location of this entry point.
-
-# The following example shows how to use runpy to invoke pip in that
-# case:
-#
-# sys.argv = ["pip", your, args, here]
-# runpy.run_module("pip", run_name="__main__")
-#
-# Note that this will exit the process after running, unlike a direct
-# call to main. As it is not safe to do any processing after calling
-# main, this should not be an issue in practice.
-
-
-def main(args: Optional[List[str]] = None) -> int:
- if args is None:
- args = sys.argv[1:]
-
- # Suppress the pkg_resources deprecation warning
- # Note - we use a module of .*pkg_resources to cover
- # the normal case (pip._vendor.pkg_resources) and the
- # devendored case (a bare pkg_resources)
- warnings.filterwarnings(
- action="ignore", category=DeprecationWarning, module=".*pkg_resources"
- )
-
- # Configure our deprecation warnings to be sent through loggers
- deprecation.install_warning_logger()
-
- autocomplete()
-
- try:
- cmd_name, cmd_args = parse_command(args)
- except PipError as exc:
- sys.stderr.write(f"ERROR: {exc}")
- sys.stderr.write(os.linesep)
- sys.exit(1)
-
- # Needed for locale.getpreferredencoding(False) to work
- # in pip._internal.utils.encoding.auto_decode
- try:
- locale.setlocale(locale.LC_ALL, "")
- except locale.Error as e:
- # setlocale can apparently crash if locale are uninitialized
- logger.debug("Ignoring error %s when setting locale", e)
- command = create_command(cmd_name, isolated=("--isolated" in cmd_args))
-
- return command.main(cmd_args)
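
The comment block above recommends runpy over importing main() directly; as a concrete, deliberately minimal sketch, that pattern looks like this. Note that, as the comments warn, this terminates the process when pip finishes.

import runpy
import sys

sys.argv = ["pip", "install", "requests"]     # example arguments only
runpy.run_module("pip", run_name="__main__")  # exits the process on completion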
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/commands/uninstall.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/commands/uninstall.py
deleted file mode 100644
index f198fc313ff57929d95d36216e3e6ecec3877673..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/commands/uninstall.py
+++ /dev/null
@@ -1,113 +0,0 @@
-import logging
-from optparse import Values
-from typing import List
-
-from pip._vendor.packaging.utils import canonicalize_name
-
-from pip._internal.cli import cmdoptions
-from pip._internal.cli.base_command import Command
-from pip._internal.cli.req_command import SessionCommandMixin, warn_if_run_as_root
-from pip._internal.cli.status_codes import SUCCESS
-from pip._internal.exceptions import InstallationError
-from pip._internal.req import parse_requirements
-from pip._internal.req.constructors import (
- install_req_from_line,
- install_req_from_parsed_requirement,
-)
-from pip._internal.utils.misc import (
- check_externally_managed,
- protect_pip_from_modification_on_windows,
-)
-
-logger = logging.getLogger(__name__)
-
-
-class UninstallCommand(Command, SessionCommandMixin):
- """
- Uninstall packages.
-
- pip is able to uninstall most installed packages. Known exceptions are:
-
- - Pure distutils packages installed with ``python setup.py install``, which
- leave behind no metadata to determine what files were installed.
- - Script wrappers installed by ``python setup.py develop``.
- """
-
- usage = """
- %prog [options] <package> ...
- %prog [options] -r <requirements file> ..."""
-
- def add_options(self) -> None:
- self.cmd_opts.add_option(
- "-r",
- "--requirement",
- dest="requirements",
- action="append",
- default=[],
- metavar="file",
- help=(
- "Uninstall all the packages listed in the given requirements "
- "file. This option can be used multiple times."
- ),
- )
- self.cmd_opts.add_option(
- "-y",
- "--yes",
- dest="yes",
- action="store_true",
- help="Don't ask for confirmation of uninstall deletions.",
- )
- self.cmd_opts.add_option(cmdoptions.root_user_action())
- self.cmd_opts.add_option(cmdoptions.override_externally_managed())
- self.parser.insert_option_group(0, self.cmd_opts)
-
- def run(self, options: Values, args: List[str]) -> int:
- session = self.get_default_session(options)
-
- reqs_to_uninstall = {}
- for name in args:
- req = install_req_from_line(
- name,
- isolated=options.isolated_mode,
- )
- if req.name:
- reqs_to_uninstall[canonicalize_name(req.name)] = req
- else:
- logger.warning(
- "Invalid requirement: %r ignored -"
- " the uninstall command expects named"
- " requirements.",
- name,
- )
- for filename in options.requirements:
- for parsed_req in parse_requirements(
- filename, options=options, session=session
- ):
- req = install_req_from_parsed_requirement(
- parsed_req, isolated=options.isolated_mode
- )
- if req.name:
- reqs_to_uninstall[canonicalize_name(req.name)] = req
- if not reqs_to_uninstall:
- raise InstallationError(
- f"You must give at least one requirement to {self.name} (see "
- f'"pip help {self.name}")'
- )
-
- if not options.override_externally_managed:
- check_externally_managed()
-
- protect_pip_from_modification_on_windows(
- modifying_pip="pip" in reqs_to_uninstall
- )
-
- for req in reqs_to_uninstall.values():
- uninstall_pathset = req.uninstall(
- auto_confirm=options.yes,
- verbose=self.verbosity > 0,
- )
- if uninstall_pathset:
- uninstall_pathset.commit()
- if options.root_user_action == "warn":
- warn_if_run_as_root()
- return SUCCESS
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/more_itertools/recipes.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/more_itertools/recipes.py
deleted file mode 100644
index a2596423a4c3dbd15a357241477a0af0a531f9ec..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pkg_resources/_vendor/more_itertools/recipes.py
+++ /dev/null
@@ -1,698 +0,0 @@
-"""Imported from the recipes section of the itertools documentation.
-
-All functions taken from the recipes section of the itertools library docs
-[1]_.
-Some backward-compatible usability improvements have been made.
-
-.. [1] http://docs.python.org/library/itertools.html#recipes
-
-"""
-import warnings
-from collections import deque
-from itertools import (
- chain,
- combinations,
- count,
- cycle,
- groupby,
- islice,
- repeat,
- starmap,
- tee,
- zip_longest,
-)
-import operator
-from random import randrange, sample, choice
-
-__all__ = [
- 'all_equal',
- 'before_and_after',
- 'consume',
- 'convolve',
- 'dotproduct',
- 'first_true',
- 'flatten',
- 'grouper',
- 'iter_except',
- 'ncycles',
- 'nth',
- 'nth_combination',
- 'padnone',
- 'pad_none',
- 'pairwise',
- 'partition',
- 'powerset',
- 'prepend',
- 'quantify',
- 'random_combination_with_replacement',
- 'random_combination',
- 'random_permutation',
- 'random_product',
- 'repeatfunc',
- 'roundrobin',
- 'sliding_window',
- 'tabulate',
- 'tail',
- 'take',
- 'triplewise',
- 'unique_everseen',
- 'unique_justseen',
-]
-
-
-def take(n, iterable):
- """Return first *n* items of the iterable as a list.
-
- >>> take(3, range(10))
- [0, 1, 2]
-
- If there are fewer than *n* items in the iterable, all of them are
- returned.
-
- >>> take(10, range(3))
- [0, 1, 2]
-
- """
- return list(islice(iterable, n))
-
-
-def tabulate(function, start=0):
- """Return an iterator over the results of ``func(start)``,
- ``func(start + 1)``, ``func(start + 2)``...
-
- *func* should be a function that accepts one integer argument.
-
- If *start* is not specified it defaults to 0. It will be incremented each
- time the iterator is advanced.
-
- >>> square = lambda x: x ** 2
- >>> iterator = tabulate(square, -3)
- >>> take(4, iterator)
- [9, 4, 1, 0]
-
- """
- return map(function, count(start))
-
-
-def tail(n, iterable):
- """Return an iterator over the last *n* items of *iterable*.
-
- >>> t = tail(3, 'ABCDEFG')
- >>> list(t)
- ['E', 'F', 'G']
-
- """
- return iter(deque(iterable, maxlen=n))
-
-
-def consume(iterator, n=None):
- """Advance *iterable* by *n* steps. If *n* is ``None``, consume it
- entirely.
-
- Efficiently exhausts an iterator without returning values. Defaults to
- consuming the whole iterator, but an optional second argument may be
- provided to limit consumption.
-
- >>> i = (x for x in range(10))
- >>> next(i)
- 0
- >>> consume(i, 3)
- >>> next(i)
- 4
- >>> consume(i)
- >>> next(i)
- Traceback (most recent call last):
- File "", line 1, in
- StopIteration
-
- If the iterator has fewer items remaining than the provided limit, the
- whole iterator will be consumed.
-
- >>> i = (x for x in range(3))
- >>> consume(i, 5)
- >>> next(i)
- Traceback (most recent call last):
- File "", line 1, in
- StopIteration
-
- """
- # Use functions that consume iterators at C speed.
- if n is None:
- # feed the entire iterator into a zero-length deque
- deque(iterator, maxlen=0)
- else:
- # advance to the empty slice starting at position n
- next(islice(iterator, n, n), None)
-
-
-def nth(iterable, n, default=None):
- """Returns the nth item or a default value.
-
- >>> l = range(10)
- >>> nth(l, 3)
- 3
- >>> nth(l, 20, "zebra")
- 'zebra'
-
- """
- return next(islice(iterable, n, None), default)
-
-
-def all_equal(iterable):
- """
- Returns ``True`` if all the elements are equal to each other.
-
- >>> all_equal('aaaa')
- True
- >>> all_equal('aaab')
- False
-
- """
- g = groupby(iterable)
- return next(g, True) and not next(g, False)
-
-
-def quantify(iterable, pred=bool):
- """Return the how many times the predicate is true.
-
- >>> quantify([True, False, True])
- 2
-
- """
- return sum(map(pred, iterable))
-
-
-def pad_none(iterable):
- """Returns the sequence of elements and then returns ``None`` indefinitely.
-
- >>> take(5, pad_none(range(3)))
- [0, 1, 2, None, None]
-
- Useful for emulating the behavior of the built-in :func:`map` function.
-
- See also :func:`padded`.
-
- """
- return chain(iterable, repeat(None))
-
-
-padnone = pad_none
-
-
-def ncycles(iterable, n):
- """Returns the sequence elements *n* times
-
- >>> list(ncycles(["a", "b"], 3))
- ['a', 'b', 'a', 'b', 'a', 'b']
-
- """
- return chain.from_iterable(repeat(tuple(iterable), n))
-
-
-def dotproduct(vec1, vec2):
- """Returns the dot product of the two iterables.
-
- >>> dotproduct([10, 10], [20, 20])
- 400
-
- """
- return sum(map(operator.mul, vec1, vec2))
-
-
-def flatten(listOfLists):
- """Return an iterator flattening one level of nesting in a list of lists.
-
- >>> list(flatten([[0, 1], [2, 3]]))
- [0, 1, 2, 3]
-
- See also :func:`collapse`, which can flatten multiple levels of nesting.
-
- """
- return chain.from_iterable(listOfLists)
-
-
-def repeatfunc(func, times=None, *args):
- """Call *func* with *args* repeatedly, returning an iterable over the
- results.
-
- If *times* is specified, the iterable will terminate after that many
- repetitions:
-
- >>> from operator import add
- >>> times = 4
- >>> args = 3, 5
- >>> list(repeatfunc(add, times, *args))
- [8, 8, 8, 8]
-
- If *times* is ``None`` the iterable will not terminate:
-
- >>> from random import randrange
- >>> times = None
- >>> args = 1, 11
- >>> take(6, repeatfunc(randrange, times, *args)) # doctest:+SKIP
- [2, 4, 8, 1, 8, 4]
-
- """
- if times is None:
- return starmap(func, repeat(args))
- return starmap(func, repeat(args, times))
-
-
-def _pairwise(iterable):
- """Returns an iterator of paired items, overlapping, from the original
-
- >>> take(4, pairwise(count()))
- [(0, 1), (1, 2), (2, 3), (3, 4)]
-
- On Python 3.10 and above, this is an alias for :func:`itertools.pairwise`.
-
- """
- a, b = tee(iterable)
- next(b, None)
- yield from zip(a, b)
-
-
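-# Prefer the C implementation added in Python 3.10; fall back to the
-# pure-Python recipe above on older interpreters.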
-try:
- from itertools import pairwise as itertools_pairwise
-except ImportError:
- pairwise = _pairwise
-else:
-
- def pairwise(iterable):
- yield from itertools_pairwise(iterable)
-
- pairwise.__doc__ = _pairwise.__doc__
-
-
-def grouper(iterable, n, fillvalue=None):
- """Collect data into fixed-length chunks or blocks.
-
- >>> list(grouper('ABCDEFG', 3, 'x'))
- [('A', 'B', 'C'), ('D', 'E', 'F'), ('G', 'x', 'x')]
-
- """
- if isinstance(iterable, int):
- warnings.warn(
- "grouper expects iterable as first parameter", DeprecationWarning
- )
- n, iterable = iterable, n
- args = [iter(iterable)] * n
- return zip_longest(fillvalue=fillvalue, *args)
-
-
-def roundrobin(*iterables):
- """Yields an item from each iterable, alternating between them.
-
- >>> list(roundrobin('ABC', 'D', 'EF'))
- ['A', 'D', 'E', 'B', 'F', 'C']
-
- This function produces the same output as :func:`interleave_longest`, but
- may perform better for some inputs (in particular when the number of
- iterables is small).
-
- """
- # Recipe credited to George Sakkis
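- # cycle() rotates through one bound __next__ per iterable; each time
- # one is exhausted, rebuild the cycle without it via islice().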
- pending = len(iterables)
- nexts = cycle(iter(it).__next__ for it in iterables)
- while pending:
- try:
- for next in nexts:
- yield next()
- except StopIteration:
- pending -= 1
- nexts = cycle(islice(nexts, pending))
-
-
-def partition(pred, iterable):
- """
- Returns a 2-tuple of iterables derived from the input iterable.
- The first yields the items that have ``pred(item) == False``.
- The second yields the items that have ``pred(item) == True``.
-
- >>> is_odd = lambda x: x % 2 != 0
- >>> iterable = range(10)
- >>> even_items, odd_items = partition(is_odd, iterable)
- >>> list(even_items), list(odd_items)
- ([0, 2, 4, 6, 8], [1, 3, 5, 7, 9])
-
- If *pred* is None, :func:`bool` is used.
-
- >>> iterable = [0, 1, False, True, '', ' ']
- >>> false_items, true_items = partition(None, iterable)
- >>> list(false_items), list(true_items)
- ([0, False, ''], [1, True, ' '])
-
- """
- if pred is None:
- pred = bool
-
- evaluations = ((pred(x), x) for x in iterable)
- t1, t2 = tee(evaluations)
- return (
- (x for (cond, x) in t1 if not cond),
- (x for (cond, x) in t2 if cond),
- )
-
-
-def powerset(iterable):
- """Yields all possible subsets of the iterable.
-
- >>> list(powerset([1, 2, 3]))
- [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]
-
- :func:`powerset` will operate on iterables that aren't :class:`set`
- instances, so repeated elements in the input will produce repeated elements
- in the output. Use :func:`unique_everseen` on the input to avoid generating
- duplicates:
-
- >>> seq = [1, 1, 0]
- >>> list(powerset(seq))
- [(), (1,), (1,), (0,), (1, 1), (1, 0), (1, 0), (1, 1, 0)]
- >>> from more_itertools import unique_everseen
- >>> list(powerset(unique_everseen(seq)))
- [(), (1,), (0,), (1, 0)]
-
- """
- s = list(iterable)
- return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))
-
-
-def unique_everseen(iterable, key=None):
- """
- Yield unique elements, preserving order.
-
- >>> list(unique_everseen('AAAABBBCCDAABBB'))
- ['A', 'B', 'C', 'D']
- >>> list(unique_everseen('ABBCcAD', str.lower))
- ['A', 'B', 'C', 'D']
-
- Sequences with a mix of hashable and unhashable items can be used.
- The function will be slower (i.e., `O(n^2)`) for unhashable items.
-
- Remember that ``list`` objects are unhashable - you can use the *key*
- parameter to transform the list to a tuple (which is hashable) to
- avoid a slowdown.
-
- >>> iterable = ([1, 2], [2, 3], [1, 2])
- >>> list(unique_everseen(iterable)) # Slow
- [[1, 2], [2, 3]]
- >>> list(unique_everseen(iterable, key=tuple)) # Faster
- [[1, 2], [2, 3]]
-
- Similarly, you may want to convert unhashable ``set`` objects with
- ``key=frozenset``. For ``dict`` objects,
- ``key=lambda x: frozenset(x.items())`` can be used.
-
- """
- seenset = set()
- seenset_add = seenset.add
- seenlist = []
- seenlist_add = seenlist.append
- use_key = key is not None
-
- for element in iterable:
- k = key(element) if use_key else element
- try:
- if k not in seenset:
- seenset_add(k)
- yield element
- except TypeError:
- if k not in seenlist:
- seenlist_add(k)
- yield element
-
-
-def unique_justseen(iterable, key=None):
- """Yields elements in order, ignoring serial duplicates
-
- >>> list(unique_justseen('AAAABBBCCDAABBB'))
- ['A', 'B', 'C', 'D', 'A', 'B']
- >>> list(unique_justseen('ABBCcAD', str.lower))
- ['A', 'B', 'C', 'A', 'D']
-
- """
- return map(next, map(operator.itemgetter(1), groupby(iterable, key)))
-
-
-def iter_except(func, exception, first=None):
- """Yields results from a function repeatedly until an exception is raised.
-
- Converts a call-until-exception interface to an iterator interface.
- Like ``iter(func, sentinel)``, but uses an exception instead of a sentinel
- to end the loop.
-
- >>> l = [0, 1, 2]
- >>> list(iter_except(l.pop, IndexError))
- [2, 1, 0]
-
- Multiple exceptions can be specified as a stopping condition:
-
- >>> l = [1, 2, 3, '...', 4, 5, 6]
- >>> list(iter_except(lambda: 1 + l.pop(), (IndexError, TypeError)))
- [7, 6, 5]
- >>> list(iter_except(lambda: 1 + l.pop(), (IndexError, TypeError)))
- [4, 3, 2]
- >>> list(iter_except(lambda: 1 + l.pop(), (IndexError, TypeError)))
- []
-
- """
- try:
- if first is not None:
- yield first()
- while 1:
- yield func()
- except exception:
- pass
-
-
-def first_true(iterable, default=None, pred=None):
- """
- Returns the first true value in the iterable.
-
- If no true value is found, returns *default*.
-
- If *pred* is not None, returns the first item for which
- ``pred(item) == True``.
-
- >>> first_true(range(10))
- 1
- >>> first_true(range(10), pred=lambda x: x > 5)
- 6
- >>> first_true(range(10), default='missing', pred=lambda x: x > 9)
- 'missing'
-
- """
- return next(filter(pred, iterable), default)
-
-
-def random_product(*args, repeat=1):
- """Draw an item at random from each of the input iterables.
-
- >>> random_product('abc', range(4), 'XYZ') # doctest:+SKIP
- ('c', 3, 'Z')
-
- If *repeat* is provided as a keyword argument, that many items will be
- drawn from each iterable.
-
- >>> random_product('abcd', range(4), repeat=2) # doctest:+SKIP
- ('a', 2, 'd', 3)
-
- This is equivalent to taking a random selection from
- ``itertools.product(*args, **kwargs)``.
-
- """
- pools = [tuple(pool) for pool in args] * repeat
- return tuple(choice(pool) for pool in pools)
-
-
-def random_permutation(iterable, r=None):
- """Return a random *r* length permutation of the elements in *iterable*.
-
- If *r* is not specified or is ``None``, then *r* defaults to the length of
- *iterable*.
-
- >>> random_permutation(range(5)) # doctest:+SKIP
- (3, 4, 0, 1, 2)
-
- This is equivalent to taking a random selection from
- ``itertools.permutations(iterable, r)``.
-
- """
- pool = tuple(iterable)
- r = len(pool) if r is None else r
- return tuple(sample(pool, r))
-
-
-def random_combination(iterable, r):
- """Return a random *r* length subsequence of the elements in *iterable*.
-
- >>> random_combination(range(5), 3) # doctest:+SKIP
- (2, 3, 4)
-
- This is equivalent to taking a random selection from
- ``itertools.combinations(iterable, r)``.
-
- """
- pool = tuple(iterable)
- n = len(pool)
- indices = sorted(sample(range(n), r))
- return tuple(pool[i] for i in indices)
-
-
-def random_combination_with_replacement(iterable, r):
- """Return a random *r* length subsequence of elements in *iterable*,
- allowing individual elements to be repeated.
-
- >>> random_combination_with_replacement(range(3), 5) # doctest:+SKIP
- (0, 0, 1, 2, 2)
-
- This is equivalent to taking a random selection from
- ``itertools.combinations_with_replacement(iterable, r)``.
-
- """
- pool = tuple(iterable)
- n = len(pool)
- indices = sorted(randrange(n) for i in range(r))
- return tuple(pool[i] for i in indices)
-
-
-def nth_combination(iterable, r, index):
- """Equivalent to ``list(combinations(iterable, r))[index]``.
-
- The subsequences of *iterable* that are of length *r* can be ordered
- lexicographically. :func:`nth_combination` computes the subsequence at
- sort position *index* directly, without computing the previous
- subsequences.
-
- >>> nth_combination(range(5), 3, 5)
- (0, 3, 4)
-
- ``ValueError`` will be raised if *r* is negative or greater than the length
- of *iterable*.
- ``IndexError`` will be raised if the given *index* is invalid.
- """
- pool = tuple(iterable)
- n = len(pool)
- if (r < 0) or (r > n):
- raise ValueError
-
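- # Compute c = C(n, r) iteratively, exploiting the symmetry
- # C(n, r) == C(n, n - r) to shorten the loop.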
- c = 1
- k = min(r, n - r)
- for i in range(1, k + 1):
- c = c * (n - k + i) // i
-
- if index < 0:
- index += c
-
- if (index < 0) or (index >= c):
- raise IndexError
-
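- # Walk the combinations in lexicographic order without enumerating
- # them, subtracting whole block sizes from *index* at each step.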
- result = []
- while r:
- c, n, r = c * r // n, n - 1, r - 1
- while index >= c:
- index -= c
- c, n = c * (n - r) // n, n - 1
- result.append(pool[-1 - n])
-
- return tuple(result)
-
-
-def prepend(value, iterator):
- """Yield *value*, followed by the elements in *iterator*.
-
- >>> value = '0'
- >>> iterator = ['1', '2', '3']
- >>> list(prepend(value, iterator))
- ['0', '1', '2', '3']
-
- To prepend multiple values, see :func:`itertools.chain`
- or :func:`value_chain`.
-
- """
- return chain([value], iterator)
-
-
-def convolve(signal, kernel):
- """Convolve the iterable *signal* with the iterable *kernel*.
-
- >>> signal = (1, 2, 3, 4, 5)
- >>> kernel = [3, 2, 1]
- >>> list(convolve(signal, kernel))
- [3, 8, 14, 20, 26, 14, 5]
-
- Note: the input arguments are not interchangeable, as the *kernel*
- is immediately consumed and stored.
-
- """
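- # Reverse the kernel once up front and zero-pad the tail so each
- # step reduces to a dot product with the current window.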
- kernel = tuple(kernel)[::-1]
- n = len(kernel)
- window = deque([0], maxlen=n) * n
- for x in chain(signal, repeat(0, n - 1)):
- window.append(x)
- yield sum(map(operator.mul, kernel, window))
-
-
-def before_and_after(predicate, it):
- """A variant of :func:`takewhile` that allows complete access to the
- remainder of the iterator.
-
- >>> it = iter('ABCdEfGhI')
- >>> all_upper, remainder = before_and_after(str.isupper, it)
- >>> ''.join(all_upper)
- 'ABC'
- >>> ''.join(remainder) # takewhile() would lose the 'd'
- 'dEfGhI'
-
- Note that the first iterator must be fully consumed before the second
- iterator can generate valid results.
- """
- it = iter(it)
- transition = []
-
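- # The first non-matching element is stashed in *transition* so the
- # remainder iterator can replay it before draining *it*.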
- def true_iterator():
- for elem in it:
- if predicate(elem):
- yield elem
- else:
- transition.append(elem)
- return
-
- def remainder_iterator():
- yield from transition
- yield from it
-
- return true_iterator(), remainder_iterator()
-
-
-def triplewise(iterable):
- """Return overlapping triplets from *iterable*.
-
- >>> list(triplewise('ABCDE'))
- [('A', 'B', 'C'), ('B', 'C', 'D'), ('C', 'D', 'E')]
-
- """
- for (a, _), (b, c) in pairwise(pairwise(iterable)):
- yield a, b, c
-
-
-def sliding_window(iterable, n):
- """Return a sliding window of width *n* over *iterable*.
-
- >>> list(sliding_window(range(6), 4))
- [(0, 1, 2, 3), (1, 2, 3, 4), (2, 3, 4, 5)]
-
- If *iterable* has fewer than *n* items, then nothing is yielded:
-
- >>> list(sliding_window(range(3), 4))
- []
-
- For a variant with more features, see :func:`windowed`.
- """
- it = iter(iterable)
- window = deque(islice(it, n), maxlen=n)
- if len(window) == n:
- yield tuple(window)
- for x in it:
- window.append(x)
- yield tuple(window)
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/easy_install.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/easy_install.py
deleted file mode 100644
index 444d3b33110b65c14ff5a043d0ca4137e92b30eb..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/easy_install.py
+++ /dev/null
@@ -1,2312 +0,0 @@
-"""
-Easy Install
-------------
-
-A tool for doing automatic download/extract/build of distutils-based Python
-packages. For detailed documentation, see the accompanying EasyInstall.txt
-file, or visit the `EasyInstall home page`__.
-
-__ https://setuptools.pypa.io/en/latest/deprecated/easy_install.html
-
-"""
-
-from glob import glob
-from distutils.util import get_platform
-from distutils.util import convert_path, subst_vars
-from distutils.errors import (
- DistutilsArgError, DistutilsOptionError,
- DistutilsError, DistutilsPlatformError,
-)
-from distutils import log, dir_util
-from distutils.command.build_scripts import first_line_re
-from distutils.spawn import find_executable
-from distutils.command import install
-import sys
-import os
-import zipimport
-import shutil
-import tempfile
-import zipfile
-import re
-import stat
-import random
-import textwrap
-import warnings
-import site
-import struct
-import contextlib
-import subprocess
-import shlex
-import io
-import configparser
-import sysconfig
-
-
-from sysconfig import get_path
-
-from setuptools import SetuptoolsDeprecationWarning
-
-from setuptools import Command
-from setuptools.sandbox import run_setup
-from setuptools.command import setopt
-from setuptools.archive_util import unpack_archive
-from setuptools.package_index import (
- PackageIndex, parse_requirement_arg, URL_SCHEME,
-)
-from setuptools.command import bdist_egg, egg_info
-from setuptools.wheel import Wheel
-from pkg_resources import (
- normalize_path, resource_string,
- get_distribution, find_distributions, Environment, Requirement,
- Distribution, PathMetadata, EggMetadata, WorkingSet, DistributionNotFound,
- VersionConflict, DEVELOP_DIST,
-)
-import pkg_resources
-from .._path import ensure_directory
-from ..extern.jaraco.text import yield_lines
-
-
-# Turn on PEP440Warnings
-warnings.filterwarnings("default", category=pkg_resources.PEP440Warning)
-
-__all__ = [
- 'easy_install', 'PthDistributions', 'extract_wininst_cfg',
- 'get_exe_prefixes',
-]
-
-
-def is_64bit():
- return struct.calcsize("P") == 8
-
-
-def _to_bytes(s):
- return s.encode('utf8')
-
-
-def isascii(s):
- try:
- s.encode('ascii')
- return True
- except UnicodeError:
- return False
-
-
-def _one_liner(text):
- return textwrap.dedent(text).strip().replace('\n', '; ')
-
-
-class easy_install(Command):
- """Manage a download/build/install process"""
- description = "Find/get/install Python packages"
- command_consumes_arguments = True
-
- user_options = [
- ('prefix=', None, "installation prefix"),
- ("zip-ok", "z", "install package as a zipfile"),
- ("multi-version", "m", "make apps have to require() a version"),
- ("upgrade", "U", "force upgrade (searches PyPI for latest versions)"),
- ("install-dir=", "d", "install package to DIR"),
- ("script-dir=", "s", "install scripts to DIR"),
- ("exclude-scripts", "x", "Don't install scripts"),
- ("always-copy", "a", "Copy all needed packages to install dir"),
- ("index-url=", "i", "base URL of Python Package Index"),
- ("find-links=", "f", "additional URL(s) to search for packages"),
- ("build-directory=", "b",
- "download/extract/build in DIR; keep the results"),
- ('optimize=', 'O',
- "also compile with optimization: -O1 for \"python -O\", "
- "-O2 for \"python -OO\", and -O0 to disable [default: -O0]"),
- ('record=', None,
- "filename in which to record list of installed files"),
- ('always-unzip', 'Z', "don't install as a zipfile, no matter what"),
- ('site-dirs=', 'S', "list of directories where .pth files work"),
- ('editable', 'e', "Install specified packages in editable form"),
- ('no-deps', 'N', "don't install dependencies"),
- ('allow-hosts=', 'H', "pattern(s) that hostnames must match"),
- ('local-snapshots-ok', 'l',
- "allow building eggs from local checkouts"),
- ('version', None, "print version information and exit"),
- ('no-find-links', None,
- "Don't load find-links defined in packages being installed"),
- ('user', None, "install in user site-package '%s'" % site.USER_SITE)
- ]
- boolean_options = [
- 'zip-ok', 'multi-version', 'exclude-scripts', 'upgrade', 'always-copy',
- 'editable',
- 'no-deps', 'local-snapshots-ok', 'version',
- 'user'
- ]
-
- negative_opt = {'always-unzip': 'zip-ok'}
- create_index = PackageIndex
-
- def initialize_options(self):
- warnings.warn(
- "easy_install command is deprecated. "
- "Use build and pip and other standards-based tools.",
- EasyInstallDeprecationWarning,
- )
-
- # the --user option seems to be an opt-in one,
- # so the default should be False.
- self.user = 0
- self.zip_ok = self.local_snapshots_ok = None
- self.install_dir = self.script_dir = self.exclude_scripts = None
- self.index_url = None
- self.find_links = None
- self.build_directory = None
- self.args = None
- self.optimize = self.record = None
- self.upgrade = self.always_copy = self.multi_version = None
- self.editable = self.no_deps = self.allow_hosts = None
- self.root = self.prefix = self.no_report = None
- self.version = None
- self.install_purelib = None # for pure module distributions
- self.install_platlib = None # non-pure (dists w/ extensions)
- self.install_headers = None # for C/C++ headers
- self.install_lib = None # set to either purelib or platlib
- self.install_scripts = None
- self.install_data = None
- self.install_base = None
- self.install_platbase = None
- self.install_userbase = site.USER_BASE
- self.install_usersite = site.USER_SITE
- self.no_find_links = None
-
- # Options not specifiable via command line
- self.package_index = None
- self.pth_file = self.always_copy_from = None
- self.site_dirs = None
- self.installed_projects = {}
- # Always read easy_install options, even if we are subclassed, or have
- # an independent instance created. This ensures that defaults will
- # always come from the standard configuration file(s)' "easy_install"
- # section, even if this is a "develop" or "install" command, or some
- # other embedding.
- self._dry_run = None
- self.verbose = self.distribution.verbose
- self.distribution._set_command_options(
- self, self.distribution.get_option_dict('easy_install')
- )
-
- def delete_blockers(self, blockers):
- extant_blockers = (
- filename for filename in blockers
- if os.path.exists(filename) or os.path.islink(filename)
- )
- list(map(self._delete_path, extant_blockers))
-
- def _delete_path(self, path):
- log.info("Deleting %s", path)
- if self.dry_run:
- return
-
- is_tree = os.path.isdir(path) and not os.path.islink(path)
- remover = rmtree if is_tree else os.unlink
- remover(path)
-
- @staticmethod
- def _render_version():
- """
- Render the Setuptools version and installation details, then exit.
- """
- ver = '{}.{}'.format(*sys.version_info)
- dist = get_distribution('setuptools')
- tmpl = 'setuptools {dist.version} from {dist.location} (Python {ver})'
- print(tmpl.format(**locals()))
- raise SystemExit()
-
- def finalize_options(self): # noqa: C901 # is too complex (25) # FIXME
- self.version and self._render_version()
-
- py_version = sys.version.split()[0]
-
- self.config_vars = dict(sysconfig.get_config_vars())
-
- self.config_vars.update({
- 'dist_name': self.distribution.get_name(),
- 'dist_version': self.distribution.get_version(),
- 'dist_fullname': self.distribution.get_fullname(),
- 'py_version': py_version,
- 'py_version_short': f'{sys.version_info.major}.{sys.version_info.minor}',
- 'py_version_nodot': f'{sys.version_info.major}{sys.version_info.minor}',
- 'sys_prefix': self.config_vars['prefix'],
- 'sys_exec_prefix': self.config_vars['exec_prefix'],
- # Only python 3.2+ has abiflags
- 'abiflags': getattr(sys, 'abiflags', ''),
- 'platlibdir': getattr(sys, 'platlibdir', 'lib'),
- })
- with contextlib.suppress(AttributeError):
- # only for distutils outside stdlib
- self.config_vars.update({
- 'implementation_lower': install._get_implementation().lower(),
- 'implementation': install._get_implementation(),
- })
-
- # pypa/distutils#113 Python 3.9 compat
- self.config_vars.setdefault(
- 'py_version_nodot_plat',
- getattr(sys, 'windir', '').replace('.', ''),
- )
-
- self.config_vars['userbase'] = self.install_userbase
- self.config_vars['usersite'] = self.install_usersite
- if self.user and not site.ENABLE_USER_SITE:
- log.warn("WARNING: The user site-packages directory is disabled.")
-
- self._fix_install_dir_for_user_site()
-
- self.expand_basedirs()
- self.expand_dirs()
-
- self._expand(
- 'install_dir', 'script_dir', 'build_directory',
- 'site_dirs',
- )
- # If a non-default installation directory was specified, default the
- # script directory to match it.
- if self.script_dir is None:
- self.script_dir = self.install_dir
-
- if self.no_find_links is None:
- self.no_find_links = False
-
- # Let install_dir get set by install_lib command, which in turn
- # gets its info from the install command, and takes into account
- # --prefix and --home and all that other crud.
- self.set_undefined_options(
- 'install_lib', ('install_dir', 'install_dir')
- )
- # Likewise, set default script_dir from 'install_scripts.install_dir'
- self.set_undefined_options(
- 'install_scripts', ('install_dir', 'script_dir')
- )
-
- if self.user and self.install_purelib:
- self.install_dir = self.install_purelib
- self.script_dir = self.install_scripts
- # default --record from the install command
- self.set_undefined_options('install', ('record', 'record'))
- self.all_site_dirs = get_site_dirs()
- self.all_site_dirs.extend(self._process_site_dirs(self.site_dirs))
-
- if not self.editable:
- self.check_site_dir()
- default_index = os.getenv("__EASYINSTALL_INDEX", "https://pypi.org/simple/")
- # ^ Private API for testing purposes only
- self.index_url = self.index_url or default_index
- self.shadow_path = self.all_site_dirs[:]
- for path_item in self.install_dir, normalize_path(self.script_dir):
- if path_item not in self.shadow_path:
- self.shadow_path.insert(0, path_item)
-
- if self.allow_hosts is not None:
- hosts = [s.strip() for s in self.allow_hosts.split(',')]
- else:
- hosts = ['*']
- if self.package_index is None:
- self.package_index = self.create_index(
- self.index_url, search_path=self.shadow_path, hosts=hosts,
- )
- self.local_index = Environment(self.shadow_path + sys.path)
-
- if self.find_links is not None:
- if isinstance(self.find_links, str):
- self.find_links = self.find_links.split()
- else:
- self.find_links = []
- if self.local_snapshots_ok:
- self.package_index.scan_egg_links(self.shadow_path + sys.path)
- if not self.no_find_links:
- self.package_index.add_find_links(self.find_links)
- self.set_undefined_options('install_lib', ('optimize', 'optimize'))
- self.optimize = self._validate_optimize(self.optimize)
-
- if self.editable and not self.build_directory:
- raise DistutilsArgError(
- "Must specify a build directory (-b) when using --editable"
- )
- if not self.args:
- raise DistutilsArgError(
- "No urls, filenames, or requirements specified (see --help)")
-
- self.outputs = []
-
- @staticmethod
- def _process_site_dirs(site_dirs):
- if site_dirs is None:
- return
-
- normpath = map(normalize_path, sys.path)
- site_dirs = [
- os.path.expanduser(s.strip()) for s in
- site_dirs.split(',')
- ]
- for d in site_dirs:
- if not os.path.isdir(d):
- log.warn("%s (in --site-dirs) does not exist", d)
- elif normalize_path(d) not in normpath:
- raise DistutilsOptionError(
- d + " (in --site-dirs) is not on sys.path"
- )
- else:
- yield normalize_path(d)
-
- @staticmethod
- def _validate_optimize(value):
- try:
- value = int(value)
- if value not in range(3):
- raise ValueError
- except ValueError as e:
- raise DistutilsOptionError(
- "--optimize must be 0, 1, or 2"
- ) from e
-
- return value
-
- def _fix_install_dir_for_user_site(self):
- """
- Fix the install_dir if "--user" was used.
- """
- if not self.user:
- return
-
- self.create_home_path()
- if self.install_userbase is None:
- msg = "User base directory is not specified"
- raise DistutilsPlatformError(msg)
- self.install_base = self.install_platbase = self.install_userbase
- scheme_name = f'{os.name}_user'
- self.select_scheme(scheme_name)
-
- def _expand_attrs(self, attrs):
- for attr in attrs:
- val = getattr(self, attr)
- if val is not None:
- if os.name == 'posix' or os.name == 'nt':
- val = os.path.expanduser(val)
- val = subst_vars(val, self.config_vars)
- setattr(self, attr, val)
-
- def expand_basedirs(self):
- """Calls `os.path.expanduser` on install_base, install_platbase and
- root."""
- self._expand_attrs(['install_base', 'install_platbase', 'root'])
-
- def expand_dirs(self):
- """Calls `os.path.expanduser` on install dirs."""
- dirs = [
- 'install_purelib',
- 'install_platlib',
- 'install_lib',
- 'install_headers',
- 'install_scripts',
- 'install_data',
- ]
- self._expand_attrs(dirs)
-
- def run(self, show_deprecation=True):
- if show_deprecation:
- self.announce(
- "WARNING: The easy_install command is deprecated "
- "and will be removed in a future version.",
- log.WARN,
- )
- if self.verbose != self.distribution.verbose:
- log.set_verbosity(self.verbose)
- try:
- for spec in self.args:
- self.easy_install(spec, not self.no_deps)
- if self.record:
- outputs = self.outputs
- if self.root: # strip any package prefix
- root_len = len(self.root)
- for counter in range(len(outputs)):
- outputs[counter] = outputs[counter][root_len:]
- from distutils import file_util
-
- self.execute(
- file_util.write_file, (self.record, outputs),
- "writing list of installed files to '%s'" %
- self.record
- )
- self.warn_deprecated_options()
- finally:
- log.set_verbosity(self.distribution.verbose)
-
- def pseudo_tempname(self):
- """Return a pseudo-tempname base in the install directory.
- This code is intentionally naive; if a malicious party can write to
- the target directory you're already in deep doodoo.
- """
- try:
- pid = os.getpid()
- except Exception:
- pid = random.randint(0, sys.maxsize)
- return os.path.join(self.install_dir, "test-easy-install-%s" % pid)
-
- def warn_deprecated_options(self):
- pass
-
- def check_site_dir(self): # noqa: C901 # is too complex (12) # FIXME
- """Verify that self.install_dir is .pth-capable dir, if needed"""
-
- instdir = normalize_path(self.install_dir)
- pth_file = os.path.join(instdir, 'easy-install.pth')
-
- if not os.path.exists(instdir):
- try:
- os.makedirs(instdir)
- except (OSError, IOError):
- self.cant_write_to_target()
-
- # Is it a configured, PYTHONPATH, implicit, or explicit site dir?
- is_site_dir = instdir in self.all_site_dirs
-
- if not is_site_dir and not self.multi_version:
- # No? Then directly test whether it does .pth file processing
- is_site_dir = self.check_pth_processing()
- else:
- # make sure we can write to target dir
- testfile = self.pseudo_tempname() + '.write-test'
- test_exists = os.path.exists(testfile)
- try:
- if test_exists:
- os.unlink(testfile)
- open(testfile, 'w').close()
- os.unlink(testfile)
- except (OSError, IOError):
- self.cant_write_to_target()
-
- if not is_site_dir and not self.multi_version:
- # Can't install non-multi to non-site dir with easy_install
- pythonpath = os.environ.get('PYTHONPATH', '')
- log.warn(self.__no_default_msg, self.install_dir, pythonpath)
-
- if is_site_dir:
- if self.pth_file is None:
- self.pth_file = PthDistributions(pth_file, self.all_site_dirs)
- else:
- self.pth_file = None
-
- if self.multi_version and not os.path.exists(pth_file):
- self.pth_file = None # don't create a .pth file
- self.install_dir = instdir
-
- __cant_write_msg = textwrap.dedent("""
- can't create or remove files in install directory
-
- The following error occurred while trying to add or remove files in the
- installation directory:
-
- %s
-
- The installation directory you specified (via --install-dir, --prefix, or
- the distutils default setting) was:
-
- %s
- """).lstrip() # noqa
-
- __not_exists_id = textwrap.dedent("""
- This directory does not currently exist. Please create it and try again, or
- choose a different installation directory (using the -d or --install-dir
- option).
- """).lstrip() # noqa
-
- __access_msg = textwrap.dedent("""
- Perhaps your account does not have write access to this directory? If the
- installation directory is a system-owned directory, you may need to sign in
- as the administrator or "root" account. If you do not have administrative
- access to this machine, you may wish to choose a different installation
- directory, preferably one that is listed in your PYTHONPATH environment
- variable.
-
- For information on other options, you may wish to consult the
- documentation at:
-
- https://setuptools.pypa.io/en/latest/deprecated/easy_install.html
-
- Please make the appropriate changes for your system and try again.
- """).lstrip() # noqa
-
- def cant_write_to_target(self):
- msg = self.__cant_write_msg % (sys.exc_info()[1], self.install_dir,)
-
- if not os.path.exists(self.install_dir):
- msg += '\n' + self.__not_exists_id
- else:
- msg += '\n' + self.__access_msg
- raise DistutilsError(msg)
-
- def check_pth_processing(self):
- """Empirically verify whether .pth files are supported in inst. dir"""
- instdir = self.install_dir
- log.info("Checking .pth file support in %s", instdir)
- pth_file = self.pseudo_tempname() + ".pth"
- ok_file = pth_file + '.ok'
- ok_exists = os.path.exists(ok_file)
- tmpl = _one_liner("""
- import os
- f = open({ok_file!r}, 'w')
- f.write('OK')
- f.close()
- """) + '\n'
- try:
- if ok_exists:
- os.unlink(ok_file)
- dirname = os.path.dirname(ok_file)
- os.makedirs(dirname, exist_ok=True)
- f = open(pth_file, 'w')
- except (OSError, IOError):
- self.cant_write_to_target()
- else:
- try:
- f.write(tmpl.format(**locals()))
- f.close()
- f = None
- executable = sys.executable
- if os.name == 'nt':
- dirname, basename = os.path.split(executable)
- alt = os.path.join(dirname, 'pythonw.exe')
- use_alt = (
- basename.lower() == 'python.exe' and
- os.path.exists(alt)
- )
- if use_alt:
- # use pythonw.exe to avoid opening a console window
- executable = alt
-
- from distutils.spawn import spawn
-
- spawn([executable, '-E', '-c', 'pass'], 0)
-
- if os.path.exists(ok_file):
- log.info(
- "TEST PASSED: %s appears to support .pth files",
- instdir
- )
- return True
- finally:
- if f:
- f.close()
- if os.path.exists(ok_file):
- os.unlink(ok_file)
- if os.path.exists(pth_file):
- os.unlink(pth_file)
- if not self.multi_version:
- log.warn("TEST FAILED: %s does NOT support .pth files", instdir)
- return False
-
- def install_egg_scripts(self, dist):
- """Write all the scripts for `dist`, unless scripts are excluded"""
- if not self.exclude_scripts and dist.metadata_isdir('scripts'):
- for script_name in dist.metadata_listdir('scripts'):
- if dist.metadata_isdir('scripts/' + script_name):
- # The "script" is a directory, likely a Python 3
- # __pycache__ directory, so skip it.
- continue
- self.install_script(
- dist, script_name,
- dist.get_metadata('scripts/' + script_name)
- )
- self.install_wrapper_scripts(dist)
-
- def add_output(self, path):
- if os.path.isdir(path):
- for base, dirs, files in os.walk(path):
- for filename in files:
- self.outputs.append(os.path.join(base, filename))
- else:
- self.outputs.append(path)
-
- def not_editable(self, spec):
- if self.editable:
- raise DistutilsArgError(
- "Invalid argument %r: you can't use filenames or URLs "
- "with --editable (except via the --find-links option)."
- % (spec,)
- )
-
- def check_editable(self, spec):
- if not self.editable:
- return
-
- if os.path.exists(os.path.join(self.build_directory, spec.key)):
- raise DistutilsArgError(
- "%r already exists in %s; can't do a checkout there" %
- (spec.key, self.build_directory)
- )
-
- @contextlib.contextmanager
- def _tmpdir(self):
- tmpdir = tempfile.mkdtemp(prefix=u"easy_install-")
- try:
- # cast to str as workaround for #709 and #710 and #712
- yield str(tmpdir)
- finally:
- os.path.exists(tmpdir) and rmtree(tmpdir)
-
- def easy_install(self, spec, deps=False):
- with self._tmpdir() as tmpdir:
- if not isinstance(spec, Requirement):
- if URL_SCHEME(spec):
- # It's a url, download it to tmpdir and process
- self.not_editable(spec)
- dl = self.package_index.download(spec, tmpdir)
- return self.install_item(None, dl, tmpdir, deps, True)
-
- elif os.path.exists(spec):
- # Existing file or directory, just process it directly
- self.not_editable(spec)
- return self.install_item(None, spec, tmpdir, deps, True)
- else:
- spec = parse_requirement_arg(spec)
-
- self.check_editable(spec)
- dist = self.package_index.fetch_distribution(
- spec, tmpdir, self.upgrade, self.editable,
- not self.always_copy, self.local_index
- )
- if dist is None:
- msg = "Could not find suitable distribution for %r" % spec
- if self.always_copy:
- msg += " (--always-copy skips system and development eggs)"
- raise DistutilsError(msg)
- elif dist.precedence == DEVELOP_DIST:
- # .egg-info dists don't need installing, just process deps
- self.process_distribution(spec, dist, deps, "Using")
- return dist
- else:
- return self.install_item(spec, dist.location, tmpdir, deps)
-
- def install_item(self, spec, download, tmpdir, deps, install_needed=False):
-
- # Installation is also needed if the file is in tmpdir or is not an egg
- install_needed = install_needed or self.always_copy
- install_needed = install_needed or os.path.dirname(download) == tmpdir
- install_needed = install_needed or not download.endswith('.egg')
- install_needed = install_needed or (
- self.always_copy_from is not None and
- os.path.dirname(normalize_path(download)) ==
- normalize_path(self.always_copy_from)
- )
-
- if spec and not install_needed:
- # at this point, we know it's a local .egg; we just don't know if
- # it's already installed.
- for dist in self.local_index[spec.project_name]:
- if dist.location == download:
- break
- else:
- install_needed = True # it's not in the local index
-
- log.info("Processing %s", os.path.basename(download))
-
- if install_needed:
- dists = self.install_eggs(spec, download, tmpdir)
- for dist in dists:
- self.process_distribution(spec, dist, deps)
- else:
- dists = [self.egg_distribution(download)]
- self.process_distribution(spec, dists[0], deps, "Using")
-
- if spec is not None:
- for dist in dists:
- if dist in spec:
- return dist
-
- def select_scheme(self, name):
- try:
- install._select_scheme(self, name)
- except AttributeError:
- # stdlib distutils
- install.install.select_scheme(self, name.replace('posix', 'unix'))
-
- # FIXME: 'easy_install.process_distribution' is too complex (12)
- def process_distribution( # noqa: C901
- self, requirement, dist, deps=True, *info,
- ):
- self.update_pth(dist)
- self.package_index.add(dist)
- if dist in self.local_index[dist.key]:
- self.local_index.remove(dist)
- self.local_index.add(dist)
- self.install_egg_scripts(dist)
- self.installed_projects[dist.key] = dist
- log.info(self.installation_report(requirement, dist, *info))
- if (dist.has_metadata('dependency_links.txt') and
- not self.no_find_links):
- self.package_index.add_find_links(
- dist.get_metadata_lines('dependency_links.txt')
- )
- if not deps and not self.always_copy:
- return
- elif requirement is not None and dist.key != requirement.key:
- log.warn("Skipping dependencies for %s", dist)
- return # XXX this is not the distribution we were looking for
- elif requirement is None or dist not in requirement:
- # if we wound up with a different version, resolve what we've got
- distreq = dist.as_requirement()
- requirement = Requirement(str(distreq))
- log.info("Processing dependencies for %s", requirement)
- try:
- distros = WorkingSet([]).resolve(
- [requirement], self.local_index, self.easy_install
- )
- except DistributionNotFound as e:
- raise DistutilsError(str(e)) from e
- except VersionConflict as e:
- raise DistutilsError(e.report()) from e
- if self.always_copy or self.always_copy_from:
- # Force all the relevant distros to be copied or activated
- for dist in distros:
- if dist.key not in self.installed_projects:
- self.easy_install(dist.as_requirement())
- log.info("Finished processing dependencies for %s", requirement)
-
- def should_unzip(self, dist):
- if self.zip_ok is not None:
- return not self.zip_ok
- if dist.has_metadata('not-zip-safe'):
- return True
- if not dist.has_metadata('zip-safe'):
- return True
- return False
-
- def maybe_move(self, spec, dist_filename, setup_base):
- dst = os.path.join(self.build_directory, spec.key)
- if os.path.exists(dst):
- msg = (
- "%r already exists in %s; build directory %s will not be kept"
- )
- log.warn(msg, spec.key, self.build_directory, setup_base)
- return setup_base
- if os.path.isdir(dist_filename):
- setup_base = dist_filename
- else:
- if os.path.dirname(dist_filename) == setup_base:
- os.unlink(dist_filename) # get it out of the tmp dir
- contents = os.listdir(setup_base)
- if len(contents) == 1:
- dist_filename = os.path.join(setup_base, contents[0])
- if os.path.isdir(dist_filename):
- # if the only thing there is a directory, move it instead
- setup_base = dist_filename
- ensure_directory(dst)
- shutil.move(setup_base, dst)
- return dst
-
- def install_wrapper_scripts(self, dist):
- if self.exclude_scripts:
- return
- for args in ScriptWriter.best().get_args(dist):
- self.write_script(*args)
-
- def install_script(self, dist, script_name, script_text, dev_path=None):
- """Generate a legacy script wrapper and install it"""
- spec = str(dist.as_requirement())
- is_script = is_python_script(script_text, script_name)
-
- if is_script:
- body = self._load_template(dev_path) % locals()
- script_text = ScriptWriter.get_header(script_text) + body
- self.write_script(script_name, _to_bytes(script_text), 'b')
-
- @staticmethod
- def _load_template(dev_path):
- """
- There are a couple of template scripts in the package. This
- function loads one of them and prepares it for use.
- """
- # See https://github.com/pypa/setuptools/issues/134 for info
- # on script file naming and downstream issues with SVR4
- name = 'script.tmpl'
- if dev_path:
- name = name.replace('.tmpl', ' (dev).tmpl')
-
- raw_bytes = resource_string('setuptools', name)
- return raw_bytes.decode('utf-8')
-
- def write_script(self, script_name, contents, mode="t", blockers=()):
- """Write an executable file to the scripts directory"""
- self.delete_blockers( # clean up old .py/.pyw w/o a script
- [os.path.join(self.script_dir, x) for x in blockers]
- )
- log.info("Installing %s script to %s", script_name, self.script_dir)
- target = os.path.join(self.script_dir, script_name)
- self.add_output(target)
-
- if self.dry_run:
- return
-
- mask = current_umask()
- ensure_directory(target)
- if os.path.exists(target):
- os.unlink(target)
- with open(target, "w" + mode) as f:
- f.write(contents)
- chmod(target, 0o777 - mask)
-
- def install_eggs(self, spec, dist_filename, tmpdir):
- # .egg dirs or files are already built, so just return them
- installer_map = {
- '.egg': self.install_egg,
- '.exe': self.install_exe,
- '.whl': self.install_wheel,
- }
- try:
- install_dist = installer_map[
- dist_filename.lower()[-4:]
- ]
- except KeyError:
- pass
- else:
- return [install_dist(dist_filename, tmpdir)]
-
- # Anything else, try to extract and build
- setup_base = tmpdir
- if os.path.isfile(dist_filename) and not dist_filename.endswith('.py'):
- unpack_archive(dist_filename, tmpdir, self.unpack_progress)
- elif os.path.isdir(dist_filename):
- setup_base = os.path.abspath(dist_filename)
-
- if (setup_base.startswith(tmpdir) # something we downloaded
- and self.build_directory and spec is not None):
- setup_base = self.maybe_move(spec, dist_filename, setup_base)
-
- # Find the setup.py file
- setup_script = os.path.join(setup_base, 'setup.py')
-
- if not os.path.exists(setup_script):
- setups = glob(os.path.join(setup_base, '*', 'setup.py'))
- if not setups:
- raise DistutilsError(
- "Couldn't find a setup script in %s" %
- os.path.abspath(dist_filename)
- )
- if len(setups) > 1:
- raise DistutilsError(
- "Multiple setup scripts in %s" %
- os.path.abspath(dist_filename)
- )
- setup_script = setups[0]
-
- # Now run it, and return the result
- if self.editable:
- log.info(self.report_editable(spec, setup_script))
- return []
- else:
- return self.build_and_install(setup_script, setup_base)
-
- def egg_distribution(self, egg_path):
- if os.path.isdir(egg_path):
- metadata = PathMetadata(egg_path, os.path.join(egg_path,
- 'EGG-INFO'))
- else:
- metadata = EggMetadata(zipimport.zipimporter(egg_path))
- return Distribution.from_filename(egg_path, metadata=metadata)
-
- # FIXME: 'easy_install.install_egg' is too complex (11)
- def install_egg(self, egg_path, tmpdir): # noqa: C901
- destination = os.path.join(
- self.install_dir,
- os.path.basename(egg_path),
- )
- destination = os.path.abspath(destination)
- if not self.dry_run:
- ensure_directory(destination)
-
- dist = self.egg_distribution(egg_path)
- if not (
- os.path.exists(destination) and os.path.samefile(egg_path, destination)
- ):
- if os.path.isdir(destination) and not os.path.islink(destination):
- dir_util.remove_tree(destination, dry_run=self.dry_run)
- elif os.path.exists(destination):
- self.execute(
- os.unlink,
- (destination,),
- "Removing " + destination,
- )
- try:
- new_dist_is_zipped = False
- if os.path.isdir(egg_path):
- if egg_path.startswith(tmpdir):
- f, m = shutil.move, "Moving"
- else:
- f, m = shutil.copytree, "Copying"
- elif self.should_unzip(dist):
- self.mkpath(destination)
- f, m = self.unpack_and_compile, "Extracting"
- else:
- new_dist_is_zipped = True
- if egg_path.startswith(tmpdir):
- f, m = shutil.move, "Moving"
- else:
- f, m = shutil.copy2, "Copying"
- self.execute(
- f,
- (egg_path, destination),
- (m + " %s to %s") % (
- os.path.basename(egg_path),
- os.path.dirname(destination)
- ),
- )
- update_dist_caches(
- destination,
- fix_zipimporter_caches=new_dist_is_zipped,
- )
- except Exception:
- update_dist_caches(destination, fix_zipimporter_caches=False)
- raise
-
- self.add_output(destination)
- return self.egg_distribution(destination)
-
- def install_exe(self, dist_filename, tmpdir):
- # See if it's valid, get data
- cfg = extract_wininst_cfg(dist_filename)
- if cfg is None:
- raise DistutilsError(
- "%s is not a valid distutils Windows .exe" % dist_filename
- )
- # Create a dummy distribution object until we build the real distro
- dist = Distribution(
- None,
- project_name=cfg.get('metadata', 'name'),
- version=cfg.get('metadata', 'version'), platform=get_platform(),
- )
-
- # Convert the .exe to an unpacked egg
- egg_path = os.path.join(tmpdir, dist.egg_name() + '.egg')
- dist.location = egg_path
- egg_tmp = egg_path + '.tmp'
- _egg_info = os.path.join(egg_tmp, 'EGG-INFO')
- pkg_inf = os.path.join(_egg_info, 'PKG-INFO')
- ensure_directory(pkg_inf) # make sure EGG-INFO dir exists
- dist._provider = PathMetadata(egg_tmp, _egg_info) # XXX
- self.exe_to_egg(dist_filename, egg_tmp)
-
- # Write EGG-INFO/PKG-INFO
- if not os.path.exists(pkg_inf):
- f = open(pkg_inf, 'w')
- f.write('Metadata-Version: 1.0\n')
- for k, v in cfg.items('metadata'):
- if k != 'target_version':
- f.write('%s: %s\n' % (k.replace('_', '-').title(), v))
- f.close()
- script_dir = os.path.join(_egg_info, 'scripts')
- # delete entry-point scripts to avoid duping
- self.delete_blockers([
- os.path.join(script_dir, args[0])
- for args in ScriptWriter.get_args(dist)
- ])
- # Build .egg file from tmpdir
- bdist_egg.make_zipfile(
- egg_path, egg_tmp, verbose=self.verbose, dry_run=self.dry_run,
- )
- # install the .egg
- return self.install_egg(egg_path, tmpdir)
-
- # FIXME: 'easy_install.exe_to_egg' is too complex (12)
- def exe_to_egg(self, dist_filename, egg_tmp): # noqa: C901
- """Extract a bdist_wininst to the directories an egg would use"""
- # Check for .pth file and set up prefix translations
- prefixes = get_exe_prefixes(dist_filename)
- to_compile = []
- native_libs = []
- top_level = {}
-
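- # Translate archive paths through the prefix map, collecting native
- # extensions and .py sources for stubbing and byte-compilation below.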
- def process(src, dst):
- s = src.lower()
- for old, new in prefixes:
- if s.startswith(old):
- src = new + src[len(old):]
- parts = src.split('/')
- dst = os.path.join(egg_tmp, *parts)
- dl = dst.lower()
- if dl.endswith('.pyd') or dl.endswith('.dll'):
- parts[-1] = bdist_egg.strip_module(parts[-1])
- top_level[os.path.splitext(parts[0])[0]] = 1
- native_libs.append(src)
- elif dl.endswith('.py') and old != 'SCRIPTS/':
- top_level[os.path.splitext(parts[0])[0]] = 1
- to_compile.append(dst)
- return dst
- if not src.endswith('.pth'):
- log.warn("WARNING: can't process %s", src)
- return None
-
- # extract, tracking .pyd/.dll->native_libs and .py -> to_compile
- unpack_archive(dist_filename, egg_tmp, process)
- stubs = []
- for res in native_libs:
- if res.lower().endswith('.pyd'): # create stubs for .pyd's
- parts = res.split('/')
- resource = parts[-1]
- parts[-1] = bdist_egg.strip_module(parts[-1]) + '.py'
- pyfile = os.path.join(egg_tmp, *parts)
- to_compile.append(pyfile)
- stubs.append(pyfile)
- bdist_egg.write_stub(resource, pyfile)
- self.byte_compile(to_compile) # compile .py's
- bdist_egg.write_safety_flag(
- os.path.join(egg_tmp, 'EGG-INFO'),
- bdist_egg.analyze_egg(egg_tmp, stubs)) # write zip-safety flag
-
- for name in 'top_level', 'native_libs':
- if locals()[name]:
- txt = os.path.join(egg_tmp, 'EGG-INFO', name + '.txt')
- if not os.path.exists(txt):
- f = open(txt, 'w')
- f.write('\n'.join(locals()[name]) + '\n')
- f.close()
-
- def install_wheel(self, wheel_path, tmpdir):
- wheel = Wheel(wheel_path)
- assert wheel.is_compatible()
- destination = os.path.join(self.install_dir, wheel.egg_name())
- destination = os.path.abspath(destination)
- if not self.dry_run:
- ensure_directory(destination)
- if os.path.isdir(destination) and not os.path.islink(destination):
- dir_util.remove_tree(destination, dry_run=self.dry_run)
- elif os.path.exists(destination):
- self.execute(
- os.unlink,
- (destination,),
- "Removing " + destination,
- )
- try:
- self.execute(
- wheel.install_as_egg,
- (destination,),
- ("Installing %s to %s") % (
- os.path.basename(wheel_path),
- os.path.dirname(destination)
- ),
- )
- finally:
- update_dist_caches(destination, fix_zipimporter_caches=False)
- self.add_output(destination)
- return self.egg_distribution(destination)
-
- __mv_warning = textwrap.dedent("""
- Because this distribution was installed --multi-version, before you can
- import modules from this package in an application, you will need to
- 'import pkg_resources' and then use a 'require()' call similar to one of
- these examples, in order to select the desired version:
-
- pkg_resources.require("%(name)s") # latest installed version
- pkg_resources.require("%(name)s==%(version)s") # this exact version
- pkg_resources.require("%(name)s>=%(version)s") # this version or higher
- """).lstrip() # noqa
-
- __id_warning = textwrap.dedent("""
- Note also that the installation directory must be on sys.path at runtime for
- this to work. (e.g. by being the application's script directory, by being on
- PYTHONPATH, or by being added to sys.path by your code.)
- """) # noqa
-
- def installation_report(self, req, dist, what="Installed"):
- """Helpful installation message for display to package users"""
- msg = "\n%(what)s %(eggloc)s%(extras)s"
- if self.multi_version and not self.no_report:
- msg += '\n' + self.__mv_warning
- if self.install_dir not in map(normalize_path, sys.path):
- msg += '\n' + self.__id_warning
-
- eggloc = dist.location
- name = dist.project_name
- version = dist.version
- extras = '' # TODO: self.report_extras(req, dist)
- return msg % locals()
-
- __editable_msg = textwrap.dedent("""
- Extracted editable version of %(spec)s to %(dirname)s
-
- If it uses setuptools in its setup script, you can activate it in
- "development" mode by going to that directory and running::
-
- %(python)s setup.py develop
-
- See the setuptools documentation for the "develop" command for more info.
- """).lstrip() # noqa
-
- def report_editable(self, spec, setup_script):
- dirname = os.path.dirname(setup_script)
- python = sys.executable
- return '\n' + self.__editable_msg % locals()
-
- def run_setup(self, setup_script, setup_base, args):
- sys.modules.setdefault('distutils.command.bdist_egg', bdist_egg)
- sys.modules.setdefault('distutils.command.egg_info', egg_info)
-
- args = list(args)
- if self.verbose > 2:
- v = 'v' * (self.verbose - 1)
- args.insert(0, '-' + v)
- elif self.verbose < 2:
- args.insert(0, '-q')
- if self.dry_run:
- args.insert(0, '-n')
- log.info(
- "Running %s %s", setup_script[len(setup_base) + 1:], ' '.join(args)
- )
- try:
- run_setup(setup_script, args)
- except SystemExit as v:
- raise DistutilsError(
- "Setup script exited with %s" % (v.args[0],)
- ) from v
-
- def build_and_install(self, setup_script, setup_base):
- args = ['bdist_egg', '--dist-dir']
-
- dist_dir = tempfile.mkdtemp(
- prefix='egg-dist-tmp-', dir=os.path.dirname(setup_script)
- )
- try:
- self._set_fetcher_options(os.path.dirname(setup_script))
- args.append(dist_dir)
-
- self.run_setup(setup_script, setup_base, args)
- all_eggs = Environment([dist_dir])
- eggs = []
- for key in all_eggs:
- for dist in all_eggs[key]:
- eggs.append(self.install_egg(dist.location, setup_base))
- if not eggs and not self.dry_run:
- log.warn("No eggs found in %s (setup script problem?)",
- dist_dir)
- return eggs
- finally:
- rmtree(dist_dir)
- log.set_verbosity(self.verbose) # restore our log verbosity
-
- def _set_fetcher_options(self, base):
- """
- When easy_install is about to run bdist_egg on a source dist, that
- source dist might have 'setup_requires' directives, requiring
- additional fetching. Ensure the fetcher options given to easy_install
- are available to that command as well.
- """
- # find the fetch options from easy_install and write them out
- # to the setup.cfg file.
- ei_opts = self.distribution.get_option_dict('easy_install').copy()
- fetch_directives = (
- 'find_links', 'site_dirs', 'index_url', 'optimize', 'allow_hosts',
- )
- fetch_options = {}
- for key, val in ei_opts.items():
- if key not in fetch_directives:
- continue
- fetch_options[key] = val[1]
- # create a settings dictionary suitable for `edit_config`
- settings = dict(easy_install=fetch_options)
- cfg_filename = os.path.join(base, 'setup.cfg')
- setopt.edit_config(cfg_filename, settings)
-
- def update_pth(self, dist): # noqa: C901 # is too complex (11) # FIXME
- if self.pth_file is None:
- return
-
- for d in self.pth_file[dist.key]: # drop old entries
- if not self.multi_version and d.location == dist.location:
- continue
-
- log.info("Removing %s from easy-install.pth file", d)
- self.pth_file.remove(d)
- if d.location in self.shadow_path:
- self.shadow_path.remove(d.location)
-
- if not self.multi_version:
- if dist.location in self.pth_file.paths:
- log.info(
- "%s is already the active version in easy-install.pth",
- dist,
- )
- else:
- log.info("Adding %s to easy-install.pth file", dist)
- self.pth_file.add(dist) # add new entry
- if dist.location not in self.shadow_path:
- self.shadow_path.append(dist.location)
-
- if self.dry_run:
- return
-
- self.pth_file.save()
-
- if dist.key != 'setuptools':
- return
-
- # Ensure that setuptools itself never becomes unavailable!
- # XXX should this check for latest version?
- filename = os.path.join(self.install_dir, 'setuptools.pth')
- if os.path.islink(filename):
- os.unlink(filename)
- with open(filename, 'wt') as f:
- f.write(self.pth_file.make_relative(dist.location) + '\n')
-
- def unpack_progress(self, src, dst):
- # Progress filter for unpacking
- log.debug("Unpacking %s to %s", src, dst)
- return dst # only unpack-and-compile skips files for dry run
-
- def unpack_and_compile(self, egg_path, destination):
- to_compile = []
- to_chmod = []
-
- def pf(src, dst):
- if dst.endswith('.py') and not src.startswith('EGG-INFO/'):
- to_compile.append(dst)
- elif dst.endswith('.dll') or dst.endswith('.so'):
- to_chmod.append(dst)
- self.unpack_progress(src, dst)
- return not self.dry_run and dst or None
-
- unpack_archive(egg_path, destination, pf)
- self.byte_compile(to_compile)
- if not self.dry_run:
- for f in to_chmod:
- mode = ((os.stat(f)[stat.ST_MODE]) | 0o555) & 0o7755
- chmod(f, mode)
-
- def byte_compile(self, to_compile):
- if sys.dont_write_bytecode:
- return
-
- from distutils.util import byte_compile
-
- try:
- # try to make the byte compile messages quieter
- log.set_verbosity(self.verbose - 1)
-
- byte_compile(to_compile, optimize=0, force=1, dry_run=self.dry_run)
- if self.optimize:
- byte_compile(
- to_compile, optimize=self.optimize, force=1,
- dry_run=self.dry_run,
- )
- finally:
- log.set_verbosity(self.verbose) # restore original verbosity
-
- __no_default_msg = textwrap.dedent("""
- bad install directory or PYTHONPATH
-
- You are attempting to install a package to a directory that is not
- on PYTHONPATH and which Python does not read ".pth" files from. The
- installation directory you specified (via --install-dir, --prefix, or
- the distutils default setting) was:
-
- %s
-
- and your PYTHONPATH environment variable currently contains:
-
- %r
-
- Here are some of your options for correcting the problem:
-
- * You can choose a different installation directory, i.e., one that is
- on PYTHONPATH or supports .pth files
-
- * You can add the installation directory to the PYTHONPATH environment
- variable. (It must then also be on PYTHONPATH whenever you run
- Python and want to use the package(s) you are installing.)
-
- * You can set up the installation directory to support ".pth" files by
- using one of the approaches described here:
-
- https://setuptools.pypa.io/en/latest/deprecated/easy_install.html#custom-installation-locations
-
-
- Please make the appropriate changes for your system and try again.
- """).strip()
-
- def create_home_path(self):
- """Create directories under ~."""
- if not self.user:
- return
- home = convert_path(os.path.expanduser("~"))
- for path in only_strs(self.config_vars.values()):
- if path.startswith(home) and not os.path.isdir(path):
- self.debug_print("os.makedirs('%s', 0o700)" % path)
- os.makedirs(path, 0o700)
-
- INSTALL_SCHEMES = dict(
- posix=dict(
- install_dir='$base/lib/python$py_version_short/site-packages',
- script_dir='$base/bin',
- ),
- )
-
- DEFAULT_SCHEME = dict(
- install_dir='$base/Lib/site-packages',
- script_dir='$base/Scripts',
- )
-
- def _expand(self, *attrs):
- config_vars = self.get_finalized_command('install').config_vars
-
- if self.prefix:
- # Set default install_dir/scripts from --prefix
- config_vars = dict(config_vars)
- config_vars['base'] = self.prefix
- scheme = self.INSTALL_SCHEMES.get(os.name, self.DEFAULT_SCHEME)
- for attr, val in scheme.items():
- if getattr(self, attr, None) is None:
- setattr(self, attr, val)
-
- from distutils.util import subst_vars
-
- for attr in attrs:
- val = getattr(self, attr)
- if val is not None:
- val = subst_vars(val, config_vars)
- if os.name == 'posix':
- val = os.path.expanduser(val)
- setattr(self, attr, val)
-
-
-def _pythonpath():
- items = os.environ.get('PYTHONPATH', '').split(os.pathsep)
- return filter(None, items)
-
-
-def get_site_dirs():
- """
- Return a list of 'site' dirs
- """
-
- sitedirs = []
-
- # start with PYTHONPATH
- sitedirs.extend(_pythonpath())
-
- prefixes = [sys.prefix]
- if sys.exec_prefix != sys.prefix:
- prefixes.append(sys.exec_prefix)
- for prefix in prefixes:
- if not prefix:
- continue
-
- if sys.platform in ('os2emx', 'riscos'):
- sitedirs.append(os.path.join(prefix, "Lib", "site-packages"))
- elif os.sep == '/':
- sitedirs.extend([
- os.path.join(
- prefix,
- "lib",
- "python{}.{}".format(*sys.version_info),
- "site-packages",
- ),
- os.path.join(prefix, "lib", "site-python"),
- ])
- else:
- sitedirs.extend([
- prefix,
- os.path.join(prefix, "lib", "site-packages"),
- ])
- if sys.platform != 'darwin':
- continue
-
- # for framework builds *only* we add the standard Apple
- # locations. Currently only per-user, but /Library and
- # /Network/Library could be added too
- if 'Python.framework' not in prefix:
- continue
-
- home = os.environ.get('HOME')
- if not home:
- continue
-
- home_sp = os.path.join(
- home,
- 'Library',
- 'Python',
- '{}.{}'.format(*sys.version_info),
- 'site-packages',
- )
- sitedirs.append(home_sp)
- lib_paths = get_path('purelib'), get_path('platlib')
-
- sitedirs.extend(s for s in lib_paths if s not in sitedirs)
-
- if site.ENABLE_USER_SITE:
- sitedirs.append(site.USER_SITE)
-
- with contextlib.suppress(AttributeError):
- sitedirs.extend(site.getsitepackages())
-
- sitedirs = list(map(normalize_path, sitedirs))
-
- return sitedirs
-
-
-def expand_paths(inputs): # noqa: C901 # is too complex (11) # FIXME
- """Yield sys.path directories that might contain "old-style" packages"""
-
- seen = {}
-
- for dirname in inputs:
- dirname = normalize_path(dirname)
- if dirname in seen:
- continue
-
- seen[dirname] = 1
- if not os.path.isdir(dirname):
- continue
-
- files = os.listdir(dirname)
- yield dirname, files
-
- for name in files:
- if not name.endswith('.pth'):
- # We only care about the .pth files
- continue
- if name in ('easy-install.pth', 'setuptools.pth'):
- # Ignore .pth files that we control
- continue
-
- # Read the .pth file
- f = open(os.path.join(dirname, name))
- lines = list(yield_lines(f))
- f.close()
-
- # Yield existing non-dupe, non-import directory lines from it
- for line in lines:
- if line.startswith("import"):
- continue
-
- line = normalize_path(line.rstrip())
- if line in seen:
- continue
-
- seen[line] = 1
- if not os.path.isdir(line):
- continue
-
- yield line, os.listdir(line)
-
-
-def extract_wininst_cfg(dist_filename):
- """Extract configuration data from a bdist_wininst .exe
-
- Returns a configparser.RawConfigParser, or None
- """
- f = open(dist_filename, 'rb')
- try:
- endrec = zipfile._EndRecData(f)
- if endrec is None:
- return None
-
- prepended = (endrec[9] - endrec[5]) - endrec[6]
- if prepended < 12: # no wininst data here
- return None
- f.seek(prepended - 12)
-
- tag, cfglen, bmlen = struct.unpack("<iii", f.read(12))
- if tag not in (0x1234567A, 0x1234567B):
- return None # not a valid tag
-
- f.seek(prepended - (12 + cfglen))
- init = {'version': '', 'target_version': ''}
- cfg = configparser.RawConfigParser(init)
- try:
- part = f.read(cfglen)
- # Read up to the first null byte.
- config = part.split(b'\0', 1)[0]
- # Now the config is in unicode, but latin1 is not the
- # right encoding... ah well, it's a start
- config = config.decode('ascii', 'ignore')
- cfg.read_string(config)
- except configparser.Error:
- return None
- if not cfg.has_section('metadata') or not cfg.has_section('Setup'):
- return None
- return cfg
- finally:
- f.close()
-
-
-def get_exe_prefixes(exe_filename):
- """Get exe->egg path translations for a given .exe file"""
-
- prefixes = [
- ('PURELIB/', ''),
- ('PLATLIB/pywin32_system32', ''),
- ('PLATLIB/', ''),
- ('SCRIPTS/', 'EGG-INFO/scripts/'),
- ('DATA/lib/site-packages', ''),
- ]
- z = zipfile.ZipFile(exe_filename)
- try:
- for info in z.infolist():
- name = info.filename
- parts = name.split('/')
- if len(parts) == 3 and parts[2] == 'PKG-INFO':
- if parts[1].endswith('.egg-info'):
- prefixes.insert(0, ('/'.join(parts[:2]), 'EGG-INFO/'))
- break
- if len(parts) != 2 or not name.endswith('.pth'):
- continue
- if name.endswith('-nspkg.pth'):
- continue
- if parts[0].upper() in ('PURELIB', 'PLATLIB'):
- contents = z.read(name).decode()
- for pth in yield_lines(contents):
- pth = pth.strip().replace('\\', '/')
- if not pth.startswith('import'):
- prefixes.append((('%s/%s/' % (parts[0], pth)), ''))
- finally:
- z.close()
- prefixes = [(x.lower(), y) for x, y in prefixes]
- prefixes.sort()
- prefixes.reverse()
- return prefixes
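-# Illustrative sketch (not part of the original module): the table returned
-# above is typically consumed by matching each archive member against the
-# longest (lower-cased) prefix and swapping in the replacement. `remap` is a
-# hypothetical helper name.
-#
-# def remap(name, prefixes):
-# for old, new in prefixes: # sorted longest-first above
-# if name.lower().startswith(old):
-# return new + name[len(old):]
-# return None # member falls outside every known prefix
-#
-# remap('PURELIB/pkg/mod.py', [('purelib/', '')]) # -> 'pkg/mod.py'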
-
-
-class PthDistributions(Environment):
- """A .pth file with Distribution paths in it"""
-
- dirty = False
-
- def __init__(self, filename, sitedirs=()):
- self.filename = filename
- self.sitedirs = list(map(normalize_path, sitedirs))
- self.basedir = normalize_path(os.path.dirname(self.filename))
- self._load()
- super().__init__([], None, None)
- for path in yield_lines(self.paths):
- list(map(self.add, find_distributions(path, True)))
-
- def _load(self):
- self.paths = []
- saw_import = False
- seen = dict.fromkeys(self.sitedirs)
- if os.path.isfile(self.filename):
- f = open(self.filename, 'rt')
- for line in f:
- if line.startswith('import'):
- saw_import = True
- continue
- path = line.rstrip()
- self.paths.append(path)
- if not path.strip() or path.strip().startswith('#'):
- continue
- # skip non-existent paths, in case somebody deleted a package
- # manually, and duplicate paths as well
- path = self.paths[-1] = normalize_path(
- os.path.join(self.basedir, path)
- )
- if not os.path.exists(path) or path in seen:
- self.paths.pop() # skip it
- self.dirty = True # we cleaned up, so we're dirty now :)
- continue
- seen[path] = 1
- f.close()
-
- if self.paths and not saw_import:
- self.dirty = True # ensure anything we touch has import wrappers
- while self.paths and not self.paths[-1].strip():
- self.paths.pop()
-
- def save(self):
- """Write changed .pth file back to disk"""
- if not self.dirty:
- return
-
- rel_paths = list(map(self.make_relative, self.paths))
- if rel_paths:
- log.debug("Saving %s", self.filename)
- lines = self._wrap_lines(rel_paths)
- data = '\n'.join(lines) + '\n'
-
- if os.path.islink(self.filename):
- os.unlink(self.filename)
- with open(self.filename, 'wt') as f:
- f.write(data)
-
- elif os.path.exists(self.filename):
- log.debug("Deleting empty %s", self.filename)
- os.unlink(self.filename)
-
- self.dirty = False
-
- @staticmethod
- def _wrap_lines(lines):
- return lines
-
- def add(self, dist):
- """Add `dist` to the distribution map"""
- new_path = (
- dist.location not in self.paths and (
- dist.location not in self.sitedirs or
- # account for '.' being in PYTHONPATH
- dist.location == os.getcwd()
- )
- )
- if new_path:
- self.paths.append(dist.location)
- self.dirty = True
- super().add(dist)
-
- def remove(self, dist):
- """Remove `dist` from the distribution map"""
- while dist.location in self.paths:
- self.paths.remove(dist.location)
- self.dirty = True
- super().remove(dist)
-
- def make_relative(self, path):
- npath, last = os.path.split(normalize_path(path))
- baselen = len(self.basedir)
- parts = [last]
- sep = os.altsep == '/' and '/' or os.sep
- while len(npath) >= baselen:
- if npath == self.basedir:
- parts.append(os.curdir)
- parts.reverse()
- return sep.join(parts)
- npath, last = os.path.split(npath)
- parts.append(last)
- else:
- return path
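- # Example (sketch): with basedir '/sp', make_relative('/sp/foo.egg')
- # yields './foo.egg', while a path outside basedir is returned unchanged.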
-
-
-class RewritePthDistributions(PthDistributions):
- @classmethod
- def _wrap_lines(cls, lines):
- yield cls.prelude
- for line in lines:
- yield line
- yield cls.postlude
-
- prelude = _one_liner("""
- import sys
- sys.__plen = len(sys.path)
- """)
- postlude = _one_liner("""
- import sys
- new = sys.path[sys.__plen:]
- del sys.path[sys.__plen:]
- p = getattr(sys, '__egginsert', 0)
- sys.path[p:p] = new
- sys.__egginsert = p + len(new)
- """)
-
-
-if os.environ.get('SETUPTOOLS_SYS_PATH_TECHNIQUE', 'raw') == 'rewrite':
- PthDistributions = RewritePthDistributions
-
-
-def _first_line_re():
- """
- Return a regular expression based on first_line_re suitable for matching
- strings.
- """
- if isinstance(first_line_re.pattern, str):
- return first_line_re
-
- # first_line_re in Python >=3.1.4 and >=3.2.1 is a bytes pattern.
- return re.compile(first_line_re.pattern.decode())
-
-
-def auto_chmod(func, arg, exc):
- if func in [os.unlink, os.remove] and os.name == 'nt':
- chmod(arg, stat.S_IWRITE)
- return func(arg)
- et, ev, _ = sys.exc_info()
- # TODO: This code doesn't make sense. What is it trying to do?
- raise (ev[0], ev[1] + (" %s %s" % (func, arg)))
-
-
-def update_dist_caches(dist_path, fix_zipimporter_caches):
- """
- Fix any globally cached `dist_path` related data
-
- `dist_path` should be a path of a newly installed egg distribution (zipped
- or unzipped).
-
- sys.path_importer_cache contains finder objects that have been cached when
- importing data from the original distribution. Any such finders need to be
- cleared since the replacement distribution might be packaged differently,
- e.g. a zipped egg distribution might get replaced with an unzipped egg
- folder or vice versa. Having the old finders cached may then cause Python
- to attempt loading modules from the replacement distribution using an
- incorrect loader.
-
- zipimport.zipimporter objects are Python loaders charged with importing
- data packaged inside zip archives. If stale loaders referencing the
- original distribution are left behind, they can fail to load modules from
- the replacement distribution. E.g. if an old zipimport.zipimporter instance
- is used to load data from a new zipped egg archive, it may cause the
- operation to attempt to locate the requested data in the wrong location -
- one indicated by the original distribution's zip archive directory
- information. Such an operation may then fail outright, e.g. report having
- read a 'bad local file header', or even worse, it may fail silently &
- return invalid data.
-
- zipimport._zip_directory_cache contains cached zip archive directory
- information for all existing zipimport.zipimporter instances and all such
- instances connected to the same archive share the same cached directory
- information.
-
- If asked, and the underlying Python implementation allows it, we can fix
- all existing zipimport.zipimporter instances instead of having to track
- them down and remove them one by one, by updating their shared cached zip
- archive directory information. This, of course, assumes that the
- replacement distribution is packaged as a zipped egg.
-
- If not asked to fix existing zipimport.zipimporter instances, we still do
- our best to clear any remaining zipimport.zipimporter related cached data
- that might somehow later get used when attempting to load data from the new
- distribution and thus cause such load operations to fail. Note that when
- tracking down such remaining stale data, we can not catch every conceivable
- usage from here, and we clear only those that we know of and have found to
- cause problems if left alive. Any remaining caches should be updated by
- whoever is in charge of maintaining them, i.e. they should be ready to
- handle us replacing their zip archives with new distributions at runtime.
-
- """
- # There are several other known sources of stale zipimport.zipimporter
- # instances that we do not clear here, but might if ever given a reason to
- # do so:
- # * Global setuptools pkg_resources.working_set (a.k.a. 'master working
- # set') may contain distributions which may in turn contain their
- # zipimport.zipimporter loaders.
- # * Several zipimport.zipimporter loaders held by local variables further
- # up the function call stack when running the setuptools installation.
- # * Already loaded modules may have their __loader__ attribute set to the
- # exact loader instance used when importing them. Python 3.4 docs state
- # that this information is intended mostly for introspection and so is
- # not expected to cause us problems.
- normalized_path = normalize_path(dist_path)
- _uncache(normalized_path, sys.path_importer_cache)
- if fix_zipimporter_caches:
- _replace_zip_directory_cache_data(normalized_path)
- else:
- # Here, even though we do not want to fix existing and now stale
- # zipimporter cache information, we still want to remove it. Related to
- # Python's zip archive directory information cache, we clear each of
- # its stale entries in two phases:
- # 1. Clear the entry so attempting to access zip archive information
- # via any existing stale zipimport.zipimporter instances fails.
- # 2. Remove the entry from the cache so any newly constructed
- # zipimport.zipimporter instances do not end up using old stale
- # zip archive directory information.
- # This whole stale data removal step does not seem strictly necessary,
- # but has been left in because it was done before we started replacing
- # the zip archive directory information cache content if possible, and
- # there are no relevant unit tests that we can depend on to tell us if
- # this is really needed.
- _remove_and_clear_zip_directory_cache_data(normalized_path)
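-# Usage sketch (illustrative): after replacing an installed zipped egg on
-# disk, refresh the import caches so stale finders/loaders are not reused:
-#
-# update_dist_caches('/path/to/replaced.egg', fix_zipimporter_caches=True)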
-
-
-def _collect_zipimporter_cache_entries(normalized_path, cache):
- """
- Return zipimporter cache entry keys related to a given normalized path.
-
- Alternative path spellings (e.g. those using different character case or
- those using alternative path separators) related to the same path are
- included. Any sub-path entries are included as well, i.e. those
- corresponding to zip archives embedded in other zip archives.
-
- """
- result = []
- prefix_len = len(normalized_path)
- for p in cache:
- np = normalize_path(p)
- if (np.startswith(normalized_path) and
- np[prefix_len:prefix_len + 1] in (os.sep, '')):
- result.append(p)
- return result
-
-
-def _update_zipimporter_cache(normalized_path, cache, updater=None):
- """
- Update zipimporter cache data for a given normalized path.
-
- Any sub-path entries are processed as well, i.e. those corresponding to zip
- archives embedded in other zip archives.
-
- Given updater is a callable taking a cache entry key and the original entry
- (after already removing the entry from the cache), and expected to update
- the entry and possibly return a new one to be inserted in its place.
- Returning None indicates that the entry should not be replaced with a new
- one. If no updater is given, the cache entries are simply removed without
- any additional processing, the same as if the updater simply returned None.
-
- """
- for p in _collect_zipimporter_cache_entries(normalized_path, cache):
- # N.B. pypy's custom zipimport._zip_directory_cache implementation does
- # not support the complete dict interface:
- # * Does not support item assignment, thus not allowing this function
- # to be used only for removing existing cache entries.
- # * Does not support the dict.pop() method, forcing us to use the
- # get/del patterns instead. For more detailed information see the
- # following links:
- # https://github.com/pypa/setuptools/issues/202#issuecomment-202913420
- # http://bit.ly/2h9itJX
- old_entry = cache[p]
- del cache[p]
- new_entry = updater and updater(p, old_entry)
- if new_entry is not None:
- cache[p] = new_entry
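-# Example updater (sketch; `log_and_drop` and the call arguments are
-# hypothetical names): an updater receives the cache key and the just-removed
-# entry, and returns the entry to re-insert, or None to leave it removed:
-#
-# def log_and_drop(path, old_entry):
-# log.debug("dropping zipimporter cache entry for %s", path)
-# return None
-#
-# _update_zipimporter_cache(normalized, cache, updater=log_and_drop)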
-
-
-def _uncache(normalized_path, cache):
- _update_zipimporter_cache(normalized_path, cache)
-
-
-def _remove_and_clear_zip_directory_cache_data(normalized_path):
- def clear_and_remove_cached_zip_archive_directory_data(path, old_entry):
- old_entry.clear()
-
- _update_zipimporter_cache(
- normalized_path, zipimport._zip_directory_cache,
- updater=clear_and_remove_cached_zip_archive_directory_data)
-
-
-# PyPy Python implementation does not allow directly writing to the
-# zipimport._zip_directory_cache and so prevents us from attempting to correct
-# its content. The best we can do there is clear the problematic cache content
-# and have PyPy repopulate it as needed. The downside is that if there are any
-# stale zipimport.zipimporter instances laying around, attempting to use them
-# will fail due to not having its zip archive directory information available
-# instead of being automatically corrected to use the new correct zip archive
-# directory information.
-if '__pypy__' in sys.builtin_module_names:
- _replace_zip_directory_cache_data = \
- _remove_and_clear_zip_directory_cache_data
-else:
-
- def _replace_zip_directory_cache_data(normalized_path):
- def replace_cached_zip_archive_directory_data(path, old_entry):
- # N.B. In theory, we could load the zip directory information just
- # once for all updated path spellings, and then copy it locally and
- # update its contained path strings to contain the correct
- # spelling, but that seems like a way too invasive move (this cache
- # structure is not officially documented anywhere and could in
- # theory change with new Python releases) for no significant
- # benefit.
- old_entry.clear()
- zipimport.zipimporter(path)
- old_entry.update(zipimport._zip_directory_cache[path])
- return old_entry
-
- _update_zipimporter_cache(
- normalized_path, zipimport._zip_directory_cache,
- updater=replace_cached_zip_archive_directory_data)
-
-
-def is_python(text, filename=''):
- "Is this string a valid Python script?"
- try:
- compile(text, filename, 'exec')
- except (SyntaxError, TypeError):
- return False
- else:
- return True
-
-
-def is_sh(executable):
- """Determine if the specified executable is a .sh (contains a #! line)"""
- try:
- with io.open(executable, encoding='latin-1') as fp:
- magic = fp.read(2)
- except (OSError, IOError):
- return executable
- return magic == '#!'
-
-
-def nt_quote_arg(arg):
- """Quote a command line argument according to Windows parsing rules"""
- return subprocess.list2cmdline([arg])
-
-
-def is_python_script(script_text, filename):
- """Is this text, as a whole, a Python script? (as opposed to shell/bat/etc.
- """
- if filename.endswith('.py') or filename.endswith('.pyw'):
- return True # extension says it's Python
- if is_python(script_text, filename):
- return True # it's syntactically valid Python
- if script_text.startswith('#!'):
- # It begins with a '#!' line, so check if 'python' is in it somewhere
- return 'python' in script_text.splitlines()[0].lower()
-
- return False # Not any Python I can recognize
-
-
-try:
- from os import chmod as _chmod
-except ImportError:
- # Jython compatibility
- def _chmod(*args):
- pass
-
-
-def chmod(path, mode):
- log.debug("changing mode of %s to %o", path, mode)
- try:
- _chmod(path, mode)
- except os.error as e:
- log.debug("chmod failed: %s", e)
-
-
-class CommandSpec(list):
- """
- A command spec for a #! header, specified as a list of arguments akin to
- those passed to Popen.
- """
-
- options = []
- split_args = dict()
-
- @classmethod
- def best(cls):
- """
- Choose the best CommandSpec class based on environmental conditions.
- """
- return cls
-
- @classmethod
- def _sys_executable(cls):
- _default = os.path.normpath(sys.executable)
- return os.environ.get('__PYVENV_LAUNCHER__', _default)
-
- @classmethod
- def from_param(cls, param):
- """
- Construct a CommandSpec from a parameter to build_scripts, which may
- be None.
- """
- if isinstance(param, cls):
- return param
- if isinstance(param, list):
- return cls(param)
- if param is None:
- return cls.from_environment()
- # otherwise, assume it's a string.
- return cls.from_string(param)
-
- @classmethod
- def from_environment(cls):
- return cls([cls._sys_executable()])
-
- @classmethod
- def from_string(cls, string):
- """
- Construct a command spec from a simple string representing a command
- line parseable by shlex.split.
- """
- items = shlex.split(string, **cls.split_args)
- return cls(items)
-
- def install_options(self, script_text):
- self.options = shlex.split(self._extract_options(script_text))
- cmdline = subprocess.list2cmdline(self)
- if not isascii(cmdline):
- self.options[:0] = ['-x']
-
- @staticmethod
- def _extract_options(orig_script):
- """
- Extract any options from the first line of the script.
- """
- first = (orig_script + '\n').splitlines()[0]
- match = _first_line_re().match(first)
- options = match.group(1) or '' if match else ''
- return options.strip()
-
- def as_header(self):
- return self._render(self + list(self.options))
-
- @staticmethod
- def _strip_quotes(item):
- _QUOTES = '"\''
- for q in _QUOTES:
- if item.startswith(q) and item.endswith(q):
- return item[1:-1]
- return item
-
- @staticmethod
- def _render(items):
- cmdline = subprocess.list2cmdline(
- CommandSpec._strip_quotes(item.strip()) for item in items)
- return '#!' + cmdline + '\n'
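- # Example (sketch): building a #! header from a plain command string:
- #
- # CommandSpec.from_string('/usr/bin/python3 -E').as_header()
- # # -> '#!/usr/bin/python3 -E\n'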
-
-
-# For pbr compat; will be removed in a future version.
-sys_executable = CommandSpec._sys_executable()
-
-
-class WindowsCommandSpec(CommandSpec):
- split_args = dict(posix=False)
-
-
-class ScriptWriter:
- """
- Encapsulates behavior around writing entry point scripts for console and
- gui apps.
- """
-
- template = textwrap.dedent(r"""
- # EASY-INSTALL-ENTRY-SCRIPT: %(spec)r,%(group)r,%(name)r
- import re
- import sys
-
- # for compatibility with easy_install; see #2198
- __requires__ = %(spec)r
-
- try:
- from importlib.metadata import distribution
- except ImportError:
- try:
- from importlib_metadata import distribution
- except ImportError:
- from pkg_resources import load_entry_point
-
-
- def importlib_load_entry_point(spec, group, name):
- dist_name, _, _ = spec.partition('==')
- matches = (
- entry_point
- for entry_point in distribution(dist_name).entry_points
- if entry_point.group == group and entry_point.name == name
- )
- return next(matches).load()
-
-
- globals().setdefault('load_entry_point', importlib_load_entry_point)
-
-
- if __name__ == '__main__':
- sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
- sys.exit(load_entry_point(%(spec)r, %(group)r, %(name)r)())
- """).lstrip()
-
- command_spec_class = CommandSpec
-
- @classmethod
- def get_script_args(cls, dist, executable=None, wininst=False):
- # for backward compatibility
- warnings.warn("Use get_args", EasyInstallDeprecationWarning)
- writer = (WindowsScriptWriter if wininst else ScriptWriter).best()
- header = cls.get_script_header("", executable, wininst)
- return writer.get_args(dist, header)
-
- @classmethod
- def get_script_header(cls, script_text, executable=None, wininst=False):
- # for backward compatibility
- warnings.warn(
- "Use get_header", EasyInstallDeprecationWarning, stacklevel=2)
- if wininst:
- executable = "python.exe"
- return cls.get_header(script_text, executable)
-
- @classmethod
- def get_args(cls, dist, header=None):
- """
- Yield write_script() argument tuples for a distribution's
- console_scripts and gui_scripts entry points.
- """
- if header is None:
- header = cls.get_header()
- spec = str(dist.as_requirement())
- for type_ in 'console', 'gui':
- group = type_ + '_scripts'
- for name, ep in dist.get_entry_map(group).items():
- cls._ensure_safe_name(name)
- script_text = cls.template % locals()
- args = cls._get_script_args(type_, name, header, script_text)
- for res in args:
- yield res
-
- @staticmethod
- def _ensure_safe_name(name):
- """
- Prevent paths in *_scripts entry point names.
- """
- has_path_sep = re.search(r'[\\/]', name)
- if has_path_sep:
- raise ValueError("Path separators not allowed in script names")
-
- @classmethod
- def get_writer(cls, force_windows):
- # for backward compatibility
- warnings.warn("Use best", EasyInstallDeprecationWarning)
- return WindowsScriptWriter.best() if force_windows else cls.best()
-
- @classmethod
- def best(cls):
- """
- Select the best ScriptWriter for this environment.
- """
- if sys.platform == 'win32' or (os.name == 'java' and os._name == 'nt'):
- return WindowsScriptWriter.best()
- else:
- return cls
-
- @classmethod
- def _get_script_args(cls, type_, name, header, script_text):
- # Simply write the stub with no extension.
- yield (name, header + script_text)
-
- @classmethod
- def get_header(cls, script_text="", executable=None):
- """Create a #! line, getting options (if any) from script_text"""
- cmd = cls.command_spec_class.best().from_param(executable)
- cmd.install_options(script_text)
- return cmd.as_header()
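- # Usage sketch: ScriptWriter.best() picks the platform-appropriate writer,
- # and get_args(dist) then yields (filename, contents[, mode[, blockers]])
- # tuples ready to be written out for each console/gui entry point.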
-
-
-class WindowsScriptWriter(ScriptWriter):
- command_spec_class = WindowsCommandSpec
-
- @classmethod
- def get_writer(cls):
- # for backward compatibility
- warnings.warn("Use best", EasyInstallDeprecationWarning)
- return cls.best()
-
- @classmethod
- def best(cls):
- """
- Select the best ScriptWriter suitable for Windows
- """
- writer_lookup = dict(
- executable=WindowsExecutableLauncherWriter,
- natural=cls,
- )
- # for compatibility, use the executable launcher by default
- launcher = os.environ.get('SETUPTOOLS_LAUNCHER', 'executable')
- return writer_lookup[launcher]
-
- @classmethod
- def _get_script_args(cls, type_, name, header, script_text):
- "For Windows, add a .py extension"
- ext = dict(console='.pya', gui='.pyw')[type_]
- if ext not in os.environ['PATHEXT'].lower().split(';'):
- msg = (
- "{ext} not listed in PATHEXT; scripts will not be "
- "recognized as executables."
- ).format(**locals())
- warnings.warn(msg, UserWarning)
- old = ['.pya', '.py', '-script.py', '.pyc', '.pyo', '.pyw', '.exe']
- old.remove(ext)
- header = cls._adjust_header(type_, header)
- blockers = [name + x for x in old]
- yield name + ext, header + script_text, 't', blockers
-
- @classmethod
- def _adjust_header(cls, type_, orig_header):
- """
- Make sure 'pythonw' is used for gui and 'python' is used for
- console (regardless of what sys.executable is).
- """
- pattern = 'pythonw.exe'
- repl = 'python.exe'
- if type_ == 'gui':
- pattern, repl = repl, pattern
- pattern_ob = re.compile(re.escape(pattern), re.IGNORECASE)
- new_header = pattern_ob.sub(string=orig_header, repl=repl)
- return new_header if cls._use_header(new_header) else orig_header
-
- @staticmethod
- def _use_header(new_header):
- """
- Should _adjust_header use the replaced header?
-
- On non-windows systems, always use. On
- Windows systems, only use the replaced header if it resolves
- to an executable on the system.
- """
- clean_header = new_header[2:-1].strip('"')
- return sys.platform != 'win32' or find_executable(clean_header)
-
-
-class WindowsExecutableLauncherWriter(WindowsScriptWriter):
- @classmethod
- def _get_script_args(cls, type_, name, header, script_text):
- """
- For Windows, add a .py extension and an .exe launcher
- """
- if type_ == 'gui':
- launcher_type = 'gui'
- ext = '-script.pyw'
- old = ['.pyw']
- else:
- launcher_type = 'cli'
- ext = '-script.py'
- old = ['.py', '.pyc', '.pyo']
- hdr = cls._adjust_header(type_, header)
- blockers = [name + x for x in old]
- yield (name + ext, hdr + script_text, 't', blockers)
- yield (
- name + '.exe', get_win_launcher(launcher_type),
- 'b' # write in binary mode
- )
- if not is_64bit():
- # install a manifest for the launcher to prevent Windows
- # from detecting it as an installer (which it will for
- # launchers like easy_install.exe). Consider only
- # adding a manifest for launchers detected as installers.
- # See Distribute #143 for details.
- m_name = name + '.exe.manifest'
- yield (m_name, load_launcher_manifest(name), 't')
-
-
-# for backward-compatibility
-get_script_args = ScriptWriter.get_script_args
-get_script_header = ScriptWriter.get_script_header
-
-
-def get_win_launcher(type):
- """
- Load the Windows launcher (executable) suitable for launching a script.
-
- `type` should be either 'cli' or 'gui'
-
- Returns the executable as a byte string.
- """
- launcher_fn = '%s.exe' % type
- if is_64bit():
- if get_platform() == "win-arm64":
- launcher_fn = launcher_fn.replace(".", "-arm64.")
- else:
- launcher_fn = launcher_fn.replace(".", "-64.")
- else:
- launcher_fn = launcher_fn.replace(".", "-32.")
- return resource_string('setuptools', launcher_fn)
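-# For example (sketch): on a 64-bit x86 Python, get_win_launcher('cli') loads
-# 'cli-64.exe' from setuptools' package data; 'gui' maps to 'gui-64.exe', and
-# '-arm64'/'-32' variants are chosen for ARM64 and 32-bit builds.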
-
-
-def load_launcher_manifest(name):
- manifest = pkg_resources.resource_string(__name__, 'launcher manifest.xml')
- return manifest.decode('utf-8') % vars()
-
-
-def rmtree(path, ignore_errors=False, onerror=auto_chmod):
- return shutil.rmtree(path, ignore_errors, onerror)
-
-
-def current_umask():
- tmp = os.umask(0o022)
- os.umask(tmp)
- return tmp
-
-
-def only_strs(values):
- """
- Exclude non-str values. Ref #3063.
- """
- return filter(lambda val: isinstance(val, str), values)
-
-
-class EasyInstallDeprecationWarning(SetuptoolsDeprecationWarning):
- """
- Warning for EasyInstall deprecations, bypassing suppression.
- """
diff --git a/spaces/BreadBytes1/PL-Dashboard/app.py b/spaces/BreadBytes1/PL-Dashboard/app.py
deleted file mode 100644
index 8a6c348d95b43c3fbf09bc657b429790d29bcfb7..0000000000000000000000000000000000000000
--- a/spaces/BreadBytes1/PL-Dashboard/app.py
+++ /dev/null
@@ -1,992 +0,0 @@
-# ---
-# jupyter:
-# jupytext:
-# text_representation:
-# extension: .py
-# format_name: light
-# format_version: '1.5'
-# jupytext_version: 1.14.2
-# kernelspec:
-# display_name: Python [conda env:bbytes] *
-# language: python
-# name: conda-env-bbytes-py
-# ---
-
-# +
-import csv
-import pandas as pd
-from datetime import datetime, timedelta
-import numpy as np
-import datetime as dt
-import matplotlib.pyplot as plt
-from pathlib import Path
-import time
-import plotly.graph_objects as go
-import plotly.io as pio
-from PIL import Image
-
-import streamlit as st
-import plotly.express as px
-import altair as alt
-import dateutil.parser
-from matplotlib.colors import LinearSegmentedColormap
-
-
-# +
-class color:
- PURPLE = '\033[95m'
- CYAN = '\033[96m'
- DARKCYAN = '\033[36m'
- BLUE = '\033[94m'
- GREEN = '\033[92m'
- YELLOW = '\033[93m'
- RED = '\033[91m'
- BOLD = '\033[1m'
- UNDERLINE = '\033[4m'
- END = '\033[0m'
-
-@st.experimental_memo
-def print_PL(amnt, thresh, extras = "" ):
- if amnt > 0:
- return color.BOLD + color.GREEN + str(amnt) + extras + color.END
- elif amnt < 0:
- return color.BOLD + color.RED + str(amnt)+ extras + color.END
- elif np.isnan(amnt):
- return str(np.nan)
- else:
- return str(amnt) + extras
-
-@st.experimental_memo
-def get_headers(logtype):
- otimeheader = ""
- cheader = ""
- plheader = ""
- fmat = '%Y-%m-%d %H:%M:%S'
-
- if logtype == "ByBit":
- otimeheader = 'Create Time'
- cheader = 'Contracts'
- plheader = 'Closed P&L'
- fmat = '%Y-%m-%d %H:%M:%S'
-
- if logtype == "BitGet":
- otimeheader = 'Date'
- cheader = 'Futures'
- plheader = 'Realized P/L'
- fmat = '%Y-%m-%d %H:%M:%S'
-
- if logtype == "MEXC":
- otimeheader = 'Trade time'
- cheader = 'Futures'
- plheader = 'closing position'
- fmat = '%Y/%m/%d %H:%M'
-
- if logtype == "Binance":
- otimeheader = 'Date'
- cheader = 'Symbol'
- plheader = 'Realized Profit'
- fmat = '%Y-%m-%d %H:%M:%S'
-
- #if logtype == "Kucoin":
- # otimeheader = 'Time'
- # cheader = 'Contract'
- # plheader = ''
- # fmat = '%Y/%m/%d %H:%M:%S'
-
-
- if logtype == "Kraken":
- otimeheader = 'time'
- cheader = 'asset'
- plheader = 'amount'
- fmat = '%Y-%m-%d %H:%M:%S.%f'
-
- if logtype == "OkX":
- otimeheader = '\ufeffOrder Time'
- cheader = '\ufeffInstrument'
- plheader = '\ufeffPL'
- fmat = '%Y-%m-%d %H:%M:%S'
-
- return otimeheader.lower(), cheader.lower(), plheader.lower(), fmat
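-# Example (sketch): get_headers("ByBit") returns
-# ('create time', 'contracts', 'closed p&l', '%Y-%m-%d %H:%M:%S'),
-# i.e. the lower-cased column names plus the timestamp format for ByBit logs.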
-
-@st.experimental_memo
-def get_coin_info(df_coin, principal_balance,plheader):
- numtrades = int(len(df_coin))
- numwin = int(sum(df_coin[plheader] > 0))
- numloss = int(sum(df_coin[plheader] < 0))
- winrate = np.round(100*numwin/numtrades,2)
-
- grosswin = sum(df_coin[df_coin[plheader] > 0][plheader])
- grossloss = sum(df_coin[df_coin[plheader] < 0][plheader])
- if grossloss != 0:
- pfactor = -1*np.round(grosswin/grossloss,2)
- else:
- pfactor = np.nan
-
- cum_PL = np.round(sum(df_coin[plheader].values),2)
- cum_PL_perc = np.round(100*cum_PL/principal_balance,2)
- mean_PL = np.round(sum(df_coin[plheader].values/len(df_coin)),2)
- mean_PL_perc = np.round(100*mean_PL/principal_balance,2)
-
- return numtrades, numwin, numloss, winrate, pfactor, cum_PL, cum_PL_perc, mean_PL, mean_PL_perc
-
-@st.experimental_memo
-def get_hist_info(df_coin, principal_balance,plheader):
- numtrades = int(len(df_coin))
- numwin = int(sum(df_coin[plheader] > 0))
- numloss = int(sum(df_coin[plheader] < 0))
- if numtrades != 0:
- winrate = int(np.round(100*numwin/numtrades,2))
- else:
- winrate = np.nan
-
- grosswin = sum(df_coin[df_coin[plheader] > 0][plheader])
- grossloss = sum(df_coin[df_coin[plheader] < 0][plheader])
- if grossloss != 0:
- pfactor = -1*np.round(grosswin/grossloss,2)
- else:
- pfactor = np.nan
- return numtrades, numwin, numloss, winrate, pfactor
-
-@st.experimental_memo
-def get_rolling_stats(df, lev, otimeheader, days):
- max_roll = (df[otimeheader].max() - df[otimeheader].min()).days
-
- if max_roll >= days:
- rollend = df[otimeheader].max()-timedelta(days=days)
- rolling_df = df[df[otimeheader] >= rollend]
-
- if len(rolling_df) > 0:
- rolling_perc = rolling_df['Return Per Trade'].dropna().cumprod().values[-1]-1
- else:
- rolling_perc = np.nan
- else:
- rolling_perc = np.nan
- return 100*rolling_perc
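-# Example (sketch): get_rolling_stats(df, lev, otimeheader, 30) compounds the
-# 'Return Per Trade' column over the trailing 30 days and returns the percent
-# change; it returns NaN when the log spans fewer than 30 days.
-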
-@st.experimental_memo
-def cc_coding(row):
- return ['background-color: lightgrey'] * len(row) if row['Exit Date'] <= datetime.strptime('2022-12-16 00:00:00','%Y-%m-%d %H:%M:%S').date() else [''] * len(row)
-def ctt_coding(row):
- return ['background-color: lightgrey'] * len(row) if row['Exit Date'] <= datetime.strptime('2023-01-02 00:00:00','%Y-%m-%d %H:%M:%S').date() else [''] * len(row)
-def conditional_formatter(value):
- return "${:.2f}".format(value) if not (abs(value) < 1.00) else "${:.4f}".format(value)
-
-@st.experimental_memo
-def my_style(v, props=''):
- props = 'color:red' if v < 0 else 'color:green'
- return props
-
-def filt_df(df, cheader, symbol_selections):
-
- df = df.copy()
- df = df[df[cheader].isin(symbol_selections)]
-
- return df
-
-def tv_reformat(close50filename):
- try:
- data = pd.read_csv(open(close50filename,'r'), sep='[,\t]', engine='python') # comma- or tab-separated
- except Exception:
- data = pd.DataFrame([])
-
- if data.empty:
- return data
- else:
- entry_df = data[data['Type'].str.contains("Entry")]
- exit_df = data[data['Type'].str.contains("Exit")]
-
- entry_df.index = range(len(entry_df))
- exit_df.index = range(len(exit_df))
-
- df = pd.DataFrame([], columns=['Trade','Entry Date','Buy Price', 'Sell Price','Exit Date', 'P/L per token', 'P/L %', 'Drawdown %'])
-
- df['Signal'] = [string.split(' ')[1] for string in entry_df['Type']]
- df['Trade'] = entry_df.index
- df['Entry Date'] = entry_df['Date/Time']
- df['Buy Price'] = entry_df['Price USDT']
-
- df['Sell Price'] = exit_df['Price USDT']
- df['Exit Date'] = exit_df['Date/Time']
- df['P/L per token'] = df['Sell Price'] - df['Buy Price']
- df['P/L %'] = exit_df['Profit %']
- df['Drawdown %'] = exit_df['Drawdown %']
- df['Close 50'] = [int(i == "Close 50% of Position") for i in exit_df['Signal']]
- df = df.sort_values(['Entry Date','Close 50'], ascending = [False, True])
- df.index = range(len(df))
-
- df.loc[df['Close 50'] == 1, 'Exit Date'] = np.copy(df.loc[df[df['Close 50'] == 1].index.values -1]['Exit Date'])
-
- grouped_df = df.groupby('Entry Date').agg({'Signal' : 'first', 'Entry Date': 'min', 'Buy Price':'mean',
- 'Sell Price' : 'mean',
- 'Exit Date': 'max',
- 'P/L per token': 'mean',
- 'P/L %' : 'mean'})
-
- grouped_df.insert(0,'Trade', range(len(grouped_df)))
- grouped_df.index = range(len(grouped_df))
- return grouped_df
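-# In short (sketch): TradingView "Close 50% of Position" rows are tagged,
-# assigned their parent trade's exit date, and rows sharing an Entry Date are
-# then averaged into one trade so partial closes are not double-counted.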
-
-def load_data(filename, otimeheader, fmat):
- df = pd.read_csv(open(filename,'r'), sep='\t') # so as not to mutate cached value
- close50filename = filename.split('.')[0] + '-50.' + filename.split('.')[1]
- df2 = tv_reformat(close50filename)
-
- if filename == "CT-Trade-Log.csv":
- df.columns = ['Trade','Entry Date','Buy Price', 'Sell Price','Exit Date', 'P/L per token', 'P/L %', 'Drawdown %']
- df.insert(1, 'Signal', ['Long']*len(df))
- elif filename == "CC-Trade-Log.csv" or filename == "PB-Trade-Log.csv":
- df.columns = ['Trade','Signal','Entry Date','Buy Price', 'Sell Price','Exit Date', 'P/L per token', 'P/L %', 'Drawdown %']
- else:
- df.columns = ['Trade','Signal','Entry Date','Buy Price', 'Sell Price','Exit Date', 'P/L per token', 'P/L %']
-
- if filename != "CT-Toasted-Trade-Log.csv":
- df['Signal'] = df['Signal'].str.replace(' ', '', regex=True)
- # regex=False: with regex=True, '$' is an end-of-string anchor, not a dollar sign
- df['Buy Price'] = df['Buy Price'].str.replace('$', '', regex=False)
- df['Sell Price'] = df['Sell Price'].str.replace('$', '', regex=False)
- df['Buy Price'] = df['Buy Price'].str.replace(',', '', regex=False)
- df['Sell Price'] = df['Sell Price'].str.replace(',', '', regex=False)
- df['P/L per token'] = df['P/L per token'].str.replace('$', '', regex=False)
- df['P/L per token'] = df['P/L per token'].str.replace(',', '', regex=False)
- df['P/L %'] = df['P/L %'].str.replace('%', '', regex=False)
-
- df['Buy Price'] = pd.to_numeric(df['Buy Price'])
- df['Sell Price'] = pd.to_numeric(df['Sell Price'])
- df['P/L per token'] = pd.to_numeric(df['P/L per token'])
- df['P/L %'] = pd.to_numeric(df['P/L %'])
-
- if df2.empty:
- df = df
- else:
- df = pd.concat([df,df2], axis=0, ignore_index=True)
-
- if filename == "CT-Trade-Log.csv":
- df['Signal'] = ['Long']*len(df)
-
- dateheader = 'Date'
- theader = 'Time'
-
- df[dateheader] = [tradetimes.split(" ")[0] for tradetimes in df[otimeheader].values]
- df[theader] = [tradetimes.split(" ")[1] for tradetimes in df[otimeheader].values]
-
- df[otimeheader]= [dateutil.parser.parse(date+' '+time)
- for date,time in zip(df[dateheader],df[theader])]
- df[otimeheader] = pd.to_datetime(df[otimeheader])
- df['Exit Date'] = pd.to_datetime(df['Exit Date'])
- df.sort_values(by=otimeheader, inplace=True)
-
- df[dateheader] = [dateutil.parser.parse(date).date() for date in df[dateheader]]
- df[theader] = [dateutil.parser.parse(time).time() for time in df[theader]]
- df['Trade'] = df.index + 1 #reindex
-
- if filename == "CT-Trade-Log.csv":
- df['DCA'] = np.nan
-
- for exit in pd.unique(df['Exit Date']):
- df_exit = df[df['Exit Date']==exit]
- if dateutil.parser.parse(str(exit)) < dateutil.parser.parse('2023-02-07 13:00:00'):
- for i in range(len(df_exit)):
- ind = df_exit.index[i]
- df.loc[ind,'DCA'] = i+1
-
- else:
- for i in range(len(df_exit)):
- ind = df_exit.index[i]
- df.loc[ind,'DCA'] = i+1.1
- return df
-
-
-def get_sd_df(sd_df, sd, bot_selections, dca1, dca2, dca3, dca4, dca5, dca6, fees, lev, dollar_cap, principal_balance):
- sd = 2*.00026 # hard-coded two-sigma slippage estimate; overrides the `sd` argument
- signal_map = {'Long': 1, 'Short': -1} # +1 for longs, -1 for shorts (matches runapp)
- # ------ Standard Dev. Calculations.
- if bot_selections == "Cinnamon Toast":
- dca_map = {1: dca1/100, 2: dca2/100, 3: dca3/100, 4: dca4/100, 1.1: dca5/100, 2.1: dca6/100}
- sd_df['DCA %'] = sd_df['DCA'].map(dca_map)
- sd_df['Calculated Return % (+)'] = sd_df['Signal'].map(signal_map)*(sd_df['DCA %'])*(1-fees)*((sd_df['Sell Price']*(1+sd_df['Signal'].map(signal_map)*sd) - sd_df['Buy Price']*(1-sd_df['Signal'].map(signal_map)*sd))/(sd_df['Buy Price']*(1-sd_df['Signal'].map(signal_map)*sd)) - fees) #accounts for fees on open and close of trade
- sd_df['Calculated Return % (-)'] = sd_df['Signal'].map(signal_map)*(sd_df['DCA %'])*(1-fees)*((sd_df['Sell Price']*(1-sd_df['Signal'].map(signal_map)*sd)-sd_df['Buy Price']*(1+sd_df['Signal'].map(signal_map)*sd))/(sd_df['Buy Price']*(1+sd_df['Signal'].map(signal_map)*sd)) - fees) #accounts for fees on open and close of trade
- sd_df['DCA'] = np.floor(sd_df['DCA'].values)
-
- sd_df['Return Per Trade (+)'] = np.nan
- sd_df['Return Per Trade (-)'] = np.nan
- sd_df['Balance used in Trade (+)'] = np.nan
- sd_df['Balance used in Trade (-)'] = np.nan
- sd_df['New Balance (+)'] = np.nan
- sd_df['New Balance (-)'] = np.nan
-
- g1 = sd_df.groupby('Exit Date').sum(numeric_only=True)['Calculated Return % (+)'].reset_index(name='Return Per Trade (+)')
- g2 = sd_df.groupby('Exit Date').sum(numeric_only=True)['Calculated Return % (-)'].reset_index(name='Return Per Trade (-)')
- sd_df.loc[sd_df['DCA']==1.0,'Return Per Trade (+)'] = 1+lev*g1['Return Per Trade (+)'].values
- sd_df.loc[sd_df['DCA']==1.0,'Return Per Trade (-)'] = 1+lev*g2['Return Per Trade (-)'].values
-
- sd_df['Compounded Return (+)'] = sd_df['Return Per Trade (+)'].cumprod()
- sd_df['Compounded Return (-)'] = sd_df['Return Per Trade (-)'].cumprod()
- sd_df.loc[sd_df['DCA']==1.0,'New Balance (+)'] = [min(dollar_cap/lev, bal*principal_balance) for bal in sd_df.loc[sd_df['DCA']==1.0,'Compounded Return (+)']]
- sd_df.loc[sd_df['DCA']==1.0,'Balance used in Trade (+)'] = np.concatenate([[principal_balance], sd_df.loc[sd_df['DCA']==1.0,'New Balance (+)'].values[:-1]])
-
- sd_df.loc[sd_df['DCA']==1.0,'New Balance (-)'] = [min(dollar_cap/lev, bal*principal_balance) for bal in sd_df.loc[sd_df['DCA']==1.0,'Compounded Return (-)']]
- sd_df.loc[sd_df['DCA']==1.0,'Balance used in Trade (-)'] = np.concatenate([[principal_balance], sd_df.loc[sd_df['DCA']==1.0,'New Balance (-)'].values[:-1]])
- else:
- sd_df['Calculated Return % (+)'] = sd_df['Signal'].map(signal_map)*(1-fees)*((sd_df['Sell Price']*(1+sd_df['Signal'].map(signal_map)*sd) - sd_df['Buy Price']*(1-sd_df['Signal'].map(signal_map)*sd))/(sd_df['Buy Price']*(1-sd_df['Signal'].map(signal_map)*sd)) - fees) #accounts for fees on open and close of trade
- sd_df['Calculated Return % (-)'] = sd_df['Signal'].map(signal_map)*(1-fees)*((sd_df['Sell Price']*(1-sd_df['Signal'].map(signal_map)*sd)-sd_df['Buy Price']*(1+sd_df['Signal'].map(signal_map)*sd))/(sd_df['Buy Price']*(1+sd_df['Signal'].map(signal_map)*sd)) - fees) #accounts for fees on open and close of trade
- sd_df['Return Per Trade (+)'] = np.nan
- sd_df['Return Per Trade (-)'] = np.nan
-
- g1 = sd_df.groupby('Exit Date').sum(numeric_only=True)['Calculated Return % (+)'].reset_index(name='Return Per Trade (+)')
- g2 = sd_df.groupby('Exit Date').sum(numeric_only=True)['Calculated Return % (-)'].reset_index(name='Return Per Trade (-)')
- sd_df['Return Per Trade (+)'] = 1+lev*g1['Return Per Trade (+)'].values
- sd_df['Return Per Trade (-)'] = 1+lev*g2['Return Per Trade (-)'].values
-
- sd_df['Compounded Return (+)'] = sd_df['Return Per Trade (+)'].cumprod()
- sd_df['Compounded Return (-)'] = sd_df['Return Per Trade (-)'].cumprod()
- sd_df['New Balance (+)'] = [min(dollar_cap/lev, bal*principal_balance) for bal in sd_df['Compounded Return (+)']]
- sd_df['Balance used in Trade (+)'] = np.concatenate([[principal_balance], sd_df['New Balance (+)'].values[:-1]])
-
- sd_df['New Balance (-)'] = [min(dollar_cap/lev, bal*principal_balance) for bal in sd_df['Compounded Return (-)']]
- sd_df['Balance used in Trade (-)'] = np.concatenate([[principal_balance], sd_df['New Balance (-)'].values[:-1]])
-
- sd_df['Net P/L Per Trade (+)'] = (sd_df['Return Per Trade (+)']-1)*sd_df['Balance used in Trade (+)']
- sd_df['Cumulative P/L (+)'] = sd_df['Net P/L Per Trade (+)'].cumsum()
-
- sd_df['Net P/L Per Trade (-)'] = (sd_df['Return Per Trade (-)']-1)*sd_df['Balance used in Trade (-)']
- sd_df['Cumulative P/L (-)'] = sd_df['Net P/L Per Trade (-)'].cumsum()
- return sd_df
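-# The (+)/(-) columns above trace the compounded balance assuming every fill
-# slipped by the fixed two-sigma offset `sd` in the trader's favor / against
-# them, giving a rough confidence band around the headline equity curve.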
-
-def runapp() -> None:
- #st.header("Trading Bot Dashboard :bread: :moneybag:")
- #st.write("Welcome to the Trading Bot Dashboard by BreadBytes! You can use this dashboard to track " +
- # "the performance of our trading bots, or upload and track your own performance data from a supported exchange.")
- #if 'auth_user' not in st.session_state:
- # with st.form("Login"):
- # user = st.text_input("Username")
- # secret = st.text_input("Password")
-
- # submitted = st.form_submit_button("Submit")
- # if submitted:
- # if user == st.secrets.get("db_username") and secret == st.secrets.get("db_password"):
- # st.success("Success!")
- # st.session_state['auth_user'] = True
- # else:
- # st.success("Incorrect username and/or password. Please try again.")
- # st.session_state['auth_user'] = False
-
- #try:
- # st.session_state['auth_user'] == True
- #except:
- # st.error("Please log in.")
- # return
-
- #if st.session_state['auth_user'] == True:
- if True:
- st.sidebar.header("FAQ")
-
- with st.sidebar.subheader("FAQ"):
- st.markdown(Path("FAQ_README.md").read_text(), unsafe_allow_html=True)
-
- no_errors = True
-
- exchanges = ["ByBit", "BitGet", "Binance","Kraken","MEXC","OkX", "BreadBytes Historical Logs"]
- logtype = st.selectbox("Select your Exchange", options=exchanges)
-
- if logtype != "BreadBytes Historical Logs":
- uploaded_data = st.file_uploader(
- "Drag and Drop files here or click Browse files.", type=[".csv", ".xlsx"], accept_multiple_files=False
- )
- if uploaded_data is None:
- st.info("Please upload a file, or select BreadBytes Historical Logs as your exchange.")
- else:
- st.success("Your file was uploaded successfully!")
-
- uploadtype = uploaded_data.name.split(".")[1]
- if uploadtype == "csv":
- df = pd.read_csv(uploaded_data)
- if uploadtype == "xlsx":
- df = pd.read_excel(uploaded_data)
-
- otimeheader, cheader, plheader, fmat = get_headers(logtype)
-
- df.columns = [c.lower() for c in df.columns]
-
- if not(uploaded_data is None):
- with st.container():
- bot_selections = "Other"
- if bot_selections == "Other":
- try:
- symbols = list(df[cheader].unique())
- symbol_selections = st.multiselect(
- "Select/Deselect Asset(s)", options=symbols, default=symbols
- )
- except:
- st.error("Please select your exchange or upload a supported trade log file.")
- no_errors = False
- if no_errors and not symbol_selections:
- st.error("Please select at least one asset.")
- no_errors = False
-
-
- if no_errors:
- if logtype == 'Binance':
- otimeheader = df.filter(regex=otimeheader).columns.values[0]
- fmat = '%Y-%m-%d %H:%M:%S'
- df = df[df[plheader] != 0]
- #if logtype == "Kucoin":
- # df = df.replace('\r\n','', regex=True)
- with st.container():
- col1, col2 = st.columns(2)
- with col1:
- try:
- startdate = st.date_input("Start Date", value=pd.to_datetime(df[otimeheader]).min())
- except:
- st.error("Please select your exchange or upload a supported trade log file.")
- no_errors = False
- with col2:
- try:
- enddate = st.date_input("End Date", value=pd.to_datetime(df[otimeheader]).max())
- except:
- st.error("Please select your exchange or upload a supported trade log file.")
- no_errors = False
- #st.sidebar.subheader("Customize your Dashboard")
-
- if no_errors and (enddate < startdate):
- st.error("End Date must be later than Start date. Please try again.")
- no_errors = False
- with st.container():
- col1,col2 = st.columns(2)
- with col1:
- principal_balance = st.number_input('Starting Balance', min_value=0.00, value=1000.00, max_value= 1000000.00, step=10.00)
-
- with st.expander("Raw Trade Log"):
- st.write(df)
-
-
- if no_errors:
- df = filt_df(df, cheader, symbol_selections)
-
- if len(df) == 0:
- st.error("There are no available trades matching your selections. Please try again!")
- no_errors = False
-
- if no_errors:
- ## reformating / necessary calculations
- if logtype == 'BitGet':
- try:
- badcol = df.filter(regex='Unnamed').columns.values[0]
- except:
- badcol = []
- df = df[[col for col in df.columns if col != badcol]]
- df = df[df[plheader] != 0]
- if uploadtype == "xlsx":
- fmat = '%Y-%m-%d %H:%M:%S.%f'
- if logtype == 'MEXC':
- df = df[df[plheader] != 0]
- # TODO: collapse on transaction ID, then calculate position prices
- if logtype == "Kraken":
- df = df.replace('\r\n','', regex=True)
- df[otimeheader] = [str(time.split(".")[0]) for time in df[otimeheader].values]
- df = df[df['type']=='margin']
- df[plheader] = df[plheader]-df['fee']
- fmat = '%Y-%m-%d %H:%M:%S'
- if len(df) == 0:
- st.error("File Type Error. Please upload a Ledger history file from Kraken.")
- no_errors = False
-
- if no_errors:
- dateheader = 'Trade Date'
- theader = 'Trade Time'
-
- if type(df[otimeheader].values[0]) != str: #clunky fix to catch non-strings since np.datetime64 unstable
- df[otimeheader] = [str(date) for date in df[otimeheader]]
-
- df[dateheader] = [tradetimes.split(" ")[0] for tradetimes in df[otimeheader].values]
- df[theader] = [tradetimes.split(" ")[1] for tradetimes in df[otimeheader].values]
-
- dfmat = fmat.split(" ")[0]
- tfmat = fmat.split(" ")[1]
-
- df[otimeheader]= [datetime.strptime(date+' '+time,fmat)
- for date,time in zip(df[dateheader],df[theader])]
-
- df[dateheader] = [datetime.strptime(date,dfmat).date() for date in df[dateheader].values]
- df[theader] = [datetime.strptime(time,tfmat).time() for time in df[theader].values]
-
- df[otimeheader] = pd.to_datetime(df[otimeheader])
-
- df.sort_values(by=otimeheader, inplace=True)
- df.index = range(0,len(df))
-
- start = df.iloc[0][dateheader] if (not startdate) else startdate
- stop = df.iloc[len(df)-1][dateheader] if (not enddate) else enddate
-
- df = df[(df[dateheader] >= start) & (df[dateheader] <= stop)]
-
- results_df = pd.DataFrame([], columns = ['Coin', '# of Trades', 'Wins', 'Losses', 'Win Rate',
- 'Profit Factor', 'Cum. P/L', 'Cum. P/L (%)', 'Avg. P/L', 'Avg. P/L (%)'])
-
- for currency in pd.unique(df[cheader]):
- df_coin = df[(df[cheader] == currency) & (df[dateheader] >= start) & (df[dateheader] <= stop)]
- data = get_coin_info(df_coin, principal_balance, plheader)
- results_df.loc[len(results_df)] = list([currency]) + list(i for i in data)
-
- if bot_selections == "Other" and len(pd.unique(df[cheader])) > 1:
- df_dates = df[(df[dateheader] >= start) & (df[dateheader] <= stop)]
- data = get_coin_info(df_dates, principal_balance, plheader)
- results_df.loc[len(results_df)] = list(['Total']) + list(i for i in data)
-
- account_plural = "s" if len(bot_selections) > 1 else ""
- st.subheader(f"Results for your Account{account_plural}")
- totals = results_df[~(results_df['Coin'] == 'Total')].groupby('Coin', as_index=False).sum()
- if len(bot_selections) > 1:
- st.metric(
- "Gains for All Accounts",
- f"${totals['Cum. P/L'].sum():.2f}",
- f"{totals['Cum. P/L (%)'].sum():.2f} %",
- )
-
- max_col = 4
- tot_rows = int(np.ceil(len(totals)/max_col))
-
- for r in np.arange(0,tot_rows):
- #for column, row in zip(st.columns(len(totals)), totals.itertuples()):
- for column, row in zip(st.columns(max_col), totals.iloc[r*max_col:(r+1)*max_col].itertuples()):
- column.metric(
- row.Coin,
- f"${row._7:.2f}",
- f"{row._8:.2f} %",
- )
- st.subheader(f"Historical Performance")
- cmap=LinearSegmentedColormap.from_list('rg',["r", "grey", "g"], N=100)
- df['Cumulative P/L'] = df[plheader].cumsum()
- if logtype == "Binance": #Binance (utc) doesnt show up in st line charts???
- xx = dateheader
- else:
- xx = otimeheader
-
-
- #st.line_chart(data=df, x=xx, y='Cumulative P/L', use_container_width=True)
- # Create figure
- fig = go.Figure()
-
- pyLogo = Image.open("logo.png")
-
- # Add trace
- fig.add_trace(
- go.Scatter(x=df[xx], y=np.round(df['Cumulative P/L'].values,2), line_shape='spline', line = {'smoothing': .2, 'color' : 'rgba(31, 119, 200,.8)'}, name='Cumulative P/L')
- )
-
- fig.add_layout_image(
- dict(
- source=pyLogo,
- xref="paper",
- yref="paper",
- x = 0.05, #dfdata['Exit Date'].astype('int64').min() // 10**9,
- y = .85, #dfdata['Cumulative P/L'].max(),
- sizex= .9, #(dfdata['Exit Date'].astype('int64').max() - dfdata['Exit Date'].astype('int64').min()) // 10**9,
- sizey= .9, #(dfdata['Cumulative P/L'].max() - dfdata['Cumulative P/L'].min()),
- sizing="contain",
- opacity=0.2,
- layer = "below")
- )
-
- #style layout
- fig.update_layout(
- height = 600,
- xaxis=dict(
- title="Exit Date",
- tickmode='array',
- ),
- yaxis=dict(
- title="Cumulative P/L"
- ) )
-
- st.plotly_chart(fig, theme=None, use_container_width=True,height=600)
-
- st.subheader("Summarized Results")
- if df.empty:
- st.error("Oops! None of the data provided matches your selection(s). Please try again.")
- no_errors = False
- else:
- st.dataframe(results_df.style.format({'Win Rate': '{:.2f}%','Profit Factor' : '{:.2f}',
- 'Avg. P/L (%)': '{:.2f}%', 'Cum. P/L (%)': '{:.2f}%',
- 'Cum. P/L': '{:.2f}', 'Avg. P/L': '{:.2f}'})\
- .text_gradient(subset=['Win Rate'],cmap=cmap, vmin = 0, vmax = 100)\
- .text_gradient(subset=['Profit Factor'],cmap=cmap, vmin = 0, vmax = 2), use_container_width=True)
-
- if logtype == "BreadBytes Historical Logs" and no_errors:
-
- bots = ["Cinnamon Toast", "Short Bread", "Cosmic Cupcake", "Pure Bread"]
- bot_selections = st.selectbox("Select your Trading Bot", options=bots)
- otimeheader = 'Exit Date'
- fmat = '%Y-%m-%d %H:%M:%S'
- fees = .075/100
-
- if bot_selections == "Cinnamon Toast":
- lev_cap = 5
- dollar_cap = 1000000000.00
- data = load_data("CT-Trade-Log.csv",otimeheader, fmat)
- if bot_selections == "French Toast":
- lev_cap = 3
- dollar_cap = 10000000000.00
- data = load_data("FT-Trade-Log.csv",otimeheader, fmat)
- if bot_selections == "Short Bread":
- lev_cap = 5
- dollar_cap = 1000000000.00
- data = load_data("SB-Trade-Log.csv",otimeheader, fmat)
- if bot_selections == "Cosmic Cupcake":
- lev_cap = 3
- dollar_cap = 1000000000.00
- data = load_data("CC-Trade-Log.csv",otimeheader, fmat)
- if bot_selections == "Pure Bread":
- lev_cap = 3
- dollar_cap = 1000000000.00
- data = load_data("PB-Trade-Log.csv",otimeheader, fmat)
-
- df = data.copy(deep=True)
-
- dateheader = 'Date'
- theader = 'Time'
-
- st.subheader("Choose your settings:")
- with st.form("user input", ):
- if no_errors:
- with st.container():
- col1, col2 = st.columns(2)
- with col1:
- try:
- startdate = st.date_input("Start Date", value=pd.to_datetime(df[otimeheader]).min())
- except:
- st.error("Please select your exchange or upload a supported trade log file.")
- no_errors = False
- with col2:
- try:
- enddate = st.date_input("End Date", value=datetime.today())
- except:
- st.error("Please select your exchange or upload a supported trade log file.")
- no_errors = False
- #st.sidebar.subheader("Customize your Dashboard")
-
- if no_errors and (enddate < startdate):
- st.error("End Date must be later than Start date. Please try again.")
- no_errors = False
- with st.container():
- col1,col2 = st.columns(2)
- with col2:
- lev = st.number_input('Leverage', min_value=1, value=1, max_value= lev_cap, step=1)
- with col1:
- principal_balance = st.number_input('Starting Balance', min_value=0.00, value=1000.00, max_value= dollar_cap, step=.01)
-
- if bot_selections == "Cinnamon Toast":
- st.write("Choose your DCA setup (for trades before 02/07/2023)")
- with st.container():
- col1, col2, col3, col4 = st.columns(4)
- with col1:
- dca1 = st.number_input('DCA 1 Allocation', min_value=0, value=25, max_value= 100, step=1)
- with col2:
- dca2 = st.number_input('DCA 2 Allocation', min_value=0, value=25, max_value= 100, step=1)
- with col3:
- dca3 = st.number_input('DCA 3 Allocation', min_value=0, value=25, max_value= 100, step=1)
- with col4:
- dca4 = st.number_input('DCA 4 Allocation', min_value=0, value=25, max_value= 100, step=1)
- st.write("Choose your DCA setup (for trades on or after 02/07/2023)")
- with st.container():
- col1, col2 = st.columns(2)
- with col1:
- dca5 = st.number_input('DCA 1 Allocation', min_value=0, value=50, max_value= 100, step=1)
- with col2:
- dca6 = st.number_input('DCA 2 Allocation', min_value=0, value=50, max_value= 100, step=1)
-
- #hack way to get button centered
- c = st.columns(9)
- with c[4]:
- submitted = st.form_submit_button("Get Cookin'!")
-
- if submitted and principal_balance * lev > dollar_cap:
- lev = np.floor(dollar_cap/principal_balance)
- st.error(f"WARNING: (Starting Balance)*(Leverage) exceeds the ${dollar_cap} limit. Using maximum available leverage of {lev}")
-
- if submitted and no_errors:
- df = df[(df[dateheader] >= startdate) & (df[dateheader] <= enddate)]
- signal_map = {'Long': 1, 'Short':-1}
-
-
- if len(df) == 0:
- st.error("There are no available trades matching your selections. Please try again!")
- no_errors = False
-
- if no_errors:
- if bot_selections == "Cinnamon Toast":
- dca_map = {1: dca1/100, 2: dca2/100, 3: dca3/100, 4: dca4/100, 1.1: dca5/100, 2.1: dca6/100}
- df['DCA %'] = df['DCA'].map(dca_map)
- df['Calculated Return %'] = df['Signal'].map(signal_map)*(df['DCA %'])*(1-fees)*((df['Sell Price']-df['Buy Price'])/df['Buy Price'] - fees) #accounts for fees on open and close of trade
- df['DCA'] = np.floor(df['DCA'].values)
-
- df['Return Per Trade'] = np.nan
- df['Balance used in Trade'] = np.nan
- df['New Balance'] = np.nan
-
- g = df.groupby('Exit Date').sum(numeric_only=True)['Calculated Return %'].reset_index(name='Return Per Trade')
- df.loc[df['DCA']==1.0,'Return Per Trade'] = 1+lev*g['Return Per Trade'].values
-
- df['Compounded Return'] = df['Return Per Trade'].cumprod()
- df.loc[df['DCA']==1.0,'New Balance'] = [min(dollar_cap/lev, bal*principal_balance) for bal in df.loc[df['DCA']==1.0,'Compounded Return']]
- df.loc[df['DCA']==1.0,'Balance used in Trade'] = np.concatenate([[principal_balance], df.loc[df['DCA']==1.0,'New Balance'].values[:-1]])
- else:
- df['Calculated Return %'] = df['Signal'].map(signal_map)*(1-fees)*((df['Sell Price']-df['Buy Price'])/df['Buy Price'] - fees) #accounts for fees on open and close of trade
- df['Return Per Trade'] = np.nan
- g = df.groupby('Exit Date').sum(numeric_only=True)['Calculated Return %'].reset_index(name='Return Per Trade')
- df['Return Per Trade'] = 1+lev*g['Return Per Trade'].values
-
- df['Compounded Return'] = df['Return Per Trade'].cumprod()
- df['New Balance'] = [min(dollar_cap/lev, bal*principal_balance) for bal in df['Compounded Return']]
- df['Balance used in Trade'] = np.concatenate([[principal_balance], df['New Balance'].values[:-1]])
- df['Net P/L Per Trade'] = (df['Return Per Trade']-1)*df['Balance used in Trade']
- df['Cumulative P/L'] = df['Net P/L Per Trade'].cumsum()
-
- if bot_selections == "Cinnamon Toast" or bot_selections == "Cosmic Cupcake":
- cum_pl = df.loc[df.drop('Drawdown %', axis=1).dropna().index[-1],'Cumulative P/L'] + principal_balance
- #cum_sdp = sd_df.loc[sd_df.drop('Drawdown %', axis=1).dropna().index[-1],'Cumulative P/L (+)'] + principal_balance
- #cum_sdm = sd_df.loc[sd_df.drop('Drawdown %', axis=1).dropna().index[-1],'Cumulative P/L (-)'] + principal_balance
- else:
- cum_pl = df.loc[df.dropna().index[-1],'Cumulative P/L'] + principal_balance
- #cum_sdp = sd_df.loc[sd_df.dropna().index[-1],'Cumulative P/L (+)'] + principal_balance
- #cum_sdm = sd_df.loc[sd_df.dropna().index[-1],'Cumulative P/L (-)'] + principal_balance
- #sd = 2*.00026
- #sd_df = get_sd_df(get_sd_df(df.copy(), sd, bot_selections, dca1, dca2, dca3, dca4, dca5, dca6, fees, lev, dollar_cap, principal_balance)
-
- effective_return = 100*((cum_pl - principal_balance)/principal_balance)
-
- st.header(f"{bot_selections} Results")
- with st.container():
-
- if len(bot_selections) > 1:
- col1, col2 = st.columns(2)
- with col1:
- st.metric(
- "Total Account Balance",
- f"${cum_pl:.2f}",
- f"{100*(cum_pl-principal_balance)/(principal_balance):.2f} %",
- )
-
-# with col2:
-# st.write("95% of trades should fall within this 2 std. dev. range.")
-# st.metric(
-# "High Range (+ 2 std. dev.)",
-# f"", #${cum_sdp:.2f}
-# f"{100*(cum_sdp-principal_balance)/(principal_balance):.2f} %",
-# )
-# st.metric(
-# "Low Range (- 2 std. dev.)",
-# f"" ,#${cum_sdm:.2f}"
-# f"{100*(cum_sdm-principal_balance)/(principal_balance):.2f} %",
-# )
- if bot_selections == "Cinnamon Toast" or bot_selections == "Cosmic Cupcake" or bot_selections == "Pure Bread":
- #st.line_chart(data=df.drop('Drawdown %', axis=1).dropna(), x='Exit Date', y='Cumulative P/L', use_container_width=True)
- dfdata = df.drop('Drawdown %', axis=1).dropna()
- #sd_df = sd_df.drop('Drawdown %', axis=1).dropna()
- else:
- #st.line_chart(data=df.dropna(), x='Exit Date', y='Cumulative P/L', use_container_width=True)
- dfdata = df.dropna()
- #sd_df = sd_df.dropna()
-
- # Create figure
- fig = go.Figure()
-
- pyLogo = Image.open("logo.png")
-
-# fig.add_traces(go.Scatter(x=sd_df['Exit Date'], y = sd_df['Cumulative P/L (+)'],line_shape='spline',
-# line = dict(smoothing = 1.3, color='rgba(31, 119, 200,0)'), showlegend = False)
-# )
-
-# fig.add_traces(go.Scatter(x=sd_df['Exit Date'], y = sd_df['Cumulative P/L (-)'],
-# line = dict(smoothing = 1.3, color='rgba(31, 119, 200,0)'), line_shape='spline',
-# fill='tonexty',
-# fillcolor = 'rgba(31, 119, 200,.2)', name = '+/- Standard Deviation')
-# )
-
- # Add trace
- fig.add_trace(
- go.Scatter(x=dfdata['Exit Date'], y=np.round(dfdata['Cumulative P/L'].values,2), line_shape='spline',
- line = {'smoothing': 1.0, 'color' : 'rgba(31, 119, 200,.8)'},
- name='Cumulative P/L')
- )
- buyhold = (principal_balance/dfdata['Buy Price'][dfdata.index[0]])*(dfdata['Buy Price']-dfdata['Buy Price'][dfdata.index[0]])
- fig.add_trace(go.Scatter(x=dfdata['Exit Date'], y=np.round(buyhold.values,2), line_shape='spline',
- line = {'smoothing': 1.0, 'color' :'red'}, name = 'Buy & Hold Return')
- )
-
- fig.add_layout_image(
- dict(
- source=pyLogo,
- xref="paper",
- yref="paper",
- x = 0.05, #dfdata['Exit Date'].astype('int64').min() // 10**9,
- y = .85, #dfdata['Cumulative P/L'].max(),
- sizex= .9, #(dfdata['Exit Date'].astype('int64').max() - dfdata['Exit Date'].astype('int64').min()) // 10**9,
- sizey= .9, #(dfdata['Cumulative P/L'].max() - dfdata['Cumulative P/L'].min()),
- sizing="contain",
- opacity=0.2,
- layer = "below")
- )
-
- #style layout
- fig.update_layout(
- height = 600,
- xaxis=dict(
- title="Exit Date",
- tickmode='array',
- ),
- yaxis=dict(
- title="Cumulative P/L"
- ) )
-
- st.plotly_chart(fig, theme=None, use_container_width=True,height=600)
- st.write()
- df['Per Trade Return Rate'] = df['Return Per Trade']-1
-
- totals = pd.DataFrame([], columns = ['# of Trades', 'Wins', 'Losses', 'Win Rate', 'Profit Factor'])
- if bot_selections == "Cinnamon Toast" or bot_selections == "Cosmic Cupcake" or bot_selections == "Pure Bread":
- data = get_hist_info(df.drop('Drawdown %', axis=1).dropna(), principal_balance,'Per Trade Return Rate')
- else:
- data = get_hist_info(df.dropna(), principal_balance,'Per Trade Return Rate')
-            totals.loc[len(totals)] = list(data)
-
- totals['Cum. P/L'] = cum_pl-principal_balance
- totals['Cum. P/L (%)'] = 100*(cum_pl-principal_balance)/principal_balance
-
- if df.empty:
- st.error("Oops! None of the data provided matches your selection(s). Please try again.")
- else:
- with st.container():
- for row in totals.itertuples():
- col1, col2, col3, col4= st.columns(4)
- c1, c2, c3, c4 = st.columns(4)
- with col1:
- st.metric(
- "Total Trades",
- f"{row._1:.0f}",
- )
- with c1:
- st.metric(
- "Profit Factor",
- f"{row._5:.2f}",
- )
- with col2:
- st.metric(
- "Wins",
- f"{row.Wins:.0f}",
- )
- with c2:
- st.metric(
- "Cumulative P/L",
- f"${row._6:.2f}",
- f"{row._7:.2f} %",
- )
- with col3:
- st.metric(
- "Losses",
- f"{row.Losses:.0f}",
- )
- with c3:
- st.metric(
- "Rolling 7 Days",
- "",#f"{(1+get_rolling_stats(df,otimeheader, 30))*principal_balance:.2f}",
- f"{get_rolling_stats(df,lev, otimeheader, 7):.2f}%",
- )
- st.metric(
- "Rolling 30 Days",
- "",#f"{(1+get_rolling_stats(df,otimeheader, 30))*principal_balance:.2f}",
- f"{get_rolling_stats(df,lev, otimeheader, 30):.2f}%",
- )
-
- with col4:
- st.metric(
- "Win Rate",
- f"{row._4:.1f}%",
- )
- with c4:
- st.metric(
- "Rolling 90 Days",
- "",#f"{(1+get_rolling_stats(df,otimeheader, 30))*principal_balance:.2f}",
- f"{get_rolling_stats(df,lev, otimeheader, 90):.2f}%",
- )
- st.metric(
- "Rolling 180 Days",
- "",#f"{(1+get_rolling_stats(df,otimeheader, 30))*principal_balance:.2f}",
- f"{get_rolling_stats(df,lev, otimeheader, 180):.2f}%",
- )
-
- if bot_selections == "Cinnamon Toast" and no_errors:
- if submitted:
- grouped_df = df.groupby('Exit Date').agg({'Signal':'min','Entry Date': 'min','Exit Date': 'max','Buy Price': 'mean',
- 'Sell Price' : 'max',
- 'Net P/L Per Trade': 'mean',
- 'Calculated Return %' : lambda x: np.round(100*lev*x.sum(),2),
- 'DCA': lambda x: int(np.floor(x.max()))})
- grouped_df.index = range(1, len(grouped_df)+1)
- grouped_df.rename(columns={'DCA' : '# of DCAs', 'Buy Price':'Avg. Buy Price',
- 'Net P/L Per Trade':'Net P/L',
- 'Calculated Return %':'P/L %'}, inplace=True)
- else:
- dca_map = {1: 25/100, 2: 25/100, 3: 25/100, 4: 25/100, 1.1: 50/100, 2.1: 50/100}
- df['DCA %'] = df['DCA'].map(dca_map)
- df['Calculated Return %'] = (df['DCA %'])*(1-fees)*((df['Sell Price']-df['Buy Price'])/df['Buy Price'] - fees) #accounts for fees on open and close of trade
-
- grouped_df = df.groupby('Exit Date').agg({'Signal':'min','Entry Date': 'min','Exit Date': 'max','Buy Price': 'mean',
- 'Sell Price' : 'max',
- 'P/L per token': 'mean',
- 'Calculated Return %' : lambda x: np.round(100*x.sum(),2),
- 'DCA': lambda x: int(np.floor(x.max()))})
- grouped_df.index = range(1, len(grouped_df)+1)
- grouped_df.rename(columns={'DCA' : '# of DCAs', 'Buy Price':'Avg. Buy Price',
- 'Calculated Return %':'P/L %',
- 'P/L per token':'Net P/L'}, inplace=True)
-
- else:
- if submitted and not(df.empty):
- grouped_df = df.groupby('Exit Date').agg({'Signal':'min','Entry Date': 'min','Exit Date': 'max','Buy Price': 'mean',
- 'Sell Price' : 'max',
- 'Net P/L Per Trade': 'mean',
- 'Calculated Return %' : lambda x: np.round(100*lev*x.sum(),2)})
- grouped_df.index = range(1, len(grouped_df)+1)
- grouped_df.rename(columns={'Buy Price':'Avg. Buy Price',
- 'Net P/L Per Trade':'Net P/L',
- 'Calculated Return %':'P/L %'}, inplace=True)
- else:
- grouped_df = df.groupby('Exit Date').agg({'Signal':'min','Entry Date': 'min','Exit Date': 'max','Buy Price': 'mean',
- 'Sell Price' : 'max',
- 'P/L per token': 'mean',
- 'P/L %':'mean'})
- grouped_df.index = range(1, len(grouped_df)+1)
- grouped_df.rename(columns={'Buy Price':'Avg. Buy Price',
- 'P/L per token':'Net P/L'}, inplace=True)
- st.subheader("Trade Logs")
- grouped_df['Entry Date'] = pd.to_datetime(grouped_df['Entry Date'])
- grouped_df['Exit Date'] = pd.to_datetime(grouped_df['Exit Date'])
- if bot_selections == "Cosmic Cupcake" or bot_selections == "CT Toasted":
- coding = cc_coding if bot_selections == "Cosmic Cupcake" else ctt_coding
- st.dataframe(grouped_df.style.format({'Entry Date':'{:%m-%d-%Y %H:%M:%S}','Exit Date':'{:%m-%d-%Y %H:%M:%S}','Avg. Buy Price': '${:.2f}', 'Sell Price': '${:.2f}', 'Net P/L':'${:.2f}', 'P/L %':'{:.2f}%'})\
- .apply(coding, axis=1)\
- .applymap(my_style,subset=['Net P/L'])\
- .applymap(my_style,subset=['P/L %']), use_container_width=True)
-                    new_title = '<p>Not Live Traded</p>'
- st.markdown(new_title, unsafe_allow_html=True)
- elif bot_selections == "Pure Bread":
- st.dataframe(grouped_df.style.format({'Entry Date':'{:%m-%d-%Y %H:%M:%S}','Exit Date':'{:%m-%d-%Y %H:%M:%S}','Avg. Buy Price': '${:.4f}', 'Sell Price': '${:.4f}', 'Net P/L': conditional_formatter, 'P/L %':'{:.2f}%'})\
- .applymap(my_style,subset=['Net P/L'])\
- .applymap(my_style,subset=['P/L %']), use_container_width=True)
- else:
- st.dataframe(grouped_df.style.format({'Entry Date':'{:%m-%d-%Y %H:%M:%S}','Exit Date':'{:%m-%d-%Y %H:%M:%S}','Avg. Buy Price': '${:.2f}', 'Sell Price': '${:.2f}', 'Net P/L':'${:.2f}', 'P/L %':'{:.2f}%'})\
- .applymap(my_style,subset=['Net P/L'])\
- .applymap(my_style,subset=['P/L %']), use_container_width=True)
-
-# st.subheader("Checking Status")
-# if submitted:
-# st.dataframe(sd_df)
-
-if __name__ == "__main__":
- st.set_page_config(
- "Trading Bot Dashboard",
- layout="wide",
- )
- runapp()
-# -
-
-
-
-
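As an aside on the P/L math used throughout the deleted dashboard above: each trade's return is the signed, fee-adjusted price move, summed per exit date, leveraged, and compounded into a balance curve. A minimal sketch of that pipeline on invented trades (the fee rate, leverage, and starting balance below are placeholders, not the app's real values):

```python
import numpy as np
import pandas as pd

# Toy trade log; the column names mirror the dashboard's, the values are invented.
trades = pd.DataFrame({
    'Exit Date':  ['2023-01-02', '2023-01-05', '2023-01-09'],
    'Signal':     ['Long', 'Short', 'Long'],
    'Buy Price':  [100.0, 110.0, 105.0],
    'Sell Price': [110.0, 104.0, 103.0],
})

fees = 0.00075            # assumed per-side fee rate
lev = 2                   # leverage multiplier
principal_balance = 1000.00

signal_map = {'Long': 1, 'Short': -1}
# Fees are charged on open and close: scale the move by (1 - fees) and
# subtract another fee from the raw price change, as the app does.
trades['Calculated Return %'] = (
    trades['Signal'].map(signal_map)
    * (1 - fees)
    * ((trades['Sell Price'] - trades['Buy Price']) / trades['Buy Price'] - fees)
)

# Collapse to one return per exit date, apply leverage, compound into a balance.
per_trade = 1 + lev * trades.groupby('Exit Date')['Calculated Return %'].sum()
balance = principal_balance * per_trade.cumprod()
print(balance)
```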
diff --git a/spaces/BwayKC/darkstorm2150-Protogen_v2.2_Official_Release/README.md b/spaces/BwayKC/darkstorm2150-Protogen_v2.2_Official_Release/README.md
deleted file mode 100644
index 25c9b91b661817a6bad5b1adbc2873b9c94f9a22..0000000000000000000000000000000000000000
--- a/spaces/BwayKC/darkstorm2150-Protogen_v2.2_Official_Release/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Darkstorm2150-Protogen V2.2 Official Release
-emoji: 💻
-colorFrom: red
-colorTo: gray
-sdk: gradio
-sdk_version: 3.16.0
-app_file: app.py
-pinned: false
-license: openrail
-duplicated_from: jroust/darkstorm2150-Protogen_v2.2_Official_Release
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CAMP-ViL/Xplainer/article.md b/spaces/CAMP-ViL/Xplainer/article.md
deleted file mode 100644
index ba13f417b290f7079f811b8ef491c8f530555ed1..0000000000000000000000000000000000000000
--- a/spaces/CAMP-ViL/Xplainer/article.md
+++ /dev/null
@@ -1,31 +0,0 @@
-
-We propose a new approach to explainability for zero-shot diagnosis prediction in the clinical domain. Instead of directly predicting a diagnosis, we prompt the model to classify the existence of descriptive observations that a radiologist would look for on an X-ray scan, and we use the descriptor probabilities to estimate the likelihood of a diagnosis, making our model explainable by design. For this, we leverage BioVil, a pretrained CLIP model for X-rays, and apply contrastive observation-based prompting. We evaluate Xplainer on two chest X-ray
-datasets, CheXpert and ChestX-ray14, and demonstrate its effectiveness
-in improving the performance and explainability of zero-shot diagnosis.
-**Authors**: [Chantal Pellegrini][cp], [Matthias Keicher][mk], [Ege Özsoy][eo], [Petra Jiraskova][pj], [Rickmer Braren][rb], [Nassir Navab][nn]
-
-[cp]:https://www.cs.cit.tum.de/camp/members/chantal-pellegrini/
-[eo]:https://www.cs.cit.tum.de/camp/members/ege-oezsoy/
-[mk]:https://www.cs.cit.tum.de/camp/members/matthias-keicher/
-[pj]:https://campus.tum.de/tumonline/ee/ui/ca2/app/desktop/#/pl/ui/$ctx/visitenkarte.show_vcard?$ctx=design=ca2;header=max;lang=de&pPersonenGruppe=3&pPersonenId=46F3A857F258DEE6
-[rb]:https://radiologie.mri.tum.de/de/person/prof-dr-rickmer-f-braren
-[nn]:https://www.cs.cit.tum.de/camp/members/cv-nassir-navab/nassir-navab/
-
-**License**: MIT
-
-**Where to send questions or comments about the model**: Open an issue on [`Xplainer`](https://github.com/ChantalMP/Xplainer) repo.
-
-**Intended Use**: This model is intended to be used solely for (I) future research on visual-language processing and (II) reproducibility of the experimental results reported in the reference paper.
-
-**Primary intended uses/users**: Vision-Language and CAD researchers
-
-
-## Citation
-```bib
-@article{pellegrini2023xplainer,
- title={Xplainer: From X-Ray Observations to Explainable Zero-Shot Diagnosis},
- author={Pellegrini, Chantal and Keicher, Matthias and {\"O}zsoy, Ege and Jiraskova, Petra and Braren, Rickmer and Navab, Nassir},
- journal={arXiv preprint arXiv:2303.13391},
- year={2023}
-}
-```
\ No newline at end of file
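To make the contrastive observation prompting described above concrete, here is an illustrative sketch rather than the Xplainer code: `encode_image` and `encode_text` are stand-ins for a CLIP-style model such as BioVil, and the geometric-mean aggregation of descriptor probabilities is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_image(image):
    """Stand-in for a CLIP-style image encoder (e.g. BioVil); returns an embedding."""
    return rng.normal(size=128)

def encode_text(prompt):
    """Stand-in for the matching text encoder."""
    return rng.normal(size=128)

def descriptor_probability(img_emb, descriptor):
    """Contrastive observation prompting: score the image against a positive
    and a negated prompt, then softmax over the two similarities."""
    pos = encode_text(f"There is {descriptor}.")
    neg = encode_text(f"There is no {descriptor}.")
    sims = np.array([img_emb @ pos, img_emb @ neg])
    exp = np.exp(sims - sims.max())
    return float(exp[0] / exp.sum())  # P(descriptor is present)

def diagnosis_likelihood(image, descriptors):
    """Aggregate descriptor probabilities into a diagnosis score (assumed rule)."""
    img_emb = encode_image(image)
    probs = [descriptor_probability(img_emb, d) for d in descriptors]
    return float(np.exp(np.mean(np.log(probs)))), probs

score, probs = diagnosis_likelihood(
    "chest_xray.png", ["a pleural effusion", "blunting of the costophrenic angle"])
print(score, probs)
```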
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/README.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/README.md
deleted file mode 100644
index 404b49b7ce647a3d9e612af373cbb0f66aed79da..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/README.md
+++ /dev/null
@@ -1,56 +0,0 @@
-
-
-Detectron2 is Facebook AI Research's next generation software system
-that implements state-of-the-art object detection algorithms.
-It is a ground-up rewrite of the previous version,
-[Detectron](https://github.com/facebookresearch/Detectron/),
-and it originates from [maskrcnn-benchmark](https://github.com/facebookresearch/maskrcnn-benchmark/).
-
-
-
-
-
-### What's New
-* It is powered by the [PyTorch](https://pytorch.org) deep learning framework.
-* Includes more features such as panoptic segmentation, densepose, Cascade R-CNN, rotated bounding boxes, etc.
-* Can be used as a library to support [different projects](projects/) on top of it.
- We'll open source more research projects in this way.
-* It [trains much faster](https://detectron2.readthedocs.io/notes/benchmarks.html).
-
-See our [blog post](https://ai.facebook.com/blog/-detectron2-a-pytorch-based-modular-object-detection-library-/)
-to see more demos and learn about detectron2.
-
-## Installation
-
-See [INSTALL.md](INSTALL.md).
-
-## Quick Start
-
-See [GETTING_STARTED.md](GETTING_STARTED.md),
-or the [Colab Notebook](https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5).
-
-Learn more at our [documentation](https://detectron2.readthedocs.org).
-And see [projects/](projects/) for some projects that are built on top of detectron2.
-
-## Model Zoo and Baselines
-
-We provide a large set of baseline results and trained models available for download in the [Detectron2 Model Zoo](MODEL_ZOO.md).
-
-
-## License
-
-Detectron2 is released under the [Apache 2.0 license](LICENSE).
-
-## Citing Detectron2
-
-If you use Detectron2 in your research or wish to refer to the baseline results published in the [Model Zoo](MODEL_ZOO.md), please use the following BibTeX entry.
-
-```BibTeX
-@misc{wu2019detectron2,
- author = {Yuxin Wu and Alexander Kirillov and Francisco Massa and
- Wan-Yen Lo and Ross Girshick},
- title = {Detectron2},
- howpublished = {\url{https://github.com/facebookresearch/detectron2}},
- year = {2019}
-}
-```
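This vendored README only links out for usage, so for orientation, the standard Detectron2 getting-started pattern looks roughly like the following; the config names and `model_zoo` helpers are taken from the upstream tutorials and may differ slightly in the old copy bundled here.

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
# Pull a Mask R-CNN baseline and its weights from the model zoo.
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # keep detections above 50% confidence

predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("input.jpg"))  # Detectron2 expects a BGR array
print(outputs["instances"].pred_classes)
print(outputs["instances"].pred_boxes)
```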
diff --git a/spaces/CVPR/Text2Human/Text2Human/models/archs/unet_arch.py b/spaces/CVPR/Text2Human/Text2Human/models/archs/unet_arch.py
deleted file mode 100644
index b110d6938a0a1565e07518bb98a04eb608fc3f14..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Text2Human/Text2Human/models/archs/unet_arch.py
+++ /dev/null
@@ -1,693 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.utils.checkpoint as cp
-from mmcv.cnn import (UPSAMPLE_LAYERS, ConvModule, build_activation_layer,
- build_norm_layer, build_upsample_layer, constant_init,
- kaiming_init)
-from mmcv.runner import load_checkpoint
-from mmcv.utils.parrots_wrapper import _BatchNorm
-from mmseg.utils import get_root_logger
-
-
-class UpConvBlock(nn.Module):
- """Upsample convolution block in decoder for UNet.
-
- This upsample convolution block consists of one upsample module
- followed by one convolution block. The upsample module expands the
- high-level low-resolution feature map and the convolution block fuses
- the upsampled high-level low-resolution feature map and the low-level
- high-resolution feature map from encoder.
-
- Args:
- conv_block (nn.Sequential): Sequential of convolutional layers.
-        in_channels (int): Number of input channels of the high-level
-            low-resolution feature map from decoder.
-        skip_channels (int): Number of input channels of the low-level
-            high-resolution feature map from encoder.
- out_channels (int): Number of output channels.
- num_convs (int): Number of convolutional layers in the conv_block.
- Default: 2.
- stride (int): Stride of convolutional layer in conv_block. Default: 1.
- dilation (int): Dilation rate of convolutional layer in conv_block.
- Default: 1.
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some
- memory while slowing down the training speed. Default: False.
- conv_cfg (dict | None): Config dict for convolution layer.
- Default: None.
- norm_cfg (dict | None): Config dict for normalization layer.
- Default: dict(type='BN').
- act_cfg (dict | None): Config dict for activation layer in ConvModule.
- Default: dict(type='ReLU').
- upsample_cfg (dict): The upsample config of the upsample module in
-            decoder. Default: dict(type='InterpConv'). If the size of the
-            high-level feature map is the same as that of the skip feature map
-            (low-level feature map from encoder), the high-level feature map
-            does not need to be upsampled and upsample_cfg is None.
-        dcn (bool): Use deformable convolution in convolutional layer or not.
- Default: None.
- plugins (dict): plugins for convolutional layers. Default: None.
- """
-
- def __init__(self,
- conv_block,
- in_channels,
- skip_channels,
- out_channels,
- num_convs=2,
- stride=1,
- dilation=1,
- with_cp=False,
- conv_cfg=None,
- norm_cfg=dict(type='BN'),
- act_cfg=dict(type='ReLU'),
- upsample_cfg=dict(type='InterpConv'),
- dcn=None,
- plugins=None):
- super(UpConvBlock, self).__init__()
- assert dcn is None, 'Not implemented yet.'
- assert plugins is None, 'Not implemented yet.'
-
- self.conv_block = conv_block(
- in_channels=2 * skip_channels,
- out_channels=out_channels,
- num_convs=num_convs,
- stride=stride,
- dilation=dilation,
- with_cp=with_cp,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- dcn=None,
- plugins=None)
- if upsample_cfg is not None:
- self.upsample = build_upsample_layer(
- cfg=upsample_cfg,
- in_channels=in_channels,
- out_channels=skip_channels,
- with_cp=with_cp,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg)
- else:
- self.upsample = ConvModule(
- in_channels,
- skip_channels,
- kernel_size=1,
- stride=1,
- padding=0,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg)
-
- def forward(self, skip, x):
- """Forward function."""
-
- x = self.upsample(x)
- out = torch.cat([skip, x], dim=1)
- out = self.conv_block(out)
-
- return out
-
-
-class BasicConvBlock(nn.Module):
- """Basic convolutional block for UNet.
-
- This module consists of several plain convolutional layers.
-
- Args:
- in_channels (int): Number of input channels.
- out_channels (int): Number of output channels.
- num_convs (int): Number of convolutional layers. Default: 2.
-        stride (int): Whether to use stride convolution to downsample
-            the input feature map. If stride=2, only the first convolutional
-            layer uses stride convolution to downsample the input feature
-            map. Options are 1 or 2. Default: 1.
-        dilation (int): Whether to use dilated convolution to expand the
-            receptive field. Sets the dilation rate of each convolutional
-            layer; the dilation rate of the first convolutional layer is
-            always 1. Default: 1.
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some
- memory while slowing down the training speed. Default: False.
- conv_cfg (dict | None): Config dict for convolution layer.
- Default: None.
- norm_cfg (dict | None): Config dict for normalization layer.
- Default: dict(type='BN').
- act_cfg (dict | None): Config dict for activation layer in ConvModule.
- Default: dict(type='ReLU').
-        dcn (bool): Use deformable convolution in convolutional layer or not.
- Default: None.
- plugins (dict): plugins for convolutional layers. Default: None.
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- num_convs=2,
- stride=1,
- dilation=1,
- with_cp=False,
- conv_cfg=None,
- norm_cfg=dict(type='BN'),
- act_cfg=dict(type='ReLU'),
- dcn=None,
- plugins=None):
- super(BasicConvBlock, self).__init__()
- assert dcn is None, 'Not implemented yet.'
- assert plugins is None, 'Not implemented yet.'
-
- self.with_cp = with_cp
- convs = []
- for i in range(num_convs):
- convs.append(
- ConvModule(
- in_channels=in_channels if i == 0 else out_channels,
- out_channels=out_channels,
- kernel_size=3,
- stride=stride if i == 0 else 1,
- dilation=1 if i == 0 else dilation,
- padding=1 if i == 0 else dilation,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg))
-
- self.convs = nn.Sequential(*convs)
-
- def forward(self, x):
- """Forward function."""
-
- if self.with_cp and x.requires_grad:
- out = cp.checkpoint(self.convs, x)
- else:
- out = self.convs(x)
- return out
-
-
-class DeconvModule(nn.Module):
- """Deconvolution upsample module in decoder for UNet (2X upsample).
-
- This module uses deconvolution to upsample feature map in the decoder
- of UNet.
-
- Args:
- in_channels (int): Number of input channels.
- out_channels (int): Number of output channels.
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some
- memory while slowing down the training speed. Default: False.
- norm_cfg (dict | None): Config dict for normalization layer.
- Default: dict(type='BN').
- act_cfg (dict | None): Config dict for activation layer in ConvModule.
- Default: dict(type='ReLU').
- kernel_size (int): Kernel size of the convolutional layer. Default: 4.
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- with_cp=False,
- norm_cfg=dict(type='BN'),
- act_cfg=dict(type='ReLU'),
- *,
- kernel_size=4,
- scale_factor=2):
- super(DeconvModule, self).__init__()
-
- assert (kernel_size - scale_factor >= 0) and\
- (kernel_size - scale_factor) % 2 == 0,\
- f'kernel_size should be greater than or equal to scale_factor '\
- f'and (kernel_size - scale_factor) should be even numbers, '\
- f'while the kernel size is {kernel_size} and scale_factor is '\
- f'{scale_factor}.'
-
- stride = scale_factor
- padding = (kernel_size - scale_factor) // 2
- self.with_cp = with_cp
- deconv = nn.ConvTranspose2d(
- in_channels,
- out_channels,
- kernel_size=kernel_size,
- stride=stride,
- padding=padding)
-
- norm_name, norm = build_norm_layer(norm_cfg, out_channels)
- activate = build_activation_layer(act_cfg)
-        self.deconv_upsampling = nn.Sequential(deconv, norm, activate)
-
- def forward(self, x):
- """Forward function."""
-
- if self.with_cp and x.requires_grad:
-            out = cp.checkpoint(self.deconv_upsampling, x)
-        else:
-            out = self.deconv_upsampling(x)
- return out
-
-
-@UPSAMPLE_LAYERS.register_module()
-class InterpConv(nn.Module):
- """Interpolation upsample module in decoder for UNet.
-
- This module uses interpolation to upsample feature map in the decoder
- of UNet. It consists of one interpolation upsample layer and one
- convolutional layer. It can be one interpolation upsample layer followed
- by one convolutional layer (conv_first=False) or one convolutional layer
- followed by one interpolation upsample layer (conv_first=True).
-
- Args:
- in_channels (int): Number of input channels.
- out_channels (int): Number of output channels.
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some
- memory while slowing down the training speed. Default: False.
- norm_cfg (dict | None): Config dict for normalization layer.
- Default: dict(type='BN').
- act_cfg (dict | None): Config dict for activation layer in ConvModule.
- Default: dict(type='ReLU').
- conv_cfg (dict | None): Config dict for convolution layer.
- Default: None.
- conv_first (bool): Whether convolutional layer or interpolation
- upsample layer first. Default: False. It means interpolation
- upsample layer followed by one convolutional layer.
- kernel_size (int): Kernel size of the convolutional layer. Default: 1.
- stride (int): Stride of the convolutional layer. Default: 1.
-        padding (int): Padding of the convolutional layer. Default: 0.
-        upsample_cfg (dict): Interpolation config of the upsample layer.
- Default: dict(
- scale_factor=2, mode='bilinear', align_corners=False).
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- with_cp=False,
- norm_cfg=dict(type='BN'),
- act_cfg=dict(type='ReLU'),
- *,
- conv_cfg=None,
- conv_first=False,
- kernel_size=1,
- stride=1,
- padding=0,
-                 upsample_cfg=dict(
- scale_factor=2, mode='bilinear', align_corners=False)):
- super(InterpConv, self).__init__()
-
- self.with_cp = with_cp
- conv = ConvModule(
- in_channels,
- out_channels,
- kernel_size=kernel_size,
- stride=stride,
- padding=padding,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg)
-        upsample = nn.Upsample(**upsample_cfg)
- if conv_first:
- self.interp_upsample = nn.Sequential(conv, upsample)
- else:
- self.interp_upsample = nn.Sequential(upsample, conv)
-
- def forward(self, x):
- """Forward function."""
-
- if self.with_cp and x.requires_grad:
- out = cp.checkpoint(self.interp_upsample, x)
- else:
- out = self.interp_upsample(x)
- return out
-
-
-class UNet(nn.Module):
- """UNet backbone.
- U-Net: Convolutional Networks for Biomedical Image Segmentation.
- https://arxiv.org/pdf/1505.04597.pdf
-
- Args:
-        in_channels (int): Number of input image channels. Default: 3.
- base_channels (int): Number of base channels of each stage.
- The output channels of the first stage. Default: 64.
- num_stages (int): Number of stages in encoder, normally 5. Default: 5.
- strides (Sequence[int 1 | 2]): Strides of each stage in encoder.
- len(strides) is equal to num_stages. Normally the stride of the
- first stage in encoder is 1. If strides[i]=2, it uses stride
-            convolution to downsample in the corresponding encoder stage.
-            Default: (1, 1, 1, 1, 1).
-        enc_num_convs (Sequence[int]): Number of convolutional layers in the
-            convolution block of the corresponding encoder stage.
-            Default: (2, 2, 2, 2, 2).
-        dec_num_convs (Sequence[int]): Number of convolutional layers in the
-            convolution block of the corresponding decoder stage.
-            Default: (2, 2, 2, 2).
-        downsamples (Sequence[int]): Whether to use MaxPool to downsample the
-            feature map after the first stage of the encoder
-            (stages: [1, num_stages)). If the corresponding encoder stage uses
-            stride convolution (strides[i]=2), it will never use MaxPool to
-            downsample, even if downsamples[i-1]=True.
-            Default: (True, True, True, True).
- enc_dilations (Sequence[int]): Dilation rate of each stage in encoder.
- Default: (1, 1, 1, 1, 1).
- dec_dilations (Sequence[int]): Dilation rate of each stage in decoder.
- Default: (1, 1, 1, 1).
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some
- memory while slowing down the training speed. Default: False.
- conv_cfg (dict | None): Config dict for convolution layer.
- Default: None.
- norm_cfg (dict | None): Config dict for normalization layer.
- Default: dict(type='BN').
- act_cfg (dict | None): Config dict for activation layer in ConvModule.
- Default: dict(type='ReLU').
- upsample_cfg (dict): The upsample config of the upsample module in
- decoder. Default: dict(type='InterpConv').
- norm_eval (bool): Whether to set norm layers to eval mode, namely,
- freeze running stats (mean and var). Note: Effect on Batch Norm
- and its variants only. Default: False.
- dcn (bool): Use deformable convolution in convolutional layer or not.
- Default: None.
- plugins (dict): plugins for convolutional layers. Default: None.
-
- Notice:
-        The input image size should be divisible by the whole downsample rate
-        of the encoder. More detail on the whole downsample rate can be found
-        in UNet._check_input_divisible.
-
- """
-
- def __init__(self,
- in_channels=3,
- base_channels=64,
- num_stages=5,
- strides=(1, 1, 1, 1, 1),
- enc_num_convs=(2, 2, 2, 2, 2),
- dec_num_convs=(2, 2, 2, 2),
- downsamples=(True, True, True, True),
- enc_dilations=(1, 1, 1, 1, 1),
- dec_dilations=(1, 1, 1, 1),
- with_cp=False,
- conv_cfg=None,
- norm_cfg=dict(type='BN'),
- act_cfg=dict(type='ReLU'),
- upsample_cfg=dict(type='InterpConv'),
- norm_eval=False,
- dcn=None,
- plugins=None):
- super(UNet, self).__init__()
- assert dcn is None, 'Not implemented yet.'
- assert plugins is None, 'Not implemented yet.'
- assert len(strides) == num_stages, \
- 'The length of strides should be equal to num_stages, '\
- f'while the strides is {strides}, the length of '\
- f'strides is {len(strides)}, and the num_stages is '\
- f'{num_stages}.'
- assert len(enc_num_convs) == num_stages, \
- 'The length of enc_num_convs should be equal to num_stages, '\
- f'while the enc_num_convs is {enc_num_convs}, the length of '\
- f'enc_num_convs is {len(enc_num_convs)}, and the num_stages is '\
- f'{num_stages}.'
- assert len(dec_num_convs) == (num_stages-1), \
- 'The length of dec_num_convs should be equal to (num_stages-1), '\
- f'while the dec_num_convs is {dec_num_convs}, the length of '\
- f'dec_num_convs is {len(dec_num_convs)}, and the num_stages is '\
- f'{num_stages}.'
- assert len(downsamples) == (num_stages-1), \
- 'The length of downsamples should be equal to (num_stages-1), '\
- f'while the downsamples is {downsamples}, the length of '\
- f'downsamples is {len(downsamples)}, and the num_stages is '\
- f'{num_stages}.'
- assert len(enc_dilations) == num_stages, \
- 'The length of enc_dilations should be equal to num_stages, '\
- f'while the enc_dilations is {enc_dilations}, the length of '\
- f'enc_dilations is {len(enc_dilations)}, and the num_stages is '\
- f'{num_stages}.'
- assert len(dec_dilations) == (num_stages-1), \
- 'The length of dec_dilations should be equal to (num_stages-1), '\
- f'while the dec_dilations is {dec_dilations}, the length of '\
- f'dec_dilations is {len(dec_dilations)}, and the num_stages is '\
- f'{num_stages}.'
- self.num_stages = num_stages
- self.strides = strides
- self.downsamples = downsamples
- self.norm_eval = norm_eval
-
- self.encoder = nn.ModuleList()
- self.decoder = nn.ModuleList()
-
- for i in range(num_stages):
- enc_conv_block = []
- if i != 0:
- if strides[i] == 1 and downsamples[i - 1]:
- enc_conv_block.append(nn.MaxPool2d(kernel_size=2))
- upsample = (strides[i] != 1 or downsamples[i - 1])
- self.decoder.append(
- UpConvBlock(
- conv_block=BasicConvBlock,
- in_channels=base_channels * 2**i,
- skip_channels=base_channels * 2**(i - 1),
- out_channels=base_channels * 2**(i - 1),
- num_convs=dec_num_convs[i - 1],
- stride=1,
- dilation=dec_dilations[i - 1],
- with_cp=with_cp,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- upsample_cfg=upsample_cfg if upsample else None,
- dcn=None,
- plugins=None))
-
- enc_conv_block.append(
- BasicConvBlock(
- in_channels=in_channels,
- out_channels=base_channels * 2**i,
- num_convs=enc_num_convs[i],
- stride=strides[i],
- dilation=enc_dilations[i],
- with_cp=with_cp,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- dcn=None,
- plugins=None))
-            self.encoder.append(nn.Sequential(*enc_conv_block))
- in_channels = base_channels * 2**i
-
- def forward(self, x):
- enc_outs = []
-
- for enc in self.encoder:
- x = enc(x)
- enc_outs.append(x)
- dec_outs = [x]
- for i in reversed(range(len(self.decoder))):
- x = self.decoder[i](enc_outs[i], x)
- dec_outs.append(x)
-
- return dec_outs
-
- def init_weights(self, pretrained=None):
- """Initialize the weights in backbone.
-
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
- if isinstance(pretrained, str):
- logger = get_root_logger()
- load_checkpoint(self, pretrained, strict=False, logger=logger)
- elif pretrained is None:
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- kaiming_init(m)
- elif isinstance(m, (_BatchNorm, nn.GroupNorm)):
- constant_init(m, 1)
- else:
- raise TypeError('pretrained must be a str or None')
-
-
-class ShapeUNet(nn.Module):
- """ShapeUNet backbone with small modifications.
- U-Net: Convolutional Networks for Biomedical Image Segmentation.
- https://arxiv.org/pdf/1505.04597.pdf
-
- Args:
-        in_channels (int): Number of input image channels. Default: 3.
- base_channels (int): Number of base channels of each stage.
- The output channels of the first stage. Default: 64.
- num_stages (int): Number of stages in encoder, normally 5. Default: 5.
- strides (Sequence[int 1 | 2]): Strides of each stage in encoder.
- len(strides) is equal to num_stages. Normally the stride of the
- first stage in encoder is 1. If strides[i]=2, it uses stride
-            convolution to downsample in the corresponding encoder stage.
-            Default: (1, 1, 1, 1, 1).
-        enc_num_convs (Sequence[int]): Number of convolutional layers in the
-            convolution block of the corresponding encoder stage.
-            Default: (2, 2, 2, 2, 2).
-        dec_num_convs (Sequence[int]): Number of convolutional layers in the
-            convolution block of the corresponding decoder stage.
-            Default: (2, 2, 2, 2).
-        downsamples (Sequence[int]): Whether to use MaxPool to downsample the
-            feature map after the first stage of the encoder
-            (stages: [1, num_stages)). If the corresponding encoder stage uses
-            stride convolution (strides[i]=2), it will never use MaxPool to
-            downsample, even if downsamples[i-1]=True.
-            Default: (True, True, True, True).
- enc_dilations (Sequence[int]): Dilation rate of each stage in encoder.
- Default: (1, 1, 1, 1, 1).
- dec_dilations (Sequence[int]): Dilation rate of each stage in decoder.
- Default: (1, 1, 1, 1).
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some
- memory while slowing down the training speed. Default: False.
- conv_cfg (dict | None): Config dict for convolution layer.
- Default: None.
- norm_cfg (dict | None): Config dict for normalization layer.
- Default: dict(type='BN').
- act_cfg (dict | None): Config dict for activation layer in ConvModule.
- Default: dict(type='ReLU').
- upsample_cfg (dict): The upsample config of the upsample module in
- decoder. Default: dict(type='InterpConv').
- norm_eval (bool): Whether to set norm layers to eval mode, namely,
- freeze running stats (mean and var). Note: Effect on Batch Norm
- and its variants only. Default: False.
-        dcn (bool): Use deformable convolution in convolutional layer or not.
- Default: None.
- plugins (dict): plugins for convolutional layers. Default: None.
-
- Notice:
-        The input image size should be divisible by the whole downsample rate
-        of the encoder. More detail on the whole downsample rate can be found
-        in UNet._check_input_divisible.
-
- """
-
- def __init__(self,
- in_channels=3,
- base_channels=64,
- num_stages=5,
- attr_embedding=128,
- strides=(1, 1, 1, 1, 1),
- enc_num_convs=(2, 2, 2, 2, 2),
- dec_num_convs=(2, 2, 2, 2),
- downsamples=(True, True, True, True),
- enc_dilations=(1, 1, 1, 1, 1),
- dec_dilations=(1, 1, 1, 1),
- with_cp=False,
- conv_cfg=None,
- norm_cfg=dict(type='BN'),
- act_cfg=dict(type='ReLU'),
- upsample_cfg=dict(type='InterpConv'),
- norm_eval=False,
- dcn=None,
- plugins=None):
- super(ShapeUNet, self).__init__()
- assert dcn is None, 'Not implemented yet.'
- assert plugins is None, 'Not implemented yet.'
- assert len(strides) == num_stages, \
- 'The length of strides should be equal to num_stages, '\
- f'while the strides is {strides}, the length of '\
- f'strides is {len(strides)}, and the num_stages is '\
- f'{num_stages}.'
- assert len(enc_num_convs) == num_stages, \
- 'The length of enc_num_convs should be equal to num_stages, '\
- f'while the enc_num_convs is {enc_num_convs}, the length of '\
- f'enc_num_convs is {len(enc_num_convs)}, and the num_stages is '\
- f'{num_stages}.'
- assert len(dec_num_convs) == (num_stages-1), \
- 'The length of dec_num_convs should be equal to (num_stages-1), '\
- f'while the dec_num_convs is {dec_num_convs}, the length of '\
- f'dec_num_convs is {len(dec_num_convs)}, and the num_stages is '\
- f'{num_stages}.'
- assert len(downsamples) == (num_stages-1), \
- 'The length of downsamples should be equal to (num_stages-1), '\
- f'while the downsamples is {downsamples}, the length of '\
- f'downsamples is {len(downsamples)}, and the num_stages is '\
- f'{num_stages}.'
- assert len(enc_dilations) == num_stages, \
- 'The length of enc_dilations should be equal to num_stages, '\
- f'while the enc_dilations is {enc_dilations}, the length of '\
- f'enc_dilations is {len(enc_dilations)}, and the num_stages is '\
- f'{num_stages}.'
- assert len(dec_dilations) == (num_stages-1), \
- 'The length of dec_dilations should be equal to (num_stages-1), '\
- f'while the dec_dilations is {dec_dilations}, the length of '\
- f'dec_dilations is {len(dec_dilations)}, and the num_stages is '\
- f'{num_stages}.'
- self.num_stages = num_stages
- self.strides = strides
- self.downsamples = downsamples
- self.norm_eval = norm_eval
-
- self.encoder = nn.ModuleList()
- self.decoder = nn.ModuleList()
-
- for i in range(num_stages):
- enc_conv_block = []
- if i != 0:
- if strides[i] == 1 and downsamples[i - 1]:
- enc_conv_block.append(nn.MaxPool2d(kernel_size=2))
- upsample = (strides[i] != 1 or downsamples[i - 1])
- self.decoder.append(
- UpConvBlock(
- conv_block=BasicConvBlock,
- in_channels=base_channels * 2**i,
- skip_channels=base_channels * 2**(i - 1),
- out_channels=base_channels * 2**(i - 1),
- num_convs=dec_num_convs[i - 1],
- stride=1,
- dilation=dec_dilations[i - 1],
- with_cp=with_cp,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- upsample_cfg=upsample_cfg if upsample else None,
- dcn=None,
- plugins=None))
-
- enc_conv_block.append(
- BasicConvBlock(
- in_channels=in_channels + attr_embedding,
- out_channels=base_channels * 2**i,
- num_convs=enc_num_convs[i],
- stride=strides[i],
- dilation=enc_dilations[i],
- with_cp=with_cp,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- dcn=None,
- plugins=None))
-            self.encoder.append(nn.Sequential(*enc_conv_block))
- in_channels = base_channels * 2**i
-
- def forward(self, x, attr_embedding):
- enc_outs = []
- Be, Ce = attr_embedding.size()
- for enc in self.encoder:
- _, _, H, W = x.size()
- x = enc(
- torch.cat([
- x,
- attr_embedding.view(Be, Ce, 1, 1).expand((Be, Ce, H, W))
- ],
- dim=1))
- enc_outs.append(x)
- dec_outs = [x]
- for i in reversed(range(len(self.decoder))):
- x = self.decoder[i](enc_outs[i], x)
- dec_outs.append(x)
-
- return dec_outs
-
- def init_weights(self, pretrained=None):
- """Initialize the weights in backbone.
-
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
- if isinstance(pretrained, str):
- logger = get_root_logger()
- load_checkpoint(self, pretrained, strict=False, logger=logger)
- elif pretrained is None:
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- kaiming_init(m)
- elif isinstance(m, (_BatchNorm, nn.GroupNorm)):
- constant_init(m, 1)
- else:
- raise TypeError('pretrained must be a str or None')
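As a quick sanity check of the encoder/decoder wiring in the file above, a small UNet can be instantiated and probed as below; this assumes the mmcv/mmseg imports at the top of the file resolve, and the stage/channel numbers are arbitrary.

```python
import torch

# Four stages with three MaxPool downsamples => overall downsample rate 2**3 = 8,
# so the input side length must be divisible by 8 (see the Notice in the docstring).
model = UNet(
    in_channels=3,
    base_channels=16,
    num_stages=4,
    strides=(1, 1, 1, 1),
    enc_num_convs=(2, 2, 2, 2),
    dec_num_convs=(2, 2, 2),
    downsamples=(True, True, True),
    enc_dilations=(1, 1, 1, 1),
    dec_dilations=(1, 1, 1),
)
model.init_weights()

x = torch.randn(1, 3, 64, 64)
for out in model(x):
    # Bottleneck first, then progressively upsampled decoder outputs:
    # (1, 128, 8, 8) -> (1, 64, 16, 16) -> (1, 32, 32, 32) -> (1, 16, 64, 64)
    print(out.shape)
```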
diff --git a/spaces/CVPR/Text2Human/Text2Human/sample_from_parsing.py b/spaces/CVPR/Text2Human/Text2Human/sample_from_parsing.py
deleted file mode 100644
index 954f389e7e3b320c763e755400ea5fd6aaf8736d..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Text2Human/Text2Human/sample_from_parsing.py
+++ /dev/null
@@ -1,53 +0,0 @@
-import argparse
-import logging
-import os.path as osp
-import random
-
-import torch
-
-from data.segm_attr_dataset import DeepFashionAttrSegmDataset
-from models import create_model
-from utils.logger import get_root_logger
-from utils.options import dict2str, dict_to_nonedict, parse
-from utils.util import make_exp_dirs, set_random_seed
-
-
-def main():
- # options
- parser = argparse.ArgumentParser()
- parser.add_argument('-opt', type=str, help='Path to option YAML file.')
- args = parser.parse_args()
- opt = parse(args.opt, is_train=False)
-
- # mkdir and loggers
- make_exp_dirs(opt)
- log_file = osp.join(opt['path']['log'], f"test_{opt['name']}.log")
- logger = get_root_logger(
- logger_name='base', log_level=logging.INFO, log_file=log_file)
- logger.info(dict2str(opt))
-
- # convert to NoneDict, which returns None for missing keys
- opt = dict_to_nonedict(opt)
-
- # random seed
- seed = opt['manual_seed']
- if seed is None:
- seed = random.randint(1, 10000)
- logger.info(f'Random seed: {seed}')
- set_random_seed(seed)
-
- test_dataset = DeepFashionAttrSegmDataset(
- img_dir=opt['test_img_dir'],
- segm_dir=opt['segm_dir'],
- pose_dir=opt['pose_dir'],
- ann_dir=opt['test_ann_file'])
- test_loader = torch.utils.data.DataLoader(
- dataset=test_dataset, batch_size=4, shuffle=False)
-    logger.info(f'Number of samples in the test set: {len(test_dataset)}.')
-
- model = create_model(opt)
- _ = model.inference(test_loader, opt['path']['results_root'])
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/Cambino/dog-classifier-gradio/app.py b/spaces/Cambino/dog-classifier-gradio/app.py
deleted file mode 100644
index b9010e8dfcdda7216d590a5e8bc72d027743744f..0000000000000000000000000000000000000000
--- a/spaces/Cambino/dog-classifier-gradio/app.py
+++ /dev/null
@@ -1,54 +0,0 @@
-import torch
-from PIL import Image
-from torchvision import transforms
-import pandas as pd
-import numpy as np
-
-from DogBreedClassifier import DogBreedClassifier
-
-import gradio as gr
-
-class_labels = pd.read_csv('stanford dogs breeds')
-class_labels = class_labels["breed_names"].tolist()
-
-# LOAD MODEL
-dog_model = DogBreedClassifier()
-dog_model.load_state_dict(torch.load("model_state_dict.pth", map_location=torch.device("cpu")))
-
-torch.hub.download_url_to_file("https://upload.wikimedia.org/wikipedia/commons/thumb/3/34/Labrador_on_Quantock_%282175262184%29.jpg/1200px-Labrador_on_Quantock_%282175262184%29.jpg", "labrador-retriever.jpeg")
-torch.hub.download_url_to_file("https://upload.wikimedia.org/wikipedia/commons/d/d0/German_Shepherd_-_DSC_0346_%2810096362833%29.jpg", "german-shepard.jpeg")
-torch.hub.download_url_to_file("https://i.imgur.com/HqlDKO4.jpg", "shiba-ushi.jpeg")
-
-
-def process_input(image):
-    """Resize, normalize, and batch a PIL image into a (1, 3, 224, 224) tensor."""
- image_transforms = transforms.Compose([transforms.Resize((224, 224)),
- transforms.ToTensor(),
- transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
- ])
-
- # apply transforms and add batch dimension for model input
- image = image_transforms(image)
- image_input = image.unsqueeze(0)
-
- return image_input
-
-def predict(inp):
-
- image_input = process_input(inp)
-
- dog_model.eval()
- with torch.no_grad():
- prediction = torch.nn.functional.softmax(dog_model(image_input)[0], dim=0)
- confidences = {class_labels[i]: float(prediction[i]) for i in range(len(class_labels))}
-
- return confidences
-
-# confidences = predict(dog_model, dudley_image)
-#
-# print(confidences)
-
-gr.Interface(fn=predict,
- inputs=gr.Image(type="pil"),
- outputs=gr.Label(num_top_classes=10),
- examples=["labrador-retriever.jpeg", "german-shepard.jpeg", "shiba-ushi.jpeg"]).launch()
\ No newline at end of file
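The prediction path above can also be exercised without the Gradio UI; a hypothetical local test, assuming the model weights and the downloaded sample images are on disk:

```python
# Run the classifier directly on one of the sample images downloaded above.
img = Image.open("labrador-retriever.jpeg").convert("RGB")
confidences = predict(img)

# Print the five most confident breeds.
for breed, prob in sorted(confidences.items(), key=lambda kv: kv[1], reverse=True)[:5]:
    print(f"{breed}: {prob:.3f}")
```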
diff --git a/spaces/Chris4K/llms_compare/Xforce-Keygen-UPD-64bits-Vehicle-Tracking-2018.md b/spaces/Chris4K/llms_compare/Xforce-Keygen-UPD-64bits-Vehicle-Tracking-2018.md
deleted file mode 100644
index e26ca5fd1eab72d5af931d088aab248d7b4bc70f..0000000000000000000000000000000000000000
--- a/spaces/Chris4K/llms_compare/Xforce-Keygen-UPD-64bits-Vehicle-Tracking-2018.md
+++ /dev/null
@@ -1,118 +0,0 @@
-## Xforce Keygen 64bits Vehicle Tracking 2018
-
-
-
-
-
-
-
-
-
-**LINK ✵✵✵ [https://urluso.com/2tBNy0](https://urluso.com/2tBNy0)**
-
-
-
-
-
-
-
-
-
-
-
-
-
-# How to Use Xforce Keygen 64bits for Vehicle Tracking 2018
-
-
-
-If you are looking for a way to crack Autodesk products quickly and accurately, you may want to try Xforce Keygen 64bits. Xforce Keygen is a software that activates your AutoCad and allows you to create your own artificial virtual reality world or probably everything[^3^]. In this article, we will show you how to use Xforce Keygen 64bits for Vehicle Tracking 2018, a software that helps you design, analyze and simulate transportation systems[^2^].
-
-
-
-## Steps to Use Xforce Keygen 64bits for Vehicle Tracking 2018
-
-
-
-1. Download Xforce Keygen 64bits from the link below[^1^]. Make sure you choose the correct version for your operating system.
-
-2. Extract the zip file and run the xf-adsk2018\_v3.exe file as administrator.
-
-3. Select Vehicle Tracking 2018 from the list of Autodesk products and click on Generate.
-
-4. Copy the generated activation code and paste it in the Autodesk activation screen.
-
-5. Click on Next and enjoy your cracked Vehicle Tracking 2018.
-
-
-
-## Tips and Warnings
-
-
-
-- Xforce Keygen 64bits is not an official product of Autodesk and may not work properly or cause damage to your system. Use it at your own risk.
-
-- Xforce Keygen 64bits may be detected as a virus or malware by some antivirus programs. You may need to disable your antivirus or add an exception for the file before running it.
-
-- Xforce Keygen 64bits may violate the terms and conditions of Autodesk and result in legal consequences. You should only use it for educational or testing purposes and not for commercial use.
-
-
-
-## Conclusion
-
-
-
-Xforce Keygen 64bits is a software that can help you crack Vehicle Tracking 2018 and other Autodesk products. However, it is not a safe or legal way to activate your software and may cause problems for your system or your license. We recommend that you purchase a genuine copy of Vehicle Tracking 2018 from Autodesk or its authorized dealers if you want to use it for professional or personal purposes.
-
-
-
-## What is Vehicle Tracking 2018?
-
-
-
-Vehicle Tracking 2018 is a software that helps you design, analyze and simulate transportation systems. It allows you to create realistic 3D models of vehicles, roads, intersections, roundabouts, parking lots and more. You can also perform various calculations and simulations to optimize your designs and ensure safety and efficiency. Vehicle Tracking 2018 integrates with AutoCAD and other Autodesk products, such as Civil 3D and InfraWorks.
-
-
-
-## What are the Benefits of Vehicle Tracking 2018?
-
-
-
-Vehicle Tracking 2018 can help you improve your transportation engineering projects in many ways. Some of the benefits are:
-
-
-
-- It can save you time and money by reducing errors and rework.
-
-- It can enhance your creativity and productivity by providing you with powerful tools and features.
-
-- It can improve your communication and collaboration with stakeholders and clients by generating clear and accurate documentation and visualizations.
-
-- It can increase your confidence and satisfaction by delivering high-quality and reliable results.
-
-
-
-## How to Learn Vehicle Tracking 2018?
-
-
-
-If you want to learn how to use Vehicle Tracking 2018 effectively, you can follow these steps:
-
-
-
-1. Watch the tutorial videos on the Autodesk website or YouTube channel. They will show you the basics and advanced features of Vehicle Tracking 2018.
-
-2. Read the user guide and help files that come with the software. They will provide you with detailed instructions and explanations of the functions and commands of Vehicle Tracking 2018.
-
-3. Practice with the sample files and exercises that are included in the software. They will help you apply what you have learned and test your skills.
-
-4. Join the online community forums and blogs of Vehicle Tracking 2018 users. They will offer you tips, tricks, feedback and support from other professionals and experts.
-
-
-
- 145887f19f
-
-
-
-
-
diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/high_EQ/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/high_EQ/__init__.py
deleted file mode 100644
index febab3af7f19928cffdae37991309780f99e489b..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/meme-api/meme_generator/memes/high_EQ/__init__.py
+++ /dev/null
@@ -1,43 +0,0 @@
-from pathlib import Path
-from typing import List, Tuple
-
-from pil_utils import BuildImage
-
-from meme_generator import add_meme
-from meme_generator.exception import TextOverLength
-
-img_dir = Path(__file__).parent / "images"
-
-
-def high_EQ(images, texts: List[str], args):
- frame = BuildImage.open(img_dir / "0.jpg")
-
- def draw(pos: Tuple[float, float, float, float], text: str):
- try:
- frame.draw_text(
- pos,
- text,
- max_fontsize=100,
- min_fontsize=50,
- allow_wrap=True,
- fill="white",
- stroke_fill="black",
- stroke_ratio=0.05,
- )
- except ValueError:
- raise TextOverLength(text)
-
- draw((40, 540, 602, 1140), texts[0])
- draw((682, 540, 1244, 1140), texts[1])
- return frame.save_jpg()
-
-
-add_meme(
- "high_EQ",
- high_EQ,
- min_texts=2,
- max_texts=2,
- default_texts=["高情商", "低情商"],
- keywords=["低情商xx高情商xx"],
- patterns=[r"低情商[\s::]*(.+?)\s+高情商[\s::]*(.+)"],
-)
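The `max_fontsize`/`min_fontsize` pair above implements a fit-or-fail pattern: shrink the font until the text fits, and raise if even the minimum size overflows. A generic Pillow version of the same idea (this is not the pil_utils implementation, and the font path is a placeholder):

```python
from PIL import Image, ImageDraw, ImageFont

def fit_text_font(draw, text, box_width,
                  max_fontsize=100, min_fontsize=50,
                  font_path="DejaVuSans.ttf"):
    """Return the largest font between min and max sizes whose rendered
    width fits box_width; raise ValueError (like draw_text) otherwise."""
    for size in range(max_fontsize, min_fontsize - 1, -2):
        font = ImageFont.truetype(font_path, size)
        if draw.textlength(text, font=font) <= box_width:
            return font
    raise ValueError(f"Text too long to fit: {text!r}")

canvas = Image.new("RGB", (600, 200), "gray")
draw = ImageDraw.Draw(canvas)
font = fit_text_font(draw, "high EQ", box_width=560)
draw.text((20, 60), "high EQ", font=font, fill="white",
          stroke_width=3, stroke_fill="black")
```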
diff --git a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/models/eva_vit.py b/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/models/eva_vit.py
deleted file mode 100644
index 864bffd0c2ffad18c642ce55e9d0ccf44fbe5a56..0000000000000000000000000000000000000000
--- a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/models/eva_vit.py
+++ /dev/null
@@ -1,442 +0,0 @@
-# Based on EVA, BEIT, timm and DeiT code bases
-# https://github.com/baaivision/EVA
-# https://github.com/rwightman/pytorch-image-models/tree/master/timm
-# https://github.com/microsoft/unilm/tree/master/beit
-# https://github.com/facebookresearch/deit/
-# https://github.com/facebookresearch/dino
-# --------------------------------------------------------'
-import math
-from functools import partial
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.utils.checkpoint as checkpoint
-from timm.models.layers import drop_path, to_2tuple, trunc_normal_
-from timm.models.registry import register_model
-
-from video_llama.common.dist_utils import download_cached_file
-
-def _cfg(url='', **kwargs):
- return {
- 'url': url,
- 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': None,
- 'crop_pct': .9, 'interpolation': 'bicubic',
- 'mean': (0.5, 0.5, 0.5), 'std': (0.5, 0.5, 0.5),
- **kwargs
- }
-
-
-class DropPath(nn.Module):
- """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
- """
- def __init__(self, drop_prob=None):
- super(DropPath, self).__init__()
- self.drop_prob = drop_prob
-
- def forward(self, x):
- return drop_path(x, self.drop_prob, self.training)
-
- def extra_repr(self) -> str:
- return 'p={}'.format(self.drop_prob)
-
-
-class Mlp(nn.Module):
- def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Linear(in_features, hidden_features)
- self.act = act_layer()
- self.fc2 = nn.Linear(hidden_features, out_features)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- # x = self.drop(x)
-        # dropout is commented out above to match the original BERT implementation
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-
-class Attention(nn.Module):
- def __init__(
- self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0.,
- proj_drop=0., window_size=None, attn_head_dim=None):
- super().__init__()
- self.num_heads = num_heads
- head_dim = dim // num_heads
- if attn_head_dim is not None:
- head_dim = attn_head_dim
- all_head_dim = head_dim * self.num_heads
- self.scale = qk_scale or head_dim ** -0.5
-
- self.qkv = nn.Linear(dim, all_head_dim * 3, bias=False)
- if qkv_bias:
- self.q_bias = nn.Parameter(torch.zeros(all_head_dim))
- self.v_bias = nn.Parameter(torch.zeros(all_head_dim))
- else:
- self.q_bias = None
- self.v_bias = None
-
- if window_size:
- self.window_size = window_size
- self.num_relative_distance = (2 * window_size[0] - 1) * (2 * window_size[1] - 1) + 3
- self.relative_position_bias_table = nn.Parameter(
- torch.zeros(self.num_relative_distance, num_heads)) # 2*Wh-1 * 2*Ww-1, nH
-            # cls-to-token, token-to-cls, and cls-to-cls relative positions
-
- # get pair-wise relative position index for each token inside the window
- coords_h = torch.arange(window_size[0])
- coords_w = torch.arange(window_size[1])
- coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
- coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
- relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
- relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
- relative_coords[:, :, 0] += window_size[0] - 1 # shift to start from 0
- relative_coords[:, :, 1] += window_size[1] - 1
- relative_coords[:, :, 0] *= 2 * window_size[1] - 1
- relative_position_index = \
- torch.zeros(size=(window_size[0] * window_size[1] + 1, ) * 2, dtype=relative_coords.dtype)
- relative_position_index[1:, 1:] = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
- relative_position_index[0, 0:] = self.num_relative_distance - 3
- relative_position_index[0:, 0] = self.num_relative_distance - 2
- relative_position_index[0, 0] = self.num_relative_distance - 1
-
- self.register_buffer("relative_position_index", relative_position_index)
- else:
- self.window_size = None
- self.relative_position_bias_table = None
- self.relative_position_index = None
-
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(all_head_dim, dim)
- self.proj_drop = nn.Dropout(proj_drop)
-
- def forward(self, x, rel_pos_bias=None):
- B, N, C = x.shape
- qkv_bias = None
- if self.q_bias is not None:
- qkv_bias = torch.cat((self.q_bias, torch.zeros_like(self.v_bias, requires_grad=False), self.v_bias))
- # qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
- qkv = F.linear(input=x, weight=self.qkv.weight, bias=qkv_bias)
- qkv = qkv.reshape(B, N, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4)
- q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
-
- q = q * self.scale
- attn = (q @ k.transpose(-2, -1))
-
- if self.relative_position_bias_table is not None:
- relative_position_bias = \
- self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
- self.window_size[0] * self.window_size[1] + 1,
- self.window_size[0] * self.window_size[1] + 1, -1) # Wh*Ww,Wh*Ww,nH
- relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
- attn = attn + relative_position_bias.unsqueeze(0)
-
- if rel_pos_bias is not None:
- attn = attn + rel_pos_bias
-
- attn = attn.softmax(dim=-1)
- attn = self.attn_drop(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(B, N, -1)
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
-
-
-class Block(nn.Module):
-
- def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0.,
- drop_path=0., init_values=None, act_layer=nn.GELU, norm_layer=nn.LayerNorm,
- window_size=None, attn_head_dim=None):
- super().__init__()
- self.norm1 = norm_layer(dim)
- self.attn = Attention(
- dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale,
- attn_drop=attn_drop, proj_drop=drop, window_size=window_size, attn_head_dim=attn_head_dim)
- # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
- if init_values is not None and init_values > 0:
- self.gamma_1 = nn.Parameter(init_values * torch.ones((dim)),requires_grad=True)
- self.gamma_2 = nn.Parameter(init_values * torch.ones((dim)),requires_grad=True)
- else:
- self.gamma_1, self.gamma_2 = None, None
-
- def forward(self, x, rel_pos_bias=None):
- if self.gamma_1 is None:
- x = x + self.drop_path(self.attn(self.norm1(x), rel_pos_bias=rel_pos_bias))
- x = x + self.drop_path(self.mlp(self.norm2(x)))
- else:
- x = x + self.drop_path(self.gamma_1 * self.attn(self.norm1(x), rel_pos_bias=rel_pos_bias))
- x = x + self.drop_path(self.gamma_2 * self.mlp(self.norm2(x)))
- return x
-
-
-class PatchEmbed(nn.Module):
- """ Image to Patch Embedding
- """
- def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
- super().__init__()
- img_size = to_2tuple(img_size)
- patch_size = to_2tuple(patch_size)
- num_patches = (img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0])
- self.patch_shape = (img_size[0] // patch_size[0], img_size[1] // patch_size[1])
- self.img_size = img_size
- self.patch_size = patch_size
- self.num_patches = num_patches
-
- self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
-
- def forward(self, x, **kwargs):
- B, C, H, W = x.shape
- # FIXME look at relaxing size constraints
- assert H == self.img_size[0] and W == self.img_size[1], \
- f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."
- x = self.proj(x).flatten(2).transpose(1, 2)
- return x
-
-
-class RelativePositionBias(nn.Module):
-
- def __init__(self, window_size, num_heads):
- super().__init__()
- self.window_size = window_size
- self.num_relative_distance = (2 * window_size[0] - 1) * (2 * window_size[1] - 1) + 3
- self.relative_position_bias_table = nn.Parameter(
- torch.zeros(self.num_relative_distance, num_heads)) # 2*Wh-1 * 2*Ww-1, nH
- # cls to token & token to cls & cls to cls
-
- # get pair-wise relative position index for each token inside the window
- coords_h = torch.arange(window_size[0])
- coords_w = torch.arange(window_size[1])
- coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
- coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
- relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
- relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
- relative_coords[:, :, 0] += window_size[0] - 1 # shift to start from 0
- relative_coords[:, :, 1] += window_size[1] - 1
- relative_coords[:, :, 0] *= 2 * window_size[1] - 1
- relative_position_index = \
- torch.zeros(size=(window_size[0] * window_size[1] + 1,) * 2, dtype=relative_coords.dtype)
- relative_position_index[1:, 1:] = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
- relative_position_index[0, 0:] = self.num_relative_distance - 3
- relative_position_index[0:, 0] = self.num_relative_distance - 2
- relative_position_index[0, 0] = self.num_relative_distance - 1
-
- self.register_buffer("relative_position_index", relative_position_index)
-
- # trunc_normal_(self.relative_position_bias_table, std=.02)
-
- def forward(self):
- relative_position_bias = \
- self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
- self.window_size[0] * self.window_size[1] + 1,
- self.window_size[0] * self.window_size[1] + 1, -1) # Wh*Ww,Wh*Ww,nH
- return relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
-
-
-class VisionTransformer(nn.Module):
- """ Vision Transformer with support for patch or hybrid CNN input stage
- """
- def __init__(self, img_size=224, patch_size=16, in_chans=3, num_classes=1000, embed_dim=768, depth=12,
- num_heads=12, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop_rate=0., attn_drop_rate=0.,
- drop_path_rate=0., norm_layer=nn.LayerNorm, init_values=None,
- use_abs_pos_emb=True, use_rel_pos_bias=False, use_shared_rel_pos_bias=False,
- use_mean_pooling=True, init_scale=0.001, use_checkpoint=False):
- super().__init__()
- self.image_size = img_size
- self.num_classes = num_classes
- self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models
-
- self.patch_embed = PatchEmbed(
- img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim)
- num_patches = self.patch_embed.num_patches
-
- self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
- if use_abs_pos_emb:
- self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))
- else:
- self.pos_embed = None
- self.pos_drop = nn.Dropout(p=drop_rate)
-
- if use_shared_rel_pos_bias:
- self.rel_pos_bias = RelativePositionBias(window_size=self.patch_embed.patch_shape, num_heads=num_heads)
- else:
- self.rel_pos_bias = None
- self.use_checkpoint = use_checkpoint
-
- dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)] # stochastic depth decay rule
- self.use_rel_pos_bias = use_rel_pos_bias
- self.blocks = nn.ModuleList([
- Block(
- dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer,
- init_values=init_values, window_size=self.patch_embed.patch_shape if use_rel_pos_bias else None)
- for i in range(depth)])
-# self.norm = nn.Identity() if use_mean_pooling else norm_layer(embed_dim)
-# self.fc_norm = norm_layer(embed_dim) if use_mean_pooling else None
-# self.head = nn.Linear(embed_dim, num_classes) if num_classes > 0 else nn.Identity()
-
- if self.pos_embed is not None:
- trunc_normal_(self.pos_embed, std=.02)
- trunc_normal_(self.cls_token, std=.02)
- # trunc_normal_(self.mask_token, std=.02)
-# if isinstance(self.head, nn.Linear):
-# trunc_normal_(self.head.weight, std=.02)
- self.apply(self._init_weights)
- self.fix_init_weight()
-# if isinstance(self.head, nn.Linear):
-# self.head.weight.data.mul_(init_scale)
-# self.head.bias.data.mul_(init_scale)
-
- def fix_init_weight(self):
- def rescale(param, layer_id):
- param.div_(math.sqrt(2.0 * layer_id))
-
- for layer_id, layer in enumerate(self.blocks):
- rescale(layer.attn.proj.weight.data, layer_id + 1)
- rescale(layer.mlp.fc2.weight.data, layer_id + 1)
-
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
-
- def get_classifier(self):
- return self.head
-
- def reset_classifier(self, num_classes, global_pool=''):
- self.num_classes = num_classes
- self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity()
-
- def forward_features(self, x):
- x = self.patch_embed(x)
- batch_size, seq_len, _ = x.size()
-
- cls_tokens = self.cls_token.expand(batch_size, -1, -1) # stole cls_tokens impl from Phil Wang, thanks
- x = torch.cat((cls_tokens, x), dim=1)
- if self.pos_embed is not None:
- x = x + self.pos_embed
- x = self.pos_drop(x)
-
- rel_pos_bias = self.rel_pos_bias() if self.rel_pos_bias is not None else None
- for blk in self.blocks:
- if self.use_checkpoint:
- x = checkpoint.checkpoint(blk, x, rel_pos_bias)
- else:
- x = blk(x, rel_pos_bias)
- return x
-# x = self.norm(x)
-
-# if self.fc_norm is not None:
-# t = x[:, 1:, :]
-# return self.fc_norm(t.mean(1))
-# else:
-# return x[:, 0]
-
- def forward(self, x):
- x = self.forward_features(x)
-# x = self.head(x)
- return x
-
- def get_intermediate_layers(self, x):
- x = self.patch_embed(x)
- batch_size, seq_len, _ = x.size()
-
- cls_tokens = self.cls_token.expand(batch_size, -1, -1) # stole cls_tokens impl from Phil Wang, thanks
- x = torch.cat((cls_tokens, x), dim=1)
- if self.pos_embed is not None:
- x = x + self.pos_embed
- x = self.pos_drop(x)
-
- features = []
- rel_pos_bias = self.rel_pos_bias() if self.rel_pos_bias is not None else None
- for blk in self.blocks:
- x = blk(x, rel_pos_bias)
- features.append(x)
-
- return features
-
-
-def interpolate_pos_embed(model, checkpoint_model):
- if 'pos_embed' in checkpoint_model:
- pos_embed_checkpoint = checkpoint_model['pos_embed'].float()
- embedding_size = pos_embed_checkpoint.shape[-1]
- num_patches = model.patch_embed.num_patches
- num_extra_tokens = model.pos_embed.shape[-2] - num_patches
- # height (== width) for the checkpoint position embedding
- orig_size = int((pos_embed_checkpoint.shape[-2] - num_extra_tokens) ** 0.5)
- # height (== width) for the new position embedding
- new_size = int(num_patches ** 0.5)
- # class_token and dist_token are kept unchanged
- if orig_size != new_size:
- print("Position interpolate from %dx%d to %dx%d" % (orig_size, orig_size, new_size, new_size))
- extra_tokens = pos_embed_checkpoint[:, :num_extra_tokens]
- # only the position tokens are interpolated
- pos_tokens = pos_embed_checkpoint[:, num_extra_tokens:]
- pos_tokens = pos_tokens.reshape(-1, orig_size, orig_size, embedding_size).permute(0, 3, 1, 2)
- pos_tokens = torch.nn.functional.interpolate(
- pos_tokens, size=(new_size, new_size), mode='bicubic', align_corners=False)
- pos_tokens = pos_tokens.permute(0, 2, 3, 1).flatten(1, 2)
- new_pos_embed = torch.cat((extra_tokens, pos_tokens), dim=1)
- checkpoint_model['pos_embed'] = new_pos_embed
-
-
-def convert_weights_to_fp16(model: nn.Module):
- """Convert applicable model parameters to fp16"""
-
- def _convert_weights_to_fp16(l):
- if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Linear)):
- l.weight.data = l.weight.data.half()
- if l.bias is not None:
- l.bias.data = l.bias.data.half()
-
-# if isinstance(l, (nn.MultiheadAttention, Attention)):
-# for attr in [*[f"{s}_proj_weight" for s in ["in", "q", "k", "v"]], "in_proj_bias", "bias_k", "bias_v"]:
-# tensor = getattr(l, attr)
-# if tensor is not None:
-# tensor.data = tensor.data.half()
-
- model.apply(_convert_weights_to_fp16)
-
-
-def create_eva_vit_g(img_size=224, drop_path_rate=0.4, use_checkpoint=False, precision="fp16"):
- model = VisionTransformer(
- img_size=img_size,
- patch_size=14,
- use_mean_pooling=False,
- embed_dim=1408,
- depth=39,
- num_heads=1408//88,
- mlp_ratio=4.3637,
- qkv_bias=True,
- drop_path_rate=drop_path_rate,
- norm_layer=partial(nn.LayerNorm, eps=1e-6),
- use_checkpoint=use_checkpoint,
- )
- url = "https://storage.googleapis.com/sfr-vision-language-research/LAVIS/models/BLIP2/eva_vit_g.pth"
- cached_file = download_cached_file(
- url, check_hash=False, progress=True
- )
- state_dict = torch.load(cached_file, map_location="cpu")
- interpolate_pos_embed(model, state_dict)
-
- incompatible_keys = model.load_state_dict(state_dict, strict=False)
-# print(incompatible_keys)
-
- if precision == "fp16":
-# model.to("cuda")
- convert_weights_to_fp16(model)
- return model
\ No newline at end of file
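For reference, the file deleted above builds the EVA ViT-g/14 backbone used as a frozen image encoder in BLIP-2-style models. A minimal usage sketch follows; it assumes the module is importable as `eva_vit` (a hypothetical path, since the Space vendored the file directly) and that the checkpoint download in create_eva_vit_g succeeds:

    import torch
    from eva_vit import create_eva_vit_g  # assumed import path for the deleted file

    # precision="fp32" skips the fp16 weight conversion, so this runs on CPU too
    model = create_eva_vit_g(img_size=224, precision="fp32").eval()
    images = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        feats = model(images)
    # 224 / patch_size 14 = 16 patches per side, so 16 * 16 = 256 patch tokens + 1 cls token
    print(feats.shape)  # torch.Size([1, 257, 1408])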
diff --git a/spaces/DEEMOSTECH/ChatAvatar/static/css/main.eeca761a.css b/spaces/DEEMOSTECH/ChatAvatar/static/css/main.eeca761a.css
deleted file mode 100644
index ea0dec10de52d5e813955f07b26865d19809cccb..0000000000000000000000000000000000000000
--- a/spaces/DEEMOSTECH/ChatAvatar/static/css/main.eeca761a.css
+++ /dev/null
@@ -1,2 +0,0 @@
-html{overflow-x:hidden;overflow-y:overlay}body{-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale;box-sizing:border-box;color:#cfcfcf;font-family:-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Oxygen,Ubuntu,Cantarell,Fira Sans,Droid Sans,Helvetica Neue,sans-serif;margin:0}code{font-family:source-code-pro,Menlo,Monaco,Consolas,Courier New,monospace}.root{display:flex;justify-content:center;width:100%}.container{height:100vh;width:100%}.\!container{width:100%!important}@media (min-width:640px){.container{max-width:640px}.\!container{max-width:640px!important}}@media (min-width:768px){.container{max-width:768px}.\!container{max-width:768px!important}}@media (min-width:1024px){.container{max-width:1024px}.\!container{max-width:1024px!important}}@media (min-width:1280px){.container{max-width:1280px}.\!container{max-width:1280px!important}}@media (min-width:1536px){.container{max-width:1536px}.\!container{max-width:1536px!important}}.App{--theme-color:#4a00e0;--font-dark-color:#434343;--font-gray-color:#aaa;--font-light-color:#cfcfcf;--bg-light-color:#fff;--bg-gray0-color:#f8f8f8;--bg-gray1-color:#ececec;--bg-gray2-color:#7c7c7c;--bg-gray3-color:#373737;--bg-theme-color:#e7e3f1;--bg-dark-color:#121317;--side-gap:5rem;--radius:0.5rem;--shadow:-10px 0px 12px 1px hsla(0,0%,53%,.16);display:flex;justify-content:space-between;padding:16px;text-align:center}.App *{box-sizing:border-box;transition:all .3s}.App ::-webkit-scrollbar-thumb{background-color:rgba(0,0,0,.2)}textarea{-webkit-font-smoothing:antialiased;-moz-osx-font-smoothing:grayscale;border:1px solid transparent;color:var(--font-dark-color);font-family:-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Oxygen,Ubuntu,Cantarell,Fira Sans,Droid Sans,Helvetica Neue,sans-serif;font-size:1rem;line-height:1.5rem;outline:none;padding:0;resize:none}textarea:focus{border-color:var(--theme-color)}img{-webkit-user-drag:none;-webkit-user-select:none;user-select:none}.gallery_con__Y2mej{align-items:flex-start;display:flex;justify-content:center;margin-top:4rem;padding:0 1.25rem;width:100%}.gallery_menuCon__fVdFJ{margin-right:2rem;width:-webkit-max-content;width:max-content}.gallery_menu__U2btD{align-items:center;background-color:initial;border:2px solid transparent;border-radius:1.5rem;cursor:pointer;display:flex;height:3rem;justify-content:center;line-height:1rem;margin-bottom:1rem;text-align:center;width:6rem}.gallery_menu__U2btD.gallery_selected__T2qcs,.gallery_menu__U2btD:hover{background-color:var(--bg-gray3-color);color:#fff}.gallery_menu__U2btD.gallery_selected__T2qcs{border-color:#fff}.gallery_cardsCon__wAfcp{align-items:flex-start;display:flex;flex-grow:1;flex-shrink:1;flex-wrap:wrap;justify-content:space-between;max-height:100vh;max-width:calc(1600px + 9rem)}.gallery_cardsCon__wAfcp::-webkit-scrollbar-thumb{background-color:hsla(0,0%,100%,.2);border:5px solid #121317;border-radius:8px}.gallery_card__noUoL{background-color:var(--bg-gray3-color);border-radius:var(--radius);cursor:pointer;font-size:.75rem;height:260px;margin-bottom:1rem;overflow:hidden;position:relative;width:200px}.gallery_coverImg__BYj-o,.gallery_coverImg__BYj-o img{height:100%;width:100%}.gallery_prompt__9PEmb{background-color:#f8f8f880;border-radius:var(--radius);bottom:1rem;color:var(--font-dark-color);height:0;left:1rem;overflow:hidden;padding:0 
.5rem;position:absolute;right:1rem;text-align:left;white-space:pre-wrap;word-break:break-all}.gallery_prompt__9PEmb.gallery_show__c2k50{height:-webkit-fit-content;height:-moz-fit-content;height:fit-content;padding:.5rem}.gallery_infoCon__E8oLy{align-items:center;bottom:1rem;color:var(--font-dark-color);display:flex;justify-content:flex-start;left:1rem;position:absolute;right:1rem}.gallery_avatar__KWBmI,.gallery_avatar__KWBmI img{border-radius:12px;height:24px;overflow:hidden;width:24px}.gallery_avatar__KWBmI{margin-right:1rem}.gallery_spaceholder__xJwYU{flex-grow:1;flex-shrink:1}.header_con__M\+u1W{align-items:center;display:flex;justify-content:center;padding:0 var(--side-gap);width:100vw}.header_header__Y7CqP{align-items:center;border-bottom:1px solid hsla(0,0%,100%,.1);display:flex;justify-content:space-between;padding:1rem 0;width:100%}.header_logoCon__MIdGL{align-items:flex-start;display:flex;height:3rem;justify-content:center}.header_logo__90zuC{height:3rem;margin-right:1rem}.header_logoCon__MIdGL>div{font-size:2rem;font-weight:700;line-height:2rem;margin-top:5px}.header_avatar__B3zXB{background:var(--bg-gray2-color);border-radius:50%;overflow:hidden}.header_avatar__B3zXB,.header_avatar__B3zXB img{height:3rem;width:3rem}.login_con__\+RJgQ{background:#000;box-shadow:-5px 0 20px 0 hsla(0,0%,100%,.2);height:100vh;padding:var(--side-gap);position:fixed;right:0;top:0;z-index:9}.login_close__JulM-{cursor:pointer;-webkit-user-select:none;user-select:none}.result_con__gHOU1{align-items:center;color:var(--font-dark-color);justify-content:center;width:50%;z-index:999}.result_con__gHOU1 *{flex-shrink:0}.result_board__PCvVJ{background-color:var(--bg-light-color);border-radius:var(--radius);display:flex;flex-flow:column;height:100%;width:100%}.result_colHead__k0Mk-{background:#f9fafb;border:0 solid #e5e7eb;border-radius:8px;flex:0 1 auto;padding:8px}.result_colInner__9FccK{background:#fff;border:1px solid #e5e7eb;border-radius:8px;box-shadow:0 1px 2px 0 rgba(0,0,0,.05);flex-wrap:wrap;gap:1px;margin-bottom:1rem;overflow:hidden;padding:10px 12px}.result_colDetail__jggqg,.result_colInner__9FccK{align-items:center;flex-direction:column;justify-content:flex-start}.result_colDetail__jggqg{background:#f9fafb;border:0 solid #e5e7eb;border-radius:8px;display:flex;flex:1 1 auto;margin-top:1rem;padding:8px 8px 24px}.result_colContent__FYZno{background:#fff;border:1px solid #e5e7eb;border-radius:8px;height:100%;width:100%}.result_colTitle__R8k\+A{align-items:flex-end;color:#6b7280;display:flex;font-size:.875rem;justify-content:space-between;line-height:1.2rem;margin-bottom:8px;width:100%}.result_passwordCon__OjFSI{border-top:1px solid #e5e7eb;padding:8px 12px 
2px}.result_emailCon__eEqXk{padding-bottom:10px;padding-left:12px;padding-right:12px}.result_colTitle__R8k\+A>div{margin-bottom:.5rem}.result_colTitle__R8k\+A>div.result_restart__fLq8E{border-radius:5px;cursor:pointer;font-size:1rem;font-weight:400;margin-bottom:0;margin-left:1rem;padding:.5rem;-webkit-user-select:none;user-select:none}.result_restart__fLq8E:hover{background-color:var(--bg-gray0-color);color:var(--font-dark-color)}.result_spaceholder__GAxGZ{flex-grow:1;flex-shrink:1}.result_lang__85-De{cursor:pointer;font-weight:400;margin-right:1rem;-webkit-user-select:none;user-select:none}.result_lang__85-De.result_en__n-Jo7{margin-left:1rem;margin-right:0;width:4rem}.result_lang__85-De:hover{font-weight:700}.result_lang__85-De.result_selected__kDzD1{color:var(--font-dark-color);font-weight:700}.result_regene__yKazF{color:var(--theme-color);cursor:pointer;font-weight:400;-webkit-user-select:none;user-select:none}.result_chatCon__Hm\+zJ{background-color:var(--bg-gray0-color);border-radius:var(--radius);height:calc(100% - 4rem);padding:1rem}.result_chatCon__Hm\+zJ,.result_chatMsgCon__x8UTP{align-items:center;display:flex;flex-direction:column;flex-grow:1;flex-shrink:1;justify-content:flex-start;width:100%}.result_chatMsgCon__x8UTP{overflow-y:overlay;text-align:left}.result_chatMsgCon__x8UTP::-webkit-scrollbar-thumb{border:none;border-radius:3px}.result_chatMsgCon__x8UTP::-webkit-scrollbar{width:6px}.result_chatMsgRow__dr9Qg{align-items:flex-start;display:flex;flex-direction:row;justify-content:flex-start;margin-bottom:1rem;width:100%}.result_chatMsgRow__dr9Qg.result_user__bUuRg{flex-direction:row-reverse}.result_avatar__B2zOp{background:var(--bg-gray2-color);border-radius:1.5rem;margin-left:0;margin-right:1rem;overflow:hidden}.result_avatar__B2zOp,.result_avatar__B2zOp img{height:3rem;width:3rem}.result_user__bUuRg .result_avatar__B2zOp{margin-left:1rem;margin-right:0}.result_bubble__GexXm{background:var(--bg-theme-color);border-radius:var(--radius);flex-shrink:1;line-height:1.5rem;padding:.75rem 1rem;white-space:pre-wrap;word-break:break-all}.result_bubble__GexXm.result_unactive__zyVF2{background:var(--bg-gray1-color)}.result_user__bUuRg .result_bubble__GexXm{background:var(--bg-light-color)}.result_chatIptCon__LXDF-{align-items:center;display:flex;flex-direction:column;justify-content:flex-start;width:100%}.result_chatTipsCon__w4uUf{align-items:flex-end;display:flex;flex-direction:row;justify-content:flex-start;margin-top:1rem;max-width:100%;overflow-x:auto;overflow-y:hidden;width:100%}.result_chatTipsCon__w4uUf::-webkit-scrollbar-thumb{border-color:var(--bg-gray0-color)}.result_chatTips__6b9zJ{background:var(--bg-light-color);border-radius:var(--radius);cursor:pointer;margin-right:1rem;padding:1rem;text-align:left;white-space:pre-wrap;width:15.5rem;word-break:break-all}.result_chatTips__6b9zJ:last-child{margin-right:0}.result_chatRowCon__jLGk3{align-items:flex-start;display:flex;flex-direction:row;justify-content:space-between;margin-top:1rem;width:100%}.result_iptLineCon__nLuWa{flex-grow:1;flex-shrink:1;line-height:1.5rem;margin-right:1rem;position:relative;text-align:left}.result_iptSpaceholder__hAkD5{border:1px solid transparent;max-height:calc(9rem + 2px);visibility:hidden}.result_iptSpaceholder__hAkD5,.result_ipt__tA\+g4{padding:.75rem 
1rem;white-space:pre-wrap;word-break:break-all}.result_ipt__tA\+g4{background:var(--bg-light-color);border-radius:var(--radius);bottom:0;left:0;overflow-y:auto;position:absolute;right:0;top:0}.result_ipt__tA\+g4::-webkit-scrollbar-thumb{border-color:var(--bg-light-color)}.result_btn__h5tQr{align-items:center;background-color:var(--theme-color);border:1px solid var(--theme-color);border-radius:1.5rem;color:#fff;cursor:pointer;display:flex;font-weight:700;height:calc(3rem - 2px);justify-content:center;line-height:1rem;padding:0 1.5rem;-webkit-user-select:none;user-select:none}.result_con__gHOU1 .result_btn__h5tQr.result_disabled__lB61-{background:var(--bg-gray2-color);border-color:var(--bg-gray2-color);color:var(--font-light-color);cursor:not-allowed}.result_iptArea__23TZc{background:#fff;border:1px solid #e5e7eb;border-radius:8px;box-shadow:0 0 0 3px transparent,inset 0 2px 4px 0 rgba(0,0,0,.05);color:#1f2937;display:block;font-size:14px;height:42px;line-height:1.4;outline:none!important;padding:10px;position:relative;width:100%}.result_iptArea__23TZc:focus{border-color:#93c5fd;box-shadow:0 0 0 3px #dfedfe,inset 0 2px 4px 0 transparent}.result_iptArea__23TZc::-webkit-scrollbar-thumb{border-color:var(--bg-gray0-color)}.result_clearBtn__r6e0y{background:linear-gradient(to bottom right,#f3f4f6,#e5e7eb);border:1px solid #e5e7eb;border-radius:8px;color:#374151;cursor:pointer;font-size:16px;font-weight:600;height:42px;min-width:max(160px,48%);padding:8px 16px}.result_clearBtn__r6e0y:hover{background:linear-gradient(to bottom right,#f3f4f6,#f3f4f6);border:1px solid #e5e7eb}.result_clearBtnLogin__LOsgV{background:linear-gradient(to bottom right,#f3f4f6,#e5e7eb);border:1px solid #e5e7eb;border-radius:8px;color:#374151;cursor:pointer;font-size:16px;font-weight:700;height:42px;min-width:max(160px,48%);padding:8px 16px}.result_inputError__qtPTq{border-color:#f56565;box-shadow:0 0 0 3px #fed7d7,inset 0 2px 4px 0 transparent}.result_clearBtnLogin__LOsgV:hover{background:linear-gradient(to bottom right,#f3f4f6,#f3f4f6);border:1px solid #e5e7eb}.result_btnCon__LEoi5{display:flex;justify-content:space-between}.result_generateBtn__UGmBG{background:linear-gradient(to bottom right,#ffedd5,#fdba74);border:1px solid #fed7aa;border-radius:8px;color:#ea580c;cursor:pointer;font-size:16px;font-weight:600;height:42px;min-width:max(160px,48%);padding:8px 16px}.result_generateBtn__UGmBG:hover{background:linear-gradient(to bottom right,#ffecd3,#fed7ab);border:1px solid #ffd8b4}.result_generateBtnLogin__nkLOj{background:linear-gradient(to bottom right,#ffedd5,#fdba74);border:1px solid #fed7aa;border-radius:8px;color:#ea580c;cursor:pointer;font-size:16px;font-weight:700;height:42px;min-width:max(160px,48%);padding:8px 16px}.result_generateBtnLogin__nkLOj:hover{background:linear-gradient(to bottom right,#ffecd3,#fed7ab);border:1px solid #ffd8b4}.result_candidateCon__x9kyB{align-items:flex-start;background-color:var(--bg-gray0-color);border-radius:var(--radius);display:flex;flex-direction:row;flex-grow:1;flex-shrink:1;height:100%;justify-content:space-between;overflow-y:overlay;padding:1rem;position:relative;width:100%}.result_candidateCon__x9kyB::-webkit-scrollbar-thumb{border-color:var(--bg-gray0-color)}.result_candidateCol__eoHna{margin-right:1rem;position:relative;width:calc(33.33333% - .66667rem)}.result_candidateCol__eoHna:last-child{margin-right:0}.result_candidateCol__eoHna 
img{border-radius:var(--radius);cursor:pointer;margin-bottom:1rem}.result_creatorCon__tIm3e{align-items:flex-end;color:var(--font-gray-color);display:flex;font-size:1.2rem;font-weight:700;justify-content:flex-start;line-height:1.2rem;margin-bottom:1rem;width:100%}.result_creatorInfoCon__pET8h{text-align:left}.result_creatorName__VLTXL{color:var(--font-dark-color);font-size:1.2rem;font-weight:700;line-height:1.8rem}.result_creatorInfo__CkbWU{color:var(--font-gray-color);font-size:1rem;line-height:1.2rem}.result_modelView__Y25w5{background:var(--bg-gray0-color);border-radius:var(--radius);flex-grow:1;flex-shrink:1;height:100%;overflow:hidden;width:100%}.result_modelInfoCon__bXw5O{align-items:center;display:flex;flex-direction:column;justify-content:flex-end;text-align:left}.result_progressInfo__g9iwR{margin-bottom:.5rem;width:100%}.result_progressTrack__I6zDn{background:var(--bg-light-color);border-radius:2px;height:4px;position:relative;width:100%}.result_progressThumb__mbBQj{background-color:var(--theme-color);border-radius:2px;height:4px;left:0;position:absolute;top:0}.result_modelPrompt__DzUbD{background:var(--bg-light-color);border-radius:var(--radius);margin-top:1rem;min-height:3rem;padding:1rem;width:100%}.result_loadingCon__XVvXD,.result_progressCon__O57XA{font-size:14px;position:absolute;top:calc(50% - 10px)}.result_loadingCon__XVvXD{z-index:-111}.result_icon__dFKnM{height:20px;position:absolute;top:calc(50% - 10px)}.result_hideModel__3phD0{display:none}.welcome_con__o1kmf{align-items:center;background:#121317;border-radius:.5rem;display:flex;flex-direction:column;justify-content:flex-start;padding-bottom:2rem;padding-top:2rem;position:relative;width:45%}.welcome_con__o1kmf>img{position:absolute;top:0;width:100%}.welcome_mainCon__H1gv\+{margin-top:.5rem;z-index:999}.welcome_title__Gd8m4{color:#fff;font-family:Courier New;font-size:5rem;font-weight:700;line-height:5rem}.welcome_ioCon__PQZXU{background-color:#fff;border-radius:1rem;border-style:solid;margin-left:8rem;margin-right:8rem;margin-top:24rem;padding:2rem;width:calc(100% - 16rem)}.welcome_iptCon__KpWEL{align-items:center;background:#ededf2;border-radius:1rem;display:flex;height:4rem;justify-content:space-between;margin-bottom:2rem;width:100%}.welcome_iptCon__KpWEL>img{height:2rem;margin-right:1rem;position:static;width:2rem}.welcome_ipt__ayi9Z{background:#ededf2;border:none;border-radius:1rem;color:var(--font-dark-color);flex-grow:1;font-size:1rem;height:100%;outline:none;padding:0 2rem}.welcome_ipt__ayi9Z::-webkit-input-placeholder{font-size:1rem}.welcome_ipt__ayi9Z::placeholder{font-size:1rem}.welcome_btnCon__Mx-ta,.welcome_btn__jCuoG{align-items:center;display:flex;justify-content:center}.welcome_btn__jCuoG{border:1px solid #8f8f8f;border-radius:1rem;cursor:pointer;height:3rem;line-height:1rem;-webkit-user-select:none;user-select:none;width:100%}.welcome_btn__jCuoG:last-child{background:#4a00e0;border:none;font-weight:700}.welcome_btn__jCuoG.welcome_disabled__pcSzv{cursor:not-allowed}.welcome_btn__jCuoG:hover{color:#fff}
-/*# sourceMappingURL=main.eeca761a.css.map*/
\ No newline at end of file
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/_core/_typedattr.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/_core/_typedattr.py
deleted file mode 100644
index bf9202eeab91d263f4badade4601efd111b91523..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/_core/_typedattr.py
+++ /dev/null
@@ -1,83 +0,0 @@
-from __future__ import annotations
-
-import sys
-from typing import Any, Callable, Mapping, TypeVar, overload
-
-from ._exceptions import TypedAttributeLookupError
-
-if sys.version_info >= (3, 8):
- from typing import final
-else:
- from typing_extensions import final
-
-T_Attr = TypeVar("T_Attr")
-T_Default = TypeVar("T_Default")
-undefined = object()
-
-
-def typed_attribute() -> Any:
- """Return a unique object, used to mark typed attributes."""
- return object()
-
-
-class TypedAttributeSet:
- """
- Superclass for typed attribute collections.
-
- Checks that every public attribute of every subclass has a type annotation.
- """
-
- def __init_subclass__(cls) -> None:
- annotations: dict[str, Any] = getattr(cls, "__annotations__", {})
- for attrname in dir(cls):
- if not attrname.startswith("_") and attrname not in annotations:
- raise TypeError(
- f"Attribute {attrname!r} is missing its type annotation"
- )
-
- super().__init_subclass__()
-
-
-class TypedAttributeProvider:
- """Base class for classes that wish to provide typed extra attributes."""
-
- @property
- def extra_attributes(self) -> Mapping[T_Attr, Callable[[], T_Attr]]:
- """
- A mapping of the extra attributes to callables that return the corresponding values.
-
- If the provider wraps another provider, the attributes from the wrapped provider should
- also be included in the returned mapping (but the wrapper may override the callables from
- the wrapped instance).
-
- """
- return {}
-
- @overload
- def extra(self, attribute: T_Attr) -> T_Attr:
- ...
-
- @overload
- def extra(self, attribute: T_Attr, default: T_Default) -> T_Attr | T_Default:
- ...
-
- @final
- def extra(self, attribute: Any, default: object = undefined) -> object:
- """
- extra(attribute, default=undefined)
-
- Return the value of the given typed extra attribute.
-
- :param attribute: the attribute (member of a :class:`~TypedAttributeSet`) to look for
- :param default: the value that should be returned if no value is found for the attribute
- :raises ~anyio.TypedAttributeLookupError: if the search failed and no default value was
- given
-
- """
- try:
- return self.extra_attributes[attribute]()
- except KeyError:
- if default is undefined:
- raise TypedAttributeLookupError("Attribute not found") from None
- else:
- return default
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/cython.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/cython.py
deleted file mode 100644
index 2a42d94a3591e0e8e47f184b303e4aec0a6337ef..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/cython.py
+++ /dev/null
@@ -1,27 +0,0 @@
-""" Exports a no-op 'cython' namespace similar to
-https://github.com/cython/cython/blob/master/Cython/Shadow.py
-
-This makes it possible to optionally compile @cython-decorated functions
-(when cython is available at build time), or to run the same code as pure
-Python, with no runtime dependency on the cython module.
-
-We only define the symbols that we use. E.g. see fontTools.cu2qu
-"""
-
-from types import SimpleNamespace
-
-
-def _empty_decorator(x):
- return x
-
-
-compiled = False
-
-for name in ("double", "complex", "int"):
- globals()[name] = None
-
-for name in ("cfunc", "inline"):
- globals()[name] = _empty_decorator
-
-locals = lambda **_: _empty_decorator
-returns = lambda _: _empty_decorator
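The shim mirrors the pattern fontTools modules use to run the same source either compiled or as pure Python; a sketch of that consumer-side pattern, modeled on fontTools.cu2qu (the midpoint function is an invented example):

    try:
        import cython
        COMPILED = cython.compiled
    except (AttributeError, ImportError):
        # cython is not installed: fall back to the no-op namespace above
        from fontTools.misc import cython
        COMPILED = False

    @cython.cfunc
    @cython.inline
    def midpoint(a, b):
        return (a + b) / 2

    print(COMPILED, midpoint(0.0, 1.0))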
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I_J_.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I_J_.py
deleted file mode 100644
index bc8fe92aac9d18bfd5ee565588d8cebf7d00afd1..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I_J_.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from .T_S_I_V_ import table_T_S_I_V_
-
-
-class table_T_S_I_J_(table_T_S_I_V_):
- pass
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpx/_api.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpx/_api.py
deleted file mode 100644
index 571289cf2b31c1d864d72d71b216456a9cf5b216..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpx/_api.py
+++ /dev/null
@@ -1,445 +0,0 @@
-import typing
-from contextlib import contextmanager
-
-from ._client import Client
-from ._config import DEFAULT_TIMEOUT_CONFIG
-from ._models import Response
-from ._types import (
- AuthTypes,
- CertTypes,
- CookieTypes,
- HeaderTypes,
- ProxiesTypes,
- QueryParamTypes,
- RequestContent,
- RequestData,
- RequestFiles,
- TimeoutTypes,
- URLTypes,
- VerifyTypes,
-)
-
-
-def request(
- method: str,
- url: URLTypes,
- *,
- params: typing.Optional[QueryParamTypes] = None,
- content: typing.Optional[RequestContent] = None,
- data: typing.Optional[RequestData] = None,
- files: typing.Optional[RequestFiles] = None,
- json: typing.Optional[typing.Any] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- auth: typing.Optional[AuthTypes] = None,
- proxies: typing.Optional[ProxiesTypes] = None,
- timeout: TimeoutTypes = DEFAULT_TIMEOUT_CONFIG,
- follow_redirects: bool = False,
- verify: VerifyTypes = True,
- cert: typing.Optional[CertTypes] = None,
- trust_env: bool = True,
-) -> Response:
- """
- Sends an HTTP request.
-
- **Parameters:**
-
- * **method** - HTTP method for the new `Request` object: `GET`, `OPTIONS`,
- `HEAD`, `POST`, `PUT`, `PATCH`, or `DELETE`.
- * **url** - URL for the new `Request` object.
- * **params** - *(optional)* Query parameters to include in the URL, as a
- string, dictionary, or sequence of two-tuples.
- * **content** - *(optional)* Binary content to include in the body of the
- request, as bytes or a byte iterator.
- * **data** - *(optional)* Form data to include in the body of the request,
- as a dictionary.
- * **files** - *(optional)* A dictionary of upload files to include in the
- body of the request.
- * **json** - *(optional)* A JSON serializable object to include in the body
- of the request.
- * **headers** - *(optional)* Dictionary of HTTP headers to include in the
- request.
- * **cookies** - *(optional)* Dictionary of Cookie items to include in the
- request.
- * **auth** - *(optional)* An authentication class to use when sending the
- request.
- * **proxies** - *(optional)* A dictionary mapping proxy keys to proxy URLs.
- * **timeout** - *(optional)* The timeout configuration to use when sending
- the request.
- * **follow_redirects** - *(optional)* Enables or disables HTTP redirects.
- * **verify** - *(optional)* SSL certificates (a.k.a CA bundle) used to
- verify the identity of requested hosts. Either `True` (default CA bundle),
- a path to an SSL certificate file, an `ssl.SSLContext`, or `False`
- (which will disable verification).
- * **cert** - *(optional)* An SSL certificate used by the requested host
- to authenticate the client. Either a path to an SSL certificate file, or
- two-tuple of (certificate file, key file), or a three-tuple of (certificate
- file, key file, password).
- * **trust_env** - *(optional)* Enables or disables usage of environment
- variables for configuration.
-
- **Returns:** `Response`
-
- Usage:
-
- ```
- >>> import httpx
- >>> response = httpx.request('GET', 'https://httpbin.org/get')
- >>> response
- <Response [200 OK]>
- ```
- """
- with Client(
- cookies=cookies,
- proxies=proxies,
- cert=cert,
- verify=verify,
- timeout=timeout,
- trust_env=trust_env,
- ) as client:
- return client.request(
- method=method,
- url=url,
- content=content,
- data=data,
- files=files,
- json=json,
- params=params,
- headers=headers,
- auth=auth,
- follow_redirects=follow_redirects,
- )
-
-
-@contextmanager
-def stream(
- method: str,
- url: URLTypes,
- *,
- params: typing.Optional[QueryParamTypes] = None,
- content: typing.Optional[RequestContent] = None,
- data: typing.Optional[RequestData] = None,
- files: typing.Optional[RequestFiles] = None,
- json: typing.Optional[typing.Any] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- auth: typing.Optional[AuthTypes] = None,
- proxies: typing.Optional[ProxiesTypes] = None,
- timeout: TimeoutTypes = DEFAULT_TIMEOUT_CONFIG,
- follow_redirects: bool = False,
- verify: VerifyTypes = True,
- cert: typing.Optional[CertTypes] = None,
- trust_env: bool = True,
-) -> typing.Iterator[Response]:
- """
- Alternative to `httpx.request()` that streams the response body
- instead of loading it into memory at once.
-
- **Parameters**: See `httpx.request`.
-
- See also: [Streaming Responses][0]
-
- [0]: /quickstart#streaming-responses
- """
- with Client(
- cookies=cookies,
- proxies=proxies,
- cert=cert,
- verify=verify,
- timeout=timeout,
- trust_env=trust_env,
- ) as client:
- with client.stream(
- method=method,
- url=url,
- content=content,
- data=data,
- files=files,
- json=json,
- params=params,
- headers=headers,
- auth=auth,
- follow_redirects=follow_redirects,
- ) as response:
- yield response
-
-
-def get(
- url: URLTypes,
- *,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- auth: typing.Optional[AuthTypes] = None,
- proxies: typing.Optional[ProxiesTypes] = None,
- follow_redirects: bool = False,
- cert: typing.Optional[CertTypes] = None,
- verify: VerifyTypes = True,
- timeout: TimeoutTypes = DEFAULT_TIMEOUT_CONFIG,
- trust_env: bool = True,
-) -> Response:
- """
- Sends a `GET` request.
-
- **Parameters**: See `httpx.request`.
-
- Note that the `data`, `files`, `json` and `content` parameters are not available
- on this function, as `GET` requests should not include a request body.
- """
- return request(
- "GET",
- url,
- params=params,
- headers=headers,
- cookies=cookies,
- auth=auth,
- proxies=proxies,
- follow_redirects=follow_redirects,
- cert=cert,
- verify=verify,
- timeout=timeout,
- trust_env=trust_env,
- )
-
-
-def options(
- url: URLTypes,
- *,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- auth: typing.Optional[AuthTypes] = None,
- proxies: typing.Optional[ProxiesTypes] = None,
- follow_redirects: bool = False,
- cert: typing.Optional[CertTypes] = None,
- verify: VerifyTypes = True,
- timeout: TimeoutTypes = DEFAULT_TIMEOUT_CONFIG,
- trust_env: bool = True,
-) -> Response:
- """
- Sends an `OPTIONS` request.
-
- **Parameters**: See `httpx.request`.
-
- Note that the `data`, `files`, `json` and `content` parameters are not available
- on this function, as `OPTIONS` requests should not include a request body.
- """
- return request(
- "OPTIONS",
- url,
- params=params,
- headers=headers,
- cookies=cookies,
- auth=auth,
- proxies=proxies,
- follow_redirects=follow_redirects,
- cert=cert,
- verify=verify,
- timeout=timeout,
- trust_env=trust_env,
- )
-
-
-def head(
- url: URLTypes,
- *,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- auth: typing.Optional[AuthTypes] = None,
- proxies: typing.Optional[ProxiesTypes] = None,
- follow_redirects: bool = False,
- cert: typing.Optional[CertTypes] = None,
- verify: VerifyTypes = True,
- timeout: TimeoutTypes = DEFAULT_TIMEOUT_CONFIG,
- trust_env: bool = True,
-) -> Response:
- """
- Sends a `HEAD` request.
-
- **Parameters**: See `httpx.request`.
-
- Note that the `data`, `files`, `json` and `content` parameters are not available
- on this function, as `HEAD` requests should not include a request body.
- """
- return request(
- "HEAD",
- url,
- params=params,
- headers=headers,
- cookies=cookies,
- auth=auth,
- proxies=proxies,
- follow_redirects=follow_redirects,
- cert=cert,
- verify=verify,
- timeout=timeout,
- trust_env=trust_env,
- )
-
-
-def post(
- url: URLTypes,
- *,
- content: typing.Optional[RequestContent] = None,
- data: typing.Optional[RequestData] = None,
- files: typing.Optional[RequestFiles] = None,
- json: typing.Optional[typing.Any] = None,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- auth: typing.Optional[AuthTypes] = None,
- proxies: typing.Optional[ProxiesTypes] = None,
- follow_redirects: bool = False,
- cert: typing.Optional[CertTypes] = None,
- verify: VerifyTypes = True,
- timeout: TimeoutTypes = DEFAULT_TIMEOUT_CONFIG,
- trust_env: bool = True,
-) -> Response:
- """
- Sends a `POST` request.
-
- **Parameters**: See `httpx.request`.
- """
- return request(
- "POST",
- url,
- content=content,
- data=data,
- files=files,
- json=json,
- params=params,
- headers=headers,
- cookies=cookies,
- auth=auth,
- proxies=proxies,
- follow_redirects=follow_redirects,
- cert=cert,
- verify=verify,
- timeout=timeout,
- trust_env=trust_env,
- )
-
-
-def put(
- url: URLTypes,
- *,
- content: typing.Optional[RequestContent] = None,
- data: typing.Optional[RequestData] = None,
- files: typing.Optional[RequestFiles] = None,
- json: typing.Optional[typing.Any] = None,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- auth: typing.Optional[AuthTypes] = None,
- proxies: typing.Optional[ProxiesTypes] = None,
- follow_redirects: bool = False,
- cert: typing.Optional[CertTypes] = None,
- verify: VerifyTypes = True,
- timeout: TimeoutTypes = DEFAULT_TIMEOUT_CONFIG,
- trust_env: bool = True,
-) -> Response:
- """
- Sends a `PUT` request.
-
- **Parameters**: See `httpx.request`.
- """
- return request(
- "PUT",
- url,
- content=content,
- data=data,
- files=files,
- json=json,
- params=params,
- headers=headers,
- cookies=cookies,
- auth=auth,
- proxies=proxies,
- follow_redirects=follow_redirects,
- cert=cert,
- verify=verify,
- timeout=timeout,
- trust_env=trust_env,
- )
-
-
-def patch(
- url: URLTypes,
- *,
- content: typing.Optional[RequestContent] = None,
- data: typing.Optional[RequestData] = None,
- files: typing.Optional[RequestFiles] = None,
- json: typing.Optional[typing.Any] = None,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- auth: typing.Optional[AuthTypes] = None,
- proxies: typing.Optional[ProxiesTypes] = None,
- follow_redirects: bool = False,
- cert: typing.Optional[CertTypes] = None,
- verify: VerifyTypes = True,
- timeout: TimeoutTypes = DEFAULT_TIMEOUT_CONFIG,
- trust_env: bool = True,
-) -> Response:
- """
- Sends a `PATCH` request.
-
- **Parameters**: See `httpx.request`.
- """
- return request(
- "PATCH",
- url,
- content=content,
- data=data,
- files=files,
- json=json,
- params=params,
- headers=headers,
- cookies=cookies,
- auth=auth,
- proxies=proxies,
- follow_redirects=follow_redirects,
- cert=cert,
- verify=verify,
- timeout=timeout,
- trust_env=trust_env,
- )
-
-
-def delete(
- url: URLTypes,
- *,
- params: typing.Optional[QueryParamTypes] = None,
- headers: typing.Optional[HeaderTypes] = None,
- cookies: typing.Optional[CookieTypes] = None,
- auth: typing.Optional[AuthTypes] = None,
- proxies: typing.Optional[ProxiesTypes] = None,
- follow_redirects: bool = False,
- cert: typing.Optional[CertTypes] = None,
- verify: VerifyTypes = True,
- timeout: TimeoutTypes = DEFAULT_TIMEOUT_CONFIG,
- trust_env: bool = True,
-) -> Response:
- """
- Sends a `DELETE` request.
-
- **Parameters**: See `httpx.request`.
-
- Note that the `data`, `files`, `json` and `content` parameters are not available
- on this function, as `DELETE` requests should not include a request body.
- """
- return request(
- "DELETE",
- url,
- params=params,
- headers=headers,
- cookies=cookies,
- auth=auth,
- proxies=proxies,
- follow_redirects=follow_redirects,
- cert=cert,
- verify=verify,
- timeout=timeout,
- trust_env=trust_env,
- )
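Each of the helpers deleted above opens a throwaway Client per call, which is convenient for one-off requests but wasteful in a loop. A short usage sketch against the public httpx API (httpbin.org is just a placeholder endpoint):

    import httpx

    # one-shot helper: a Client is created and closed behind the scenes
    r = httpx.get("https://httpbin.org/get", params={"q": "demo"}, timeout=10.0)
    r.raise_for_status()

    # streaming variant, mirroring the stream() helper above
    with httpx.stream("GET", "https://httpbin.org/bytes/1024") as resp:
        size = sum(len(chunk) for chunk in resp.iter_bytes())

    # for many requests, reuse one Client so connections stay pooled
    with httpx.Client(timeout=10.0) as client:
        for i in range(3):
            client.get("https://httpbin.org/get", params={"i": i})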
diff --git a/spaces/DShrimp/PoseMaker/src/body.py b/spaces/DShrimp/PoseMaker/src/body.py
deleted file mode 100644
index ecf06938faf81a153c0090e8ceccc5ff94771ee5..0000000000000000000000000000000000000000
--- a/spaces/DShrimp/PoseMaker/src/body.py
+++ /dev/null
@@ -1,218 +0,0 @@
-import cv2
-import numpy as np
-import math
-import time
-from scipy.ndimage.filters import gaussian_filter
-import matplotlib.pyplot as plt
-import matplotlib
-import torch
-from torchvision import transforms
-
-from src import util
-from src.model import bodypose_model
-
-class Body(object):
- def __init__(self, model_path):
- self.model = bodypose_model()
- if torch.cuda.is_available():
- self.model = self.model.cuda()
- model_dict = util.transfer(self.model, torch.load(model_path))
- self.model.load_state_dict(model_dict)
- self.model.eval()
-
- def __call__(self, oriImg):
- # scale_search = [0.5, 1.0, 1.5, 2.0]
- scale_search = [0.5]
- boxsize = 368
- stride = 8
- padValue = 128
- thre1 = 0.1
- thre2 = 0.05
- multiplier = [x * boxsize / oriImg.shape[0] for x in scale_search]
- heatmap_avg = np.zeros((oriImg.shape[0], oriImg.shape[1], 19))
- paf_avg = np.zeros((oriImg.shape[0], oriImg.shape[1], 38))
-
- for m in range(len(multiplier)):
- scale = multiplier[m]
- imageToTest = cv2.resize(oriImg, (0, 0), fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
- imageToTest_padded, pad = util.padRightDownCorner(imageToTest, stride, padValue)
- im = np.transpose(np.float32(imageToTest_padded[:, :, :, np.newaxis]), (3, 2, 0, 1)) / 256 - 0.5
- im = np.ascontiguousarray(im)
-
- data = torch.from_numpy(im).float()
- if torch.cuda.is_available():
- data = data.cuda()
- # data = data.permute([2, 0, 1]).unsqueeze(0).float()
- with torch.no_grad():
- Mconv7_stage6_L1, Mconv7_stage6_L2 = self.model(data)
- Mconv7_stage6_L1 = Mconv7_stage6_L1.cpu().numpy()
- Mconv7_stage6_L2 = Mconv7_stage6_L2.cpu().numpy()
-
- # extract outputs, resize, and remove padding
- # heatmap = np.transpose(np.squeeze(net.blobs[output_blobs.keys()[1]].data), (1, 2, 0)) # output 1 is heatmaps
- heatmap = np.transpose(np.squeeze(Mconv7_stage6_L2), (1, 2, 0)) # output 1 is heatmaps
- heatmap = cv2.resize(heatmap, (0, 0), fx=stride, fy=stride, interpolation=cv2.INTER_CUBIC)
- heatmap = heatmap[:imageToTest_padded.shape[0] - pad[2], :imageToTest_padded.shape[1] - pad[3], :]
- heatmap = cv2.resize(heatmap, (oriImg.shape[1], oriImg.shape[0]), interpolation=cv2.INTER_CUBIC)
-
- # paf = np.transpose(np.squeeze(net.blobs[output_blobs.keys()[0]].data), (1, 2, 0)) # output 0 is PAFs
- paf = np.transpose(np.squeeze(Mconv7_stage6_L1), (1, 2, 0)) # output 0 is PAFs
- paf = cv2.resize(paf, (0, 0), fx=stride, fy=stride, interpolation=cv2.INTER_CUBIC)
- paf = paf[:imageToTest_padded.shape[0] - pad[2], :imageToTest_padded.shape[1] - pad[3], :]
- paf = cv2.resize(paf, (oriImg.shape[1], oriImg.shape[0]), interpolation=cv2.INTER_CUBIC)
-
- heatmap_avg = heatmap_avg + heatmap / len(multiplier)
- paf_avg = paf_avg + paf / len(multiplier)
-
- all_peaks = []
- peak_counter = 0
-
- for part in range(18):
- map_ori = heatmap_avg[:, :, part]
- one_heatmap = gaussian_filter(map_ori, sigma=3)
-
- map_left = np.zeros(one_heatmap.shape)
- map_left[1:, :] = one_heatmap[:-1, :]
- map_right = np.zeros(one_heatmap.shape)
- map_right[:-1, :] = one_heatmap[1:, :]
- map_up = np.zeros(one_heatmap.shape)
- map_up[:, 1:] = one_heatmap[:, :-1]
- map_down = np.zeros(one_heatmap.shape)
- map_down[:, :-1] = one_heatmap[:, 1:]
-
- peaks_binary = np.logical_and.reduce(
- (one_heatmap >= map_left, one_heatmap >= map_right, one_heatmap >= map_up, one_heatmap >= map_down, one_heatmap > thre1))
- peaks = list(zip(np.nonzero(peaks_binary)[1], np.nonzero(peaks_binary)[0])) # note reverse
- peaks_with_score = [x + (map_ori[x[1], x[0]],) for x in peaks]
- peak_id = range(peak_counter, peak_counter + len(peaks))
- peaks_with_score_and_id = [peaks_with_score[i] + (peak_id[i],) for i in range(len(peak_id))]
-
- all_peaks.append(peaks_with_score_and_id)
- peak_counter += len(peaks)
-
- # find connection in the specified sequence, center 29 is in the position 15
- limbSeq = [[2, 3], [2, 6], [3, 4], [4, 5], [6, 7], [7, 8], [2, 9], [9, 10], \
- [10, 11], [2, 12], [12, 13], [13, 14], [2, 1], [1, 15], [15, 17], \
- [1, 16], [16, 18], [3, 17], [6, 18]]
- # the middle joints heatmap correspondence
- mapIdx = [[31, 32], [39, 40], [33, 34], [35, 36], [41, 42], [43, 44], [19, 20], [21, 22], \
- [23, 24], [25, 26], [27, 28], [29, 30], [47, 48], [49, 50], [53, 54], [51, 52], \
- [55, 56], [37, 38], [45, 46]]
-
- connection_all = []
- special_k = []
- mid_num = 10
-
- for k in range(len(mapIdx)):
- score_mid = paf_avg[:, :, [x - 19 for x in mapIdx[k]]]
- candA = all_peaks[limbSeq[k][0] - 1]
- candB = all_peaks[limbSeq[k][1] - 1]
- nA = len(candA)
- nB = len(candB)
- indexA, indexB = limbSeq[k]
- if (nA != 0 and nB != 0):
- connection_candidate = []
- for i in range(nA):
- for j in range(nB):
- vec = np.subtract(candB[j][:2], candA[i][:2])
- norm = math.sqrt(vec[0] * vec[0] + vec[1] * vec[1])
- norm = max(0.001, norm)
- vec = np.divide(vec, norm)
-
- startend = list(zip(np.linspace(candA[i][0], candB[j][0], num=mid_num), \
- np.linspace(candA[i][1], candB[j][1], num=mid_num)))
-
- vec_x = np.array([score_mid[int(round(startend[I][1])), int(round(startend[I][0])), 0] \
- for I in range(len(startend))])
- vec_y = np.array([score_mid[int(round(startend[I][1])), int(round(startend[I][0])), 1] \
- for I in range(len(startend))])
-
- score_midpts = np.multiply(vec_x, vec[0]) + np.multiply(vec_y, vec[1])
- score_with_dist_prior = sum(score_midpts) / len(score_midpts) + min(
- 0.5 * oriImg.shape[0] / norm - 1, 0)
- criterion1 = len(np.nonzero(score_midpts > thre2)[0]) > 0.8 * len(score_midpts)
- criterion2 = score_with_dist_prior > 0
- if criterion1 and criterion2:
- connection_candidate.append(
- [i, j, score_with_dist_prior, score_with_dist_prior + candA[i][2] + candB[j][2]])
-
- connection_candidate = sorted(connection_candidate, key=lambda x: x[2], reverse=True)
- connection = np.zeros((0, 5))
- for c in range(len(connection_candidate)):
- i, j, s = connection_candidate[c][0:3]
- if (i not in connection[:, 3] and j not in connection[:, 4]):
- connection = np.vstack([connection, [candA[i][3], candB[j][3], s, i, j]])
- if (len(connection) >= min(nA, nB)):
- break
-
- connection_all.append(connection)
- else:
- special_k.append(k)
- connection_all.append([])
-
- # last number in each row is the total parts number of that person
- # the second last number in each row is the score of the overall configuration
- subset = -1 * np.ones((0, 20))
- candidate = np.array([item for sublist in all_peaks for item in sublist])
-
- for k in range(len(mapIdx)):
- if k not in special_k:
- partAs = connection_all[k][:, 0]
- partBs = connection_all[k][:, 1]
- indexA, indexB = np.array(limbSeq[k]) - 1
-
- for i in range(len(connection_all[k])): # = 1:size(temp,1)
- found = 0
- subset_idx = [-1, -1]
- for j in range(len(subset)): # 1:size(subset,1):
- if subset[j][indexA] == partAs[i] or subset[j][indexB] == partBs[i]:
- subset_idx[found] = j
- found += 1
-
- if found == 1:
- j = subset_idx[0]
- if subset[j][indexB] != partBs[i]:
- subset[j][indexB] = partBs[i]
- subset[j][-1] += 1
- subset[j][-2] += candidate[partBs[i].astype(int), 2] + connection_all[k][i][2]
- elif found == 2: # if found 2 and disjoint, merge them
- j1, j2 = subset_idx
- membership = ((subset[j1] >= 0).astype(int) + (subset[j2] >= 0).astype(int))[:-2]
- if len(np.nonzero(membership == 2)[0]) == 0: # merge
- subset[j1][:-2] += (subset[j2][:-2] + 1)
- subset[j1][-2:] += subset[j2][-2:]
- subset[j1][-2] += connection_all[k][i][2]
- subset = np.delete(subset, j2, 0)
- else: # as like found == 1
- subset[j1][indexB] = partBs[i]
- subset[j1][-1] += 1
- subset[j1][-2] += candidate[partBs[i].astype(int), 2] + connection_all[k][i][2]
-
- # if partA was not found in any existing subset, create a new one
- elif not found and k < 17:
- row = -1 * np.ones(20)
- row[indexA] = partAs[i]
- row[indexB] = partBs[i]
- row[-1] = 2
- row[-2] = sum(candidate[connection_all[k][i, :2].astype(int), 2]) + connection_all[k][i][2]
- subset = np.vstack([subset, row])
- # delete some rows of subset which has few parts occur
- deleteIdx = []
- for i in range(len(subset)):
- if subset[i][-1] < 4 or subset[i][-2] / subset[i][-1] < 0.4:
- deleteIdx.append(i)
- subset = np.delete(subset, deleteIdx, axis=0)
-
- # subset: n*20 array, 0-17 is the index in candidate, 18 is the total score, 19 is the total parts
- # candidate: x, y, score, id
- return candidate, subset
-
-if __name__ == "__main__":
- body_estimation = Body('../model/body_pose_model.pth')
-
- test_image = '../images/ski.jpg'
- oriImg = cv2.imread(test_image) # B,G,R order
- candidate, subset = body_estimation(oriImg)
- canvas = util.draw_bodypose(oriImg, candidate, subset)
- plt.imshow(canvas[:, :, [2, 1, 0]])
- plt.show()
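The candidate/subset arrays returned by Body.__call__ are compact: each candidate row is [x, y, score, id], and each subset row stores, per detected person, 18 indices into candidate (or -1 when a part was not found). A small decoding sketch, assuming outputs from the class above:

    def extract_keypoints(candidate, subset):
        """Convert (candidate, subset) into a list of per-person keypoint dicts."""
        people = []
        for person in subset:
            keypoints = {}
            for part in range(18):
                idx = int(person[part])
                if idx >= 0:  # -1 means this body part was not detected
                    x, y, score = candidate[idx][:3]
                    keypoints[part] = (float(x), float(y), float(score))
            people.append(keypoints)
        return people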
diff --git a/spaces/ECCV2022/bytetrack/yolox/evaluators/__init__.py b/spaces/ECCV2022/bytetrack/yolox/evaluators/__init__.py
deleted file mode 100644
index 5d704e05c79409fb053be1a8f8ce4676a015b054..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/bytetrack/yolox/evaluators/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-# Copyright (c) Megvii, Inc. and its affiliates.
-
-from .coco_evaluator import COCOEvaluator
-from .mot_evaluator import MOTEvaluator
diff --git a/spaces/Egrt/GCycleGAN/utils/dataloader.py b/spaces/Egrt/GCycleGAN/utils/dataloader.py
deleted file mode 100644
index cce62ca505f238ddcc52e59de28b596b2c1c3ae7..0000000000000000000000000000000000000000
--- a/spaces/Egrt/GCycleGAN/utils/dataloader.py
+++ /dev/null
@@ -1,45 +0,0 @@
-import numpy as np
-import torch
-from PIL import Image
-from torch.utils.data.dataset import Dataset
-
-from utils.utils import cvtColor, preprocess_input
-
-
-class CycleGanDataset(Dataset):
- def __init__(self, annotation_lines_A, annotation_lines_B, input_shape):
- super(CycleGanDataset, self).__init__()
-
- self.annotation_lines_A = annotation_lines_A
- self.annotation_lines_B = annotation_lines_B
- self.length_A = len(self.annotation_lines_A)
- self.length_B = len(self.annotation_lines_B)
-
- self.input_shape = input_shape
-
- def __len__(self):
- return max(self.length_A, self.length_B)
-
- def __getitem__(self, index):
- index_A = index % self.length_A
- image_A = Image.open(self.annotation_lines_A[index_A].split(';')[1].split()[0])
- image_A = cvtColor(image_A).resize([self.input_shape[1], self.input_shape[0]], Image.BICUBIC)
- image_A = np.array(image_A, dtype=np.float32)
- image_A = np.transpose(preprocess_input(image_A), (2, 0, 1))
-
- index_B = index % self.length_B
- image_B = Image.open(self.annotation_lines_B[index_B].split(';')[1].split()[0])
- image_B = cvtColor(image_B).resize([self.input_shape[1], self.input_shape[0]], Image.BICUBIC)
- image_B = np.array(image_B, dtype=np.float32)
- image_B = np.transpose(preprocess_input(image_B), (2, 0, 1))
- return image_A, image_B
-
-def CycleGan_dataset_collate(batch):
- images_A = []
- images_B = []
- for image_A, image_B in batch:
- images_A.append(image_A)
- images_B.append(image_B)
- images_A = torch.from_numpy(np.array(images_A, np.float32))
- images_B = torch.from_numpy(np.array(images_B, np.float32))
- return images_A, images_B
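Wiring the dataset into a torch DataLoader is straightforward; a sketch follows. The "label;path" annotation-line format is inferred from the split(';')[1].split()[0] parsing above, and the file paths are placeholders:

    from torch.utils.data import DataLoader

    lines_A = ["0;datasets/trainA/0001.jpg"]
    lines_B = ["1;datasets/trainB/0001.jpg"]
    dataset = CycleGanDataset(lines_A, lines_B, input_shape=(256, 256))
    loader = DataLoader(dataset, batch_size=1, shuffle=True,
                        collate_fn=CycleGan_dataset_collate)
    for images_A, images_B in loader:
        print(images_A.shape, images_B.shape)  # (1, 3, 256, 256) each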
diff --git a/spaces/EronSamez/RVC_HFmeu/LazyImport.py b/spaces/EronSamez/RVC_HFmeu/LazyImport.py
deleted file mode 100644
index 5bdb05ddd5a546a43adba7274b4c3465bb77f2f5..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/LazyImport.py
+++ /dev/null
@@ -1,13 +0,0 @@
-from importlib.util import find_spec, LazyLoader, module_from_spec
-from sys import modules
-
-def lazyload(name):
- if name in modules:
- return modules[name]
- else:
- spec = find_spec(name)
- loader = LazyLoader(spec.loader)
- module = module_from_spec(spec)
- modules[name] = module
- loader.exec_module(module)
- return module
\ No newline at end of file
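lazyload follows the importlib.util.LazyLoader recipe: the module object is registered in sys.modules immediately, but the module body only executes on first attribute access. Usage sketch:

    np = lazyload("numpy")   # cheap: numpy's module body has not run yet
    print(np.arange(3))      # first attribute access triggers the real import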
diff --git a/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_models/dbnet_r50dcnv2_fpnc.py b/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_models/dbnet_r50dcnv2_fpnc.py
deleted file mode 100644
index 1cd1f1baf011554c03c16575b69ebd94eae986b0..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_models/dbnet_r50dcnv2_fpnc.py
+++ /dev/null
@@ -1,23 +0,0 @@
-model = dict(
- type='DBNet',
- backbone=dict(
- type='mmdet.ResNet',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=-1,
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=False,
- style='pytorch',
- dcn=dict(type='DCNv2', deform_groups=1, fallback_on_stride=False),
- init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50'),
- stage_with_dcn=(False, True, True, True)),
- neck=dict(
- type='FPNC', in_channels=[256, 512, 1024, 2048], lateral_channels=256),
- bbox_head=dict(
- type='DBHead',
- in_channels=256,
- loss=dict(type='DBLoss', alpha=5.0, beta=10.0, bbce_loss=True),
- postprocessor=dict(type='DBPostprocessor', text_repr_type='quad')),
- train_cfg=None,
- test_cfg=None)
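This base config is meant to be consumed by the MMOCR/MMCV config machinery; a sketch of instantiating the detector from it, assuming an MMOCR 0.x environment where these imports exist:

    from mmcv import Config
    from mmocr.models import build_detector

    cfg = Config.fromfile('configs/_base_/det_models/dbnet_r50dcnv2_fpnc.py')
    model = build_detector(cfg.model)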
diff --git a/spaces/FantasticGNU/AnomalyGPT/model/modeling_llama.py b/spaces/FantasticGNU/AnomalyGPT/model/modeling_llama.py
deleted file mode 100644
index 12d980e189d902fb1a6d9ea05dc3ca91959b1c8c..0000000000000000000000000000000000000000
--- a/spaces/FantasticGNU/AnomalyGPT/model/modeling_llama.py
+++ /dev/null
@@ -1,755 +0,0 @@
-# This script is based on https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py
-
-""" PyTorch LLaMA model."""
-import math
-from typing import List, Optional, Tuple, Union
-
-import torch
-import torch.utils.checkpoint
-from torch import nn
-from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
-
-from transformers.activations import ACT2FN
-from transformers.modeling_outputs import BaseModelOutputWithPast, CausalLMOutputWithPast, SequenceClassifierOutputWithPast
-from transformers.modeling_utils import PreTrainedModel
-from transformers.utils import add_start_docstrings, add_start_docstrings_to_model_forward, logging, replace_return_docstrings
-from transformers.models.llama.configuration_llama import LlamaConfig
-
-
-logger = logging.get_logger(__name__)
-
-_CONFIG_FOR_DOC = "LlamaConfig"
-
-
-# Copied from transformers.models.bart.modeling_bart._make_causal_mask
-def _make_causal_mask(
- input_ids_shape: torch.Size, dtype: torch.dtype, device: torch.device, past_key_values_length: int = 0
-):
- """
- Make causal mask used for uni-directional (causal) self-attention.
- """
- bsz, tgt_len = input_ids_shape
- mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min, device=device), device=device)
- mask_cond = torch.arange(mask.size(-1), device=device)
- mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
- mask = mask.to(dtype)
-
- if past_key_values_length > 0:
- mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype, device=device), mask], dim=-1)
- return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length)
-
-
-# Copied from transformers.models.bart.modeling_bart._expand_mask
-def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None):
- """
- Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`.
- """
- bsz, src_len = mask.size()
- tgt_len = tgt_len if tgt_len is not None else src_len
-
- expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)
-
- inverted_mask = 1.0 - expanded_mask
-
- return inverted_mask.masked_fill(inverted_mask.to(torch.bool), torch.finfo(dtype).min)
-
-
-class LlamaRMSNorm(nn.Module):
- def __init__(self, hidden_size, eps=1e-6):
- """
- LlamaRMSNorm is equivalent to T5LayerNorm
- """
- super().__init__()
- self.weight = nn.Parameter(torch.ones(hidden_size))
- self.variance_epsilon = eps
-
- def forward(self, hidden_states):
- variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
- hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
-
- # convert into half-precision if necessary
- if self.weight.dtype in [torch.float16, torch.bfloat16]:
- hidden_states = hidden_states.to(self.weight.dtype)
-
- return self.weight * hidden_states
-
-
-class LlamaRotaryEmbedding(torch.nn.Module):
- def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None):
- super().__init__()
- inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float().to(device) / dim))
- self.register_buffer("inv_freq", inv_freq)
-
- # Build here to make `torch.jit.trace` work.
- self.max_seq_len_cached = max_position_embeddings
- t = torch.arange(self.max_seq_len_cached, device=self.inv_freq.device, dtype=self.inv_freq.dtype)
- freqs = torch.einsum("i,j->ij", t, self.inv_freq)
- # The layout differs from the paper, but an equivalent permutation yields the same computation
- emb = torch.cat((freqs, freqs), dim=-1)
- self.register_buffer("cos_cached", emb.cos()[None, None, :, :], persistent=False)
- self.register_buffer("sin_cached", emb.sin()[None, None, :, :], persistent=False)
-
- def forward(self, x, seq_len=None):
- # x: [bs, num_attention_heads, seq_len, head_size]
- # This `if` block is unlikely to be run after we build sin/cos in `__init__`. Keep the logic here just in case.
- if seq_len > self.max_seq_len_cached:
- self.max_seq_len_cached = seq_len
- t = torch.arange(self.max_seq_len_cached, device=x.device, dtype=self.inv_freq.dtype)
- freqs = torch.einsum("i,j->ij", t, self.inv_freq)
- # The layout differs from the paper, but an equivalent permutation yields the same computation
- emb = torch.cat((freqs, freqs), dim=-1).to(x.device)
- self.register_buffer("cos_cached", emb.cos()[None, None, :, :], persistent=False)
- self.register_buffer("sin_cached", emb.sin()[None, None, :, :], persistent=False)
- return (
- self.cos_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
- self.sin_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
- )
-
-
-def rotate_half(x):
- """Rotates half the hidden dims of the input."""
- x1 = x[..., : x.shape[-1] // 2]
- x2 = x[..., x.shape[-1] // 2 :]
- return torch.cat((-x2, x1), dim=-1)
-
-
-def apply_rotary_pos_emb(q, k, cos, sin, position_ids):
- gather_indices = position_ids[:, None, :, None] # [bs, 1, seq_len, 1]
- gather_indices = gather_indices.repeat(1, cos.shape[1], 1, cos.shape[3])
- cos = torch.gather(cos.repeat(gather_indices.shape[0], 1, 1, 1), 2, gather_indices)
- sin = torch.gather(sin.repeat(gather_indices.shape[0], 1, 1, 1), 2, gather_indices)
- q_embed = (q * cos) + (rotate_half(q) * sin)
- k_embed = (k * cos) + (rotate_half(k) * sin)
- return q_embed, k_embed
-
-
-class LlamaMLP(nn.Module):
- def __init__(
- self,
- hidden_size: int,
- intermediate_size: int,
- hidden_act: str,
- ):
- super().__init__()
- self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
- self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)
- self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
- self.act_fn = ACT2FN[hidden_act]
-
- def forward(self, x):
- return self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x))
-
-
-class LlamaAttention(nn.Module):
- """Multi-headed attention from 'Attention Is All You Need' paper"""
-
- def __init__(self, config: LlamaConfig):
- super().__init__()
- self.config = config
- self.hidden_size = config.hidden_size
- self.num_heads = config.num_attention_heads
- self.head_dim = self.hidden_size // self.num_heads
- self.max_position_embeddings = config.max_position_embeddings
-
- if (self.head_dim * self.num_heads) != self.hidden_size:
- raise ValueError(
- f"hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}"
- f" and `num_heads`: {self.num_heads})."
- )
- self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False)
- self.k_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False)
- self.v_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False)
- self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=False)
- self.rotary_emb = LlamaRotaryEmbedding(self.head_dim, max_position_embeddings=self.max_position_embeddings)
-
- def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int):
- return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous()
-
- def forward(
- self,
- hidden_states: torch.Tensor,
- attention_mask: Optional[torch.Tensor] = None,
- position_ids: Optional[torch.LongTensor] = None,
- past_key_value: Optional[Tuple[torch.Tensor]] = None,
- output_attentions: bool = False,
- use_cache: bool = False,
- ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
- bsz, q_len, _ = hidden_states.size()
-
- query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
- key_states = self.k_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
- value_states = self.v_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
-
- kv_seq_len = key_states.shape[-2]
- if past_key_value is not None:
- kv_seq_len += past_key_value[0].shape[-2]
- cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
- query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
- # [bsz, nh, t, hd]
-
- if past_key_value is not None:
- # reuse k, v, self_attention
- key_states = torch.cat([past_key_value[0], key_states], dim=2)
- value_states = torch.cat([past_key_value[1], value_states], dim=2)
-
- past_key_value = (key_states, value_states) if use_cache else None
-
- attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)
-
- if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len):
- raise ValueError(
- f"Attention weights should be of size {(bsz * self.num_heads, q_len, kv_seq_len)}, but is"
- f" {attn_weights.size()}"
- )
-
- if attention_mask is not None:
- if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
- raise ValueError(
- f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}"
- )
- attn_weights = attn_weights + attention_mask
- attn_weights = torch.max(attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min))
-
- # upcast attention to fp32
- attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
- attn_output = torch.matmul(attn_weights, value_states)
-
- if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
- raise ValueError(
- f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is"
- f" {attn_output.size()}"
- )
-
- attn_output = attn_output.transpose(1, 2)
- attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)
-
- attn_output = self.o_proj(attn_output)
-
- if not output_attentions:
- attn_weights = None
-
- return attn_output, attn_weights, past_key_value
-
-
-class LlamaDecoderLayer(nn.Module):
- def __init__(self, config: LlamaConfig):
- super().__init__()
- self.hidden_size = config.hidden_size
- self.self_attn = LlamaAttention(config=config)
- self.mlp = LlamaMLP(
- hidden_size=self.hidden_size,
- intermediate_size=config.intermediate_size,
- hidden_act=config.hidden_act,
- )
- self.input_layernorm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
- self.post_attention_layernorm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
-
- def forward(
- self,
- hidden_states: torch.Tensor,
- attention_mask: Optional[torch.Tensor] = None,
- position_ids: Optional[torch.LongTensor] = None,
- past_key_value: Optional[Tuple[torch.Tensor]] = None,
- output_attentions: Optional[bool] = False,
- use_cache: Optional[bool] = False,
- ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
- """
- Args:
- hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
- attention_mask (`torch.FloatTensor`, *optional*): attention mask of size
- `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under
- returned tensors for more detail.
- use_cache (`bool`, *optional*):
- If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
- (see `past_key_values`).
- past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states
- """
-
- residual = hidden_states
-
- hidden_states = self.input_layernorm(hidden_states)
-
- # Self Attention
- hidden_states, self_attn_weights, present_key_value = self.self_attn(
- hidden_states=hidden_states,
- attention_mask=attention_mask,
- position_ids=position_ids,
- past_key_value=past_key_value,
- output_attentions=output_attentions,
- use_cache=use_cache,
- )
- hidden_states = residual + hidden_states
-
- # Fully Connected
- residual = hidden_states
- hidden_states = self.post_attention_layernorm(hidden_states)
- hidden_states = self.mlp(hidden_states)
- hidden_states = residual + hidden_states
-
- outputs = (hidden_states,)
-
- if output_attentions:
- outputs += (self_attn_weights,)
-
- if use_cache:
- outputs += (present_key_value,)
-
- return outputs
-
-
-LLAMA_START_DOCSTRING = r"""
- This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
- library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
- etc.)
-
- This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
- Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
- and behavior.
-
- Parameters:
- config ([`LlamaConfig`]):
- Model configuration class with all the parameters of the model. Initializing with a config file does not
- load the weights associated with the model, only the configuration. Check out the
- [`~PreTrainedModel.from_pretrained`] method to load the model weights.
-"""
-
-
-@add_start_docstrings(
- "The bare LLaMA Model outputting raw hidden-states without any specific head on top.",
- LLAMA_START_DOCSTRING,
-)
-class LlamaPreTrainedModel(PreTrainedModel):
- config_class = LlamaConfig
- base_model_prefix = "model"
- supports_gradient_checkpointing = True
- _no_split_modules = ["LlamaDecoderLayer"]
- _keys_to_ignore_on_load_unexpected = [r"decoder\.version"]
-
- def _init_weights(self, module):
- std = self.config.initializer_range
- if isinstance(module, nn.Linear):
- module.weight.data.normal_(mean=0.0, std=std)
- if module.bias is not None:
- module.bias.data.zero_()
- elif isinstance(module, nn.Embedding):
- module.weight.data.normal_(mean=0.0, std=std)
- if module.padding_idx is not None:
- module.weight.data[module.padding_idx].zero_()
-
- def _set_gradient_checkpointing(self, module, value=False):
- if isinstance(module, LlamaModel):
- module.gradient_checkpointing = value
-
-
-LLAMA_INPUTS_DOCSTRING = r"""
- Args:
- input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
- Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
- it.
-
- Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
- [`PreTrainedTokenizer.__call__`] for details.
-
- [What are input IDs?](../glossary#input-ids)
- attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
- Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
-
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
-
- [What are attention masks?](../glossary#attention-mask)
-
- Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
- [`PreTrainedTokenizer.__call__`] for details.
-
- If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see
- `past_key_values`).
-
- If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`]
- and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more
- information on the default strategy.
-
- - 1 indicates the head is **not masked**,
- - 0 indicates the head is **masked**.
- position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
- Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
- config.n_positions - 1]`.
-
- [What are position IDs?](../glossary#position-ids)
- past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
- Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape
- `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape
- `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.
-
- Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
- blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
-
- If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
- don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
- `decoder_input_ids` of shape `(batch_size, sequence_length)`.
- inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
- Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
- is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
- model's internal embedding lookup matrix.
- use_cache (`bool`, *optional*):
- If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
- `past_key_values`).
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
- tensors for more detail.
- output_hidden_states (`bool`, *optional*):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
- more detail.
- return_dict (`bool`, *optional*):
- Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
-"""
-
-
-@add_start_docstrings(
- "The bare LLaMA Model outputting raw hidden-states without any specific head on top.",
- LLAMA_START_DOCSTRING,
-)
-class LlamaModel(LlamaPreTrainedModel):
- """
- Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`LlamaDecoderLayer`]
-
- Args:
- config: LlamaConfig
- """
-
- def __init__(self, config: LlamaConfig):
- super().__init__(config)
- self.padding_idx = config.pad_token_id
- self.vocab_size = config.vocab_size
-
- self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
- self.layers = nn.ModuleList([LlamaDecoderLayer(config) for _ in range(config.num_hidden_layers)])
- self.norm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
-
- self.gradient_checkpointing = False
- # Initialize weights and apply final processing
- self.post_init()
-
- def get_input_embeddings(self):
- return self.embed_tokens
-
- def set_input_embeddings(self, value):
- self.embed_tokens = value
-
- # Copied from transformers.models.bart.modeling_bart.BartDecoder._prepare_decoder_attention_mask
- def _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length):
- # create causal mask
- # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
- combined_attention_mask = None
- if input_shape[-1] > 1:
- combined_attention_mask = _make_causal_mask(
- input_shape,
- inputs_embeds.dtype,
- device=inputs_embeds.device,
- past_key_values_length=past_key_values_length,
- )
-
- if attention_mask is not None:
- # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
- expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]).to(
- inputs_embeds.device
- )
- combined_attention_mask = (
- expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask
- )
-
- return combined_attention_mask
-
- @add_start_docstrings_to_model_forward(LLAMA_INPUTS_DOCSTRING)
- def forward(
- self,
- input_ids: torch.LongTensor = None,
- attention_mask: Optional[torch.Tensor] = None,
- position_ids: Optional[torch.LongTensor] = None,
- past_key_values: Optional[List[torch.FloatTensor]] = None,
- inputs_embeds: Optional[torch.FloatTensor] = None,
- query_embeds: Optional[torch.FloatTensor] = None,
- use_cache: Optional[bool] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, BaseModelOutputWithPast]:
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- use_cache = use_cache if use_cache is not None else self.config.use_cache
-
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- # retrieve input_ids and inputs_embeds
- if input_ids is not None and inputs_embeds is not None:
- raise ValueError("You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time")
- elif input_ids is not None:
- batch_size, seq_length = input_ids.shape
- elif inputs_embeds is not None:
- batch_size, seq_length, _ = inputs_embeds.shape
- else:
- raise ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds")
-
- if inputs_embeds is None:
- inputs_embeds = self.embed_tokens(input_ids)
- if query_embeds is not None:
- inputs_embeds = torch.cat([query_embeds, inputs_embeds], dim=1)
- batch_size, seq_length, _ = inputs_embeds.shape
-
- seq_length_with_past = seq_length
- past_key_values_length = 0
-
- if past_key_values is not None:
- past_key_values_length = past_key_values[0][0].shape[2]
- seq_length_with_past = seq_length_with_past + past_key_values_length
-
- if position_ids is None:
- device = input_ids.device if input_ids is not None else inputs_embeds.device
- position_ids = torch.arange(
- past_key_values_length, seq_length + past_key_values_length, dtype=torch.long, device=device
- )
- position_ids = position_ids.unsqueeze(0).view(-1, seq_length)
- else:
- position_ids = position_ids.view(-1, seq_length).long()
-
- # embed positions
- if attention_mask is None:
- attention_mask = torch.ones(
- (batch_size, seq_length_with_past), dtype=torch.bool, device=inputs_embeds.device
- )
- attention_mask = self._prepare_decoder_attention_mask(
- attention_mask, (batch_size, seq_length), inputs_embeds, past_key_values_length
- )
-
- hidden_states = inputs_embeds
-
- if self.gradient_checkpointing and self.training:
- if use_cache:
- logger.warning_once(
- "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
- )
- use_cache = False
-
- # decoder layers
- all_hidden_states = () if output_hidden_states else None
- all_self_attns = () if output_attentions else None
- next_decoder_cache = () if use_cache else None
-
- for idx, decoder_layer in enumerate(self.layers):
- if output_hidden_states:
- all_hidden_states += (hidden_states,)
-
- past_key_value = past_key_values[idx] if past_key_values is not None else None
-
- if self.gradient_checkpointing and self.training:
-
- def create_custom_forward(module):
- def custom_forward(*inputs):
- # None for past_key_value
- return module(*inputs, output_attentions, None)
-
- return custom_forward
-
- layer_outputs = torch.utils.checkpoint.checkpoint(
- create_custom_forward(decoder_layer),
- hidden_states,
- attention_mask,
- position_ids,
- None,
- )
- else:
- layer_outputs = decoder_layer(
- hidden_states,
- attention_mask=attention_mask,
- position_ids=position_ids,
- past_key_value=past_key_value,
- output_attentions=output_attentions,
- use_cache=use_cache,
- )
-
- hidden_states = layer_outputs[0]
-
- if use_cache:
- next_decoder_cache += (layer_outputs[2 if output_attentions else 1],)
-
- if output_attentions:
- all_self_attns += (layer_outputs[1],)
-
- hidden_states = self.norm(hidden_states)
-
- # add hidden states from the last decoder layer
- if output_hidden_states:
- all_hidden_states += (hidden_states,)
-
- next_cache = next_decoder_cache if use_cache else None
- if not return_dict:
- return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns] if v is not None)
- return BaseModelOutputWithPast(
- last_hidden_state=hidden_states,
- past_key_values=next_cache,
- hidden_states=all_hidden_states,
- attentions=all_self_attns,
- )
-
-
-class LlamaForCausalLM(LlamaPreTrainedModel):
- def __init__(self, config):
- super().__init__(config)
- self.model = LlamaModel(config)
-
- self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
-
- # Initialize weights and apply final processing
- self.post_init()
-
- def get_input_embeddings(self):
- return self.model.embed_tokens
-
- def set_input_embeddings(self, value):
- self.model.embed_tokens = value
-
- def get_output_embeddings(self):
- return self.lm_head
-
- def set_output_embeddings(self, new_embeddings):
- self.lm_head = new_embeddings
-
- def set_decoder(self, decoder):
- self.model = decoder
-
- def get_decoder(self):
- return self.model
-
- @add_start_docstrings_to_model_forward(LLAMA_INPUTS_DOCSTRING)
- @replace_return_docstrings(output_type=CausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC)
- def forward(
- self,
- input_ids: torch.LongTensor = None,
- attention_mask: Optional[torch.Tensor] = None,
- position_ids: Optional[torch.LongTensor] = None,
- past_key_values: Optional[List[torch.FloatTensor]] = None,
- inputs_embeds: Optional[torch.FloatTensor] = None,
- query_embeds: Optional[torch.FloatTensor] = None,
- labels: Optional[torch.LongTensor] = None,
- use_cache: Optional[bool] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, CausalLMOutputWithPast]:
- r"""
- Args:
- labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
- Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
- config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
- (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
-
- Returns:
-
- Example:
-
- ```python
- >>> from transformers import AutoTokenizer, LlamaForCausalLM
-
- >>> model = LlamaForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)
- >>> tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)
-
- >>> prompt = "Hey, are you consciours? Can you talk to me?"
- >>> inputs = tokenizer(prompt, return_tensors="pt")
-
- >>> # Generate
- >>> generate_ids = model.generate(inputs.input_ids, max_length=30)
- >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
- "Hey, are you consciours? Can you talk to me?\nI'm not consciours, but I can talk to you."
- ```"""
-
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- # decoder outputs consist of (dec_features, layer_state, dec_hidden, dec_attn)
- outputs = self.model(
- input_ids=input_ids,
- attention_mask=attention_mask,
- position_ids=position_ids,
- past_key_values=past_key_values,
- inputs_embeds=inputs_embeds,
- query_embeds=query_embeds,
- use_cache=use_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- hidden_states = outputs[0]
- logits = self.lm_head(hidden_states)
-
- loss = None
- if labels is not None:
- # Shift so that tokens < n predict n
- shift_logits = logits[..., :-1, :].contiguous()
- shift_labels = labels[..., 1:].contiguous()
- # Flatten the tokens
- loss_fct = CrossEntropyLoss()
- shift_logits = shift_logits.view(-1, self.config.vocab_size)
- shift_labels = shift_labels.view(-1)
- # Enable model parallelism
- shift_labels = shift_labels.to(shift_logits.device)
- loss = loss_fct(shift_logits, shift_labels)
-
- if not return_dict:
- output = (logits,) + outputs[1:]
- return (loss,) + output if loss is not None else output
-
- return CausalLMOutputWithPast(
- loss=loss,
- logits=logits,
- past_key_values=outputs.past_key_values,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- )
-
- def prepare_inputs_for_generation(
- self, input_ids, query_embeds=None, past_key_values=None, attention_mask=None, inputs_embeds=None, **kwargs
- ):
- if past_key_values:
- input_ids = input_ids[:, -1:]
-
- position_ids = kwargs.get("position_ids", None)
- if attention_mask is not None and position_ids is None:
- # create position_ids on the fly for batch generation
- position_ids = attention_mask.long().cumsum(-1) - 1
- position_ids.masked_fill_(attention_mask == 0, 1)
- if past_key_values:
- position_ids = position_ids[:, -1].unsqueeze(-1)
- query_embeds = None
-
- # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
- if inputs_embeds is not None and past_key_values is None:
- model_inputs = {"inputs_embeds": inputs_embeds}
- else:
- model_inputs = {"input_ids": input_ids}
-
- model_inputs.update(
- {
- "position_ids": position_ids,
- "query_embeds": query_embeds,
- "past_key_values": past_key_values,
- "use_cache": kwargs.get("use_cache"),
- "attention_mask": attention_mask,
- }
- )
- return model_inputs
-
- @staticmethod
- def _reorder_cache(past_key_values, beam_idx):
- reordered_past = ()
- for layer_past in past_key_values:
- reordered_past += (tuple(past_state.index_select(0, beam_idx) for past_state in layer_past),)
- return reordered_past
-
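
A quick sanity check of the mask helper above, assuming `_make_causal_mask` is importable from this module; it confirms that each position may attend only to itself and the past:

```python
import torch

# Future positions are filled with the dtype minimum, so softmax drives
# their attention weights to ~0; allowed positions hold 0.
mask = _make_causal_mask(torch.Size([1, 4]), torch.float32, torch.device("cpu"))
print((mask == 0).squeeze())  # lower-triangular True pattern, shape [4, 4]
```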
diff --git a/spaces/Faridmaruf/rvc-Blue-archives/lib/infer_pack/attentions.py b/spaces/Faridmaruf/rvc-Blue-archives/lib/infer_pack/attentions.py
deleted file mode 100644
index 05501be1871643f78dddbeaa529c96667031a8db..0000000000000000000000000000000000000000
--- a/spaces/Faridmaruf/rvc-Blue-archives/lib/infer_pack/attentions.py
+++ /dev/null
@@ -1,417 +0,0 @@
-import copy
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from lib.infer_pack import commons
-from lib.infer_pack import modules
-from lib.infer_pack.modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(
- self,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size=1,
- p_dropout=0.0,
- window_size=10,
- **kwargs
- ):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(
- MultiHeadAttention(
- hidden_channels,
- hidden_channels,
- n_heads,
- p_dropout=p_dropout,
- window_size=window_size,
- )
- )
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(
- FFN(
- hidden_channels,
- hidden_channels,
- filter_channels,
- kernel_size,
- p_dropout=p_dropout,
- )
- )
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(
- self,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size=1,
- p_dropout=0.0,
- proximal_bias=False,
- proximal_init=True,
- **kwargs
- ):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(
- MultiHeadAttention(
- hidden_channels,
- hidden_channels,
- n_heads,
- p_dropout=p_dropout,
- proximal_bias=proximal_bias,
- proximal_init=proximal_init,
- )
- )
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(
- MultiHeadAttention(
- hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout
- )
- )
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(
- FFN(
- hidden_channels,
- hidden_channels,
- filter_channels,
- kernel_size,
- p_dropout=p_dropout,
- causal=True,
- )
- )
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(
- device=x.device, dtype=x.dtype
- )
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(
- self,
- channels,
- out_channels,
- n_heads,
- p_dropout=0.0,
- window_size=None,
- heads_share=True,
- block_length=None,
- proximal_bias=False,
- proximal_init=False,
- ):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(
- torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels)
- * rel_stddev
- )
- self.emb_rel_v = nn.Parameter(
- torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels)
- * rel_stddev
- )
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert (
- t_s == t_t
- ), "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(
- query / math.sqrt(self.k_channels), key_relative_embeddings
- )
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(
- device=scores.device, dtype=scores.dtype
- )
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert (
- t_s == t_t
- ), "Local attention is only available for self-attention."
- block_mask = (
- torch.ones_like(scores)
- .triu(-self.block_length)
- .tril(self.block_length)
- )
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(
- self.emb_rel_v, t_s
- )
- output = output + self._matmul_with_relative_values(
- relative_weights, value_relative_embeddings
- )
- output = (
- output.transpose(2, 3).contiguous().view(b, d, t_t)
- ) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]),
- )
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[
- :, slice_start_position:slice_end_position
- ]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]]))
-
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(
- x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]])
- )
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[
- :, :, :length, length - 1 :
- ]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
- # pad along the column dimension
- x = F.pad(
- x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]])
- )
- x_flat = x.view([batch, heads, length**2 + length * (length - 1)])
- # pad zeros at the beginning to shift the elements after the reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- filter_channels,
- kernel_size,
- p_dropout=0.0,
- activation=None,
- causal=False,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
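
A smoke test for the relative-position `Encoder` above, assuming the repo's `lib.infer_pack` package is importable; the channel sizes are illustrative, not values from any checkpoint:

```python
import torch

# Tensors follow the file's [batch, channels, time] convention.
enc = Encoder(hidden_channels=192, filter_channels=768, n_heads=2,
              n_layers=2, kernel_size=3, p_dropout=0.1, window_size=4)
x = torch.randn(1, 192, 50)
x_mask = torch.ones(1, 1, 50)  # 1 marks valid frames, 0 marks padding
y = enc(x, x_mask)             # -> [1, 192, 50]
```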
diff --git a/spaces/Flux9665/SpeechCloning/reference_audios/__init__.py b/spaces/Flux9665/SpeechCloning/reference_audios/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Fox1997/vits-uma-genshin-honkai/models.py b/spaces/Fox1997/vits-uma-genshin-honkai/models.py
deleted file mode 100644
index 52e15d1b9775038fd6e82b2efe6f95f51c66802d..0000000000000000000000000000000000000000
--- a/spaces/Fox1997/vits-uma-genshin-honkai/models.py
+++ /dev/null
@@ -1,534 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import modules
-import attentions
-import monotonic_align
-
-from torch.nn import Conv1d, ConvTranspose1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from commons import init_weights, get_padding
-
-
-class StochasticDurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0):
- super().__init__()
- filter_channels = in_channels # this override should be removed in a future version.
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.log_flow = modules.Log()
- self.flows = nn.ModuleList()
- self.flows.append(modules.ElementwiseAffine(2))
- for i in range(n_flows):
- self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.flows.append(modules.Flip())
-
- self.post_pre = nn.Conv1d(1, filter_channels, 1)
- self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- self.post_flows = nn.ModuleList()
- self.post_flows.append(modules.ElementwiseAffine(2))
- for i in range(4):
- self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.post_flows.append(modules.Flip())
-
- self.pre = nn.Conv1d(in_channels, filter_channels, 1)
- self.proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, filter_channels, 1)
-
- def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0):
- x = torch.detach(x)
- x = self.pre(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.convs(x, x_mask)
- x = self.proj(x) * x_mask
-
- if not reverse:
- flows = self.flows
- assert w is not None
-
- logdet_tot_q = 0
- h_w = self.post_pre(w)
- h_w = self.post_convs(h_w, x_mask)
- h_w = self.post_proj(h_w) * x_mask
- e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask
- z_q = e_q
- for flow in self.post_flows:
- z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
- logdet_tot_q += logdet_q
- z_u, z1 = torch.split(z_q, [1, 1], 1)
- u = torch.sigmoid(z_u) * x_mask
- z0 = (w - u) * x_mask
- logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2])
- logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q
-
- logdet_tot = 0
- z0, logdet = self.log_flow(z0, x_mask)
- logdet_tot += logdet
- z = torch.cat([z0, z1], 1)
- for flow in flows:
- z, logdet = flow(z, x_mask, g=x, reverse=reverse)
- logdet_tot = logdet_tot + logdet
- nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot
- return nll + logq # [b]
- else:
- flows = list(reversed(self.flows))
- flows = flows[:-2] + [flows[-1]] # remove a useless vflow
- z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale
- for flow in flows:
- z = flow(z, x_mask, g=x, reverse=reverse)
- z0, z1 = torch.split(z, [1, 1], 1)
- logw = z0
- return logw
-
-
-class DurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0):
- super().__init__()
-
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.gin_channels = gin_channels
-
- self.drop = nn.Dropout(p_dropout)
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.norm_1 = modules.LayerNorm(filter_channels)
- self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.norm_2 = modules.LayerNorm(filter_channels)
- self.proj = nn.Conv1d(filter_channels, 1, 1)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, in_channels, 1)
-
- def forward(self, x, x_mask, g=None):
- x = torch.detach(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.conv_1(x * x_mask)
- x = torch.relu(x)
- x = self.norm_1(x)
- x = self.drop(x)
- x = self.conv_2(x * x_mask)
- x = torch.relu(x)
- x = self.norm_2(x)
- x = self.drop(x)
- x = self.proj(x * x_mask)
- return x * x_mask
-
-
-class TextEncoder(nn.Module):
- def __init__(self,
- n_vocab,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout):
- super().__init__()
- self.n_vocab = n_vocab
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
-
- self.emb = nn.Embedding(n_vocab, hidden_channels)
- nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5)
-
- self.encoder = attentions.Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths):
- x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h]
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return x, m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True))
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
-
-class Generator(torch.nn.Module):
- def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)
- resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(weight_norm(
- ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)),
- k, u, padding=(k-u)//2)))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel//(2**(i+1))
- for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i*self.num_kernels+j](x)
- else:
- xs += self.resblocks[i*self.num_kernels+j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- print('Removing weight norm...')
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList([
- norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
- ])
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList([
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ])
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2,3,5,7,11]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-
-class SynthesizerTrn(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(self,
- n_vocab,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- n_speakers=0,
- gin_channels=0,
- use_sdp=True,
- **kwargs):
-
- super().__init__()
- self.n_vocab = n_vocab
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.n_speakers = n_speakers
- self.gin_channels = gin_channels
-
- self.use_sdp = use_sdp
-
- self.enc_p = TextEncoder(n_vocab,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels)
- self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels)
- self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels)
-
- if use_sdp:
- self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels)
- else:
- self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels)
-
- if n_speakers > 1:
- self.emb_g = nn.Embedding(n_speakers, gin_channels)
-
- def forward(self, x, x_lengths, y, y_lengths, sid=None):
-
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
- if self.n_speakers > 0:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = None
-
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
-
- with torch.no_grad():
- # negative cross-entropy
- s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t]
- neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s]
- neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
- neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
- neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s]
- neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4
-
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach()
-
- w = attn.sum(2)
- if self.use_sdp:
- l_length = self.dp(x, x_mask, w, g=g)
- l_length = l_length / torch.sum(x_mask)
- else:
- logw_ = torch.log(w + 1e-6) * x_mask
- logw = self.dp(x, x_mask, g=g)
- l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging
-
- # expand prior
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2)
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)
-
- z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size)
- o = self.dec(z_slice, g=g)
- return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None):
- device = next(self.parameters()).device # get the device the model parameters live on
- x, m_p, logs_p, x_mask = self.enc_p(x.to(device), x_lengths.to(device))
- if self.n_speakers > 0:
- g = self.emb_g(sid.to(device)).unsqueeze(-1) # [b, h, 1]
- else:
- g = None
-
- if self.use_sdp:
- logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w)
- else:
- logw = self.dp(x, x_mask, g=g)
- w = torch.exp(logw) * x_mask * length_scale
- w_ceil = torch.ceil(w)
- y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
- y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype)
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
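- # No ground-truth durations at inference: ceil the predicted durations and
- # expand them into a hard monotonic alignment path.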
- attn = commons.generate_path(w_ceil, attn_mask)
-
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
-
- z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
- z = self.flow(z_p, y_mask, g=g, reverse=True)
- o = self.dec((z * y_mask)[:,:,:max_len], g=g)
- return o, attn, y_mask, (z, z_p, m_p, logs_p)
-
- def voice_conversion(self, y, y_lengths, sid_src, sid_tgt):
- assert self.n_speakers > 0, "n_speakers has to be larger than 0."
- g_src = self.emb_g(sid_src).unsqueeze(-1)
- g_tgt = self.emb_g(sid_tgt).unsqueeze(-1)
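- # Voice conversion: encode with the source speaker, flow into the
- # speaker-independent prior, then invert the flow under the target speaker.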
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src)
- z_p = self.flow(z, y_mask, g=g_src)
- z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True)
- o_hat = self.dec(z_hat * y_mask, g=g_tgt)
- return o_hat, y_mask, (z, z_p, z_hat)
-
diff --git a/spaces/Gen-Sim/Gen-Sim/scripts/test_all_singletask.sh b/spaces/Gen-Sim/Gen-Sim/scripts/test_all_singletask.sh
deleted file mode 100644
index bcca5fd6b4b72e126764b5638940ea384158a40a..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/scripts/test_all_singletask.sh
+++ /dev/null
@@ -1 +0,0 @@
-sh scripts/test_singletask.sh data "align-rope assembling-kits-seq palletizing-boxes towers-of-hanoi assembling-kits align-box-corner manipulating-rope packing-boxes place-red-in-green put-block-in-bowl task packing-boxes-pairs sweeping-piles separating-piles stack-block-pyramid-seq towers-of-hanoi-seq packing-shapes stack-block-pyramid block-insertion packing-google-objects color-coordinated-ball-stacking cylinder-ring-stack stack-three-layer-red-wall color-ordered-blocks-on-pallet build-cylinder-structure build-bridge pyramid-blocks-assemble sort-and-assemble-block-castle stack-blocks-in-container block-on-cylinder-on-pallet corner-sort-cylinders align-pair-colored-blocks-along-line color-specific-container-fill colored-cylinder-in-square construct-colorful-arch color-coordinated-ball-insertion insert-sphere-into-container build-wheel color-coordinated-sphere-and-cylinder-assembly push-piles-into-letter color-coordinated-zone-stacking create-pyramid-with-color-coded-ells color-coordinated-arch-construction color-coordinated-sphere-insertion put-kit-in-bowl move-piles-along-line insert-ell-along-square-path multi-level-block-construction build-car color-coded-blocks-on-corner move-kit-from-zone-to-cylinder multi-level-insertion-and-zone-matching color-coordinated-insertion ball-in-bowl-obstacle-course-new colorful-block-tower-on-cylinder-base manipulating-two-ropes construct-corner-building color-coordinated-block-bridge ball-on-box-on-container color-sequenced-sphere-placement construct-corner-blocks sort-insert-color-coordinated-blocks color-ordered-container-arrangement symmetric-block-bridge-construction connect-boxes-with-rope align-rope-cross-zone vertical-insertion-blocks cylinder-stand-alignment color-coordinated-zone-arrangement insert-blocks-lineup create-pyramid-blocks-and-container mix-piles put-blues-around-red color-sequenced-pyramid-packing put-blocks-between-zones color-coordinated-cylinder-pyramid sweep-and-sort-blocks multi-level-pyramid-construction guided-block-path rainbow-stack color-ordered-insertion-new mixed-color-block-barrier-insertion color-coordinated-block-shifting align-balls-in-colored-zones multicolor-block-bridge sequential-insertion-and-stacking move-bowl-from-pallet-to-corner insertion-in-color-sequenced-zones align-spheres-in-colored-zones color-blocks-in-cylinder-maze color-coordinated-sphere-on-pallet-pyramid sort-and-stack-clr-blocks corner-block-challenge sequential-block-insertion place-blue-on-line-ends kit-in-bowl-in-zone align-rope-along-line sphere-container-color-match stack-color-coordinated-blocks assemble-single-car color-structured-block-tower color-sorted-block-race align-balls-in-colored-boxes color-coordinated-cylinder-ball-match build-house align-cylinders-in-zones sphere-align-stand ball-in-bowl-obstacle-course color-coordinated-block-tower color-sorted-container-stack color-coordinated-cylinder-stand-assembly color-ordered-insertion block-pyramid-with-limited-space color-cued-ball-corner-sorting sorting-blocks-into-pallets place-ball-in-elevated-bowl Four-corner-pyramid-challenge colored-balls-sorting-in-corner color-coordinated-box-ball-matching color-coordinated-cylinder-tower ball-sorting-with-blocks-barrier build-two-circles cylinder-balancing-and-placement"
\ No newline at end of file
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/schedules/schedule_1x.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/schedules/schedule_1x.py
deleted file mode 100644
index 13b3783cbbe93b6c32bc415dc50f633dffa4aec7..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/schedules/schedule_1x.py
+++ /dev/null
@@ -1,11 +0,0 @@
-# optimizer
-optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
-optimizer_config = dict(grad_clip=None)
-# learning policy
-lr_config = dict(
- policy='step',
- warmup='linear',
- warmup_iters=500,
- warmup_ratio=0.001,
- step=[8, 11])
-runner = dict(type='EpochBasedRunner', max_epochs=12)
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/guided_anchoring/ga_rpn_x101_32x4d_fpn_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/guided_anchoring/ga_rpn_x101_32x4d_fpn_1x_coco.py
deleted file mode 100644
index 1e0fe4931e9cb340fcf3b80a4f9380abee500238..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/guided_anchoring/ga_rpn_x101_32x4d_fpn_1x_coco.py
+++ /dev/null
@@ -1,13 +0,0 @@
-_base_ = './ga_rpn_r50_fpn_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://resnext101_32x4d',
- backbone=dict(
- type='ResNeXt',
- depth=101,
- groups=32,
- base_width=4,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- style='pytorch'))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_512x512_40k_voc12aug.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_512x512_40k_voc12aug.py
deleted file mode 100644
index 803c42da35eda861bf32ce0e7866cdc9fad96d0d..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_512x512_40k_voc12aug.py
+++ /dev/null
@@ -1,7 +0,0 @@
-_base_ = [
- '../_base_/models/psanet_r50-d8.py',
- '../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py',
- '../_base_/schedules/schedule_40k.py'
-]
-model = dict(
- decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21))
diff --git a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/setup.py b/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/setup.py
deleted file mode 100644
index 78a172b7c90003b689bde40b49cc8fe1fb8107d4..0000000000000000000000000000000000000000
--- a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/setup.py
+++ /dev/null
@@ -1,65 +0,0 @@
-"""
- Copyright (c) Meta Platforms, Inc. and affiliates.
- All rights reserved.
-
- This source code is licensed under the license found in the
- LICENSE file in the root directory of this source tree.
-
-"""
-
-from pathlib import Path
-
-from setuptools import setup, find_packages
-
-
-NAME = 'audiocraft'
-DESCRIPTION = 'Audio research library for PyTorch'
-
-URL = 'https://github.com/fairinternal/audiocraft'
-AUTHOR = 'FAIR Speech & Audio'
-EMAIL = 'defossez@meta.com'
-REQUIRES_PYTHON = '>=3.8.0'
-
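-# Parse __version__ out of audiocraft/__init__.py without importing the package.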
-for line in open('audiocraft/__init__.py'):
- line = line.strip()
- if '__version__' in line:
- context = {}
- exec(line, context)
- VERSION = context['__version__']
-
-HERE = Path(__file__).parent
-
-try:
- with open(HERE / "README.md", encoding='utf-8') as f:
- long_description = '\n' + f.read()
-except FileNotFoundError:
- long_description = DESCRIPTION
-
-REQUIRED = [i.strip() for i in open(HERE / 'requirements.txt') if not i.startswith('#')]
-
-setup(
- name=NAME,
- version=VERSION,
- description=DESCRIPTION,
- author_email=EMAIL,
- long_description=long_description,
- long_description_content_type='text/markdown',
- author=AUTHOR,
- url=URL,
- python_requires=REQUIRES_PYTHON,
- install_requires=REQUIRED,
- extras_require={
- 'dev': ['coverage', 'flake8', 'mypy', 'pdoc3', 'pytest'],
- },
- packages=find_packages(),
- package_data={'audiocraft': ['py.typed']},
- include_package_data=True,
- license='MIT License',
- classifiers=[
- # Trove classifiers
- # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers
- 'License :: OSI Approved :: MIT License',
- 'Topic :: Multimedia :: Sound/Audio',
- 'Topic :: Scientific/Engineering :: Artificial Intelligence',
- ],
-)
diff --git a/spaces/Grezz/generate_human_motion/VQ-Trans/utils/word_vectorizer.py b/spaces/Grezz/generate_human_motion/VQ-Trans/utils/word_vectorizer.py
deleted file mode 100644
index 557ff97a9539c084167f3eca51fb50f53f33c8ea..0000000000000000000000000000000000000000
--- a/spaces/Grezz/generate_human_motion/VQ-Trans/utils/word_vectorizer.py
+++ /dev/null
@@ -1,99 +0,0 @@
-import numpy as np
-import pickle
-from os.path import join as pjoin
-
-POS_enumerator = {
- 'VERB': 0,
- 'NOUN': 1,
- 'DET': 2,
- 'ADP': 3,
- 'NUM': 4,
- 'AUX': 5,
- 'PRON': 6,
- 'ADJ': 7,
- 'ADV': 8,
- 'Loc_VIP': 9,
- 'Body_VIP': 10,
- 'Obj_VIP': 11,
- 'Act_VIP': 12,
- 'Desc_VIP': 13,
- 'OTHER': 14,
-}
-
-Loc_list = ('left', 'right', 'clockwise', 'counterclockwise', 'anticlockwise', 'forward', 'back', 'backward',
- 'up', 'down', 'straight', 'curve')
-
-Body_list = ('arm', 'chin', 'foot', 'feet', 'face', 'hand', 'mouth', 'leg', 'waist', 'eye', 'knee', 'shoulder', 'thigh')
-
-Obj_List = ('stair', 'dumbbell', 'chair', 'window', 'floor', 'car', 'ball', 'handrail', 'baseball', 'basketball')
-
-Act_list = ('walk', 'run', 'swing', 'pick', 'bring', 'kick', 'put', 'squat', 'throw', 'hop', 'dance', 'jump', 'turn',
- 'stumble', 'stop', 'sit', 'lift', 'lower', 'raise', 'wash', 'stand', 'kneel', 'stroll',
- 'rub', 'bend', 'balance', 'flap', 'jog', 'shuffle', 'lean', 'rotate', 'spin', 'spread', 'climb')
-
-Desc_list = ('slowly', 'carefully', 'fast', 'careful', 'slow', 'quickly', 'happy', 'angry', 'sad', 'happily',
- 'angrily', 'sadly')
-
-VIP_dict = {
- 'Loc_VIP': Loc_list,
- 'Body_VIP': Body_list,
- 'Obj_VIP': Obj_List,
- 'Act_VIP': Act_list,
- 'Desc_VIP': Desc_list,
-}
-
-
-class WordVectorizer(object):
- def __init__(self, meta_root, prefix):
- vectors = np.load(pjoin(meta_root, '%s_data.npy'%prefix))
- words = pickle.load(open(pjoin(meta_root, '%s_words.pkl'%prefix), 'rb'))
- self.word2idx = pickle.load(open(pjoin(meta_root, '%s_idx.pkl'%prefix), 'rb'))
- self.word2vec = {w: vectors[self.word2idx[w]] for w in words}
-
- def _get_pos_ohot(self, pos):
- pos_vec = np.zeros(len(POS_enumerator))
- if pos in POS_enumerator:
- pos_vec[POS_enumerator[pos]] = 1
- else:
- pos_vec[POS_enumerator['OTHER']] = 1
- return pos_vec
-
- def __len__(self):
- return len(self.word2vec)
-
- def __getitem__(self, item):
- word, pos = item.split('/')
- if word in self.word2vec:
- word_vec = self.word2vec[word]
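- # Words on a VIP list get their VIP category as the POS one-hot instead of
- # the tagged POS from the input item.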
- vip_pos = None
- for key, values in VIP_dict.items():
- if word in values:
- vip_pos = key
- break
- if vip_pos is not None:
- pos_vec = self._get_pos_ohot(vip_pos)
- else:
- pos_vec = self._get_pos_ohot(pos)
- else:
- word_vec = self.word2vec['unk']
- pos_vec = self._get_pos_ohot('OTHER')
- return word_vec, pos_vec
-
-
-class WordVectorizerV2(WordVectorizer):
- def __init__(self, meta_root, prefix):
- super(WordVectorizerV2, self).__init__(meta_root, prefix)
- self.idx2word = {self.word2idx[w]: w for w in self.word2idx}
-
- def __getitem__(self, item):
- word_vec, pose_vec = super(WordVectorizerV2, self).__getitem__(item)
- word, pos = item.split('/')
- if word in self.word2vec:
- return word_vec, pose_vec, self.word2idx[word]
- else:
- return word_vec, pose_vec, self.word2idx['unk']
-
- def itos(self, idx):
- if idx == len(self.idx2word):
- return "pad"
- return self.idx2word[idx]
\ No newline at end of file
diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/BBSNet/__init__.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/BBSNet/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/HUIYI/huiyili/README.md b/spaces/HUIYI/huiyili/README.md
deleted file mode 100644
index 0bf56fba96b150ada2ad6881805ce3c329169e3e..0000000000000000000000000000000000000000
--- a/spaces/HUIYI/huiyili/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Huiyili
-emoji: 👁
-colorFrom: purple
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/HaHaBill/LandShapes-Antarctica/models/stylegan/model.py b/spaces/HaHaBill/LandShapes-Antarctica/models/stylegan/model.py
deleted file mode 100644
index a230961c4d1bf0bd2d1efe7972b4baa33c5d7013..0000000000000000000000000000000000000000
--- a/spaces/HaHaBill/LandShapes-Antarctica/models/stylegan/model.py
+++ /dev/null
@@ -1,456 +0,0 @@
-# Copyright 2020 Erik Härkönen. All rights reserved.
-# This file is licensed to you under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License. You may obtain a copy
-# of the License at http://www.apache.org/licenses/LICENSE-2.0
-
-# Unless required by applicable law or agreed to in writing, software distributed under
-# the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR REPRESENTATIONS
-# OF ANY KIND, either express or implied. See the License for the specific language
-# governing permissions and limitations under the License.
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from collections import OrderedDict
-from pathlib import Path
-import requests
-import pickle
-import sys
-
-import numpy as np
-
-# Reimplementation of StyleGAN in PyTorch
-# Source: https://github.com/lernapparat/lernapparat/blob/master/style_gan/pytorch_style_gan.ipynb
-
-class MyLinear(nn.Module):
- """Linear layer with equalized learning rate and custom learning rate multiplier."""
- def __init__(self, input_size, output_size, gain=2**(0.5), use_wscale=False, lrmul=1, bias=True):
- super().__init__()
- he_std = gain * input_size**(-0.5) # He init
- # Equalized learning rate and custom learning rate multiplier.
- if use_wscale:
- init_std = 1.0 / lrmul
- self.w_mul = he_std * lrmul
- else:
- init_std = he_std / lrmul
- self.w_mul = lrmul
- self.weight = torch.nn.Parameter(torch.randn(output_size, input_size) * init_std)
- if bias:
- self.bias = torch.nn.Parameter(torch.zeros(output_size))
- self.b_mul = lrmul
- else:
- self.bias = None
-
- def forward(self, x):
- bias = self.bias
- if bias is not None:
- bias = bias * self.b_mul
- return F.linear(x, self.weight * self.w_mul, bias)
-
-class MyConv2d(nn.Module):
- """Conv layer with equalized learning rate and custom learning rate multiplier."""
- def __init__(self, input_channels, output_channels, kernel_size, gain=2**(0.5), use_wscale=False, lrmul=1, bias=True,
- intermediate=None, upscale=False):
- super().__init__()
- if upscale:
- self.upscale = Upscale2d()
- else:
- self.upscale = None
- he_std = gain * (input_channels * kernel_size ** 2) ** (-0.5) # He init
- self.kernel_size = kernel_size
- if use_wscale:
- init_std = 1.0 / lrmul
- self.w_mul = he_std * lrmul
- else:
- init_std = he_std / lrmul
- self.w_mul = lrmul
- self.weight = torch.nn.Parameter(torch.randn(output_channels, input_channels, kernel_size, kernel_size) * init_std)
- if bias:
- self.bias = torch.nn.Parameter(torch.zeros(output_channels))
- self.b_mul = lrmul
- else:
- self.bias = None
- self.intermediate = intermediate
-
- def forward(self, x):
- bias = self.bias
- if bias is not None:
- bias = bias * self.b_mul
-
- have_convolution = False
- if self.upscale is not None and min(x.shape[2:]) * 2 >= 128:
- # this is the fused upscale + conv from StyleGAN, sadly this seems incompatible with the non-fused way
- # this really needs to be cleaned up and go into the conv...
- w = self.weight * self.w_mul
- w = w.permute(1, 0, 2, 3)
- # probably applying a conv on w would be more efficient. also this quadruples the weight (average)?!
- w = F.pad(w, (1,1,1,1))
- w = w[:, :, 1:, 1:]+ w[:, :, :-1, 1:] + w[:, :, 1:, :-1] + w[:, :, :-1, :-1]
- x = F.conv_transpose2d(x, w, stride=2, padding=(w.size(-1)-1)//2)
- have_convolution = True
- elif self.upscale is not None:
- x = self.upscale(x)
-
- if not have_convolution and self.intermediate is None:
- return F.conv2d(x, self.weight * self.w_mul, bias, padding=self.kernel_size//2)
- elif not have_convolution:
- x = F.conv2d(x, self.weight * self.w_mul, None, padding=self.kernel_size//2)
-
- if self.intermediate is not None:
- x = self.intermediate(x)
- if bias is not None:
- x = x + bias.view(1, -1, 1, 1)
- return x
-
-class NoiseLayer(nn.Module):
- """adds noise. noise is per pixel (constant over channels) with per-channel weight"""
- def __init__(self, channels):
- super().__init__()
- self.weight = nn.Parameter(torch.zeros(channels))
- self.noise = None
-
- def forward(self, x, noise=None):
- if noise is None and self.noise is None:
- noise = torch.randn(x.size(0), 1, x.size(2), x.size(3), device=x.device, dtype=x.dtype)
- elif noise is None:
- # here is a little trick: if you grab all the NoiseLayers and set each
- # module's .noise attribute, you can use pre-defined noise.
- # Very useful for analysis.
- noise = self.noise
- x = x + self.weight.view(1, -1, 1, 1) * noise
- return x
-
-class StyleMod(nn.Module):
- def __init__(self, latent_size, channels, use_wscale):
- super(StyleMod, self).__init__()
- self.lin = MyLinear(latent_size,
- channels * 2,
- gain=1.0, use_wscale=use_wscale)
-
- def forward(self, x, latent):
- style = self.lin(latent) # style => [batch_size, n_channels*2]
- shape = [-1, 2, x.size(1)] + (x.dim() - 2) * [1]
- style = style.view(shape) # [batch_size, 2, n_channels, ...]
- x = x * (style[:, 0] + 1.) + style[:, 1]
- return x
-
-class PixelNormLayer(nn.Module):
- def __init__(self, epsilon=1e-8):
- super().__init__()
- self.epsilon = epsilon
- def forward(self, x):
- return x * torch.rsqrt(torch.mean(x**2, dim=1, keepdim=True) + self.epsilon)
-
-class BlurLayer(nn.Module):
- def __init__(self, kernel=[1, 2, 1], normalize=True, flip=False, stride=1):
- super(BlurLayer, self).__init__()
- kernel=[1, 2, 1]
- kernel = torch.tensor(kernel, dtype=torch.float32)
- kernel = kernel[:, None] * kernel[None, :]
- kernel = kernel[None, None]
- if normalize:
- kernel = kernel / kernel.sum()
- if flip:
- kernel = kernel[:, :, ::-1, ::-1]
- self.register_buffer('kernel', kernel)
- self.stride = stride
-
- def forward(self, x):
- # expand kernel channels
- kernel = self.kernel.expand(x.size(1), -1, -1, -1)
- x = F.conv2d(
- x,
- kernel,
- stride=self.stride,
- padding=int((self.kernel.size(2)-1)/2),
- groups=x.size(1)
- )
- return x
-
-def upscale2d(x, factor=2, gain=1):
- assert x.dim() == 4
- if gain != 1:
- x = x * gain
- if factor != 1:
- shape = x.shape
- x = x.view(shape[0], shape[1], shape[2], 1, shape[3], 1).expand(-1, -1, -1, factor, -1, factor)
- x = x.contiguous().view(shape[0], shape[1], factor * shape[2], factor * shape[3])
- return x
-
-class Upscale2d(nn.Module):
- def __init__(self, factor=2, gain=1):
- super().__init__()
- assert isinstance(factor, int) and factor >= 1
- self.gain = gain
- self.factor = factor
- def forward(self, x):
- return upscale2d(x, factor=self.factor, gain=self.gain)
-
-class G_mapping(nn.Sequential):
- def __init__(self, nonlinearity='lrelu', use_wscale=True):
- act, gain = {'relu': (torch.relu, np.sqrt(2)),
- 'lrelu': (nn.LeakyReLU(negative_slope=0.2), np.sqrt(2))}[nonlinearity]
- layers = [
- ('pixel_norm', PixelNormLayer()),
- ('dense0', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)),
- ('dense0_act', act),
- ('dense1', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)),
- ('dense1_act', act),
- ('dense2', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)),
- ('dense2_act', act),
- ('dense3', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)),
- ('dense3_act', act),
- ('dense4', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)),
- ('dense4_act', act),
- ('dense5', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)),
- ('dense5_act', act),
- ('dense6', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)),
- ('dense6_act', act),
- ('dense7', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)),
- ('dense7_act', act)
- ]
- super().__init__(OrderedDict(layers))
-
- def forward(self, x):
- return super().forward(x)
-
-class Truncation(nn.Module):
- def __init__(self, avg_latent, max_layer=8, threshold=0.7):
- super().__init__()
- self.max_layer = max_layer
- self.threshold = threshold
- self.register_buffer('avg_latent', avg_latent)
- def forward(self, x):
- assert x.dim() == 3
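- # Truncation trick: lerp W toward the average latent for the first
- # `max_layer` style layers only, trading diversity for sample quality.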
- interp = torch.lerp(self.avg_latent, x, self.threshold)
- do_trunc = (torch.arange(x.size(1), device=x.device) < self.max_layer).view(1, -1, 1)
- return torch.where(do_trunc, interp, x)
-
-class LayerEpilogue(nn.Module):
- """Things to do at the end of each layer."""
- def __init__(self, channels, dlatent_size, use_wscale, use_noise, use_pixel_norm, use_instance_norm, use_styles, activation_layer):
- super().__init__()
- layers = []
- if use_noise:
- layers.append(('noise', NoiseLayer(channels)))
- layers.append(('activation', activation_layer))
- if use_pixel_norm:
- layers.append(('pixel_norm', PixelNormLayer()))
- if use_instance_norm:
- layers.append(('instance_norm', nn.InstanceNorm2d(channels)))
- self.top_epi = nn.Sequential(OrderedDict(layers))
- if use_styles:
- self.style_mod = StyleMod(dlatent_size, channels, use_wscale=use_wscale)
- else:
- self.style_mod = None
- def forward(self, x, dlatents_in_slice=None):
- x = self.top_epi(x)
- if self.style_mod is not None:
- x = self.style_mod(x, dlatents_in_slice)
- else:
- assert dlatents_in_slice is None
- return x
-
-
-class InputBlock(nn.Module):
- def __init__(self, nf, dlatent_size, const_input_layer, gain, use_wscale, use_noise, use_pixel_norm, use_instance_norm, use_styles, activation_layer):
- super().__init__()
- self.const_input_layer = const_input_layer
- self.nf = nf
- if self.const_input_layer:
- # called 'const' in tf
- self.const = nn.Parameter(torch.ones(1, nf, 4, 4))
- self.bias = nn.Parameter(torch.ones(nf))
- else:
- self.dense = MyLinear(dlatent_size, nf*16, gain=gain/4, use_wscale=use_wscale) # tweak gain to match the official implementation of Progressive GAN
- self.epi1 = LayerEpilogue(nf, dlatent_size, use_wscale, use_noise, use_pixel_norm, use_instance_norm, use_styles, activation_layer)
- self.conv = MyConv2d(nf, nf, 3, gain=gain, use_wscale=use_wscale)
- self.epi2 = LayerEpilogue(nf, dlatent_size, use_wscale, use_noise, use_pixel_norm, use_instance_norm, use_styles, activation_layer)
-
- def forward(self, dlatents_in_range):
- batch_size = dlatents_in_range.size(0)
- if self.const_input_layer:
- x = self.const.expand(batch_size, -1, -1, -1)
- x = x + self.bias.view(1, -1, 1, 1)
- else:
- x = self.dense(dlatents_in_range[:, 0]).view(batch_size, self.nf, 4, 4)
- x = self.epi1(x, dlatents_in_range[:, 0])
- x = self.conv(x)
- x = self.epi2(x, dlatents_in_range[:, 1])
- return x
-
-
-class GSynthesisBlock(nn.Module):
- def __init__(self, in_channels, out_channels, blur_filter, dlatent_size, gain, use_wscale, use_noise, use_pixel_norm, use_instance_norm, use_styles, activation_layer):
- # 2**res x 2**res # res = 3..resolution_log2
- super().__init__()
- if blur_filter:
- blur = BlurLayer(blur_filter)
- else:
- blur = None
- self.conv0_up = MyConv2d(in_channels, out_channels, kernel_size=3, gain=gain, use_wscale=use_wscale,
- intermediate=blur, upscale=True)
- self.epi1 = LayerEpilogue(out_channels, dlatent_size, use_wscale, use_noise, use_pixel_norm, use_instance_norm, use_styles, activation_layer)
- self.conv1 = MyConv2d(out_channels, out_channels, kernel_size=3, gain=gain, use_wscale=use_wscale)
- self.epi2 = LayerEpilogue(out_channels, dlatent_size, use_wscale, use_noise, use_pixel_norm, use_instance_norm, use_styles, activation_layer)
-
- def forward(self, x, dlatents_in_range):
- x = self.conv0_up(x)
- x = self.epi1(x, dlatents_in_range[:, 0])
- x = self.conv1(x)
- x = self.epi2(x, dlatents_in_range[:, 1])
- return x
-
-class G_synthesis(nn.Module):
- def __init__(self,
- dlatent_size = 512, # Disentangled latent (W) dimensionality.
- num_channels = 3, # Number of output color channels.
- resolution = 1024, # Output resolution.
- fmap_base = 8192, # Overall multiplier for the number of feature maps.
- fmap_decay = 1.0, # log2 feature map reduction when doubling the resolution.
- fmap_max = 512, # Maximum number of feature maps in any layer.
- use_styles = True, # Enable style inputs?
- const_input_layer = True, # First layer is a learned constant?
- use_noise = True, # Enable noise inputs?
- randomize_noise = True, # True = randomize noise inputs every time (non-deterministic), False = read noise inputs from variables.
- nonlinearity = 'lrelu', # Activation function: 'relu', 'lrelu'
- use_wscale = True, # Enable equalized learning rate?
- use_pixel_norm = False, # Enable pixelwise feature vector normalization?
- use_instance_norm = True, # Enable instance normalization?
- dtype = torch.float32, # Data type to use for activations and outputs.
- blur_filter = [1,2,1], # Low-pass filter to apply when resampling activations. None = no filtering.
- ):
-
- super().__init__()
- def nf(stage):
- return min(int(fmap_base / (2.0 ** (stage * fmap_decay))), fmap_max)
- self.dlatent_size = dlatent_size
- resolution_log2 = int(np.log2(resolution))
- assert resolution == 2**resolution_log2 and resolution >= 4
-
- act, gain = {'relu': (torch.relu, np.sqrt(2)),
- 'lrelu': (nn.LeakyReLU(negative_slope=0.2), np.sqrt(2))}[nonlinearity]
- num_layers = resolution_log2 * 2 - 2
- num_styles = num_layers if use_styles else 1
- torgbs = []
- blocks = []
- for res in range(2, resolution_log2 + 1):
- channels = nf(res-1)
- name = '{s}x{s}'.format(s=2**res)
- if res == 2:
- blocks.append((name,
- InputBlock(channels, dlatent_size, const_input_layer, gain, use_wscale,
- use_noise, use_pixel_norm, use_instance_norm, use_styles, act)))
-
- else:
- blocks.append((name,
- GSynthesisBlock(last_channels, channels, blur_filter, dlatent_size, gain, use_wscale, use_noise, use_pixel_norm, use_instance_norm, use_styles, act)))
- last_channels = channels
- self.torgb = MyConv2d(channels, num_channels, 1, gain=1, use_wscale=use_wscale)
- self.blocks = nn.ModuleDict(OrderedDict(blocks))
-
- def forward(self, dlatents_in):
- # Input: Disentangled latents (W) [minibatch, num_layers, dlatent_size].
- # lod_in = tf.cast(tf.get_variable('lod', initializer=np.float32(0), trainable=False), dtype)
- batch_size = dlatents_in.size(0)
- for i, m in enumerate(self.blocks.values()):
- if i == 0:
- x = m(dlatents_in[:, 2*i:2*i+2])
- else:
- x = m(x, dlatents_in[:, 2*i:2*i+2])
- rgb = self.torgb(x)
- return rgb
-
-
-class StyleGAN_G(nn.Sequential):
- def __init__(self, resolution, truncation=1.0):
- self.resolution = resolution
- self.layers = OrderedDict([
- ('g_mapping', G_mapping()),
- #('truncation', Truncation(avg_latent)),
- ('g_synthesis', G_synthesis(resolution=resolution)),
- ])
- super().__init__(self.layers)
-
- def forward(self, x, latent_is_w=False):
- if isinstance(x, list):
- assert len(x) == 18, 'Must provide 1 or 18 latents'
- if not latent_is_w:
- x = [self.layers['g_mapping'].forward(l) for l in x]
- x = torch.stack(x, dim=1)
- else:
- if not latent_is_w:
- x = self.layers['g_mapping'].forward(x)
- x = x.unsqueeze(1).expand(-1, 18, -1)
-
- x = self.layers['g_synthesis'].forward(x)
-
- return x
-
- # From: https://github.com/lernapparat/lernapparat/releases/download/v2019-02-01/
- def load_weights(self, checkpoint):
- self.load_state_dict(torch.load(checkpoint))
-
- def export_from_tf(self, pickle_path):
- module_path = Path(__file__).parent / 'stylegan_tf'
- sys.path.append(str(module_path.resolve()))
-
- import dnnlib, dnnlib.tflib, pickle, torch, collections
- dnnlib.tflib.init_tf()
-
- weights = pickle.load(open(pickle_path,'rb'))
- weights_pt = [collections.OrderedDict([(k, torch.from_numpy(v.value().eval())) for k,v in w.trainables.items()]) for w in weights]
- #torch.save(weights_pt, pytorch_name)
-
- # then on the PyTorch side run
- state_G, state_D, state_Gs = weights_pt #torch.load('./karras2019stylegan-ffhq-1024x1024.pt')
- def key_translate(k):
- k = k.lower().split('/')
- if k[0] == 'g_synthesis':
- if not k[1].startswith('torgb'):
- k.insert(1, 'blocks')
- k = '.'.join(k)
- k = (k.replace('const.const','const').replace('const.bias','bias').replace('const.stylemod','epi1.style_mod.lin')
- .replace('const.noise.weight','epi1.top_epi.noise.weight')
- .replace('conv.noise.weight','epi2.top_epi.noise.weight')
- .replace('conv.stylemod','epi2.style_mod.lin')
- .replace('conv0_up.noise.weight', 'epi1.top_epi.noise.weight')
- .replace('conv0_up.stylemod','epi1.style_mod.lin')
- .replace('conv1.noise.weight', 'epi2.top_epi.noise.weight')
- .replace('conv1.stylemod','epi2.style_mod.lin')
- .replace('torgb_lod0','torgb'))
- else:
- k = '.'.join(k)
- return k
-
- def weight_translate(k, w):
- k = key_translate(k)
- if k.endswith('.weight'):
- if w.dim() == 2:
- w = w.t()
- elif w.dim() == 1:
- pass
- else:
- assert w.dim() == 4
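- # TF stores conv weights as [kH, kW, inC, outC]; PyTorch wants [outC, inC, kH, kW].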
- w = w.permute(3, 2, 0, 1)
- return w
-
- # we delete the useless torgb filters
- param_dict = {key_translate(k) : weight_translate(k, v) for k,v in state_Gs.items() if 'torgb_lod' not in key_translate(k)}
- if 1:
- sd_shapes = {k : v.shape for k,v in self.state_dict().items()}
- param_shapes = {k : v.shape for k,v in param_dict.items() }
-
- for k in list(sd_shapes)+list(param_shapes):
- pds = param_shapes.get(k)
- sds = sd_shapes.get(k)
- if pds is None:
- print ("sd only", k, sds)
- elif sds is None:
- print ("pd only", k, pds)
- elif sds != pds:
- print ("mismatch!", k, pds, sds)
-
- self.load_state_dict(param_dict, strict=False) # needed for the blur kernels
- torch.save(self.state_dict(), Path(pickle_path).with_suffix('.pt'))
\ No newline at end of file
diff --git a/spaces/HaoFeng2019/DocTr/IllTr.py b/spaces/HaoFeng2019/DocTr/IllTr.py
deleted file mode 100644
index 741d8f42580566d52ac56b1776275e98a4252abc..0000000000000000000000000000000000000000
--- a/spaces/HaoFeng2019/DocTr/IllTr.py
+++ /dev/null
@@ -1,284 +0,0 @@
-import torch
-import torch.nn as nn
-from timm.models.layers import trunc_normal_
-from functools import partial
-
-
-class Ffn(nn.Module):
- # feed forward network layer after attention
- def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Linear(in_features, hidden_features)
- self.act = act_layer()
- self.fc2 = nn.Linear(hidden_features, out_features)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-
-class Attention(nn.Module):
- def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0.):
- super().__init__()
- self.num_heads = num_heads
- head_dim = dim // num_heads
- self.scale = qk_scale or head_dim ** -0.5
-
- self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(dim, dim)
- self.proj_drop = nn.Dropout(proj_drop)
-
- def forward(self, x, task_embed=None, level=0):
- N, L, D = x.shape
- qkv = self.qkv(x).reshape(N, L, 3, self.num_heads, D // self.num_heads).permute(2, 0, 3, 1, 4)
- q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
-
- # for decoder's task_embedding of different levels of attention layers
- if task_embed is not None:
- _N, _H, _L, _D = q.shape
- task_embed = task_embed.reshape(1, _H, _L, _D)
- if level == 1:
- q += task_embed
- k += task_embed
- if level == 2:
- q += task_embed
-
- attn = (q @ k.transpose(-2, -1)) * self.scale
- attn = attn.softmax(dim=-1)
- attn = self.attn_drop(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(N, L, D)
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
-
-
-class EncoderLayer(nn.Module):
- def __init__(self, dim, num_heads, ffn_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0.,
- act_layer=nn.GELU, norm_layer=nn.LayerNorm):
- super().__init__()
- self.norm1 = norm_layer(dim)
- self.attn = Attention(
- dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
- self.norm2 = norm_layer(dim)
- ffn_hidden_dim = int(dim * ffn_ratio)
- self.ffn = Ffn(in_features=dim, hidden_features=ffn_hidden_dim, act_layer=act_layer, drop=drop)
-
- def forward(self, x):
- x = x + self.attn(self.norm1(x))
- x = x + self.ffn(self.norm2(x))
- return x
-
-
-class DecoderLayer(nn.Module):
- def __init__(self, dim, num_heads, ffn_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0.,
- act_layer=nn.GELU, norm_layer=nn.LayerNorm):
- super().__init__()
- self.norm1 = norm_layer(dim)
- self.attn1 = Attention(
- dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
- self.norm2 = norm_layer(dim)
- self.attn2 = Attention(
- dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
- self.norm3 = norm_layer(dim)
- ffn_hidden_dim = int(dim * ffn_ratio)
- self.ffn = Ffn(in_features=dim, hidden_features=ffn_hidden_dim, act_layer=act_layer, drop=drop)
-
- def forward(self, x, task_embed):
- x = x + self.attn1(self.norm1(x), task_embed=task_embed, level=1)
- x = x + self.attn2(self.norm2(x), task_embed=task_embed, level=2)
- x = x + self.ffn(self.norm3(x))
- return x
-
-
-class ResBlock(nn.Module):
- def __init__(self, channels):
- super(ResBlock, self).__init__()
- self.conv1 = nn.Conv2d(channels, channels, kernel_size=5, stride=1,
- padding=2, bias=False)
- self.bn1 = nn.InstanceNorm2d(channels)
- self.relu = nn.ReLU(inplace=True)
- self.conv2 = nn.Conv2d(channels, channels, kernel_size=5, stride=1,
- padding=2, bias=False)
- self.bn2 = nn.InstanceNorm2d(channels)
-
- def forward(self, x):
- residual = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
-
- out += residual
- out = self.relu(out)
-
- return out
-
-
-class Head(nn.Module):
- def __init__(self, in_channels, out_channels):
- super(Head, self).__init__()
- self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1,
- padding=1, bias=False)
- self.bn1 = nn.InstanceNorm2d(out_channels)
- self.relu = nn.ReLU(inplace=True)
- self.resblock = ResBlock(out_channels)
-
- def forward(self, x):
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.resblock(out)
-
- return out
-
-
-class PatchEmbed(nn.Module):
- """ Feature to Patch Embedding
- input : N C H W
- output: N num_patch P^2*C
- """
- def __init__(self, patch_size=1, in_channels=64):
- super().__init__()
- self.patch_size = patch_size
- self.dim = self.patch_size ** 2 * in_channels
-
- def forward(self, x):
- N, C, H, W = ori_shape = x.shape
-
- p = self.patch_size
- num_patches = (H // p) * (W // p)
- out = torch.zeros((N, num_patches, self.dim)).to(x.device)
- i, j = 0, 0
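- # Walk the feature map in non-overlapping p x p patches; note the wrap test
- # uses W for both axes, which assumes square inputs (IllTr works on 128x128).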
- for k in range(num_patches):
- if i + p > W:
- i = 0
- j += p
- out[:, k, :] = x[:, :, i:i + p, j:j + p].flatten(1)
- i += p
- return out, ori_shape
-
-
-class DePatchEmbed(nn.Module):
- """ Patch Embedding to Feature
- input : N num_patch P^2*C
- output: N C H W
- """
- def __init__(self, patch_size=1, in_channels=64):
- super().__init__()
- self.patch_size = patch_size
- self.num_patches = None
- self.dim = self.patch_size ** 2 * in_channels
-
- def forward(self, x, ori_shape):
- N, num_patches, dim = x.shape
- _, C, H, W = ori_shape
- p = self.patch_size
- out = torch.zeros(ori_shape).to(x.device)
- i, j = 0, 0
- for k in range(num_patches):
- if i + p > W:
- i = 0
- j += p
- out[:, :, i:i + p, j:j + p] = x[:, k, :].reshape(N, C, p, p)
- i += p
- return out
-
-
-class Tail(nn.Module):
- def __init__(self, in_channels, out_channels):
- super(Tail, self).__init__()
- self.output = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1, bias=False)
-
- def forward(self, x):
- out = self.output(x)
- return out
-
-
-class IllTr_Net(nn.Module):
- """ Vision Transformer with support for patch or hybrid CNN input stage
- """
-
- def __init__(self, patch_size=1, in_channels=3, mid_channels=16, num_classes=1000, depth=12,
- num_heads=8, ffn_ratio=4., qkv_bias=False, qk_scale=None, drop_rate=0., attn_drop_rate=0.,
- norm_layer=nn.LayerNorm):
- super(IllTr_Net, self).__init__()
-
- self.num_classes = num_classes
- self.embed_dim = patch_size * patch_size * mid_channels
- self.head = Head(in_channels, mid_channels)
- self.patch_embedding = PatchEmbed(patch_size=patch_size, in_channels=mid_channels)
- self.embed_dim = self.patch_embedding.dim
- if self.embed_dim % num_heads != 0:
- raise RuntimeError("Embedding dim must be devided by numbers of heads")
-
- self.pos_embed = nn.Parameter(torch.zeros(1, (128 // patch_size) ** 2, self.embed_dim))
- self.task_embed = nn.Parameter(torch.zeros(6, 1, (128 // patch_size) ** 2, self.embed_dim))
-
- self.encoder = nn.ModuleList([
- EncoderLayer(
- dim=self.embed_dim, num_heads=num_heads, ffn_ratio=ffn_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate, norm_layer=norm_layer)
- for _ in range(depth)])
- self.decoder = nn.ModuleList([
- DecoderLayer(
- dim=self.embed_dim, num_heads=num_heads, ffn_ratio=ffn_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate, norm_layer=norm_layer)
- for _ in range(depth)])
-
- self.de_patch_embedding = DePatchEmbed(patch_size=patch_size, in_channels=mid_channels)
- # tail
- self.tail = Tail(int(mid_channels), in_channels)
-
- self.acf = nn.Hardtanh(0,1)
-
- trunc_normal_(self.pos_embed, std=.02)
- self.apply(self._init_weights)
-
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
-
- def forward(self, x):
- x = self.head(x)
- x, ori_shape = self.patch_embedding(x)
- x = x + self.pos_embed[:, :x.shape[1]]
-
- for blk in self.encoder:
- x = blk(x)
-
- for blk in self.decoder:
- x = blk(x, self.task_embed[0, :, :x.shape[1]])
-
- x = self.de_patch_embedding(x, ori_shape)
- x = self.tail(x)
-
- x = self.acf(x)
- return x
-
-
-def IllTr(**kwargs):
- model = IllTr_Net(
- patch_size=4, depth=6, num_heads=8, ffn_ratio=4, qkv_bias=True, norm_layer=partial(nn.LayerNorm, eps=1e-6),
- **kwargs)
-
- return model
diff --git a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/cache_dir/HuggingFaceM4/OBELICS_opt_out_docs_removed_2023_07_12_train_images/zipf/zipf_fig.html b/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/cache_dir/HuggingFaceM4/OBELICS_opt_out_docs_removed_2023_07_12_train_images/zipf/zipf_fig.html
deleted file mode 100644
index d91342315c5fe3546ac22ba30d429c8eeb568fd5..0000000000000000000000000000000000000000
--- a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/cache_dir/HuggingFaceM4/OBELICS_opt_out_docs_removed_2023_07_12_train_images/zipf/zipf_fig.html
+++ /dev/null
@@ -1,14 +0,0 @@
-
- <p>You can skip the queue and load custom models in the colab:</p>
- <p>You can also duplicate this space and upgrade to gpu by going to settings:</p>
-
- """
- )
- with gr.Row():
-
- with gr.Column(scale=55):
- with gr.Group():
- model_name = gr.Dropdown(label="Model", choices=[m.name for m in models], value=current_model.name)
- with gr.Box(visible=False) as custom_model_group:
- custom_model_path = gr.Textbox(label="Custom model path", placeholder="nitrosocke/Future-Diffusion", interactive=False)
- gr.HTML("
Custom models have to be downloaded first, so give it some time.
")
-
- with gr.Row():
- prompt = gr.Textbox(label="Prompt", show_label=False, max_lines=2,placeholder="Enter prompt. Style applied automatically").style(container=False)
- generate = gr.Button(value="Generate").style(rounded=(False, True, True, False))
-
-
- image_out = gr.Image(height=512)
- # gallery = gr.Gallery(
- # label="Generated images", show_label=False, elem_id="gallery"
- # ).style(grid=[1], height="auto")
- error_output = gr.Markdown()
-
- with gr.Column(scale=45):
- with gr.Tab("Options"):
- with gr.Group():
- neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image")
-
- # n_images = gr.Slider(label="Images", value=1, minimum=1, maximum=4, step=1)
-
- with gr.Row():
- guidance = gr.Slider(label="Guidance scale", value=7, maximum=15, step=1)
- steps = gr.Slider(label="Steps", value=20, minimum=2, maximum=30, step=1)
-
- with gr.Row():
- width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=64)
- height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=64)
-
- seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1)
-
- with gr.Tab("Image to image"):
- with gr.Group():
- image = gr.Image(label="Image", height=256, tool="editor", type="pil")
- strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5)
-
- if is_colab:
- model_name.change(on_model_change, inputs=model_name, outputs=[custom_model_group, prompt], queue=False)
- custom_model_path.change(custom_model_changed, inputs=custom_model_path, outputs=None)
- # n_images.change(lambda n: gr.Gallery().style(grid=[2 if n > 1 else 1], height="auto"), inputs=n_images, outputs=gallery)
-
- inputs = [model_name, prompt, guidance, steps, width, height, seed, image, strength, neg_prompt]
- outputs = [image_out, error_output]
- prompt.submit(inference, inputs=inputs, outputs=outputs)
- generate.click(inference, inputs=inputs, outputs=outputs)
-
- ex = gr.Examples([
- [models[0].name, "city scene at night intricate street level", "blurry fog soft", 7, 20],
- [models[0].name, "beautiful female cyborg sitting in a cafe close up", "bad anatomy bad eyes blurry soft", 7, 20],
- [models[0].name, "cyborg dog neon eyes", "extra mouth extra legs blurry soft bloom bad anatomy", 7, 20],
-
- ], inputs=[model_name, prompt, neg_prompt, guidance, steps, seed], outputs=outputs, fn=inference, cache_examples=False)
-
- gr.HTML("""
-
-
-
Model by Nitrosocke.
-
- """)
-
-print(f"Space built in {time.time() - start_time:.2f} seconds")
-
-if not is_colab:
- demo.queue(concurrency_count=1)
-demo.launch(debug=is_colab, share=is_colab)
\ No newline at end of file
diff --git a/spaces/IwanK/heart_failuere/eda.py b/spaces/IwanK/heart_failuere/eda.py
deleted file mode 100644
index 0f495daf1d4fd0bdf858881d33c7592ef4f3e9bc..0000000000000000000000000000000000000000
--- a/spaces/IwanK/heart_failuere/eda.py
+++ /dev/null
@@ -1,107 +0,0 @@
-import streamlit as st
-import pandas as pd
-import seaborn as sns
-import matplotlib.pyplot as plt
-import plotly.express as px
-from PIL import Image
-
-def visualize_numeric_distribution(df, column):
- # Create subplots
- fig, axs = plt.subplots(1, 2, figsize=(12, 6))
-
- # Distribution plot
- sns.histplot(data=df, x=column, hue='DEATH_EVENT', multiple='stack', kde=True, ax=axs[0])
- axs[0].set_title('Distribution of {}'.format(column))
- axs[0].axvline(df[column].mean(), color='red', linestyle='dashed', linewidth=2, label='Mean')
- axs[0].axvline(df[column].median(), color='green', linestyle='dashed', linewidth=2, label='Median')
- axs[0].set_xlabel(column)
- axs[0].set_ylabel('Count')
- axs[0].legend()
-
- # Box plot
- sns.boxplot(data=df, x='DEATH_EVENT', y=column, ax=axs[1])
- axs[1].set_title('Box Plot of {}'.format(column))
- axs[1].set_xlabel('DEATH_EVENT')
- axs[1].set_ylabel(column)
-
- plt.tight_layout()
- return fig
-
-def visualize_death_event_by_categorical(df, column):
- # Group data by column and death event, and calculate the count
- counts = df.groupby([column, 'DEATH_EVENT']).size().reset_index(name='Count')
-
- # Bar plot
- fig, axs = plt.subplots(1, 2, figsize=(15, 6))
- sns.barplot(x='DEATH_EVENT', y='Count', hue=column, data=counts, ax=axs[0])
- for p in axs[0].patches:
- height = p.get_height()
- axs[0].text(p.get_x() + p.get_width()/2., height + 3, '{:1.0f}'.format(height), ha="center")
- axs[0].set_title('Death Event Count by {}'.format(column))
- axs[0].set_xticklabels(['Survival', 'Death'])
- axs[0].set_xlabel('Death Event')
- axs[0].set_ylabel('Count')
-
- # Pie chart
- labels = []
- sizes = []
- for value in df[column].unique():
- survival_count = df[(df[column] == value) & (df['DEATH_EVENT'] == 0)].shape[0]
- death_count = df[(df[column] == value) & (df['DEATH_EVENT'] == 1)].shape[0]
- labels.append('{} - Survival'.format(value))
- labels.append('{} - Death'.format(value))
- sizes.extend([survival_count, death_count])
- axs[1].pie(sizes, labels=labels, autopct='%1.1f%%', shadow=True, startangle=90)
- axs[1].set_title('Death Event Percentage')
- axs[1].axis('equal')
-
- plt.tight_layout()
- return fig
-
-def run():
- st.set_option('deprecation.showPyplotGlobalUse', False)
-
-    # page title
- st.title('Heart Failure Prediction')
-
-    # subheader
-    st.subheader('Analysis')
-
-    # add an image
- image = Image.open('Thumb.jpg')
- st.image(image, caption='Heart Failure')
-
-    # add a description
-    st.write('This page is a simple exploration of the Heart Failure dataset.')
-
-    # horizontal divider
- st.markdown('---')
-
- #load dataframe
- data = pd.read_csv('https://raw.githubusercontent.com/IwanKurniawann/dataset/main/h8dsft_P1G3_iwan_kurniawan.csv')
- st.dataframe(data)
-
- # List of columns
- numerical_columns = ['age', 'creatinine_phosphokinase', 'ejection_fraction', 'platelets', 'serum_creatinine', 'serum_sodium', 'time']
- categorical_columns = ['anaemia', 'diabetes', 'high_blood_pressure', 'sex', 'smoking']
-
- # Visualize death event by categorical columns
- column_visualize = st.selectbox('Categorical: ', categorical_columns)
- fig = visualize_death_event_by_categorical(data, column_visualize)
- st.pyplot(fig)
-
- # Visualize numeric distributions
- column_visualize = st.selectbox('Numerical: ', numerical_columns)
- fig = visualize_numeric_distribution(data, column_visualize)
- st.pyplot(fig)
-
-if __name__ == '__main__':
- run()
diff --git a/spaces/Izal887/rvc-hutao/app.py b/spaces/Izal887/rvc-hutao/app.py
deleted file mode 100644
index d1d4fb32cf4b9622530b9fdba4af2ffea3a48c79..0000000000000000000000000000000000000000
--- a/spaces/Izal887/rvc-hutao/app.py
+++ /dev/null
@@ -1,188 +0,0 @@
-import os
-import json
-import argparse
-import traceback
-import logging
-import gradio as gr
-import numpy as np
-import librosa
-import torch
-import asyncio
-import edge_tts
-from datetime import datetime
-from fairseq import checkpoint_utils
-from infer_pack.models import SynthesizerTrnMs256NSFsid, SynthesizerTrnMs256NSFsid_nono
-from vc_infer_pipeline import VC
-from config import (
- is_half,
- device
-)
-logging.getLogger("numba").setLevel(logging.WARNING)
-limitation = os.getenv("SYSTEM") == "spaces" # limit audio length in huggingface spaces
-
-def create_vc_fn(tgt_sr, net_g, vc, if_f0, file_index, file_big_npy):
- def vc_fn(
- input_audio,
- f0_up_key,
- f0_method,
- index_rate,
- tts_mode,
- tts_text,
- tts_voice
- ):
- try:
- if tts_mode:
- if len(tts_text) > 100 and limitation:
- return "Text is too long", None
- if tts_text is None or tts_voice is None:
- return "You need to enter text and select a voice", None
- asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3"))
- audio, sr = librosa.load("tts.mp3", sr=16000, mono=True)
- else:
- if args.files:
- audio, sr = librosa.load(input_audio, sr=16000, mono=True)
- else:
- if input_audio is None:
- return "You need to upload an audio", None
- sampling_rate, audio = input_audio
- duration = audio.shape[0] / sampling_rate
- if duration > 20 and limitation:
- return "Please upload an audio file that is less than 20 seconds. If you need to generate a longer audio file, please use Colab.", None
- audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)
- if len(audio.shape) > 1:
- audio = librosa.to_mono(audio.transpose(1, 0))
- if sampling_rate != 16000:
- audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
- times = [0, 0, 0]
- f0_up_key = int(f0_up_key)
- audio_opt = vc.pipeline(
- hubert_model,
- net_g,
- 0,
- audio,
- times,
- f0_up_key,
- f0_method,
- file_index,
- file_big_npy,
- index_rate,
- if_f0,
- )
- print(
- f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s"
- )
- return "Success", (tgt_sr, audio_opt)
- except:
- info = traceback.format_exc()
- print(info)
- return info, (None, None)
- return vc_fn
-
-def load_hubert():
- global hubert_model
- models, _, _ = checkpoint_utils.load_model_ensemble_and_task(
- ["hubert_base.pt"],
- suffix="",
- )
- hubert_model = models[0]
- hubert_model = hubert_model.to(device)
- if is_half:
- hubert_model = hubert_model.half()
- else:
- hubert_model = hubert_model.float()
- hubert_model.eval()
-
-def change_to_tts_mode(tts_mode):
- if tts_mode:
- return gr.Audio.update(visible=False), gr.Textbox.update(visible=True), gr.Dropdown.update(visible=True)
- else:
- return gr.Audio.update(visible=True), gr.Textbox.update(visible=False), gr.Dropdown.update(visible=False)
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--api', action="store_true", default=False)
- parser.add_argument("--share", action="store_true", default=False, help="share gradio app")
- parser.add_argument("--files", action="store_true", default=False, help="load audio from path")
- args, unknown = parser.parse_known_args()
- load_hubert()
- models = []
- tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices())
- voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list]
- with open("weights/model_info.json", "r", encoding="utf-8") as f:
- models_info = json.load(f)
- for name, info in models_info.items():
- if not info['enable']:
- continue
- title = info['title']
- author = info.get("author", None)
- cover = f"weights/{name}/{info['cover']}"
- index = f"weights/{name}/{info['feature_retrieval_library']}"
- npy = f"weights/{name}/{info['feature_file']}"
- cpt = torch.load(f"weights/{name}/{name}.pth", map_location="cpu")
- tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
- if_f0 = cpt.get("f0", 1)
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half)
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- del net_g.enc_q
- print(net_g.load_state_dict(cpt["weight"], strict=False)) # without this line the weights do not load cleanly, oddly enough
- net_g.eval().to(device)
- if is_half:
- net_g = net_g.half()
- else:
- net_g = net_g.float()
- vc = VC(tgt_sr, device, is_half)
- models.append((name, title, author, cover, create_vc_fn(tgt_sr, net_g, vc, if_f0, index, npy)))
- with gr.Blocks() as app:
- gr.Markdown(
- "#
RVC Models\n"
- "##
The input audio should be clean and pure voice without background music.\n"
- "\n\n"
- "[](https://colab.research.google.com/drive/12rbZk9CoXD1m84dqBW5IKMBjiVY6tcoj?usp=share_link)\n\n"
- "[](https://huggingface.co/spaces/ardha27pi/rvc-models?duplicate=true)\n\n"
- "[](https://github.com/ardha27/AI-Song-Cover-RVC)\n\n"
- "[](https://ko-fi.com/R6R7AH1FA)\n\n"
- )
- with gr.Tabs():
- for (name, title, author, cover, vc_fn) in models:
- with gr.TabItem(name):
- with gr.Row():
- gr.Markdown(
- '<div align="center">'
- f'<div>{title}</div>\n'+
- (f'<div>Model author: {author}</div>' if author else "")+
- (f'<img style="width:auto;height:300px;" src="file/{cover}">' if cover else "")+
- '</div>'
-
-### Contents
-- [Training](#training)
-- [Demo](#demo)
-- [References](#references)
-
-## Training
-
-1. Prepare training data:
- -- download [CelebAMask-HQ dataset](https://github.com/switchablenorms/CelebAMask-HQ)
-
- -- change the file path in `prepropess_data.py` and run
-```Shell
-python prepropess_data.py
-```
-
-2. Train the model using the CelebAMask-HQ dataset.
-Just run the training script:
-```
- $ CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 train.py
-```
-
-If you do not wish to train the model, you can download [our pre-trained model](https://drive.google.com/open?id=154JgKpzCPW82qINcVieuPH3fZ2e0P812) and save it in `res/cp`.
-
-
-## Demo
-1. Evaluate the trained model using:
-```Shell
-# evaluate using GPU
-python test.py
-```
-
-## Face makeup using parsing maps
-[**face-makeup.PyTorch**](https://github.com/zllrunning/face-makeup.PyTorch)
-
-
diff --git a/spaces/PsykoNOT/hakurei-waifu-diffusion/README.md b/spaces/PsykoNOT/hakurei-waifu-diffusion/README.md
deleted file mode 100644
index bc1b009256e2723917f4942400eec3742d3b5935..0000000000000000000000000000000000000000
--- a/spaces/PsykoNOT/hakurei-waifu-diffusion/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Hakurei Waifu Diffusion
-emoji: 📉
-colorFrom: yellow
-colorTo: purple
-sdk: gradio
-sdk_version: 3.15.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git "a/spaces/Qiukai/gpt/crazy_functions/\344\270\213\350\275\275arxiv\350\256\272\346\226\207\347\277\273\350\257\221\346\221\230\350\246\201.py" "b/spaces/Qiukai/gpt/crazy_functions/\344\270\213\350\275\275arxiv\350\256\272\346\226\207\347\277\273\350\257\221\346\221\230\350\246\201.py"
deleted file mode 100644
index 3da831fd07e361a532777c83bb02cff265b94abd..0000000000000000000000000000000000000000
--- "a/spaces/Qiukai/gpt/crazy_functions/\344\270\213\350\275\275arxiv\350\256\272\346\226\207\347\277\273\350\257\221\346\221\230\350\246\201.py"
+++ /dev/null
@@ -1,194 +0,0 @@
-from toolbox import update_ui
-from toolbox import CatchException, report_execption, write_results_to_file, get_conf
-import re, requests, unicodedata, os
-from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-def download_arxiv_(url_pdf):
- if 'arxiv.org' not in url_pdf:
- if ('.' in url_pdf) and ('/' not in url_pdf):
- new_url = 'https://arxiv.org/abs/'+url_pdf
- print('Download ID:', url_pdf, 'auto-resolved to:', new_url)
- # download_arxiv_(new_url)
- return download_arxiv_(new_url)
- else:
- print('Unrecognized URL!')
- return None
- if 'abs' in url_pdf:
- url_pdf = url_pdf.replace('abs', 'pdf')
- url_pdf = url_pdf + '.pdf'
-
- url_abs = url_pdf.replace('.pdf', '').replace('pdf', 'abs')
- title, other_info = get_name(_url_=url_abs)
-
- paper_id = title.split()[0] # '[1712.00559]'
- if '2' in other_info['year']:
- title = other_info['year'] + ' ' + title
-
- known_conf = ['NeurIPS', 'NIPS', 'Nature', 'Science', 'ICLR', 'AAAI']
- for k in known_conf:
- if k in other_info['comment']:
- title = k + ' ' + title
-
- download_dir = './gpt_log/arxiv/'
- os.makedirs(download_dir, exist_ok=True)
-
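-    # Swap characters that are unsafe in filenames for full-width lookalikes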
- title_str = title.replace('?', '?')\
- .replace(':', ':')\
- .replace('\"', '“')\
- .replace('\n', '')\
- .replace(' ', ' ')\
- .replace(' ', ' ')
-
- requests_pdf_url = url_pdf
- file_path = download_dir+title_str
-    # if os.path.exists(file_path):
-    #     print('returning cached file')
-    #     return './gpt_log/arxiv/'+title_str
-
-    print('Downloading...')
- proxies, = get_conf('proxies')
- r = requests.get(requests_pdf_url, proxies=proxies)
- with open(file_path, 'wb+') as f:
- f.write(r.content)
-    print('Download complete')
-
-    # print('download command:', 'aria2c -o \"%s\" %s'%(title_str,url_pdf))
-    # subprocess.call('aria2c --all-proxy=\"172.18.116.150:11084\" -o \"%s\" %s'%(download_dir+title_str,url_pdf), shell=True)
-
- x = "%s %s %s.bib" % (paper_id, other_info['year'], other_info['authors'])
- x = x.replace('?', '?')\
- .replace(':', ':')\
- .replace('\"', '“')\
- .replace('\n', '')\
- .replace(' ', ' ')\
- .replace(' ', ' ')
- return './gpt_log/arxiv/'+title_str, other_info
-
-
-def get_name(_url_):
- import os
- from bs4 import BeautifulSoup
-    print('Fetching the paper title!')
- print(_url_)
-
- # arxiv_recall = {}
- # if os.path.exists('./arxiv_recall.pkl'):
- # with open('./arxiv_recall.pkl', 'rb') as f:
- # arxiv_recall = pickle.load(f)
-
- # if _url_ in arxiv_recall:
-    #     print('cache hit')
- # return arxiv_recall[_url_]
-
- proxies, = get_conf('proxies')
- res = requests.get(_url_, proxies=proxies)
-
- bs = BeautifulSoup(res.text, 'html.parser')
- other_details = {}
-
- # get year
- try:
- year = bs.find_all(class_='dateline')[0].text
- year = re.search(r'(\d{4})', year, re.M | re.I).group(1)
- other_details['year'] = year
- abstract = bs.find_all(class_='abstract mathjax')[0].text
- other_details['abstract'] = abstract
- except:
- other_details['year'] = ''
-        print('Failed to get the year')
-
- # get author
- try:
- authors = bs.find_all(class_='authors')[0].text
- authors = authors.split('Authors:')[1]
- other_details['authors'] = authors
- except:
- other_details['authors'] = ''
-        print('Failed to get the authors')
-
- # get comment
- try:
- comment = bs.find_all(class_='metatable')[0].text
- real_comment = None
- for item in comment.replace('\n', ' ').split(' '):
- if 'Comments' in item:
- real_comment = item
- if real_comment is not None:
- other_details['comment'] = real_comment
- else:
- other_details['comment'] = ''
- except:
- other_details['comment'] = ''
-        print('Failed to get the comment')
-
- title_str = BeautifulSoup(
- res.text, 'html.parser').find('title').contents[0]
-    print('Fetched title:', title_str)
- # arxiv_recall[_url_] = (title_str+'.pdf', other_details)
- # with open('./arxiv_recall.pkl', 'wb') as f:
- # pickle.dump(arxiv_recall, f)
-
- return title_str+'.pdf', other_details
-
-
-
-@CatchException
-def 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
-
-    CRAZY_FUNCTION_INFO = "Download an arxiv paper and translate its abstract. Plugin author: [binary-husky]. Extracting the abstract and downloading the PDF..."
- import glob
- import os
-
-    # Basic info: what the plugin does and who contributed it
-    chatbot.append(["What does this plugin do?", CRAZY_FUNCTION_INFO])
-    yield from update_ui(chatbot=chatbot, history=history) # refresh UI
-
-    # Try to import dependencies; if any are missing, suggest how to install them
- try:
- import pdfminer, bs4
- except:
-        report_execption(chatbot, history,
-                         a = f"Parsing project: {txt}",
-                         b = f"Failed to import dependencies. This module needs extra packages; install them with ```pip install --upgrade pdfminer beautifulsoup4```.")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh UI
- return
-
-    # Clear history to avoid input overflow
-    history = []
-
-    # Extract the abstract and download the PDF
- try:
- pdf_path, info = download_arxiv_(txt)
- except:
-        report_execption(chatbot, history,
-                         a = f"Parsing project: {txt}",
-                         b = f"Failed to download the PDF file")
-        yield from update_ui(chatbot=chatbot, history=history) # refresh UI
- return
-
-    # Translate the abstract, etc.
-    i_say = f"Please read the following materials about an academic paper, extract the abstract, and translate it into Chinese. Materials: {str(info)}"
-    i_say_show_user = f'Please read the following materials about an academic paper, extract the abstract, and translate it into Chinese. Paper: {pdf_path}'
-    chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
-    yield from update_ui(chatbot=chatbot, history=history) # refresh UI
-    msg = 'OK'
-    # ** gpt request **
-    # single thread: fetch the paper's meta info
- gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
- inputs=i_say,
- inputs_show_user=i_say_show_user,
- llm_kwargs=llm_kwargs,
- chatbot=chatbot, history=[],
- sys_prompt="Your job is to collect information from materials and translate to Chinese。",
- )
-
- chatbot[-1] = (i_say_show_user, gpt_say)
- history.append(i_say_show_user); history.append(gpt_say)
-    yield from update_ui(chatbot=chatbot, history=history, msg=msg) # refresh UI
-    # Write the results to a file
-    import shutil
-    # Reset the file's creation time (copy it, then delete the original)
-    shutil.copyfile(pdf_path, f'./gpt_log/{os.path.basename(pdf_path)}'); os.remove(pdf_path)
-    res = write_results_to_file(history)
-    chatbot.append(("Done?", res + "\n\nThe PDF has also been downloaded"))
-    yield from update_ui(chatbot=chatbot, history=history, msg=msg) # refresh UI
-
diff --git a/spaces/RMXK/RVC_HFF/infer/lib/infer_pack/onnx_inference.py b/spaces/RMXK/RVC_HFF/infer/lib/infer_pack/onnx_inference.py
deleted file mode 100644
index 6633659fc83b19d82611d3c9cc840e9c547734d0..0000000000000000000000000000000000000000
--- a/spaces/RMXK/RVC_HFF/infer/lib/infer_pack/onnx_inference.py
+++ /dev/null
@@ -1,149 +0,0 @@
-import librosa
-import numpy as np
-import onnxruntime
-import soundfile
-
-import logging
-
-logger = logging.getLogger(__name__)
-
-
-class ContentVec:
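-    # ONNX Runtime wrapper around a ContentVec model for frame-level speech features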
- def __init__(self, vec_path="pretrained/vec-768-layer-12.onnx", device=None):
- logger.info("Load model(s) from {}".format(vec_path))
- if device == "cpu" or device is None:
- providers = ["CPUExecutionProvider"]
- elif device == "cuda":
- providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
- elif device == "dml":
- providers = ["DmlExecutionProvider"]
- else:
- raise RuntimeError("Unsportted Device")
- self.model = onnxruntime.InferenceSession(vec_path, providers=providers)
-
- def __call__(self, wav):
- return self.forward(wav)
-
- def forward(self, wav):
- feats = wav
- if feats.ndim == 2: # double channels
- feats = feats.mean(-1)
- assert feats.ndim == 1, feats.ndim
- feats = np.expand_dims(np.expand_dims(feats, 0), 0)
- onnx_input = {self.model.get_inputs()[0].name: feats}
- logits = self.model.run(None, onnx_input)[0]
- return logits.transpose(0, 2, 1)
-
-
-def get_f0_predictor(f0_predictor, hop_length, sampling_rate, **kargs):
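-    # NOTE: extra keyword args (e.g. threshold=cr_threshold) are collected in **kargs
-    # but are not forwarded to the predictor constructors below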
- if f0_predictor == "pm":
- from lib.infer_pack.modules.F0Predictor.PMF0Predictor import PMF0Predictor
-
- f0_predictor_object = PMF0Predictor(
- hop_length=hop_length, sampling_rate=sampling_rate
- )
- elif f0_predictor == "harvest":
- from lib.infer_pack.modules.F0Predictor.HarvestF0Predictor import (
- HarvestF0Predictor,
- )
-
- f0_predictor_object = HarvestF0Predictor(
- hop_length=hop_length, sampling_rate=sampling_rate
- )
- elif f0_predictor == "dio":
- from lib.infer_pack.modules.F0Predictor.DioF0Predictor import DioF0Predictor
-
- f0_predictor_object = DioF0Predictor(
- hop_length=hop_length, sampling_rate=sampling_rate
- )
- else:
- raise Exception("Unknown f0 predictor")
- return f0_predictor_object
-
-
-class OnnxRVC:
- def __init__(
- self,
- model_path,
- sr=40000,
- hop_size=512,
- vec_path="vec-768-layer-12",
- device="cpu",
- ):
- vec_path = f"pretrained/{vec_path}.onnx"
- self.vec_model = ContentVec(vec_path, device)
- if device == "cpu" or device is None:
- providers = ["CPUExecutionProvider"]
- elif device == "cuda":
- providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
- elif device == "dml":
- providers = ["DmlExecutionProvider"]
- else:
- raise RuntimeError("Unsportted Device")
- self.model = onnxruntime.InferenceSession(model_path, providers=providers)
- self.sampling_rate = sr
- self.hop_size = hop_size
-
- def forward(self, hubert, hubert_length, pitch, pitchf, ds, rnd):
- onnx_input = {
- self.model.get_inputs()[0].name: hubert,
- self.model.get_inputs()[1].name: hubert_length,
- self.model.get_inputs()[2].name: pitch,
- self.model.get_inputs()[3].name: pitchf,
- self.model.get_inputs()[4].name: ds,
- self.model.get_inputs()[5].name: rnd,
- }
- return (self.model.run(None, onnx_input)[0] * 32767).astype(np.int16)
-
- def inference(
- self,
- raw_path,
- sid,
- f0_method="dio",
- f0_up_key=0,
- pad_time=0.5,
- cr_threshold=0.02,
- ):
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
- f0_predictor = get_f0_predictor(
- f0_method,
- hop_length=self.hop_size,
- sampling_rate=self.sampling_rate,
- threshold=cr_threshold,
- )
- wav, sr = librosa.load(raw_path, sr=self.sampling_rate)
- org_length = len(wav)
- if org_length / sr > 50.0:
- raise RuntimeError("Reached Max Length")
-
-        wav16k = librosa.resample(wav, orig_sr=self.sampling_rate, target_sr=16000)
-
- hubert = self.vec_model(wav16k)
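-        # Repeat the ContentVec features 2x along time to double the frame rate,
-        # then reorder to (batch, frames, channels) for the RVC model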
- hubert = np.repeat(hubert, 2, axis=2).transpose(0, 2, 1).astype(np.float32)
- hubert_length = hubert.shape[1]
-
- pitchf = f0_predictor.compute_f0(wav, hubert_length)
- pitchf = pitchf * 2 ** (f0_up_key / 12)
- pitch = pitchf.copy()
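-        # Map f0 to the mel scale and quantize it into integer bins 1..255,
-        # forming the coarse pitch input alongside the continuous pitchf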
- f0_mel = 1127 * np.log(1 + pitch / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
- f0_mel_max - f0_mel_min
- ) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
- pitch = np.rint(f0_mel).astype(np.int64)
-
- pitchf = pitchf.reshape(1, len(pitchf)).astype(np.float32)
- pitch = pitch.reshape(1, len(pitch))
- ds = np.array([sid]).astype(np.int64)
-
- rnd = np.random.randn(1, 192, hubert_length).astype(np.float32)
- hubert_length = np.array([hubert_length]).astype(np.int64)
-
- out_wav = self.forward(hubert, hubert_length, pitch, pitchf, ds, rnd).squeeze()
- out_wav = np.pad(out_wav, (0, 2 * self.hop_size), "constant")
- return out_wav[0:org_length]
diff --git a/spaces/Ramse/TTS_Hindi/modules/hifigan/mel_utils.py b/spaces/Ramse/TTS_Hindi/modules/hifigan/mel_utils.py
deleted file mode 100644
index 06e0f7d4d16fa3e4aefc8949347455f5a6e938da..0000000000000000000000000000000000000000
--- a/spaces/Ramse/TTS_Hindi/modules/hifigan/mel_utils.py
+++ /dev/null
@@ -1,80 +0,0 @@
-import numpy as np
-import torch
-import torch.utils.data
-from librosa.filters import mel as librosa_mel_fn
-from scipy.io.wavfile import read
-
-MAX_WAV_VALUE = 32768.0
-
-
-def load_wav(full_path):
- sampling_rate, data = read(full_path)
- return data, sampling_rate
-
-
-def dynamic_range_compression(x, C=1, clip_val=1e-5):
- return np.log(np.clip(x, a_min=clip_val, a_max=None) * C)
-
-
-def dynamic_range_decompression(x, C=1):
- return np.exp(x) / C
-
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression_torch(x, C=1):
- return torch.exp(x) / C
-
-
-def spectral_normalize_torch(magnitudes):
- output = dynamic_range_compression_torch(magnitudes)
- return output
-
-
-def spectral_de_normalize_torch(magnitudes):
- output = dynamic_range_decompression_torch(magnitudes)
- return output
-
-
-mel_basis = {}
-hann_window = {}
-
-
-def mel_spectrogram(y, hparams, center=False, complex=False):
- # hop_size: 512 # For 22050Hz, 275 ~= 12.5 ms (0.0125 * sample_rate)
- # win_size: 2048 # For 22050Hz, 1100 ~= 50 ms (If None, win_size: fft_size) (0.05 * sample_rate)
- # fmin: 55 # Set this to 55 if your speaker is male! if female, 95 should help taking off noise. (To test depending on dataset. Pitch info: male~[65, 260], female~[100, 525])
- # fmax: 10000 # To be increased/reduced depending on data.
- # fft_size: 2048 # Extra window size is filled with 0 paddings to match this parameter
- # n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax,
- n_fft = hparams['fft_size']
- num_mels = hparams['audio_num_mel_bins']
- sampling_rate = hparams['audio_sample_rate']
- hop_size = hparams['hop_size']
- win_size = hparams['win_size']
- fmin = hparams['fmin']
- fmax = hparams['fmax']
- y = y.clamp(min=-1., max=1.)
- global mel_basis, hann_window
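-    # Cache the mel filterbank per (fmax, device) and the Hann window per device,
-    # so they are built only once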
-    if str(fmax) + '_' + str(y.device) not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[str(fmax) + '_' + str(y.device)] = torch.from_numpy(mel).float().to(y.device)
- hann_window[str(y.device)] = torch.hann_window(win_size).to(y.device)
-
- y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft - hop_size) / 2), int((n_fft - hop_size) / 2)),
- mode='reflect')
- y = y.squeeze(1)
-
- spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[str(y.device)],
- center=center, pad_mode='reflect', normalized=False, onesided=True)
-
- if not complex:
- spec = torch.sqrt(spec.pow(2).sum(-1) + (1e-9))
- spec = torch.matmul(mel_basis[str(fmax) + '_' + str(y.device)], spec)
- spec = spectral_normalize_torch(spec)
- else:
- B, C, T, _ = spec.shape
- spec = spec.transpose(1, 2) # [B, T, n_fft, 2]
- return spec
diff --git a/spaces/Rifd/Sdallmodels/README.md b/spaces/Rifd/Sdallmodels/README.md
deleted file mode 100644
index f6ecccea6fe6e5144f2960109bd5cc0309e433cb..0000000000000000000000000000000000000000
--- a/spaces/Rifd/Sdallmodels/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: 530 Txt2Image Models Toy World
-emoji: 🪅🌐
-colorFrom: gray
-colorTo: green
-sdk: gradio
-sdk_version: 3.15.0
-app_file: app.py
-pinned: true
-duplicated_from: Omnibus/maximum_multiplier_places
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Ritori/TTS_Yui/utils.py b/spaces/Ritori/TTS_Yui/utils.py
deleted file mode 100644
index 7c5fd29a282c48bae55bf62ac00585f6778ca1fe..0000000000000000000000000000000000000000
--- a/spaces/Ritori/TTS_Yui/utils.py
+++ /dev/null
@@ -1,41 +0,0 @@
-import numpy as np
-from scipy.io.wavfile import read
-import torch
-
-
-from hparams import create_hparams
-hparams = create_hparams()
-
-def get_mask_from_lengths(lengths):
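-    # Returns a (batch, max_len) boolean mask: True for valid steps, False for padding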
- max_len = torch.max(lengths).item()
-
-    if hparams.cuda_enabled:
-        ids = torch.arange(0, max_len, out=torch.cuda.LongTensor(max_len))
-        mask = (ids < lengths.unsqueeze(1)).bool()
-    else:
-        ids = torch.arange(0, max_len, out=torch.LongTensor(max_len))
-        mask = (ids < lengths.unsqueeze(1)).bool()
-
- return mask
-
-
-
-def load_wav_to_torch(full_path):
- sampling_rate, data = read(full_path)
- return torch.FloatTensor(data.astype(np.float32)), sampling_rate
-
-
-def load_filepaths_and_text(filename, split="|"):
- with open(filename, encoding='utf-8') as f:
- filepaths_and_text = [line.strip().split(split) for line in f]
- return filepaths_and_text
-
-
-def to_gpu(x):
- x = x.contiguous()
-
- if torch.cuda.is_available():
- x = x.cuda(non_blocking=True)
- return torch.autograd.Variable(x)
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/datasets/pipelines/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/datasets/pipelines/__init__.py
deleted file mode 100644
index c6f424debd1623e7511dd77da464a6639d816745..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/datasets/pipelines/__init__.py
+++ /dev/null
@@ -1,25 +0,0 @@
-from .auto_augment import (AutoAugment, BrightnessTransform, ColorTransform,
- ContrastTransform, EqualizeTransform, Rotate, Shear,
- Translate)
-from .compose import Compose
-from .formating import (Collect, DefaultFormatBundle, ImageToTensor,
- ToDataContainer, ToTensor, Transpose, to_tensor)
-from .instaboost import InstaBoost
-from .loading import (LoadAnnotations, LoadImageFromFile, LoadImageFromWebcam,
- LoadMultiChannelImageFromFiles, LoadProposals)
-from .test_time_aug import MultiScaleFlipAug
-from .transforms import (Albu, CutOut, Expand, MinIoURandomCrop, Normalize,
- Pad, PhotoMetricDistortion, RandomCenterCropPad,
- RandomCrop, RandomFlip, Resize, SegRescale)
-
-__all__ = [
- 'Compose', 'to_tensor', 'ToTensor', 'ImageToTensor', 'ToDataContainer',
- 'Transpose', 'Collect', 'DefaultFormatBundle', 'LoadAnnotations',
- 'LoadImageFromFile', 'LoadImageFromWebcam',
- 'LoadMultiChannelImageFromFiles', 'LoadProposals', 'MultiScaleFlipAug',
- 'Resize', 'RandomFlip', 'Pad', 'RandomCrop', 'Normalize', 'SegRescale',
- 'MinIoURandomCrop', 'Expand', 'PhotoMetricDistortion', 'Albu',
- 'InstaBoost', 'RandomCenterCropPad', 'AutoAugment', 'CutOut', 'Shear',
- 'Rotate', 'ColorTransform', 'EqualizeTransform', 'BrightnessTransform',
- 'ContrastTransform', 'Translate'
-]
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/utils/make_divisible.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/utils/make_divisible.py
deleted file mode 100644
index 75ad756052529f52fe83bb95dd1f0ecfc9a13078..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/models/utils/make_divisible.py
+++ /dev/null
@@ -1,27 +0,0 @@
-def make_divisible(value, divisor, min_value=None, min_ratio=0.9):
- """Make divisible function.
-
- This function rounds the channel number to the nearest value that can be
- divisible by the divisor. It is taken from the original tf repo. It ensures
- that all layers have a channel number that is divisible by divisor. It can
- be seen here: https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py # noqa
-
- Args:
- value (int): The original channel number.
- divisor (int): The divisor to fully divide the channel number.
- min_value (int): The minimum value of the output channel.
- Default: None, means that the minimum value equal to the divisor.
- min_ratio (float): The minimum ratio of the rounded channel number to
- the original channel number. Default: 0.9.
-
- Returns:
- int: The modified output channel number.
- """
-
- if min_value is None:
- min_value = divisor
- new_value = max(min_value, int(value + divisor / 2) // divisor * divisor)
- # Make sure that round down does not go down by more than (1-min_ratio).
- if new_value < min_ratio * value:
- new_value += divisor
- return new_value
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/utils/logger.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/utils/logger.py
deleted file mode 100644
index 4149d9eda3dfef07490352d22ac40c42460315e4..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmseg/utils/logger.py
+++ /dev/null
@@ -1,27 +0,0 @@
-import logging
-
-from annotator.uniformer.mmcv.utils import get_logger
-
-
-def get_root_logger(log_file=None, log_level=logging.INFO):
- """Get the root logger.
-
- The logger will be initialized if it has not been initialized. By default a
- StreamHandler will be added. If `log_file` is specified, a FileHandler will
- also be added. The name of the root logger is the top-level package name,
- e.g., "mmseg".
-
- Args:
- log_file (str | None): The log filename. If specified, a FileHandler
- will be added to the root logger.
- log_level (int): The root logger level. Note that only the process of
- rank 0 is affected, while other processes will set the level to
- "Error" and be silent most of the time.
-
- Returns:
- logging.Logger: The root logger.
- """
-
- logger = get_logger(name='mmseg', log_file=log_file, log_level=log_level)
-
- return logger
diff --git a/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/datasets/mask_generator_512.py b/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/datasets/mask_generator_512.py
deleted file mode 100644
index d61f93e4c2ce6fc7478171b7788a6fecbceb3ace..0000000000000000000000000000000000000000
--- a/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/datasets/mask_generator_512.py
+++ /dev/null
@@ -1,93 +0,0 @@
-import numpy as np
-from PIL import Image, ImageDraw
-import math
-import random
-
-
-def RandomBrush(
- max_tries,
- s,
- min_num_vertex = 4,
- max_num_vertex = 18,
- mean_angle = 2*math.pi / 5,
- angle_range = 2*math.pi / 15,
- min_width = 12,
- max_width = 48):
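-    # Draw a random number of brush strokes: each stroke is a random-walk polyline
-    # whose joints are thickened with ellipses, yielding organic free-form holes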
- H, W = s, s
- average_radius = math.sqrt(H*H+W*W) / 8
- mask = Image.new('L', (W, H), 0)
- for _ in range(np.random.randint(max_tries)):
- num_vertex = np.random.randint(min_num_vertex, max_num_vertex)
- angle_min = mean_angle - np.random.uniform(0, angle_range)
- angle_max = mean_angle + np.random.uniform(0, angle_range)
- angles = []
- vertex = []
- for i in range(num_vertex):
- if i % 2 == 0:
- angles.append(2*math.pi - np.random.uniform(angle_min, angle_max))
- else:
- angles.append(np.random.uniform(angle_min, angle_max))
-
- h, w = mask.size
- vertex.append((int(np.random.randint(0, w)), int(np.random.randint(0, h))))
- for i in range(num_vertex):
- r = np.clip(
- np.random.normal(loc=average_radius, scale=average_radius//2),
- 0, 2*average_radius)
- new_x = np.clip(vertex[-1][0] + r * math.cos(angles[i]), 0, w)
- new_y = np.clip(vertex[-1][1] + r * math.sin(angles[i]), 0, h)
- vertex.append((int(new_x), int(new_y)))
-
- draw = ImageDraw.Draw(mask)
- width = int(np.random.uniform(min_width, max_width))
- draw.line(vertex, fill=1, width=width)
- for v in vertex:
- draw.ellipse((v[0] - width//2,
- v[1] - width//2,
- v[0] + width//2,
- v[1] + width//2),
- fill=1)
-        if np.random.random() > 0.5:
-            mask = mask.transpose(Image.FLIP_LEFT_RIGHT)
-        if np.random.random() > 0.5:
-            mask = mask.transpose(Image.FLIP_TOP_BOTTOM)
- mask = np.asarray(mask, np.uint8)
- if np.random.random() > 0.5:
- mask = np.flip(mask, 0)
- if np.random.random() > 0.5:
- mask = np.flip(mask, 1)
- return mask
-
-def RandomMask(s, hole_range=[0,1]):
- coef = min(hole_range[0] + hole_range[1], 1.0)
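-    # Rejection-sample: keep drawing rectangles and brush strokes until the hole
-    # ratio (fraction of zeroed pixels) lands inside the requested hole_range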
- while True:
- mask = np.ones((s, s), np.uint8)
- def Fill(max_size):
- w, h = np.random.randint(max_size), np.random.randint(max_size)
- ww, hh = w // 2, h // 2
- x, y = np.random.randint(-ww, s - w + ww), np.random.randint(-hh, s - h + hh)
- mask[max(y, 0): min(y + h, s), max(x, 0): min(x + w, s)] = 0
- def MultiFill(max_tries, max_size):
- for _ in range(np.random.randint(max_tries)):
- Fill(max_size)
- MultiFill(int(5 * coef), s // 2)
- MultiFill(int(3 * coef), s)
- mask = np.logical_and(mask, 1 - RandomBrush(int(9 * coef), s)) # hole denoted as 0, reserved as 1
- hole_ratio = 1 - np.mean(mask)
- if hole_range is not None and (hole_ratio <= hole_range[0] or hole_ratio >= hole_range[1]):
- continue
- return mask[np.newaxis, ...].astype(np.float32)
-
-def BatchRandomMask(batch_size, s, hole_range=[0, 1]):
- return np.stack([RandomMask(s, hole_range=hole_range) for _ in range(batch_size)], axis=0)
-
-
-if __name__ == '__main__':
- res = 512
- # res = 256
- cnt = 2000
- tot = 0
- for i in range(cnt):
- mask = RandomMask(s=res)
- tot += mask.mean()
- print(tot / cnt)
diff --git a/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/evaluatoin/cal_fid_pids_uids.py b/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/evaluatoin/cal_fid_pids_uids.py
deleted file mode 100644
index ba57c3fcd47ac6aa6c292588de1a0a1696bea655..0000000000000000000000000000000000000000
--- a/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/evaluatoin/cal_fid_pids_uids.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import cv2
-import os
-import sys
-sys.path.insert(0, '../')
-import numpy as np
-import math
-import glob
-import pickle
-try:
-    import pyspng
-except ImportError:
-    pyspng = None  # optional fast PNG loader; fall back to PIL when absent
-import PIL.Image
-import torch
-import dnnlib
-import scipy.linalg
-import sklearn.svm
-
-
-_feature_detector_cache = dict()
-
-def get_feature_detector(url, device=torch.device('cpu'), num_gpus=1, rank=0, verbose=False):
- assert 0 <= rank < num_gpus
- key = (url, device)
- if key not in _feature_detector_cache:
- is_leader = (rank == 0)
- if not is_leader and num_gpus > 1:
- torch.distributed.barrier() # leader goes first
- with dnnlib.util.open_url(url, verbose=(verbose and is_leader)) as f:
- _feature_detector_cache[key] = torch.jit.load(f).eval().to(device)
- if is_leader and num_gpus > 1:
- torch.distributed.barrier() # others follow
- return _feature_detector_cache[key]
-
-
-def read_image(image_path):
- with open(image_path, 'rb') as f:
- if pyspng is not None and image_path.endswith('.png'):
- image = pyspng.load(f.read())
- else:
- image = np.array(PIL.Image.open(f))
- if image.ndim == 2:
- image = image[:, :, np.newaxis] # HW => HWC
- if image.shape[2] == 1:
- image = np.repeat(image, 3, axis=2)
- image = image.transpose(2, 0, 1) # HWC => CHW
- image = torch.from_numpy(image).unsqueeze(0).to(torch.uint8)
-
- return image
-
-
-class FeatureStats:
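-    # Accumulates per-image feature vectors, optionally keeping all of them and/or
-    # a running mean and raw second moment for mean/covariance estimation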
- def __init__(self, capture_all=False, capture_mean_cov=False, max_items=None):
- self.capture_all = capture_all
- self.capture_mean_cov = capture_mean_cov
- self.max_items = max_items
- self.num_items = 0
- self.num_features = None
- self.all_features = None
- self.raw_mean = None
- self.raw_cov = None
-
- def set_num_features(self, num_features):
- if self.num_features is not None:
- assert num_features == self.num_features
- else:
- self.num_features = num_features
- self.all_features = []
- self.raw_mean = np.zeros([num_features], dtype=np.float64)
- self.raw_cov = np.zeros([num_features, num_features], dtype=np.float64)
-
- def is_full(self):
- return (self.max_items is not None) and (self.num_items >= self.max_items)
-
- def append(self, x):
- x = np.asarray(x, dtype=np.float32)
- assert x.ndim == 2
- if (self.max_items is not None) and (self.num_items + x.shape[0] > self.max_items):
- if self.num_items >= self.max_items:
- return
- x = x[:self.max_items - self.num_items]
-
- self.set_num_features(x.shape[1])
- self.num_items += x.shape[0]
- if self.capture_all:
- self.all_features.append(x)
- if self.capture_mean_cov:
- x64 = x.astype(np.float64)
- self.raw_mean += x64.sum(axis=0)
- self.raw_cov += x64.T @ x64
-
- def append_torch(self, x, num_gpus=1, rank=0):
- assert isinstance(x, torch.Tensor) and x.ndim == 2
- assert 0 <= rank < num_gpus
- if num_gpus > 1:
- ys = []
- for src in range(num_gpus):
- y = x.clone()
- torch.distributed.broadcast(y, src=src)
- ys.append(y)
- x = torch.stack(ys, dim=1).flatten(0, 1) # interleave samples
- self.append(x.cpu().numpy())
-
- def get_all(self):
- assert self.capture_all
- return np.concatenate(self.all_features, axis=0)
-
- def get_all_torch(self):
- return torch.from_numpy(self.get_all())
-
- def get_mean_cov(self):
- assert self.capture_mean_cov
- mean = self.raw_mean / self.num_items
- cov = self.raw_cov / self.num_items
- cov = cov - np.outer(mean, mean)
- return mean, cov
-
- def save(self, pkl_file):
- with open(pkl_file, 'wb') as f:
- pickle.dump(self.__dict__, f)
-
- @staticmethod
- def load(pkl_file):
- with open(pkl_file, 'rb') as f:
- s = dnnlib.EasyDict(pickle.load(f))
- obj = FeatureStats(capture_all=s.capture_all, max_items=s.max_items)
- obj.__dict__.update(s)
- return obj
-
-
-def calculate_metrics(folder1, folder2):
- l1 = sorted(glob.glob(folder1 + '/*.png') + glob.glob(folder1 + '/*.jpg'))
- l2 = sorted(glob.glob(folder2 + '/*.png') + glob.glob(folder2 + '/*.jpg'))
- assert(len(l1) == len(l2))
- print('length:', len(l1))
-
- # l1 = l1[:3]; l2 = l2[:3];
-
- # build detector
- detector_url = 'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/inception-2015-12-05.pt'
- detector_kwargs = dict(return_features=True) # Return raw features before the softmax layer.
- device = torch.device('cuda:0')
- detector = get_feature_detector(url=detector_url, device=device, num_gpus=1, rank=0, verbose=False)
- detector.eval()
-
- stat1 = FeatureStats(capture_all=True, capture_mean_cov=True, max_items=len(l1))
- stat2 = FeatureStats(capture_all=True, capture_mean_cov=True, max_items=len(l1))
-
- with torch.no_grad():
- for i, (fpath1, fpath2) in enumerate(zip(l1, l2)):
- print(i)
- _, name1 = os.path.split(fpath1)
- _, name2 = os.path.split(fpath2)
- name1 = name1.split('.')[0]
- name2 = name2.split('.')[0]
- assert name1 == name2, 'Illegal mapping: %s, %s' % (name1, name2)
-
- img1 = read_image(fpath1).to(device)
- img2 = read_image(fpath2).to(device)
- assert img1.shape == img2.shape, 'Illegal shape'
- fea1 = detector(img1, **detector_kwargs)
- stat1.append_torch(fea1, num_gpus=1, rank=0)
- fea2 = detector(img2, **detector_kwargs)
- stat2.append_torch(fea2, num_gpus=1, rank=0)
-
- # calculate fid
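-    # Frechet distance between the two Gaussian feature fits:
-    # FID = ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * sqrtm(sigma1 @ sigma2))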
- mu1, sigma1 = stat1.get_mean_cov()
- mu2, sigma2 = stat2.get_mean_cov()
- m = np.square(mu1 - mu2).sum()
- s, _ = scipy.linalg.sqrtm(np.dot(sigma1, sigma2), disp=False) # pylint: disable=no-member
- fid = np.real(m + np.trace(sigma1 + sigma2 - s * 2))
-
- # calculate pids and uids
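-    # Fit a linear SVM to separate real from fake features: U-IDS is the SVM's
-    # misclassification rate, P-IDS the fraction of fake samples scored more
-    # "real" than their paired ground-truth image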
- fake_activations = stat1.get_all()
- real_activations = stat2.get_all()
- svm = sklearn.svm.LinearSVC(dual=False)
- svm_inputs = np.concatenate([real_activations, fake_activations])
- svm_targets = np.array([1] * real_activations.shape[0] + [0] * fake_activations.shape[0])
- print('SVM fitting ...')
- svm.fit(svm_inputs, svm_targets)
- uids = 1 - svm.score(svm_inputs, svm_targets)
- real_outputs = svm.decision_function(real_activations)
- fake_outputs = svm.decision_function(fake_activations)
- pids = np.mean(fake_outputs > real_outputs)
-
- return fid, pids, uids
-
-
-if __name__ == '__main__':
- folder1 = 'path to the inpainted result'
- folder2 = 'path to the gt'
-
- fid, pids, uids = calculate_metrics(folder1, folder2)
- print('fid: %.4f, pids: %.4f, uids: %.4f' % (fid, pids, uids))
- with open('fid_pids_uids.txt', 'w') as f:
- f.write('fid: %.4f, pids: %.4f, uids: %.4f' % (fid, pids, uids))
-
diff --git a/spaces/RunningYou/mediapipe_inpainting/app.py b/spaces/RunningYou/mediapipe_inpainting/app.py
deleted file mode 100644
index b2c609e6ad891de0a17df9d986d0315e7b4c892a..0000000000000000000000000000000000000000
--- a/spaces/RunningYou/mediapipe_inpainting/app.py
+++ /dev/null
@@ -1,120 +0,0 @@
-from PIL import Image
-import numpy as np
-import torch
-import PIL
-import os
-import cv2
-import mediapipe as mp
-import gradio as gr
-from diffusers import StableDiffusionInpaintPipeline
-
-YOUR_TOKEN = os.environ.get('HF_TOKEN_SD')
-
-device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
-model_path = "runwayml/stable-diffusion-inpainting"
-
-if torch.cuda.is_available():
- pipe = StableDiffusionInpaintPipeline.from_pretrained(model_path, revision="fp16", torch_dtype=torch.float16,
- use_auth_token=YOUR_TOKEN).to(device)
-else:
- pipe = StableDiffusionInpaintPipeline.from_pretrained(model_path, use_auth_token=YOUR_TOKEN).to(device)
-
-
-def image_grid(imgs, cols, rows=1):
- assert len(imgs) == rows * cols
-
- w, h = imgs[0].size
- grid = PIL.Image.new('RGB', size=(cols * w, rows * h))
- grid_w, grid_h = grid.size
-
- for i, img in enumerate(imgs):
- grid.paste(img, box=(i % cols * w, i // cols * h))
- return grid
-
-
-def mediapipe_segmentation(image_file, mask_file):
- mp_drawing = mp.solutions.drawing_utils
- mp_selfie_segmentation = mp.solutions.selfie_segmentation
-
- # For static images:
-    BG_COLOR = (0, 0, 0)  # black
- MASK_COLOR = (255, 255, 255) # white
- with mp_selfie_segmentation.SelfieSegmentation(model_selection=0) as selfie_segmentation:
- image = cv2.imread(image_file)
- image_height, image_width, _ = image.shape
- # Convert the BGR image to RGB before processing.
- results = selfie_segmentation.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
-
- # blurred_image = cv2.GaussianBlur(image,(55,55),0)
- # condition = np.stack((results.segmentation_mask,) * 3, axis=-1) > 0.1
- # output_image = np.where(condition, image, blurred_image)
-
- # Draw selfie segmentation on the background image.
- # To improve segmentation around boundaries, consider applying a joint
- # bilateral filter to "results.segmentation_mask" with "image".
- condition = np.stack((results.segmentation_mask,) * 3, axis=-1) > 0.1
- # Generate solid color images for showing the output selfie segmentation mask.
- fg_image = np.zeros(image.shape, dtype=np.uint8)
- fg_image[:] = MASK_COLOR
- bg_image = np.zeros(image.shape, dtype=np.uint8)
- bg_image[:] = BG_COLOR
- output_image = np.where(condition, fg_image, bg_image)
- cv2.imwrite(mask_file, output_image)
-
-
-def image_inpainting(prompt, image_path, mask_image_path, num_samples=4, is_origin=False):
- image = PIL.Image.open(image_path).convert("RGB").resize((512, 512))
- mask_image = PIL.Image.open(mask_image_path).convert("RGB").resize((512, 512))
- num_samples = int(num_samples) if num_samples <= 4 else 4
- if not is_origin:
- guidance_scale = 7.5
- generator = torch.Generator(device=device).manual_seed(0) # change the seed to get different results
-
- images = pipe(prompt=prompt, image=image, mask_image=mask_image, guidance_scale=guidance_scale,
- generator=generator, num_images_per_prompt=num_samples).images
- else:
- images = pipe(prompt=prompt, image=image, mask_image=mask_image, num_images_per_prompt=num_samples).images
-
- # insert initial image in the list so we can compare side by side
- # images.insert(0, image)
- return image_grid(images, num_samples, 1)
-
-
-title = "Person Matting & Stable Diffusion In-Painting"
-description = "Inpainting Stable Diffusion mediapipe + Stable Diffusion "
-
-
-def predict1(dict, prompt, num_samples):
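-    # "auto" mode: ignore any user sketch and build the mask with MediaPipe
-    # selfie segmentation, so the detected person becomes the in-painted region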
- dict['image'].save('image.png')
- # dict['mask'].save('mask.png')
- mediapipe_segmentation('image.png', 'm_mask.png')
- image = image_inpainting(prompt, num_samples=num_samples, image_path='image.png', mask_image_path='m_mask.png',
- is_origin=False)
- return image
-
-
-def predict2(dict, prompt, num_samples):
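-    # "paint" mode: use the mask the user drew with the sketch tool directly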
- dict['image'].save('image.png')
- if 'mask' in dict:
- dict['mask'].save('mask.png')
- image = image_inpainting(prompt, num_samples=num_samples, image_path='image.png', mask_image_path='mask.png',
- is_origin=True)
- return image
-
-
-image_input = gr.Image(source='upload', tool='sketch', type='pil')
-prompt = gr.Textbox(label='prompt')
-number = gr.Slider(1, 4, value=2, label='num_samples')
-
-examples = [
- [os.path.join(os.path.dirname(__file__), 'example1.png'), 'a bench in a field', 2],
- # [os.path.join(os.path.dirname(__file__), 'example2.png'), 'a big ship parked on the shore', 2],
- # [os.path.join(os.path.dirname(__file__), 'example3.png'), 'a palace with many steps', 2]
-]
-
-greeter_1 = gr.Interface(predict1, inputs=[image_input, prompt, number], outputs=gr.Image(label='auto'))
-greeter_2 = gr.Interface(predict2, inputs=[image_input, prompt, number], outputs=gr.Image(label='paint'))
-demo = gr.Parallel(greeter_1, greeter_2, examples=examples, cache_examples=False)
-
-if __name__ == "__main__":
- demo.launch(enable_queue=True)
diff --git a/spaces/RustX/CSV-ChatBot/modules/layout.py b/spaces/RustX/CSV-ChatBot/modules/layout.py
deleted file mode 100644
index f0df244fd9b8dff7e47fb5169965a15eb750ba60..0000000000000000000000000000000000000000
--- a/spaces/RustX/CSV-ChatBot/modules/layout.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import streamlit as st
-
-
-class Layout:
- def show_header(self):
- """
- Displays the header of the app
- """
- st.markdown(
- """
-            <h1>CSV-ChatBot, Talk with your csv-data! / CSV-ChatBot, csv 데이터로 대화하세요! 💬</h1>
- """,
- unsafe_allow_html=True,
- )
-
- def show_api_key_missing(self):
- """
- Displays a message if the user has not entered an API key
- """
- st.markdown(
- """
-