Mapple Diffusion
-
- Branch of the Stable Diffusion 2.1 model.
- {"Add the following tokens to your prompts for the model to work properly: prefix" if prefix else ""}
-
-
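For readers who want to try a checkpoint like this, here is a minimal, hypothetical sketch using the Hugging Face `diffusers` library. The repo id, prefix tokens, and prompt below are placeholders, not values taken from this card; swap in the actual identifiers from the model page.

```python
# Hypothetical usage sketch for a Stable Diffusion 2.1 branch such as this one.
# The repo id and prompt are placeholders, not real values from this card.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "your-username/mapple-diffusion",  # placeholder repo id
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA GPU is available

# If the card specifies prefix tokens, prepend them to every prompt.
prompt = "<prefix tokens> a watercolor landscape at sunset"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("sample.png")
```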
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/BioShock 2 Crack Only Missing File Fix-Razor1911 Keygen Tips and Tricks to Make the Game Work.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/BioShock 2 Crack Only Missing File Fix-Razor1911 Keygen Tips and Tricks to Make the Game Work.md
deleted file mode 100644
index f6e358f606ceaf30ab2ff5deb2f3ba8f4ca489b8..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/BioShock 2 Crack Only Missing File Fix-Razor1911 Keygen Tips and Tricks to Make the Game Work.md
+++ /dev/null
@@ -1,54 +0,0 @@
**Table 1: Outline of the article**

| Heading | Subtopics |
| --- | --- |
|  | - What is Razor1911 and how they cracked BioShock 2<br>- How to download and install the crack only missing file fix<br>- Pros and cons of using the crack<br>- Conclusion: Is it worth it? |
| H2: What is BioShock 2 and why you might need a crack | - A brief overview of the game's plot, setting, and gameplay<br>- The problems with the original release: SecuROM, XLive, PA, and STEAM protection<br>- The benefits of using a crack: bypassing activation, playing offline, accessing DLCs |
| H2: What is Razor1911 and how they cracked BioShock 2 | - A brief history of Razor1911: one of the oldest and most respected cracking groups<br>- The technical details of how they cracked BioShock 2: removing SecuROM Matroschka, XLive, PA, and STEAM checks<br>- The features of their crack: the complete v1.5 version with all DLCs and G4WL removed |
| H2: How to download and install the crack only missing file fix | - A step-by-step guide on how to download the crack from MegaGames or GameCopyWorld<br>- A step-by-step guide on how to install the crack: unpack, burn or mount, copy the crack from the PROPHET dir<br>- A troubleshooting section for common errors and issues |
| H2: Pros and cons of using the crack | - A comparison table of the advantages and disadvantages of using the crack vs. the original game<br>- A disclaimer about the legal and ethical implications of using a crack<br>- A warning about the potential risks of malware, viruses, or bans |
| H2: Conclusion: Is it worth it? | - A summary of the main points of the article<br>- A personal opinion on whether using the crack is worth it or not<br>- A call to action for the readers to share their thoughts and experiences |

**Table 2: Article with HTML formatting**
If you are a fan of first-person shooters with a dystopian twist, you might have heard of BioShock 2. It is a sequel to the critically acclaimed BioShock, set in the underwater city of Rapture in 1968. You play as a Big Daddy, a genetically enhanced human in a diving suit, who must protect a Little Sister, a young girl who can harvest a substance called ADAM from corpses. Along the way, you will encounter hostile splicers, security bots, and other Big Daddies, as well as moral choices that will affect the outcome of the game.
-Download File ☆☆☆☆☆ https://byltly.com/2uKzL6
BioShock 2 was released in 2010 for PC, Xbox 360, and PlayStation 3. However, many PC gamers were disappointed by the game's DRM (digital rights management) system. The game required online activation through SecuROM Matroschka, XLive, PA (Product Activation), and STEAM. These protections limited the number of installations, required an internet connection, and prevented modding and customization. Some gamers also reported performance issues, crashes, and bugs.
-Fortunately, there is a way to bypass these annoyances and enjoy BioShock 2 without any restrictions. That is by using a crack. A crack is a modified version of a game's executable file that allows it to run without checking for DRM or CD/DVD. In this article, we will tell you everything you need to know about BioShock 2 Crack Only Missing File Fix-Razor1911 Keygen. We will explain what it is, who made it, how to get it, and what are its pros and cons. By the end of this article, you will be able to decide if using this crack is worth it or not.
-BioShock 2 is a first-person shooter with role-playing elements developed by 2K Marin and published by 2K Games. It is set in Rapture, an underwater city built by Andrew Ryan, a visionary businessman who wanted to create a utopia free from government and religion. However, Rapture soon became a dystopia plagued by civil war, genetic mutations, and corruption.
-The game takes place ten years after the events of BioShock. You play as Subject Delta, one of the first Big Daddies ever created. You were separated from your Little Sister Eleanor by her mother Sofia Lamb, a psychologist who took over Rapture after Ryan's death. Lamb wants to use Eleanor as part of her plan to create a collective consciousness called The Family. You must find Eleanor before Lamb brainwashes her or kills you.
-BioShock 2 features similar gameplay mechanics as BioShock. You can use weapons such as guns, drills, spears, and grenades to fight enemies. You can also use plasmids, genetic modifications that grant you supernatural abilities such as telekinesis, fireballs, or bees. You can upgrade your weapons and plasmids at vending machines scattered throughout Rapture. You can also hack security devices such as cameras or turrets to aid you in combat.
-BioShock 2 Complete v1.5 All No-DVD [Prophet]
-BioShock 2 Razor1911 CrackFix Download
-BioShock 2 Missing File Fix-Razor1911
-BioShock 2 PC Game Full Version Razor1911
-BioShock 2 Crack Only Razor1911 Free
-BioShock 2 Minerva's Den DLC Razor1911
-BioShock 2 Razor1911 Installation Guide
-BioShock 2 Rapture Metro Map Pack Razor1911
-BioShock 2 Protector Trials DLC Razor1911
-BioShock 2 Sinclair Solutions Test Pack Razor1911
-BioShock 2 Kill'em Kindly DLC Razor1911
-BioShock 2 Zigo & Blanche Characters DLC Razor1911
-BioShock 2 Steam Keygen Razor1911
-BioShock 2 G4WL Removed Razor1911
-BioShock 2 No CD Crack Razor1911
-BioShock 2 Megaupload Links Razor1911
-BioShock 2 Rapidshare Links Razor1911
-BioShock 2 YouTube Crack Tutorial Razor1911
-BioShock 2 Working Crack No Torrents Razor1911
-BioShock 2 Crack No Virus No Surveys Razor1911
-BioShock 2 Multiplayer Crack Razor1911
-BioShock 2 Patch v1.5 Razor1911
-BioShock 2 Serial Number Generator Razor1911
-BioShock 2 Activation Code Razor1911
-BioShock 2 Offline Activation Razor1911
-BioShock 2 Reloaded Crack vs Razor1911 Crack
-BioShock 2 Skidrow Crack vs Razor1911 Crack
-BioShock 2 Codex Crack vs Razor1911 Crack
-BioShock 2 FitGirl Repack vs Razor1911 Full Game
-BioShock 2 CPY Crack vs Razor1911 Crack
-BioShock 2 Remastered Edition Crack Razor1911
-BioShock 2 Collection Edition Crack Razor1911
-BioShock 2 Ultimate Edition Crack Razor1911
-BioShock 2 Deluxe Edition Crack Razor1911
-BioShock 2 Gold Edition Crack Razor1911
-BioShock 2 Platinum Edition Crack Razor1911
-BioShock 2 Limited Edition Crack Razor1911
-BioShock 2 Special Edition Crack Razor1911
-BioShock 2 Collector's Edition Crack Razor1911
-BioShock 2 Anniversary Edition Crack Razor1911
-Bioshock Infinite + Bioshock Infinite Burial at Sea Episode One + Bioshock Infinite Burial at Sea Episode Two + Bioshock Infinite Clash in the Clouds + Bioshock Infinite Columbia's Finest + Bioshock Infinite Comstock's China Broom Shotgun + Bioshock Infinite Comstock's Bird's Eye Sniper Rifle + Bioshock Infinite Industrial Revolution Rewards Pack + Bioshock Infinite Season Pass + Bioshock Infinite Upgrade Pack + Bioshock Infinite Early Bird Special Pack + Bioshock Infinite A Soldier's Death + Bioshock Infinite A Modern Day Icarus! + Bioshock Infinite The Siege of Columbia! + Bioshock Infinite The Lamb of Columbia! + Bioshock Infinite False Shepherd! + Bioshock Infinite City in the Sky! + Bioshock Infinite Beast of America! + Bioshock Infinite Truth from Legend! + Bioshock Infinite Mind in Revolt! + Bioshock Infinite The Original Soundtrack: I Am Rapture - Rapture Is Me (Official Score) + Bioshock Infinite The Original Soundtrack: The Music Of Columbia (Licensed Soundtrack) + Bioshock Infinite The Art Of Columbia (Artbook) + Bioshock Infinite The Definitive Edition Guide (Strategy Guide) + Bioshock Infinite The Complete Edition (All DLCs) [Razor1911]
BioShock 2 also introduces new features such as dual-wielding weapons and plasmids.
If you are looking for a reliable and effective antivirus solution for your PC, you might have heard of ESET NOD32 Antivirus 13. This is one of the most popular and trusted antivirus programs in the market, with over 110 million users worldwide. It offers advanced protection against all kinds of malware, including viruses, worms, trojans, ransomware, spyware, adware, rootkits, and more. It also provides enhanced internet security features, such as firewall, anti-phishing, anti-spam, parental control, webcam protection, and more. It is designed to be fast, light, and easy to use, with minimal impact on your system performance and battery life.
-Download File ○ https://byltly.com/2uKyJE
However, there is a catch. ESET NOD32 Antivirus 13 is not a free software. You need to purchase a license key to activate it and enjoy its full features. The license key costs $39.99 per year for one device, which might be too expensive for some users. That is why some people look for alternative ways to get ESET NOD32 Antivirus 13 for free, such as using a crack.
-A crack is a software tool that modifies or bypasses the original code of a program to make it work without a license key or activation. By using a crack, you can get ESET NOD32 Antivirus 13 for free and use it without any limitations or restrictions. Sounds tempting, right? But before you rush to download and install ESET NOD32 Antivirus 13 Crack, you should know the risks and consequences of doing so. In this article, we will explain what ESET NOD32 Antivirus 13 Crack is, how to download and install it, what are its features, pros and cons, and whether it is worth using or not.
-ESET NOD32 Antivirus 13 Crack claims to offer the same features as the original ESET NOD32 Antivirus 13 program. These include:
-ESET NOD32 Antivirus 13 Crack uses a powerful engine that scans your system in real-time and detects and removes any malware threats. It also uses heuristic analysis and cloud-based technology to identify new and unknown malware variants. It can protect you from ransomware attacks by blocking unauthorized encryption of your files. It can also scan your removable devices, such as USB drives, CDs, DVDs, etc., and prevent malware infection from them.
-How to activate ESET NOD32 Antivirus 13 with crack file
-ESET NOD32 Antivirus 13 license key generator online
-Download ESET NOD32 Antivirus 13 full version cracked for free
-ESET NOD32 Antivirus 13 internet security features and benefits
-ESET NOD32 Antivirus 13 crack patch download link
-Best antivirus software for Windows 10: ESET NOD32 Antivirus 13 review
-ESET NOD32 Antivirus 13 activation code lifetime validity
-ESET NOD32 Antivirus 13 crack serial key latest update
-ESET NOD32 Antivirus 13 vs other antivirus software comparison
-ESET NOD32 Antivirus 13 system requirements and installation guide
-ESET NOD32 Antivirus 13 crack keygen download for Mac OS
-ESET NOD32 Antivirus 13 internet security firewall settings and configuration
-ESET NOD32 Antivirus 13 crack license key free download no survey
-How to uninstall ESET NOD32 Antivirus 13 completely from your PC
-ESET NOD32 Antivirus 13 customer support and feedback
-ESET NOD32 Antivirus 13 crack product key for Android devices
-How to update ESET NOD32 Antivirus 13 to the latest version
-ESET NOD32 Antivirus 13 internet security malware protection and removal
-ESET NOD32 Antivirus 13 crack registration key for Linux users
-How to scan your PC with ESET NOD32 Antivirus 13 and fix errors
-ESET NOD32 Antivirus 13 internet security parental control and privacy options
-ESET NOD32 Antivirus 13 crack activation key for iOS devices
-How to backup and restore your data with ESET NOD32 Antivirus 13
-ESET NOD32 Antivirus 13 internet security phishing and spam protection
-ESET NOD32 Antivirus 13 crack license code for Windows users
-How to optimize your PC performance with ESET NOD32 Antivirus 13
-ESET NOD32 Antivirus 13 internet security ransomware protection and recovery
-ESET NOD32 Antivirus 13 crack serial number for Mac users
-How to troubleshoot common issues with ESET NOD32 Antivirus 13
-ESET NOD32 Antivirus 13 internet security webcam and microphone protection
-ESET NOD32 Antivirus 13 crack product code for Android users
-How to customize your settings and preferences with ESET NOD32 Antivirus 13
-ESET NOD32 Antivirus 13 internet security network attack protection and prevention
-ESET NOD32 Antivirus 13 crack registration code for Linux users
-How to use the advanced tools and features of ESET NOD32 Antivirus 13
-ESET NOD32 Antivirus 13 internet security password manager and encryption
-ESET NOD32 Antivirus 13 crack activation code for iOS users
-How to renew your subscription and get discounts with ESET NOD32 Antivirus 13
-ESET NOD32 Antivirus 13 internet security anti-theft and device locator
-ESET NOD32 Antivirus 13 crack license number for Windows users
-How to test your PC security with ESET NOD32 Antivirus 13 online scanner
-ESET NOD32 Antivirus 13 internet security cloud-based scanning and detection
-ESET NOD32 Antivirus 13 crack serial code for Mac users
-How to join the ESET community and get tips and tricks with ESET NOD32 Antivirus 13
-ESET NOD32 Antivirus 13 internet security gamer mode and battery saver
-ESET NOD32 Antivirus 13 crack product number for Android users
-How to contact the technical support team and get help with ESET NOD32 Antivirus 13
-ESET NOD32 Antivirus 13 internet security VPN and secure browsing
-ESET NOD32 Antivirus 13 crack registration number for Linux users
ESET NOD32 Antivirus 13 Crack also provides comprehensive protection for your online activities. It has a built-in firewall that monitors your network traffic and blocks any suspicious or malicious connections. It has an anti-phishing module that warns you of fake or fraudulent websites that try to steal your personal or financial information. It has an anti-spam feature that filters out unwanted or harmful emails from your inbox. It has a parental control feature that lets you set rules and limits for your children's online access. It has a webcam protection feature that prevents unauthorized access to your webcam by hackers or spies.
-ESET NOD32 Antivirus 13 Crack is designed to be fast and light on your system resources. It does not slow down your PC or drain your battery life. It runs smoothly in the background without interfering with your work or gaming experience. It has a simple and intuitive user interface that lets you customize your settings and preferences easily. It also has a gamer mode that automatically disables notifications and pop-ups when you are playing games or watching movies.
-ESET NOD32 Antivirus 13 Crack also comes with some extra tools and benefits that enhance your security and convenience. These include:
-As you can see, ESET NOD32 Antivirus 13 Crack seems to offer a lot of features and benefits for free. But is it really worth using? To answer this question, let us weigh the pros and cons of using ESET NOD32 Antivirus 13 Crack.
-ESET NOD32 Antivirus 13 Crack provides reliable and effective protection against all kinds of malware threats. It can detect and remove viruses, worms, trojans, ransomware, spyware, adware, rootkits, etc., from your PC. It can also prevent malware infection from external devices or online sources. It can protect you from ransomware attacks by blocking unauthorized encryption of your files.
-ESET NOD32 Antivirus 13 Crack also provides comprehensive protection for your online activities. It can block malicious or suspicious network connections with its firewall feature. It can warn you of fake or fraudulent websites with its anti-phishing feature. It can filter out unwanted or harmful emails with its anti-spam feature. It can limit or restrict your children's online access with its parental control feature. It can prevent unauthorized access to your webcam with its webcam protection feature.
-ESET NOD32 Antivirus 13 Crack is easy to use and customize. It has a simple and intuitive user interface that lets you adjust your settings and preferences easily. You can choose from different scan modes, such as smart scan, custom scan, deep scan, etc., depending on your needs. You can also schedule scans at specific times or intervals. You can also enable or disable various features according to your preferences.
-ESET NOD32 Antivirus 13 Crack is not perfect. Sometimes it may detect some legitimate files or programs as malware threats and block or delete them by mistake. This may cause some problems or errors in your system or applications. You may also encounter some compatibility issues with some other software or hardware devices on your PC.
-ESET NOD32 Antivirus 13 Crack does not provide any official customer support or updates from the developers of ESET NOD32 Antivirus 13 program. If you have any issues or questions regarding the software, you cannot contact them for help or guidance. You also cannot receive any updates or patches that fix bugs or improve performance or security of the software.
-The biggest drawback of using ESET NOD32 Antivirus 13 Crack is the risk of malware infection and legal issues. Since ESET NOD32 Antivirus 13 Crack is an illegal software tool that modifies or bypasses the original code of ESET NOD32 Antivirus 13 program, it may contain malicious code itself that can harm your PC or steal your data. You may also download ESET NOD32 Antivirus 13 Crack from untrusted sources that may infect your PC with viruses or other malware during the download process.
-Moreover, using the crack is a violation of the terms and conditions of the ESET NOD32 Antivirus 13 program, which may result in legal consequences. You may face lawsuits, fines, or even jail time for using ESET NOD32 Antivirus 13 Crack. You may also lose your warranty or insurance coverage for your PC or device if you use ESET NOD32 Antivirus 13 Crack.
-In conclusion, ESET NOD32 Antivirus 13 Crack is a software tool that lets you use ESET NOD32 Antivirus 13 program for free without a license key or activation. It claims to offer the same features and benefits as the original ESET NOD32 Antivirus 13 program, such as advanced antivirus protection, enhanced internet security, improved performance and usability, and additional tools and benefits.
-However, using ESET NOD32 Antivirus 13 Crack also has some drawbacks and risks. It may cause some false positives and compatibility issues with your system or applications. It does not provide any official customer support or updates from the developers of ESET NOD32 Antivirus 13 program. It may also expose your PC to malware infection or legal issues for violating the terms and conditions of ESET NOD32 Antivirus 13 program.
-Therefore, we do not recommend using ESET NOD32 Antivirus 13 Crack. It is not worth risking your PC's security and performance or facing legal consequences for saving some money. Instead, we suggest you purchase a genuine license key for ESET NOD32 Antivirus 13 program from the official website or authorized dealers. This way, you can enjoy the full features and benefits of ESET NOD32 Antivirus 13 program without any limitations or restrictions. You can also receive regular updates and patches that fix bugs or improve performance or security of the software. You can also contact the customer support service if you have any issues or questions regarding the software.
-If you are looking for a reliable and effective antivirus solution for your PC, ESET NOD32 Antivirus 13 program is a great choice. But do not use ESET NOD32 Antivirus 13 Crack to get it for free. It is not worth it.
-Here are some frequently asked questions about ESET NOD32 Antivirus 13 Crack:
-ESET NOD32 Antivirus 13 is a popular and trusted antivirus program that offers advanced protection against all kinds of malware threats, such as viruses, worms, trojans, ransomware, spyware, adware, rootkits, etc. It also provides enhanced internet security features, such as firewall, anti-phishing, anti-spam, parental control, webcam protection, etc. It is designed to be fast, light, and easy to use, with minimal impact on your system performance and battery life.
-ESET NOD32 Antivirus 13 Crack is a software tool that modifies or bypasses the original code of ESET NOD32 Antivirus 13 program to make it work without a license key or activation. By using ESET NOD32 Antivirus 13 Crack, you can get ESET NOD32 Antivirus 13 for free and use it without any limitations or restrictions.
-No, ESET NOD32 Antivirus 13 Crack is not safe to use. It may contain malicious code that can harm your PC or steal your data. It may also download from untrusted sources that may infect your PC with viruses or other malware during the download process. It may also expose your PC to legal issues for violating the terms and conditions of ESET NOD32 Antivirus 13 program.
-No, ESET NOD32 Antivirus 13 Crack is not legal to use. It is a violation of the terms and conditions of ESET NOD32 Antivirus 13 program, which may result in legal consequences. You may face lawsuits, fines, or even jail time for using ESET NOD32 Antivirus 13 Crack. You may also lose your warranty or insurance coverage for your PC or device if you use ESET NOD32 Antivirus 13 Crack.
-You can get a genuine license key for ESET NOD32 Antivirus 13 program by purchasing it from the official website or authorized dealers. The license key costs $39.99 per year for one device. You can also get discounts or offers if you buy multiple licenses or renew your subscription.
-Eplan is a software suite that provides solutions for electrical engineering, automation, and mechatronics. It allows you to design, document, and manage complex projects with ease. The latest version of Eplan, Eplan 2022, was released in October 2021 and offers new features such as cloud integration, data exchange, and digital twin. However, it also comes with a high price tag of €3,990 for a single license. If you want to use Eplan 2022 without paying a dime, here are some ways you can crack it for free.
-Download Zip >> https://byltly.com/2uKxr9
If you have already installed Eplan 2022 on your PC, you can use a license generator to activate it for free. A license generator is a tool that creates a valid license file for your software and bypasses the activation process. However, these tools are also illegal and risky, as they may contain malware or viruses. Use them at your own risk and discretion. To do this, you need to follow these steps:
-If you don't have Eplan 2022 installed on your PC, you can use a patched file to install and activate it for free. A patched file is a modified version of the original software file that removes the activation requirement and allows you to use it without a license. However, these files are also illegal and risky, as they may contain malware or viruses. Use them at your own risk and discretion. To do this, you need to follow these steps:
-If you want to use Eplan 2022 without modifying any files on your PC, you can use an emulator to activate it for free. An emulator is a tool that simulates a hardware dongle or key that is required for some software products to run. However, these tools are also illegal and risky, as they may contain malware or viruses. Use them at your own risk and discretion. To do this, you need to follow these steps:
-Atlantida El Mundo Antediluviano (Atlantis: The Antediluvian World) is a book written by Ignatius L. Donnelly, a politician and researcher from the United States. It was published in 1882 and explores the possibility that Atlantis was a real continent that existed in the Atlantic Ocean and was the origin of all civilizations. In this article, we will review the main arguments and evidence presented by Donnelly in his book and evaluate their validity and relevance.
-Download ===== https://imgfil.com/2uxXxQ
The book is based mainly on Plato's account of Atlantis, which describes it as a large island that was ruled by a powerful and noble race of people. According to Plato, Atlantis was destroyed by a cataclysmic event that submerged it under the sea. Donnelly claims that Plato's story is not a myth or a moral allegory, but a historical fact that can be verified by various sources.
-Donnelly's main thesis is that Atlantis was the cradle of civilization, the first place where humans evolved from barbarism to culture and society. He argues that Atlantis was a technologically advanced nation that colonized many parts of the world, such as the Gulf of Mexico, the Mississippi River, the Amazon River, the Nile River, the Pacific coast of South America (Incas), the Mediterranean Sea, Europe, Africa, the Baltic Sea, and the Black Sea. He also asserts that Atlantis is the true paradise on earth, the idyllic place that appears in all mythologies. For example, he identifies Atlantis with the Garden of Hesperides of the Cretans, the Elysian Fields of the Romans, the Olympus of the Greeks, the Asgard of the Norsemen, among others.
-Donnelly also tries to prove that all major gods and goddesses from different cultures are derived from the Atlantean myths. He says that the great deities of Greece, Egypt, Scandinavia, Mexico and Peru are "free versions" of the real characters of history: the kings and heroes of Atlantis, assimilated and distorted by other cultures. He claims that among these "distortions" or interpretations, the mythology of the Egyptians and Incas is the one that best represents the religion of ancient Atlantis.
-Donnelly uses a variety of evidence to support his theory of Atlantis. Some examples are:
-Despite its popularity and influence in its time, Atlantida El Mundo Antediluviano has been widely criticized and rejected by modern scholars and scientists. Some of the reasons are:
-WhatsApp is one of the most popular messaging apps in the world, with more than 2 billion users across 180 countries. But did you know that WhatsApp also has a dedicated app for businesses? It's called WhatsApp Business, and it can help you transform your customer experience, drive sales, and grow your business.
-DOWNLOAD ⚙⚙⚙ https://jinyurl.com/2uNM53
WhatsApp Business is a free app that allows you to create a business presence on WhatsApp, communicate more efficiently with your customers, and access useful tools like automated messages, quick replies, labels, chat filters, and more. You can also use WhatsApp Business with a landline or fixed phone number, or access it from your computer's browser using WhatsApp Web.
-In this article, we will explain what WhatsApp Business is, how it works, what features and benefits it offers, how to choose between the two products (WhatsApp Business Platform and WhatsApp Business App), how to get started with it, and how to use it effectively for your business. We will also share some success stories of businesses that have used WhatsApp Business successfully.
-WhatsApp Business has many features and benefits that can help you connect with your customers, showcase your products, and manage your conversations. Here are some of them:
-A business profile is like a digital storefront for your business on WhatsApp. It allows you to provide valuable information about your business, such as your website, location, contact details, opening hours, catalog, etc. You can also customize your profile with a logo, cover photo, description, and more.
-A business profile can help you build trust and credibility with your customers, as well as increase your visibility and discoverability on WhatsApp. Customers can easily find your profile by searching for your name or phone number on WhatsApp, or by scanning a QR code that you can display on your website, social media, or physical store.
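As a small illustration, here is a hypothetical Python sketch that generates such a QR code with the open-source `qrcode` package, pointing at a click-to-chat link; the phone number and greeting text are placeholders.

```python
# Hypothetical sketch: generate a click-to-chat QR code for a business number.
# The phone number below is a placeholder; wa.me links use the full number in
# international format without '+', spaces, or dashes.
import qrcode

chat_link = "https://wa.me/15551234567?text=Hello%21"  # placeholder number and greeting
img = qrcode.make(chat_link)  # returns a PIL image
img.save("whatsapp_business_qr.png")
```

Customers who scan the code are taken straight to a chat with the business number encoded in the link.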
-Business messaging tools are designed to help you communicate more efficiently and effectively with your customers on WhatsApp. They include:
-business whatsapp automation
-business whatsapp management tools
-business whatsapp customer service
-business whatsapp marketing strategy
-business whatsapp integration with CRM
-business whatsapp analytics and reporting
-business whatsapp chatbot development
-business whatsapp bulk messaging service
-business whatsapp for ecommerce
-business whatsapp for education
-business whatsapp for healthcare
-business whatsapp for travel
-business whatsapp for real estate
-business whatsapp for restaurants
-business whatsapp for nonprofits
-business whatsapp for banking
-business whatsapp for insurance
-business whatsapp for law firms
-business whatsapp for fitness
-business whatsapp for beauty
-business whatsapp tips and tricks
-business whatsapp best practices
-business whatsapp case studies
-business whatsapp success stories
-business whatsapp testimonials
-business whatsapp vs personal whatsapp
-business whatsapp vs facebook messenger
-business whatsapp vs wechat
-business whatsapp vs telegram
-business whatsapp vs signal
-how to use business whatsapp effectively
-how to set up business whatsapp account
-how to create a business whatsapp profile
-how to verify a business whatsapp number
-how to add a business whatsapp catalog
-how to send a business whatsapp message template
-how to receive payments on business whatsapp
-how to create a QR code for business whatsapp
-how to backup and restore business whatsapp chats
-how to migrate from personal to business whatsapp
-how to download and install business whatsapp app
-how to update and upgrade business whatsapp app
-how to delete and deactivate business whatsapp account
-how to secure and protect your business whatsapp account
-how to troubleshoot and fix common issues with your business whatsapp app
WhatsApp Business allows you to use a landline or fixed phone number instead of a mobile phone number to register your account and verify your business. This can be useful if you want to use a dedicated phone number for your business, or if you don't have a mobile phone or SIM card.
-To use WhatsApp Business with a landline or fixed phone number, you need to select the "Call me" option during the verification process, and then enter the 6-digit code that you receive via a phone call. You can also use the same phone number for both WhatsApp Business and WhatsApp Messenger, as long as they are installed on different devices.
-WhatsApp Web is a feature that allows you to access WhatsApp Business from your computer's browser. It can help you manage your chats and contacts more conveniently, as well as use your keyboard and mouse to type and send messages, attach files, etc.
-To use WhatsApp Web, you need to scan a QR code from your phone's WhatsApp Business app to link it with your computer's browser. You can also enable desktop notifications and sound alerts to stay updated on your chats. However, you need to keep your phone connected to the internet for WhatsApp Web to work.
-WhatsApp Business has two products that cater to different business sizes and needs: WhatsApp Business Platform and WhatsApp Business App. Here is how they compare:
| WhatsApp Business Platform | WhatsApp Business App |
| --- | --- |
| Designed for large businesses and enterprises that need to communicate with millions of customers worldwide. | Designed for small and medium businesses that need to communicate with hundreds or thousands of customers locally. |
| Requires a third-party provider or partner to access the platform and integrate it with your existing systems and tools. | Does not require any third-party provider or partner. You can download and install the app directly from the Google Play Store or the Apple App Store. |
| Allows you to send both session messages (free) and template messages (paid) to your customers. | Allows you to send only session messages (free) to your customers. |
| Session messages are sent and received in response to a customer-initiated conversation within a 24-hour window. | Session messages are sent and received in response to a customer-initiated conversation within a 24-hour window. |
| Template messages are pre-approved messages that can be sent outside the 24-hour window for specific purposes, such as appointment reminders, delivery notifications, etc. | N/A |
| Supports rich media, such as images, videos, documents, etc. | Supports rich media, such as images, videos, documents, etc. |
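To make the session/template distinction concrete, here is a hypothetical Python sketch of sending a pre-approved template message through the platform's REST API (Cloud API style). The API version, access token, phone number ID, recipient, and template name are all placeholders; the exact call in practice depends on your provider or on Meta's current documentation.

```python
# Hypothetical sketch: send a pre-approved template message via the
# WhatsApp Business Platform. All tokens, IDs, and the template name are
# placeholders, not real values.
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"     # placeholder
PHONE_NUMBER_ID = "123456789012345"    # placeholder business phone number ID
RECIPIENT = "15551234567"              # placeholder customer number

url = f"https://graph.facebook.com/v17.0/{PHONE_NUMBER_ID}/messages"
payload = {
    "messaging_product": "whatsapp",
    "to": RECIPIENT,
    "type": "template",
    "template": {
        "name": "appointment_reminder",  # placeholder, must be pre-approved
        "language": {"code": "en_US"},
    },
}
response = requests.post(
    url,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json())  # includes the message ID on success
```

A free-form session message (a reply inside the 24-hour window) would use the same endpoint with a `"type": "text"` body instead of a template.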
| Name | Description | Features | Price |
| --- | --- | --- | --- |
| Audrey AR Camera | A camera app that lets you take photos with fun, high-quality effects such as X-ray, cartoon, sketch, etc. | Variety of filters and effects; high-quality camera; editing tool; share option | Free |
| Snapchat | A social media app that lets you take photos and videos with lenses, stickers, filters, etc., and send them to your friends or post them to your story. | Lenses, stickers, filters; Bitmoji, Cameos; Snap Map; chat, voice calls, video calls; Discover | Free (with in-app purchases) |
| B612 | A camera app that lets you take selfies and videos with beauty effects, stickers, filters, etc., and edit them with various tools. | Beauty effects, stickers, filters; AR emoji; music and sound effects; collage, layout | Free (with in-app purchases) |
| FaceApp | A photo-editing app that uses artificial intelligence to change facial features such as age, gender, hairstyle, smile, etc. | Age, gender, hairstyle, smile filters; beard, glasses, makeup, tattoo filters; background, lighting, color filters; morphing, swapping, blending tools | Free (with in-app purchases) |
| Instagram | A social media app that lets you take photos and videos with filters, stickers, effects, etc., and share them with your followers or in your story. | Filters, stickers, effects; Reels, Stories, Live; IGTV, Shop; Explore, DMs | Free |
| Website | Description |
| --- | --- |
| [APKMirror]( 1 ) |  |
| [APKPure]( 2 ) | APKPure is another excellent website for downloading b apk files. It is also available as an Android app that lets you download and install apps directly from your device. Both the website and the app have a simple, easy-to-use interface that makes it easy to find and download the apps you want, offer a variety of categories, genres, and recommendations to help you discover new and interesting apps, and update their apps regularly to keep them compatible with the latest Android versions and devices. |
| [Aptoide]( 3 ) | Aptoide is a unique website that lets you download b apk files from different sources. It is also an alternative app store that lets you create your own app store and share it with other users. You can browse and download apps from various app stores created by other users or developers, rate and review the apps you download, and follow your favorite app stores. Aptoide also has a security system that scans every app for malware and viruses before it is published. |
| [Mobilism]( 4 ) | Mobilism is a website that offers a large collection of b apk files for various apps and games. It is also a forum where users can share their opinions, feedback, and requests about different apps and games. You can find many cracked, modded, patched, and hacked apps on Mobilism that are not available on other websites or app stores, and you can request specific apps or games you want to download. Mobilism also has a premium service that offers faster downloads, direct links, no ads, and more features. |
| [F-Droid]( 5 ) |  |
Once you have chosen a reliable source of b apk files, you can follow these steps to download and save the file to your device:
-After downloading the b apk file, you need to install it on your device. However, before installing, make sure you have enabled the option to install apps from unknown sources in your settings. This option lets you install apps that do not come from the Google Play Store or other official app stores. To enable this option, you can follow these steps:
-Once you have enabled this option, you can install the b apk file by following these steps:
-There are many b apk files available for different apps and games on various websites. However, not all of them are worth downloading or using. Some may not work properly, may contain malware, or may violate the law. Therefore, you should be careful and selective about which b apk files you download and use.
-Here are some examples of popular and safe b apk files you can try:
-[Uptodown App Store] is a b apk file that gives you access to an alternative app store for Android devices. It is similar to APKPure, since it also offers a variety of categories, genres, and recommendations for different apps and games. However, it also has some unique features, such as:
-[APKCombo] is a b apk file that lets you download multiple APK files at once for a single app or game. It is useful for apps or games that use split APKs, which are separate APK files for different components such as base, configuration, language, etc. Split APKs can save storage space and bandwidth, but they can also be difficult to install manually. APKCombo solves this by combining all the split APKs into a single ZIP file that you can easily download and install. Some of APKCombo's features are:
[ApkOnline] is a b apk file that lets you run Android apps and games online without downloading or installing them on your device. It is a web-based emulator that simulates an Android device in your browser. You can use it to test, debug, or play any Android app or game on your PC, laptop, tablet, or phone. Some of ApkOnline's features are:
-A b apk is a modified version of an original Android app or game that bypasses certain restrictions or limitations. It can have different meanings and purposes, such as cracked, hacked, patched, or modded, and it comes with both advantages and risks depending on the source and quality of the file. If you want to download and use a b apk file, choose a reliable and reputable website, enable the option to install apps from unknown sources, and follow the steps to download and install the file. You can also try some popular and safe examples of b apk files, such as Uptodown App Store, APKCombo, Aptoide, Mobilism, and ApkOnline.
-Here are some frequently asked questions about b apk files:
-This is not a proxy for locusts.
\ No newline at end of file diff --git a/spaces/Cong723/gpt-academic-public/.github/ISSUE_TEMPLATE/feature_request.md b/spaces/Cong723/gpt-academic-public/.github/ISSUE_TEMPLATE/feature_request.md deleted file mode 100644 index e46a4c01e804aa4b649bd40af6c13d5981c873d4..0000000000000000000000000000000000000000 --- a/spaces/Cong723/gpt-academic-public/.github/ISSUE_TEMPLATE/feature_request.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -name: Feature request -about: Suggest an idea for this project -title: '' -labels: '' -assignees: '' - ---- - - diff --git a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/modules/image_degradation/__init__.py b/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/modules/image_degradation/__init__.py deleted file mode 100644 index 7836cada81f90ded99c58d5942eea4c3477f58fc..0000000000000000000000000000000000000000 --- a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/modules/image_degradation/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from ldm.modules.image_degradation.bsrgan import degradation_bsrgan_variant as degradation_fn_bsr -from ldm.modules.image_degradation.bsrgan_light import degradation_bsrgan_variant as degradation_fn_bsr_light diff --git a/spaces/DORA1222/1234/README.md b/spaces/DORA1222/1234/README.md deleted file mode 100644 index d5c7227336b985ee266e5cc70c9debb476922a19..0000000000000000000000000000000000000000 --- a/spaces/DORA1222/1234/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 1234 -emoji: 🔥 -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.44.4 -app_file: app.py -pinned: false -license: bigscience-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/themes/utils/semver_match.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/themes/utils/semver_match.py deleted file mode 100644 index 25df9265b7a0c5b6714364c1d125d85ea26d3b46..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/themes/utils/semver_match.py +++ /dev/null @@ -1,40 +0,0 @@ -from __future__ import annotations - -from dataclasses import dataclass, field - -import huggingface_hub -import semantic_version -import semantic_version as semver - - -@dataclass -class ThemeAsset: - filename: str - version: semver.Version = field(init=False) - - def __post_init__(self): - self.version = semver.Version(self.filename.split("@")[1].replace(".json", "")) - - -def get_theme_assets(space_info: huggingface_hub.hf_api.SpaceInfo) -> list[ThemeAsset]: - if "gradio-theme" not in getattr(space_info, "tags", []): - raise ValueError(f"{space_info.id} is not a valid gradio-theme space!") - - return [ - ThemeAsset(filename.rfilename) - for filename in space_info.siblings - if filename.rfilename.startswith("themes/") - ] - - -def get_matching_version( - assets: list[ThemeAsset], expression: str | None -) -> ThemeAsset | None: - expression = expression or "*" - - # Return most recent version that matches - matching_version = semantic_version.SimpleSpec(expression).select( - [a.version for a in assets] - ) - - return next((a for a in assets if a.version == matching_version), None) diff --git a/spaces/DragGan/DragGan/viz/__init__.py b/spaces/DragGan/DragGan/viz/__init__.py deleted file mode 100644 index 939e7c6c8f94c4ea1141885c3c3295fe083b06aa..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan/viz/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) 
2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -# empty diff --git a/spaces/FridaZuley/RVC_HFKawaii/lib/uvr5_pack/lib_v5/layers_new.py b/spaces/FridaZuley/RVC_HFKawaii/lib/uvr5_pack/lib_v5/layers_new.py deleted file mode 100644 index 0c13e60b0dd136d9115a535101c6dbb2a25c6833..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/lib/uvr5_pack/lib_v5/layers_new.py +++ /dev/null @@ -1,125 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from . import spec_utils - - -class Conv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(Conv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nout, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - bias=False, - ), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class Encoder(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU): - super(Encoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, stride, pad, activ=activ) - self.conv2 = Conv2DBNActiv(nout, nout, ksize, 1, pad, activ=activ) - - def __call__(self, x): - h = self.conv1(x) - h = self.conv2(h) - - return h - - -class Decoder(nn.Module): - def __init__( - self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False - ): - super(Decoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - # self.conv2 = Conv2DBNActiv(nout, nout, ksize, 1, pad, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def __call__(self, x, skip=None): - x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True) - - if skip is not None: - skip = spec_utils.crop_center(skip, x) - x = torch.cat([x, skip], dim=1) - - h = self.conv1(x) - # h = self.conv2(h) - - if self.dropout is not None: - h = self.dropout(h) - - return h - - -class ASPPModule(nn.Module): - def __init__(self, nin, nout, dilations=(4, 8, 12), activ=nn.ReLU, dropout=False): - super(ASPPModule, self).__init__() - self.conv1 = nn.Sequential( - nn.AdaptiveAvgPool2d((1, None)), - Conv2DBNActiv(nin, nout, 1, 1, 0, activ=activ), - ) - self.conv2 = Conv2DBNActiv(nin, nout, 1, 1, 0, activ=activ) - self.conv3 = Conv2DBNActiv( - nin, nout, 3, 1, dilations[0], dilations[0], activ=activ - ) - self.conv4 = Conv2DBNActiv( - nin, nout, 3, 1, dilations[1], dilations[1], activ=activ - ) - self.conv5 = Conv2DBNActiv( - nin, nout, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.bottleneck = Conv2DBNActiv(nout * 5, nout, 1, 1, 0, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def forward(self, x): - _, _, h, w = x.size() - feat1 = F.interpolate( - self.conv1(x), size=(h, w), mode="bilinear", align_corners=True - ) - feat2 = self.conv2(x) - feat3 = self.conv3(x) - feat4 = self.conv4(x) - feat5 = self.conv5(x) - out = torch.cat((feat1, feat2, feat3, feat4, feat5), dim=1) - out = self.bottleneck(out) - - if self.dropout is not None: - out = self.dropout(out) - - return out - - -class LSTMModule(nn.Module): - def __init__(self, 
nin_conv, nin_lstm, nout_lstm): - super(LSTMModule, self).__init__() - self.conv = Conv2DBNActiv(nin_conv, 1, 1, 1, 0) - self.lstm = nn.LSTM( - input_size=nin_lstm, hidden_size=nout_lstm // 2, bidirectional=True - ) - self.dense = nn.Sequential( - nn.Linear(nout_lstm, nin_lstm), nn.BatchNorm1d(nin_lstm), nn.ReLU() - ) - - def forward(self, x): - N, _, nbins, nframes = x.size() - h = self.conv(x)[:, 0] # N, nbins, nframes - h = h.permute(2, 0, 1) # nframes, N, nbins - h, _ = self.lstm(h) - h = self.dense(h.reshape(-1, h.size()[-1])) # nframes * N, nbins - h = h.reshape(nframes, N, 1, nbins) - h = h.permute(1, 2, 3, 0) - - return h diff --git a/spaces/Future-AI/image-matting/README.md b/spaces/Future-AI/image-matting/README.md deleted file mode 100644 index 87b587e9fa42a33eb62eaff97e6ea0c1bd517e23..0000000000000000000000000000000000000000 --- a/spaces/Future-AI/image-matting/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Image Matting -emoji: 🐢 -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 3.20.1 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: szk1ck/image-matting ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/GT4SD/multitask-text-and-chemistry-t5/app.py b/spaces/GT4SD/multitask-text-and-chemistry-t5/app.py deleted file mode 100644 index f898d19b6caf3a81014820862e9e2c8fd8a97f1a..0000000000000000000000000000000000000000 --- a/spaces/GT4SD/multitask-text-and-chemistry-t5/app.py +++ /dev/null @@ -1,104 +0,0 @@ -import logging -import pathlib -import gradio as gr -import pandas as pd -from gt4sd.algorithms.generation.hugging_face import ( - HuggingFaceSeq2SeqGenerator, - HuggingFaceGenerationAlgorithm, -) -from transformers import AutoTokenizer - -logger = logging.getLogger(__name__) -logger.addHandler(logging.NullHandler()) - -task2prefix = { - "forward": "Predict the product of the following reaction: ", - "retrosynthesis": "Predict the reaction that produces the following product: ", - "paragraph to actions": "Which actions are described in the following paragraph: ", - "molecular captioning": "Caption the following smile: ", - "text-conditional de novo generation": "Write in SMILES the described molecule: ", -} - - -def run_inference( - model_name_or_path: str, - task: str, - prompt: str, - num_beams: int, -): - instruction = task2prefix[task] - - config = HuggingFaceSeq2SeqGenerator( - algorithm_version=model_name_or_path, - prefix=instruction, - prompt=prompt, - num_beams=num_beams, - ) - - model = HuggingFaceGenerationAlgorithm(config) - tokenizer = AutoTokenizer.from_pretrained("t5-small") - - text = list(model.sample(1))[0] - - text = text.replace(instruction + prompt, "") - text = text.split(tokenizer.eos_token)[0] - text = text.replace(tokenizer.pad_token, "") - text = text.strip() - - return text - - -if __name__ == "__main__": - - models = [ - "multitask-text-and-chemistry-t5-small-standard", - "multitask-text-and-chemistry-t5-small-augm", - "multitask-text-and-chemistry-t5-base-standard", - "multitask-text-and-chemistry-t5-base-augm", - ] - - metadata_root = pathlib.Path(__file__).parent.joinpath("model_cards") - - examples = pd.read_csv(metadata_root.joinpath("examples.csv"), header=None).fillna( - "" - ) - print("Examples: ", examples.values.tolist()) - - with open(metadata_root.joinpath("article.md"), "r") as f: - article = f.read() - with open(metadata_root.joinpath("description.md"), "r") as f: - description = f.read() - - demo = gr.Interface( - 
fn=run_inference, - title="Multitask Text and Chemistry T5", - inputs=[ - gr.Dropdown( - models, - label="Language model", - value="multitask-text-and-chemistry-t5-small-augm", - ), - gr.Radio( - choices=[ - "forward", - "retrosynthesis", - "paragraph to actions", - "molecular captioning", - "text-conditional de novo generation", - ], - label="Task", - value="paragraph to actions", - ), - gr.Textbox( - label="Text prompt", - placeholder="I'm a stochastic parrot.", - lines=1, - ), - gr.Slider(minimum=1, maximum=50, value=10, label="num_beams", step=1), - ], - outputs=gr.Textbox(label="Output"), - article=article, - description=description, - examples=examples.values.tolist(), - ) - demo.launch(debug=True, show_error=True) diff --git a/spaces/Gradio-Blocks/EmojiGAN/app.py b/spaces/Gradio-Blocks/EmojiGAN/app.py deleted file mode 100644 index 45b0f7847d2a2cc0acdcc629e24b497702afdfab..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/EmojiGAN/app.py +++ /dev/null @@ -1,74 +0,0 @@ -import gradio as gr -import os -import numpy as np -import torch -import pickle -import types - -from huggingface_hub import hf_hub_url, cached_download - -TOKEN = os.environ['TOKEN'] - -with open(cached_download(hf_hub_url('mfrashad/stylegan2_emoji_512', 'stylegan2_emoji_512.pkl'), use_auth_token=TOKEN), 'rb') as f: - G = pickle.load(f)['G_ema']# torch.nn.Module - -device = torch.device("cpu") -if torch.cuda.is_available(): - device = torch.device("cuda") - G = G.to(device) -else: - _old_forward = G.forward - - def _new_forward(self, *args, **kwargs): - kwargs["force_fp32"] = True - return _old_forward(*args, **kwargs) - - G.forward = types.MethodType(_new_forward, G) - - _old_synthesis_forward = G.synthesis.forward - - def _new_synthesis_forward(self, *args, **kwargs): - kwargs["force_fp32"] = True - return _old_synthesis_forward(*args, **kwargs) - - G.synthesis.forward = types.MethodType(_new_synthesis_forward, G.synthesis) - - -def generate(num_images, interpolate): - if interpolate: - z1 = torch.randn([1, G.z_dim])# latent codes - z2 = torch.randn([1, G.z_dim])# latent codes - zs = torch.cat([z1 + (z2 - z1) * i / (num_images-1) for i in range(num_images)], 0) - else: - zs = torch.randn([num_images, G.z_dim])# latent codes - with torch.no_grad(): - zs = zs.to(device) - img = G(zs, None, force_fp32=True, noise_mode='const') - img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8) - return img.cpu().numpy() - -demo = gr.Blocks() - -def infer(num_images, interpolate): - img = generate(round(num_images), interpolate) - imgs = list(img) - return imgs - -with demo: - gr.Markdown( - """ - # EmojiGAN - Generate Emojis with AI (StyleGAN2-ADA). 
Made by [mfrashad](https://github.com/mfrashad) - """) - images_num = gr.inputs.Slider(default=1, label="Num Images", minimum=1, maximum=16, step=1) - interpolate = gr.inputs.Checkbox(default=False, label="Interpolate") - submit = gr.Button("Generate") - - - out = gr.Gallery() - - submit.click(fn=infer, - inputs=[images_num, interpolate], - outputs=out) - -demo.launch() \ No newline at end of file diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/rpn/rpn_r50_fpn_2x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/rpn/rpn_r50_fpn_2x_coco.py deleted file mode 100644 index 2f264bfe4234c870839ee77e3a671464aacc7813..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/rpn/rpn_r50_fpn_2x_coco.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = './rpn_r50_fpn_1x_coco.py' - -# learning policy -lr_config = dict(step=[16, 22]) -runner = dict(type='EpochBasedRunner', max_epochs=24) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/scnet/README.md b/spaces/Gradio-Blocks/uniformer_image_detection/configs/scnet/README.md deleted file mode 100644 index 1749df0cb7858b555a5e6877b09a9bf7a35264e3..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/scnet/README.md +++ /dev/null @@ -1,51 +0,0 @@ -# SCNet - -## Introduction - -[ALGORITHM] - -We provide the code for reproducing experiment results of [SCNet](https://arxiv.org/abs/2012.10150). - -``` -@inproceedings{vu2019cascade, - title={SCNet: Training Inference Sample Consistency for Instance Segmentation}, - author={Vu, Thang and Haeyong, Kang and Yoo, Chang D}, - booktitle={AAAI}, - year={2021} -} -``` - -## Dataset - -SCNet requires COCO and [COCO-stuff](http://calvin.inf.ed.ac.uk/wp-content/uploads/data/cocostuffdataset/stuffthingmaps_trainval2017.zip) dataset for training. You need to download and extract it in the COCO dataset path. -The directory should be like this. - -```none -mmdetection -├── mmdet -├── tools -├── configs -├── data -│ ├── coco -│ │ ├── annotations -│ │ ├── train2017 -│ │ ├── val2017 -│ │ ├── test2017 -| | ├── stuffthingmaps -``` - -## Results and Models - -The results on COCO 2017val are shown in the below table. 
(results on test-dev are usually slightly higher than val) - -| Backbone | Style | Lr schd | Mem (GB) | Inf speed (fps) | box AP | mask AP | TTA box AP | TTA mask AP | Config | Download | -|:---------------:|:-------:|:-------:|:--------:|:---------------:|:------:|:-------:|:----------:|:-----------:|:------:|:------------:| -| R-50-FPN | pytorch | 1x | 7.0 | 6.2 | 43.5 | 39.2 | 44.8 | 40.9 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/scnet/scnet_r50_fpn_1x_coco.py) | [model](https://drive.google.com/file/d/1K5_8-P0EC43WZFtoO3q9_JE-df8pEc7J/view?usp=sharing) \| [log](https://drive.google.com/file/d/1ZFS6QhFfxlOnDYPiGpSDP_Fzgb7iDGN3/view?usp=sharing) | -| R-50-FPN | pytorch | 20e | 7.0 | 6.2 | 44.5 | 40.0 | 45.8 | 41.5 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/scnet/scnet_r50_fpn_20e_coco.py) | [model](https://drive.google.com/file/d/15VGLCt5-IO5TbzB4Kw6ZyoF6QH0Q511A/view?usp=sharing) \| [log](https://drive.google.com/file/d/1-LnkOXN8n5ojQW34H0qZ625cgrnWpqSX/view?usp=sharing) | -| R-101-FPN | pytorch | 20e | 8.9 | 5.8 | 45.8 | 40.9 | 47.3 | 42.7 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/scnet/scnet_r101_fpn_20e_coco.py) | [model](https://drive.google.com/file/d/1aeCGHsOBdfIqVBnBPp0JUE_RSIau3583/view?usp=sharing) \| [log](https://drive.google.com/file/d/1iRx-9GRgTaIDsz-we3DGwFVH22nbvCLa/view?usp=sharing) | -| X-101-64x4d-FPN | pytorch | 20e | 13.2 | 4.9 | 47.5 | 42.3 | 48.9 | 44.0 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/scnet/scnet_x101_64x4d_fpn_20e_coco.py) | [model](https://drive.google.com/file/d/1YjgutUKz4TTPpqSWGKUTkZJ8_X-kyCfY/view?usp=sharing) \| [log](https://drive.google.com/file/d/1OsfQJ8gwtqIQ61k358yxY21sCvbUcRjs/view?usp=sharing) | - -### Notes - -- Training hyper-parameters are identical to those of [HTC](https://github.com/open-mmlab/mmdetection/tree/master/configs/htc). -- TTA means Test Time Augmentation, which applies horizonal flip and multi-scale testing. Refer to [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/scnet/scnet_r50_fpn_1x_coco.py). diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/danet/danet_r101-d8_769x769_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/danet/danet_r101-d8_769x769_40k_cityscapes.py deleted file mode 100644 index 597d76de79610780b03cd91dba5f3a4f10147bcd..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/danet/danet_r101-d8_769x769_40k_cityscapes.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './danet_r50-d8_769x769_40k_cityscapes.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/core/evaluation/metrics.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/core/evaluation/metrics.py deleted file mode 100644 index a216afefe6ccb80fd11173060ccc9cef5a6e44e2..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/core/evaluation/metrics.py +++ /dev/null @@ -1,326 +0,0 @@ -from collections import OrderedDict - -import mmcv -import numpy as np -import torch - - -def f_score(precision, recall, beta=1): - """calcuate the f-score value. - - Args: - precision (float | torch.Tensor): The precision value. - recall (float | torch.Tensor): The recall value. - beta (int): Determines the weight of recall in the combined score. - Default: False. 
- - Returns: - [torch.tensor]: The f-score value. - """ - score = (1 + beta**2) * (precision * recall) / ( - (beta**2 * precision) + recall) - return score - - -def intersect_and_union(pred_label, - label, - num_classes, - ignore_index, - label_map=dict(), - reduce_zero_label=False): - """Calculate intersection and Union. - - Args: - pred_label (ndarray | str): Prediction segmentation map - or predict result filename. - label (ndarray | str): Ground truth segmentation map - or label filename. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - label_map (dict): Mapping old labels to new labels. The parameter will - work only when label is str. Default: dict(). - reduce_zero_label (bool): Wether ignore zero label. The parameter will - work only when label is str. Default: False. - - Returns: - torch.Tensor: The intersection of prediction and ground truth - histogram on all classes. - torch.Tensor: The union of prediction and ground truth histogram on - all classes. - torch.Tensor: The prediction histogram on all classes. - torch.Tensor: The ground truth histogram on all classes. - """ - - if isinstance(pred_label, str): - pred_label = torch.from_numpy(np.load(pred_label)) - else: - pred_label = torch.from_numpy((pred_label)) - - if isinstance(label, str): - label = torch.from_numpy( - mmcv.imread(label, flag='unchanged', backend='pillow')) - else: - label = torch.from_numpy(label) - - if label_map is not None: - for old_id, new_id in label_map.items(): - label[label == old_id] = new_id - if reduce_zero_label: - label[label == 0] = 255 - label = label - 1 - label[label == 254] = 255 - - mask = (label != ignore_index) - pred_label = pred_label[mask] - label = label[mask] - - intersect = pred_label[pred_label == label] - area_intersect = torch.histc( - intersect.float(), bins=(num_classes), min=0, max=num_classes - 1) - area_pred_label = torch.histc( - pred_label.float(), bins=(num_classes), min=0, max=num_classes - 1) - area_label = torch.histc( - label.float(), bins=(num_classes), min=0, max=num_classes - 1) - area_union = area_pred_label + area_label - area_intersect - return area_intersect, area_union, area_pred_label, area_label - - -def total_intersect_and_union(results, - gt_seg_maps, - num_classes, - ignore_index, - label_map=dict(), - reduce_zero_label=False): - """Calculate Total Intersection and Union. - - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str]): list of ground truth - segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Wether ignore zero label. Default: False. - - Returns: - ndarray: The intersection of prediction and ground truth histogram - on all classes. - ndarray: The union of prediction and ground truth histogram on all - classes. - ndarray: The prediction histogram on all classes. - ndarray: The ground truth histogram on all classes. 
- """ - num_imgs = len(results) - assert len(gt_seg_maps) == num_imgs - total_area_intersect = torch.zeros((num_classes, ), dtype=torch.float64) - total_area_union = torch.zeros((num_classes, ), dtype=torch.float64) - total_area_pred_label = torch.zeros((num_classes, ), dtype=torch.float64) - total_area_label = torch.zeros((num_classes, ), dtype=torch.float64) - for i in range(num_imgs): - area_intersect, area_union, area_pred_label, area_label = \ - intersect_and_union( - results[i], gt_seg_maps[i], num_classes, ignore_index, - label_map, reduce_zero_label) - total_area_intersect += area_intersect - total_area_union += area_union - total_area_pred_label += area_pred_label - total_area_label += area_label - return total_area_intersect, total_area_union, total_area_pred_label, \ - total_area_label - - -def mean_iou(results, - gt_seg_maps, - num_classes, - ignore_index, - nan_to_num=None, - label_map=dict(), - reduce_zero_label=False): - """Calculate Mean Intersection and Union (mIoU) - - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str]): list of ground truth - segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Wether ignore zero label. Default: False. - - Returns: - dict[str, float | ndarray]: -
- Branch of Stable Diffusion 2.1 Stable Diffusion model.
- {"Add the following tokens to your prompts for the model to work properly: prefix" if prefix else ""}
-
This space was created using SD Space Creator.
-
- Demo for Lowpoly Town Stable Diffusion model.
- {"Add the following tokens to your prompts for the model to work properly: prefix" if prefix else ""}
-
This space was created using SD Space Creator.
-{highlighted_code}
'
-
- code_block_pattern = r"```(\w+)?\n([\s\S]+?)\n```"
- md_str = re.sub(code_block_pattern, replacer, md_str, flags=re.MULTILINE)
-
- html_str = markdown(md_str)
- return html_str
-
-
-def normalize_markdown(md_text: str) -> str:
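- # Insert a blank line before a list that directly follows normal text, and drop blank lines between consecutive list items, so the markdown parser renders lists correctly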
- lines = md_text.split("\n")
- normalized_lines = []
- inside_list = False
-
- for i, line in enumerate(lines):
- if re.match(r"^(\d+\.|-|\*|\+)\s", line.strip()):
- if not inside_list and i > 0 and lines[i - 1].strip() != "":
- normalized_lines.append("")
- inside_list = True
- normalized_lines.append(line)
- elif inside_list and line.strip() == "":
- if i < len(lines) - 1 and not re.match(
- r"^(\d+\.|-|\*|\+)\s", lines[i + 1].strip()
- ):
- normalized_lines.append(line)
- continue
- else:
- inside_list = False
- normalized_lines.append(line)
-
- return "\n".join(normalized_lines)
-
-
-def convert_mdtext(md_text):
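- # Split the text on fenced code blocks; render prose parts with markdown()/mdtex2html and code parts with syntax highlighting, then join the resulting HTML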
- code_block_pattern = re.compile(r"```(.*?)(?:```|$)", re.DOTALL)
- inline_code_pattern = re.compile(r"`(.*?)`", re.DOTALL)
- code_blocks = code_block_pattern.findall(md_text)
- non_code_parts = code_block_pattern.split(md_text)[::2]
-
- result = []
- for non_code, code in zip(non_code_parts, code_blocks + [""]):
- if non_code.strip():
- non_code = normalize_markdown(non_code)
- if inline_code_pattern.search(non_code):
- result.append(markdown(non_code, extensions=["tables"]))
- else:
- result.append(mdtex2html.convert(non_code, extensions=["tables"]))
- if code.strip():
- # _, code = detect_language(code) # temporarily removed the code-highlighting step, because it causes problems with large blocks of code
- # code = code.replace("\n\n", "\n") # temporarily removed the stripping of blank lines from code, because it causes problems with large blocks of code
- code = f"```{code}\n\n```"
- code = markdown_to_html_with_syntax_highlight(code)
- result.append(code)
- result = "".join(result)
- return result
-
-def convert_user(userinput):
- userinput = userinput.replace("\n", "{userinput}" - -def detect_language(code): - if code.startswith("\n"): - first_line = "" - else: - first_line = code.strip().split("\n", 1)[0] - language = first_line.lower() if first_line else "" - code_without_language = code[len(first_line) :].lstrip() if first_line else code - return language, code_without_language - - -def construct_text(role, text): - return {"role": role, "content": text} - - -def construct_user(text): - return construct_text("user", text) - - -def construct_system(text): - return construct_text("system", text) - - -def construct_assistant(text): - return construct_text("assistant", text) - - -def construct_token_message(token, stream=False): - return f"Token 计数: {token}" - - -def delete_last_conversation(chatbot, history, previous_token_count): - if len(chatbot) > 0 and standard_error_msg in chatbot[-1][1]: - logging.info("由于包含报错信息,只删除chatbot记录") - chatbot.pop() - return chatbot, history - if len(history) > 0: - logging.info("删除了一组对话历史") - history.pop() - history.pop() - if len(chatbot) > 0: - logging.info("删除了一组chatbot对话") - chatbot.pop() - if len(previous_token_count) > 0: - logging.info("删除了一组对话的token计数记录") - previous_token_count.pop() - return ( - chatbot, - history, - previous_token_count, - construct_token_message(sum(previous_token_count)), - ) - - -def save_file(filename, system, history, chatbot): - logging.info("保存对话历史中……") - os.makedirs(HISTORY_DIR, exist_ok=True) - if filename.endswith(".json"): - json_s = {"system": system, "history": history, "chatbot": chatbot} - print(json_s) - with open(os.path.join(HISTORY_DIR, filename), "w") as f: - json.dump(json_s, f) - elif filename.endswith(".md"): - md_s = f"system: \n- {system} \n" - for data in history: - md_s += f"\n{data['role']}: \n- {data['content']} \n" - with open(os.path.join(HISTORY_DIR, filename), "w", encoding="utf8") as f: - f.write(md_s) - logging.info("保存对话历史完毕") - return os.path.join(HISTORY_DIR, filename) - - -def save_chat_history(filename, system, history, chatbot): - if filename == "": - return - if not filename.endswith(".json"): - filename += ".json" - return save_file(filename, system, history, chatbot) - - -def export_markdown(filename, system, history, chatbot): - if filename == "": - return - if not filename.endswith(".md"): - filename += ".md" - return save_file(filename, system, history, chatbot) - - -def load_chat_history(filename, system, history, chatbot): - logging.info("加载对话历史中……") - if type(filename) != str: - filename = filename.name - try: - with open(os.path.join(HISTORY_DIR, filename), "r") as f: - json_s = json.load(f) - try: - if type(json_s["history"][0]) == str: - logging.info("历史记录格式为旧版,正在转换……") - new_history = [] - for index, item in enumerate(json_s["history"]): - if index % 2 == 0: - new_history.append(construct_user(item)) - else: - new_history.append(construct_assistant(item)) - json_s["history"] = new_history - logging.info(new_history) - except: - # 没有对话历史 - pass - logging.info("加载对话历史完毕") - return filename, json_s["system"], json_s["history"], json_s["chatbot"] - except FileNotFoundError: - logging.info("没有找到对话历史文件,不执行任何操作") - return filename, system, history, chatbot - - -def sorted_by_pinyin(list): - return sorted(list, key=lambda char: lazy_pinyin(char)[0][0]) - - -def get_file_names(dir, plain=False, filetypes=[".json"]): - logging.info(f"获取文件名列表,目录为{dir},文件类型为{filetypes},是否为纯文本列表{plain}") - files = [] - try: - for type in filetypes: - files += [f for f in os.listdir(dir) if f.endswith(type)] - except 
FileNotFoundError: - files = [] - files = sorted_by_pinyin(files) - if files == []: - files = [""] - if plain: - return files - else: - return gr.Dropdown.update(choices=files) - - -def get_history_names(plain=False): - logging.info("获取历史记录文件名列表") - return get_file_names(HISTORY_DIR, plain) - - -def load_template(filename, mode=0): - logging.info(f"加载模板文件{filename},模式为{mode}(0为返回字典和下拉菜单,1为返回下拉菜单,2为返回字典)") - lines = [] - logging.info("Loading template...") - if filename.endswith(".json"): - with open(os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8") as f: - lines = json.load(f) - lines = [[i["act"], i["prompt"]] for i in lines] - else: - with open( - os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8" - ) as csvfile: - reader = csv.reader(csvfile) - lines = list(reader) - lines = lines[1:] - if mode == 1: - return sorted_by_pinyin([row[0] for row in lines]) - elif mode == 2: - return {row[0]: row[1] for row in lines} - else: - choices = sorted_by_pinyin([row[0] for row in lines]) - return {row[0]: row[1] for row in lines}, gr.Dropdown.update( - choices=choices, value=choices[0] - ) - - -def get_template_names(plain=False): - logging.info("获取模板文件名列表") - return get_file_names(TEMPLATES_DIR, plain, filetypes=[".csv", "json"]) - - -def get_template_content(templates, selection, original_system_prompt): - logging.info(f"应用模板中,选择为{selection},原始系统提示为{original_system_prompt}") - try: - return templates[selection] - except: - return original_system_prompt - - -def reset_state(): - logging.info("重置状态") - return [], [], [], construct_token_message(0) - - -def reset_textbox(): - return gr.update(value="") - - -def reset_default(): - global API_URL - API_URL = "https://api.openai.com/v1/chat/completions" - os.environ.pop("HTTPS_PROXY", None) - os.environ.pop("https_proxy", None) - return gr.update(value=API_URL), gr.update(value=""), "API URL 和代理已重置" - - -def change_api_url(url): - global API_URL - API_URL = url - msg = f"API地址更改为了{url}" - logging.info(msg) - return msg - - -def change_proxy(proxy): - os.environ["HTTPS_PROXY"] = proxy - msg = f"代理更改为了{proxy}" - logging.info(msg) - return msg - - -def hide_middle_chars(s): - if len(s) <= 8: - return s - else: - head = s[:4] - tail = s[-4:] - hidden = "*" * (len(s) - 8) - return head + hidden + tail - - -def submit_key(key): - key = key.strip() - msg = f"API密钥更改为了{hide_middle_chars(key)}" - logging.info(msg) - return key, msg - - -def sha1sum(filename): - sha1 = hashlib.sha1() - sha1.update(filename.encode("utf-8")) - return sha1.hexdigest() - - -def replace_today(prompt): - today = datetime.datetime.today().strftime("%Y-%m-%d") - return prompt.replace("{current_date}", today) - - -def get_geoip(): - response = requests.get("https://ipapi.co/json/", timeout=5) - try: - data = response.json() - except: - data = {"error": True, "reason": "连接ipapi失败"} - if "error" in data.keys(): - logging.warning(f"无法获取IP地址信息。\n{data}") - if data["reason"] == "RateLimited": - return ( - f"获取IP地理位置失败,因为达到了检测IP的速率限制。聊天功能可能仍然可用,但请注意,如果您的IP地址在不受支持的地区,您可能会遇到问题。" - ) - else: - return f"获取IP地理位置失败。原因:{data['reason']}。你仍然可以使用聊天功能。" - else: - country = data["country_name"] - if country == "China": - text = "**您的IP区域:中国。请立即检查代理设置,在不受支持的地区使用API可能导致账号被封禁。**" - else: - text = f"您的IP区域:{country}。" - logging.info(text) - return text - - -def find_n(lst, max_num): - n = len(lst) - total = sum(lst) - - if total < max_num: - return n - - for i in range(len(lst)): - if total - lst[i] < max_num: - return n - i -1 - total = total - lst[i] - return 1 diff --git 
a/spaces/abby711/FaceRestoration/README.md b/spaces/abby711/FaceRestoration/README.md deleted file mode 100644 index e6b217295c777e4e32802367f0dd32da97caa006..0000000000000000000000000000000000000000 --- a/spaces/abby711/FaceRestoration/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: FaceRestoration -emoji: 🐢 -colorFrom: gray -colorTo: yellow -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/abdvl/datahub_qa_bot/docs/api/openapi/openapi-usage-guide.md b/spaces/abdvl/datahub_qa_bot/docs/api/openapi/openapi-usage-guide.md deleted file mode 100644 index be8961a08edf7902c21a7801f63092dbd6ca5e3f..0000000000000000000000000000000000000000 --- a/spaces/abdvl/datahub_qa_bot/docs/api/openapi/openapi-usage-guide.md +++ /dev/null @@ -1,560 +0,0 @@ -# DataHub OpenAPI Guide - -## Why OpenAPI - -The OpenAPI standard is a widely used documentation and design approach for REST-ful APIs. -To make it easier to integrate with DataHub, we are publishing an OpenAPI based set of endpoints. - -Read [the DataHub API overview](../datahub-apis.md) to understand the rationale behind the different API-s and when to use each one. - -## Locating the OpenAPI endpoints - -Currently, the OpenAPI endpoints are isolated to a servlet on GMS and are automatically deployed with a GMS server. -The servlet includes auto-generation of an OpenAPI UI, also known as Swagger, which is available at **GMS_SERVER_HOST:GMS_PORT/openapi/swagger-ui/index.html**. For example, the Quickstart running locally exposes this at http://localhost:8080/openapi/swagger-ui/index.html. - -This is also exposed through DataHub frontend as a proxy with the same endpoint, but GMS host and port replaced with DataHub frontend's url ([Local Quickstart link](http://localhost:9002/openapi/swagger-ui/index.html)) and is available in the top right dropdown under the user profile picture as a link. - - - -Note that it is possible to get the raw JSON or YAML formats of the OpenAPI spec by navigating to [**BASE_URL/openapi/v3/api-docs**](http://localhost:9002/openapi/v3/api-docs) or [**BASE_URL/openapi/v3/api-docs.yaml**](http://localhost:9002/openapi/v3/api-docs.yaml). -The raw forms can be fed into codegen systems to generate client side code in the language of your choice that support the OpenAPI format. We have noticed varying degrees of maturity with different languages in these codegen systems so some may require customizations to be fully compatible. - -The OpenAPI UI includes explorable schemas for request and response objects that are fully documented. The models used -in the OpenAPI UI are all autogenerated at build time from the PDL models to JSON Schema compatible Java Models. 
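As a quick illustration of the spec retrieval described above, the raw JSON or YAML form of the spec can be pulled down and saved for a codegen tool roughly like this (a minimal sketch against the local quickstart URL; curl and the output filenames are assumptions of this example, not something the guide itself specifies):

```shell
# Fetch the raw OpenAPI spec from a local quickstart, in JSON and YAML form
curl -o datahub-openapi.json http://localhost:8080/openapi/v3/api-docs
curl -o datahub-openapi.yaml http://localhost:8080/openapi/v3/api-docs.yaml
```

Either file can then be handed to an OpenAPI code generator in the language of your choice.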
- -## Understanding the OpenAPI endpoints - -While the full OpenAPI spec is always available at [**GMS_SERVER_HOST:GMS_PORT/openapi/swagger-ui/index.html**](http://localhost:8080/openapi/swagger-ui/index.html), here's a quick overview of the main OpenAPI endpoints and their purpose. - - -### Entities (/entities) - -The entities endpoints are intended for reads and writes to the metadata graph. The entire DataHub metadata model is available for you to write to (as entity, aspect pairs) or to read an individual entity's metadata from. See [examples](#entities-entities-endpoint) below. - -### Relationships (/relationships) - -The relationships endpoints are intended for you to query the graph, to navigate relationships from one entity to others. See [examples](#relationships-relationships-endpoint) below. - -### Timeline (/timeline) - -The timeline endpoints are intended for querying the versioned history of a given entity over time. For example, you can query a dataset for all schema changes that have happened to it over time, or all documentation changes that have happened to it. See [this](../../dev-guides/timeline.md) guide for more details. - -### Platform (/platform) - -Even lower-level API-s that allow you to write metadata events into the DataHub platform using a standard format. - - -### Example Requests - -#### Entities (/entities) endpoint - -##### POST - -```shell -curl --location --request POST 'localhost:8080/openapi/entities/v1/' \ ---header 'Content-Type: application/json' \ ---header 'Accept: application/json' \ ---header 'Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJhY3RvclR5cGUiOiJVU0VSIiwiYWN0b3JJZCI6ImRhdGFodWIiLCJ0eXBlIjoiUEVSU09OQUwiLCJ2ZXJzaW9uIjoiMSIsImV4cCI6MTY1MDY2MDY1NSwianRpIjoiM2E4ZDY3ZTItOTM5Yi00NTY3LWE0MjYtZDdlMDA1ZGU3NjJjIiwic3ViIjoiZGF0YWh1YiIsImlzcyI6ImRhdGFodWItbWV0YWRhdGEtc2VydmljZSJ9.pp_vW2u1tiiTT7U0nDF2EQdcayOMB8jatiOA8Je4JJA' \ ---data-raw '[ - { - "aspect": { - "__type": "SchemaMetadata", - "schemaName": "SampleHdfsSchema", - "platform": "urn:li:dataPlatform:platform", - "platformSchema": { - "__type": "MySqlDDL", - "tableSchema": "schema" - }, - "version": 0, - "created": { - "time": 1621882982738, - "actor": "urn:li:corpuser:etl", - "impersonator": "urn:li:corpuser:jdoe" - }, - "lastModified": { - "time": 1621882982738, - "actor": "urn:li:corpuser:etl", - "impersonator": "urn:li:corpuser:jdoe" - }, - "hash": "", - "fields": [ - { - "fieldPath": "county_fips_codefg", - "jsonPath": "null", - "nullable": true, - "description": "null", - "type": { - "type": { - "__type": "StringType" - } - }, - "nativeDataType": "String()", - "recursive": false - }, - { - "fieldPath": "county_name", - "jsonPath": "null", - "nullable": true, - "description": "null", - "type": { - "type": { - "__type": "StringType" - } - }, - "nativeDataType": "String()", - "recursive": false - } - ] - }, - "entityType": "dataset", - "entityUrn": "urn:li:dataset:(urn:li:dataPlatform:platform,testSchemaIngest,PROD)" - } -]' -``` - -##### GET - -```shell -curl --location --request GET 'localhost:8080/openapi/entities/v1/latest?urns=urn:li:dataset:(urn:li:dataPlatform:platform,testSchemaIngest,PROD)&aspectNames=schemaMetadata' \ ---header 'Accept: application/json' \ ---header 'Authorization: Bearer 
eyJhbGciOiJIUzI1NiJ9.eyJhY3RvclR5cGUiOiJVU0VSIiwiYWN0b3JJZCI6ImRhdGFodWIiLCJ0eXBlIjoiUEVSU09OQUwiLCJ2ZXJzaW9uIjoiMSIsImV4cCI6MTY1MDY2MDY1NSwianRpIjoiM2E4ZDY3ZTItOTM5Yi00NTY3LWE0MjYtZDdlMDA1ZGU3NjJjIiwic3ViIjoiZGF0YWh1YiIsImlzcyI6ImRhdGFodWItbWV0YWRhdGEtc2VydmljZSJ9.pp_vW2u1tiiTT7U0nDF2EQdcayOMB8jatiOA8Je4JJA' -``` - -##### DELETE - -```shell -curl --location --request DELETE 'localhost:8080/openapi/entities/v1/?urns=urn:li:dataset:(urn:li:dataPlatform:platform,testSchemaIngest,PROD)&soft=true' \ ---header 'Accept: application/json' \ ---header 'Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJhY3RvclR5cGUiOiJVU0VSIiwiYWN0b3JJZCI6ImRhdGFodWIiLCJ0eXBlIjoiUEVSU09OQUwiLCJ2ZXJzaW9uIjoiMSIsImV4cCI6MTY1MDY2MDY1NSwianRpIjoiM2E4ZDY3ZTItOTM5Yi00NTY3LWE0MjYtZDdlMDA1ZGU3NjJjIiwic3ViIjoiZGF0YWh1YiIsImlzcyI6ImRhdGFodWItbWV0YWRhdGEtc2VydmljZSJ9.pp_vW2u1tiiTT7U0nDF2EQdcayOMB8jatiOA8Je4JJA' -``` - -#### Postman Collection - -Collection includes a POST, GET, and DELETE for a single entity with a SchemaMetadata aspect - -```json -{ - "info": { - "_postman_id": "87b7401c-a5dc-47e4-90b4-90fe876d6c28", - "name": "DataHub OpenAPI", - "description": "A description", - "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json" - }, - "item": [ - { - "name": "entities/v1", - "item": [ - { - "name": "post Entities 1", - "request": { - "method": "POST", - "header": [ - { - "key": "Content-Type", - "value": "application/json" - }, - { - "key": "Accept", - "value": "application/json" - } - ], - "body": { - "mode": "raw", - "raw": "[\n {\n \"aspect\": {\n \"__type\": \"SchemaMetadata\",\n \"schemaName\": \"SampleHdfsSchema\",\n \"platform\": \"urn:li:dataPlatform:platform\",\n \"platformSchema\": {\n \"__type\": \"MySqlDDL\",\n \"tableSchema\": \"schema\"\n },\n \"version\": 0,\n \"created\": {\n \"time\": 1621882982738,\n \"actor\": \"urn:li:corpuser:etl\",\n \"impersonator\": \"urn:li:corpuser:jdoe\"\n },\n \"lastModified\": {\n \"time\": 1621882982738,\n \"actor\": \"urn:li:corpuser:etl\",\n \"impersonator\": \"urn:li:corpuser:jdoe\"\n },\n \"hash\": \"\",\n \"fields\": [\n {\n \"fieldPath\": \"county_fips_codefg\",\n \"jsonPath\": \"null\",\n \"nullable\": true,\n \"description\": \"null\",\n \"type\": {\n \"type\": {\n \"__type\": \"StringType\"\n }\n },\n \"nativeDataType\": \"String()\",\n \"recursive\": false\n },\n {\n \"fieldPath\": \"county_name\",\n \"jsonPath\": \"null\",\n \"nullable\": true,\n \"description\": \"null\",\n \"type\": {\n \"type\": {\n \"__type\": \"StringType\"\n }\n },\n \"nativeDataType\": \"String()\",\n \"recursive\": false\n }\n ]\n },\n \"aspectName\": \"schemaMetadata\",\n \"entityType\": \"dataset\",\n \"entityUrn\": \"urn:li:dataset:(urn:li:dataPlatform:platform,testSchemaIngest,PROD)\"\n }\n]", - "options": { - "raw": { - "language": "json" - } - } - }, - "url": { - "raw": "{{baseUrl}}/openapi/entities/v1/", - "host": [ - "{{baseUrl}}" - ], - "path": [ - "openapi", - "entities", - "v1", - "" - ] - } - }, - "response": [ - { - "name": "OK", - "originalRequest": { - "method": "POST", - "header": [], - "body": { - "mode": "raw", - "raw": "[\n {\n \"aspect\": {\n \"value\": \"
" - "GitHub " - "
" -) - -device = "cuda" if torch.cuda.is_available() else "cpu" -fp16 = device != 'cpu' - -def generate(prompt: str): - input_ids = tokenizer.encode(prompt, return_tensors="pt").to(device) - out = model.generate(input_ids, - min_length=100, - max_length=200, - top_p=0.8, - top_k=0, - no_repeat_ngram_size=5 - ) - generated_text = list(map(tokenizer.decode, out))[0] - return generated_text - - -interface = gr.Interface.load("huggingface/sberbank-ai/mGPT", - description=description, - examples=examples, - fn=generate, - inputs="text", - outputs='text', - thumbnail = 'https://habrastorage.org/r/w1560/getpro/habr/upload_files/26a/fa1/3e1/26afa13e1d1a56f54c7b0356761af7b8.png', - theme = "peach", - article = article -) - -interface.launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/akhaliq/Real-ESRGAN/experiments/pretrained_models/README.md b/spaces/akhaliq/Real-ESRGAN/experiments/pretrained_models/README.md deleted file mode 100644 index d0cc4afcbdd2c733f6b946bb86bd00baa90e8295..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Real-ESRGAN/experiments/pretrained_models/README.md +++ /dev/null @@ -1 +0,0 @@ -# Put downloaded pre-trained models here diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/cli/spinners.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/cli/spinners.py deleted file mode 100644 index 1e313e1090ad02f984d96b11b1b47fc72301fb94..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/cli/spinners.py +++ /dev/null @@ -1,157 +0,0 @@ -import contextlib -import itertools -import logging -import sys -import time -from typing import IO, Iterator - -from pip._vendor.progress import HIDE_CURSOR, SHOW_CURSOR - -from pip._internal.utils.compat import WINDOWS -from pip._internal.utils.logging import get_indentation - -logger = logging.getLogger(__name__) - - -class SpinnerInterface: - def spin(self) -> None: - raise NotImplementedError() - - def finish(self, final_status: str) -> None: - raise NotImplementedError() - - -class InteractiveSpinner(SpinnerInterface): - def __init__( - self, - message: str, - file: IO[str] = None, - spin_chars: str = "-\\|/", - # Empirically, 8 updates/second looks nice - min_update_interval_seconds: float = 0.125, - ): - self._message = message - if file is None: - file = sys.stdout - self._file = file - self._rate_limiter = RateLimiter(min_update_interval_seconds) - self._finished = False - - self._spin_cycle = itertools.cycle(spin_chars) - - self._file.write(" " * get_indentation() + self._message + " ... ") - self._width = 0 - - def _write(self, status: str) -> None: - assert not self._finished - # Erase what we wrote before by backspacing to the beginning, writing - # spaces to overwrite the old text, and then backspacing again - backup = "\b" * self._width - self._file.write(backup + " " * self._width + backup) - # Now we have a blank slate to add our status - self._file.write(status) - self._width = len(status) - self._file.flush() - self._rate_limiter.reset() - - def spin(self) -> None: - if self._finished: - return - if not self._rate_limiter.ready(): - return - self._write(next(self._spin_cycle)) - - def finish(self, final_status: str) -> None: - if self._finished: - return - self._write(final_status) - self._file.write("\n") - self._file.flush() - self._finished = True - - -# Used for dumb terminals, non-interactive installs (no tty), etc. 
-# We still print updates occasionally (once every 60 seconds by default) to -# act as a keep-alive for systems like Travis-CI that take lack-of-output as -# an indication that a task has frozen. -class NonInteractiveSpinner(SpinnerInterface): - def __init__(self, message: str, min_update_interval_seconds: float = 60.0) -> None: - self._message = message - self._finished = False - self._rate_limiter = RateLimiter(min_update_interval_seconds) - self._update("started") - - def _update(self, status: str) -> None: - assert not self._finished - self._rate_limiter.reset() - logger.info("%s: %s", self._message, status) - - def spin(self) -> None: - if self._finished: - return - if not self._rate_limiter.ready(): - return - self._update("still running...") - - def finish(self, final_status: str) -> None: - if self._finished: - return - self._update(f"finished with status '{final_status}'") - self._finished = True - - -class RateLimiter: - def __init__(self, min_update_interval_seconds: float) -> None: - self._min_update_interval_seconds = min_update_interval_seconds - self._last_update: float = 0 - - def ready(self) -> bool: - now = time.time() - delta = now - self._last_update - return delta >= self._min_update_interval_seconds - - def reset(self) -> None: - self._last_update = time.time() - - -@contextlib.contextmanager -def open_spinner(message: str) -> Iterator[SpinnerInterface]: - # Interactive spinner goes directly to sys.stdout rather than being routed - # through the logging system, but it acts like it has level INFO, - # i.e. it's only displayed if we're at level INFO or better. - # Non-interactive spinner goes through the logging system, so it is always - # in sync with logging configuration. - if sys.stdout.isatty() and logger.getEffectiveLevel() <= logging.INFO: - spinner: SpinnerInterface = InteractiveSpinner(message) - else: - spinner = NonInteractiveSpinner(message) - try: - with hidden_cursor(sys.stdout): - yield spinner - except KeyboardInterrupt: - spinner.finish("canceled") - raise - except Exception: - spinner.finish("error") - raise - else: - spinner.finish("done") - - -@contextlib.contextmanager -def hidden_cursor(file: IO[str]) -> Iterator[None]: - # The Windows terminal does not support the hide/show cursor ANSI codes, - # even via colorama. So don't even try. - if WINDOWS: - yield - # We don't want to clutter the output with control characters if we're - # writing to a file, or if the user is running with --quiet. 
- # See https://github.com/pypa/pip/issues/3418 - elif not file.isatty() or logger.getEffectiveLevel() > logging.INFO: - yield - else: - file.write(HIDE_CURSOR) - try: - yield - finally: - file.write(SHOW_CURSOR) diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/_log_render.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/_log_render.py deleted file mode 100644 index fc16c84437a8a34231c44d3f0a331459ddcb0f34..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/_log_render.py +++ /dev/null @@ -1,94 +0,0 @@ -from datetime import datetime -from typing import Iterable, List, Optional, TYPE_CHECKING, Union, Callable - - -from .text import Text, TextType - -if TYPE_CHECKING: - from .console import Console, ConsoleRenderable, RenderableType - from .table import Table - -FormatTimeCallable = Callable[[datetime], Text] - - -class LogRender: - def __init__( - self, - show_time: bool = True, - show_level: bool = False, - show_path: bool = True, - time_format: Union[str, FormatTimeCallable] = "[%x %X]", - omit_repeated_times: bool = True, - level_width: Optional[int] = 8, - ) -> None: - self.show_time = show_time - self.show_level = show_level - self.show_path = show_path - self.time_format = time_format - self.omit_repeated_times = omit_repeated_times - self.level_width = level_width - self._last_time: Optional[Text] = None - - def __call__( - self, - console: "Console", - renderables: Iterable["ConsoleRenderable"], - log_time: Optional[datetime] = None, - time_format: Optional[Union[str, FormatTimeCallable]] = None, - level: TextType = "", - path: Optional[str] = None, - line_no: Optional[int] = None, - link_path: Optional[str] = None, - ) -> "Table": - from .containers import Renderables - from .table import Table - - output = Table.grid(padding=(0, 1)) - output.expand = True - if self.show_time: - output.add_column(style="log.time") - if self.show_level: - output.add_column(style="log.level", width=self.level_width) - output.add_column(ratio=1, style="log.message", overflow="fold") - if self.show_path and path: - output.add_column(style="log.path") - row: List["RenderableType"] = [] - if self.show_time: - log_time = log_time or console.get_datetime() - time_format = time_format or self.time_format - if callable(time_format): - log_time_display = time_format(log_time) - else: - log_time_display = Text(log_time.strftime(time_format)) - if log_time_display == self._last_time and self.omit_repeated_times: - row.append(Text(" " * len(log_time_display))) - else: - row.append(log_time_display) - self._last_time = log_time_display - if self.show_level: - row.append(level) - - row.append(Renderables(renderables)) - if self.show_path and path: - path_text = Text() - path_text.append( - path, style=f"link file://{link_path}" if link_path else "" - ) - if line_no: - path_text.append(":") - path_text.append( - f"{line_no}", - style=f"link file://{link_path}#{line_no}" if link_path else "", - ) - row.append(path_text) - - output.add_row(*row) - return output - - -if __name__ == "__main__": # pragma: no cover - from pip._vendor.rich.console import Console - - c = Console() - c.print("[on blue]Hello", justify="right") - c.log("[on blue]hello", justify="right") diff --git a/spaces/alexyuyxj/emotion-classify/README.md b/spaces/alexyuyxj/emotion-classify/README.md deleted file mode 100644 index 
7a735def75c9f6e452cb40ddeb9f31c7ad28f047..0000000000000000000000000000000000000000 --- a/spaces/alexyuyxj/emotion-classify/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Emotion Classify -emoji: 📉 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.33.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/allknowingroger/Image-Models-Test84/README.md b/spaces/allknowingroger/Image-Models-Test84/README.md deleted file mode 100644 index 2361d71b2bc4de4925f9308f4bf98b70bfc45e02..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test84/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: More Image Models -emoji: 😻 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: true -duplicated_from: allknowingroger/Image-Models-Test83 ---- - - \ No newline at end of file diff --git a/spaces/allknowingroger/text-generation-webui-space-1/modules/callbacks.py b/spaces/allknowingroger/text-generation-webui-space-1/modules/callbacks.py deleted file mode 100644 index faa4a5e9991e1ae711589fed61e7d1f48e28fed3..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/text-generation-webui-space-1/modules/callbacks.py +++ /dev/null @@ -1,98 +0,0 @@ -import gc -from queue import Queue -from threading import Thread - -import torch -import transformers - -import modules.shared as shared - -# Copied from https://github.com/PygmalionAI/gradio-ui/ -class _SentinelTokenStoppingCriteria(transformers.StoppingCriteria): - - def __init__(self, sentinel_token_ids: torch.LongTensor, - starting_idx: int): - transformers.StoppingCriteria.__init__(self) - self.sentinel_token_ids = sentinel_token_ids - self.starting_idx = starting_idx - - def __call__(self, input_ids: torch.LongTensor, - _scores: torch.FloatTensor) -> bool: - for sample in input_ids: - trimmed_sample = sample[self.starting_idx:] - # Can't unfold, output is still too tiny. Skip. - if trimmed_sample.shape[-1] < self.sentinel_token_ids.shape[-1]: - continue - - for window in trimmed_sample.unfold( - 0, self.sentinel_token_ids.shape[-1], 1): - if torch.all(torch.eq(self.sentinel_token_ids, window)): - return True - return False - -class Stream(transformers.StoppingCriteria): - def __init__(self, callback_func=None): - self.callback_func = callback_func - - def __call__(self, input_ids, scores) -> bool: - if self.callback_func is not None: - self.callback_func(input_ids[0]) - return False - -class Iteratorize: - - """ - Transforms a function that takes a callback - into a lazy iterator (generator). 
- """ - - def __init__(self, func, kwargs={}, callback=None): - self.mfunc=func - self.c_callback=callback - self.q = Queue() - self.sentinel = object() - self.kwargs = kwargs - self.stop_now = False - - def _callback(val): - if self.stop_now: - raise ValueError - self.q.put(val) - - def gentask(): - try: - ret = self.mfunc(callback=_callback, **self.kwargs) - except ValueError: - pass - clear_torch_cache() - self.q.put(self.sentinel) - if self.c_callback: - self.c_callback(ret) - - self.thread = Thread(target=gentask) - self.thread.start() - - def __iter__(self): - return self - - def __next__(self): - obj = self.q.get(True,None) - if obj is self.sentinel: - raise StopIteration - else: - return obj - - def __del__(self): - clear_torch_cache() - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_val, exc_tb): - self.stop_now = True - clear_torch_cache() - -def clear_torch_cache(): - gc.collect() - if not shared.args.cpu: - torch.cuda.empty_cache() diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/bindings/java/c/src/com_portaudio_PortAudio.h b/spaces/amarchheda/ChordDuplicate/portaudio/bindings/java/c/src/com_portaudio_PortAudio.h deleted file mode 100644 index ed806ac4524e3b4564778822846f40faf8c488bf..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/bindings/java/c/src/com_portaudio_PortAudio.h +++ /dev/null @@ -1,183 +0,0 @@ -/* DO NOT EDIT THIS FILE - it is machine generated */ -#if defined(__APPLE__) -#includeAitraaz- An Intriguing and Engaging Thriller MovieAitraaz is an unconventional movie with a spinning take on molestation. The film breaks the stereotype associated with women molestation. It successfully portrays a unique story of sexual harassment where a woman is an aggressor, and a man is a victim.
Watch the movie Aitraaz on the free film streaming website www.onlinemovieshindi.com (new web URL: ). You can stream it online or download the video file easily. Watch or download the Hindi-dubbed Aitraaz movie here.
-Dear visitor, you can download the movie Aitraaz on this onlinemovieshindi website. The HD video file will download when you simply click the button below. It is the same file used for the online streaming above when you click to play directly. The decision to download is entirely your choice, and the legality of owning the file is your personal responsibility.
-Dr. Babasaheb Ambedkar HD 720p video free download. Dr. Ambedkar was a prominent and influential figure, and his supporters share HD videos, quotes, and songs about him. Watch and download the thrill and excitement of the Dr. Babasaheb Ambedkar serial, and check out Dr. Babasaheb Ambedkar (2020) in full HD episodes.
- If you have an Epson printer that shows an error message like "The Printer's Ink Pads are at the end of Their service life" or "A Printer's Ink Pad is at the end of Its service life", you may need to reset the waste ink counters in your printer. The waste ink counters are used to keep track of how much ink is used for cleaning the print head and other maintenance tasks. When the waste ink counters reach a certain limit, the printer will stop working to prevent ink leakage and damage.
One way to reset the waste ink counters is to use a software tool called WIC Reset Utility. This tool can help you reset the waste ink counters in most Epson inkjet printers with a few clicks. Here are the steps to use WIC Reset Utility:
-Note: The WIC Reset Utility can only reset the waste ink counters, not the ink levels or other functions of your printer. You still need to replace or refill your ink cartridges when they are low or empty.
If you have an Epson printer that uses edible inks for cake decorating, you may need to refill your ink cartridges from time to time. Edible inks are specially formulated food coloring that can be printed on edible paper or frosting sheets. You can buy edible ink cartridges or refillable cartridges that you can fill with edible ink bottles. Here are the steps to refill Epson cartridges with edible inks:
- -Note: The edible ink cartridges are only for printing on edible paper or frosting sheets. Do not use them for printing on regular paper or other materials. Also, do not mix edible inks with regular inks or use them in printers that have been used with regular inks before.
King Charles III and Camilla, Queen Consort, wave after viewing floral tributes to the late Queen Elizabeth II outside Buckingham Palace on Saturday. Chris Jackson/Getty Images
-Throughout the 1960s, Williams appeared in several television series and films including roles in The Beverly Hillbillies, The Twilight Zone, Batman, Adam-12, Lost in Space, The Naked Kiss, and the Sonny & Cher film, Good Times (1967). In 1970, she appeared as Ashley St. Ives in Russ Meyer's first mainstream film, Beyond the Valley of the Dolls, followed by his second mainstream film, The Seven Minutes (1971). Meyer and Williams married in 1970, shortly after the release of Beyond the Valley of the Dolls.
Soon after, Kendrick made her film debut in the musical film Camp. She played the role of nerdy Fritzi Wagner in the movie. Then, she went on to essay the character of Ginny Ryerson in the film Rocket Science.
-R. David Edmunds, Historian: Tecumseh, we know, is very angry with his brother after this battle. And I think the Prophet spends the rest of his life trying to get back into a position of prominence.
-Narrator: For four years Geronimo struggled with life on the reservation. Then in the summer of 1881, he was drawn to the startling message of a charismatic Apache medicine man, called the Dreamer. A former military scout, well versed in American ways, he urged a return to traditional Apache life. Apaches came from miles around to attend his ceremony. The Dreamer marked East, South, West and North with sacred cattail pollen. People circled around him as he preached. Apaches should not take revenge against the white man, the Dreamer said. Ussen would see that the Americans suffered for their sins in the afterlife. It was a plea for unity and peace for a people who had seen little of either.
-Download unlimited Cancer Survivor movies and videos here: Cancer Survivor in HD, 3GP, and MP4 (320p), plus more videos you can download easily from tamilrockers, movierulz, tamilgun, filmywap, and pagalworld.
- -MP4 has penetrated into every aspect of our life. Teachers or trainers usually insert MP4 videos into PPTs to make further explanation. Companies always save their propaganda films as MP4 to fit for miscellaneous broadcast platforms flawlessly. Web surfers often download online videos as MP4 for fluent playback on media players, editing for further use, burning to DVD, etc.
aaccfb2cb3To protect your data against ransomware you will have to use a layered approach. It all starts with educating your users on how to recognize phishing emails. Users are the middleman when it comes to ransomware infections.
-DOWNLOAD ⚙⚙⚙ https://tinurli.com/2uwicj
Microsoft today shared tips on how to defend against human-operated ransomware attacks known to be behind hundreds of millions of dollars in losses following campaigns targeting enterprises and government entities.
-Also, the operators will not immediately deploy the ransomware payload on the victims' networks after the Trickbot infections occur but they will instead wait weeks or even months after the infiltration has started.
-Victims: Conti victim organizations span across multiple industries, including construction and engineering, legal and professional services, manufacturing, and retail. In addition, WIZARD SPIDER affiliates have deployed Conti ransomware against U.S. healthcare and first responder networks.
- -U.S., Australian, Canadian, New Zealand, and UK cybersecurity authorities urge network defenders of critical infrastructure organizations to exercise due diligence in identifying indicators of malicious activity. Organizations detecting potential APT or ransomware activity in their IT or OT networks should:
-Many consumers and corporate users install a security solution for their Windows systems to defend against ransomware. In its Advanced Threat Protection tests, AV-TEST examined just how well 29 of these products offered protection against a ransomware attack. In doing so, each security solution was required to successfully hold up against the attackers in 10 practical scenarios. Most of the products performed very well, but there were a few missteps here and there, dampening the result.
-The tested solutions for corporate users come from: Ahnlab, Avast, Bitdefender (with 2 versions), Check Point, Comodo, G DATA, Malwarebytes, Microsoft, Seqrite, Trellix, Trend Micro and VMware.
-The table featuring solutions for corporate users quickly reveals that 12 of the 13 examined products detected all attackers. Only Trend Micro missed the mark in one instance. But detection does not automatically mean complete defense against the ransomware attack. It is pleasing to note: 8 out of the 13 products were successful in detecting and totally fending off the attacks. For this they received the full 40 points for the protection score: Ahnlab, Avast, Bitdefender (with both versions), Comodo, G DATA, Malwarebytes and Microsoft.
-In theory, yes. If you install antivirus software on your system and haven\u2019t previously had software to prevent infection (or it\u2019s not picked it up), the new software\u2019s real-time scanning may not pick up the virus on your device. However, when you run a full scan of your system, the software should detect and block it.","author":"@type":"Person","name":"Craig McCart","description":"Craig McCart is a content writer and copywriter with 10+ years of experience working in cybersecurity in a corporate VPN environment. Since working for Comparitech, he's taken all of his experience and applied his knowledge to provide enjoyable and educational content.\nCraig researches the latest cybersecurity trends in an ever-changing landscape to provide VPN guides, comparisons, and reviews that are easy for readers to consume.\nWhen he's taking a break from being a Comparitech word-wizard, he spends time playing games with his baby (his power-hungry gaming PC).\nHis typical go-to titles are God of War, New World, and the occasional Metal Gear Solid speedrun (the best game ever, in his opinion).\nWhen he's not gaming, he's with his family (with actual non-gaming computer babies!), enjoying days out and the occasional trip abroad.\n","url":"https:\/\/www.comparitech.com\/author\/craigmccart\/"}},"@type":"Question","name":"How do I uninstall McAfee?","answerCount":1,"acceptedAnswer":"@type":"Answer","text":"It\u2019s pretty straightforward to uninstall McAfee from your system, and you can do so by following these steps:\n
Bitdefender creates automatic, up-to-date tamperproof backup copies of user files, identifying when ransomware attempts to encrypt files and automatically creating a backup of target files that are restored after the malware is blocked.
-With Bitdefender, Kansas Public Finance (KDFA) is keeping its endpoints free of ransomware, avoiding cleanup and lost productivity, and ensuring an efficient workflow. More importantly, KDFA can be productive and responsive to client requests. A painful repercussion following a ransomware attack prior to using Bitdefender was having to recreate lost financial documents and telling users about their lost data.
-Endpoint security, or endpoint protection, is the process of protecting user endpoints (a device connected to a network to communicate) from threats such as malware, ransomware, and zero-days. The connection of endpoint devices to corporate networks creates attack paths for security threats of all kinds. This could mean exposing important financial information about an organization or leaking personal information about customers that thought they were secure.
aaccfb2cb3${outputTxt}` - - const titleLength = 150; - let titleTxt = inputTxt; - if(titleTxt.length > titleLength){ - titleTxt = titleTxt.slice(0, titleLength) + ' ...'; - } - - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - - if(!inputTxt || !outputTxt){ - return; - }; - - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - - const descriptionMd = `### Question: -${inputTxt} - -### Answer: - -${outputTxt}`; - - const params = { - title: titleTxt, - description: descriptionMd, - }; - - const paramsStr = Object.entries(params) - .map(([key, value]) => `${encodeURIComponent(key)}=${encodeURIComponent(value)}`) - .join('&'); - - window.open(`https://huggingface.co/spaces/bigcode/bigcode-playground/discussions/new?${paramsStr}`, '_blank'); - - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" - -share_btn_css = """ -a {text-decoration-line: underline; font-weight: 600;} -.animate-spin { - animation: spin 1s linear infinite; -} -@keyframes spin { - from { transform: rotate(0deg); } - to { transform: rotate(360deg); } -} -#share-btn-container { - display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; -} -#share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important; -} -#share-btn * { - all: unset; -} -#share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; -} -#share-btn-container .wrap { - display: none !important; -} -""" \ No newline at end of file diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/8bps.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/8bps.c deleted file mode 100644 index 90d6c96fd1baddcb2def24955217d881a6251778..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/8bps.c +++ /dev/null @@ -1,185 +0,0 @@ -/* - * Quicktime Planar RGB (8BPS) Video Decoder - * Copyright (C) 2003 Roberto Togni - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * QT 8BPS Video Decoder by Roberto Togni - * For more information about the 8BPS format, visit: - * http://www.pcisys.net/~melanson/codecs/ - * - * Supports: PAL8 (RGB 8bpp, paletted) - * : BGR24 (RGB 24bpp) (can also output it as RGB32) - * : RGB32 (RGB 32bpp, 4th plane is alpha) - */ - -#include
Do you love creating your own worlds and playing with your friends online? Do you enjoy mining, crafting, building, and surviving in a sandbox game? If you answered yes to any of these questions, then you might want to check out Crafting and Building APK 1.18.32, a free game for Android devices that lets you unleash your imagination and have fun.
In this article, we will tell you everything you need to know about this game, including what it is, how to download and install it, how to play it, what its features are, and what its pros and cons are. We will also answer some frequently asked questions about the game at the end of the article.
-Crafting and Building APK 1.18.32 is a sandbox game that allows you to create your own worlds and play with your friends online. It is similar to Minecraft, but with more features and options. You can choose from different game modes, such as creative, multiplayer, or survival, and explore different worlds, such as city, forest, or desert.
-In creative mode, you have unlimited resources and no limits on what you can build. You can use different blocks and items to create anything you can imagine, such as houses, castles, bridges, gardens, or sculptures. You can also use different tools, such as paintbrushes, hammers, or axes, to modify your creations.
In multiplayer mode, you can join or create your own server and play with your friends online. You can chat with them, share your creations, or collaborate on building projects. You can also visit other players' worlds and see what they have made.
-In survival mode, you have to gather resources, craft items, build shelters, and fight enemies. You have to manage your hunger, health, and stamina levels, as well as the day-night cycle and weather conditions. You can also encounter different animals, monsters, or zombies that will try to attack you.
-If you want to play this game on your Android device, you will need to download and install the APK file from a trusted source. Here are the steps you need to follow:
-You can download the APK file from various websites that offer free games for Android devices. However, you need to be careful about which website you choose, as some of them may contain viruses or malware that can harm your device. One of the websites that we recommend is [APKPure], which is a safe and reliable source for downloading APK files. You can visit their website and search for Crafting and Building APK 1.18.32, or you can use this link: [https://apkpure.com/crafting-and-building/com.mmarcel.cnb2/download?from=details].
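Whichever site you download from, it is a good habit to compare the file's checksum against one published by the download page before installing anything. Here is a small, generic Python sketch for printing the SHA-256 checksum of a downloaded APK on a computer; the file name and the expected value below are placeholders, not real data for this game:

```python
# Generic integrity check -- the file name and expected checksum are placeholders.
import hashlib

apk_path = "crafting-and-building-1.18.32.apk"          # the file you downloaded
expected = "paste-the-checksum-published-by-the-site"   # if the site lists one

sha256 = hashlib.sha256()
with open(apk_path, "rb") as f:
    # Read in 1 MB chunks so large files do not need to fit in memory.
    for chunk in iter(lambda: f.read(1024 * 1024), b""):
        sha256.update(chunk)

digest = sha256.hexdigest()
print("SHA-256:", digest)
print("Matches published checksum:", digest == expected.lower())
```

If the two values do not match, delete the file and download it again rather than installing it.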
-Before you can install the APK file, you need to enable unknown sources on your device. This will allow you to install apps that are not from the Google Play Store. To do this, you need to go to your device's settings, then security, then unknown sources, and toggle it on. You may also need to confirm this action by tapping OK or Allow.
-After you have downloaded the APK file and enabled unknown sources, you can install the APK file by tapping on it and following the instructions on the screen. You may need to grant some permissions to the app, such as access to your storage, camera, or microphone. Once the installation is complete, you can launch the game by tapping on its icon on your home screen or app drawer.
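If you prefer to keep the APK on a computer, you can also install it over USB with the standard `adb` tool from Android's platform-tools instead of tapping through the installer. A minimal sketch, assuming `adb` is on your PATH, USB debugging is enabled on the phone, and the file name is a placeholder:

```python
# Sideload an APK from a computer with adb; the file name is a placeholder.
import subprocess

apk_path = "crafting-and-building-1.18.32.apk"

# "-r" replaces the app if an older version is already installed.
subprocess.run(["adb", "install", "-r", apk_path], check=True)
print("Installed", apk_path)
```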
-Now that you have installed the game, you are ready to play it. Here are some tips on how to play Crafting and Building APK 1.18.32:
-When you start the game, you will see a menu with different options, such as play, settings, skins, or exit. To play the game, you need to tap on play and choose a game mode: creative, multiplayer, or survival. Then, you need to choose a world: city, forest, or desert. You can also create your own world by tapping on the plus sign and naming it.
-Once you enter a world, you can explore it by moving around with the joystick on the left side of the screen and looking around with the swipe gesture on the right side of the screen. You can also jump by tapping on the button on the bottom right corner of the screen. To mine blocks or items, you need to tap and hold on them until they break. To craft items, you need to tap on the backpack icon on the top right corner of the screen and select the items you want to craft from the inventory. To build blocks or items, you need to tap on them in your inventory and place them where you want by tapping on the screen.
-You can also customize your character and settings in this game. To change your character's appearance, you need to tap on the skins option in the menu and choose from different skins or create your own. To change your settings, such as sound, language, or controls, you need to tap on the settings option in the menu and adjust them according to your preference.
-Crafting and Building APK 1.18.32 has many features that make it an enjoyable and engaging game for creative minds. Here are some of them:
-This game has a variety of blocks and items that you can use to create anything you want. You can find different materials, such as wood, stone, metal, glass, or wool, as well as different objects, such as furniture, plants, animals, or vehicles. You can also craft different items, such as weapons, tools, armor, or food.
-This game has realistic physics and graphics that make it more immersive and lifelike. You can see shadows, reflections, and particles in the game, as well as realistic sounds and animations. You can also adjust the graphics quality and resolution in the settings.
-This game has a user-friendly interface and controls that make it easy to play and navigate. You can see your inventory, health, hunger, and stamina bars on the screen, as well as the chat and pause buttons. You can also use the joystick, swipe gesture, and buttons to move, look, jump, mine, craft, and build. You can also change the controls in the settings.
-Like any other game, Crafting and Building APK 1.18.32 has its pros and cons that you should consider before playing it. Here are some of them:
-This game is free to download and play, which means you don't have to spend any money to enjoy it. It is also fun to play, as you can create your own worlds and play with your friends online.
-This game is creative and educational, as it allows you to express your imagination and learn new skills. You can build anything you want, from simple structures to complex machines. You can also learn about different materials, physics, geometry, and logic.
-This game is compatible with most Android devices, which means you can play it on your phone or tablet without any problems. It is also updated regularly, which means you can enjoy new features and improvements.
-This game has ads and in-app purchases, which means you may see some pop-ups or banners while playing or be asked to buy some items or coins with real money. This can be annoying or tempting for some players.
-This game has some bugs and glitches, which means you may encounter some errors or crashes while playing or lose some of your progress or items. This can be frustrating or disappointing for some players.
-This game has limited content and quality, which means you may get bored or dissatisfied with the game after a while. It has fewer blocks, items, worlds, and modes than other similar games, such as Minecraft. It also has lower graphics quality and resolution than other similar games.
-Crafting and Building APK 1.18.32 is a free game for Android devices that lets you create your own worlds and play with your friends online. It is a sandbox game that has different game modes, such as creative, multiplayer, or survival, and different worlds, such as city, forest, or desert. It has a variety of blocks and items that you can use to build anything you want. It also has realistic physics and graphics that make it more immersive and realistic. It has a user-friendly interface and controls that make it easy to play and navigate.
-However, this game also has some drawbacks that you should be aware of before playing it. It has ads and in-app purchases that may interrupt or tempt you while playing, some bugs and glitches that may cause errors or crashes, and more limited content and lower graphics quality than some players would like.
-Therefore, if you are looking for a free, fun, creative, and educational game that lets you create your own worlds and play with your friends online, you may want to give Crafting and Building APK 1.18.32 a try. However, if you are looking for a more polished, diverse, and advanced game with more features and options, you may want to consider alternatives such as Minecraft.
-We hope this article has helped you learn more about Crafting and Building APK 1.18.32 and decide whether you want to play it. If you have any questions or comments, feel free to leave them below. Thank you for reading and happy gaming!
-Here are some frequently asked questions about Crafting and Building APK 1.18.32:
-Yes, Crafting and Building APK 1.18.32 is safe to download and play, as long as you download it from a trusted source, such as APKPure, and enable unknown sources on your device. However, you should always be careful about what you download and install on your device, as some websites or apps may contain viruses or malware that can harm your device.
-Crafting and Building APK 1.18.32 is compatible with most Android devices that have Android 4.4 or higher. However, some devices may not be able to run the game smoothly or at all, depending on their specifications and performance. You can check the compatibility of your device by visiting the APKPure website and reading the information about the game.
-You can update Crafting and Building APK 1.18.32 by visiting the APKPure website and downloading the latest version of the game. You can also enable the auto-update option in the settings of the game, which will notify you when there is a new update available.
-You can contact the developers of Crafting and Building APK 1.18.32 by sending them an email at [craftingandbuilding@gmail.com]. You can also visit their Facebook page at [https://www.facebook.com/Crafting-and-Building-Game-1400176863520898/] or their YouTube channel at [https://www.youtube.com/channel/UCZfX8Ld5xhAH0yZ7yzMOfTA]. You can also leave a review or a comment on the APKPure website or the Google Play Store.
-Some similar games to Crafting and Building APK 1.18.32 are:
-If you are a fan of soccer games, you might have heard of eFootball PES 2023, the latest installment in the popular Pro Evolution Soccer series. This game is expected to be released in September 2023 for various platforms, including PlayStation, Xbox, PC, and mobile devices. However, if you want to enjoy this game on your Android or PC before its official release, you can do so by using a PSP emulator called PPSSPP. In this article, we will show you how to download and install eFootball PES 2023 PPSSPP game on your Android or PC in a few simple steps.
-eFootball PES 2023 PPSSPP is a modified version of the original eFootball PES 2023 game that is designed to run on PSP emulators. PSP emulators are software that allow you to play PSP games on other devices, such as Android phones or PCs. By using a PSP emulator, you can enjoy eFootball PES 2023 with high-quality graphics, smooth gameplay, and updated features.
-Download Zip >>>>> https://urlca.com/2uO5OA
Some of the features of eFootball PES 2023 PPSSPP are:
-PPSSPP Emulator is one of the best PSP emulators available for Android and PC. It can run your PSP games in HD resolution or even higher. It can also upscale textures, adjust colors and brightness, enable post-processing shaders, and other effects. Moreover, it can save and restore game state anywhere, anytime. You can also customize the on-screen touch controls or use an external controller or keyboard. PPSSPP Emulator is free and open source, meaning anyone can contribute to its development.
-The first step to play eFootball PES 2023 PPSSPP game on your Android or PC is to download the ISO file of the game. The ISO file is a compressed image of the game disc that contains all the data and files needed to run the game. You can download the eFootball PES 2023 PPSSPP ISO file from various websites that offer PSP games for free. However, you should be careful and only download from trusted and verified sources, as some websites may contain viruses or malware that can harm your device. Here are some of the websites that you can use to download the eFootball PES 2023 PPSSPP ISO file:
| Website | Link |
| --- | --- |
| PSP Games Download | - |
| PSP ISO Zone | - |
| PSP Share | - |
| PSP ROMs Download | - |
| PSP ISOs Net | - |
Once you have chosen a website, follow these steps to download the eFootball PES 2023 PPSSPP ISO file:
-The next step to play eFootball PES 2023 PPSSPP game on your Android or PC is to download the PPSSPP emulator. The PPSSPP emulator is available for free on Google Play Store for Android devices and on its official website for PC devices. Here are the links to download the PPSSPP emulator:
- -Once you have clicked on the link, follow these steps to download the PPSSPP emulator:
Now that you have downloaded both the game file and the emulator app, you are ready to install eFootball PES 2023 PPSSPP game on your Android device. Follow these steps to do so:
-If you want to play eFootball PES 2023 PPSSPP game on your PC, you need to install the game file and the emulator app on your PC. Follow these steps to do so:
-One of the advantages of playing eFootball PES 2023 PPSSPP game on your Android or PC is that you can customize the graphics and controls of the game according to your preference. You can access the graphics and controls settings by tapping or clicking on the Settings icon on the emulator app. Here are some of the options that you can adjust:
-Another feature of eFootball PES 2023 PPSSPP game is that you can play online multiplayer mode with other players around the world. You can join or create a room with up to four players and compete in various modes, such as Friendly Match, Co-op Match, Tournament, and League. To play online multiplayer mode, you need to have a stable internet connection and follow these steps:
-To make the most out of your eFootball PES 2023 PPSSPP game experience, here are some tips and tricks that you can use:
-eFootball PES 2023 PPSSPP is a great way to enjoy soccer on your Android device or PC. You can download and install the game file and the emulator app in a few simple steps, customize the graphics and controls to your preference, play online multiplayer against players from around the world, and use the tips and tricks above to improve your gameplay. In short, eFootball PES 2023 PPSSPP is a must-have for any soccer fan.
-So what are you waiting for? Download eFootball PES 2023 PPSSPP game now and start playing!
-Here are some frequently asked questions and answers about eFootball PES 2023 PPSSPP game:
-If you are looking for a song that exposes the corruption and injustice in Nigeria, then you should download M Josh Coro Dagbo. This song is a powerful critique of the government's response to the COVID-19 pandemic and the plight of the masses. In this article, we will tell you what the song is about, who is M Josh, how to download it, why you should listen to it, and what are some other songs by him.
-Download Zip ✫✫✫ https://urlca.com/2uO4pq
M Josh is a Nigerian singer, songwriter, producer, and activist. He was born in Enugu state and started his musical career as a gospel artist. He later switched to Afrobeat, a genre that combines African music with elements of jazz, funk, and soul. He is known for his socially conscious lyrics that address issues such as poverty, oppression, human rights, and politics. He has collaborated with artists such as Femi Kuti, Timaya, Duncan Mighty, and Mr Raw.
-Coro Dagbo is a song by M Josh that was released in March 2020. The title means "Corona is a lie" in the Igbo language. The song is a satire that mocks the Nigerian government for using the coronavirus outbreak as an excuse to embezzle funds and oppress the people. It also criticizes the media for spreading propaganda and misinformation, using humor, irony, and sarcasm to expose the hypocrisy and greed of the ruling class.
-If you want to download M Josh Coro Dagbo, you have several options. You can either stream it online or download it offline. Here are some of the platforms where you can find the song:
-M Josh Coro Dagbo is not just a song, but a movement. It is a voice of resistance and solidarity against tyranny and injustice. It is a song that speaks truth to power and inspires people to demand accountability and change. By listening to this song, you will learn about the reality of Nigeria and its people. You will also join a community of like-minded people who share your values and vision. You will also enjoy the catchy melody, rhythm, and vocals of M Josh.
M Josh has many other songs that you can listen to and download. Some of his popular and recent songs are:
-Title | Year | Theme |
---|---|---|
Audio Government | 2019 | The failure of the government to deliver on its promises |
Indomie Generation | 2020 | The challenges and aspirations of the young generation in Nigeria |
Movie In Aso Rock | 2020 | The drama and corruption in the presidential villa |
Don't Leave Me | 2020 | A love song with a twist of humor and wordplay |
Enugu Dalu | 2021 | A tribute to the people and culture of Enugu state |
M Josh Coro Dagbo is a song that you should not miss. It is a song that tells the truth about the situation in Nigeria and calls for action and change. It is a song that will make you laugh, cry, think, and dance. It is a song that will make you proud to be a Nigerian. Download M Josh Coro Dagbo today and join the movement.
-M Josh Coro Dagbo is an Afrobeat song, which is a fusion of African music with jazz, funk, and soul influences.
-You can watch the video of M Josh Coro Dagbo on YouTube. The video features M Josh performing the song in different locations and scenes.
-You can support M Josh and his music by following him on his social media accounts, sharing his songs with your friends, buying his merchandise, and attending his shows.
-No, M Josh Coro Dagbo is not banned in Nigeria. However, some radio stations and TV channels may not play it because of its controversial content and message.
-M Josh has received several awards and nominations for his music, such as:
-If you are looking for a reliable and convenient way to bet on sports and play online casino games from your Android device, you should definitely check out Linebet APK. This is a mobile application that allows you to access all the features and functions of Linebet, one of the best betting platforms in India. In this article, we will tell you everything you need to know about Linebet APK, including how to download, install, register, deposit, withdraw, bet, and play using this app. Read on and find out why Linebet APK is a must-have for any Indian gambler.
-Linebet APK is a mobile application that you can download and install on your Android device to enjoy all the services of Linebet, a licensed and reputable online betting platform with a wide range of sports betting and casino gaming options. With the app you get full access to Linebet: you can bet on your favorite sports, such as cricket, football, tennis, and basketball, and play hundreds of online casino games, including slots, roulette, blackjack, poker, and baccarat. You can also claim the bonuses and promotions that Linebet offers its customers, such as a 100% welcome bonus on your first deposit, free bets, cashback, and a loyalty program, and pay with methods popular among Indian players, including UPI, Paytm, PhonePe, Neteller, Skrill, and Bitcoin. Customer support is available anytime via live chat, email, or phone. In short, Linebet APK is a complete mobile betting solution that makes betting from your phone more convenient.
-Download Zip >> https://urlca.com/2uObor
Some of the main features and benefits of Linebet APK are:
-As you can see, Linebet APK has many advantages that make it a great choice for any Indian gambler who wants to bet on sports and play casino games from their Android device.
If you are interested in downloading and installing Linebet APK on your Android device, you can follow these simple steps:
-The first thing you need to do is to go to the official website of Linebet, which is https://linebet.in/. You can use any browser you prefer, such as Chrome, Firefox, Opera, or Safari. Once you are on the website, you will see a menu icon on the top left corner of the screen. Tap on it and then tap on the "Download App" option. This will take you to the download page of Linebet APK.
-On the download page, you will see a green button that says "Download for Android". Tap on it and then tap on "OK" when prompted. This will start the download process of the APK file, which is about 20 MB in size. You can check the progress of the download on your notification bar or in your downloads folder.
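Before moving on to installation, you can optionally check who signed the APK. If you have the Android SDK build-tools on a computer, the standard `apksigner` utility prints the signer's certificate; a rough sketch (the file name is a placeholder):

```python
# Inspect an APK's signing certificate with apksigner (Android SDK build-tools).
# "linebet.apk" is a placeholder for the file you actually downloaded.
import subprocess

result = subprocess.run(
    ["apksigner", "verify", "--print-certs", "linebet.apk"],
    capture_output=True,
    text=True,
)

print(result.stdout)  # certificate subject, SHA-256 digests, etc.
print("Signature verified" if result.returncode == 0 else "Verification failed")
```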
-Before you can install the APK file, you need to allow installation from unknown sources on your Android device. This is because Linebet APK is not available on the Google Play Store and therefore it is considered as an unknown source by your device. To allow installation from unknown sources, go to your device settings and then tap on "Security" or "Privacy". Then, find the option that says "Unknown sources" or "Install unknown apps" and enable it. You may also need to confirm your choice by tapping on "OK" or "Allow". This will allow you to install Linebet APK without any issues.
-Once you have allowed installation from unknown sources, you can proceed to install the APK file. To do this, go to your downloads folder and find the file that says "Linebet.apk". Tap on it and then tap on "Install". Wait for a few seconds until the installation is complete. Then, tap on "Open" to launch the app. You can also find the app icon on your home screen or app drawer. Congratulations, you have successfully downloaded and installed Linebet APK on your Android device!
-Now that you have installed Linebet app on your Android device, you need to register and log in to start using it. Here are the steps you need to follow:
-When you launch the app for the first time, you will see a welcome screen that asks you to register or log in. Tap on the register button to create a new account. You can also tap on the log in button if you already have an account with Linebet.
-On the registration screen, you will see a form that asks you to fill in your personal details, such as your name, email address, phone number, country, currency, and date of birth. You also need to choose a password that is at least 8 characters long and contains letters and numbers. Make sure you enter valid and accurate information as this will be used for verification purposes later. You can also enter a promo code if you have one to claim a bonus. Then, tap on the checkbox that says "I agree with all rules" and then tap on the register button.
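As a small illustration of that password rule (at least 8 characters, containing both letters and numbers), here is a quick check you could run on a candidate password; the real form may enforce additional requirements:

```python
# Illustrates only the rule stated above; the site may check more than this.
def password_ok(password: str) -> bool:
    long_enough = len(password) >= 8
    has_letter = any(c.isalpha() for c in password)
    has_digit = any(c.isdigit() for c in password)
    return long_enough and has_letter and has_digit

print(password_ok("cricket2023"))  # True: 11 characters, letters and digits
print(password_ok("cricket"))      # False: too short and no digit
```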
-After you register, you will receive a verification code via email or phone number, depending on what you entered during registration. You need to enter this code on the app to verify your account. This is an important step as it will allow you to make deposits, withdrawals, and bets using Linebet app. If you don't receive the code within a few minutes, check your spam folder or request a new code.
-Once you verify your account, you can log in to the app using your username and password that you created during registration. You can also use your email address or phone number as your username. You can also choose to save your login details or use the fingerprint option for faster access. Once you log in, you will see the main screen of the app, where you can access all the features and functions of Linebet.
-One of the most important aspects of any online betting platform is the payment system. Linebet app offers a variety of payment methods that are convenient and secure for Indian players. Here are the steps you need to follow to deposit and withdraw money using Linebet app:
-On the main screen of the app, you will see a cashier icon on the top right corner of the screen. Tap on it to open the cashier section, where you can manage your balance and transactions.
-On the cashier section, you will see two tabs: deposit and withdraw. Tap on the deposit tab if you want to add money to your account, or tap on the withdraw tab if you want to cash out your winnings. Then, choose your preferred payment method from the list, such as UPI, Paytm, PhonePe, Neteller, Skrill, Bitcoin, or more. You can also see the minimum and maximum limits for each method. Then, enter the amount you want to deposit or withdraw and tap on the confirm button.
-After you confirm your transaction, you will be redirected to the payment gateway of your chosen method, where you need to complete the payment process. For example, if you choose UPI, you need to enter your UPI ID and PIN, or scan the QR code. If you choose Bitcoin, you need to scan the address or copy it to your wallet. Follow the instructions on the screen and confirm your payment. Then, wait for a few minutes until you receive a confirmation message from Linebet app. You can also check your transaction history on the app to see if your transaction was successful.
-To withdraw money from your Linebet account, you need to follow the same steps as above, but choose withdraw instead of deposit on the cashier section. You also need to verify your identity before you can make your first withdrawal. To do this, you need to upload a copy of your ID card, passport, or driver's license, as well as a proof of address, such as a bank statement or utility bill. This is a standard procedure that ensures your security and prevents fraud. Once you verify your identity, you can withdraw money using any of the available methods. The withdrawal time may vary depending on the method, but usually it takes between 24 hours and 5 days.
-The main reason why you download Linebet app is to bet on sports and play casino games from your Android device. Linebet app offers a wide range of sports betting and casino gaming options that will suit any taste and preference. Here are the steps you need to follow to bet on sports and play casino games using Linebet app:
-On the main screen of the app, you will see two icons: sports and casino. Tap on the sports icon if you want to bet on sports events, or tap on the casino icon if you want to play casino games. You can also swipe left or right to switch between the two sections.
-On the sports section, you will see a list of all the sports that Linebet offers, such as cricket, football, tennis, basketball, and more. You can also see the live events, the top events, and the upcoming events. Tap on your favorite sport to see all the available markets and odds. You can also use the search bar or the filter options to find a specific event or league. On the casino section, you will see a list of all the casino games that Linebet offers, such as slots, roulette, blackjack, poker, baccarat, and more. You can also see the live casino games, where you can play with real dealers and other players. Tap on your favorite game to open it and start playing.
-On the sports section, you can place your bets by tapping on the market and the odds that you want to bet on. You can also choose from various bet types, such as single, accumulator, system, or chain. You can also use the bet slip to adjust your stake and see your potential winnings. Once you are satisfied with your bet, tap on the place bet button and confirm your bet. You can also cash out your bet before the event ends if you want to secure a profit or minimize a loss. On the casino section, you can play your games by tapping on the spin, deal, or play button, depending on the game. You can also adjust your bet size and use various features, such as auto-play, turbo mode, or chat. You can also check your game history and rules on the app.
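To make the bet types concrete, here is a rough sketch of how potential returns are commonly worked out from decimal odds for single, accumulator, and system bets. The odds and stake below are made-up numbers for illustration; the Linebet bet slip performs this calculation for you:

```python
# Common decimal-odds arithmetic for single, accumulator, and system bets.
# Odds and stake are made-up illustration values, not real markets.
from itertools import combinations
from math import prod

odds = [1.80, 2.10, 1.65]  # decimal odds of three selections
stake = 100                # stake per bet (or per line for the system bet)

single_return = stake * odds[0]          # one selection wins
accumulator_return = stake * prod(odds)  # every selection must win

# A 2/3 "system" bet places one line on every pair of selections.
lines = list(combinations(odds, 2))
system_return_if_all_win = sum(stake * prod(line) for line in lines)

print(f"Single:      {single_return:.2f}")
print(f"Accumulator: {accumulator_return:.2f}")
print(f"System 2/3 ({len(lines)} lines): {system_return_if_all_win:.2f}")
```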
-On both sections, you can check your results and winnings on the app by tapping on the history or balance icon. You can also see your active bets or games and their status. If you win a bet or a game, you will receive a notification from the app and your winnings will be credited to your account instantly. You can then withdraw your winnings using any of the available payment methods.
-In conclusion, Linebet APK is a mobile application that allows you to bet on sports and play casino games from your Android device. It offers a wide range of features and benefits that make it a great choice for any Indian gambler who wants to enjoy online betting from anywhere and anytime. To download and install Linebet APK on your Android device, you need to follow these simple steps:
-To register and log in to Linebet app, you need to follow these simple steps:
-To deposit and withdraw money using Linebet app, you need to follow these simple steps:
-To bet on sports and play casino games using Linebet app, you need to follow these simple steps:
-We hope that this article has helped you understand how to use Linebet APK for Android and why you should download it. If you have any questions or doubts, you can check the FAQs below or contact the customer support team of Linebet via live chat, email, or phone. Happy betting and gaming!
-Here are some of the most frequently asked questions about Linebet APK for Android:
-Yes, Linebet APK is safe and legal to use in India. Linebet is a licensed and regulated online betting platform that uses encryption technology to protect your data and transactions. Linebet also complies with the laws and regulations of India and does not offer any illegal or prohibited activities.
-Linebet APK works on all modern Android devices that have an operating system of 4.2 or higher. You also need to have a stable internet connection and enough storage space on your device.
-No, Linebet APK is only compatible with Android devices. However, you can use the mobile version of Linebet website on other devices, such as iOS or Windows. You can access the mobile version by using any browser on your device and going to https://linebet.in/.
-No, you need to have an internet connection to use Linebet APK. You cannot bet on sports or play casino games offline. However, you can access some features of the app offline, such as your balance, history, and settings.
-No, you can only use one account on Linebet APK. Creating multiple accounts is against the terms and conditions of Linebet and may result in the suspension or termination of your account. You should also not share your account details with anyone else or use someone else's account.
-If you are a fan of football and you love to manage your own team, then you should not miss Football Head Coach 23, the latest and most advanced football game for Android devices. In this game, you can create your own franchise and lead your team to victory in all the football games. You can manage your entire roster with custom tactics and formations that show off your unique play style. You can also compete with other players from around the world in online leagues and tournaments, where you can prove your skills and strategy. But what if you want to enjoy the game without any limitations or restrictions? Well, that's where Football Head Coach 23 Mod APK comes in. In this article, we will tell you everything you need to know about this modded version of the game, how to download and install it on your device, and why you should play it right now.
-DOWNLOAD ……… https://urlca.com/2uO8Gg
Football Head Coach 23 is a football management game that lets you take control of every aspect of your team. You can choose from over 800 clubs from 33 countries, each with their own history, culture, and fan base. You can also scout and sign players from a database of over 50,000 real-life players, each with their own attributes, skills, and potential. You can also train and develop your players, improve their performance, and deal with injuries, suspensions, and transfers. You can also set up your tactics and formations, choose your starting lineup, make substitutions, and give instructions during the matches. You can also watch the matches unfold in realistic 3D graphics, with commentary, replays, and statistics.
-Football Head Coach 23 is not just a single-player game. You can also play online with other players from around the world in various modes. You can join or create online leagues, where you can compete against other managers in a season-long competition. You can also participate in online tournaments, where you can face off against other teams in knockout rounds. You can also challenge your friends or random opponents in friendly matches, where you can test your skills and strategy. You can also chat with other players, send messages, make friends, and join communities.
-Football Head Coach 23 is also a game that lets you unleash your creativity and customize your team according to your preferences. You can design your own logo, kit, stadium, and fan banners, edit your players' names, faces, hairstyles, tattoos, accessories, and boots, and create your own players, teams, leagues, tournaments, and scenarios. You can also share your creations with other players online or download their creations to use in your game.
Football Head Coach 23 Mod APK is a modified version of the original game that gives you unlimited money and resources to use in the game. This means that you can buy any player you want, upgrade your facilities, hire the best staff, and more. You can also unlock all the features and modes of the game, such as the editor, the online modes, and the premium content. You can also access all the updates and patches of the game, as well as the latest rosters and data.
-Football Head Coach 23 Mod APK is also a safe and easy way to download and install the game on your Android device. You don't need to root your device or use any third-party apps to install the mod apk. You just need to follow some simple steps that we will explain later in this article. You also don't need to worry about any viruses, malware, or spyware that might harm your device or compromise your privacy. The mod apk file is scanned and verified by trusted sources and antivirus programs.
-Football Head Coach 23 Mod APK is also a fun and exciting way to enjoy the game without any limitations or restrictions. You can play the game as much as you want, without worrying about running out of money or resources. You can also experiment with different tactics and formations, try out different players and teams, and explore all the features and modes of the game. You can also have more fun and challenge in online modes, where you can compete with other players who have the mod apk as well.
-The first step to download and install Football Head Coach 23 Mod APK is to enable unknown sources on your device settings. This will allow you to install apps that are not from the Google Play Store. To do this, go to your device settings, then security, then unknown sources, and turn it on. You might see a warning message, but don't worry, it's safe to proceed.
-The next step is to download the mod apk file from a trusted source. There are many websites that offer mod apk files for various games, but not all of them are reliable or safe. Some of them might contain viruses, malware, or spyware that might harm your device or compromise your privacy. That's why we recommend you to download the mod apk file from our website, which is tested and verified by us and other users. You can find the download link at the end of this article.
The third step is to locate and tap on the downloaded file to start the installation process. You can find the downloaded file in your device's download folder or in your notification bar. Once you find it, tap on it and you will see a pop-up window asking you to confirm the installation. Tap on install and wait for a few seconds.
-The fourth step is to follow the instructions on the screen and wait for the installation to complete. You might see some permissions requests or additional steps depending on your device model or Android version. Just follow them accordingly and grant any necessary permissions. Once the installation is complete, you will see a message saying that the app has been installed successfully.
-The final step is to launch the game and enjoy! You can find the game icon on your home screen or in your app drawer. Tap on it and you will see the game's main menu. You can now start playing Football Head Coach 23 Mod APK with unlimited money and resources. Have fun!
Football Head Coach 23 Mod APK is a game that will give you the thrill and challenge of managing a football team. You will have to make all the decisions that affect your team's performance, such as signing players, setting tactics, and dealing with the media. You will also have to face the pressure and expectations of your fans, board, and sponsors. You will have to balance your budget, your reputation, and your results. You will have to cope with the highs and lows of the football season, such as winning trophies, losing matches, or getting sacked. You will have to prove yourself as the best manager in the world.
-Football Head Coach 23 Mod APK is also a game that will let you compete with other players from around the world in online modes. You can join or create online leagues, where you can play against other managers in a season-long competition. You can also participate in online tournaments, where you can play against other teams in knockout rounds. You can also challenge your friends or random opponents in friendly matches, where you can test your skills and strategy. You can also chat with other players, send messages, make friends, and join communities. You can also compare your achievements and rankings with other players on the global leaderboards.
-Football Head Coach 23 Mod APK is also a game that will let you unleash your creativity and customize your team, tactics, and formations. You can design your own logo, kit, stadium, and fan banners. You can also edit your players' names, faces, hairstyles, tattoos, accessories, and boots. You can also create your own players, teams, leagues, tournaments, and scenarios. You can also share your creations with other players online or download their creations to use in your game. You can also experiment with different tactics and formations, choose your starting lineup, make substitutions, and give instructions during the matches.
-Football Head Coach 23 Mod APK is the ultimate football game for Android devices. It is a realistic and immersive football management game that lets you take control of every aspect of your team. It is a multiplayer and competitive game that lets you play online with other players from around the world in various modes. It is a customizable and creative game that lets you design your own team, tactics, and formations. It is also a modded version of the original game that gives you unlimited money and resources to use in the game. It is a safe and easy way to download and install the game on your device. It is a fun and exciting way to enjoy the game without any limitations or restrictions.
-If you are looking for a football game that will challenge your skills and strategy, entertain you for hours, and satisfy your passion for football, then you should download Football Head Coach 23 Mod APK right now. You will not regret it!
-A: Yes, Football Head Coach 23 Mod APK is free to download and play. You don't need to pay anything to enjoy the game.
-A: Football Head Coach 23 Mod APK is compatible with most Android devices that have Android 5.0 or higher. However, some devices might experience some performance issues or bugs due to different specifications or models.
-A: Yes, Football Head Coach 23 Mod APK is safe to use. The mod apk file is scanned and verified by trusted sources and antivirus programs. However, you should always download the mod apk file from our website or other reliable sources to avoid any risks.
-A: To update Football Head Coach 23 Mod APK, you need to download the latest version of the mod apk file from our website or other trusted sources. Then you need to uninstall the previous version of the game from your device and install the new version following the same steps as before.
-A: To contact the developers of Football Head Coach 23, you can visit their official website or their social media pages on Facebook, Twitter, Instagram, or YouTube. You can also send them an email at support@footballheadcoach.com.
Have you ever dreamed of exploring, creating, and sharing your own virtual worlds with millions of people? If so, you might want to check out Roblox, one of the most popular and innovative gaming platforms in the world. In this article, we will show you how to download Roblox APK for Android, how to play and create on Roblox, and why it is the ultimate virtual universe for Android users.
-Roblox is not just a game, but a platform that lets you create, share, and experience anything you can imagine. Whether you want to go on an epic adventure, compete against rivals worldwide, or just hang out and chat with your friends online, Roblox has something for everyone. Here are some of the features that make Roblox unique:
-Download ✓ https://urlca.com/2uOchG
Roblox features a growing library of experiences created by the community, ranging from role-playing games and simulations to puzzles, shooters, racing games, and more. You can join any of these experiences for free and enjoy them with your friends or other players from around the world. You can also use VR headsets or Xbox One to enhance your immersion.
-Roblox is more than just a platform, but also a social network where you can meet and interact with millions of other users who share your interests. You can chat with them using text or voice features, send them private messages, or join groups and clubs. You can also follow your favorite creators, rate and review their games, and give them feedback.
-Roblox is also a sandbox where you can unleash your creativity and make your own games using the Roblox Studio tool. You can use simple drag-and-drop features or learn to code using Lua scripting language. You can also customize your avatar with tons of items from the catalog, such as hats, shirts, faces, gear, and more. There is no limit to what you can create and share on Roblox.
-If you want to join the infinite metaverse of Roblox on your Android device, you will need to download the Roblox APK file. This is a free app that lets you access all the features of Roblox on your mobile device. Here are some things you need to know before downloading Roblox APK for Android:
-To download and install Roblox APK for Android, you will need a device that runs on Android 5.0 or higher. You will also need a network connection (Wi-Fi is recommended) and at least 163 MB of free storage space. Roblox APK is compatible with most Android devices, including smartphones and tablets.
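If you are unsure whether your phone meets these requirements, and you have Android's platform-tools on a computer with USB debugging enabled, you can query the device directly. A quick sketch using `adb`:

```python
# Query Android version and free storage over adb (platform-tools must be on PATH).
import subprocess

def adb_shell(*args: str) -> str:
    result = subprocess.run(
        ["adb", "shell", *args], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()

android_version = adb_shell("getprop", "ro.build.version.release")
storage_report = adb_shell("df", "-h", "/data")

print("Android version:", android_version)  # Roblox needs 5.0 or higher
print(storage_report)                       # check that at least ~163 MB is free
```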
-To install Roblox APK for Android, you can follow these simple steps:
-To make the most out of Roblox APK for Android, you can use some of these tips and tricks:
-Roblox has a vast collection of experiences that you can join and enjoy. You can browse by genre, popularity, or recommendations. You can also use filters to find games that are suitable for your age, device, or language. Some of the most popular experiences on Roblox are:
-Name | Description |
---|---|
Adopt Me! | A game where you can adopt pets, design your home, and explore a magical world. |
Bloxburg | A game where you can build and design your own house, work at different jobs, and socialize with other players. |
Jailbreak | A game where you can choose to be a cop or a criminal, and either chase or escape from the law. |
Royale High | A game where you can dress up, attend classes, and role-play as a student in a fantasy school. |
Piggy | A game where you can either survive from a piggy monster or become one yourself. |
Roblox also lets you customize your avatar with various items from the catalog. You can change your appearance, clothing, accessories, gear, and more. You can also chat with your friends using text or voice features. You can send them private messages, join their parties, or invite them to your games. You can also make new friends by joining groups and clubs that match your interests.
-If you want to create your own games on Roblox, you can use the Roblox Studio tool. This is a powerful and user-friendly tool that lets you design, code, and publish your own experiences. You can use simple drag-and-drop features or learn to code using Lua scripting language. You can also use templates, models, sounds, and other assets from the library. You can also monetize your games by selling game passes or developer products. You can also earn Robux by participating in the Developer Exchange program.
-Roblox is the ultimate virtual universe for Android users who want to play and create their own games. It is a platform that offers immersive experiences, a community of millions of users, and a sandbox for creativity. It is also easy to download Roblox APK for Android and enjoy all the features of Roblox on your mobile device. Whether you want to explore, socialize, or learn new skills, Roblox has something for everyone.
-Here are some of the frequently asked questions about Roblox APK for Android:
-Yes, Roblox APK for Android is safe to download and install as long as you use trusted sources such as [Roblox APK (Android Game) - Free Download - APKCombo] or [Roblox - Apps on Google Play]. You should also scan the file with an antivirus app before opening it.
-Yes, Roblox APK for Android is free to download and play. However, some features may require in-app purchases using Robux currency. You can buy Robux with real money or earn them by creating games or participating in events.
-To update Roblox APK for Android, you can either use the Google Play Store app or download the latest version from [Roblox APK (Android Game) - Free Download - APKCombo] or [Roblox - Apps on Google Play]. You should also enable the auto-update feature on your device to get the latest updates automatically.
-To uninstall Roblox APK for Android, you can either use the Google Play Store app or the settings app on your device. You can also delete the file from your file manager app. You should also clear the cache and data of the app to free up some storage space.
-If you have any issues or questions about Roblox APK for Android, you can contact Roblox support by visiting their website at [Roblox Support] or sending them an email at info@roblox.com. You can also check their help page at [Roblox Help] or their blog at [Roblox Blog] for more information and updates.
-Have you ever wondered what a 3D truck is and how to design one? If you are interested in trucks or 3D modeling, this article is for you. In this article, we will explain what a 3D truck is and why it is useful. We will also show you how to design a 3D truck using different methods and tools. Finally, we will discuss the benefits and challenges of 3D truck design. By the end of this article, you will have a better understanding of 3D trucks and how to create your own.
-A 3D truck is a digital model of a truck that can be printed, modified, or simulated. A 3D truck can be created using various software programs or online platforms that allow you to design and visualize your own truck. Alternatively, you can also scan an existing truck using a 3D scanner and convert it into a digital model.
-Download 🗸 https://urlca.com/2uOaAM
There are many applications of 3D trucks in various industries and sectors. For example:
-Some examples of 3D trucks that have been created or used for different purposes are:
If you want to design your own 3D truck, there are several steps involved in the process. These steps are:
-There are different methods and tools for designing a 3D truck, and each one has its own advantages and disadvantages. Some of the factors that you should consider when choosing a method or tool are:
-Some tips and best practices for designing a 3D truck are:
-Designing a 3D truck can have many benefits and challenges. Some of the benefits are:
-Some of the challenges are:
-Some solutions or recommendations for overcoming the challenges of 3D truck design are:
-In conclusion, designing a 3D truck is an exciting and rewarding activity that can have many applications and benefits. However, designing a 3D truck also involves many steps and challenges that require careful consideration and attention. By following the tips and best practices that we have discussed in this article, you can design your own 3D truck successfully and easily.
-Here are some frequently asked questions on the topic of 3D truck design:
-There is no definitive answer to this question, as different programs have different features and advantages that suit different needs and preferences. However, two of the most popular and widely used modeling programs for designing a 3D truck are Blender and SketchUp, while sites such as CGTrader are marketplaces where you can buy ready-made truck models to start from. These tools are easy to use, offer many features and options, and support a wide range of formats and platforms.
-The time it takes to design a 3D truck depends on many factors, such as the complexity and detail of your truck, the method and tool that you use, and your skill level and experience. However, a rough estimate is that it can take anywhere from a few hours to a few days to design a 3D truck.
-The cost of designing a 3D truck also depends on many factors, such as the software or platform that you use, the quality and resolution of your model, and the purpose and usage of your truck. However, a rough estimate is that it can cost anywhere from a few dollars to a few hundred dollars to design a 3D truck.
-If you want to print or export your 3D truck, you need to make sure that your model is in a format that is supported by the printer or platform that you want to use. Some of the common formats for printing or exporting 3D models are STL, OBJ, FBX, and GLTF. You also need to make sure that your model is scaled and oriented correctly for printing or exporting.
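As a concrete illustration of this step, the short script below shows one way to rescale a truck model and re-export it in another format. It is only a sketch: it assumes the open-source trimesh Python library, hypothetical file names such as truck.stl, and a millimetre-to-metre unit fix that you would adapt to your own model's units and orientation.

```python
# Sketch: rescale and re-export a truck model before printing or sharing.
# Assumes the third-party `trimesh` library and a hypothetical file "truck.stl".
import numpy as np
import trimesh

mesh = trimesh.load("truck.stl", force="mesh")   # STL, OBJ, GLB, ... are all accepted
print("faces:", len(mesh.faces), "extents:", mesh.extents)

mesh.apply_scale(0.001)                          # assumed unit fix: millimetres -> metres
mesh.apply_transform(                            # stand the truck upright if the axes differ
    trimesh.transformations.rotation_matrix(np.radians(90), [1, 0, 0])
)

mesh.export("truck.obj")                         # format is inferred from the extension
mesh.export("truck_print.stl")
```

Most slicers and 3D viewers will then accept the exported STL or OBJ file directly.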
-If you want to learn more about 3D truck design, you can find many resources and tutorials online. Websites that offer free or paid 3D modeling courses and tutorials include Udemy, LinkedIn Learning (formerly Lynda), and YouTube, and you can browse Pinterest for visual inspiration. You can also join online communities or forums to ask questions or share your work with other 3D truck designers.
- -🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you're looking for a simple inference solution or training your own diffusion models, 🤗 Diffusers is a modular toolbox that supports both. Our library is designed with a focus on [usability over performance](https://huggingface.co/docs/diffusers/conceptual/philosophy#usability-over-performance), [simple over easy](https://huggingface.co/docs/diffusers/conceptual/philosophy#simple-over-easy), and [customizability over abstractions](https://huggingface.co/docs/diffusers/conceptual/philosophy#tweakable-contributorfriendly-over-abstraction). - -🤗 Diffusers offers three core components: - -- State-of-the-art [diffusion pipelines](https://huggingface.co/docs/diffusers/api/pipelines/overview) that can be run in inference with just a few lines of code. -- Interchangeable noise [schedulers](https://huggingface.co/docs/diffusers/api/schedulers/overview) for different diffusion speeds and output quality. -- Pretrained [models](https://huggingface.co/docs/diffusers/api/models) that can be used as building blocks, and combined with schedulers, for creating your own end-to-end diffusion systems. - -## Installation - -We recommend installing 🤗 Diffusers in a virtual environment from PyPi or Conda. For more details about installing [PyTorch](https://pytorch.org/get-started/locally/) and [Flax](https://flax.readthedocs.io/en/latest/installation.html), please refer to their official documentation. - -### PyTorch - -With `pip` (official package): - -```bash -pip install --upgrade diffusers[torch] -``` - -With `conda` (maintained by the community): - -```sh -conda install -c conda-forge diffusers -``` - -### Flax - -With `pip` (official package): - -```bash -pip install --upgrade diffusers[flax] -``` - -### Apple Silicon (M1/M2) support - -Please refer to the [How to use Stable Diffusion in Apple Silicon](https://huggingface.co/docs/diffusers/optimization/mps) guide. - -## Quickstart - -Generating outputs is super easy with 🤗 Diffusers. To generate an image from text, use the `from_pretrained` method to load any pretrained diffusion model (browse the [Hub](https://huggingface.co/models?library=diffusers&sort=downloads) for 4000+ checkpoints): - -```python -from diffusers import DiffusionPipeline - -pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") -pipeline.to("cuda") -pipeline("An image of a squirrel in Picasso style").images[0] -``` - -You can also dig into the models and schedulers toolbox to build your own diffusion system: - -```python -from diffusers import DDPMScheduler, UNet2DModel -from PIL import Image -import torch -import numpy as np - -scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256") -model = UNet2DModel.from_pretrained("google/ddpm-cat-256").to("cuda") -scheduler.set_timesteps(50) - -sample_size = model.config.sample_size -noise = torch.randn((1, 3, sample_size, sample_size)).to("cuda") -input = noise - -for t in scheduler.timesteps: - with torch.no_grad(): - noisy_residual = model(input, t).sample - prev_noisy_sample = scheduler.step(noisy_residual, t, input).prev_sample - input = prev_noisy_sample - -image = (input / 2 + 0.5).clamp(0, 1) -image = image.cpu().permute(0, 2, 3, 1).numpy()[0] -image = Image.fromarray((image * 255).round().astype("uint8")) -image -``` - -Check out the [Quickstart](https://huggingface.co/docs/diffusers/quicktour) to launch your diffusion journey today! 
- -## How to navigate the documentation - -| **Documentation** | **What can I learn?** | -|---------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| Tutorial | A basic crash course for learning how to use the library's most important features like using models and schedulers to build your own diffusion system, and training your own diffusion model. | -| Loading | Guides for how to load and configure all the components (pipelines, models, and schedulers) of the library, as well as how to use different schedulers. | -| Pipelines for inference | Guides for how to use pipelines for different inference tasks, batched generation, controlling generated outputs and randomness, and how to contribute a pipeline to the library. | -| Optimization | Guides for how to optimize your diffusion model to run faster and consume less memory. | -| [Training](https://huggingface.co/docs/diffusers/training/overview) | Guides for how to train a diffusion model for different tasks with different training techniques. | - -## Supported pipelines - -| Pipeline | Paper | Tasks | -|---|---|:---:| -| [alt_diffusion](./api/pipelines/alt_diffusion) | [**AltDiffusion**](https://arxiv.org/abs/2211.06679) | Image-to-Image Text-Guided Generation | -| [audio_diffusion](./api/pipelines/audio_diffusion) | [**Audio Diffusion**](https://github.com/teticio/audio-diffusion.git) | Unconditional Audio Generation | -| [controlnet](./api/pipelines/stable_diffusion/controlnet) | [**ControlNet with Stable Diffusion**](https://arxiv.org/abs/2302.05543) | Image-to-Image Text-Guided Generation | -| [cycle_diffusion](./api/pipelines/cycle_diffusion) | [**Cycle Diffusion**](https://arxiv.org/abs/2210.05559) | Image-to-Image Text-Guided Generation | -| [dance_diffusion](./api/pipelines/dance_diffusion) | [**Dance Diffusion**](https://github.com/williamberman/diffusers.git) | Unconditional Audio Generation | -| [ddpm](./api/pipelines/ddpm) | [**Denoising Diffusion Probabilistic Models**](https://arxiv.org/abs/2006.11239) | Unconditional Image Generation | -| [ddim](./api/pipelines/ddim) | [**Denoising Diffusion Implicit Models**](https://arxiv.org/abs/2010.02502) | Unconditional Image Generation | -| [latent_diffusion](./api/pipelines/latent_diffusion) | [**High-Resolution Image Synthesis with Latent Diffusion Models**](https://arxiv.org/abs/2112.10752)| Text-to-Image Generation | -| [latent_diffusion](./api/pipelines/latent_diffusion) | [**High-Resolution Image Synthesis with Latent Diffusion Models**](https://arxiv.org/abs/2112.10752)| Super Resolution Image-to-Image | -| [latent_diffusion_uncond](./api/pipelines/latent_diffusion_uncond) | [**High-Resolution Image Synthesis with Latent Diffusion Models**](https://arxiv.org/abs/2112.10752) | Unconditional Image Generation | -| [paint_by_example](./api/pipelines/paint_by_example) | [**Paint by Example: Exemplar-based Image Editing with Diffusion Models**](https://arxiv.org/abs/2211.13227) | Image-Guided Image Inpainting | -| [pndm](./api/pipelines/pndm) | [**Pseudo Numerical Methods for Diffusion Models on Manifolds**](https://arxiv.org/abs/2202.09778) | Unconditional Image Generation | -| [score_sde_ve](./api/pipelines/score_sde_ve) | [**Score-Based Generative Modeling through Stochastic Differential Equations**](https://openreview.net/forum?id=PxTIG12RRHS) | Unconditional Image 
Generation | -| [score_sde_vp](./api/pipelines/score_sde_vp) | [**Score-Based Generative Modeling through Stochastic Differential Equations**](https://openreview.net/forum?id=PxTIG12RRHS) | Unconditional Image Generation | -| [semantic_stable_diffusion](./api/pipelines/semantic_stable_diffusion) | [**Semantic Guidance**](https://arxiv.org/abs/2301.12247) | Text-Guided Generation | -| [stable_diffusion_text2img](./api/pipelines/stable_diffusion/text2img) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | Text-to-Image Generation | -| [stable_diffusion_img2img](./api/pipelines/stable_diffusion/img2img) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | Image-to-Image Text-Guided Generation | -| [stable_diffusion_inpaint](./api/pipelines/stable_diffusion/inpaint) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | Text-Guided Image Inpainting | -| [stable_diffusion_panorama](./api/pipelines/stable_diffusion/panorama) | [**MultiDiffusion**](https://multidiffusion.github.io/) | Text-to-Panorama Generation | -| [stable_diffusion_pix2pix](./api/pipelines/stable_diffusion/pix2pix) | [**InstructPix2Pix**](https://github.com/timothybrooks/instruct-pix2pix) | Text-Guided Image Editing| -| [stable_diffusion_pix2pix_zero](./api/pipelines/stable_diffusion/pix2pix_zero) | [**Zero-shot Image-to-Image Translation**](https://pix2pixzero.github.io/) | Text-Guided Image Editing | -| [stable_diffusion_attend_and_excite](./api/pipelines/stable_diffusion/attend_and_excite) | [**Attend and Excite for Stable Diffusion**](https://attendandexcite.github.io/Attend-and-Excite/) | Text-to-Image Generation | -| [stable_diffusion_self_attention_guidance](./api/pipelines/stable_diffusion/self_attention_guidance) | [**Self-Attention Guidance**](https://ku-cvlab.github.io/Self-Attention-Guidance) | Text-to-Image Generation | -| [stable_diffusion_image_variation](./stable_diffusion/image_variation) | [**Stable Diffusion Image Variations**](https://github.com/LambdaLabsML/lambda-diffusers#stable-diffusion-image-variations) | Image-to-Image Generation | -| [stable_diffusion_latent_upscale](./stable_diffusion/latent_upscale) | [**Stable Diffusion Latent Upscaler**](https://twitter.com/StabilityAI/status/1590531958815064065) | Text-Guided Super Resolution Image-to-Image | -| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [**Stable Diffusion 2**](https://stability.ai/blog/stable-diffusion-v2-release) | Text-to-Image Generation | -| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [**Stable Diffusion 2**](https://stability.ai/blog/stable-diffusion-v2-release) | Text-Guided Image Inpainting | -| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [**Depth-Conditional Stable Diffusion**](https://github.com/Stability-AI/stablediffusion#depth-conditional-stable-diffusion) | Depth-to-Image Generation | -| [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [**Stable Diffusion 2**](https://stability.ai/blog/stable-diffusion-v2-release) | Text-Guided Super Resolution Image-to-Image | -| [stable_diffusion_safe](./api/pipelines/stable_diffusion_safe) | [**Safe Stable Diffusion**](https://arxiv.org/abs/2211.05105) | Text-Guided Generation | -| [stable_unclip](./stable_unclip) | **Stable unCLIP** | Text-to-Image Generation | -| [stable_unclip](./stable_unclip) | **Stable unCLIP** | Image-to-Image Text-Guided Generation | -| [stochastic_karras_ve](./api/pipelines/stochastic_karras_ve) | [**Elucidating the 
Design Space of Diffusion-Based Generative Models**](https://arxiv.org/abs/2206.00364) | Unconditional Image Generation | -| [unclip](./api/pipelines/unclip) | [Hierarchical Text-Conditional Image Generation with CLIP Latents](https://arxiv.org/abs/2204.06125) | Text-to-Image Generation | -| [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Text-to-Image Generation | -| [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Image Variations Generation | -| [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Dual Image and Text Guided Generation | -| [vq_diffusion](./api/pipelines/vq_diffusion) | [Vector Quantized Diffusion Model for Text-to-Image Synthesis](https://arxiv.org/abs/2111.14822) | Text-to-Image Generation | - -## Contribution - -We ❤️ contributions from the open-source community! -If you want to contribute to this library, please check out our [Contribution guide](https://github.com/huggingface/diffusers/blob/main/CONTRIBUTING.md). -You can look out for [issues](https://github.com/huggingface/diffusers/issues) you'd like to tackle to contribute to the library. -- See [Good first issues](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) for general opportunities to contribute -- See [New model/pipeline](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+pipeline%2Fmodel%22) to contribute exciting new diffusion models / diffusion pipelines -- See [New scheduler](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+scheduler%22) - -Also, say 👋 in our public Discord channelin this paper, a multidisciplinary framework is proposed to evaluate the design of a model for a mechanical part. it is shown that in the construction of this framework, the use of multidisciplinary and quantitative methods can provide a strong foundation for solving design problems. it is shown by an example that the multidisciplinary framework is useful to the investigation of the characteristics of an application design.
-Industrial applications of this work are generally required to be tolerant of multiple starting conditions, changes to the process flow diagram, and model adaptations. Since the proposed framework can handle these, it is believed that it can be applied in practical situations.
Whatever new knowledge emerges in new domains is an expression of human cognitive activity. It may begin as an invention, but it will not be considered scientific until it has been tested through formal mechanisms of inquiry. When testing, understanding, and community agreement validate the acquisition of knowledge, one enters the domain of science. Establishing these rules is the job of the scientist.
-Design, as applied to computer science and engineering, refers to the work of partitioning, coordinating, and integrating the tools and systems that create a product. The designer determines how the product is constructed from the individual elements of the design: what the components are, how they relate to each other, and how they are assembled to realize a particular purpose. The techniques employed are often called engineering because they focus on the physical world. Design is the first step in the project life cycle, leading to an off-line development environment that is used to model or represent a function before it is implemented in the target environment.
BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation | Github Repo
" - - -gr.Interface(inference, - inputs, - outputs, - title=title, - description=description, - article=article, - examples=[['starrynight.jpeg',"Image Captioning","None","Nucleus sampling"]], - allow_flagging='never', - cache_examples=False).queue(concurrency_count=1).launch(show_error=True) diff --git a/spaces/doevent/blip/data/coco_karpathy_dataset.py b/spaces/doevent/blip/data/coco_karpathy_dataset.py deleted file mode 100644 index a34d29205f42aa09695b160ac9c91958ba041bb3..0000000000000000000000000000000000000000 --- a/spaces/doevent/blip/data/coco_karpathy_dataset.py +++ /dev/null @@ -1,126 +0,0 @@ -import os -import json - -from torch.utils.data import Dataset -from torchvision.datasets.utils import download_url - -from PIL import Image - -from data.utils import pre_caption - -class coco_karpathy_train(Dataset): - def __init__(self, transform, image_root, ann_root, max_words=30, prompt=''): - ''' - image_root (string): Root directory of images (e.g. coco/images/) - ann_root (string): directory to store the annotation file - ''' - url = 'https://storage.googleapis.com/sfr-vision-language-research/datasets/coco_karpathy_train.json' - filename = 'coco_karpathy_train.json' - - download_url(url,ann_root) - - self.annotation = json.load(open(os.path.join(ann_root,filename),'r')) - self.transform = transform - self.image_root = image_root - self.max_words = max_words - self.prompt = prompt - - self.img_ids = {} - n = 0 - for ann in self.annotation: - img_id = ann['image_id'] - if img_id not in self.img_ids.keys(): - self.img_ids[img_id] = n - n += 1 - - def __len__(self): - return len(self.annotation) - - def __getitem__(self, index): - - ann = self.annotation[index] - - image_path = os.path.join(self.image_root,ann['image']) - image = Image.open(image_path).convert('RGB') - image = self.transform(image) - - caption = self.prompt+pre_caption(ann['caption'], self.max_words) - - return image, caption, self.img_ids[ann['image_id']] - - -class coco_karpathy_caption_eval(Dataset): - def __init__(self, transform, image_root, ann_root, split): - ''' - image_root (string): Root directory of images (e.g. coco/images/) - ann_root (string): directory to store the annotation file - split (string): val or test - ''' - urls = {'val':'https://storage.googleapis.com/sfr-vision-language-research/datasets/coco_karpathy_val.json', - 'test':'https://storage.googleapis.com/sfr-vision-language-research/datasets/coco_karpathy_test.json'} - filenames = {'val':'coco_karpathy_val.json','test':'coco_karpathy_test.json'} - - download_url(urls[split],ann_root) - - self.annotation = json.load(open(os.path.join(ann_root,filenames[split]),'r')) - self.transform = transform - self.image_root = image_root - - def __len__(self): - return len(self.annotation) - - def __getitem__(self, index): - - ann = self.annotation[index] - - image_path = os.path.join(self.image_root,ann['image']) - image = Image.open(image_path).convert('RGB') - image = self.transform(image) - - img_id = ann['image'].split('/')[-1].strip('.jpg').split('_')[-1] - - return image, int(img_id) - - -class coco_karpathy_retrieval_eval(Dataset): - def __init__(self, transform, image_root, ann_root, split, max_words=30): - ''' - image_root (string): Root directory of images (e.g. 
coco/images/) - ann_root (string): directory to store the annotation file - split (string): val or test - ''' - urls = {'val':'https://storage.googleapis.com/sfr-vision-language-research/datasets/coco_karpathy_val.json', - 'test':'https://storage.googleapis.com/sfr-vision-language-research/datasets/coco_karpathy_test.json'} - filenames = {'val':'coco_karpathy_val.json','test':'coco_karpathy_test.json'} - - download_url(urls[split],ann_root) - - self.annotation = json.load(open(os.path.join(ann_root,filenames[split]),'r')) - self.transform = transform - self.image_root = image_root - - self.text = [] - self.image = [] - self.txt2img = {} - self.img2txt = {} - - txt_id = 0 - for img_id, ann in enumerate(self.annotation): - self.image.append(ann['image']) - self.img2txt[img_id] = [] - for i, caption in enumerate(ann['caption']): - self.text.append(pre_caption(caption,max_words)) - self.img2txt[img_id].append(txt_id) - self.txt2img[txt_id] = img_id - txt_id += 1 - - def __len__(self): - return len(self.annotation) - - def __getitem__(self, index): - - image_path = os.path.join(self.image_root, self.annotation[index]['image']) - image = Image.open(image_path).convert('RGB') - image = self.transform(image) - - return image, index \ No newline at end of file diff --git a/spaces/dolceschokolade/chatbot-mini/types/folder.ts b/spaces/dolceschokolade/chatbot-mini/types/folder.ts deleted file mode 100644 index 7160edea18576e227fadbd3e5d0dce455d3497e6..0000000000000000000000000000000000000000 --- a/spaces/dolceschokolade/chatbot-mini/types/folder.ts +++ /dev/null @@ -1,7 +0,0 @@ -export interface FolderInterface { - id: string; - name: string; - type: FolderType; -} - -export type FolderType = 'chat' | 'prompt'; diff --git a/spaces/dongyi/MMFS/data/super_dataset.py b/spaces/dongyi/MMFS/data/super_dataset.py deleted file mode 100644 index 42fd2b7c06546cae1c5db715acba034e766cf85b..0000000000000000000000000000000000000000 --- a/spaces/dongyi/MMFS/data/super_dataset.py +++ /dev/null @@ -1,321 +0,0 @@ -import copy -import torch.utils.data as data -from utils.data_utils import check_img_loaded, check_numpy_loaded - -from data.test_data import add_test_data, apply_test_transforms -from data.test_video_data import TestVideoData -from data.static_data import StaticData - -from multiprocessing import Pool -import sys - - -class DataBin(object): - def __init__(self, filegroups): - self.filegroups = filegroups - - -class SuperDataset(data.Dataset): - def __init__(self, config, shuffle=False, check_all_data=False, DDP_device=None): - self.config = config - - self.check_all_data = check_all_data - self.DDP_device = DDP_device - - self.data = {} # Will be dictionary. Keys are data names, e.g. paired_A, patch_A. Values are lists containing associated data. 
- self.transforms = {} - - if self.config['common']['phase'] == 'test': - if not self.config['testing']['test_video'] is None: - self.test_video_data = TestVideoData(self.config) - else: - add_test_data(self.data, self.transforms, self.config) - return - - self.static_data = StaticData(self.config, shuffle) - - - def convert_old_config_to_new(self): - data_types = self.config['dataset']['data_type'] - if len(data_types) == 1 and data_types[0] == 'custom': - # convert custom data configuration to new data configuration - old_dict = self.config['dataset']['custom_' + self.config['common']['phase'] + '_data'] - preprocess_list = self.config['dataset']['preprocess'] - new_datadict = self.config['dataset'][self.config['common']['phase'] + '_data'] = old_dict - for i, group in enumerate(new_datadict.values()): # examples: (0, group_1), (1, group_2) - group['paired'] = True - group['preprocess'] = preprocess_list - # custom data does not support patch so we skip patch logic. - else: - new_datadict = self.config['dataset'][self.config['common']['phase'] + '_data'] = {} - preprocess_list = self.config['dataset']['preprocess'] - new_datadict['paired_group'] = {} - new_datadict['paired_group']['paired'] = True - new_datadict['paired_group']['data_types'] = [] - new_datadict['paired_group']['data_names'] = [] - new_datadict['paired_group']['preprocess'] = preprocess_list - new_datadict['unpaired_group'] = {} - new_datadict['unpaired_group']['paired'] = False - new_datadict['unpaired_group']['data_types'] = [] - new_datadict['unpaired_group']['data_names'] = [] - new_datadict['unpaired_group']['preprocess'] = preprocess_list - - for i in range(len(self.config['dataset']['data_type'])): - data_type = self.config['dataset']['data_type'][i] - if data_type == 'paired' or data_type == 'paired_numpy': - if self.config['dataset']['paired_' + self.config['common']['phase'] + '_filelist'] != '': - new_datadict['paired_group']['file_list'] = self.config['dataset'][ - 'paired_' + self.config['common']['phase'] + '_filelist'] - elif self.config['dataset']['paired_' + self.config['common']['phase'] + 'A_folder'] != '' and \ - self.config['dataset']['paired_' + self.config['common']['phase'] + 'B_folder'] != '': - new_datadict['paired_group']['paired_A_folder'] = self.config['dataset']['paired_' + self.config['common']['phase'] + 'A_folder'] - new_datadict['paired_group']['paired_B_folder'] = self.config['dataset']['paired_' + self.config['common']['phase'] + 'B_folder'] - else: - new_datadict['paired_group']['dataroot'] = self.config['dataset']['dataroot'] - - new_datadict['paired_group']['data_names'].append('paired_A') - new_datadict['paired_group']['data_names'].append('paired_B') - if data_type == 'paired': - new_datadict['paired_group']['data_types'].append('image') - new_datadict['paired_group']['data_types'].append('image') - else: - new_datadict['paired_group']['data_types'].append('numpy') - new_datadict['paired_group']['data_types'].append('numpy') - - elif data_type == 'unpaired' or data_type == 'unpaired_numpy': - if self.config['dataset']['unpaired_' + self.config['common']['phase'] + 'A_filelist'] != ''\ - and self.config['dataset']['unpaired_' + self.config['common']['phase'] + 'B_filelist'] != '': - # combine those two filelists into one filelist - self.combine_two_filelists_into_one( - self.config['dataset']['unpaired_' + self.config['common']['phase'] + 'A_filelist'], - self.config['dataset']['unpaired_' + self.config['common']['phase'] + 'B_filelist'] - ) - 
new_datadict['unpaired_group']['file_list'] = './tmp_filelist.txt' - elif self.config['dataset']['unpaired_' + self.config['common']['phase'] + 'A_folder'] != '' and \ - self.config['dataset']['unpaired_' + self.config['common']['phase'] + 'B_folder'] != '': - new_datadict['unpaired_group']['unpaired_A_folder'] = self.config['dataset']['unpaired_' + self.config['common']['phase'] + 'A_folder'] - new_datadict['unpaired_group']['unpaired_B_folder'] = self.config['dataset']['unpaired_' + self.config['common']['phase'] + 'B_folder'] - else: - new_datadict['unpaired_group']['dataroot'] = self.config['dataset']['dataroot'] - - new_datadict['unpaired_group']['data_names'].append('unpaired_A') - new_datadict['unpaired_group']['data_names'].append('unpaired_B') - if data_type == 'unpaired': - new_datadict['unpaired_group']['data_types'].append('image') - new_datadict['unpaired_group']['data_types'].append('image') - else: - new_datadict['unpaired_group']['data_types'].append('numpy') - new_datadict['unpaired_group']['data_types'].append('numpy') - - elif data_type == 'landmark': - if self.config['dataset']['paired_' + self.config['common']['phase'] + '_filelist'] != '': - new_datadict['paired_group']['file_list'] = self.config['dataset'][ - 'paired_' + self.config['common']['phase'] + '_filelist'] - elif 'paired_' + self.config['common']['phase'] + 'A_lmk_folder' in self.config['dataset'] and \ - 'paired_' + self.config['common']['phase'] + 'B_lmk_folder' in self.config['dataset'] and \ - self.config['dataset']['paired_' + self.config['common']['phase'] + 'A_lmk_folder'] != '' and \ - self.config['dataset']['paired_' + self.config['common']['phase'] + 'B_lmk_folder'] != '': - new_datadict['paired_group']['lmk_A_folder'] = self.config['dataset']['paired_' + self.config['common']['phase'] + 'A_lmk_folder'] - new_datadict['paired_group']['lmk_B_folder'] = self.config['dataset']['paired_' + self.config['common']['phase'] + 'B_lmk_folder'] - else: - new_datadict['paired_group']['dataroot'] = self.config['dataset']['dataroot'] - - new_datadict['paired_group']['data_names'].append('lmk_A') - new_datadict['paired_group']['data_names'].append('lmk_B') - new_datadict['paired_group']['data_types'].append('landmark') - new_datadict['paired_group']['data_types'].append('landmark') - - # Handle patches. This needs to happen after all non-patch data are added first. 
- if 'patch' in self.config['dataset']['data_type']: - # determine if patch comes from paired or unpaired image - if 'paired_A' in new_datadict['paired_group']['data_names']: - new_datadict['paired_group']['data_types'].append('patch') - new_datadict['paired_group']['data_names'].append('patch_A') - new_datadict['paired_group']['data_types'].append('patch') - new_datadict['paired_group']['data_names'].append('patch_B') - - if 'patch_sources' not in new_datadict['paired_group']: - new_datadict['paired_group']['patch_sources'] = [] - new_datadict['paired_group']['patch_sources'].append('paired_A') - new_datadict['paired_group']['patch_sources'].append('paired_B') - else: - new_datadict['unpaired_group']['data_types'].append('patch') - new_datadict['unpaired_group']['data_names'].append('patch_A') - new_datadict['unpaired_group']['data_types'].append('patch') - new_datadict['unpaired_group']['data_names'].append('patch_B') - - if 'patch_sources' not in new_datadict['unpaired_group']: - new_datadict['unpaired_group']['patch_sources'] = [] - new_datadict['unpaired_group']['patch_sources'].append('unpaired_A') - new_datadict['unpaired_group']['patch_sources'].append('unpaired_B') - - if 'diff_patch' not in self.config['dataset']: - self.config['dataset']['diff_patch'] = False - - new_datadict = {key: value for key, value in new_datadict.items() if len(value['data_names']) > 0} - - print('-----------------------------------------------------------------------') - print("converted %s data configuration: " % self.config['common']['phase']) - for key, value in new_datadict.items(): - print(key + ': ', value) - print('-----------------------------------------------------------------------') - - return self.config - - - def combine_two_filelists_into_one(self, filelist1, filelist2): - tmp_file = open('./tmp_filelist.txt', 'w+') - f1 = open(filelist1, 'r') - f2 = open(filelist2, 'r') - f1_lines = f1.readlines() - f2_lines = f2.readlines() - min_index = min(len(f1_lines), len(f2_lines)) - for i in range(min_index): - tmp_file.write(f1_lines[i].strip() + ' ' + f2_lines[i].strip() + '\n') - if min_index == len(f1_lines): - for i in range(min_index, len(f2_lines)): - tmp_file.write('None ' + f2_lines[i].strip() + '\n') - else: - for i in range(min_index, len(f1_lines)): - tmp_file.write(f1_lines[i].strip() + ' None\n') - - tmp_file.close() - f1.close() - f2.close() - - - def __len__(self): - if self.config['common']['phase'] == 'test': - if self.config['testing']['test_video'] is not None: - return self.test_video_data.get_len() - else: - if len(self.data.keys()) == 0: - return 0 - else: - min_len = 999999 - for k, v in self.data.items(): - length = len(v) - if length < min_len: - min_len = length - return min_len - else: - return self.static_data.get_len() - - - - def get_item_logic(self, index): - return_dict = {} - - if self.config['common']['phase'] == 'test': - if not self.config['testing']['test_video'] is None: - return self.test_video_data.get_item() - else: - apply_test_transforms(index, self.data, self.transforms, return_dict) - return return_dict - - return_dict = self.static_data.get_item(index) - - return return_dict - - - def __getitem__(self, index): - if self.config['dataset']['accept_data_error']: - while True: - try: - return self.get_item_logic(index) - except Exception as e: - print("Exception encountered in super_dataset's getitem function: ", e) - index = (index + 1) % self.__len__() - else: - return self.get_item_logic(index) - - - def split_data(self, value_mode, value, 
mode='split'): - new_dataset = copy.deepcopy(self) - ret1, new_dataset.static_data = self.split_data_helper(self.static_data, new_dataset.static_data, value_mode, value, mode=mode) - if ret1 is not None: - self.static_data = ret1 - return self, new_dataset - - - def split_data_helper(self, dataset, new_dataset, value_mode, value, mode='split'): - for i in range(len(dataset.file_groups)): - max_split_index = 0 - for k in dataset.file_groups[i].keys(): - length = len(dataset.file_groups[i][k]) - if value_mode == 'count': - split_index = min(length, value) - else: - split_index = int((1 - value) * length) - max_split_index = max(max_split_index, split_index) - new_dataset.file_groups[i][k] = new_dataset.file_groups[i][k][split_index:] - if mode == 'split': - dataset.file_groups[i][k] = dataset.file_groups[i][k][:split_index] - new_dataset.len_of_groups[i] -= max_split_index - if mode == 'split': - dataset.len_of_groups[i] = max_split_index - if mode == 'split': - return dataset, new_dataset - else: - return None, new_dataset - - - def check_data_helper(self, databin): - all_pass = True - for group in databin.filegroups: - for data_name, data_list in group.items(): - for data in data_list: - if '.npy' in data: # case: numpy array or landmark - all_pass = all_pass and check_numpy_loaded(data) - else: # case: image - all_pass = all_pass and check_img_loaded(data) - return all_pass - - - def check_data(self): - if self.DDP_device is None or self.DDP_device == 0: - print("-----------------------Checking all data-------------------------------") - data_ok = True - if self.config['dataset']['n_threads'] == 0: - data_ok = data_ok and self.check_data_helper(self.static_data) - else: - # start n_threads number of workers to perform data checking - with Pool(processes=self.config['dataset']['n_threads']) as pool: - checks = pool.map(self.check_data_helper, - self.split_data_into_bins(self.config['dataset']['n_threads'])) - for check in checks: - data_ok = data_ok and check - if data_ok: - print("---------------------all data passed check.-----------------------") - else: - print("---------------------The above data have failed in data checking. 
" - "Please fix first.---------------------------") - sys.exit() - - - def split_data_into_bins(self, num_bins): - bins = [] - for i in range(num_bins): - bins.append(DataBin(filegroups=[])) - - # handle static data - bins = self.split_data_into_bins_helper(bins, self.static_data) - return bins - - - def split_data_into_bins_helper(self, bins, dataset): - num_bins = len(bins) - for bin in bins: - for group_idx in range(len(dataset.file_groups)): - bin.filegroups.append({}) - - for group_idx in range(len(dataset.file_groups)): - file_group = dataset.file_groups[group_idx] - for data_name, data_list in file_group.items(): - num_items_in_bin = len(data_list) // num_bins - for data_index in range(len(data_list)): - which_bin = min(data_index // num_items_in_bin, num_bins - 1) - if data_name not in bins[which_bin].filegroups[group_idx]: - bins[which_bin].filegroups[group_idx][data_name] = [] - bins[which_bin].filegroups[group_idx][data_name].append(data_list[data_index]) - return bins diff --git a/spaces/dpc/mmstts/app.py b/spaces/dpc/mmstts/app.py deleted file mode 100644 index bedf4e6149a86e8ca63b186cabf47936d5abb958..0000000000000000000000000000000000000000 --- a/spaces/dpc/mmstts/app.py +++ /dev/null @@ -1,214 +0,0 @@ -# Based on example code of https://huggingface.co/facebook/m2m100_1.2B -# and https://github.com/wannaphong/ttsmms -# See also https://github.com/facebookresearch/fairseq/blob/main/examples/mms/README.md - -import gradio as gr -import os -import re -import soundfile as sf - -import json -import nltk -from underthesea import sent_tokenize as vie_sent_tokenize # Vietnamese NLP toolkit -from underthesea import text_normalize as vie_text_normalize -from nltk import sent_tokenize as nltk_sent_tokenize -from ttsmms import download -from ttsmms import TTS - -from collections import OrderedDict -import uuid -import datetime -import shutil -from num2words import num2words - - -this_description = """Text To Speech for [1000+ languages](https://dl.fbaipublicfiles.com/mms/tts/all-tts-languages.html) - using [fairseq MMS TTS](https://github.com/facebookresearch/fairseq/blob/main/examples/mms/README.md) and [ttsmms](https://github.com/wannaphong/ttsmms) wrapper. -Please note that for some languages, it may not pronounce all words correctly (yet). 
-""" - -nltk.download("punkt") - -# Pre-download some languages -tts_models = {} -eng_path = download("eng", "./data") -tts_models["eng"] = eng_path -vie_path = download("vie", "./data") -tts_models["vie"] = vie_path -mya_path = download("mya", "./data") -tts_models["mya"] = mya_path - -lang_codes = OrderedDict() - -language_names = list(lang_codes.keys()) -with open("lang_code.txt", "r") as file: - for line in file: - line = line.strip() - if line.startswith("----"): - continue - iso, lang = line.split("\t", 1) - lang_codes[lang + " (" + iso + ")"] = iso - -language_names = list(lang_codes.keys()) - -# Load num2words_lang_map -with open("num2words_lang_map.json") as f: - num2words_lang_map = json.load(f, object_pairs_hook=OrderedDict) - - -def convert_numbers_to_words_num2words(text, lang): - # Find all numbers in the text using regex - numbers = re.findall(r"\d+", text) - # Sort numbers in descending order of length - sorted_numbers = sorted(numbers, key=len, reverse=True) - print(sorted_numbers) - - # Replace numbers with their word equivalents - for number in sorted_numbers: - number_word = num2words(int(number), lang=num2words_lang_map[lang][0]) - text = text.replace(number, number_word) - - return text - - -def convert_mya_numbers_to_words(text): - from mm_num2word import mm_num2word, extract_num - - numbers = extract_num(text) - sorted_numbers = sorted(numbers, key=len, reverse=True) - print(sorted_numbers) - - for n in sorted_numbers: - text = text.replace(n, mm_num2word(n)) - return text - - -def prepare_sentences(text, lang="mya"): - sentences = [] - # pre-process the text for some languages - if lang.lower() == "mya": - text = convert_mya_numbers_to_words(text) - text = text.replace("\u104A", ",").replace("\u104B", ".") - - if lang in num2words_lang_map: - print("num2words supports this lang", lang) - text = convert_numbers_to_words_num2words(text, lang) - print("Processed text", text) - - # Not sure why this can fix unclear pronunciation for the first word of vie - text = text.lower() - - paragraphs = [paragraph for paragraph in text.split("\n") if paragraph.strip()] - - if lang.lower() == "vie": - for paragraph in paragraphs: - sentences_raw = vie_sent_tokenize(paragraph) - sentences.extend( - [ - vie_text_normalize(sentence) - for sentence in sentences_raw - if sentence.strip() - ] - ) - else: - sentences = [ - sentence - for paragraph in paragraphs - for sentence in nltk_sent_tokenize(paragraph) - if sentence.strip() - ] - return sentences - - -def list_dir(lang): - # Get the current directory - current_dir = os.getcwd() - print(current_dir) - - # List all files in the current directory - files = os.listdir(current_dir) - - # Filter the list to include only WAV files - wav_files = [file for file in files if file.endswith(".wav")] - print("Total wav files:", len(wav_files)) - - # Print the last WAV file - sorted_list = sorted(wav_files) - print(lang, sorted_list[-1]) - - -def combine_wav(source_dir, stamp, lang): - # Get a list of all WAV files in the folder - wav_files = [file for file in os.listdir(source_dir) if file.endswith(".wav")] - - # Sort the files alphabetically to ensure the correct order of combination - wav_files.sort() - - # Combine the WAV files - combined_data = [] - for file in wav_files: - file_path = os.path.join(source_dir, file) - data, sr = sf.read(file_path) - combined_data.extend(data) - - # Save the combined audio to a new WAV file - combined_file_path = f"{stamp}_{lang}.wav" - sf.write(combined_file_path, combined_data, sr) - - 
shutil.rmtree(source_dir) - list_dir(lang) - - # Display the combined audio in the Hugging Face Space app - return combined_file_path - - -def mms_tts(Input_Text, lang_name="Burmese (mya)"): - # lang_code = lang_codes[lang_name] - try: - lang_code = lang_codes[lang_name] - except KeyError: - lang_code = "mya" - - user_model = download(lang_code, "./data") - tts = TTS(user_model) - - sentences = prepare_sentences(Input_Text, lang_code) - - # output_dir = f"out_{lang_code}" - current_datetime = datetime.datetime.now() - timestamp = current_datetime.strftime("%Y%m%d%H%M%S%f") - - user_dir = f"u_{timestamp}" - if os.path.exists(user_dir): - session_id = str(uuid.uuid4()) # Generate a random session ID - user_dir = f"u_{session_id}_{timestamp}" - os.makedirs(user_dir, exist_ok=True) - print("New user directory", user_dir) - - for i, sentence in enumerate(sentences): - tts.synthesis(sentence, wav_path=f"{user_dir}/s_{str(i).zfill(10)}.wav") - combined_file_path = combine_wav(user_dir, timestamp, lang_code) - return combined_file_path - - -# common_languages = ["eng", "mya", "vie"] # List of common language codes -iface = gr.Interface( - fn=mms_tts, - title="Massively Multilingual Speech (MMS) - Text To Speech", - description=this_description, - inputs=[ - gr.Textbox(lines=5, placeholder="Enter text (unlimited sentences)", label="Input text (unlimited sentences)"), - gr.Dropdown( - choices=language_names, - label="Select language 1,000+", - value="Burmese (mya)", - ), - ], - outputs="audio", -) -# outputs=[ -# "audio", -# gr.File(label="Download", type="file", download_to="done.wav") -# ]) - - -iface.launch() diff --git a/spaces/dylanebert/igf/README.md b/spaces/dylanebert/igf/README.md deleted file mode 100644 index 1f7e841e1af125ea53bd58e76b68f67e52d5ba20..0000000000000000000000000000000000000000 --- a/spaces/dylanebert/igf/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: IGF -emoji: 🏢 -colorFrom: gray -colorTo: indigo -sdk: docker -pinned: false -license: mit -app_port: 3000 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ehristoforu/Testbot/app.py b/spaces/ehristoforu/Testbot/app.py deleted file mode 100644 index a84aaa79a85305d63e7e17b07d60052520e4d63a..0000000000000000000000000000000000000000 --- a/spaces/ehristoforu/Testbot/app.py +++ /dev/null @@ -1,51 +0,0 @@ -import asyncio -import os -import threading -from threading import Event -from typing import Optional - -import discord -import gradio as gr -from discord import Permissions -from discord.ext import commands -from discord.utils import oauth_url - -import gradio_client as grc -from gradio_client.utils import QueueError - -intents = discord.Intents.default() -intents.members = True - -bot = commands.Bot(command_prefix='/', intents=intents) - -@bot.event -async def on_ready(): - print(f'Logged in as {bot.user.name}') - -@bot.command() -async def ban(ctx, member: discord.Member, reason=None): - await member.ban(reason=reason) - await ctx.send(f'{member.mention} был забанен. Причина: {reason}') - -@bot.command() -async def kick(ctx, member: discord.Member, reason=None): - await member.kick(reason=reason) - await ctx.send(f'{member.mention} был кикнут. Причина: {reason}') - -@bot.command() -async def warn(ctx, member: discord.Member, reason=None): - await ctx.send(f'{member.mention} получил предупреждение. 
Причина: {reason}') - -@bot.command() -async def clear(ctx, amount=5): - await ctx.channel.purge(limit=amount) - -bot.run('MTEzNTU5NzMwMTIxNDIyNDQ2NA.GEKXfh.Yua9UadqQeE8BYNI0aaqI0AmI3pC_GMeI7CiJ0') - -with gr.Blocks() as demo: - gr.Markdown( - f""" - # Discord bot of https://freddyaboulton-gpt-35-turbo.hf.space - {welcome_message} - """ - ) \ No newline at end of file diff --git a/spaces/enzostvs/hub-api-playground/utils/axios.ts b/spaces/enzostvs/hub-api-playground/utils/axios.ts deleted file mode 100644 index fef669d5c25543782a05df3b69be3ea6e64352a2..0000000000000000000000000000000000000000 --- a/spaces/enzostvs/hub-api-playground/utils/axios.ts +++ /dev/null @@ -1,10 +0,0 @@ -import redaxios from 'redaxios'; - -const axios = redaxios.create({ - baseURL: process.env.NEXT_PUBLIC_APP_APIURL, - headers: { - 'Content-Type': 'application/json', - }, -}); - -export default axios \ No newline at end of file diff --git a/spaces/enzostvs/stable-diffusion-tpu/components/main/settings/index.tsx b/spaces/enzostvs/stable-diffusion-tpu/components/main/settings/index.tsx deleted file mode 100644 index 31441e4dfb830744e56e2d804dead2d40ec6f28f..0000000000000000000000000000000000000000 --- a/spaces/enzostvs/stable-diffusion-tpu/components/main/settings/index.tsx +++ /dev/null @@ -1,43 +0,0 @@ -import classNames from "classnames"; - -interface Props { - style: string; - open: boolean; - setStyle: (style: string) => void; - list_styles: { name: string; negative_prompt: string; prompt: string }[]; -} -export const Settings: React.FCIf you are interested in creating electronic projects with Arduino boards, you will need a software tool to write code and upload it to the board. This software tool is called Arduino IDE, which stands for Integrated Development Environment. In this article, we will explain what Arduino IDE is, why you need it, how to download and install it on different platforms, and how to use it to write and upload sketches.
Arduino IDE is an open-source software that allows you to write code in a simplified version of C/C++ language and upload it to any Arduino board. It also provides a text editor, a message area, a text console, a toolbar, a series of menus, and a built-in library manager. You can use Arduino IDE to create sketches, which are programs that run on the Arduino board.
-Some of the main features and benefits of Arduino IDE are:
-Although Arduino IDE is the most popular software for Arduino programming, there are some alternatives that offer different features and advantages. Some of the most common alternatives are:
-Name | Description | Pros | Cons |
---|---|---|---|
Atom.io + Platformio | A modern text editor with a plugin that supports Arduino development. | - Lightweight and fast - Cross-platform - Autocompletion and code navigation - Live debugger | - Requires installation of both Atom.io and Platformio - Difficult configuration for beginners |
Eclipse for Arduino | A powerful integrated development environment with an extension for Arduino development. | - Advanced features such as refactoring, code analysis, testing, etc. - Supports multiple languages - Customizable interface | - Heavy and slow - Complex setup - Steep learning curve |
Visual Studio with Arduino | A professional integrated development environment with an extension for Arduino development. | - Robust features such as debugging, testing, version control, etc. - Supports multiple languages - Intuitive interface | - Expensive for commercial use - Requires installation of both Visual Studio and Visual Micro - Windows only |
Arduino Web Editor | An online web-based editor that allows you to write and upload sketches from any browser. | - No installation required - Cloud storage - Accessible from any device - Integrated with Arduino Cloud | - Requires internet connection - Limited features compared to offline editors - Requires installation of Arduino Create Agent |
The process of downloading and installing Arduino IDE varies depending on the platform you are using. Here are the steps for the most common platforms:
To download and install Arduino IDE on Windows, follow these steps:
To download and install Arduino IDE on Mac OS, follow these steps:
-To download and install Arduino IDE on Linux, follow these steps:
-Once you have installed Arduino IDE on your computer, you can use it to write and upload sketches to your Arduino board. A sketch is a program that runs on the Arduino board and controls its behavior. Here are some basic steps to use Arduino IDE:
-The Arduino IDE interface consists of several components that help you write, edit, compile, and upload your code. These components are:
-If you are new to Arduino programming or want to learn more about a specific topic, you can use the built-in examples and libraries that come with Arduino IDE. These are ready-made sketches that demonstrate how to use various features of Arduino such as sensors, actuators, communication, display, etc. You can access them from the File menu -> Examples -> Built-in Examples or File menu -> Examples -> Library Examples. You can also find more examples online from other sources or create your own.
-If you encounter errors or problems while writing or uploading your sketch, Arduino IDE has debugging and troubleshooting tools to help you track them down, most notably the Serial Monitor, which displays any text your sketch prints over the serial connection (for example with Serial.print()), and the message console, which reports detailed compiler and upload errors.
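If you prefer to watch that same serial output outside the IDE, a small script can read it straight from the board's port. The snippet below is only a sketch using the third-party pyserial package; the port name (/dev/ttyACM0) and the 9600 baud rate are assumptions that must match your board and the Serial.begin() call in your sketch.

```python
# Sketch: read the serial output that the Arduino IDE Serial Monitor would show.
# Requires the third-party `pyserial` package (import name: serial).
import serial

with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as port:   # assumed port and baud rate
    for _ in range(20):                                        # read a handful of lines, then stop
        line = port.readline().decode(errors="replace").strip()
        if line:
            print(line)
```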
-Arduino IDE is a powerful and user-friendly application for writing and uploading sketches to your Arduino board. Its features make Arduino programming easy and fun, and its alternatives offer different trade-offs in features and workflow. You can download and install Arduino IDE on Windows, Mac OS, and Linux, and then use its examples, libraries, and debugging tools to write and upload your own sketches.
-A: Arduino IDE can run on any computer that meets these minimum requirements:
-A: You can update Arduino IDE to the latest version by following these steps:
-A: You can add more libraries to Arduino IDE by using the built-in library manager or by manually installing them. To use the library manager, follow these steps:
-To manually install a library, follow these steps:
-A: You can change the language of Arduino IDE by following these steps:
-A: You can contact Arduino support by using one of these methods:
-If you are a fan of buses, or if you are interested in 3D modeling, you might have heard of bus 3D. But what exactly is bus 3D, and why should you care about it? In this article, we will answer these questions and more. We will explain what bus 3D is, how it works, what are its benefits, what are the different types of bus 3D models, how to create your own bus 3D model, and some tips and tricks for bus 3D modeling. By the end of this article, you will have a better understanding of bus 3D and hopefully, a new appreciation for this amazing technology.
Buses are one of the most common and popular modes of transportation in the world. They can carry many passengers, they are convenient, they are affordable, and they are eco-friendly. Buses are also very diverse and versatile. They can come in different shapes, sizes, colors, designs, and functions. Some buses are used for public transportation, some for tourism, some for school, some for entertainment, and some for special purposes.
-But have you ever wondered how buses are designed and created? How do engineers and designers come up with new ideas and concepts for buses? How do they test and improve their designs before they become reality? This is where bus 3D comes in.
-Bus 3D is the process of creating a digital representation of a bus using three-dimensional computer graphics. Bus 3D models are virtual objects that can be manipulated, modified, animated, rendered, and displayed on a screen or a device. Bus 3D models can be used for various purposes, such as education, entertainment, simulation, visualization, prototyping, marketing, and more.
-Bus 3D works by using mathematical equations and algorithms to generate geometric shapes and surfaces that form the structure and appearance of a bus. These shapes and surfaces are called polygons or meshes. The more polygons or meshes a bus 3D model has, the more detailed and realistic it looks. However, more polygons or meshes also mean more computational power and memory required to process and display the model.
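To make that trade-off concrete, the short sketch below builds a crude box-shaped "bus" and subdivides it a few times. It assumes the open-source trimesh Python library, and the dimensions are purely illustrative; each subdivision quadruples the face count, which is exactly the detail-versus-cost trade-off described above.

```python
# Sketch: how polygon count grows as a mesh is refined.
# Assumes the third-party `trimesh` library; dimensions are illustrative only.
import trimesh

bus = trimesh.creation.box(extents=[12.0, 2.5, 3.0])   # a crude box-shaped "bus"
print("base mesh:", len(bus.faces), "faces")

detailed = bus
for level in range(1, 4):
    detailed = detailed.subdivide()                     # each triangle splits into four
    print(f"after {level} subdivision(s):", len(detailed.faces), "faces")
```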
-Bus 3D models can also have textures, colors, materials, lighting effects, shadows, reflections, transparency, and other properties that enhance their visual quality and realism. These properties are called shaders or materials. Shaders or materials can be applied to the polygons or meshes of a bus 3D model to give them different characteristics and behaviors.
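For example, in Blender's Python API a material is created once and then attached to a mesh. The snippet below is a minimal sketch meant to be run inside Blender; the material name and colour are arbitrary assumptions, and full shading in Cycles or Eevee would normally be done through the material's node tree.

```python
# Sketch: assign a simple coloured material to the active object in Blender.
import bpy

obj = bpy.context.active_object                   # e.g. a bus body selected in the scene
paint = bpy.data.materials.new(name="BusPaint")   # material name is arbitrary
paint.diffuse_color = (0.9, 0.1, 0.1, 1.0)        # RGBA: a simple red viewport colour
obj.data.materials.append(paint)                  # attach the material to the mesh
```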
Bus 3D models can also have animations or movements that make them more dynamic and interactive. Animations or movements can be created by using keyframes or bones. Keyframes are points in time that define the position, orientation, scale, or shape of a polygon or mesh. Bones are skeletal structures that control the deformation of a polygon or mesh. By using keyframes or bones, a bus 3D model can be made to move in different ways.
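As a small illustration of keyframes, the sketch below uses Blender's Python API (bpy) to key a bus object sliding along one axis. It assumes it runs inside Blender and that an object named "BusBody" already exists; the object name and frame numbers are arbitrary placeholders.
# run inside Blender; "BusBody" is a hypothetical object name
import bpy

obj = bpy.data.objects["BusBody"]                      # assumed to exist in the scene
obj.location = (0.0, 0.0, 0.0)
obj.keyframe_insert(data_path="location", frame=1)     # key the starting position
obj.location = (10.0, 0.0, 0.0)                        # slide 10 units along the X axis
obj.keyframe_insert(data_path="location", frame=60)    # key the end position; Blender interpolates between the two keys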
-Bus 3D has many benefits for both creators and consumers. Some of the benefits are:
-There are many types of bus 3D models available for different purposes and preferences. Some of the most common types are realistic bus 3D models and low-poly bus 3D models.
-Realistic bus 3D models are bus 3D models that aim to mimic the appearance and behavior of real buses as closely as possible. Realistic bus 3D models have high levels of detail, accuracy, and complexity. They usually have many polygons or meshes, shaders or materials, and animations or movements. Realistic bus 3D models are often used for simulation, visualization, education, and entertainment purposes.
-Some examples of realistic bus 3D models are:
-Some use cases of realistic bus 3D models are:
-Low-poly bus 3D models are bus 3D models that have a simple and stylized appearance. Low-poly bus 3D models have low levels of detail, accuracy, and complexity. They usually have few polygons or meshes, basic shaders or materials, and minimal animations or movements. Low-poly bus 3D models are often used for gaming, art, or fun purposes.
-Some examples of low-poly bus 3D models are:
-Some use cases of low-poly bus 3D models are:
-If you are interested in creating your own bus 3D model, you will need some tools and software to help you. There are many tools and software available for bus 3D modeling, but here are some of the most popular and user-friendly ones:
-Sketchfab is a platform where you can upload, view, share, and download 3D models. Sketchfab also has a 3D editor where you can create, edit, and publish your own 3D models. Sketchfab supports various formats, such as OBJ, STL, FBX, GLTF, etc. Sketchfab also has a library of thousands of free and premium 3D models that you can use or modify for your own projects. Sketchfab is free to use for personal and non-commercial purposes, but you can also upgrade to a paid plan for more features and benefits.
-Blender is a free and open-source 3D creation suite that can be used for modeling, sculpting, animating, rendering, compositing, editing, and more. Blender has a powerful and flexible interface that allows you to customize your workflow and preferences. Blender also has a large and active community that provides tutorials, resources, support, and feedback. Blender is compatible with Windows, Mac OS X, Linux, and other operating systems.
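As a rough illustration of how such a tool can be scripted, the snippet below blocks out a very simple low-poly bus from primitives with Blender's Python API. It is only a sketch: the dimensions, positions, and rotation values are placeholders, not a real bus design.
# run inside Blender; all dimensions are placeholder values
import bpy

bpy.ops.mesh.primitive_cube_add(size=1, location=(0, 0, 1))    # the body starts as a cube
body = bpy.context.active_object
body.scale = (4.0, 1.2, 1.0)                                   # stretch it into a bus-like box

for x in (-1.5, 1.5):                                          # four simple wheels
    for y in (-0.8, 0.8):
        bpy.ops.mesh.primitive_cylinder_add(
            radius=0.4, depth=0.3, location=(x, y, 0.4),
            rotation=(1.5708, 0.0, 0.0),                       # roughly 90 degrees around X so the wheel stands upright
        )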
-Free3D is a website where you can find and download free 3D models for various categories, such as animals, architecture, vehicles, etc. Free3D also has a section for bus 3D models where you can browse and download different types of buses. Free3D also allows you to upload your own 3D models and share them with other users. Free3D is easy to use and requires no registration or login.
-CGTrader is a marketplace where you can buy and sell 3D models for various purposes, such as gaming, animation, VR/AR, etc. CGTrader also has a section for bus 3D models where you can find and purchase high-quality buses. CGTrader also offers free 3D models that you can download and use for your projects. CGTrader also has a forum where you can interact with other 3D artists and enthusiasts.
-If you want to create your own bus 3D model, here are some tips and tricks that might help you:
-Bus 3D is a fascinating and rewarding field that combines art, technology, and transportation. Bus 3D models are digital representations of buses that can be used for various purposes, such as education, entertainment, simulation, visualization, prototyping, marketing, and more. Bus 3D models can be realistic or low-poly, depending on the level of detail, accuracy, and complexity. Bus 3D models can be created using various tools and software, such as Sketchfab, Blender, Free3D, or CGTrader. Bus 3D modeling requires some research, planning, design, modeling, texturing, lighting, rendering, testing, and exporting skills.
-If you are interested in bus 3D modeling, we hope this article has given you some useful information and inspiration. Bus 3D modeling can be fun, challenging, and rewarding. You can express your creativity and imagination, experiment with different designs and concepts, showcase your work to a wider audience, collaborate with other creators and share feedback, and enjoy realistic and immersive experiences with buses. You can also learn more about buses and their features, functions, rules, and safety measures.
-So what are you waiting for? Start creating your own bus 3D model today!
-Here are some frequently asked questions about bus 3D modeling:
-3D modeling is the process of creating a 3D model using polygons or meshes. 3D rendering is the process of generating an image or a video from a 3D model using shaders or materials and lighting effects.
-Some of the best sources for bus 3D models are Sketchfab, Free3D, CGTrader, TurboSquid, and 3D Warehouse. These websites have thousands of free and premium bus 3D models that you can download or purchase for your projects.
-The time it takes to create a bus 3D model depends on many factors, such as the type of bus, the level of detail, the tool or software used, the skill level of the creator, etc. It can take anywhere from a few hours to a few days or weeks to create a bus 3D model.
-The cost of creating a bus 3D model also depends on many factors, such as the type of bus, the level of detail, the tool or software used, the skill level of the creator, etc. It can cost anywhere from a few dollars to a few hundred or thousand dollars to create a bus 3D model. Some tools or software are free or cheap to use, while others are expensive or require a subscription. Some creators are amateurs or hobbyists, while others are professionals or experts.
-The best way to learn bus 3D modeling is by practicing and experimenting with different tools and software. You can also watch tutorials, read guides, join courses, or follow blogs that teach you the basics and advanced techniques of bus 3D modeling. You can also seek feedback, advice, or help from other bus 3D modelers on forums, communities, or social media.
-If you are looking for a fast-paced, action-packed, and competitive multiplayer shooter game, then you should check out Carnage Wars. Carnage Wars is a game developed by Zic Zac, a studio that specializes in creating fun and addictive games for Android devices. In this game, you can battle with other players in multiplayer mode or play with challenging AI players on single player mode. You can earn XP points and challenge players to get to the top and win. You can also choose from a variety of weapons, vehicles, gadgets, and power-ups to customize your gameplay experience. In this article, we will show you how to download Carnage Wars for free on your PC using an Android emulator. We will also give you some features, tips, and tricks for playing Carnage Wars.
-DOWNLOAD »»» https://urllie.com/2uNBD5
One of the best ways to play Carnage Wars on your PC is to use an Android emulator. An Android emulator is a software that allows you to run Android apps and games on your PC. There are many Android emulators available online, but we recommend using LDPlayer. LDPlayer is a free emulator that has a high compatibility, performance, and stability. It also has a user-friendly interface and supports keyboard and mouse controls. Here are the steps to download Carnage Wars for free on your PC using LDPlayer:
-Carnage Wars is a game that offers a lot of features that make it fun and exciting to play. Here are some of the features that you can enjoy in Carnage Wars:
-Carnage Wars is a game that requires skill, strategy, and quick thinking. Here are some tips and tricks that can help you improve your gameplay and win more matches:
-Carnage Wars is a game that offers a unique and competitive multiplayer shooter experience. You can download it for free on your PC using an Android emulator like LDPlayer. You can also enjoy its features like 11 weapons, 4 vehicles, unique gadgets and power-ups. You can also improve your gameplay by following some tips and tricks like choosing the right weapon, using cover and movement, using vehicles, and using gadgets and power-ups. If you are looking for a fun and addictive game that will challenge your skills and strategy, then you should try Carnage Wars today.
-Here are some frequently asked questions about Carnage Wars:
-free download carnage wars apk
-free download carnage wars game for android
-free download carnage wars pc
-free download carnage wars steam
-free download carnage wars ios
-free download carnage wars online
-free download carnage wars multiplayer
-free download carnage wars mod apk
-free download carnage wars hack
-free download carnage wars cheats
-free download carnage offering game
-free download carnage offering pc
-free download carnage offering steam
-free download carnage offering android
-free download carnage offering ios
-free download carnage offering online
-free download carnage offering multiplayer
-free download carnage offering mod apk
-free download carnage offering hack
-free download carnage offering cheats
-how to play carnage wars for free
-how to play carnage offering for free
-how to install carnage wars on pc
-how to install carnage offering on pc
-how to get carnage wars for free on steam
-how to get carnage offering for free on steam
-best tips and tricks for carnage wars game
-best tips and tricks for carnage offering game
-best weapons and gadgets in carnage wars game
-best weapons and gadgets in carnage offering game
-review of carnage wars game by zic zac studio
-review of carnage offering game by futurtech studio
-comparison of carnage wars and carnage offering games
-similar games to carnage wars and carnage offering
-top 10 fps games like carnage wars and carnage offering
-is carnage wars game worth downloading?
-is carnage offering game worth downloading?
-is carnage wars game safe to download?
-is carnage offering game safe to download?
-is carnage wars game compatible with my device?
-is carnage offering game compatible with my device?
-what are the system requirements for carnage wars game?
-what are the system requirements for carnage offering game?
-what are the features and benefits of carnage wars game?
-what are the features and benefits of carnage offering game?
OS | Windows 7/8/10 (64-bit) |
CPU | Intel or AMD processor with virtualization technology enabled |
RAM | At least 4 GB |
Disk Space | At least 5 GB of free space |
Graphics | NVIDIA GeForce GTX 660 or higher / AMD Radeon HD 7870 or higher |
Internet | Broadband connection |
Do you love bike racing games? Do you want to experience the thrill of performing amazing stunts in midair with your motorcycle? If yes, then you should try Bike Racing 3D APK, one of the best bike motocross (BMX) racing games for Android devices. In this article, we will tell you everything you need to know about this game, including its features, how to download and install it, and some tips and tricks to help you master it. Let's get started!
-Bike Racing 3D APK is a game developed by Words Mobile, a popular developer of casual and arcade games for Android. It is a game that lets you race, jump, and crash your way through a variety of treacherous tracks while enjoying the realistic bike physics and fast-paced gameplay. You can choose from five different bikes, each with its own characteristics and performance. You can also upgrade your bike and unlock new tracks as you progress through the career mode. The game has over 100 million downloads on Google Play and has received positive reviews from users and critics alike.
-Download File ---> https://urllie.com/2uNIek
Bike Racing 3D APK is a game that will keep you hooked for hours with its addictive and challenging gameplay. Here are some reasons why you should play this game:
-Bike Racing 3D APK has many features that make it stand out from other bike racing games. Here are some of them:
-The game has 60 tracks in Career mode, ranging from easy trials to very technical ones. You can challenge yourself with different terrains, obstacles, ramps, loops, and more. You can also earn stars and coins by completing each track. The more stars and coins you get, the more tracks and bikes you can unlock.
-The game has realistic 3D physics and graphics that simulate the behavior and appearance of a real bike. You can see how your bike reacts to different forces, such as gravity, friction, inertia, and torque. You can also enjoy the stunning scenery and details of each track, such as the trees, rocks, clouds, shadows, and more.
The game has five different bikes to choose from, each with its own characteristics and performance. You can customize your bike with different colors, wheels, and stickers. You can also upgrade your bike's engine, speed, brake, and suspension to improve its handling and speed. The bikes are:
-If you want to play Bike Racing 3D APK on your Android device, you will need to download and install the APK file from a trusted source. Here are the steps to do so:
-You can download the APK file from various websites that offer free and safe APK downloads, such as [APKPure], [APKMirror], or [Uptodown]. You can also scan the QR code below to download the APK file directly to your device.
-bike racing 3d game download for android
-bike racing 3d mod apk unlimited money
-bike racing 3d apk free download
-bike racing 3d game online play
-bike racing 3d hack apk download
-bike racing 3d game install
-bike racing 3d apk old version
-bike racing 3d game video
-bike racing 3d apk pure
-bike racing 3d game features
-bike racing 3d mod apk revdl
-bike racing 3d apk latest version
-bike racing 3d game review
-bike racing 3d hack apk android 1
-bike racing 3d apk uptodown
-bike racing 3d game tips and tricks
-bike racing 3d mod apk rexdl
-bike racing 3d apk mirror
-bike racing 3d game cheats
-bike racing 3d hack apk no root
-bike racing 3d apk mob.org
-bike racing 3d game modes
-bike racing 3d mod apk happymod
-bike racing 3d apk obb
-bike racing 3d game controls
-bike racing 3d hack apk unlimited coins and gems
-bike racing 3d apk appvn
-bike racing 3d game graphics
-bike racing 3d mod apk android oyun club
-bike racing 3d apk data
-bike racing 3d game soundtrack
-bike racing 3d hack apk ios
-bike racing 3d apk indir
-bike racing 3d game rating
-bike racing 3d mod apk an1.com
-bike racing 3d apk file download
-bike racing 3d game update
-bike racing 3d hack apk offline
-bike racing 3d apk apkpure.com
-bike racing 3d game requirements
-bike racing 3d mod apk all bikes unlocked
-bike racing 3d apk for pc
-bike racing 3d game size
-bike racing 3d hack apk online generator
-bike racing 3d apk mod menu
-bike racing 3d game developer name
Before you can install the APK file, you will need to enable unknown sources on your device. This will allow you to install apps from sources other than Google Play. To do this, follow these steps:
-Once you have downloaded the APK file and enabled unknown sources, you can install the APK file by following these steps:
-Bike Racing 3D APK is a game that requires skill and strategy to master. Here are some tips and tricks that will help you improve your performance and enjoy the game more:
-The game has simple controls that consist of two buttons: accelerate and brake. You can also tilt your device left or right to balance your bike in midair. However, the game also has realistic physics that affect how your bike moves and reacts to different forces. You will need to master both the controls and the physics to navigate through the tracks smoothly and safely. Here are some tips to help you with that:
-The game has a lot of content that you can unlock by earning stars and coins. Stars are awarded based on how well you complete each track, while coins are earned by performing stunts, collecting bonuses, or watching ads. You can use stars and coins to upgrade your bike and unlock new tracks. Here are some tips to help you with that:
-Upgrade your bike regularly. Upgrading your bike will improve its engine, speed, brake, and suspension, which will make it easier and faster to complete the tracks. You can upgrade your bike by spending coins in the garage menu.
-The game has a scoring system that rewards you for performing combos and stunts. Combos are sequences of stunts that you perform without crashing or stopping, while stunts are specific tricks that you perform in midair, such as flips, spins, wheelies, and more. Performing combos and stunts will earn you more points, which will increase your score and rank. Here are some tips to help you with that:
-Bike Racing 3D APK is a thrilling BMX game for Android devices that lets you race, jump, and crash your way through 60 tracks in career mode. You can also choose from five unique bikes, each with its own characteristics and performance. You can also upgrade your bike and unlock new tracks as you progress through the game. The game has realistic 3D physics and graphics that make you feel like you are riding a real bike. The game also has a scoring system that rewards you for performing combos and stunts in midair.
-If you are looking for a fun and exciting bike racing game for your Android device, then you should definitely try Bike Racing 3D APK. You can download it from any of the links below or scan the QR code above. You will not regret it!
-[Download Bike Racing 3D APK from APKPure]
-[Download Bike Racing 3D APK from APKMirror]
-[Download Bike Racing 3D APK from Uptodown]
-Here are some of the most frequently asked questions about Bike Racing 3D APK:
-Yes, Bike Racing 3D APK is safe to download and install from any of the sources mentioned above or scanned with the QR code above. The APK file has been verified by antivirus software and does not contain any malware or viruses.
-Yes, Bike Racing 3D APK is free to play and does not require any payment or subscription to enjoy the game. However, the game does contain ads and in-app purchases that can enhance your gaming experience or remove ads.
-If you want to remove ads from Bike Racing 3D APK, you can either purchase the ad-free version of the game for $1.99 or watch an ad video to get an ad-free session for 10 minutes.
If you want to get more coins in Bike Racing 3D APK, you can either purchase them with real money or earn them by playing the game. You can earn coins by performing stunts, collecting bonuses, completing tracks, or watching ads. You can also get free coins by logging in daily, inviting friends, or rating the game.
-If you want to play Bike Racing 3D APK with friends, you can either challenge them online or play with them locally. To challenge them online, you need to connect your game account to Facebook or Google Play Games and then invite them to a race. To play with them locally, you need to connect your devices to the same Wi-Fi network and then join a local multiplayer mode.
-I hope this article has helped you learn more about Bike Racing 3D APK and how to play it. If you have any questions or feedback, please leave a comment below. Thank you for reading and happy racing!
197e85843dPokemon Go is a popular augmented reality game that lets you catch, train, and battle virtual creatures in the real world. The game is constantly updated with new features, events, and Pokemon to keep you engaged and entertained. However, if you live in a region where the game is not officially available or where the updates are delayed, you might be missing out on some of the fun. That's why some Android users choose to download the APK file of Pokemon Go, which is an installation file that allows you to run the game on your device without going through the Google Play Store. In this article, we will explain the benefits and risks of downloading the APK file of Pokemon Go, as well as how to do it safely and easily. We will also show you what's new in the latest version of Pokemon Go and how to play it.
-Download File ——— https://urllie.com/2uNGc1
One of the main reasons why Android users download the APK file of Pokemon Go is to get access to new features and events before they are officially released in their region. For example, in June 2023, Niantic launched Season 10 of Pokemon Go, which introduced new Pokemon, new gameplay modes, new cosmetics, and more. However, not all regions received the update at the same time, and some users had to wait for days or even weeks before they could enjoy it. By downloading the APK file from a reputable source, such as APKCombo, you can bypass this delay and start playing with the latest version right away.
-Another benefit of downloading the APK file is that you can save some storage space on your device. The APK file is usually smaller than the one downloaded from the Google Play Store, as it does not include additional data or resources that are not needed for your device. This can help you free up some memory and improve your device's performance.
-While downloading the APK file of Pokemon Go can have some advantages, it also comes with some risks that you should be aware of. The most important one is that you might download a fake or malicious file that could harm your device or compromise your personal information. There are many websites that claim to offer Pokemon Go APK files, but not all of them are trustworthy or reliable. Some of them might contain malware, viruses, spyware, or other security threats that could infect your device or steal your data. To avoid this, you should always download the APK file from a reputable source, such as APKCombo, which verifies and tests every file before uploading it.
-Another risk of downloading the APK file is that you might violate the terms of service of Pokemon Go or Google Play Store. Niantic, the developer of Pokemon Go, does not officially support or endorse downloading the game from sources other than the Google Play Store. This means that if you encounter any problems or issues with the game, such as bugs, glitches, or bans, Niantic might not be able to help you or provide you with customer service. Similarly, Google Play Store might not recognize or update your game if you download it from an external source. This could affect your game's performance or compatibility with future updates.
-download pokemon go apk latest version
-pokemon go new update apk download for android
-how to download and install pokemon go apk update
-pokemon go apk 0.273.3 download free
-pokemon go apk xapk download
-download pokemon go apk from official website
-pokemon go apk download for ios
-pokemon go apk mod download with unlimited coins
-pokemon go apk mirror download
-pokemon go apk pure download
-pokemon go apk obb download
-pokemon go apk hack download for android
-pokemon go apk offline download
-pokemon go apk beta download
-pokemon go apk old version download
-pokemon go apk update 2023 download
-pokemon go apk file download for pc
-pokemon go apk no root download
-pokemon go apk direct download link
-pokemon go apk full version download
-pokemon go apk cracked download
-pokemon go apk latest update features
-pokemon go new update apk release date
-what's new in pokemon go apk update
-how to update pokemon go apk manually
-pokemon go new update apk size
-pokemon go new update apk changelog
-how to fix pokemon go apk update error
-how to uninstall pokemon go apk update
-how to backup pokemon go apk before update
-how to play pokemon go apk after update
-how to get free pokecoins in pokemon go apk update
-how to catch legendary pokemons in pokemon go apk update
-how to join raid battles in pokemon go apk update
-how to make your pokemons stronger in pokemon go apk update
-how to compete in the GO Battle League in pokemon go apk update
-how to explore and discover pokemons in pokemon go apk update
-how to complete your pokedex in pokemon go apk update
-how to journey alongside your buddy pokemons in pokemon go apk update
-how to compete in epic gym battles in pokemon go apk update
-how to team up with other trainers in pokemon go apk update
-how to customize your trainer avatar in pokemon go apk update
-how to earn rewards and achievements in pokemon go apk update
-how to connect with your friends in pokemon go apk update
-how to trade and gift pokemons in pokemon go apk update
-how to use AR mode in pokemon go apk update
-how to save battery and data in pokemon go apk update
-how to optimize your device for pokemon go apk update
-how to troubleshoot common issues in pokemon go apk update
If you decide to download the APK file of Pokemon Go, you should follow these steps carefully to ensure a safe and smooth installation:
-Now that you have downloaded and installed the APK file of Pokemon Go, you might be wondering what's new in the latest version of the game and how to play it. Here are some of the highlights of Season 10 of Pokemon Go:
-These are just some of the new features and events that Season 10 of Pokemon Go has to offer. There are also many other changes and improvements that you can discover by playing the game. Have fun!
-Here are some of the frequently asked questions about Pokemon Go APK files:
-Question | Answer |
Is downloading the APK file of Pokemon Go illegal? | No, downloading the APK file of Pokemon Go is not illegal, as long as you do not use it for any malicious or fraudulent purposes. However, it might violate the terms of service of Pokemon Go or Google Play Store, so you should do it at your own risk. |
Will downloading the APK file of Pokemon Go affect my account or progress? | No, downloading the APK file of Pokemon Go will not affect your account or progress, as long as you use the same account that you use on the Google Play Store version. You will be able to access all your data, items, Pokemon, and achievements on both versions. |
Will downloading the APK file of Pokemon Go ban me from the game? | No, downloading the APK file of Pokemon Go will not ban you from the game, as long as you do not use any cheats, hacks, or mods that give you an unfair advantage over other players. Niantic has a strict policy against cheating and will ban any account that violates it. |
How often do I need to update the APK file of Pokemon Go? | You need to update the APK file of Pokemon Go whenever there is a new version available on APKCombo. You can check the website regularly or subscribe to their newsletter to get notified of new updates. You can also follow their social media accounts or join their community forums to stay updated. |
Pokemon Go is a fun and exciting game that lets you explore the world and catch virtual creatures. However, if you want to enjoy the latest features and events of the game before they are officially released in your region, you might want to download the APK file of Pokemon Go from a reputable source, such as APKCombo. This will allow you to install and run the game on your Android device without going through the Google Play Store. However, you should also be aware of the risks and challenges of downloading the APK file, such as security threats, terms of service violations, and update issues. You should always download the APK file from a trustworthy source, follow the instructions carefully, and play responsibly. We hope this article has helped you understand how to download and install the APK file of Pokemon Go and how to play it with the new update. Have fun and catch 'em all!
401be4b1e0- Generate your own Mr Men or Little Misses. -
-Put in a text prompt and generate your own Mr Men or Little Misses character, no "prompt engineering" required!
-If you have ever lost or forgotten your car radio code, you know how frustrating and annoying it can be. Without the code, you cannot use your car audio system, and you may have to pay a lot of money to get it from the dealer or the manufacturer. But what if there was a way to get your car radio code in minutes, without spending a dime? That's what Crucc 24 Car Radio Universal Code Calculator 24 can do for you.
- -Crucc 24 Car Radio Universal Code Calculator 24 is a software that can calculate the car audio anti-theft security code by using the car radio serial number, diode / link coding and master codes. It can work with many different models and manufacturers of car audio units, such as Alpine, Becker, Blaupunkt, Clarion, Grundig, Philips, Pioneer, Sony and more. Crucc 24 Car Radio Universal Code Calculator 24 is the simple and fastest way to unlock your car radio and enjoy your music again.
-Download File === https://urlgoal.com/2uyN50
Crucc 24 Car Radio Universal Code Calculator 24 is very easy to use. All you need is a computer, a cable and the software. Here are the steps to follow:
- -Crucc 24 Car Radio Universal Code Calculator 24 has many benefits for car owners who want to recover their car radio code. Some of these benefits are:
-In conclusion, Crucc 24 Car Radio Universal Code Calculator 24 is a software tool that can help you unlock your car radio and enjoy your music again. It can calculate the car audio anti-theft security code by using the car radio serial number, diode / link coding and master codes. It can work with many different models and manufacturers of car audio units. It is simple, fast, easy, reliable and accurate. It is the best solution for car audio security.
-Crucc 24 Car Radio Universal Code Calculator 24 is a software that can be downloaded from the internet. There are various websites that offer Crucc 24 Car Radio Universal Code Calculator 24 for free or for a small fee. Some of these websites are:
- -To download Crucc 24 Car Radio Universal Code Calculator 24 from these websites, you need to follow these steps:
- -Crucc 24 Car Radio Universal Code Calculator 24 is a software that can help you unlock your car radio and enjoy your music again. However, you need to use it safely and responsibly, as there are some risks and limitations involved. Here are some tips to use Crucc 24 Car Radio Universal Code Calculator 24 safely:
-In summary, Crucc 24 Car Radio Universal Code Calculator 24 is a software tool that can help you unlock your car radio and enjoy your music again. It can calculate the car audio anti-theft security code by using the car radio serial number, diode / link coding and master codes. It can work with many different models and manufacturers of car audio units. It is simple, fast, easy, reliable and accurate. It is the best solution for car audio security. However, you need to use it safely and responsibly, as there are some risks and limitations involved.
-Crucc 24 Car Radio Universal Code Calculator 24 is a software that can help you unlock your car radio and enjoy your music again. However, it is not the only option available for car audio security. There are some alternatives to Crucc 24 Car Radio Universal Code Calculator 24 that you can also try. Some of these alternatives are:
- -These alternatives to Crucc 24 Car Radio Universal Code Calculator 24 have their own advantages and disadvantages, such as availability, cost, reliability and compatibility. You can choose the one that suits your needs and preferences best.
- -Crucc 24 Car Radio Universal Code Calculator 24 is a software that has received many positive reviews from users who have tried it and benefited from it. Here are some of the reviews of Crucc 24 Car Radio Universal Code Calculator 24:
- --- -"I lost my car radio code when I changed my battery and I was desperate to get it back. I searched online and found Crucc 24 Car Radio Universal Code Calculator 24. I downloaded it and connected my car radio to my computer. In a few seconds, I got my car radio code and entered it on my car audio system. It worked perfectly and I was able to listen to my music again. Thank you Crucc 24!"
-- John, USA -
-- -"I bought a used car with a Blaupunkt car radio installed. The seller did not give me the car radio code and I could not use it. I tried to contact the dealer and the manufacturer but they asked me for a lot of money and paperwork. I decided to look for another solution and I found Crucc 24 Car Radio Universal Code Calculator 24. I downloaded it and entered my car radio serial number. In a few minutes, I got my car radio code and entered it on my car audio system. It worked like a charm and I was able to enjoy my music again. Thank you Crucc 24!"
-- Maria, Spain -
-- -"I am a car audio professional and I use Crucc 24 Car Radio Universal Code Calculator 24 regularly for my work. It is a very useful tool that helps me to unlock any car radio model and manufacturer in minutes. It saves me a lot of time and money and makes my customers happy. It is very easy to use and very reliable. It is the best software for car audio security. Thank you Crucc 24!"
-- Ahmed, Egypt -
These reviews show that Crucc 24 Car Radio Universal Code Calculator 24 is a software that has satisfied many users who have used it and recommended it.
-Car radio security is a feature that protects your car audio system from theft or unauthorized use. However, it can also cause problems if you lose or forget your car radio code. Without the code, you cannot use your car audio system, and you may have to pay a lot of money to get it from the dealer or the manufacturer.
- -That's why you need Crucc 24 Car Radio Universal Code Calculator 24, a software that can help you unlock your car radio and enjoy your music again. Crucc 24 Car Radio Universal Code Calculator 24 can calculate the car audio anti-theft security code by using the car radio serial number, diode / link coding and master codes. It can work with many different models and manufacturers of car audio units. It is simple, fast, easy, reliable and accurate. It is the best solution for car audio security.
-Crucc 24 Car Radio Universal Code Calculator 24 can be downloaded from the internet, from various websites that offer it for free or for a small fee. It can be used in different ways depending on each user's goal and experience, from a one-off fix by a car owner to a routine tool for car audio professionals. It has many benefits for users who want to recover their car radio code, such as saving time and money, being easy to access and use, and adapting to the level and pace of each user.
3cee63e6c2Download === https://urlgoal.com/2uyNz3
Download ⚡ https://gohhs.com/2uz48Y
Do you need help understanding the Bible? The Compendio Manual Bíblico, with the RVR60, conveys the message to you and makes biblical knowledge more accessible. With this Bible handbook you will be able to understand the Word of God and become much more grounded in it.
You will gain a better appreciation of the cultures, religions, and geography in which the stories of the Bible unfolded. You will see how its different themes could be interwoven in remarkable ways. You will also see the heart of God and the person of Jesus Christ revealed from Genesis to Revelation. The Compendio Manual Bíblico RV60 keeps the very personal style of its author, Dr. Halley, and offers maps, photographs, a contemporary design, and practical reading.
DOWNLOAD ☆☆☆☆☆ https://gohhs.com/2uz5wM
AAC 2010 keygen, 64-bit. To unpack this before running hasphl2010.exe, just run the unzipping program (I use WinRAR).
sentemul2010 x64.rar - online download.
sentemul2010-0.6.8a-x64.rar - original version of sentemul2010 0.8a
sentemul2010-0.
DOWNLOAD ••• https://urlin.us/2uEvXx
Sentemul 2010 - dongle emulator. The Sentemul dongle emulator has definitely become a trendsetter: a multiplatform solution that works with 32-bit and 64-bit Windows.
Soft-Key Solutions: first of all, unpack hasphl2010.zip, put hasphl2010.exe in any folder on your drive, and run hasphl2010.exe on the machine that has the latest.
sentemul 2007 64bit./b04vr5pc-2bjstmnby36cf4c/sentemul.zip.html reteam.org/board/archive/index.php/t-487.html. sentemul.html
free sentemul 2010 32 bits download download software at updatestar -. sentemul 32 bits. winrar is a 32-bit/64-bit windows version of rar archiver,.
locorfou 09d271e77f
I try to emulate a program with 64-bit Sentemul 2010 (Windows 7 64-bit). I install the driver, but when I try to load the .dng file I get a message. I was able to download the file just fine; however, it gives an error message saying to please use sentemul2010_x64 with your 64-bit OS. I am running a 64-bit.
- 899543212bDownload ——— https://urlin.us/2uEwEA
35% Katthavar Pass is a 2016 Marathi movie directed by Satish Motling and starring Pratamesh Parab, Bhagyashree Shankpal, Yashoman Apte, Aayli Ghiya, Sanjay Narvekar and others. The movie is a coming-of-age story of three college-going friends who have no interest in studies and are always up to some mischief. However, their lives change when one of them falls in love with a girl.
-DOWNLOAD ★★★★★ https://tiurll.com/2uCiZR
The movie is a light-hearted comedy that explores the themes of friendship, love, education and family. The movie also features some popular Marathi actors like Neha Pendse, Vijay Patkar and Bharat Ganeshpure in supporting roles. The movie has received positive reviews from critics and audiences alike for its humor, performances and music.
-35% Katthavar Pass is available to watch online on Prime Video and JioCinema. If you are looking for a fun and entertaining movie to watch with your friends or family, you can give this movie a try.
- -The movie has a simple and relatable plot that showcases the struggles and aspirations of the young generation. The movie also has a message about the importance of education and career choices. The movie has some hilarious scenes and dialogues that will make you laugh out loud. The movie also has some emotional moments that will touch your heart.
- -The movie has a talented cast that delivers convincing performances. Prathamesh Parab, who is known for his roles in Timepass and Balak Palak, plays the role of Sairaj, a carefree and mischievous student who falls in love with Aarti, played by Bhagyashree Shankpal. Yashoman Apte and Aayli Ghiya play the roles of Tanish and Rutuja, Sairaj's best friends who support him in his love story. Sanjay Narvekar plays the role of Sairaj's father, who is a strict and conservative man who wants his son to focus on studies.
-The movie has a catchy soundtrack composed by Amitraj and Troy Arif. The songs are sung by popular singers like Adarsh Shinde, Anandi Joshi, Jasraj Joshi and others. The songs are well-picturized and suit the mood of the movie. The title song "35% Katthavar Pass" is a peppy and upbeat number that celebrates the spirit of the backbenchers. The song "Moharle He" is a romantic and melodious song that expresses the feelings of Sairaj and Aarti. The song "Tujhya Vina" is a sad and soulful song that depicts the separation of the lovers.
- -The movie was released on 20 May 2017 and received a good response from the audience. The movie was praised for its comedy, story and direction. The movie also did well at the box office and earned a decent collection. The movie was one of the successful Marathi movies of 2017.
-The movie is a good example of how Marathi cinema is evolving and experimenting with different genres and themes. The movie is a refreshing and entertaining watch that appeals to the youth and the family audience. The movie also has a social message about the value of education and the importance of following one's dreams.
-If you are looking for a fun-filled and engaging movie that will make you laugh and cry, you should watch 35% Katthavar Pass. The movie is a perfect blend of comedy, drama and romance that will keep you hooked till the end. The movie is a must-watch for all the fans of Prathamesh Parab and Sanjay Narvekar.
d5da3c52bfDOWNLOAD > https://tiurll.com/2uCjkp
- return; // if there is no <code> element, do not add the button
- }
- var firstChild = code.firstChild;
- if (!firstChild) {
- return; // if the <code> element has no child nodes, do not add the button
- }
- var button = document.createElement('button');
- button.textContent = '\uD83D\uDCCE'; // use the 📎 paperclip symbol as the "copy" button text
- button.style.position = 'relative';
- button.style.float = 'right';
- button.style.fontSize = '1em'; // optional: adjust the button size
- button.style.background = 'none'; // optional: remove the background color
- button.style.border = 'none'; // optional: remove the border
- button.style.cursor = 'pointer'; // optional: show a pointer cursor
- button.addEventListener('click', function () {
- var range = document.createRange();
- range.selectNodeContents(code);
- range.setStartBefore(firstChild); // start the selection range just before the first child node
- var selection = window.getSelection();
- selection.removeAllRanges();
- selection.addRange(range);
-
- try {
- var success = document.execCommand('copy');
- if (success) {
- button.textContent = '\u2714';
- setTimeout(function () {
- button.textContent = '\uD83D\uDCCE'; // restore the "copy" icon
- }, 2000);
- } else {
- button.textContent = '\u2716';
- }
- } catch (e) {
- console.error(e);
- button.textContent = '\u2716';
- }
-
- selection.removeAllRanges();
- });
- code.insertBefore(button, firstChild); // insert the button before the first child element
- }
-
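- // watch for <pre> blocks added to the page later and give them a copy button as well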
- function handleNewElements(mutationsList, observer) {
- for (var mutation of mutationsList) {
- if (mutation.type === 'childList') {
- for (var node of mutation.addedNodes) {
- if (node.nodeName === 'PRE') {
- addCopyButton(node);
- }
- }
- }
- }
- }
-
- var observer = new MutationObserver(handleNewElements);
- observer.observe(document.documentElement, { childList: true, subtree: true });
-
- document.querySelectorAll('pre').forEach(addCopyButton);
-})();
diff --git a/spaces/jipenaflor/Youtube-Transcript-Summarizer/app.py b/spaces/jipenaflor/Youtube-Transcript-Summarizer/app.py
deleted file mode 100644
index 7f2d99ca00df43e3948f8ac7f885094c971b88c1..0000000000000000000000000000000000000000
--- a/spaces/jipenaflor/Youtube-Transcript-Summarizer/app.py
+++ /dev/null
@@ -1,29 +0,0 @@
-from youtube_transcript_api import YouTubeTranscriptApi
-import gradio as gr
-from gradio.mix import Series
-
-def generate_transcript(url):
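- # take everything after the first '=' in the URL as the video ID (the URL must end with the ID)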
- id = url[url.index("=")+1:]
-
- transcript = YouTubeTranscriptApi.get_transcript(id)
- script = ""
-
- for text in transcript:
- t = text["text"]
- if t != '[Music]':
- script += t + " "
-
- return script
-
-transcriber = gr.Interface(generate_transcript, 'text', 'text')
-summarizer = gr.Interface.load("huggingface/sshleifer/distilbart-cnn-12-6")
-
-gradio_ui = Series(transcriber, summarizer,
- inputs = gr.inputs.Textbox(label = "Enter the YouTube URL below:"),
- outputs = gr.outputs.Textbox(label = "Transcript Summary"),
- examples = ["https://www.youtube.com/watch?v=Cu3R5it4cQs&list", "https://www.youtube.com/watch?v=HB4I2CgkcCo"],
- title = "YouTube Transcript Summarizer",
- theme = "peach",
- description = "This application uses the sshleifer/distilbart-cnn-12-6 model to summarize a short YouTube video that has English subtitles. For it to work, the input URL must follow the format similar to the given examples, specifically having the video's ID at the end. Examples are videos from GCFLearnFree.org YouTube Channel.")
-
-gradio_ui.launch()
\ No newline at end of file
diff --git a/spaces/jlmarrugom/voice_fixer_app/voicefixer/vocoder/model/__init__.py b/spaces/jlmarrugom/voice_fixer_app/voicefixer/vocoder/model/__init__.py
deleted file mode 100644
index b4e5814cb200668317c393212f866b6d53097724..0000000000000000000000000000000000000000
--- a/spaces/jlmarrugom/voice_fixer_app/voicefixer/vocoder/model/__init__.py
+++ /dev/null
@@ -1,11 +0,0 @@
-#!/usr/bin/env python
-# -*- encoding: utf-8 -*-
-"""
-@File : __init__.py.py
-@Contact : haoheliu@gmail.com
-@License : (C)Copyright 2020-2100
-
-@Modify Time @Author @Version @Desciption
------------- ------- -------- -----------
-9/14/21 1:00 AM Haohe Liu 1.0 None
-"""
diff --git a/spaces/jordonpeter01/MusicGen/audiocraft/utils/notebook.py b/spaces/jordonpeter01/MusicGen/audiocraft/utils/notebook.py
deleted file mode 100644
index 019b9d19e5bef976bedddf428fd25da42a8a9726..0000000000000000000000000000000000000000
--- a/spaces/jordonpeter01/MusicGen/audiocraft/utils/notebook.py
+++ /dev/null
@@ -1,32 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-try:
- import IPython.display as ipd # type: ignore
-except ImportError:
- # Note in a notebook...
- pass
-
-
-import torch
-
-
-def display_audio(samples: torch.Tensor, sample_rate: int):
- """Renders an audio player for the given audio samples.
-
- Args:
- samples (torch.Tensor): a Tensor of decoded audio samples
- with shapes [B, C, T] or [C, T]
- sample_rate (int): sample rate audio should be displayed with.
- """
- assert samples.dim() == 2 or samples.dim() == 3
-
- samples = samples.detach().cpu()
- if samples.dim() == 2:
- samples = samples[None, ...]
-
- for audio in samples:
- ipd.display(ipd.Audio(audio, rate=sample_rate))
diff --git a/spaces/joshuadunlop/Epic-GPT4-App/app.py b/spaces/joshuadunlop/Epic-GPT4-App/app.py
deleted file mode 100644
index bc49c468c200e7d66550febfc7a2c19cd4bbceac..0000000000000000000000000000000000000000
--- a/spaces/joshuadunlop/Epic-GPT4-App/app.py
+++ /dev/null
@@ -1,254 +0,0 @@
-import streamlit as st
-import openai
-import re
-import csv
-import base64
-from io import StringIO
-import threading
-from queue import Queue
-
-st.title("Josh's Epic GPT-4 App :sunglasses:")
-
-api_key = st.sidebar.text_input("API Key:", value="sk-")
-openai.api_key = api_key
-
-show_notes = st.sidebar.checkbox("Show Notes", value="TRUE")
-data_section = st.sidebar.text_area("CSV or Text Data:")
-paste_data = st.sidebar.button("Paste Data")
-num_concurrent_calls = st.sidebar.number_input("Concurrent Calls:", min_value=1, max_value=2000, value=10, step=1)
-generate_all = st.sidebar.button("Generate All")
-add_row = st.sidebar.button("Add row")
-reset = st.sidebar.button("Reset")
-followup_message = st.sidebar.text_area("Edit Message:")
-generate_all_edits = st.sidebar.button("Generate All Edits")
-model = st.sidebar.selectbox("Model:", ["gpt-4-1106-preview", "gpt-4"])
-temperature = st.sidebar.slider("Temperature:", 0.0, 1.0, 0.6, step=0.01)
-max_tokens = st.sidebar.number_input("Max Tokens:", min_value=1, max_value=8192, value=1000, step=1)
-top_p = st.sidebar.slider("Top P:", 0.0, 1.0, 1.0, step=0.01)
-system_message = st.sidebar.text_area("System Message:")
-row_count = st.session_state.get("row_count", 1)
-
-if add_row:
- row_count += 1
- st.session_state.row_count = row_count
-
-if paste_data:
- data = StringIO(data_section.strip())
- reader = csv.reader(data, delimiter='\n', quotechar='"')
- messages = [row[0] for row in reader]
-
- row_count = 0
- note_flag = show_notes # flag to check if we should add to note or message
-
- for i, message in enumerate(messages):
- note_delimiter = "//note//"
-
- if note_delimiter in message:
- split_message = message.split(note_delimiter)
- note, message = split_message[1].strip(), split_message[2].strip()
- st.session_state[f"note{row_count}"] = note
- st.session_state[f"message{row_count}"] = message
- row_count += 1
- else:
- if note_flag and show_notes: # Only add to note if Show Notes is enabled
- st.session_state[f"note{row_count}"] = message
- else:
- st.session_state[f"message{row_count}"] = message
- row_count += 1
-
- if show_notes: # Only toggle the note_flag if Show Notes is enabled
- note_flag = not note_flag # flip the flag for next iteration
-
- st.session_state.row_count = row_count
-
-if reset:
- row_count = 1
- st.session_state.row_count = row_count
- for i in range(1000):
- st.session_state[f"note{i}"] = ""
- st.session_state[f"message{i}"] = ""
- st.session_state[f"response{i}"] = ""
- st.session_state[f"prompt_tokens{i}"] = 0
- st.session_state[f"response_tokens{i}"] = 0
- st.session_state[f"word_count{i}"] = 0
- st.session_state[f"followup_response{i}"] = ""
-
-def generate_response(i, messages):
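- # one ChatCompletion call for row i; returns (i, response text, prompt tokens, response tokens, word count, error message or None)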
- try:
- completion = openai.ChatCompletion.create(
- model=model,
- messages=messages,
- temperature=temperature,
- max_tokens=max_tokens,
- top_p=top_p
- )
-
- response = completion.choices[0].message.content
- prompt_tokens = completion.usage['prompt_tokens']
- response_tokens = completion.usage['total_tokens'] - prompt_tokens
- word_count = len(re.findall(r'\w+', response))
-
- return (i, response, prompt_tokens, response_tokens, word_count, None)
-
- except Exception as e:
- return (i, str(e), 0, 0, 0, str(e))
-
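-# each worker pulls an (index, messages) job from the queue, calls the OpenAI API via generate_response, and pushes the result onto the results queue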
-class WorkerThread(threading.Thread):
- def __init__(self, jobs, results):
- super().__init__()
- self.jobs = jobs
- self.results = results
-
- def run(self):
- while True:
- job = self.jobs.get()
- if job is None:
- break
- i, messages = job
- result = generate_response(i, messages)
- self.results.put(result)
-
-if generate_all:
- messages = [st.session_state.get(f"message{i}", "") for i in range(row_count)]
-
- jobs = Queue()
- results = Queue()
-
- workers = [WorkerThread(jobs, results) for _ in range(num_concurrent_calls)]
-
- for worker in workers:
- worker.start()
-
- for i, message in enumerate(messages):
- jobs.put((i, [
- {"role": "system", "content": system_message},
- {"role": "user", "content": message}
- ]))
-
- for _ in range(num_concurrent_calls):
- jobs.put(None)
-
- for worker in workers:
- worker.join()
-
- while not results.empty():
- i, response, prompt_tokens, response_tokens, word_count, error_message = results.get()
- if error_message is not None:
- st.write(f"Error on row {i}: {error_message}")
- st.session_state[f"response{i}"] = response
- st.session_state[f"prompt_tokens{i}"] = prompt_tokens
- st.session_state[f"response_tokens{i}"] = response_tokens
- st.session_state[f"word_count{i}"] = word_count
-
-if generate_all_edits:
- messages = [st.session_state.get(f"message{i}", "") for i in range(row_count)]
-
- jobs = Queue()
- results = Queue()
-
- workers = [WorkerThread(jobs, results) for _ in range(num_concurrent_calls)]
-
- for worker in workers:
- worker.start()
-
- for i, message in enumerate(messages):
- jobs.put((i, [
- {"role": "system", "content": system_message},
- {"role": "user", "content": message},
- {"role": "user", "content": followup_message}
- ]))
-
- for _ in range(num_concurrent_calls):
- jobs.put(None)
-
- for worker in workers:
- worker.join()
-
- while not results.empty():
- i, response, prompt_tokens, response_tokens, word_count, error_message = results.get()
- if error_message is not None:
- st.write(f"Error on row {i}: {error_message}")
- st.session_state[f"followup_response{i}"] = response
- st.session_state[f"prompt_tokens{i}"] = prompt_tokens
- st.session_state[f"response_tokens{i}"] = response_tokens
- st.session_state[f"word_count{i}"] = word_count
-
-for i in range(row_count):
- if show_notes:
- st.text_input(f"Note {i + 1}:", key=f"note{i}", value=st.session_state.get(f"note{i}", ""))
- col1, col2, col3 = st.columns(3)
-
- with col1:
- message = st.text_area(f"Message {i + 1}:", key=f"message{i}", value=st.session_state.get(f"message{i}", ""))
-
- if st.button(f"Generate Response {i + 1}"):
- _, response, prompt_tokens, response_tokens, word_count, error_message = generate_response(i, [
- {"role": "system", "content": system_message},
- {"role": "user", "content": message}
- ])
- st.session_state[f"response{i}"] = response
- st.session_state[f"prompt_tokens{i}"] = prompt_tokens
- st.session_state[f"response_tokens{i}"] = response_tokens
- st.session_state[f"word_count{i}"] = word_count
-
- with col2:
- st.text_area(f"Response {i + 1}:", value=st.session_state.get(f"response{i}", ""))
- st.write(f"Tokens: {st.session_state.get(f'prompt_tokens{i}', 0)} / {st.session_state.get(f'response_tokens{i}', 0)} + Words: {st.session_state.get(f'word_count{i}', 0)}")
-
- with col3:
- st.text_area(f"Edited Response {i + 1}:", value=st.session_state.get(f"followup_response{i}", ""))
- if st.button(f"Generate Edit {i + 1}"):
- _, followup_response, prompt_tokens, response_tokens, word_count, error_message = generate_response(i, [
- {"role": "system", "content": system_message},
- {"role": "user", "content": message},
- {"role": "user", "content": followup_message}
- ])
- st.session_state[f"followup_response{i}"] = followup_response
- st.session_state[f"prompt_tokens{i}"] = prompt_tokens
- st.session_state[f"response_tokens{i}"] = response_tokens
- st.session_state[f"word_count{i}"] = word_count
- st.experimental_rerun()
-
-def create_download_link(data, filename):
- csv_data = StringIO()
- writer = csv.writer(csv_data, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
- for row in data:
- writer.writerow([row])
- b64 = base64.b64encode(csv_data.getvalue().encode()).decode()
- return f'{filename}'
-
-def create_download_link_txt(data, filename):
- txt_data = "\n".join(data)
- b64 = base64.b64encode(txt_data.encode()).decode()
- return f'{filename}'
-
-responses_data = []
-edited_responses_data = []
-
-for i in range(row_count):
- note = st.session_state.get(f"note{i}", "")
- response = st.session_state.get(f"response{i}", "")
- followup_response = st.session_state.get(f"followup_response{i}", "")
-
- if show_notes:
- responses_data.append(note)
- edited_responses_data.append(note)
-
- responses_data.append(response)
- edited_responses_data.append(followup_response)
-
-# Create download links for CSV files
-download_responses_link_csv = create_download_link(responses_data, "Download Responses.csv")
-download_edited_responses_link_csv = create_download_link(edited_responses_data, "Download Edited Responses.csv")
-
-# Create download links for TXT files
-download_responses_link_txt = create_download_link_txt(responses_data, "Download Responses.txt")
-download_edited_responses_link_txt = create_download_link_txt(edited_responses_data, "Download Edited Responses.txt")
-
-# Display download links for CSV files
-st.markdown(download_responses_link_csv, unsafe_allow_html=True)
-st.markdown(download_edited_responses_link_csv, unsafe_allow_html=True)
-
-# Display download links for TXT files
-st.markdown(download_responses_link_txt, unsafe_allow_html=True)
-st.markdown(download_edited_responses_link_txt, unsafe_allow_html=True)
\ No newline at end of file
diff --git a/spaces/justest/gpt4free/g4f/Provider/Providers/DeepAi.py b/spaces/justest/gpt4free/g4f/Provider/Providers/DeepAi.py
deleted file mode 100644
index 02b08120ec8ef50c91c9237047a4f36c822a7bfc..0000000000000000000000000000000000000000
--- a/spaces/justest/gpt4free/g4f/Provider/Providers/DeepAi.py
+++ /dev/null
@@ -1,46 +0,0 @@
-import os
-import json
-import random
-import hashlib
-import requests
-
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://deepai.org'
-model = ['gpt-3.5-turbo']
-supports_stream = True
-needs_auth = False
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
- def md5(text: str) -> str:
- return hashlib.md5(text.encode()).hexdigest()[::-1]
-
-
- def get_api_key(user_agent: str) -> str:
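- # builds a "tryit-<random>-<hash>" key: a random numeric part plus nested MD5 hashes derived from the user agent and that random part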
- part1 = str(random.randint(0, 10**11))
- part2 = md5(user_agent + md5(user_agent + md5(user_agent + part1 + "x")))
-
- return f"tryit-{part1}-{part2}"
-
- user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36'
-
- headers = {
- "api-key": get_api_key(user_agent),
- "user-agent": user_agent
- }
-
- files = {
- "chat_style": (None, "chat"),
- "chatHistory": (None, json.dumps(messages))
- }
-
- r = requests.post("https://api.deepai.org/chat_response", headers=headers, files=files, stream=True)
-
-    r.raise_for_status()
-
-    for chunk in r.iter_content(chunk_size=None):
-        yield chunk.decode()
-
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join(
- [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
diff --git a/spaces/jvde/sovits-webui/train_ms.py b/spaces/jvde/sovits-webui/train_ms.py
deleted file mode 100644
index dde56d41416e9a60be9cff8c6766f36595524d58..0000000000000000000000000000000000000000
--- a/spaces/jvde/sovits-webui/train_ms.py
+++ /dev/null
@@ -1,306 +0,0 @@
-import os
-import json
-import argparse
-import itertools
-import math
-import torch
-from torch import nn, optim
-from torch.nn import functional as F
-from torch.utils.data import DataLoader
-from torch.utils.tensorboard import SummaryWriter
-import torch.multiprocessing as mp
-import torch.distributed as dist
-from torch.nn.parallel import DistributedDataParallel as DDP
-from torch.cuda.amp import autocast, GradScaler
-from tqdm import tqdm
-
-import librosa
-import logging
-
-logging.getLogger('numba').setLevel(logging.WARNING)
-
-import commons
-import utils
-from data_utils import (
- TextAudioSpeakerLoader,
- TextAudioSpeakerCollate,
- DistributedBucketSampler
-)
-from models import (
- SynthesizerTrn,
- MultiPeriodDiscriminator,
-)
-from losses import (
- generator_loss,
- discriminator_loss,
- feature_loss,
- kl_loss
-)
-from mel_processing import mel_spectrogram_torch, spec_to_mel_torch
-from text.symbols import symbols
-
-
-torch.backends.cudnn.benchmark = True
-global_step = 0
-
-
-def main():
- """Assume Single Node Multi GPUs Training Only"""
- assert torch.cuda.is_available(), "CPU training is not allowed."
-
- n_gpus = torch.cuda.device_count()
- os.environ['MASTER_ADDR'] = 'localhost'
- os.environ['MASTER_PORT'] = '8000'
-
- hps = utils.get_hparams()
- mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,))
-
-
-def run(rank, n_gpus, hps):
- global global_step
- if rank == 0:
- logger = utils.get_logger(hps.model_dir)
- logger.info(hps)
- utils.check_git_hash(hps.model_dir)
- writer = SummaryWriter(log_dir=hps.model_dir)
- writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval"))
-
- dist.init_process_group(backend='nccl', init_method='env://', world_size=n_gpus, rank=rank)
- torch.manual_seed(hps.train.seed)
- torch.cuda.set_device(rank)
-
- train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data)
- train_sampler = DistributedBucketSampler(
- train_dataset,
- hps.train.batch_size,
- [32,300,400,500,600,700,800,900,1000],
- num_replicas=n_gpus,
- rank=rank,
- shuffle=True)
- collate_fn = TextAudioSpeakerCollate()
- train_loader = DataLoader(train_dataset, num_workers=8, shuffle=False, pin_memory=True,
- collate_fn=collate_fn, batch_sampler=train_sampler)
- if rank == 0:
- eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data)
- eval_loader = DataLoader(eval_dataset, num_workers=8, shuffle=False,
- batch_size=hps.train.batch_size, pin_memory=True,
- drop_last=False, collate_fn=collate_fn)
-
- net_g = SynthesizerTrn(
- len(symbols),
- hps.data.filter_length // 2 + 1,
- hps.train.segment_size // hps.data.hop_length,
- n_speakers=hps.data.n_speakers,
- **hps.model).cuda(rank)
- net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank)
- optim_g = torch.optim.AdamW(
- net_g.parameters(),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps)
- optim_d = torch.optim.AdamW(
- net_d.parameters(),
- hps.train.learning_rate,
- betas=hps.train.betas,
- eps=hps.train.eps)
- net_g = DDP(net_g, device_ids=[rank])
- net_d = DDP(net_d, device_ids=[rank])
-
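-    # Resume from the most recent G_*/D_* checkpoints if any exist; otherwise
-    # start fresh from epoch 1 with global_step 0.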
- try:
- _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, optim_g)
- _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, optim_d)
- global_step = (epoch_str - 1) * len(train_loader)
-    except Exception:
- epoch_str = 1
- global_step = 0
-
- scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str-2)
- scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str-2)
-
- scaler = GradScaler(enabled=hps.train.fp16_run)
-
- for epoch in range(epoch_str, hps.train.epochs + 1):
- if rank==0:
- train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, [train_loader, eval_loader], logger, [writer, writer_eval])
- else:
- train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, [train_loader, None], None, None)
- scheduler_g.step()
- scheduler_d.step()
-
-
-def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers):
- net_g, net_d = nets
- optim_g, optim_d = optims
- scheduler_g, scheduler_d = schedulers
- train_loader, eval_loader = loaders
- if writers is not None:
- writer, writer_eval = writers
-
- train_loader.batch_sampler.set_epoch(epoch)
- global global_step
-
- net_g.train()
- net_d.train()
- for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers) in enumerate(tqdm(train_loader)):
- x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True)
- spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda(rank, non_blocking=True)
- y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True)
- speakers = speakers.cuda(rank, non_blocking=True)
-
- with autocast(enabled=hps.train.fp16_run):
- y_hat, l_length, attn, ids_slice, x_mask, z_mask,\
- (z, z_p, m_p, logs_p, m_q, logs_q) = net_g(x, x_lengths, spec, spec_lengths, speakers)
-
- mel = spec_to_mel_torch(
- spec,
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.mel_fmin,
- hps.data.mel_fmax)
- y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length)
- y_hat_mel = mel_spectrogram_torch(
- y_hat.squeeze(1),
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.hop_length,
- hps.data.win_length,
- hps.data.mel_fmin,
- hps.data.mel_fmax
- )
-
- y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice
-
- # Discriminator
- y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach())
- with autocast(enabled=False):
- loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g)
- loss_disc_all = loss_disc
- optim_d.zero_grad()
- scaler.scale(loss_disc_all).backward()
- scaler.unscale_(optim_d)
- grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None)
- scaler.step(optim_d)
-
- with autocast(enabled=hps.train.fp16_run):
- # Generator
- y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat)
- with autocast(enabled=False):
- loss_dur = torch.sum(l_length.float())
- loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel
- loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl
-
- loss_fm = feature_loss(fmap_r, fmap_g)
- loss_gen, losses_gen = generator_loss(y_d_hat_g)
- loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl
- optim_g.zero_grad()
- scaler.scale(loss_gen_all).backward()
- scaler.unscale_(optim_g)
- grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None)
- scaler.step(optim_g)
- scaler.update()
-
- if rank==0:
- if global_step % hps.train.log_interval == 0:
- lr = optim_g.param_groups[0]['lr']
- losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl]
- logger.info('Train Epoch: {} [{:.0f}%]'.format(
- epoch,
- 100. * batch_idx / len(train_loader)))
- logger.info([x.item() for x in losses] + [global_step, lr])
-
- scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g}
- scalar_dict.update({"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl})
-
- scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)})
- scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)})
- scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)})
- image_dict = {
- "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()),
- "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()),
- "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()),
- "all/attn": utils.plot_alignment_to_numpy(attn[0,0].data.cpu().numpy())
- }
- utils.summarize(
- writer=writer,
- global_step=global_step,
- images=image_dict,
- scalars=scalar_dict)
-
- if global_step % hps.train.eval_interval == 0:
- evaluate(hps, net_g, eval_loader, writer_eval)
- utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "G_{}.pth".format(global_step)))
- utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "D_{}.pth".format(global_step)))
- old_g=os.path.join(hps.model_dir, "G_{}.pth".format(global_step-2000))
- old_d=os.path.join(hps.model_dir, "D_{}.pth".format(global_step-2000))
- if os.path.exists(old_g):
- os.remove(old_g)
- if os.path.exists(old_d):
- os.remove(old_d)
- global_step += 1
-
- if rank == 0:
- logger.info('====> Epoch: {}'.format(epoch))
-
-
-def evaluate(hps, generator, eval_loader, writer_eval):
- generator.eval()
- with torch.no_grad():
- for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers) in enumerate(eval_loader):
- x, x_lengths = x.cuda(0), x_lengths.cuda(0)
- spec, spec_lengths = spec.cuda(0), spec_lengths.cuda(0)
- y, y_lengths = y.cuda(0), y_lengths.cuda(0)
- speakers = speakers.cuda(0)
-
- # remove else
- x = x[:1]
- x_lengths = x_lengths[:1]
- spec = spec[:1]
- spec_lengths = spec_lengths[:1]
- y = y[:1]
- y_lengths = y_lengths[:1]
- speakers = speakers[:1]
- break
- y_hat, attn, mask, *_ = generator.module.infer(x, x_lengths, speakers, max_len=1000)
- y_hat_lengths = mask.sum([1,2]).long() * hps.data.hop_length
-
- mel = spec_to_mel_torch(
- spec,
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.mel_fmin,
- hps.data.mel_fmax)
- y_hat_mel = mel_spectrogram_torch(
- y_hat.squeeze(1).float(),
- hps.data.filter_length,
- hps.data.n_mel_channels,
- hps.data.sampling_rate,
- hps.data.hop_length,
- hps.data.win_length,
- hps.data.mel_fmin,
- hps.data.mel_fmax
- )
- image_dict = {
- "gen/mel": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy())
- }
- audio_dict = {
- "gen/audio": y_hat[0,:,:y_hat_lengths[0]]
- }
- if global_step == 0:
- image_dict.update({"gt/mel": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())})
- audio_dict.update({"gt/audio": y[0,:,:y_lengths[0]]})
-
- utils.summarize(
- writer=writer_eval,
- global_step=global_step,
- images=image_dict,
- audios=audio_dict,
- audio_sampling_rate=hps.data.sampling_rate
- )
- generator.train()
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/kangvcar/RealChar/realtime_ai_character/static/styles.css b/spaces/kangvcar/RealChar/realtime_ai_character/static/styles.css
deleted file mode 100644
index a1f4450bad0172a712c891313879cf46d0a8392d..0000000000000000000000000000000000000000
--- a/spaces/kangvcar/RealChar/realtime_ai_character/static/styles.css
+++ /dev/null
@@ -1,630 +0,0 @@
-body {
- background-color: #02081d;
- display: flex;
- flex-direction: column;
- justify-content: center;
- align-items: center;
- min-height: 100vh;
-}
-
-/* CSS for mobile warning message and desktop content */
-@media (max-width: 768px) {
- #desktop-content { display: none; }
- #mobile-warning { display: block; }
-}
-
-@media (min-width: 769px) {
- #desktop-content { display: flex; }
- #mobile-warning { display: none; }
-}
-
-#desktop-content {
- flex-direction: column;
- justify-content: center;
- align-items: center;
- flex-grow: 1;
-}
-
-#mobile-warning {
- color:white;
-}
-
-#mobile-warning p {
- margin-bottom: 10px;
-}
-
-.logo-container {
- margin-top: 50px;
- margin-bottom: 10px;
-}
-
-.header p {
- color: #cccccc;
- font-family: "Prompt", Helvetica;
- font-size: 25px;
- font-weight: 200;
- margin-top: 10px;
-}
-
-.recording {
- color: firebrick;
- display: none;
- padding-left: 1.2em;
- line-height: 1.5em;
-}
-
-.recording::before {
- content: '🔴';
- margin-right: 3px;
- animation: recording 600ms alternate infinite;
-}
-@keyframes recording {
- from { opacity: 1; }
- to { opacity: 0.2; }
-}
-
-.devices-container {
- display: flex;
- align-items: center;
- justify-content: center;
- flex-direction: column;
-}
-
-.player-container {
- display: none;
-}
-
-.audio-device-label {
- margin-top: 40px;
- margin-bottom: 20px;
- color: #e0e0e0;
-}
-
-.actions {
- display: flex;
- justify-content: center;
- gap: 30px; /* optional, adds some space between the buttons */
-}
-
-.main-screen {
- display: flex;
- align-items: center;
- justify-content: center;
- width: 50vw;
- height: 20vh;
-}
-
-.audio-player {
- display: none;
-}
-
-.alert {
- color: white;
-}
-
-.copyright,
-.disclaimer {
- color:#777;
- font-size: 15px;
-}
-
-.chat-window {
- background-color: #02081d;
- display: none;
- color: white;
- font-size: 17px;
- width: 100%;
- height: 100%;
- border: none;
- resize: none;
-}
-
-.talk-btn,
-.send-btn,
-.text-btn,
-.connect,
-.select-character {
- font-family: "Prompt", Helvetica;
- font-size: 1rem;
- border-color: #6785d3;
- color: #fff;
- box-shadow: 0 0 10px 0 #6785d3 inset, 0 0 10px 4px #6785d3;
- transition: all 150ms ease-in-out;
- cursor: pointer;
- background-color: transparent;
- padding: 0.6em 2em;
- border-radius: 1.5em;
-}
-
-.talk-btn:hover,
-.send-btn:hover,
-.connect:hover,
-.text-btn:hover {
- box-shadow: 0 0 40px 40px #6785d3 inset, 0 0 0 0 #6785d3;
- outline: 0;
-}
-
-.talk-btn,
-.send-btn,
-.text-btn {
- display: none;
-}
-
-.talk-btn[disabled],
-.text-btn[disabled] {
- background-color: #999999;
- cursor: not-allowed;
- box-shadow: none;
-}
-
-.talk-btn,
-.text-btn,
-.connect {
- margin-top: 10px;
-}
-
-.action-container {
- display: flex;
- align-items: center;
- justify-content: center;
- flex-direction: column;
-}
-
-.options-container {
- display: flex;
- align-items: center;
- justify-content: center;
- padding: 20px 40px;
- bottom: 0;
- width: 100%;
-}
-
-.disconnect,
-.call {
- margin-right: 20px;
-}
-
-.disconnect,
-.call,
-.message {
- display: none;
- align-items: center;
- justify-content: center;
- background-color: #bccffe1a;
- border-radius: 50px;
- padding: 8px;
- height: 50px;
- width: 50px;
- cursor: pointer;
-}
-
-.call:hover {
- background-color: #0149ff;
-}
-
-.message:hover {
- background-color: #28c235;
-}
-
-.disconnect:hover {
- background-color: #ff4242;
-}
-
-.continue-call,
-.stop-call {
- display: none;
-}
-
-.stop-call {
- background-color: white;
- border-radius: 50px;
-}
-
-.stop-call,
-.continue-call,
-.disconnect,
-.message,
-.call {
- cursor: pointer;
- transition: transform 0.3s ease-in-out, background-color 0.3s ease-in-out;
-}
-
-.icon-instance-node {
- width: 3em;
- height: 3em;
- transition: transform 0.3s ease-in-out, filter 0.3s ease-in-out;
-}
-
-.icon-instance-node-small {
- width: 1.3em;
- height: 1.3em;
- transition: transform 0.3s ease-in-out, filter 0.3s ease-in-out;
-}
-
-.continue-call:hover,
-.stop-call:hover,
-.disconnect:hover,
-.message:hover,
-.call:hover {
- transform: scale(1.1);
-}
-
-.continue-call:hover .icon-instance-node,
-.stop-call:hover .icon-instance-node,
-.disconnect:hover .icon-instance-node-small,
-.message:hover .icon-instance-node-small,
-.call:hover .icon-instance-node-small{
- filter: brightness(2);
-}
-
-
-footer {
- padding: 10px;
- display: flex;
- justify-content: center;
- align-items: center;
- flex-direction: column;
-}
-
-.svg-inline--fa {
- vertical-align: -0.200em;
-}
-
-.rounded-social-buttons {
- text-align: center;
- margin-bottom: 10px;
-}
-
-.rounded-social-buttons .social-button {
- display: inline-block;
- position: relative;
- cursor: pointer;
- width: 3.125rem;
- height: 3.125rem;
- border: 0.125rem solid transparent;
- padding: 0;
- text-decoration: none;
- text-align: center;
- color: #fefefe;
- font-size: 1.5625rem;
- font-weight: normal;
- line-height: 2em;
- border-radius: 1.6875rem;
- transition: all 0.5s ease;
- margin-right: 0.25rem;
- margin-bottom: 0.25rem;
-}
-
-.rounded-social-buttons .social-button:hover, .rounded-social-buttons .social-button:focus {
- -webkit-transform: rotate(360deg);
- -ms-transform: rotate(360deg);
- transform: rotate(360deg);
-}
-
-.rounded-social-buttons .fa-twitter, .fa-facebook-f, .fa-linkedin, .fa-youtube, .fa-instagram, .fa-github, .fa-discord {
- font-size: 25px;
-}
-
-.rounded-social-buttons .social-button.facebook {
- background: #3b5998;
-}
-
-.rounded-social-buttons .social-button.facebook:hover, .rounded-social-buttons .social-button.facebook:focus {
- color: #3b5998;
- background: #fefefe;
- border-color: #3b5998;
-}
-
-.rounded-social-buttons .social-button.twitter {
- background: #bccffe1a;
-}
-
-.rounded-social-buttons .social-button.twitter:hover, .rounded-social-buttons .social-button.twitter:focus {
- color: #55acee;
- background: #fefefe;
- border-color: #55acee;
-}
-
-.rounded-social-buttons .social-button.linkedin {
- background: #007bb5;
-}
-
-.rounded-social-buttons .social-button.linkedin:hover, .rounded-social-buttons .social-button.linkedin:focus {
- color: #007bb5;
- background: #fefefe;
- border-color: #007bb5;
-}
-
-.rounded-social-buttons .social-button.github {
- background: #bccffe1a;
-}
-
-.rounded-social-buttons .social-button.github:hover, .rounded-social-buttons .social-button.github:focus {
- color: #bb0000;
- background: #fefefe;
- border-color: #bb0000;
-}
-
-.rounded-social-buttons .social-button.discord {
- background: #bccffe1a;
-}
-
-.rounded-social-buttons .social-button.discord:hover, .rounded-social-buttons .social-button.discord:focus {
- color: #125688;
- background: #fefefe;
- border-color: #125688;
-}
-
-/* character select radio groups */
-@import url("https://fonts.googleapis.com/css?family=Raleway:300,400,400i,700");
-*
-{
- margin:0;
- padding:0;
- box-sizing:border-box;
-}
-body
-{
- font-family: Raleway, sans-serif;
-}
-
-
-/* Characters Group */
-.main-container
-{
- margin-top: 10px;
-}
-
-.radio-buttons
-{
- width: 100%;
- margin: 0 auto;
- text-align: center;
-}
-
-.custom-radio input
-{
- display: none;
-}
-
-.radio-btn
-{
- margin: 8px;
- width: 176px;
- height: 192px;
- border: 2.4px solid transparent;
- display: inline-block;
- border-radius: 8px;
- position: relative;
- text-align: center;
- box-shadow: 0 0 16px #c3c3c367;
- cursor: pointer;
-}
-
-.radio-btn > i {
- color: #ffffff;
- background-color: #FFDAE9;
- font-size: 16px;
- position: absolute;
- top: -12px;
- left: 50%;
- transform: translateX(-50%) scale(1.6);
- border-radius: 40px;
- padding: 2.4px;
- transition: 0.5s;
- pointer-events: none;
- opacity: 0;
-}
-
-.radio-btn .hobbies-icon
-{
- width: 120px;
- height: 120px;
- position: absolute;
- top: 40%;
- left: 50%;
- transform: translate(-50%, -50%);
-}
-.radio-btn .hobbies-icon img
-{
- display:block;
- width:100%;
- margin-bottom:16px;
-
-}
-.radio-btn .hobbies-icon i
-{
- color: #FFDAE9;
- line-height: 64px;
- font-size: 48px;
-}
-
-.radio-btn .hobbies-icon h4
-{
- color: rgb(214, 214, 214);
- font-size: 12px;
- font-weight: 300;
- text-transform: uppercase;
- letter-spacing:0.8px;
-}
-
-.custom-radio input:checked + .radio-btn
-{
- border: 1.6px solid #FFDAE9;
-}
-
-.custom-radio input:checked + .radio-btn > i
-{
- opacity: 1;
- transform: translateX(-50%) scale(0.8);
-}
-
-@keyframes pulse {
- 0%,
- 100% {
- box-shadow: 0 0 0 0 rgba(173, 216, 230, 0.4);
- }
- 25% {
- box-shadow: 0 0 0 10px rgba(173, 216, 230, 0.15);
- }
- 50% {
- box-shadow: 0 0 0 20px rgba(173, 216, 230, 0.55);
- }
- 75% {
- box-shadow: 0 0 0 10px rgba(173, 216, 230, 0.25);
- }
-}
-
-.pulse-animation-1 {
- animation: pulse 1.5s infinite ease-in-out;
-}
-
-.pulse-animation-2 {
- animation: pulse 2.2s infinite ease-in-out;
-}
-
-/* media devices select */
-.select-dropdown,
-.select-dropdown * {
- margin: 0;
- padding: 0;
- position: relative;
- box-sizing: border-box;
-}
-.select-dropdown {
- position: relative;
- background-color: #02081d;
- border-radius: 4px;
-}
-.select-dropdown select {
- font-size: 1rem;
- font-weight: normal;
- color: white;
- max-width: 100%;
- padding: 8px 24px 8px 10px;
- border-radius: 10px;
- background-color: transparent;
- -webkit-appearance: none;
- -moz-appearance: none;
- appearance: none;
-}
-.select-dropdown select:active, .select-dropdown select:focus {
- outline: none;
- box-shadow: none;
-}
-.select-dropdown:after {
- content: "";
- position: absolute;
- top: 50%;
- right: 8px;
- width: 0;
- height: 0;
- margin-top: -2px;
- border-top: 5px solid #aaa;
- border-right: 5px solid transparent;
- border-left: 5px solid transparent;
-}
-
-/* text input */
-input[type="text"]{font: 15px/24px 'Muli', sans-serif; color: white; width: 100%; box-sizing: border-box; letter-spacing: 1px;}
-:focus{outline: none;}
-.message-input-container{
- float: left;
- width: 50vw;
- margin: 15px 3%;
- position: relative;}
-input[type="text"]{font: 15px/24px "Lato", Arial, sans-serif; color: white; width: 100%; box-sizing: border-box; letter-spacing: 1px;}
-.message-input {
- border: 1px solid #ccc;
- border-radius: 5px;
- padding: 7px 14px 9px;
- transition: 0.4s;
- font-size: 20px;
- display: none;
- color: white;
- background-color: transparent;
-}
-
-.message-input ~ .focus-border:before,
-.message-input ~ .focus-border:after{content: ""; position: absolute; top: 0; left: 0; width: 0; height: 2px; background-color: #85a7ff; transition: 0.3s;}
-.message-input ~ .focus-border:after{top: auto; bottom: 0; left: auto; right: 0;}
-.message-input ~ .focus-border i:before,
-.message-input ~ .focus-border i:after{content: ""; position: absolute; top: 0; left: 0; width: 2px; height: 0; background-color: #85a7ff; transition: 0.4s;}
-.message-input ~ .focus-border i:after{left: auto; right: 0; top: auto; bottom: 0;}
-.message-input:focus ~ .focus-border:before,
-.message-input:focus ~ .focus-border:after{width: 100%; transition: 0.3s;}
-.message-input:focus ~ .focus-border i:before,
-.message-input:focus ~ .focus-border i:after{height: 100%; transition: 0.4s;}
-
-
-#bars {
- height: 30px;
- left: 50%;
- margin: -30px 0 0 -40px;
- position: absolute;
- top: 60%;
- width: 80px;
-}
-
-@keyframes audio-wave {
-  0%{
-    height: 10px;
-    transform: translateY(0px);
-    background: #1F4FCC;
-  }
-  25%{
-    height: 40px;
-    transform: translateY(-5px) scaleY(1.7);
-    background:#6785D3;
-  }
-  50%{
-    height: 10px;
-    transform: translateY(0px) scaleY(1.7);
-    background: #C2D3FF;
-  }
-  100%{
-    height: 10px;
-    transform: translateY(0px) scaleY(1.7);
-    background: #EEF3FF;
-  }
-}
-
-.sound-wave{
- display:flex;
- justify-content: center;
- align-items:center;
- gap:8px;
- height:60px
-}
-.sound-wave span{
- height:18px;
- width:10px;
- display:block;
- border-radius:8px;
- background:#BEC5D9;
- animation:audio-wave 2.2s infinite ease-in-out
-}
-.sound-wave span:nth-child(2) {
- left:11px;
- background:#FFFFFF;
- animation-delay:0.2s
-}
-.sound-wave span:nth-child(3){
- left:22px;
- animation-delay:0.4s
-}
-.sound-wave span:nth-child(4){
- left:33px;
- animation-delay:0.6s
-}
-.sound-wave span:nth-child(5){
- left:44px;
- animation-delay:0.8s
-}
-.sound-wave span:nth-child(6){
- left:55px;
- animation-delay:1s
-}
diff --git a/spaces/katanaml-org/sparrow-ui/tools/utilities.py b/spaces/katanaml-org/sparrow-ui/tools/utilities.py
deleted file mode 100644
index cf129082185f80763cd97d19336c42a4e10a661c..0000000000000000000000000000000000000000
--- a/spaces/katanaml-org/sparrow-ui/tools/utilities.py
+++ /dev/null
@@ -1,9 +0,0 @@
-import streamlit as st
-
-
-def load_css():
- with open("tools/style.css") as f:
-        st.markdown('<style>{}</style>'.format(f.read()), unsafe_allow_html=True)
- st.markdown(
- '',
- unsafe_allow_html=True)
\ No newline at end of file
diff --git a/spaces/kdrkdrkdr/ProsekaTTS/monotonic_align/core.py b/spaces/kdrkdrkdr/ProsekaTTS/monotonic_align/core.py
deleted file mode 100644
index 1f940605fe4fd0738fa0006149fcba14ef88223a..0000000000000000000000000000000000000000
--- a/spaces/kdrkdrkdr/ProsekaTTS/monotonic_align/core.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import numba
-
-
-@numba.jit(numba.void(numba.int32[:, :, ::1], numba.float32[:, :, ::1], numba.int32[::1], numba.int32[::1]),
- nopython=True, nogil=True)
-def maximum_path_jit(paths, values, t_ys, t_xs):
- b = paths.shape[0]
- max_neg_val = -1e9
- for i in range(int(b)):
- path = paths[i]
- value = values[i]
- t_y = t_ys[i]
- t_x = t_xs[i]
-
- v_prev = v_cur = 0.0
- index = t_x - 1
-
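-        # Forward pass: value[y, x] accumulates the best monotonic-alignment
-        # score ending at text index x and frame y, taking the better of
-        # "stay on x" (value[y - 1, x]) and "advance from x - 1"
-        # (value[y - 1, x - 1]), with impossible predecessors set to max_neg_val.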
- for y in range(t_y):
- for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
- if x == y:
- v_cur = max_neg_val
- else:
- v_cur = value[y - 1, x]
- if x == 0:
- if y == 0:
- v_prev = 0.
- else:
- v_prev = max_neg_val
- else:
- v_prev = value[y - 1, x - 1]
- value[y, x] += max(v_prev, v_cur)
-
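-        # Backtracking: walk frames from last to first, mark the chosen text
-        # index in the path, and move left when forced (index == y) or when the
-        # diagonal predecessor scored strictly higher.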
- for y in range(t_y - 1, -1, -1):
- path[y, index] = 1
- if index != 0 and (index == y or value[y - 1, index] < value[y - 1, index - 1]):
- index = index - 1
diff --git a/spaces/kenjiqq/aesthetics-scorer/README.md b/spaces/kenjiqq/aesthetics-scorer/README.md
deleted file mode 100644
index f8955364c4a700a1e733cf89e150409f17206b5b..0000000000000000000000000000000000000000
--- a/spaces/kenjiqq/aesthetics-scorer/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Aesthetics Scorer
-emoji: 📊
-colorFrom: indigo
-colorTo: green
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/README.md b/spaces/kevinwang676/ChatGLM2-SadTalker-VC/README.md
deleted file mode 100644
index 11ac270bb1189d87dda3577f18fc4aedc986e1c8..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: ChatGLM2-SadTalker
-emoji: 📺
-colorFrom: purple
-colorTo: green
-sdk: gradio
-sdk_version: 3.23.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/models/arcface_torch/utils/plot.py b/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/models/arcface_torch/utils/plot.py
deleted file mode 100644
index ccc588e5c01ca550b69c385aeb3fd139c59fb88a..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/models/arcface_torch/utils/plot.py
+++ /dev/null
@@ -1,72 +0,0 @@
-# coding: utf-8
-
-import os
-from pathlib import Path
-
-import matplotlib.pyplot as plt
-import numpy as np
-import pandas as pd
-from menpo.visualize.viewmatplotlib import sample_colours_from_colourmap
-from prettytable import PrettyTable
-from sklearn.metrics import roc_curve, auc
-
-image_path = "/data/anxiang/IJB_release/IJBC"
-files = [
- "./ms1mv3_arcface_r100/ms1mv3_arcface_r100/ijbc.npy"
-]
-
-
-def read_template_pair_list(path):
- pairs = pd.read_csv(path, sep=' ', header=None).values
-    t1 = pairs[:, 0].astype(int)
-    t2 = pairs[:, 1].astype(int)
-    label = pairs[:, 2].astype(int)
- return t1, t2, label
-
-
-p1, p2, label = read_template_pair_list(
- os.path.join('%s/meta' % image_path,
- '%s_template_pair_label.txt' % 'ijbc'))
-
-methods = []
-scores = []
-for file in files:
- methods.append(file.split('/')[-2])
- scores.append(np.load(file))
-
-methods = np.array(methods)
-scores = dict(zip(methods, scores))
-colours = dict(
- zip(methods, sample_colours_from_colourmap(methods.shape[0], 'Set2')))
-x_labels = [10 ** -6, 10 ** -5, 10 ** -4, 10 ** -3, 10 ** -2, 10 ** -1]
-tpr_fpr_table = PrettyTable(['Methods'] + [str(x) for x in x_labels])
-fig = plt.figure()
-for method in methods:
- fpr, tpr, _ = roc_curve(label, scores[method])
- roc_auc = auc(fpr, tpr)
- fpr = np.flipud(fpr)
- tpr = np.flipud(tpr) # select largest tpr at same fpr
- plt.plot(fpr,
- tpr,
- color=colours[method],
- lw=1,
- label=('[%s (AUC = %0.4f %%)]' %
- (method.split('-')[-1], roc_auc * 100)))
- tpr_fpr_row = []
- tpr_fpr_row.append("%s-%s" % (method, "IJBC"))
- for fpr_iter in np.arange(len(x_labels)):
- _, min_index = min(
- list(zip(abs(fpr - x_labels[fpr_iter]), range(len(fpr)))))
- tpr_fpr_row.append('%.2f' % (tpr[min_index] * 100))
- tpr_fpr_table.add_row(tpr_fpr_row)
-plt.xlim([10 ** -6, 0.1])
-plt.ylim([0.3, 1.0])
-plt.grid(linestyle='--', linewidth=1)
-plt.xticks(x_labels)
-plt.yticks(np.linspace(0.3, 1.0, 8, endpoint=True))
-plt.xscale('log')
-plt.xlabel('False Positive Rate')
-plt.ylabel('True Positive Rate')
-plt.title('ROC on IJB')
-plt.legend(loc="lower right")
-print(tpr_fpr_table)
diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker/speaker_encoder/hparams.py b/spaces/kevinwang676/ChatGLM2-SadTalker/speaker_encoder/hparams.py
deleted file mode 100644
index 9a8c16471903b0c92253b1d70fcd6a61d10e085f..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/ChatGLM2-SadTalker/speaker_encoder/hparams.py
+++ /dev/null
@@ -1,31 +0,0 @@
-## Mel-filterbank
-mel_window_length = 25 # In milliseconds
-mel_window_step = 10 # In milliseconds
-mel_n_channels = 40
-
-
-## Audio
-sampling_rate = 16000
-# Number of spectrogram frames in a partial utterance
-partials_n_frames = 160 # 1600 ms
-
-
-## Voice Activation Detection
-# Window size of the VAD. Must be either 10, 20 or 30 milliseconds.
-# This sets the granularity of the VAD. Should not need to be changed.
-vad_window_length = 30 # In milliseconds
-# Number of frames to average together when performing the moving average smoothing.
-# The larger this value, the larger the VAD variations must be to not get smoothed out.
-vad_moving_average_width = 8
-# Maximum number of consecutive silent frames a segment can have.
-vad_max_silence_length = 6
-
-
-## Audio volume normalization
-audio_norm_target_dBFS = -30
-
-
-## Model parameters
-model_hidden_size = 256
-model_embedding_size = 256
-model_num_layers = 3
\ No newline at end of file
diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/decode_heads/ann_head.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/decode_heads/ann_head.py
deleted file mode 100644
index 30aaacc2cafc568d3de71d1477b4de0dc0fea9d3..0000000000000000000000000000000000000000
--- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/decode_heads/ann_head.py
+++ /dev/null
@@ -1,245 +0,0 @@
-import torch
-import torch.nn as nn
-from annotator.uniformer.mmcv.cnn import ConvModule
-
-from ..builder import HEADS
-from ..utils import SelfAttentionBlock as _SelfAttentionBlock
-from .decode_head import BaseDecodeHead
-
-
-class PPMConcat(nn.ModuleList):
- """Pyramid Pooling Module that only concat the features of each layer.
-
- Args:
- pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid
- Module.
- """
-
- def __init__(self, pool_scales=(1, 3, 6, 8)):
- super(PPMConcat, self).__init__(
- [nn.AdaptiveAvgPool2d(pool_scale) for pool_scale in pool_scales])
-
- def forward(self, feats):
- """Forward function."""
- ppm_outs = []
- for ppm in self:
- ppm_out = ppm(feats)
- ppm_outs.append(ppm_out.view(*feats.shape[:2], -1))
- concat_outs = torch.cat(ppm_outs, dim=2)
- return concat_outs
-
-
-class SelfAttentionBlock(_SelfAttentionBlock):
- """Make a ANN used SelfAttentionBlock.
-
- Args:
- low_in_channels (int): Input channels of lower level feature,
- which is the key feature for self-attention.
- high_in_channels (int): Input channels of higher level feature,
- which is the query feature for self-attention.
- channels (int): Output channels of key/query transform.
- out_channels (int): Output channels.
- share_key_query (bool): Whether share projection weight between key
- and query projection.
- query_scale (int): The scale of query feature map.
- key_pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid
- Module of key feature.
- conv_cfg (dict|None): Config of conv layers.
- norm_cfg (dict|None): Config of norm layers.
- act_cfg (dict|None): Config of activation layers.
- """
-
- def __init__(self, low_in_channels, high_in_channels, channels,
- out_channels, share_key_query, query_scale, key_pool_scales,
- conv_cfg, norm_cfg, act_cfg):
- key_psp = PPMConcat(key_pool_scales)
- if query_scale > 1:
- query_downsample = nn.MaxPool2d(kernel_size=query_scale)
- else:
- query_downsample = None
- super(SelfAttentionBlock, self).__init__(
- key_in_channels=low_in_channels,
- query_in_channels=high_in_channels,
- channels=channels,
- out_channels=out_channels,
- share_key_query=share_key_query,
- query_downsample=query_downsample,
- key_downsample=key_psp,
- key_query_num_convs=1,
- key_query_norm=True,
- value_out_num_convs=1,
- value_out_norm=False,
- matmul_norm=True,
- with_out=True,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg)
-
-
-class AFNB(nn.Module):
- """Asymmetric Fusion Non-local Block(AFNB)
-
- Args:
- low_in_channels (int): Input channels of lower level feature,
- which is the key feature for self-attention.
- high_in_channels (int): Input channels of higher level feature,
- which is the query feature for self-attention.
- channels (int): Output channels of key/query transform.
- out_channels (int): Output channels.
- query_scales (tuple[int]): The scales of query feature map.
- Default: (1,)
- key_pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid
- Module of key feature.
- conv_cfg (dict|None): Config of conv layers.
- norm_cfg (dict|None): Config of norm layers.
- act_cfg (dict|None): Config of activation layers.
- """
-
- def __init__(self, low_in_channels, high_in_channels, channels,
- out_channels, query_scales, key_pool_scales, conv_cfg,
- norm_cfg, act_cfg):
- super(AFNB, self).__init__()
- self.stages = nn.ModuleList()
- for query_scale in query_scales:
- self.stages.append(
- SelfAttentionBlock(
- low_in_channels=low_in_channels,
- high_in_channels=high_in_channels,
- channels=channels,
- out_channels=out_channels,
- share_key_query=False,
- query_scale=query_scale,
- key_pool_scales=key_pool_scales,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg))
- self.bottleneck = ConvModule(
- out_channels + high_in_channels,
- out_channels,
- 1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=None)
-
- def forward(self, low_feats, high_feats):
- """Forward function."""
- priors = [stage(high_feats, low_feats) for stage in self.stages]
- context = torch.stack(priors, dim=0).sum(dim=0)
- output = self.bottleneck(torch.cat([context, high_feats], 1))
- return output
-
-
-class APNB(nn.Module):
- """Asymmetric Pyramid Non-local Block (APNB)
-
- Args:
- in_channels (int): Input channels of key/query feature,
- which is the key feature for self-attention.
- channels (int): Output channels of key/query transform.
- out_channels (int): Output channels.
- query_scales (tuple[int]): The scales of query feature map.
- Default: (1,)
- key_pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid
- Module of key feature.
- conv_cfg (dict|None): Config of conv layers.
- norm_cfg (dict|None): Config of norm layers.
- act_cfg (dict|None): Config of activation layers.
- """
-
- def __init__(self, in_channels, channels, out_channels, query_scales,
- key_pool_scales, conv_cfg, norm_cfg, act_cfg):
- super(APNB, self).__init__()
- self.stages = nn.ModuleList()
- for query_scale in query_scales:
- self.stages.append(
- SelfAttentionBlock(
- low_in_channels=in_channels,
- high_in_channels=in_channels,
- channels=channels,
- out_channels=out_channels,
- share_key_query=True,
- query_scale=query_scale,
- key_pool_scales=key_pool_scales,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg))
- self.bottleneck = ConvModule(
- 2 * in_channels,
- out_channels,
- 1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg)
-
- def forward(self, feats):
- """Forward function."""
- priors = [stage(feats, feats) for stage in self.stages]
- context = torch.stack(priors, dim=0).sum(dim=0)
- output = self.bottleneck(torch.cat([context, feats], 1))
- return output
-
-
-@HEADS.register_module()
-class ANNHead(BaseDecodeHead):
- """Asymmetric Non-local Neural Networks for Semantic Segmentation.
-
-    This head is the implementation of `ANNNet
-    <https://arxiv.org/abs/1908.07678>`_.
-
- Args:
- project_channels (int): Projection channels for Nonlocal.
- query_scales (tuple[int]): The scales of query feature map.
- Default: (1,)
- key_pool_scales (tuple[int]): The pooling scales of key feature map.
- Default: (1, 3, 6, 8).
- """
-
- def __init__(self,
- project_channels,
- query_scales=(1, ),
- key_pool_scales=(1, 3, 6, 8),
- **kwargs):
- super(ANNHead, self).__init__(
- input_transform='multiple_select', **kwargs)
- assert len(self.in_channels) == 2
- low_in_channels, high_in_channels = self.in_channels
- self.project_channels = project_channels
- self.fusion = AFNB(
- low_in_channels=low_in_channels,
- high_in_channels=high_in_channels,
- out_channels=high_in_channels,
- channels=project_channels,
- query_scales=query_scales,
- key_pool_scales=key_pool_scales,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
- self.bottleneck = ConvModule(
- high_in_channels,
- self.channels,
- 3,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
- self.context = APNB(
- in_channels=self.channels,
- out_channels=self.channels,
- channels=project_channels,
- query_scales=query_scales,
- key_pool_scales=key_pool_scales,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
-
- def forward(self, inputs):
- """Forward function."""
- low_feats, high_feats = self._transform_inputs(inputs)
- output = self.fusion(low_feats, high_feats)
- output = self.dropout(output)
- output = self.bottleneck(output)
- output = self.context(output)
- output = self.cls_seg(output)
-
- return output
diff --git a/spaces/kirch/Text2Video-Zero/model.py b/spaces/kirch/Text2Video-Zero/model.py
deleted file mode 100644
index 5dd4b17e3c12bc8015db4a523c8db4f63acc6ba1..0000000000000000000000000000000000000000
--- a/spaces/kirch/Text2Video-Zero/model.py
+++ /dev/null
@@ -1,304 +0,0 @@
-from enum import Enum
-import gc
-import numpy as np
-
-import torch
-import decord
-from diffusers import StableDiffusionInstructPix2PixPipeline, StableDiffusionControlNetPipeline, ControlNetModel, UNet2DConditionModel
-from diffusers.schedulers import EulerAncestralDiscreteScheduler, DDIMScheduler
-from text_to_video.text_to_video_pipeline import TextToVideoPipeline
-
-import utils
-import gradio_utils
-
-decord.bridge.set_bridge('torch')
-
-
-class ModelType(Enum):
-    Pix2Pix_Video = 1
-    Text2Video = 2
-    ControlNetCanny = 3
-    ControlNetCannyDB = 4
-    ControlNetPose = 5
-
-
-class Model:
- def __init__(self, device, dtype, **kwargs):
- self.device = device
- self.dtype = dtype
- self.generator = torch.Generator(device=device)
- self.pipe_dict = {
- ModelType.Pix2Pix_Video: StableDiffusionInstructPix2PixPipeline,
- ModelType.Text2Video: TextToVideoPipeline,
- ModelType.ControlNetCanny: StableDiffusionControlNetPipeline,
- ModelType.ControlNetCannyDB: StableDiffusionControlNetPipeline,
- ModelType.ControlNetPose: StableDiffusionControlNetPipeline,
- }
- self.controlnet_attn_proc = utils.CrossFrameAttnProcessor(unet_chunk_size=2)
- self.pix2pix_attn_proc = utils.CrossFrameAttnProcessor(unet_chunk_size=3)
- self.text2video_attn_proc = utils.CrossFrameAttnProcessor(unet_chunk_size=2)
-
- self.pipe = None
- self.model_type = None
-
- self.states = {}
-
- def set_model(self, model_type: ModelType, model_id: str, **kwargs):
- if self.pipe is not None:
- del self.pipe
- torch.cuda.empty_cache()
- gc.collect()
- safety_checker = kwargs.pop('safety_checker', None)
- self.pipe = self.pipe_dict[model_type].from_pretrained(model_id, safety_checker=safety_checker, **kwargs).to(self.device).to(self.dtype)
- self.model_type = model_type
-
- def inference_chunk(self, frame_ids, **kwargs):
- if self.pipe is None:
- return
- image = kwargs.pop('image')
- prompt = np.array(kwargs.pop('prompt'))
- negative_prompt = np.array(kwargs.pop('negative_prompt', ''))
- latents = None
- if 'latents' in kwargs:
- latents = kwargs.pop('latents')[frame_ids]
- return self.pipe(image=image[frame_ids],
- prompt=prompt[frame_ids].tolist(),
- negative_prompt=negative_prompt[frame_ids].tolist(),
- latents=latents,
- generator=self.generator,
- **kwargs)
-
- def inference(self, split_to_chunks=False, chunk_size=8, **kwargs):
- if self.pipe is None:
- return
- seed = kwargs.pop('seed', 0)
- kwargs.pop('generator', '')
- # self.generator.manual_seed(seed)
- if split_to_chunks:
- assert 'image' in kwargs
- assert 'prompt' in kwargs
- image = kwargs.pop('image')
- prompt = kwargs.pop('prompt')
- negative_prompt = kwargs.pop('negative_prompt', '')
- f = image.shape[0]
- chunk_ids = np.arange(0, f, chunk_size - 1)
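-            # Process frames in windows of (chunk_size - 1); frame 0 is prepended
-            # to every window below so each call also sees the first frame (the
-            # reference used by the cross-frame attention processors) and that
-            # duplicate is dropped again via `.images[1:]`.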
- result = []
- for i in range(len(chunk_ids)):
- ch_start = chunk_ids[i]
- ch_end = f if i == len(chunk_ids) - 1 else chunk_ids[i + 1]
- frame_ids = [0] + list(range(ch_start, ch_end))
- self.generator.manual_seed(seed)
- print(f'Processing chunk {i + 1} / {len(chunk_ids)}')
- result.append(self.inference_chunk(frame_ids=frame_ids,
- image=image,
- prompt=[prompt] * f,
- negative_prompt=[negative_prompt] * f,
- **kwargs).images[1:])
- result = np.concatenate(result)
- return result
- else:
- return self.pipe(generator=self.generator, **kwargs).videos[0]
-
- def process_controlnet_canny(self,
- video_path,
- prompt,
- num_inference_steps=20,
- controlnet_conditioning_scale=1.0,
- guidance_scale=9.0,
- seed=42,
- eta=0.0,
- low_threshold=100,
- high_threshold=200,
- resolution=512):
- video_path = gradio_utils.edge_path_to_video_path(video_path)
- if self.model_type != ModelType.ControlNetCanny:
- controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny")
- self.set_model(ModelType.ControlNetCanny, model_id="runwayml/stable-diffusion-v1-5", controlnet=controlnet)
- self.pipe.scheduler = DDIMScheduler.from_config(self.pipe.scheduler.config)
- self.pipe.unet.set_attn_processor(processor=self.controlnet_attn_proc)
- self.pipe.controlnet.set_attn_processor(processor=self.controlnet_attn_proc)
-
- # TODO: Check scheduler
- added_prompt = 'best quality, extremely detailed'
- negative_prompts = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality'
-
- video, fps = utils.prepare_video(video_path, resolution, self.device, self.dtype, False, start_t=0, end_t=15)
- control = utils.pre_process_canny(video, low_threshold, high_threshold).to(self.device).to(self.dtype)
- f, _, h, w = video.shape
- self.generator.manual_seed(seed)
- latents = torch.randn((1, 4, h//8, w//8), dtype=self.dtype, device=self.device, generator=self.generator)
- latents = latents.repeat(f, 1, 1, 1)
- result = self.inference(image=control,
- prompt=prompt + ', ' + added_prompt,
- height=h,
- width=w,
- negative_prompt=negative_prompts,
- num_inference_steps=num_inference_steps,
- guidance_scale=guidance_scale,
- controlnet_conditioning_scale=controlnet_conditioning_scale,
- eta=eta,
- latents=latents,
- seed=seed,
- output_type='numpy',
- split_to_chunks=True,
- chunk_size=8,
- )
- return utils.create_video(result, fps)
-
- def process_controlnet_pose(self,
- video_path,
- prompt,
- num_inference_steps=20,
- controlnet_conditioning_scale=1.0,
- guidance_scale=9.0,
- seed=42,
- eta=0.0,
- resolution=512):
- video_path = gradio_utils.motion_to_video_path(video_path)
- if self.model_type != ModelType.ControlNetPose:
- controlnet = ControlNetModel.from_pretrained("fusing/stable-diffusion-v1-5-controlnet-openpose")
- self.set_model(ModelType.ControlNetPose, model_id="runwayml/stable-diffusion-v1-5", controlnet=controlnet)
- self.pipe.scheduler = DDIMScheduler.from_config(self.pipe.scheduler.config)
- self.pipe.unet.set_attn_processor(processor=self.controlnet_attn_proc)
- self.pipe.controlnet.set_attn_processor(processor=self.controlnet_attn_proc)
-
- added_prompt = 'best quality, extremely detailed, HD, ultra-realistic, 8K, HQ, masterpiece, trending on artstation, art, smooth'
-        negative_prompts = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, deformed body, bloated, ugly, unrealistic'
-
- video, fps = utils.prepare_video(video_path, resolution, self.device, self.dtype, False, output_fps=4)
- control = utils.pre_process_pose(video, apply_pose_detect=False).to(self.device).to(self.dtype)
- f, _, h, w = video.shape
- self.generator.manual_seed(seed)
- latents = torch.randn((1, 4, h//8, w//8), dtype=self.dtype, device=self.device, generator=self.generator)
- latents = latents.repeat(f, 1, 1, 1)
- result = self.inference(image=control,
- prompt=prompt + ', ' + added_prompt,
- height=h,
- width=w,
- negative_prompt=negative_prompts,
- num_inference_steps=num_inference_steps,
- guidance_scale=guidance_scale,
- controlnet_conditioning_scale=controlnet_conditioning_scale,
- eta=eta,
- latents=latents,
- seed=seed,
- output_type='numpy',
- split_to_chunks=True,
- chunk_size=8,
- )
- return utils.create_gif(result, fps)
- # return utils.create_video(result, fps)
-
- def process_controlnet_canny_db(self,
- db_path,
- video_path,
- prompt,
- num_inference_steps=20,
- controlnet_conditioning_scale=1.0,
- guidance_scale=9.0,
- seed=42,
- eta=0.0,
- low_threshold=100,
- high_threshold=200,
- resolution=512):
- db_path = gradio_utils.get_model_from_db_selection(db_path)
- video_path = gradio_utils.get_video_from_canny_selection(video_path)
- # Load db and controlnet weights
- if 'db_path' not in self.states or db_path != self.states['db_path']:
- controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny")
- self.set_model(ModelType.ControlNetCannyDB, model_id=db_path, controlnet=controlnet)
- self.pipe.scheduler = DDIMScheduler.from_config(self.pipe.scheduler.config)
- self.pipe.unet.set_attn_processor(processor=self.controlnet_attn_proc)
- self.pipe.controlnet.set_attn_processor(processor=self.controlnet_attn_proc)
- self.states['db_path'] = db_path
-
- added_prompt = 'best quality, extremely detailed'
- negative_prompts = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality'
-
- video, fps = utils.prepare_video(video_path, resolution, self.device, self.dtype, False)
- control = utils.pre_process_canny(video, low_threshold, high_threshold).to(self.device).to(self.dtype)
- f, _, h, w = video.shape
- self.generator.manual_seed(seed)
- latents = torch.randn((1, 4, h//8, w//8), dtype=self.dtype, device=self.device, generator=self.generator)
- latents = latents.repeat(f, 1, 1, 1)
- result = self.inference(image=control,
- prompt=prompt + ', ' + added_prompt,
- height=h,
- width=w,
- negative_prompt=negative_prompts,
- num_inference_steps=num_inference_steps,
- guidance_scale=guidance_scale,
- controlnet_conditioning_scale=controlnet_conditioning_scale,
- eta=eta,
- latents=latents,
- seed=seed,
- output_type='numpy',
- split_to_chunks=True,
- chunk_size=8,
- )
- return utils.create_gif(result, fps)
-
- def process_pix2pix(self, video, prompt, resolution=512, seed=0, start_t=0, end_t=-1, out_fps=-1):
- end_t = start_t+15
- if self.model_type != ModelType.Pix2Pix_Video:
- self.set_model(ModelType.Pix2Pix_Video, model_id="timbrooks/instruct-pix2pix")
- self.pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(self.pipe.scheduler.config)
- self.pipe.unet.set_attn_processor(processor=self.pix2pix_attn_proc)
- video, fps = utils.prepare_video(video, resolution, self.device, self.dtype, True, start_t, end_t, out_fps)
- self.generator.manual_seed(seed)
- result = self.inference(image=video,
- prompt=prompt,
- seed=seed,
- output_type='numpy',
- num_inference_steps=50,
- image_guidance_scale=1.5,
- split_to_chunks=True,
- chunk_size=8,
- )
- return utils.create_video(result, fps)
-
- def process_text2video(self, prompt, motion_field_strength_x=12,motion_field_strength_y=12, n_prompt="", resolution=512, seed=24, num_frames=8, fps=2, t0=881, t1=941,
- use_cf_attn=True, use_motion_field=True,
- smooth_bg=False, smooth_bg_strength=0.4 ):
-
- if self.model_type != ModelType.Text2Video:
- unet = UNet2DConditionModel.from_pretrained('runwayml/stable-diffusion-v1-5', subfolder="unet")
- self.set_model(ModelType.Text2Video, model_id="runwayml/stable-diffusion-v1-5", unet=unet)
- self.pipe.scheduler = DDIMScheduler.from_config(self.pipe.scheduler.config)
- if use_cf_attn:
- self.pipe.unet.set_attn_processor(processor=self.text2video_attn_proc)
- self.generator.manual_seed(seed)
-
-
- added_prompt = "high quality, HD, 8K, trending on artstation, high focus, dramatic lighting"
-        negative_prompts = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, deformed body, bloated, ugly, unrealistic'
-
- prompt = prompt.rstrip()
- if len(prompt) > 0 and (prompt[-1] == "," or prompt[-1] == "."):
- prompt = prompt.rstrip()[:-1]
- prompt = prompt.rstrip()
- prompt = prompt + ", "+added_prompt
- if len(n_prompt)>0:
- negative_prompt = [n_prompt]
- else:
- negative_prompt = None
-
- result = self.inference(prompt=[prompt],
- video_length=num_frames,
- height=resolution,
- width=resolution,
- num_inference_steps=50,
- guidance_scale=7.5,
- guidance_stop_step=1.0,
- t0=t0,
- t1=t1,
- motion_field_strength_x=motion_field_strength_x,
- motion_field_strength_y=motion_field_strength_y,
- use_motion_field=use_motion_field,
- smooth_bg=smooth_bg,
- smooth_bg_strength=smooth_bg_strength,
- seed=seed,
- output_type='numpy',
- negative_prompt = negative_prompt,
- )
- return utils.create_video(result, fps)
\ No newline at end of file
diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/m2m_100/tokenizers/README.md b/spaces/koajoel/PolyFormer/fairseq/examples/m2m_100/tokenizers/README.md
deleted file mode 100644
index e116932bc80572f221cff6472a7b1eea7032925d..0000000000000000000000000000000000000000
--- a/spaces/koajoel/PolyFormer/fairseq/examples/m2m_100/tokenizers/README.md
+++ /dev/null
@@ -1,18 +0,0 @@
-# M2M-100 Tokenization
-
-Following the existing literature, we apply different tokenization strategies for different languages. Here we provide tok.sh, a tokenizer script that can be used to reproduce our results.
-
-To reproduce the results, follow these steps:
-
-```
-tgt_lang=...
-reference_translation=...
-cat generation_output | grep -P "^H" | sort -V | cut -f 3- | sh tok.sh $tgt_lang > hyp
-cat $reference_translation |sh tok.sh $tgt_lang > ref
-sacrebleu -tok 'none' ref < hyp
-```
-
-## Installation
-
-Tools needed for all the languages except Arabic can be installed by running install_dependencies.sh.
-To evaluate Arabic models, install the Arabic normalizer by following the instructions at http://alt.qcri.org/tools/arabic-normalizer/.
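-
-As a rough sketch of that setup step (the directory is an assumption based on where these scripts live in the fairseq tree; adjust it to your checkout):
-
-```
-cd fairseq/examples/m2m_100/tokenizers
-bash install_dependencies.sh
-```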
diff --git a/spaces/konverner/deep-voice-cloning/scripts/cloning_inference.py b/spaces/konverner/deep-voice-cloning/scripts/cloning_inference.py
deleted file mode 100644
index a155f7250da4dc78dd57f08803de005a31a041c9..0000000000000000000000000000000000000000
--- a/spaces/konverner/deep-voice-cloning/scripts/cloning_inference.py
+++ /dev/null
@@ -1,30 +0,0 @@
-import argparse
-import json
-import os
-
-import soundfile as sf
-
-from deep_voice_cloning.cloning.model import CloningModel
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--model_path", type=str, default=None, help="Path to model directory")
- parser.add_argument("--input_text", type=str, default=None, help="Text to be synthesized")
- parser.add_argument("--output_path", type=str, default=None, help="Path to output audio file")
- args = parser.parse_args()
-
- with open(os.path.join(os.path.dirname(__file__), "inference_config.json")) as f:
- config = json.load(f)
-
- if args.model_path is not None:
- config['model_path'] = args.model_path
- if args.input_text is not None:
- config['input_text'] = args.input_text
- if args.output_path is not None:
- config['output_path'] = args.output_path
-
- cloning_model = CloningModel(config)
- waveform_array = cloning_model.forward(config["input_text"])
-
- sf.write(config['output_path'], waveform_array, samplerate=16000)
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/h11/_state.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/h11/_state.py
deleted file mode 100644
index 3593430a74f21f6e0c2faf495e1627551eebfc30..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/h11/_state.py
+++ /dev/null
@@ -1,367 +0,0 @@
-################################################################
-# The core state machine
-################################################################
-#
-# Rule 1: everything that affects the state machine and state transitions must
-# live here in this file. As much as possible goes into the table-based
-# representation, but for the bits that don't quite fit, the actual code and
-# state must nonetheless live here.
-#
-# Rule 2: this file does not know about what role we're playing; it only knows
-# about HTTP request/response cycles in the abstract. This ensures that we
-# don't cheat and apply different rules to local and remote parties.
-#
-#
-# Theory of operation
-# ===================
-#
-# Possibly the simplest way to think about this is that we actually have 5
-# different state machines here. Yes, 5. These are:
-#
-# 1) The client state, with its complicated automaton (see the docs)
-# 2) The server state, with its complicated automaton (see the docs)
-# 3) The keep-alive state, with possible states {True, False}
-# 4) The SWITCH_CONNECT state, with possible states {False, True}
-# 5) The SWITCH_UPGRADE state, with possible states {False, True}
-#
-# For (3)-(5), the first state listed is the initial state.
-#
-# (1)-(3) are stored explicitly in member variables. The last
-# two are stored implicitly in the pending_switch_proposals set as:
-# (state of 4) == (_SWITCH_CONNECT in pending_switch_proposals)
-# (state of 5) == (_SWITCH_UPGRADE in pending_switch_proposals)
-#
-# And each of these machines has two different kinds of transitions:
-#
-# a) Event-triggered
-# b) State-triggered
-#
-# Event triggered is the obvious thing that you'd think it is: some event
-# happens, and if it's the right event at the right time then a transition
-# happens. But there are somewhat complicated rules for which machines can
-# "see" which events. (As a rule of thumb, if a machine "sees" an event, this
-# means two things: the event can affect the machine, and if the machine is
-# not in a state where it expects that event then it's an error.) These rules
-# are:
-#
-# 1) The client machine sees all h11.events objects emitted by the client.
-#
-# 2) The server machine sees all h11.events objects emitted by the server.
-#
-# It also sees the client's Request event.
-#
-# And sometimes, server events are annotated with a _SWITCH_* event. For
-# example, we can have a (Response, _SWITCH_CONNECT) event, which is
-# different from a regular Response event.
-#
-# 3) The keep-alive machine sees the process_keep_alive_disabled() event
-# (which is derived from Request/Response events), and this event
-# transitions it from True -> False, or from False -> False. There's no way
-# to transition back.
-#
-# 4&5) The _SWITCH_* machines transition from False->True when we get a
-# Request that proposes the relevant type of switch (via
-# process_client_switch_proposals), and they go from True->False when we
-# get a Response that has no _SWITCH_* annotation.
-#
-# So that's event-triggered transitions.
-#
-# State-triggered transitions are less standard. What they do here is couple
-# the machines together. The way this works is, when certain *joint*
-# configurations of states are achieved, then we automatically transition to a
-# new *joint* state. So, for example, if we're ever in a joint state with
-#
-# client: DONE
-# keep-alive: False
-#
-# then the client state immediately transitions to:
-#
-# client: MUST_CLOSE
-#
-# This is fundamentally different from an event-based transition, because it
-# doesn't matter how we arrived at the {client: DONE, keep-alive: False} state
-# -- maybe the client transitioned SEND_BODY -> DONE, or keep-alive
-# transitioned True -> False. Either way, once this precondition is satisfied,
-# this transition is immediately triggered.
-#
-# What if two conflicting state-based transitions get enabled at the same
-# time? In practice there's only one case where this arises (client DONE ->
-# MIGHT_SWITCH_PROTOCOL versus DONE -> MUST_CLOSE), and we resolve it by
-# explicitly prioritizing the DONE -> MIGHT_SWITCH_PROTOCOL transition.
-#
-# Implementation
-# --------------
-#
-# The event-triggered transitions for the server and client machines are all
-# stored explicitly in a table. Ditto for the state-triggered transitions that
-# involve just the server and client state.
-#
-# The transitions for the other machines, and the state-triggered transitions
-# that involve the other machines, are written out as explicit Python code.
-#
-# It'd be nice if there were some cleaner way to do all this. This isn't
-# *too* terrible, but I feel like it could probably be better.
-#
-# WARNING
-# -------
-#
-# The script that generates the state machine diagrams for the docs knows how
-# to read out the EVENT_TRIGGERED_TRANSITIONS and STATE_TRIGGERED_TRANSITIONS
-# tables. But it can't automatically read the transitions that are written
-# directly in Python code. So if you touch those, you need to also update the
-# script to keep it in sync!
-from typing import cast, Dict, Optional, Set, Tuple, Type, Union
-
-from ._events import *
-from ._util import LocalProtocolError, Sentinel
-
-# Everything in __all__ gets re-exported as part of the h11 public API.
-__all__ = [
- "CLIENT",
- "SERVER",
- "IDLE",
- "SEND_RESPONSE",
- "SEND_BODY",
- "DONE",
- "MUST_CLOSE",
- "CLOSED",
- "MIGHT_SWITCH_PROTOCOL",
- "SWITCHED_PROTOCOL",
- "ERROR",
-]
-
-
-class CLIENT(Sentinel, metaclass=Sentinel):
- pass
-
-
-class SERVER(Sentinel, metaclass=Sentinel):
- pass
-
-
-# States
-class IDLE(Sentinel, metaclass=Sentinel):
- pass
-
-
-class SEND_RESPONSE(Sentinel, metaclass=Sentinel):
- pass
-
-
-class SEND_BODY(Sentinel, metaclass=Sentinel):
- pass
-
-
-class DONE(Sentinel, metaclass=Sentinel):
- pass
-
-
-class MUST_CLOSE(Sentinel, metaclass=Sentinel):
- pass
-
-
-class CLOSED(Sentinel, metaclass=Sentinel):
- pass
-
-
-class ERROR(Sentinel, metaclass=Sentinel):
- pass
-
-
-# Switch types
-class MIGHT_SWITCH_PROTOCOL(Sentinel, metaclass=Sentinel):
- pass
-
-
-class SWITCHED_PROTOCOL(Sentinel, metaclass=Sentinel):
- pass
-
-
-class _SWITCH_UPGRADE(Sentinel, metaclass=Sentinel):
- pass
-
-
-class _SWITCH_CONNECT(Sentinel, metaclass=Sentinel):
- pass
-
-
-EventTransitionType = Dict[
- Type[Sentinel],
- Dict[
- Type[Sentinel],
- Dict[Union[Type[Event], Tuple[Type[Event], Type[Sentinel]]], Type[Sentinel]],
- ],
-]
-
-EVENT_TRIGGERED_TRANSITIONS: EventTransitionType = {
- CLIENT: {
- IDLE: {Request: SEND_BODY, ConnectionClosed: CLOSED},
- SEND_BODY: {Data: SEND_BODY, EndOfMessage: DONE},
- DONE: {ConnectionClosed: CLOSED},
- MUST_CLOSE: {ConnectionClosed: CLOSED},
- CLOSED: {ConnectionClosed: CLOSED},
- MIGHT_SWITCH_PROTOCOL: {},
- SWITCHED_PROTOCOL: {},
- ERROR: {},
- },
- SERVER: {
- IDLE: {
- ConnectionClosed: CLOSED,
- Response: SEND_BODY,
- # Special case: server sees client Request events, in this form
- (Request, CLIENT): SEND_RESPONSE,
- },
- SEND_RESPONSE: {
- InformationalResponse: SEND_RESPONSE,
- Response: SEND_BODY,
- (InformationalResponse, _SWITCH_UPGRADE): SWITCHED_PROTOCOL,
- (Response, _SWITCH_CONNECT): SWITCHED_PROTOCOL,
- },
- SEND_BODY: {Data: SEND_BODY, EndOfMessage: DONE},
- DONE: {ConnectionClosed: CLOSED},
- MUST_CLOSE: {ConnectionClosed: CLOSED},
- CLOSED: {ConnectionClosed: CLOSED},
- SWITCHED_PROTOCOL: {},
- ERROR: {},
- },
-}
-
-StateTransitionType = Dict[
- Tuple[Type[Sentinel], Type[Sentinel]], Dict[Type[Sentinel], Type[Sentinel]]
-]
-
-# NB: there are also some special-case state-triggered transitions hard-coded
-# into _fire_state_triggered_transitions below.
-STATE_TRIGGERED_TRANSITIONS: StateTransitionType = {
- # (Client state, Server state) -> new states
- # Protocol negotiation
- (MIGHT_SWITCH_PROTOCOL, SWITCHED_PROTOCOL): {CLIENT: SWITCHED_PROTOCOL},
- # Socket shutdown
- (CLOSED, DONE): {SERVER: MUST_CLOSE},
- (CLOSED, IDLE): {SERVER: MUST_CLOSE},
- (ERROR, DONE): {SERVER: MUST_CLOSE},
- (DONE, CLOSED): {CLIENT: MUST_CLOSE},
- (IDLE, CLOSED): {CLIENT: MUST_CLOSE},
- (DONE, ERROR): {CLIENT: MUST_CLOSE},
-}
-
-
-class ConnectionState:
- def __init__(self) -> None:
- # Extra bits of state that don't quite fit into the state model.
-
- # If this is False then it enables the automatic DONE -> MUST_CLOSE
-        # transition. Don't set this directly; call .process_keep_alive_disabled()
- self.keep_alive = True
-
- # This is a subset of {UPGRADE, CONNECT}, containing the proposals
- # made by the client for switching protocols.
- self.pending_switch_proposals: Set[Type[Sentinel]] = set()
-
- self.states: Dict[Type[Sentinel], Type[Sentinel]] = {CLIENT: IDLE, SERVER: IDLE}
-
- def process_error(self, role: Type[Sentinel]) -> None:
- self.states[role] = ERROR
- self._fire_state_triggered_transitions()
-
- def process_keep_alive_disabled(self) -> None:
- self.keep_alive = False
- self._fire_state_triggered_transitions()
-
- def process_client_switch_proposal(self, switch_event: Type[Sentinel]) -> None:
- self.pending_switch_proposals.add(switch_event)
- self._fire_state_triggered_transitions()
-
- def process_event(
- self,
- role: Type[Sentinel],
- event_type: Type[Event],
- server_switch_event: Optional[Type[Sentinel]] = None,
- ) -> None:
- _event_type: Union[Type[Event], Tuple[Type[Event], Type[Sentinel]]] = event_type
- if server_switch_event is not None:
- assert role is SERVER
- if server_switch_event not in self.pending_switch_proposals:
- raise LocalProtocolError(
- "Received server {} event without a pending proposal".format(
- server_switch_event
- )
- )
- _event_type = (event_type, server_switch_event)
- if server_switch_event is None and _event_type is Response:
- self.pending_switch_proposals = set()
- self._fire_event_triggered_transitions(role, _event_type)
- # Special case: the server state does get to see Request
- # events.
- if _event_type is Request:
- assert role is CLIENT
- self._fire_event_triggered_transitions(SERVER, (Request, CLIENT))
- self._fire_state_triggered_transitions()
-
- def _fire_event_triggered_transitions(
- self,
- role: Type[Sentinel],
- event_type: Union[Type[Event], Tuple[Type[Event], Type[Sentinel]]],
- ) -> None:
- state = self.states[role]
- try:
- new_state = EVENT_TRIGGERED_TRANSITIONS[role][state][event_type]
- except KeyError:
- event_type = cast(Type[Event], event_type)
- raise LocalProtocolError(
- "can't handle event type {} when role={} and state={}".format(
- event_type.__name__, role, self.states[role]
- )
- ) from None
- self.states[role] = new_state
-
- def _fire_state_triggered_transitions(self) -> None:
- # We apply these rules repeatedly until converging on a fixed point
- while True:
- start_states = dict(self.states)
-
- # It could happen that both these special-case transitions are
- # enabled at the same time:
- #
- # DONE -> MIGHT_SWITCH_PROTOCOL
- # DONE -> MUST_CLOSE
- #
-            # For example, this will always be true of an HTTP/1.0 client
- # requesting CONNECT. If this happens, the protocol switch takes
- # priority. From there the client will either go to
- # SWITCHED_PROTOCOL, in which case it's none of our business when
- # they close the connection, or else the server will deny the
- # request, in which case the client will go back to DONE and then
- # from there to MUST_CLOSE.
- if self.pending_switch_proposals:
- if self.states[CLIENT] is DONE:
- self.states[CLIENT] = MIGHT_SWITCH_PROTOCOL
-
- if not self.pending_switch_proposals:
- if self.states[CLIENT] is MIGHT_SWITCH_PROTOCOL:
- self.states[CLIENT] = DONE
-
- if not self.keep_alive:
- for role in (CLIENT, SERVER):
- if self.states[role] is DONE:
- self.states[role] = MUST_CLOSE
-
- # Tabular state-triggered transitions
- joint_state = (self.states[CLIENT], self.states[SERVER])
- changes = STATE_TRIGGERED_TRANSITIONS.get(joint_state, {})
- self.states.update(changes)
-
- if self.states == start_states:
- # Fixed point reached
- return
-
- def start_next_cycle(self) -> None:
- if self.states != {CLIENT: DONE, SERVER: DONE}:
- raise LocalProtocolError(
- "not in a reusable state. self.states={}".format(self.states)
- )
- # Can't reach DONE/DONE with any of these active, but still, let's be
- # sure.
- assert self.keep_alive
- assert not self.pending_switch_proposals
- self.states = {CLIENT: IDLE, SERVER: IDLE}
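The theory-of-operation comments above are easier to follow with a concrete driver. The sketch below is a hypothetical walk-through, not part of the deleted file: it assumes the upstream h11 package (which ships this module) is installed, pushes one keep-alive-disabled request/response cycle through `ConnectionState`, and shows an event-triggered transition followed by the state-triggered DONE -> MUST_CLOSE rule.

```python
# Hypothetical driver for the ConnectionState machine above; assumes the
# upstream h11 package is importable.
from h11._events import EndOfMessage, Request, Response
from h11._state import CLIENT, MUST_CLOSE, SERVER, ConnectionState

cs = ConnectionState()

# Event-triggered: the client's Request moves CLIENT to SEND_BODY and, via the
# special (Request, CLIENT) table entry, moves SERVER to SEND_RESPONSE.
cs.process_event(CLIENT, Request)

# Pretend the request carried "Connection: close": the keep-alive sub-machine
# goes True -> False and stays there.
cs.process_keep_alive_disabled()

# Both sides finish their messages. Each time a role reaches DONE, the
# state-triggered rule {DONE, keep-alive False} -> MUST_CLOSE fires at once.
cs.process_event(CLIENT, EndOfMessage)
cs.process_event(SERVER, Response)
cs.process_event(SERVER, EndOfMessage)

assert cs.states[CLIENT] is MUST_CLOSE
assert cs.states[SERVER] is MUST_CLOSE
```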
diff --git a/spaces/lambdalabs/LambdaSuperRes/KAIR/retinaface/README.md b/spaces/lambdalabs/LambdaSuperRes/KAIR/retinaface/README.md
deleted file mode 100644
index 263dd1f070a33c2e0b720c660e2e1575e13beb89..0000000000000000000000000000000000000000
--- a/spaces/lambdalabs/LambdaSuperRes/KAIR/retinaface/README.md
+++ /dev/null
@@ -1 +0,0 @@
-This code is useful when you use `main_test_face_enhancement.py`.
diff --git a/spaces/lcipolina/Print_Gallery/glide_text2im/clip/model_creation.py b/spaces/lcipolina/Print_Gallery/glide_text2im/clip/model_creation.py
deleted file mode 100644
index fd5fbed8fce9da666a839c85fecd0d9ed5a7c584..0000000000000000000000000000000000000000
--- a/spaces/lcipolina/Print_Gallery/glide_text2im/clip/model_creation.py
+++ /dev/null
@@ -1,117 +0,0 @@
-import os
-from functools import lru_cache
-from typing import Any, Callable, Dict, List, Optional, Tuple
-
-import attr
-import numpy as np
-import torch
-import torch.nn as nn
-import yaml
-from glide_text2im.tokenizer.simple_tokenizer import SimpleTokenizer
-
-from .encoders import ImageEncoder, TextEncoder
-
-
-@lru_cache()
-def default_config_path() -> str:
- return os.path.join(os.path.dirname(os.path.abspath(__file__)), "config.yaml")
-
-
-@attr.s
-class CLIPModel:
- config: Dict[str, Any] = attr.ib()
- text_encoder: nn.Module = attr.ib()
- image_encoder: nn.Module = attr.ib()
- logit_scale: torch.Tensor = attr.ib()
- device: torch.device = attr.ib()
- tokenizer: SimpleTokenizer = attr.ib()
-
- def encode_prompts(self, prompts: List[str]) -> Tuple[torch.Tensor, torch.Tensor]:
- tokens = []
- lens = []
- for prompt in prompts:
- sub_tokens, sub_len = self.tokenizer.padded_tokens_and_len(
- self.tokenizer.encode(prompt), self.text_encoder.max_text_len
- )
- tokens.append(sub_tokens)
- lens.append(sub_len)
- return (
- torch.tensor(tokens).to(dtype=torch.long, device=self.device),
- torch.tensor(lens).to(dtype=torch.long, device=self.device),
- )
-
- def text_embeddings(self, prompts: List[str]) -> torch.Tensor:
- tokens, lens = self.encode_prompts(prompts)
- z_t = self.text_encoder(tokens, lens)
- return z_t / (torch.linalg.norm(z_t, dim=-1, keepdim=True) + 1e-12)
-
- def image_embeddings(self, images: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
- z_i = self.image_encoder((images + 1) * 127.5, t)
- return z_i / (torch.linalg.norm(z_i, dim=-1, keepdim=True) + 1e-12)
-
- def cond_fn(self, prompts: List[str], grad_scale: float) -> Callable[..., torch.Tensor]:
- with torch.no_grad():
- z_t = self.text_embeddings(prompts)
-
- def cond_fn(x, t, grad_scale=grad_scale, **kwargs):
- with torch.enable_grad():
- x_var = x.detach().requires_grad_(True)
- z_i = self.image_embeddings(x_var, t)
- loss = torch.exp(self.logit_scale) * (z_t * z_i).sum()
- grad = torch.autograd.grad(loss, x_var)[0].detach()
- return grad * grad_scale
-
- return cond_fn
-
-
-def create_clip_model(
- config_path: Optional[str] = None,
- device: Optional[torch.device] = None,
- tokenizer: Optional[SimpleTokenizer] = None,
-) -> CLIPModel:
- if config_path is None:
- config_path = default_config_path()
- if device is None:
- device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- if tokenizer is None:
- tokenizer = SimpleTokenizer()
-
- with open(config_path, "r") as f:
- config = yaml.load(f, Loader=yaml.SafeLoader)
-
- text_encoder = TextEncoder(
- n_bpe_vocab=config["n_vocab"],
- max_text_len=config["max_text_len"],
- n_embd=config["n_embd"],
- n_head=config["n_head_text"],
- n_xf_blocks=config["n_xf_blocks_text"],
- n_head_state=config["n_head_state_text"],
- device=device,
- )
-
- image_encoder = ImageEncoder(
- image_size=config["image_size"],
- patch_size=config["patch_size"],
- n_embd=config["n_embd"],
- n_head=config["n_head_image"],
- n_xf_blocks=config["n_xf_blocks_image"],
- n_head_state=config["n_head_state_image"],
- n_timestep=config["n_timesteps"],
- device=device,
- )
-
- logit_scale = torch.tensor(
- np.log(config["logit_scale"]),
- dtype=torch.float32,
- device=device,
- requires_grad=False,
- )
-
- return CLIPModel(
- config=config,
- text_encoder=text_encoder,
- image_encoder=image_encoder,
- logit_scale=logit_scale,
- device=device,
- tokenizer=tokenizer,
- )
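A hypothetical usage sketch for the file above (not part of the original repo): it builds the CLIP wrapper and calls `cond_fn` the way a classifier-guided diffusion sampler typically would. Checkpoint loading is omitted, and the 64x64 base resolution and the timestep value are assumptions.

```python
# Hypothetical sketch: shapes and the timestep are assumptions, and the
# encoders would normally be loaded with pretrained GLIDE CLIP weights first.
import torch

from glide_text2im.clip.model_creation import create_clip_model

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
clip_model = create_clip_model(device=device)
# ... load pretrained weights into clip_model.image_encoder / text_encoder ...

cond_fn = clip_model.cond_fn(["an oil painting of a lighthouse"], grad_scale=3.0)

# Inside a sampling loop, the guidance gradient is added to the model's
# predicted mean for the current noisy batch x_t at timestep t.
x_t = torch.randn(1, 3, 64, 64, device=device)  # assumed base resolution
t = torch.tensor([50], device=device)           # assumed valid timestep index
grad = cond_fn(x_t, t)                          # same shape as x_t
print(grad.shape)
```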
diff --git a/spaces/leafShen/CodeFormer/CodeFormer/basicsr/losses/losses.py b/spaces/leafShen/CodeFormer/CodeFormer/basicsr/losses/losses.py
deleted file mode 100644
index 1bcf272cfb756d99451a3005567ea4d4c9059067..0000000000000000000000000000000000000000
--- a/spaces/leafShen/CodeFormer/CodeFormer/basicsr/losses/losses.py
+++ /dev/null
@@ -1,455 +0,0 @@
-import math
-import lpips
-import torch
-from torch import autograd as autograd
-from torch import nn as nn
-from torch.nn import functional as F
-
-from basicsr.archs.vgg_arch import VGGFeatureExtractor
-from basicsr.utils.registry import LOSS_REGISTRY
-from .loss_util import weighted_loss
-
-_reduction_modes = ['none', 'mean', 'sum']
-
-
-@weighted_loss
-def l1_loss(pred, target):
- return F.l1_loss(pred, target, reduction='none')
-
-
-@weighted_loss
-def mse_loss(pred, target):
- return F.mse_loss(pred, target, reduction='none')
-
-
-@weighted_loss
-def charbonnier_loss(pred, target, eps=1e-12):
- return torch.sqrt((pred - target)**2 + eps)
-
-
-@LOSS_REGISTRY.register()
-class L1Loss(nn.Module):
- """L1 (mean absolute error, MAE) loss.
-
- Args:
- loss_weight (float): Loss weight for L1 loss. Default: 1.0.
- reduction (str): Specifies the reduction to apply to the output.
- Supported choices are 'none' | 'mean' | 'sum'. Default: 'mean'.
- """
-
- def __init__(self, loss_weight=1.0, reduction='mean'):
- super(L1Loss, self).__init__()
- if reduction not in ['none', 'mean', 'sum']:
- raise ValueError(f'Unsupported reduction mode: {reduction}. ' f'Supported ones are: {_reduction_modes}')
-
- self.loss_weight = loss_weight
- self.reduction = reduction
-
- def forward(self, pred, target, weight=None, **kwargs):
- """
- Args:
- pred (Tensor): of shape (N, C, H, W). Predicted tensor.
- target (Tensor): of shape (N, C, H, W). Ground truth tensor.
- weight (Tensor, optional): of shape (N, C, H, W). Element-wise
- weights. Default: None.
- """
- return self.loss_weight * l1_loss(pred, target, weight, reduction=self.reduction)
-
-
-@LOSS_REGISTRY.register()
-class MSELoss(nn.Module):
- """MSE (L2) loss.
-
- Args:
- loss_weight (float): Loss weight for MSE loss. Default: 1.0.
- reduction (str): Specifies the reduction to apply to the output.
- Supported choices are 'none' | 'mean' | 'sum'. Default: 'mean'.
- """
-
- def __init__(self, loss_weight=1.0, reduction='mean'):
- super(MSELoss, self).__init__()
- if reduction not in ['none', 'mean', 'sum']:
- raise ValueError(f'Unsupported reduction mode: {reduction}. ' f'Supported ones are: {_reduction_modes}')
-
- self.loss_weight = loss_weight
- self.reduction = reduction
-
- def forward(self, pred, target, weight=None, **kwargs):
- """
- Args:
- pred (Tensor): of shape (N, C, H, W). Predicted tensor.
- target (Tensor): of shape (N, C, H, W). Ground truth tensor.
- weight (Tensor, optional): of shape (N, C, H, W). Element-wise
- weights. Default: None.
- """
- return self.loss_weight * mse_loss(pred, target, weight, reduction=self.reduction)
-
-
-@LOSS_REGISTRY.register()
-class CharbonnierLoss(nn.Module):
- """Charbonnier loss (one variant of Robust L1Loss, a differentiable
- variant of L1Loss).
-
- Described in "Deep Laplacian Pyramid Networks for Fast and Accurate
- Super-Resolution".
-
- Args:
- loss_weight (float): Loss weight for L1 loss. Default: 1.0.
- reduction (str): Specifies the reduction to apply to the output.
- Supported choices are 'none' | 'mean' | 'sum'. Default: 'mean'.
- eps (float): A value used to control the curvature near zero.
- Default: 1e-12.
- """
-
- def __init__(self, loss_weight=1.0, reduction='mean', eps=1e-12):
- super(CharbonnierLoss, self).__init__()
- if reduction not in ['none', 'mean', 'sum']:
- raise ValueError(f'Unsupported reduction mode: {reduction}. ' f'Supported ones are: {_reduction_modes}')
-
- self.loss_weight = loss_weight
- self.reduction = reduction
- self.eps = eps
-
- def forward(self, pred, target, weight=None, **kwargs):
- """
- Args:
- pred (Tensor): of shape (N, C, H, W). Predicted tensor.
- target (Tensor): of shape (N, C, H, W). Ground truth tensor.
- weight (Tensor, optional): of shape (N, C, H, W). Element-wise
- weights. Default: None.
- """
- return self.loss_weight * charbonnier_loss(pred, target, weight, eps=self.eps, reduction=self.reduction)
-
-
-@LOSS_REGISTRY.register()
-class WeightedTVLoss(L1Loss):
- """Weighted TV loss.
-
- Args:
- loss_weight (float): Loss weight. Default: 1.0.
- """
-
- def __init__(self, loss_weight=1.0):
- super(WeightedTVLoss, self).__init__(loss_weight=loss_weight)
-
-    def forward(self, pred, weight=None):
-        # Slice the weight map to match the shifted tensors; keep None so the
-        # default (unweighted) case does not crash.
-        y_weight = None if weight is None else weight[:, :, :-1, :]
-        x_weight = None if weight is None else weight[:, :, :, :-1]
-        y_diff = super(WeightedTVLoss, self).forward(pred[:, :, :-1, :], pred[:, :, 1:, :], weight=y_weight)
-        x_diff = super(WeightedTVLoss, self).forward(pred[:, :, :, :-1], pred[:, :, :, 1:], weight=x_weight)
-
-        loss = x_diff + y_diff
-
-        return loss
-
-
-@LOSS_REGISTRY.register()
-class PerceptualLoss(nn.Module):
- """Perceptual loss with commonly used style loss.
-
- Args:
- layer_weights (dict): The weight for each layer of vgg feature.
- Here is an example: {'conv5_4': 1.}, which means the conv5_4
- feature layer (before relu5_4) will be extracted with weight
-            1.0 in calculating losses.
- vgg_type (str): The type of vgg network used as feature extractor.
- Default: 'vgg19'.
- use_input_norm (bool): If True, normalize the input image in vgg.
- Default: True.
- range_norm (bool): If True, norm images with range [-1, 1] to [0, 1].
- Default: False.
- perceptual_weight (float): If `perceptual_weight > 0`, the perceptual
-            loss will be calculated and the loss will be multiplied by the
- weight. Default: 1.0.
- style_weight (float): If `style_weight > 0`, the style loss will be
-            calculated and the loss will be multiplied by the weight.
- Default: 0.
- criterion (str): Criterion used for perceptual loss. Default: 'l1'.
- """
-
- def __init__(self,
- layer_weights,
- vgg_type='vgg19',
- use_input_norm=True,
- range_norm=False,
- perceptual_weight=1.0,
- style_weight=0.,
- criterion='l1'):
- super(PerceptualLoss, self).__init__()
- self.perceptual_weight = perceptual_weight
- self.style_weight = style_weight
- self.layer_weights = layer_weights
- self.vgg = VGGFeatureExtractor(
- layer_name_list=list(layer_weights.keys()),
- vgg_type=vgg_type,
- use_input_norm=use_input_norm,
- range_norm=range_norm)
-
- self.criterion_type = criterion
- if self.criterion_type == 'l1':
- self.criterion = torch.nn.L1Loss()
- elif self.criterion_type == 'l2':
-            self.criterion = torch.nn.MSELoss(reduction='mean')  # torch.nn has no L2Loss; MSE is the L2 criterion
- elif self.criterion_type == 'mse':
- self.criterion = torch.nn.MSELoss(reduction='mean')
- elif self.criterion_type == 'fro':
- self.criterion = None
- else:
- raise NotImplementedError(f'{criterion} criterion has not been supported.')
-
- def forward(self, x, gt):
- """Forward function.
-
- Args:
- x (Tensor): Input tensor with shape (n, c, h, w).
- gt (Tensor): Ground-truth tensor with shape (n, c, h, w).
-
- Returns:
- Tensor: Forward results.
- """
- # extract vgg features
- x_features = self.vgg(x)
- gt_features = self.vgg(gt.detach())
-
- # calculate perceptual loss
- if self.perceptual_weight > 0:
- percep_loss = 0
- for k in x_features.keys():
- if self.criterion_type == 'fro':
- percep_loss += torch.norm(x_features[k] - gt_features[k], p='fro') * self.layer_weights[k]
- else:
- percep_loss += self.criterion(x_features[k], gt_features[k]) * self.layer_weights[k]
- percep_loss *= self.perceptual_weight
- else:
- percep_loss = None
-
- # calculate style loss
- if self.style_weight > 0:
- style_loss = 0
- for k in x_features.keys():
- if self.criterion_type == 'fro':
- style_loss += torch.norm(
- self._gram_mat(x_features[k]) - self._gram_mat(gt_features[k]), p='fro') * self.layer_weights[k]
- else:
- style_loss += self.criterion(self._gram_mat(x_features[k]), self._gram_mat(
- gt_features[k])) * self.layer_weights[k]
- style_loss *= self.style_weight
- else:
- style_loss = None
-
- return percep_loss, style_loss
-
- def _gram_mat(self, x):
- """Calculate Gram matrix.
-
- Args:
- x (torch.Tensor): Tensor with shape of (n, c, h, w).
-
- Returns:
- torch.Tensor: Gram matrix.
- """
- n, c, h, w = x.size()
- features = x.view(n, c, w * h)
- features_t = features.transpose(1, 2)
- gram = features.bmm(features_t) / (c * h * w)
- return gram
-
-
-@LOSS_REGISTRY.register()
-class LPIPSLoss(nn.Module):
- def __init__(self,
- loss_weight=1.0,
- use_input_norm=True,
- range_norm=False,):
- super(LPIPSLoss, self).__init__()
- self.perceptual = lpips.LPIPS(net="vgg", spatial=False).eval()
- self.loss_weight = loss_weight
- self.use_input_norm = use_input_norm
- self.range_norm = range_norm
-
- if self.use_input_norm:
- # the mean is for image with range [0, 1]
- self.register_buffer('mean', torch.Tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1))
- # the std is for image with range [0, 1]
- self.register_buffer('std', torch.Tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1))
-
- def forward(self, pred, target):
- if self.range_norm:
- pred = (pred + 1) / 2
- target = (target + 1) / 2
- if self.use_input_norm:
- pred = (pred - self.mean) / self.std
- target = (target - self.mean) / self.std
- lpips_loss = self.perceptual(target.contiguous(), pred.contiguous())
- return self.loss_weight * lpips_loss.mean()
-
-
-@LOSS_REGISTRY.register()
-class GANLoss(nn.Module):
- """Define GAN loss.
-
- Args:
- gan_type (str): Support 'vanilla', 'lsgan', 'wgan', 'hinge'.
- real_label_val (float): The value for real label. Default: 1.0.
- fake_label_val (float): The value for fake label. Default: 0.0.
- loss_weight (float): Loss weight. Default: 1.0.
- Note that loss_weight is only for generators; and it is always 1.0
- for discriminators.
- """
-
- def __init__(self, gan_type, real_label_val=1.0, fake_label_val=0.0, loss_weight=1.0):
- super(GANLoss, self).__init__()
- self.gan_type = gan_type
- self.loss_weight = loss_weight
- self.real_label_val = real_label_val
- self.fake_label_val = fake_label_val
-
- if self.gan_type == 'vanilla':
- self.loss = nn.BCEWithLogitsLoss()
- elif self.gan_type == 'lsgan':
- self.loss = nn.MSELoss()
- elif self.gan_type == 'wgan':
- self.loss = self._wgan_loss
- elif self.gan_type == 'wgan_softplus':
- self.loss = self._wgan_softplus_loss
- elif self.gan_type == 'hinge':
- self.loss = nn.ReLU()
- else:
- raise NotImplementedError(f'GAN type {self.gan_type} is not implemented.')
-
- def _wgan_loss(self, input, target):
- """wgan loss.
-
- Args:
- input (Tensor): Input tensor.
- target (bool): Target label.
-
- Returns:
- Tensor: wgan loss.
- """
- return -input.mean() if target else input.mean()
-
- def _wgan_softplus_loss(self, input, target):
- """wgan loss with soft plus. softplus is a smooth approximation to the
- ReLU function.
-
- In StyleGAN2, it is called:
- Logistic loss for discriminator;
- Non-saturating loss for generator.
-
- Args:
- input (Tensor): Input tensor.
- target (bool): Target label.
-
- Returns:
- Tensor: wgan loss.
- """
- return F.softplus(-input).mean() if target else F.softplus(input).mean()
-
- def get_target_label(self, input, target_is_real):
- """Get target label.
-
- Args:
- input (Tensor): Input tensor.
- target_is_real (bool): Whether the target is real or fake.
-
- Returns:
- (bool | Tensor): Target tensor. Return bool for wgan, otherwise,
- return Tensor.
- """
-
- if self.gan_type in ['wgan', 'wgan_softplus']:
- return target_is_real
- target_val = (self.real_label_val if target_is_real else self.fake_label_val)
- return input.new_ones(input.size()) * target_val
-
- def forward(self, input, target_is_real, is_disc=False):
- """
- Args:
- input (Tensor): The input for the loss module, i.e., the network
- prediction.
-            target_is_real (bool): Whether the target is real or fake.
- is_disc (bool): Whether the loss for discriminators or not.
- Default: False.
-
- Returns:
- Tensor: GAN loss value.
- """
- if self.gan_type == 'hinge':
- if is_disc: # for discriminators in hinge-gan
- input = -input if target_is_real else input
- loss = self.loss(1 + input).mean()
- else: # for generators in hinge-gan
- loss = -input.mean()
- else: # other gan types
- target_label = self.get_target_label(input, target_is_real)
- loss = self.loss(input, target_label)
-
- # loss_weight is always 1.0 for discriminators
- return loss if is_disc else loss * self.loss_weight
-
-
-def r1_penalty(real_pred, real_img):
- """R1 regularization for discriminator. The core idea is to
- penalize the gradient on real data alone: when the
- generator distribution produces the true data distribution
- and the discriminator is equal to 0 on the data manifold, the
- gradient penalty ensures that the discriminator cannot create
- a non-zero gradient orthogonal to the data manifold without
- suffering a loss in the GAN game.
-
- Ref:
- Eq. 9 in Which training methods for GANs do actually converge.
- """
- grad_real = autograd.grad(outputs=real_pred.sum(), inputs=real_img, create_graph=True)[0]
- grad_penalty = grad_real.pow(2).view(grad_real.shape[0], -1).sum(1).mean()
- return grad_penalty
-
-
-def g_path_regularize(fake_img, latents, mean_path_length, decay=0.01):
- noise = torch.randn_like(fake_img) / math.sqrt(fake_img.shape[2] * fake_img.shape[3])
- grad = autograd.grad(outputs=(fake_img * noise).sum(), inputs=latents, create_graph=True)[0]
- path_lengths = torch.sqrt(grad.pow(2).sum(2).mean(1))
-
- path_mean = mean_path_length + decay * (path_lengths.mean() - mean_path_length)
-
- path_penalty = (path_lengths - path_mean).pow(2).mean()
-
- return path_penalty, path_lengths.detach().mean(), path_mean.detach()
-
-
-def gradient_penalty_loss(discriminator, real_data, fake_data, weight=None):
- """Calculate gradient penalty for wgan-gp.
-
- Args:
- discriminator (nn.Module): Network for the discriminator.
- real_data (Tensor): Real input data.
- fake_data (Tensor): Fake input data.
- weight (Tensor): Weight tensor. Default: None.
-
- Returns:
- Tensor: A tensor for gradient penalty.
- """
-
- batch_size = real_data.size(0)
- alpha = real_data.new_tensor(torch.rand(batch_size, 1, 1, 1))
-
- # interpolate between real_data and fake_data
- interpolates = alpha * real_data + (1. - alpha) * fake_data
- interpolates = autograd.Variable(interpolates, requires_grad=True)
-
- disc_interpolates = discriminator(interpolates)
- gradients = autograd.grad(
- outputs=disc_interpolates,
- inputs=interpolates,
- grad_outputs=torch.ones_like(disc_interpolates),
- create_graph=True,
- retain_graph=True,
- only_inputs=True)[0]
-
- if weight is not None:
- gradients = gradients * weight
-
- gradients_penalty = ((gradients.norm(2, dim=1) - 1)**2).mean()
- if weight is not None:
- gradients_penalty /= torch.mean(weight)
-
- return gradients_penalty
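A minimal, hypothetical sketch of how the losses defined above are usually combined in a GAN-based restoration trainer; it assumes this copy of basicsr (and its dependencies such as the VGG weights and lpips) is importable, and the tensors are random stand-ins for real network outputs.

```python
# Hypothetical usage sketch; shapes and loss weights are illustrative only.
import torch

from basicsr.losses.losses import GANLoss, L1Loss, PerceptualLoss

pred = torch.rand(2, 3, 128, 128, requires_grad=True)  # generator output
target = torch.rand(2, 3, 128, 128)                    # ground truth
disc_logits = torch.randn(2, 1)                        # D(pred) scores

pix_loss = L1Loss(loss_weight=1.0)(pred, target)

percep_criterion = PerceptualLoss(layer_weights={'conv5_4': 1.0})
percep_loss, style_loss = percep_criterion(pred, target)
# style_loss is None here because style_weight defaults to 0

gan_criterion = GANLoss(gan_type='vanilla', loss_weight=0.1)
g_gan_loss = gan_criterion(disc_logits, target_is_real=True, is_disc=False)

total = pix_loss + percep_loss + g_gan_loss
total.backward()  # would drive the generator update in a real training step
print(float(total))
```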
diff --git a/spaces/leogabraneth/text-generation-webui-main/cmd_linux.sh b/spaces/leogabraneth/text-generation-webui-main/cmd_linux.sh
deleted file mode 100644
index 1685050aff7b270ae42e295c0b947d576e2653a3..0000000000000000000000000000000000000000
--- a/spaces/leogabraneth/text-generation-webui-main/cmd_linux.sh
+++ /dev/null
@@ -1,22 +0,0 @@
-#!/bin/bash
-
-cd "$(dirname "${BASH_SOURCE[0]}")"
-
-if [[ "$(pwd)" =~ " " ]]; then echo This script relies on Miniconda which can not be silently installed under a path with spaces. && exit; fi
-
-# deactivate existing conda envs as needed to avoid conflicts
-{ conda deactivate && conda deactivate && conda deactivate; } 2> /dev/null
-
-# config
-CONDA_ROOT_PREFIX="$(pwd)/installer_files/conda"
-INSTALL_ENV_DIR="$(pwd)/installer_files/env"
-
-# environment isolation
-export PYTHONNOUSERSITE=1
-unset PYTHONPATH
-unset PYTHONHOME
-export CUDA_PATH="$INSTALL_ENV_DIR"
-export CUDA_HOME="$CUDA_PATH"
-
-# activate env
-bash --init-file <(echo "source \"$CONDA_ROOT_PREFIX/etc/profile.d/conda.sh\" && conda activate \"$INSTALL_ENV_DIR\"")
diff --git a/spaces/leogabraneth/text-generation-webui-main/modules/callbacks.py b/spaces/leogabraneth/text-generation-webui-main/modules/callbacks.py
deleted file mode 100644
index bb979a6c944965bf60592b1341d3aa6084110ade..0000000000000000000000000000000000000000
--- a/spaces/leogabraneth/text-generation-webui-main/modules/callbacks.py
+++ /dev/null
@@ -1,99 +0,0 @@
-import gc
-import traceback
-from queue import Queue
-from threading import Thread
-
-import torch
-import transformers
-from transformers import is_torch_xpu_available
-
-import modules.shared as shared
-
-
-class _StopEverythingStoppingCriteria(transformers.StoppingCriteria):
- def __init__(self):
- transformers.StoppingCriteria.__init__(self)
-
- def __call__(self, input_ids: torch.LongTensor, _scores: torch.FloatTensor) -> bool:
- return shared.stop_everything
-
-
-class Stream(transformers.StoppingCriteria):
- def __init__(self, callback_func=None):
- self.callback_func = callback_func
-
- def __call__(self, input_ids, scores) -> bool:
- if self.callback_func is not None:
- self.callback_func(input_ids[0])
-
- return False
-
-
-class Iteratorize:
-
- """
- Transforms a function that takes a callback
- into a lazy iterator (generator).
-
- Adapted from: https://stackoverflow.com/a/9969000
- """
-
- def __init__(self, func, args=None, kwargs=None, callback=None):
- self.mfunc = func
- self.c_callback = callback
- self.q = Queue()
- self.sentinel = object()
- self.args = args or []
- self.kwargs = kwargs or {}
- self.stop_now = False
-
- def _callback(val):
- if self.stop_now or shared.stop_everything:
- raise ValueError
- self.q.put(val)
-
- def gentask():
- try:
-                ret = self.mfunc(callback=_callback, *self.args, **self.kwargs)
- except ValueError:
- pass
- except:
- traceback.print_exc()
- pass
-
- clear_torch_cache()
- self.q.put(self.sentinel)
- if self.c_callback:
- self.c_callback(ret)
-
- self.thread = Thread(target=gentask)
- self.thread.start()
-
- def __iter__(self):
- return self
-
- def __next__(self):
- obj = self.q.get(True, None)
- if obj is self.sentinel:
- raise StopIteration
- else:
- return obj
-
- def __del__(self):
- clear_torch_cache()
-
- def __enter__(self):
- return self
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- self.stop_now = True
- clear_torch_cache()
-
-
-def clear_torch_cache():
- gc.collect()
- if not shared.args.cpu:
- if is_torch_xpu_available():
- torch.xpu.empty_cache()
- else:
- torch.cuda.empty_cache()
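`Iteratorize` above turns a callback-driven generation function into a lazy generator by pushing callback values through a queue from a worker thread. The standalone sketch below illustrates the same pattern without depending on `modules.shared`; `slow_producer` is a made-up stand-in for a callback-based generate function.

```python
# Standalone illustration of the callback-to-generator pattern used by
# Iteratorize; everything here is hypothetical stand-in code.
from queue import Queue
from threading import Thread


def slow_producer(callback):
    """Pretend generation function that reports tokens via a callback."""
    for token in ["Hello", ",", " world", "!"]:
        callback(token)


def iterate(func):
    """Run func(callback=...) in a thread and yield whatever it reports."""
    q: Queue = Queue()
    sentinel = object()

    def worker():
        func(callback=q.put)
        q.put(sentinel)  # signal completion

    Thread(target=worker, daemon=True).start()
    while (item := q.get()) is not sentinel:
        yield item


for piece in iterate(slow_producer):
    print(piece, end="")
print()
```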
diff --git a/spaces/lgabrielb/fruit_classifier/README.md b/spaces/lgabrielb/fruit_classifier/README.md
deleted file mode 100644
index eb91a44986ff91ee2cf948ce0812bf922da27706..0000000000000000000000000000000000000000
--- a/spaces/lgabrielb/fruit_classifier/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Fruit Classifier
-emoji: 📊
-colorFrom: blue
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.9.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Activator For Windows 10 All 8.1 8 7 And Office 2010 Utorrent.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Activator For Windows 10 All 8.1 8 7 And Office 2010 Utorrent.md
deleted file mode 100644
index 3d3d8925d86ed35a743ea0215bf6bf5778a0afe1..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Activator For Windows 10 All 8.1 8 7 And Office 2010 Utorrent.md
+++ /dev/null
@@ -1,20 +0,0 @@
-Activator For Windows 10 All 8.1 8 7 and Office 2010 utorrent
-Download Zip ➡ https://bytlly.com/2uGx6u
-
-KMSpico for Windows 10
-
-Windows 10 is the latest release of Microsoft's flagship Windows operating system. KMSpico 1.5.10 (update 1.5.10.1025) is the latest version of KMSpico; it is compatible with Windows 10, is designed to optimize its performance, and can be downloaded easily.
-
-You can download the latest KMSpico 1.5.10 for Windows 10 using the following link.
-
-To download KMSpico 1.5.10, click the Download button below, save the file on your computer, and run it. Once installed, KMSpico runs in the background to improve the performance of your computer.
-
-To keep your laptop optimized and virus-free on Windows 10, download and install the latest version of KMSpico. There is no need to edit the registry of your PC; downloading and installing KMSpico 1.5.10 is straightforward.
-
-It is recommended to keep all the files related to KMSpico 1.5.10.1025 together and copy them to the location where you want to install it. This tutorial is meant to be simple and easy to follow, so no technical experience is needed to use the tool. This is the latest version of KMSpico compatible with Windows 10, optimized to improve the performance of your PC.
-
-Steps to download KMSpico 1.5.10 for Windows 10
-
-The latest version of KMSpico for Windows 10 can be downloaded by following the steps above.
-
-
-
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Electra X Vst Free Download ((LINK)).md b/spaces/lincquiQcaudo/Top-20-Diffusion/Electra X Vst Free Download ((LINK)).md
deleted file mode 100644
index 39edd46fc6aa7b8933666b8e9b7b467f7819e329..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Electra X Vst Free Download ((LINK)).md
+++ /dev/null
@@ -1,6 +0,0 @@
-electra x vst free download
-DOWNLOAD 🗹 https://bytlly.com/2uGyCD
-
-... Electra X VST? Do you want more presets for that VST? Well, we created these free Electra X presets just for you. Go here to download now...
-
-
-
diff --git a/spaces/linfanluntan/Grounded-SAM/segment_anything/scripts/export_onnx_model.py b/spaces/linfanluntan/Grounded-SAM/segment_anything/scripts/export_onnx_model.py
deleted file mode 100644
index 8ec5c2ec24fc53cd9fdf66564cfe163b9eb26c24..0000000000000000000000000000000000000000
--- a/spaces/linfanluntan/Grounded-SAM/segment_anything/scripts/export_onnx_model.py
+++ /dev/null
@@ -1,204 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-from segment_anything import build_sam, build_sam_vit_b, build_sam_vit_l
-from segment_anything.utils.onnx import SamOnnxModel
-
-import argparse
-import warnings
-
-try:
- import onnxruntime # type: ignore
-
- onnxruntime_exists = True
-except ImportError:
- onnxruntime_exists = False
-
-parser = argparse.ArgumentParser(
- description="Export the SAM prompt encoder and mask decoder to an ONNX model."
-)
-
-parser.add_argument(
- "--checkpoint", type=str, required=True, help="The path to the SAM model checkpoint."
-)
-
-parser.add_argument(
- "--output", type=str, required=True, help="The filename to save the ONNX model to."
-)
-
-parser.add_argument(
- "--model-type",
- type=str,
- default="default",
- help="In ['default', 'vit_b', 'vit_l']. Which type of SAM model to export.",
-)
-
-parser.add_argument(
- "--return-single-mask",
- action="store_true",
- help=(
- "If true, the exported ONNX model will only return the best mask, "
- "instead of returning multiple masks. For high resolution images "
- "this can improve runtime when upscaling masks is expensive."
- ),
-)
-
-parser.add_argument(
- "--opset",
- type=int,
- default=17,
- help="The ONNX opset version to use. Must be >=11",
-)
-
-parser.add_argument(
- "--quantize-out",
- type=str,
- default=None,
- help=(
- "If set, will quantize the model and save it with this name. "
- "Quantization is performed with quantize_dynamic from onnxruntime.quantization.quantize."
- ),
-)
-
-parser.add_argument(
- "--gelu-approximate",
- action="store_true",
- help=(
- "Replace GELU operations with approximations using tanh. Useful "
- "for some runtimes that have slow or unimplemented erf ops, used in GELU."
- ),
-)
-
-parser.add_argument(
- "--use-stability-score",
- action="store_true",
- help=(
- "Replaces the model's predicted mask quality score with the stability "
- "score calculated on the low resolution masks using an offset of 1.0. "
- ),
-)
-
-parser.add_argument(
- "--return-extra-metrics",
- action="store_true",
- help=(
- "The model will return five results: (masks, scores, stability_scores, "
- "areas, low_res_logits) instead of the usual three. This can be "
- "significantly slower for high resolution outputs."
- ),
-)
-
-
-def run_export(
- model_type: str,
- checkpoint: str,
- output: str,
- opset: int,
- return_single_mask: bool,
- gelu_approximate: bool = False,
- use_stability_score: bool = False,
- return_extra_metrics=False,
-):
- print("Loading model...")
- if model_type == "vit_b":
- sam = build_sam_vit_b(checkpoint)
- elif model_type == "vit_l":
- sam = build_sam_vit_l(checkpoint)
- else:
- sam = build_sam(checkpoint)
-
- onnx_model = SamOnnxModel(
- model=sam,
- return_single_mask=return_single_mask,
- use_stability_score=use_stability_score,
- return_extra_metrics=return_extra_metrics,
- )
-
- if gelu_approximate:
- for n, m in onnx_model.named_modules():
- if isinstance(m, torch.nn.GELU):
- m.approximate = "tanh"
-
- dynamic_axes = {
- "point_coords": {1: "num_points"},
- "point_labels": {1: "num_points"},
- }
-
- embed_dim = sam.prompt_encoder.embed_dim
- embed_size = sam.prompt_encoder.image_embedding_size
- mask_input_size = [4 * x for x in embed_size]
- dummy_inputs = {
- "image_embeddings": torch.randn(1, embed_dim, *embed_size, dtype=torch.float),
- "point_coords": torch.randint(low=0, high=1024, size=(1, 5, 2), dtype=torch.float),
- "point_labels": torch.randint(low=0, high=4, size=(1, 5), dtype=torch.float),
- "mask_input": torch.randn(1, 1, *mask_input_size, dtype=torch.float),
- "has_mask_input": torch.tensor([1], dtype=torch.float),
- "orig_im_size": torch.tensor([1500, 2250], dtype=torch.float),
- }
-
- _ = onnx_model(**dummy_inputs)
-
- output_names = ["masks", "iou_predictions", "low_res_masks"]
-
- with warnings.catch_warnings():
- warnings.filterwarnings("ignore", category=torch.jit.TracerWarning)
- warnings.filterwarnings("ignore", category=UserWarning)
- with open(output, "wb") as f:
- print(f"Exporing onnx model to {output}...")
- torch.onnx.export(
- onnx_model,
- tuple(dummy_inputs.values()),
- f,
- export_params=True,
- verbose=False,
- opset_version=opset,
- do_constant_folding=True,
- input_names=list(dummy_inputs.keys()),
- output_names=output_names,
- dynamic_axes=dynamic_axes,
- )
-
- if onnxruntime_exists:
- ort_inputs = {k: to_numpy(v) for k, v in dummy_inputs.items()}
- ort_session = onnxruntime.InferenceSession(output)
- _ = ort_session.run(None, ort_inputs)
- print("Model has successfully been run with ONNXRuntime.")
-
-
-def to_numpy(tensor):
- return tensor.cpu().numpy()
-
-
-if __name__ == "__main__":
- args = parser.parse_args()
- run_export(
- model_type=args.model_type,
- checkpoint=args.checkpoint,
- output=args.output,
- opset=args.opset,
- return_single_mask=args.return_single_mask,
- gelu_approximate=args.gelu_approximate,
- use_stability_score=args.use_stability_score,
- return_extra_metrics=args.return_extra_metrics,
- )
-
- if args.quantize_out is not None:
- assert onnxruntime_exists, "onnxruntime is required to quantize the model."
- from onnxruntime.quantization import QuantType # type: ignore
- from onnxruntime.quantization.quantize import quantize_dynamic # type: ignore
-
- print(f"Quantizing model and writing to {args.quantize_out}...")
- quantize_dynamic(
- model_input=args.output,
- model_output=args.quantize_out,
- optimize_model=True,
- per_channel=False,
- reduce_range=False,
- weight_type=QuantType.QUInt8,
- )
- print("Done!")
diff --git a/spaces/lj1995/vocal2guitar/gui.py b/spaces/lj1995/vocal2guitar/gui.py
deleted file mode 100644
index 754d95b47cc3abc80166c63d2ecc1e69ce38ce41..0000000000000000000000000000000000000000
--- a/spaces/lj1995/vocal2guitar/gui.py
+++ /dev/null
@@ -1,681 +0,0 @@
-"""
-Updates after 04/16:
-    use the half-precision setting from config
-    rebuild the .npy automatically instead of requiring the user to fill it in
-    v2 model support
-    support for models without f0
-    bug fixes
-
-    int16:
-    added support for running without an index
-    switched the f0 algorithm to harvest (seemingly the only part that affects CPU usage); without this change the results are poor
-"""
-import os, sys, traceback
-
-import json
-
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-from config import Config
-
-Config = Config()
-import PySimpleGUI as sg
-import sounddevice as sd
-import noisereduce as nr
-import numpy as np
-from fairseq import checkpoint_utils
-import librosa, torch, pyworld, faiss, time, threading
-import torch.nn.functional as F
-import torchaudio.transforms as tat
-import scipy.signal as signal
-
-
-# import matplotlib.pyplot as plt
-from infer_pack.models import (
- SynthesizerTrnMs256NSFsid,
- SynthesizerTrnMs256NSFsid_nono,
- SynthesizerTrnMs768NSFsid,
- SynthesizerTrnMs768NSFsid_nono,
-)
-from i18n import I18nAuto
-
-i18n = I18nAuto()
-device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-current_dir = os.getcwd()
-
-
-class RVC:
- def __init__(
- self, key, hubert_path, pth_path, index_path, npy_path, index_rate
- ) -> None:
- """
-        Initialization
- """
- try:
- self.f0_up_key = key
- self.time_step = 160 / 16000 * 1000
- self.f0_min = 50
- self.f0_max = 1100
- self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700)
- self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700)
- self.sr = 16000
- self.window = 160
- if index_rate != 0:
- self.index = faiss.read_index(index_path)
- # self.big_npy = np.load(npy_path)
- self.big_npy = self.index.reconstruct_n(0, self.index.ntotal)
- print("index search enabled")
- self.index_rate = index_rate
- model_path = hubert_path
- print("load model(s) from {}".format(model_path))
- models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task(
- [model_path],
- suffix="",
- )
- self.model = models[0]
- self.model = self.model.to(device)
- if Config.is_half:
- self.model = self.model.half()
- else:
- self.model = self.model.float()
- self.model.eval()
- cpt = torch.load(pth_path, map_location="cpu")
- self.tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
- self.if_f0 = cpt.get("f0", 1)
- self.version = cpt.get("version", "v1")
- if self.version == "v1":
- if self.if_f0 == 1:
- self.net_g = SynthesizerTrnMs256NSFsid(
- *cpt["config"], is_half=Config.is_half
- )
- else:
- self.net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- elif self.version == "v2":
- if self.if_f0 == 1:
- self.net_g = SynthesizerTrnMs768NSFsid(
- *cpt["config"], is_half=Config.is_half
- )
- else:
- self.net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
- del self.net_g.enc_q
- print(self.net_g.load_state_dict(cpt["weight"], strict=False))
- self.net_g.eval().to(device)
- if Config.is_half:
- self.net_g = self.net_g.half()
- else:
- self.net_g = self.net_g.float()
- except:
- print(traceback.format_exc())
-
- def get_f0(self, x, f0_up_key, inp_f0=None):
- x_pad = 1
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
- f0, t = pyworld.harvest(
- x.astype(np.double),
- fs=self.sr,
- f0_ceil=f0_max,
- f0_floor=f0_min,
- frame_period=10,
- )
- f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr)
- f0 = signal.medfilt(f0, 3)
- f0 *= pow(2, f0_up_key / 12)
- # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
-        tf0 = self.sr // self.window  # number of f0 frames per second
- if inp_f0 is not None:
- delta_t = np.round(
- (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1
- ).astype("int16")
- replace_f0 = np.interp(
- list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1]
- )
- shape = f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)].shape[0]
- f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)] = replace_f0[:shape]
- # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
- f0bak = f0.copy()
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
- f0_mel_max - f0_mel_min
- ) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
-        f0_coarse = np.rint(f0_mel).astype(int)  # np.int was removed in NumPy 1.24+
- return f0_coarse, f0bak # 1-0
-
- def infer(self, feats: torch.Tensor) -> np.ndarray:
- """
-        Inference function
- """
- audio = feats.clone().cpu().numpy()
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- padding_mask = torch.BoolTensor(feats.shape).fill_(False)
- if Config.is_half:
- feats = feats.half()
- else:
- feats = feats.float()
- inputs = {
- "source": feats.to(device),
- "padding_mask": padding_mask.to(device),
- "output_layer": 9 if self.version == "v1" else 12,
- }
- torch.cuda.synchronize()
- with torch.no_grad():
- logits = self.model.extract_features(**inputs)
- feats = (
- self.model.final_proj(logits[0]) if self.version == "v1" else logits[0]
- )
-
-        #### index search optimization
- try:
- if (
- hasattr(self, "index")
- and hasattr(self, "big_npy")
- and self.index_rate != 0
- ):
- npy = feats[0].cpu().numpy().astype("float32")
- score, ix = self.index.search(npy, k=8)
- weight = np.square(1 / score)
- weight /= weight.sum(axis=1, keepdims=True)
- npy = np.sum(self.big_npy[ix] * np.expand_dims(weight, axis=2), axis=1)
- if Config.is_half:
- npy = npy.astype("float16")
- feats = (
- torch.from_numpy(npy).unsqueeze(0).to(device) * self.index_rate
- + (1 - self.index_rate) * feats
- )
- else:
- print("index search FAIL or disabled")
- except:
- traceback.print_exc()
- print("index search FAIL")
- feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
- torch.cuda.synchronize()
- print(feats.shape)
- if self.if_f0 == 1:
- pitch, pitchf = self.get_f0(audio, self.f0_up_key)
-            p_len = min(feats.shape[1], 13000, pitch.shape[0])  # cap the length to avoid running out of GPU memory
- else:
- pitch, pitchf = None, None
-            p_len = min(feats.shape[1], 13000)  # cap the length to avoid running out of GPU memory
- torch.cuda.synchronize()
- # print(feats.shape,pitch.shape)
- feats = feats[:, :p_len, :]
- if self.if_f0 == 1:
- pitch = pitch[:p_len]
- pitchf = pitchf[:p_len]
- pitch = torch.LongTensor(pitch).unsqueeze(0).to(device)
- pitchf = torch.FloatTensor(pitchf).unsqueeze(0).to(device)
- p_len = torch.LongTensor([p_len]).to(device)
- ii = 0 # sid
- sid = torch.LongTensor([ii]).to(device)
- with torch.no_grad():
- if self.if_f0 == 1:
- infered_audio = (
- self.net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]
- .data.cpu()
- .float()
- )
- else:
- infered_audio = (
- self.net_g.infer(feats, p_len, sid)[0][0, 0].data.cpu().float()
- )
- torch.cuda.synchronize()
- return infered_audio
-
-
-class GUIConfig:
- def __init__(self) -> None:
- self.hubert_path: str = ""
- self.pth_path: str = ""
- self.index_path: str = ""
- self.npy_path: str = ""
- self.pitch: int = 12
- self.samplerate: int = 44100
- self.block_time: float = 1.0 # s
- self.buffer_num: int = 1
- self.threhold: int = -30
- self.crossfade_time: float = 0.08
- self.extra_time: float = 0.04
- self.I_noise_reduce = False
- self.O_noise_reduce = False
- self.index_rate = 0.3
-
-
-class GUI:
- def __init__(self) -> None:
- self.config = GUIConfig()
- self.flag_vc = False
-
- self.launcher()
-
- def load(self):
- input_devices, output_devices, _, _ = self.get_devices()
- try:
- with open("values1.json", "r") as j:
- data = json.load(j)
- except:
- with open("values1.json", "w") as j:
- data = {
- "pth_path": " ",
- "index_path": " ",
- "sg_input_device": input_devices[sd.default.device[0]],
- "sg_output_device": output_devices[sd.default.device[1]],
- "threhold": "-45",
- "pitch": "0",
- "index_rate": "0",
- "block_time": "1",
- "crossfade_length": "0.04",
- "extra_time": "1",
- }
- return data
-
- def launcher(self):
- data = self.load()
- sg.theme("LightBlue3")
- input_devices, output_devices, _, _ = self.get_devices()
- layout = [
- [
- sg.Frame(
- title=i18n("加载模型"),
- layout=[
- [
- sg.Input(
- default_text="hubert_base.pt",
- key="hubert_path",
- disabled=True,
- ),
- sg.FileBrowse(
- i18n("Hubert模型"),
- initial_folder=os.path.join(os.getcwd()),
- file_types=((". pt"),),
- ),
- ],
- [
- sg.Input(
- default_text=data.get("pth_path", ""),
- key="pth_path",
- ),
- sg.FileBrowse(
- i18n("选择.pth文件"),
- initial_folder=os.path.join(os.getcwd(), "weights"),
- file_types=((". pth"),),
- ),
- ],
- [
- sg.Input(
- default_text=data.get("index_path", ""),
- key="index_path",
- ),
- sg.FileBrowse(
- i18n("选择.index文件"),
- initial_folder=os.path.join(os.getcwd(), "logs"),
- file_types=((". index"),),
- ),
- ],
- [
- sg.Input(
- default_text="你不需要填写这个You don't need write this.",
- key="npy_path",
- disabled=True,
- ),
- sg.FileBrowse(
- i18n("选择.npy文件"),
- initial_folder=os.path.join(os.getcwd(), "logs"),
- file_types=((". npy"),),
- ),
- ],
- ],
- )
- ],
- [
- sg.Frame(
- layout=[
- [
- sg.Text(i18n("输入设备")),
- sg.Combo(
- input_devices,
- key="sg_input_device",
- default_value=data.get("sg_input_device", ""),
- ),
- ],
- [
- sg.Text(i18n("输出设备")),
- sg.Combo(
- output_devices,
- key="sg_output_device",
- default_value=data.get("sg_output_device", ""),
- ),
- ],
- ],
- title=i18n("音频设备(请使用同种类驱动)"),
- )
- ],
- [
- sg.Frame(
- layout=[
- [
- sg.Text(i18n("响应阈值")),
- sg.Slider(
- range=(-60, 0),
- key="threhold",
- resolution=1,
- orientation="h",
- default_value=data.get("threhold", ""),
- ),
- ],
- [
- sg.Text(i18n("音调设置")),
- sg.Slider(
- range=(-24, 24),
- key="pitch",
- resolution=1,
- orientation="h",
- default_value=data.get("pitch", ""),
- ),
- ],
- [
- sg.Text(i18n("Index Rate")),
- sg.Slider(
- range=(0.0, 1.0),
- key="index_rate",
- resolution=0.01,
- orientation="h",
- default_value=data.get("index_rate", ""),
- ),
- ],
- ],
- title=i18n("常规设置"),
- ),
- sg.Frame(
- layout=[
- [
- sg.Text(i18n("采样长度")),
- sg.Slider(
- range=(0.1, 3.0),
- key="block_time",
- resolution=0.1,
- orientation="h",
- default_value=data.get("block_time", ""),
- ),
- ],
- [
- sg.Text(i18n("淡入淡出长度")),
- sg.Slider(
- range=(0.01, 0.15),
- key="crossfade_length",
- resolution=0.01,
- orientation="h",
- default_value=data.get("crossfade_length", ""),
- ),
- ],
- [
- sg.Text(i18n("额外推理时长")),
- sg.Slider(
- range=(0.05, 3.00),
- key="extra_time",
- resolution=0.01,
- orientation="h",
- default_value=data.get("extra_time", ""),
- ),
- ],
- [
- sg.Checkbox(i18n("输入降噪"), key="I_noise_reduce"),
- sg.Checkbox(i18n("输出降噪"), key="O_noise_reduce"),
- ],
- ],
- title=i18n("性能设置"),
- ),
- ],
- [
- sg.Button(i18n("开始音频转换"), key="start_vc"),
- sg.Button(i18n("停止音频转换"), key="stop_vc"),
- sg.Text(i18n("推理时间(ms):")),
- sg.Text("0", key="infer_time"),
- ],
- ]
- self.window = sg.Window("RVC - GUI", layout=layout)
- self.event_handler()
-
- def event_handler(self):
- while True:
- event, values = self.window.read()
- if event == sg.WINDOW_CLOSED:
- self.flag_vc = False
- exit()
- if event == "start_vc" and self.flag_vc == False:
- self.set_values(values)
- print("using_cuda:" + str(torch.cuda.is_available()))
- self.start_vc()
- settings = {
- "pth_path": values["pth_path"],
- "index_path": values["index_path"],
- "sg_input_device": values["sg_input_device"],
- "sg_output_device": values["sg_output_device"],
- "threhold": values["threhold"],
- "pitch": values["pitch"],
- "index_rate": values["index_rate"],
- "block_time": values["block_time"],
- "crossfade_length": values["crossfade_length"],
- "extra_time": values["extra_time"],
- }
- with open("values1.json", "w") as j:
- json.dump(settings, j)
- if event == "stop_vc" and self.flag_vc == True:
- self.flag_vc = False
-
- def set_values(self, values):
- self.set_devices(values["sg_input_device"], values["sg_output_device"])
- self.config.hubert_path = os.path.join(current_dir, "hubert_base.pt")
- self.config.pth_path = values["pth_path"]
- self.config.index_path = values["index_path"]
- self.config.npy_path = values["npy_path"]
- self.config.threhold = values["threhold"]
- self.config.pitch = values["pitch"]
- self.config.block_time = values["block_time"]
- self.config.crossfade_time = values["crossfade_length"]
- self.config.extra_time = values["extra_time"]
- self.config.I_noise_reduce = values["I_noise_reduce"]
- self.config.O_noise_reduce = values["O_noise_reduce"]
- self.config.index_rate = values["index_rate"]
-
- def start_vc(self):
- torch.cuda.empty_cache()
- self.flag_vc = True
- self.block_frame = int(self.config.block_time * self.config.samplerate)
- self.crossfade_frame = int(self.config.crossfade_time * self.config.samplerate)
- self.sola_search_frame = int(0.012 * self.config.samplerate)
-        self.delay_frame = int(0.01 * self.config.samplerate)  # reserve a small lead-in ahead of the block
- self.extra_frame = int(self.config.extra_time * self.config.samplerate)
- self.rvc = None
- self.rvc = RVC(
- self.config.pitch,
- self.config.hubert_path,
- self.config.pth_path,
- self.config.index_path,
- self.config.npy_path,
- self.config.index_rate,
- )
- self.input_wav: np.ndarray = np.zeros(
- self.extra_frame
- + self.crossfade_frame
- + self.sola_search_frame
- + self.block_frame,
- dtype="float32",
- )
- self.output_wav: torch.Tensor = torch.zeros(
- self.block_frame, device=device, dtype=torch.float32
- )
- self.sola_buffer: torch.Tensor = torch.zeros(
- self.crossfade_frame, device=device, dtype=torch.float32
- )
- self.fade_in_window: torch.Tensor = torch.linspace(
- 0.0, 1.0, steps=self.crossfade_frame, device=device, dtype=torch.float32
- )
- self.fade_out_window: torch.Tensor = 1 - self.fade_in_window
- self.resampler1 = tat.Resample(
- orig_freq=self.config.samplerate, new_freq=16000, dtype=torch.float32
- )
- self.resampler2 = tat.Resample(
- orig_freq=self.rvc.tgt_sr,
- new_freq=self.config.samplerate,
- dtype=torch.float32,
- )
- thread_vc = threading.Thread(target=self.soundinput)
- thread_vc.start()
-
- def soundinput(self):
- """
-        Accept audio input
- """
- with sd.Stream(
- callback=self.audio_callback,
- blocksize=self.block_frame,
- samplerate=self.config.samplerate,
- dtype="float32",
- ):
- while self.flag_vc:
- time.sleep(self.config.block_time)
- print("Audio block passed.")
- print("ENDing VC")
-
- def audio_callback(
- self, indata: np.ndarray, outdata: np.ndarray, frames, times, status
- ):
- """
-        Audio processing
- """
- start_time = time.perf_counter()
- indata = librosa.to_mono(indata.T)
- if self.config.I_noise_reduce:
- indata[:] = nr.reduce_noise(y=indata, sr=self.config.samplerate)
-
- """noise gate"""
- frame_length = 2048
- hop_length = 1024
- rms = librosa.feature.rms(
- y=indata, frame_length=frame_length, hop_length=hop_length
- )
- db_threhold = librosa.amplitude_to_db(rms, ref=1.0)[0] < self.config.threhold
- # print(rms.shape,db.shape,db)
- for i in range(db_threhold.shape[0]):
- if db_threhold[i]:
- indata[i * hop_length : (i + 1) * hop_length] = 0
- self.input_wav[:] = np.append(self.input_wav[self.block_frame :], indata)
-
- # infer
- print("input_wav:" + str(self.input_wav.shape))
- # print('infered_wav:'+str(infer_wav.shape))
- infer_wav: torch.Tensor = self.resampler2(
- self.rvc.infer(self.resampler1(torch.from_numpy(self.input_wav)))
- )[-self.crossfade_frame - self.sola_search_frame - self.block_frame :].to(
- device
- )
- print("infer_wav:" + str(infer_wav.shape))
-
- # SOLA algorithm from https://github.com/yxlllc/DDSP-SVC
- cor_nom = F.conv1d(
- infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame],
- self.sola_buffer[None, None, :],
- )
- cor_den = torch.sqrt(
- F.conv1d(
- infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame]
- ** 2,
- torch.ones(1, 1, self.crossfade_frame, device=device),
- )
- + 1e-8
- )
- sola_offset = torch.argmax(cor_nom[0, 0] / cor_den[0, 0])
- print("sola offset: " + str(int(sola_offset)))
-
- # crossfade
- self.output_wav[:] = infer_wav[sola_offset : sola_offset + self.block_frame]
- self.output_wav[: self.crossfade_frame] *= self.fade_in_window
- self.output_wav[: self.crossfade_frame] += self.sola_buffer[:]
- if sola_offset < self.sola_search_frame:
- self.sola_buffer[:] = (
- infer_wav[
- -self.sola_search_frame
- - self.crossfade_frame
- + sola_offset : -self.sola_search_frame
- + sola_offset
- ]
- * self.fade_out_window
- )
- else:
- self.sola_buffer[:] = (
- infer_wav[-self.crossfade_frame :] * self.fade_out_window
- )
-
- if self.config.O_noise_reduce:
- outdata[:] = np.tile(
- nr.reduce_noise(
- y=self.output_wav[:].cpu().numpy(), sr=self.config.samplerate
- ),
- (2, 1),
- ).T
- else:
- outdata[:] = self.output_wav[:].repeat(2, 1).t().cpu().numpy()
- total_time = time.perf_counter() - start_time
- self.window["infer_time"].update(int(total_time * 1000))
- print("infer time:" + str(total_time))
-
- def get_devices(self, update: bool = True):
- """获取设备列表"""
- if update:
- sd._terminate()
- sd._initialize()
- devices = sd.query_devices()
- hostapis = sd.query_hostapis()
- for hostapi in hostapis:
- for device_idx in hostapi["devices"]:
- devices[device_idx]["hostapi_name"] = hostapi["name"]
- input_devices = [
- f"{d['name']} ({d['hostapi_name']})"
- for d in devices
- if d["max_input_channels"] > 0
- ]
- output_devices = [
- f"{d['name']} ({d['hostapi_name']})"
- for d in devices
- if d["max_output_channels"] > 0
- ]
- input_devices_indices = [
- d["index"] if "index" in d else d["name"]
- for d in devices
- if d["max_input_channels"] > 0
- ]
- output_devices_indices = [
- d["index"] if "index" in d else d["name"]
- for d in devices
- if d["max_output_channels"] > 0
- ]
- return (
- input_devices,
- output_devices,
- input_devices_indices,
- output_devices_indices,
- )
-
- def set_devices(self, input_device, output_device):
- """设置输出设备"""
- (
- input_devices,
- output_devices,
- input_device_indices,
- output_device_indices,
- ) = self.get_devices()
- sd.default.device[0] = input_device_indices[input_devices.index(input_device)]
- sd.default.device[1] = output_device_indices[
- output_devices.index(output_device)
- ]
- print("input device:" + str(sd.default.device[0]) + ":" + str(input_device))
- print("output device:" + str(sd.default.device[1]) + ":" + str(output_device))
-
-
-gui = GUI()
diff --git a/spaces/ljjggr/bingo/src/components/ui/badge.tsx b/spaces/ljjggr/bingo/src/components/ui/badge.tsx
deleted file mode 100644
index d9a84b394090e5b4b3bd34f6135b9a2f2ead0aa2..0000000000000000000000000000000000000000
--- a/spaces/ljjggr/bingo/src/components/ui/badge.tsx
+++ /dev/null
@@ -1,36 +0,0 @@
-import * as React from 'react'
-import { cva, type VariantProps } from 'class-variance-authority'
-
-import { cn } from '@/lib/utils'
-
-const badgeVariants = cva(
- 'inline-flex items-center rounded-full border px-2.5 py-0.5 text-xs font-semibold transition-colors focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2',
- {
- variants: {
- variant: {
- default:
- 'border-transparent bg-primary text-primary-foreground hover:bg-primary/80',
- secondary:
- 'border-transparent bg-secondary text-secondary-foreground hover:bg-secondary/80',
- destructive:
- 'border-transparent bg-destructive text-destructive-foreground hover:bg-destructive/80',
- outline: 'text-foreground'
- }
- },
- defaultVariants: {
- variant: 'default'
- }
- }
-)
-
-export interface BadgeProps
- extends React.HTMLAttributes<HTMLDivElement>,
- VariantProps<typeof badgeVariants> {}
-
-function Badge({ className, variant, ...props }: BadgeProps) {
- return (
- <div className={cn(badgeVariants({ variant }), className)} {...props} />
- )
-}
-
-export { Badge, badgeVariants }
diff --git a/spaces/lllqqq/so-vits-svc-models-pcr/vencoder/whisper/utils.py b/spaces/lllqqq/so-vits-svc-models-pcr/vencoder/whisper/utils.py
deleted file mode 100644
index 5dacc173c40bcd6e999d728862e29a968000b12e..0000000000000000000000000000000000000000
--- a/spaces/lllqqq/so-vits-svc-models-pcr/vencoder/whisper/utils.py
+++ /dev/null
@@ -1,163 +0,0 @@
-import json
-import os
-import sys
-import zlib
-from typing import Callable, TextIO
-
-system_encoding = sys.getdefaultencoding()
-
-if system_encoding != "utf-8":
- def make_safe(string):
- # replaces any character not representable using the system default encoding with an '?',
- # avoiding UnicodeEncodeError (https://github.com/openai/whisper/discussions/729).
- return string.encode(system_encoding, errors="replace").decode(system_encoding)
-else:
- def make_safe(string):
- # utf-8 can encode any Unicode code point, so no need to do the round-trip encoding
- return string
-
-
-def exact_div(x, y):
- assert x % y == 0
- return x // y
-
-
-def str2bool(string):
- str2val = {"True": True, "False": False}
- if string in str2val:
- return str2val[string]
- else:
- raise ValueError(f"Expected one of {set(str2val.keys())}, got {string}")
-
-
-def optional_int(string):
- return None if string == "None" else int(string)
-
-
-def optional_float(string):
- return None if string == "None" else float(string)
-
-
-def compression_ratio(text) -> float:
- text_bytes = text.encode("utf-8")
- return len(text_bytes) / len(zlib.compress(text_bytes))
-
-
-def format_timestamp(seconds: float, always_include_hours: bool = False, decimal_marker: str = '.'):
- assert seconds >= 0, "non-negative timestamp expected"
- milliseconds = round(seconds * 1000.0)
-
- hours = milliseconds // 3_600_000
- milliseconds -= hours * 3_600_000
-
- minutes = milliseconds // 60_000
- milliseconds -= minutes * 60_000
-
- seconds = milliseconds // 1_000
- milliseconds -= seconds * 1_000
-
- hours_marker = f"{hours:02d}:" if always_include_hours or hours > 0 else ""
- return f"{hours_marker}{minutes:02d}:{seconds:02d}{decimal_marker}{milliseconds:03d}"
-
-
-class ResultWriter:
- extension: str
-
- def __init__(self, output_dir: str):
- self.output_dir = output_dir
-
- def __call__(self, result: dict, audio_path: str):
- audio_basename = os.path.basename(audio_path)
- output_path = os.path.join(self.output_dir, audio_basename + "." + self.extension)
-
- with open(output_path, "w", encoding="utf-8") as f:
- self.write_result(result, file=f)
-
- def write_result(self, result: dict, file: TextIO):
- raise NotImplementedError
-
-
-class WriteTXT(ResultWriter):
- extension: str = "txt"
-
- def write_result(self, result: dict, file: TextIO):
- for segment in result["segments"]:
- print(segment['text'].strip(), file=file, flush=True)
-
-
-class WriteVTT(ResultWriter):
- extension: str = "vtt"
-
- def write_result(self, result: dict, file: TextIO):
- print("WEBVTT\n", file=file)
- for segment in result["segments"]:
- print(
- f"{format_timestamp(segment['start'])} --> {format_timestamp(segment['end'])}\n"
- f"{segment['text'].strip().replace('-->', '->')}\n",
- file=file,
- flush=True,
- )
-
-
-class WriteSRT(ResultWriter):
- extension: str = "srt"
-
- def write_result(self, result: dict, file: TextIO):
- for i, segment in enumerate(result["segments"], start=1):
- # write srt lines
- print(
- f"{i}\n"
- f"{format_timestamp(segment['start'], always_include_hours=True, decimal_marker=',')} --> "
- f"{format_timestamp(segment['end'], always_include_hours=True, decimal_marker=',')}\n"
- f"{segment['text'].strip().replace('-->', '->')}\n",
- file=file,
- flush=True,
- )
-
-
-class WriteTSV(ResultWriter):
- """
- Write a transcript to a file in TSV (tab-separated values) format containing lines like:
- <start time in integer milliseconds>\t<end time in integer milliseconds>\t<transcript text>
-
- Using integer milliseconds as start and end times means there's no chance of interference from
- an environment setting a language encoding that causes the decimal in a floating point number
- to appear as a comma; it is also faster and more efficient to parse and store, e.g., in C++.
- """
- extension: str = "tsv"
-
- def write_result(self, result: dict, file: TextIO):
- print("start", "end", "text", sep="\t", file=file)
- for segment in result["segments"]:
- print(round(1000 * segment['start']), file=file, end="\t")
- print(round(1000 * segment['end']), file=file, end="\t")
- print(segment['text'].strip().replace("\t", " "), file=file, flush=True)
-
-
-class WriteJSON(ResultWriter):
- extension: str = "json"
-
- def write_result(self, result: dict, file: TextIO):
- json.dump(result, file)
-
-
-def get_writer(output_format: str, output_dir: str) -> Callable[[dict, TextIO], None]:
- writers = {
- "txt": WriteTXT,
- "vtt": WriteVTT,
- "srt": WriteSRT,
- "tsv": WriteTSV,
- "json": WriteJSON,
- }
-
- if output_format == "all":
- all_writers = [writer(output_dir) for writer in writers.values()]
-
- def write_all(result: dict, file: TextIO):
- for writer in all_writers:
- writer(result, file)
-
- return write_all
-
- return writers[output_format](output_dir)
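-
-# Usage sketch (hypothetical paths): a writer is called with the result dict and the
-# source audio path, e.g. get_writer("srt", "transcripts")({"segments": [...]}, "audio.wav")
-# writes transcripts/audio.wav.srt.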
-
diff --git a/spaces/ludusc/latent-space-theories/torch_utils/misc.py b/spaces/ludusc/latent-space-theories/torch_utils/misc.py
deleted file mode 100644
index 335397dd1662d8f5bfd44e17899a00549867f4bc..0000000000000000000000000000000000000000
--- a/spaces/ludusc/latent-space-theories/torch_utils/misc.py
+++ /dev/null
@@ -1,266 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-import re
-import contextlib
-import numpy as np
-import torch
-import warnings
-import dnnlib
-
-#----------------------------------------------------------------------------
-# Cached construction of constant tensors. Avoids CPU=>GPU copy when the
-# same constant is used multiple times.
-
-_constant_cache = dict()
-
-def constant(value, shape=None, dtype=None, device=None, memory_format=None):
- value = np.asarray(value)
- if shape is not None:
- shape = tuple(shape)
- if dtype is None:
- dtype = torch.get_default_dtype()
- if device is None:
- device = torch.device('cpu')
- if memory_format is None:
- memory_format = torch.contiguous_format
-
- key = (value.shape, value.dtype, value.tobytes(), shape, dtype, device, memory_format)
- tensor = _constant_cache.get(key, None)
- if tensor is None:
- tensor = torch.as_tensor(value.copy(), dtype=dtype, device=device)
- if shape is not None:
- tensor, _ = torch.broadcast_tensors(tensor, torch.empty(shape))
- tensor = tensor.contiguous(memory_format=memory_format)
- _constant_cache[key] = tensor
- return tensor
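-
-# For example, constant([0, 1], shape=[2, 2]) broadcasts the value to a 2x2 tensor
-# and returns the same cached tensor on later calls with identical arguments.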
-
-#----------------------------------------------------------------------------
-# Replace NaN/Inf with specified numerical values.
-
-try:
- nan_to_num = torch.nan_to_num # 1.8.0a0
-except AttributeError:
- def nan_to_num(input, nan=0.0, posinf=None, neginf=None, *, out=None): # pylint: disable=redefined-builtin
- assert isinstance(input, torch.Tensor)
- if posinf is None:
- posinf = torch.finfo(input.dtype).max
- if neginf is None:
- neginf = torch.finfo(input.dtype).min
- assert nan == 0
- return torch.clamp(input.unsqueeze(0).nansum(0), min=neginf, max=posinf, out=out)
-
-#----------------------------------------------------------------------------
-# Symbolic assert.
-
-try:
- symbolic_assert = torch._assert # 1.8.0a0 # pylint: disable=protected-access
-except AttributeError:
- symbolic_assert = torch.Assert # 1.7.0
-
-#----------------------------------------------------------------------------
-# Context manager to temporarily suppress known warnings in torch.jit.trace().
-# Note: Cannot use catch_warnings because of https://bugs.python.org/issue29672
-
-@contextlib.contextmanager
-def suppress_tracer_warnings():
- flt = ('ignore', None, torch.jit.TracerWarning, None, 0)
- warnings.filters.insert(0, flt)
- yield
- warnings.filters.remove(flt)
-
-#----------------------------------------------------------------------------
-# Assert that the shape of a tensor matches the given list of integers.
-# None indicates that the size of a dimension is allowed to vary.
-# Performs symbolic assertion when used in torch.jit.trace().
-
-def assert_shape(tensor, ref_shape):
- if tensor.ndim != len(ref_shape):
- raise AssertionError(f'Wrong number of dimensions: got {tensor.ndim}, expected {len(ref_shape)}')
- for idx, (size, ref_size) in enumerate(zip(tensor.shape, ref_shape)):
- if ref_size is None:
- pass
- elif isinstance(ref_size, torch.Tensor):
- with suppress_tracer_warnings(): # as_tensor results are registered as constants
- symbolic_assert(torch.equal(torch.as_tensor(size), ref_size), f'Wrong size for dimension {idx}')
- elif isinstance(size, torch.Tensor):
- with suppress_tracer_warnings(): # as_tensor results are registered as constants
- symbolic_assert(torch.equal(size, torch.as_tensor(ref_size)), f'Wrong size for dimension {idx}: expected {ref_size}')
- elif size != ref_size:
- raise AssertionError(f'Wrong size for dimension {idx}: got {size}, expected {ref_size}')
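-
-# For example, assert_shape(x, [None, 3, 256, 256]) accepts any batch size but
-# requires the remaining dimensions to match exactly.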
-
-#----------------------------------------------------------------------------
-# Function decorator that calls torch.autograd.profiler.record_function().
-
-def profiled_function(fn):
- def decorator(*args, **kwargs):
- with torch.autograd.profiler.record_function(fn.__name__):
- return fn(*args, **kwargs)
- decorator.__name__ = fn.__name__
- return decorator
-
-#----------------------------------------------------------------------------
-# Sampler for torch.utils.data.DataLoader that loops over the dataset
-# indefinitely, shuffling items as it goes.
-
-class InfiniteSampler(torch.utils.data.Sampler):
- def __init__(self, dataset, rank=0, num_replicas=1, shuffle=True, seed=0, window_size=0.5):
- assert len(dataset) > 0
- assert num_replicas > 0
- assert 0 <= rank < num_replicas
- assert 0 <= window_size <= 1
- super().__init__(dataset)
- self.dataset = dataset
- self.rank = rank
- self.num_replicas = num_replicas
- self.shuffle = shuffle
- self.seed = seed
- self.window_size = window_size
-
- def __iter__(self):
- order = np.arange(len(self.dataset))
- rnd = None
- window = 0
- if self.shuffle:
- rnd = np.random.RandomState(self.seed)
- rnd.shuffle(order)
- window = int(np.rint(order.size * self.window_size))
-
- idx = 0
- while True:
- i = idx % order.size
- if idx % self.num_replicas == self.rank:
- yield order[i]
- if window >= 2:
- j = (i - rnd.randint(window)) % order.size
- order[i], order[j] = order[j], order[i]
- idx += 1
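-
-# Usage sketch (assumed batch size): pair the sampler with a DataLoader to get an
-# endless, locally shuffled stream of samples.
-#   sampler = InfiniteSampler(dataset, rank=0, num_replicas=1, seed=0)
-#   loader = torch.utils.data.DataLoader(dataset, sampler=sampler, batch_size=4)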
-
-#----------------------------------------------------------------------------
-# Utilities for operating with torch.nn.Module parameters and buffers.
-
-def params_and_buffers(module):
- assert isinstance(module, torch.nn.Module)
- return list(module.parameters()) + list(module.buffers())
-
-def named_params_and_buffers(module):
- assert isinstance(module, torch.nn.Module)
- return list(module.named_parameters()) + list(module.named_buffers())
-
-def copy_params_and_buffers(src_module, dst_module, require_all=False):
- assert isinstance(src_module, torch.nn.Module)
- assert isinstance(dst_module, torch.nn.Module)
- src_tensors = dict(named_params_and_buffers(src_module))
- for name, tensor in named_params_and_buffers(dst_module):
- assert (name in src_tensors) or (not require_all)
- if name in src_tensors:
- tensor.copy_(src_tensors[name].detach()).requires_grad_(tensor.requires_grad)
-
-#----------------------------------------------------------------------------
-# Context manager for easily enabling/disabling DistributedDataParallel
-# synchronization.
-
-@contextlib.contextmanager
-def ddp_sync(module, sync):
- assert isinstance(module, torch.nn.Module)
- if sync or not isinstance(module, torch.nn.parallel.DistributedDataParallel):
- yield
- else:
- with module.no_sync():
- yield
-
-#----------------------------------------------------------------------------
-# Check DistributedDataParallel consistency across processes.
-
-def check_ddp_consistency(module, ignore_regex=None):
- assert isinstance(module, torch.nn.Module)
- for name, tensor in named_params_and_buffers(module):
- fullname = type(module).__name__ + '.' + name
- if ignore_regex is not None and re.fullmatch(ignore_regex, fullname):
- continue
- tensor = tensor.detach()
- if tensor.is_floating_point():
- tensor = nan_to_num(tensor)
- other = tensor.clone()
- torch.distributed.broadcast(tensor=other, src=0)
- assert (tensor == other).all(), fullname
-
-#----------------------------------------------------------------------------
-# Print summary table of module hierarchy.
-
-def print_module_summary(module, inputs, max_nesting=3, skip_redundant=True):
- assert isinstance(module, torch.nn.Module)
- assert not isinstance(module, torch.jit.ScriptModule)
- assert isinstance(inputs, (tuple, list))
-
- # Register hooks.
- entries = []
- nesting = [0]
- def pre_hook(_mod, _inputs):
- nesting[0] += 1
- def post_hook(mod, _inputs, outputs):
- nesting[0] -= 1
- if nesting[0] <= max_nesting:
- outputs = list(outputs) if isinstance(outputs, (tuple, list)) else [outputs]
- outputs = [t for t in outputs if isinstance(t, torch.Tensor)]
- entries.append(dnnlib.EasyDict(mod=mod, outputs=outputs))
- hooks = [mod.register_forward_pre_hook(pre_hook) for mod in module.modules()]
- hooks += [mod.register_forward_hook(post_hook) for mod in module.modules()]
-
- # Run module.
- outputs = module(*inputs)
- for hook in hooks:
- hook.remove()
-
- # Identify unique outputs, parameters, and buffers.
- tensors_seen = set()
- for e in entries:
- e.unique_params = [t for t in e.mod.parameters() if id(t) not in tensors_seen]
- e.unique_buffers = [t for t in e.mod.buffers() if id(t) not in tensors_seen]
- e.unique_outputs = [t for t in e.outputs if id(t) not in tensors_seen]
- tensors_seen |= {id(t) for t in e.unique_params + e.unique_buffers + e.unique_outputs}
-
- # Filter out redundant entries.
- if skip_redundant:
- entries = [e for e in entries if len(e.unique_params) or len(e.unique_buffers) or len(e.unique_outputs)]
-
- # Construct table.
- rows = [[type(module).__name__, 'Parameters', 'Buffers', 'Output shape', 'Datatype']]
- rows += [['---'] * len(rows[0])]
- param_total = 0
- buffer_total = 0
- submodule_names = {mod: name for name, mod in module.named_modules()}
- for e in entries:
- name = '' if e.mod is module else submodule_names[e.mod]
- param_size = sum(t.numel() for t in e.unique_params)
- buffer_size = sum(t.numel() for t in e.unique_buffers)
- output_shapes = [str(list(t.shape)) for t in e.outputs]
- output_dtypes = [str(t.dtype).split('.')[-1] for t in e.outputs]
- rows += [[
- name + (':0' if len(e.outputs) >= 2 else ''),
- str(param_size) if param_size else '-',
- str(buffer_size) if buffer_size else '-',
- (output_shapes + ['-'])[0],
- (output_dtypes + ['-'])[0],
- ]]
- for idx in range(1, len(e.outputs)):
- rows += [[name + f':{idx}', '-', '-', output_shapes[idx], output_dtypes[idx]]]
- param_total += param_size
- buffer_total += buffer_size
- rows += [['---'] * len(rows[0])]
- rows += [['Total', str(param_total), str(buffer_total), '-', '-']]
-
- # Print table.
- widths = [max(len(cell) for cell in column) for column in zip(*rows)]
- print()
- for row in rows:
- print(' '.join(cell + ' ' * (width - len(cell)) for cell, width in zip(row, widths)))
- print()
- return outputs
-
-#----------------------------------------------------------------------------
diff --git a/spaces/magicr/BuboGPT/bubogpt/processors/vision_augment.py b/spaces/magicr/BuboGPT/bubogpt/processors/vision_augment.py
deleted file mode 100644
index 7034a49ad5fc63b97910790017432617ff4c6d7b..0000000000000000000000000000000000000000
--- a/spaces/magicr/BuboGPT/bubogpt/processors/vision_augment.py
+++ /dev/null
@@ -1,398 +0,0 @@
-"""
- Copyright (c) 2022, salesforce.com, inc.
- All rights reserved.
- SPDX-License-Identifier: BSD-3-Clause
- For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause
-"""
-
-import cv2
-import numpy as np
-
-import torch
-
-
-## aug functions
-def identity_func(img):
- return img
-
-
-def autocontrast_func(img, cutoff=0):
- """
- same output as PIL.ImageOps.autocontrast
- """
- n_bins = 256
-
- def tune_channel(ch):
- n = ch.size
- cut = cutoff * n // 100
- if cut == 0:
- high, low = ch.max(), ch.min()
- else:
- hist = cv2.calcHist([ch], [0], None, [n_bins], [0, n_bins])
- low = np.argwhere(np.cumsum(hist) > cut)
- low = 0 if low.shape[0] == 0 else low[0]
- high = np.argwhere(np.cumsum(hist[::-1]) > cut)
- high = n_bins - 1 if high.shape[0] == 0 else n_bins - 1 - high[0]
- if high <= low:
- table = np.arange(n_bins)
- else:
- scale = (n_bins - 1) / (high - low)
- offset = -low * scale
- table = np.arange(n_bins) * scale + offset
- table[table < 0] = 0
- table[table > n_bins - 1] = n_bins - 1
- table = table.clip(0, 255).astype(np.uint8)
- return table[ch]
-
- channels = [tune_channel(ch) for ch in cv2.split(img)]
- out = cv2.merge(channels)
- return out
-
-
-def equalize_func(img):
- """
- same output as PIL.ImageOps.equalize
- PIL's implementation is different from cv2.equalize
- """
- n_bins = 256
-
- def tune_channel(ch):
- hist = cv2.calcHist([ch], [0], None, [n_bins], [0, n_bins])
- non_zero_hist = hist[hist != 0].reshape(-1)
- step = np.sum(non_zero_hist[:-1]) // (n_bins - 1)
- if step == 0:
- return ch
- n = np.empty_like(hist)
- n[0] = step // 2
- n[1:] = hist[:-1]
- table = (np.cumsum(n) // step).clip(0, 255).astype(np.uint8)
- return table[ch]
-
- channels = [tune_channel(ch) for ch in cv2.split(img)]
- out = cv2.merge(channels)
- return out
-
-
-def rotate_func(img, degree, fill=(0, 0, 0)):
- """
- like PIL, rotate by degree, not radians
- """
- H, W = img.shape[0], img.shape[1]
- center = W / 2, H / 2
- M = cv2.getRotationMatrix2D(center, degree, 1)
- out = cv2.warpAffine(img, M, (W, H), borderValue=fill)
- return out
-
-
-def solarize_func(img, thresh=128):
- """
- same output as PIL.ImageOps.solarize
- """
- table = np.array([el if el < thresh else 255 - el for el in range(256)])
- table = table.clip(0, 255).astype(np.uint8)
- out = table[img]
- return out
-
-
-def color_func(img, factor):
- """
- same output as PIL.ImageEnhance.Color
- """
- ## implementation according to PIL definition, quite slow
- # degenerate = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)[:, :, np.newaxis]
- # out = blend(degenerate, img, factor)
- # M = (
- # np.eye(3) * factor
- # + np.float32([0.114, 0.587, 0.299]).reshape(3, 1) * (1. - factor)
- # )[np.newaxis, np.newaxis, :]
- M = np.float32(
- [[0.886, -0.114, -0.114], [-0.587, 0.413, -0.587], [-0.299, -0.299, 0.701]]
- ) * factor + np.float32([[0.114], [0.587], [0.299]])
- out = np.matmul(img, M).clip(0, 255).astype(np.uint8)
- return out
-
-
-def contrast_func(img, factor):
- """
- same output as PIL.ImageEnhance.Contrast
- """
- mean = np.sum(np.mean(img, axis=(0, 1)) * np.array([0.114, 0.587, 0.299]))
- table = (
- np.array([(el - mean) * factor + mean for el in range(256)])
- .clip(0, 255)
- .astype(np.uint8)
- )
- out = table[img]
- return out
-
-
-def brightness_func(img, factor):
- """
- same output as PIL.ImageEnhance.Brightness
- """
- table = (np.arange(256, dtype=np.float32) * factor).clip(0, 255).astype(np.uint8)
- out = table[img]
- return out
-
-
-def sharpness_func(img, factor):
- """
- The differences between this result and PIL are all on the 4 boundaries; the center
- areas are the same
- """
- kernel = np.ones((3, 3), dtype=np.float32)
- kernel[1][1] = 5
- kernel /= 13
- degenerate = cv2.filter2D(img, -1, kernel)
- if factor == 0.0:
- out = degenerate
- elif factor == 1.0:
- out = img
- else:
- out = img.astype(np.float32)
- degenerate = degenerate.astype(np.float32)[1:-1, 1:-1, :]
- out[1:-1, 1:-1, :] = degenerate + factor * (out[1:-1, 1:-1, :] - degenerate)
- out = out.astype(np.uint8)
- return out
-
-
-def shear_x_func(img, factor, fill=(0, 0, 0)):
- H, W = img.shape[0], img.shape[1]
- M = np.float32([[1, factor, 0], [0, 1, 0]])
- out = cv2.warpAffine(
- img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR
- ).astype(np.uint8)
- return out
-
-
-def translate_x_func(img, offset, fill=(0, 0, 0)):
- """
- same output as PIL.Image.transform
- """
- H, W = img.shape[0], img.shape[1]
- M = np.float32([[1, 0, -offset], [0, 1, 0]])
- out = cv2.warpAffine(
- img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR
- ).astype(np.uint8)
- return out
-
-
-def translate_y_func(img, offset, fill=(0, 0, 0)):
- """
- same output as PIL.Image.transform
- """
- H, W = img.shape[0], img.shape[1]
- M = np.float32([[1, 0, 0], [0, 1, -offset]])
- out = cv2.warpAffine(
- img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR
- ).astype(np.uint8)
- return out
-
-
-def posterize_func(img, bits):
- """
- same output as PIL.ImageOps.posterize
- """
- out = np.bitwise_and(img, np.uint8(255 << (8 - bits)))
- return out
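-
-# For example, posterize_func(img, 4) keeps only the top 4 bits of each channel
-# (bitwise AND with 0b11110000), i.e. 16 intensity levels.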
-
-
-def shear_y_func(img, factor, fill=(0, 0, 0)):
- H, W = img.shape[0], img.shape[1]
- M = np.float32([[1, 0, 0], [factor, 1, 0]])
- out = cv2.warpAffine(
- img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR
- ).astype(np.uint8)
- return out
-
-
-def cutout_func(img, pad_size, replace=(0, 0, 0)):
- replace = np.array(replace, dtype=np.uint8)
- H, W = img.shape[0], img.shape[1]
- rh, rw = np.random.random(2)
- pad_size = pad_size // 2
- ch, cw = int(rh * H), int(rw * W)
- x1, x2 = max(ch - pad_size, 0), min(ch + pad_size, H)
- y1, y2 = max(cw - pad_size, 0), min(cw + pad_size, W)
- out = img.copy()
- out[x1:x2, y1:y2, :] = replace
- return out
-
-
-### level to args
-def enhance_level_to_args(MAX_LEVEL):
- def level_to_args(level):
- return ((level / MAX_LEVEL) * 1.8 + 0.1,)
-
- return level_to_args
-
-
-def shear_level_to_args(MAX_LEVEL, replace_value):
- def level_to_args(level):
- level = (level / MAX_LEVEL) * 0.3
- if np.random.random() > 0.5:
- level = -level
- return (level, replace_value)
-
- return level_to_args
-
-
-def translate_level_to_args(translate_const, MAX_LEVEL, replace_value):
- def level_to_args(level):
- level = (level / MAX_LEVEL) * float(translate_const)
- if np.random.random() > 0.5:
- level = -level
- return (level, replace_value)
-
- return level_to_args
-
-
-def cutout_level_to_args(cutout_const, MAX_LEVEL, replace_value):
- def level_to_args(level):
- level = int((level / MAX_LEVEL) * cutout_const)
- return (level, replace_value)
-
- return level_to_args
-
-
-def solarize_level_to_args(MAX_LEVEL):
- def level_to_args(level):
- level = int((level / MAX_LEVEL) * 256)
- return (level,)
-
- return level_to_args
-
-
-def none_level_to_args(level):
- return ()
-
-
-def posterize_level_to_args(MAX_LEVEL):
- def level_to_args(level):
- level = int((level / MAX_LEVEL) * 4)
- return (level,)
-
- return level_to_args
-
-
-def rotate_level_to_args(MAX_LEVEL, replace_value):
- def level_to_args(level):
- level = (level / MAX_LEVEL) * 30
- if np.random.random() < 0.5:
- level = -level
- return (level, replace_value)
-
- return level_to_args
-
-
-func_dict = {
- "Identity": identity_func,
- "AutoContrast": autocontrast_func,
- "Equalize": equalize_func,
- "Rotate": rotate_func,
- "Solarize": solarize_func,
- "Color": color_func,
- "Contrast": contrast_func,
- "Brightness": brightness_func,
- "Sharpness": sharpness_func,
- "ShearX": shear_x_func,
- "TranslateX": translate_x_func,
- "TranslateY": translate_y_func,
- "Posterize": posterize_func,
- "ShearY": shear_y_func,
-}
-
-translate_const = 10
-MAX_LEVEL = 10
-replace_value = (128, 128, 128)
-arg_dict = {
- "Identity": none_level_to_args,
- "AutoContrast": none_level_to_args,
- "Equalize": none_level_to_args,
- "Rotate": rotate_level_to_args(MAX_LEVEL, replace_value),
- "Solarize": solarize_level_to_args(MAX_LEVEL),
- "Color": enhance_level_to_args(MAX_LEVEL),
- "Contrast": enhance_level_to_args(MAX_LEVEL),
- "Brightness": enhance_level_to_args(MAX_LEVEL),
- "Sharpness": enhance_level_to_args(MAX_LEVEL),
- "ShearX": shear_level_to_args(MAX_LEVEL, replace_value),
- "TranslateX": translate_level_to_args(translate_const, MAX_LEVEL, replace_value),
- "TranslateY": translate_level_to_args(translate_const, MAX_LEVEL, replace_value),
- "Posterize": posterize_level_to_args(MAX_LEVEL),
- "ShearY": shear_level_to_args(MAX_LEVEL, replace_value),
-}
-
-
-class RandomAugment(object):
- def __init__(self, N=2, M=10, isPIL=False, augs=[]):
- self.N = N
- self.M = M
- self.isPIL = isPIL
- if augs:
- self.augs = augs
- else:
- self.augs = list(arg_dict.keys())
-
- def get_random_ops(self):
- sampled_ops = np.random.choice(self.augs, self.N)
- return [(op, 0.5, self.M) for op in sampled_ops]
-
- def __call__(self, img):
- if self.isPIL:
- img = np.array(img)
- ops = self.get_random_ops()
- for name, prob, level in ops:
- if np.random.random() > prob:
- continue
- args = arg_dict[name](level)
- img = func_dict[name](img, *args)
- return img
-
-
-class VideoRandomAugment(object):
- def __init__(self, N=2, M=10, p=0.0, tensor_in_tensor_out=True, augs=[]):
- self.N = N
- self.M = M
- self.p = p
- self.tensor_in_tensor_out = tensor_in_tensor_out
- if augs:
- self.augs = augs
- else:
- self.augs = list(arg_dict.keys())
-
- def get_random_ops(self):
- sampled_ops = np.random.choice(self.augs, self.N, replace=False)
- return [(op, self.M) for op in sampled_ops]
-
- def __call__(self, frames):
- assert (
- frames.shape[-1] == 3
- ), "Expecting last dimension for 3-channels RGB (b, h, w, c)."
-
- if self.tensor_in_tensor_out:
- frames = frames.numpy().astype(np.uint8)
-
- num_frames = frames.shape[0]
-
- ops = num_frames * [self.get_random_ops()]
- apply_or_not = num_frames * [np.random.random(size=self.N) > self.p]
-
- frames = torch.stack(
- list(map(self._aug, frames, ops, apply_or_not)), dim=0
- ).float()
-
- return frames
-
- def _aug(self, img, ops, apply_or_not):
- for i, (name, level) in enumerate(ops):
- if not apply_or_not[i]:
- continue
- args = arg_dict[name](level)
- img = func_dict[name](img, *args)
- return torch.from_numpy(img)
-
-
-if __name__ == "__main__":
- a = RandomAugment()
- img = np.random.randn(32, 32, 3)
- a(img)
diff --git a/spaces/manan/fruit-classifier/app.py b/spaces/manan/fruit-classifier/app.py
deleted file mode 100644
index b4db7a6fe18c61003cabb12aa3cd9a6494f89e17..0000000000000000000000000000000000000000
--- a/spaces/manan/fruit-classifier/app.py
+++ /dev/null
@@ -1,30 +0,0 @@
-import gradio as gr
-from model import predict_fruit_type
-
-def predict(img):
- return predict_fruit_type(img)
-
-
-interface_options = {
- "title": "Fruit Classifier Demo",
- "description": "Yoooo this is my college assignment. A VGG-16 model that predicts the fruit type.",
- # "interpretation": "default",
- "layout": "horizontal",
- "examples": [
- "./blue_berry_1.jpg", "./avacado_1.jpg",
- "./dragon_fruit_1.jpg", './cranberry_1.jpg',
- './jackfruit_1.jpg', './muskmelon_1.jpg',
- './pineapple_1.jpg', './banana_1.jpg',
- './orange_1.jpg', './watermelon_1.jpg',
- ],
- "allow_flagging": "never",
-}
-
-demo = gr.Interface(
- fn=predict,
- inputs=gr.inputs.Image(shape=(480, 480)),
- outputs=gr.outputs.Textbox(type="auto", label=None),
- **interface_options,
-)
-
-demo.launch(share=False)
\ No newline at end of file
diff --git a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/util/visualizer.py b/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/util/visualizer.py
deleted file mode 100644
index 2cc519b52e9e15f5891ac3f4dcab620793794322..0000000000000000000000000000000000000000
--- a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/util/visualizer.py
+++ /dev/null
@@ -1,134 +0,0 @@
-# Copyright (c) Microsoft Corporation.
-# Licensed under the MIT License.
-
-import os
-import ntpath
-import time
-from . import util
-import scipy.misc
-
-try:
- from StringIO import StringIO # Python 2.7
-except ImportError:
- from io import BytesIO # Python 3.x
-import torchvision.utils as vutils
-from tensorboardX import SummaryWriter
-import torch
-import numpy as np
-
-
-class Visualizer:
- def __init__(self, opt):
- self.opt = opt
- self.tf_log = opt.isTrain and opt.tf_log
-
- self.tensorboard_log = opt.tensorboard_log
-
- self.win_size = opt.display_winsize
- self.name = opt.name
- if self.tensorboard_log:
-
- if self.opt.isTrain:
- self.log_dir = os.path.join(opt.checkpoints_dir, opt.name, "logs")
- if not os.path.exists(self.log_dir):
- os.makedirs(self.log_dir)
- self.writer = SummaryWriter(log_dir=self.log_dir)
- else:
- print("hi :)")
- self.log_dir = os.path.join(opt.checkpoints_dir, opt.name, opt.results_dir)
- if not os.path.exists(self.log_dir):
- os.makedirs(self.log_dir)
-
- if opt.isTrain:
- self.log_name = os.path.join(opt.checkpoints_dir, opt.name, "loss_log.txt")
- with open(self.log_name, "a") as log_file:
- now = time.strftime("%c")
- log_file.write("================ Training Loss (%s) ================\n" % now)
-
- # |visuals|: dictionary of images to display or save
- def display_current_results(self, visuals, epoch, step):
-
- all_tensor = []
- if self.tensorboard_log:
-
- for key, tensor in visuals.items():
- all_tensor.append((tensor.data.cpu() + 1) / 2)
-
- output = torch.cat(all_tensor, 0)
- img_grid = vutils.make_grid(output, nrow=self.opt.batchSize, padding=0, normalize=False)
-
- if self.opt.isTrain:
- self.writer.add_image("Face_SPADE/training_samples", img_grid, step)
- else:
- vutils.save_image(
- output,
- os.path.join(self.log_dir, str(step) + ".png"),
- nrow=self.opt.batchSize,
- padding=0,
- normalize=False,
- )
-
- # errors: dictionary of error labels and values
- def plot_current_errors(self, errors, step):
- if self.tf_log:
- for tag, value in errors.items():
- value = value.mean().float()
- summary = self.tf.Summary(value=[self.tf.Summary.Value(tag=tag, simple_value=value)])
- self.writer.add_summary(summary, step)
-
- if self.tensorboard_log:
-
- self.writer.add_scalar("Loss/GAN_Feat", errors["GAN_Feat"].mean().float(), step)
- self.writer.add_scalar("Loss/VGG", errors["VGG"].mean().float(), step)
- self.writer.add_scalars(
- "Loss/GAN",
- {
- "G": errors["GAN"].mean().float(),
- "D": (errors["D_Fake"].mean().float() + errors["D_real"].mean().float()) / 2,
- },
- step,
- )
-
- # errors: same format as |errors| of plotCurrentErrors
- def print_current_errors(self, epoch, i, errors, t):
- message = "(epoch: %d, iters: %d, time: %.3f) " % (epoch, i, t)
- for k, v in errors.items():
- v = v.mean().float()
- message += "%s: %.3f " % (k, v)
-
- print(message)
- with open(self.log_name, "a") as log_file:
- log_file.write("%s\n" % message)
-
- def convert_visuals_to_numpy(self, visuals):
- for key, t in visuals.items():
- tile = self.opt.batchSize > 8
- if "input_label" == key:
- t = util.tensor2label(t, self.opt.label_nc + 2, tile=tile) ## B*H*W*C 0-255 numpy
- else:
- t = util.tensor2im(t, tile=tile)
- visuals[key] = t
- return visuals
-
- # save image to the disk
- def save_images(self, webpage, visuals, image_path):
- visuals = self.convert_visuals_to_numpy(visuals)
-
- image_dir = webpage.get_image_dir()
- short_path = ntpath.basename(image_path[0])
- name = os.path.splitext(short_path)[0]
-
- webpage.add_header(name)
- ims = []
- txts = []
- links = []
-
- for label, image_numpy in visuals.items():
- image_name = os.path.join(label, "%s.png" % (name))
- save_path = os.path.join(image_dir, image_name)
- util.save_image(image_numpy, save_path, create_dir=True)
-
- ims.append(image_name)
- txts.append(label)
- links.append(image_name)
- webpage.add_images(ims, txts, links, width=self.win_size)
diff --git a/spaces/marlenezw/audio-driven-animations/MakeItTalk/src/approaches/train_noautovc.py b/spaces/marlenezw/audio-driven-animations/MakeItTalk/src/approaches/train_noautovc.py
deleted file mode 100644
index 6be3ff06958baa1559a4ad56e7ee12e5816b87e8..0000000000000000000000000000000000000000
--- a/spaces/marlenezw/audio-driven-animations/MakeItTalk/src/approaches/train_noautovc.py
+++ /dev/null
@@ -1,470 +0,0 @@
-"""
- # Copyright 2020 Adobe
- # All Rights Reserved.
-
- # NOTICE: Adobe permits you to use, modify, and distribute this file in
- # accordance with the terms of the Adobe license agreement accompanying
- # it.
-
-"""
-
-import os
-import torch.nn.parallel
-import torch.optim as optim
-import torch.utils.data
-import time
-import torch.nn as nn
-from src.dataset.audio2landmark import Audio2landmark_Dataset
-from src.models import Audio2landmark_speaker_aware
-from util.utils import Record, get_n_params
-from tensorboardX import SummaryWriter
-from util.icp import icp
-import numpy as np
-from scipy.spatial.transform import Rotation as R
-from scipy.signal import savgol_filter
-
-device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-
-class Speaker_aware_branch():
-
- def __init__(self, opt_parser):
- print('Run on device:', device)
-
- # Step 1 : load opt_parser
- for key in vars(opt_parser).keys():
- print(key, ':', vars(opt_parser)[key])
-
- self.opt_parser = opt_parser
- self.dump_dir = opt_parser.dump_dir
- self.std_face_id = np.loadtxt('dataset/utils/STD_FACE_LANDMARKS.txt')
- self.std_face_id = self.std_face_id.reshape(1, 204)
- self.std_face_id = torch.tensor(self.std_face_id, requires_grad=False, dtype=torch.float).to(device)
-
- # Step 2 : load data
- self.train_data = Audio2landmark_Dataset(dump_dir=self.dump_dir, dump_name=opt_parser.dump_file_name,
- num_window_frames=opt_parser.num_window_frames,
- num_window_step=opt_parser.num_window_step,
- status='train', noautovc='noautovc_')
- self.train_dataloader = torch.utils.data.DataLoader(self.train_data, batch_size=opt_parser.batch_size,
- shuffle=False, num_workers=0,
- collate_fn=self.train_data.my_collate_in_segments_noemb)
-
- print('Train num videos: {}'.format(len(self.train_data)))
- self.eval_data = Audio2landmark_Dataset(dump_dir=self.dump_dir, dump_name=opt_parser.dump_file_name,
- num_window_frames=opt_parser.num_window_frames,
- num_window_step=opt_parser.num_window_step,
- status='val', noautovc='noautovc_')
- self.eval_dataloader = torch.utils.data.DataLoader(self.eval_data, batch_size=opt_parser.batch_size,
- shuffle=False, num_workers=0,
- collate_fn=self.eval_data.my_collate_in_segments_noemb)
- print('EVAL num videos: {}'.format(len(self.eval_data)))
-
- # Step 3: Load model
- self.G = Audio2landmark_speaker_aware(
- spk_emb_enc_size=opt_parser.spk_emb_enc_size,
- transformer_d_model=opt_parser.transformer_d_model,
- N=opt_parser.transformer_N, heads=opt_parser.transformer_heads,
- pos_dim=opt_parser.pos_dim,
- use_prior_net=True, is_noautovc=True)
- # self.G.apply(weight_init)
- for p in self.G.parameters():
- if p.dim() > 1:
- nn.init.xavier_uniform_(p)
- print('G: Running on {}, total num params = {:.2f}M'.format(device, get_n_params(self.G)/1.0e6))
-
- # self.D_L = Audio2landmark_pos_DL()
- # self.D_L.apply(weight_init)
- # print('D_L: Running on {}, total num params = {:.2f}M'.format(device, get_n_params(self.D_L)/1.0e6))
- #
- # self.D_T = Audio2landmark_pos_DT(spk_emb_enc_size=opt_parser.spk_emb_enc_size,
- # transformer_d_model=opt_parser.transformer_d_model,
- # N=opt_parser.transformer_N, heads=opt_parser.transformer_heads)
- # for p in self.D_T.parameters():
- # if p.dim() > 1:
- # nn.init.xavier_uniform_(p)
- # print('D_T: Running on {}, total num params = {:.2f}M'.format(device, get_n_params(self.D_T) / 1.0e6))
-
- if (opt_parser.load_a2l_G_name.split('/')[-1] != ''):
- model_dict = self.G.state_dict()
- ckpt = torch.load(opt_parser.load_a2l_G_name)
- pretrained_dict = {k: v for k, v in ckpt['G'].items()
- if 'out.' not in k and 'out_pos_1.' not in k}
- model_dict.update(pretrained_dict)
-
- self.G.load_state_dict(model_dict)
- print('======== LOAD PRETRAINED SPEAKER AWARE MODEL {} ========='.format(opt_parser.load_a2l_G_name))
- self.G.to(device)
-
- self.loss_mse = torch.nn.MSELoss()
- self.loss_bce = torch.nn.BCELoss()
-
- self.opt_G = optim.Adam(self.G.parameters(), lr=opt_parser.lr, weight_decay=opt_parser.reg_lr)
-
- if (opt_parser.write):
- self.writer = SummaryWriter(log_dir=os.path.join(opt_parser.log_dir, opt_parser.name))
- self.writer_count = {'TRAIN_epoch': 0, 'TRAIN_batch': 0, 'TRAIN_in_batch': 0,
- 'EVAL_epoch': 0, 'EVAL_batch': 0, 'EVAL_in_batch': 0}
-
- self.t_shape_idx = (27, 28, 29, 30, 33, 36, 39, 42, 45)
- self.anchor_t_shape = np.loadtxt('dataset/utils//STD_FACE_LANDMARKS.txt')
- self.anchor_t_shape = self.anchor_t_shape[self.t_shape_idx, :]
-
- def __train_speaker_aware__(self, fls, aus, face_id, is_training=True):
-
- # fls_gt = fls[:, 0, :].detach().clone().requires_grad_(False)
- reg_fls_gt = fls[:, 0, :].detach().clone().requires_grad_(False)
-
- if (face_id.shape[0] == 1):
- face_id = face_id.repeat(aus.shape[0], 1)
- face_id = face_id.requires_grad_(False)
- content_branch_face_id = face_id.detach()
-
- ''' ======================================================
- Generator G
- ====================================================== '''
-
- for name, p in self.G.named_parameters():
- p.requires_grad = True
-
- fl_dis_pred, pos_pred, _, spk_encode = self.G(aus, face_id)
-
- # reg fls loss
- loss_reg_fls = torch.nn.functional.l1_loss(fl_dis_pred+face_id[0:1].detach(), reg_fls_gt)
-
- # reg fls laplacian
- ''' use laplacian smooth loss '''
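- # n1/n2 index the two contour neighbours of each of the 68 landmarks, so
- # L_V = V - 0.5 * (V[n1] + V[n2]) is a discrete Laplacian along the facial
- # contours; matching it to the ground-truth Laplacian preserves local shape detail.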
- loss_laplacian = 0.
- if (self.opt_parser.lambda_laplacian_smooth_loss > 0.0):
- n1 = [1] + list(range(0, 16)) + [18] + list(range(17, 21)) + [23] + list(range(22, 26)) + \
- [28] + list(range(27, 35)) + [41] + list(range(36, 41)) + [47] + list(range(42, 47)) + \
- [59] + list(range(48, 59)) + [67] + list(range(60, 67))
- n2 = list(range(1, 17)) + [15] + list(range(18, 22)) + [20] + list(range(23, 27)) + [25] + \
- list(range(28, 36)) + [34] + list(range(37, 42)) + [36] + list(range(43, 48)) + [42] + \
- list(range(49, 60)) + [48] + list(range(61, 68)) + [60]
- V = (fl_dis_pred + face_id[0:1].detach()).view(-1, 68, 3)
- L_V = V - 0.5 * (V[:, n1, :] + V[:, n2, :])
- G = reg_fls_gt.view(-1, 68, 3)
- L_G = G - 0.5 * (G[:, n1, :] + G[:, n2, :])
- loss_laplacian = torch.nn.functional.l1_loss(L_V, L_G)
-
- loss = loss_reg_fls + loss_laplacian * self.opt_parser.lambda_laplacian_smooth_loss
- # loss = loss_pos
-
- if(is_training):
- self.opt_G.zero_grad()
- loss.backward()
- self.opt_G.step()
-
- # reconstruct face through pos
- fl_dis_pred = fl_dis_pred + face_id[0:1].detach()
-
- return fl_dis_pred, pos_pred, face_id[0:1, :], (loss, loss_reg_fls, loss_laplacian)
-
- def __train_pass__(self, epoch, log_loss, is_training=True):
- st_epoch = time.time()
-
- # Step 1: init setup
- if (is_training):
- self.G.train()
- data = self.train_data
- dataloader = self.train_dataloader
- status = 'TRAIN'
- else:
- self.G.eval()
- data = self.eval_data
- dataloader = self.eval_dataloader
- status = 'EVAL'
-
- # random_clip_index = np.random.randint(0, len(dataloader)-1, 4)
- # random_clip_index = np.random.randint(0, 64, 4)
- random_clip_index = list(range(len(dataloader)))
- # print('random_clip_index', random_clip_index)
- # Step 2: train for each batch
- for i, batch in enumerate(dataloader):
-
- # if(i>=512):
- # break
-
- st = time.time()
- global_id, video_name = data[i][0][1][0], data[i][0][1][1][:-4]
-
- # Step 2.1: load batch data from dataloader (in segments)
- inputs_fl, inputs_au = batch
-
- if (is_training):
- rand_start = np.random.randint(0, inputs_fl.shape[0] // 5, 1).reshape(-1)
- inputs_fl = inputs_fl[rand_start[0]:]
- inputs_au = inputs_au[rand_start[0]:]
-
- inputs_fl, inputs_au = inputs_fl.to(device), inputs_au.to(device)
- std_fls_list, fls_pred_face_id_list, fls_pred_pos_list = [], [], []
- seg_bs = self.opt_parser.segment_batch_size
-
- close_fl_list = inputs_fl[::10, 0, :]
- idx = self.__close_face_lip__(close_fl_list.detach().cpu().numpy())
- input_face_id = close_fl_list[idx:idx + 1, :]
-
- ''' register face '''
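- # ICP-align the selected frame's rigid landmarks (nose bridge and eye corners in
- # t_shape_idx) to the canonical anchor shape so predictions share a common frame.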
- if (self.opt_parser.use_reg_as_std):
- landmarks = input_face_id.detach().cpu().numpy().reshape(68, 3)
- frame_t_shape = landmarks[self.t_shape_idx, :]
- T, distance, itr = icp(frame_t_shape, self.anchor_t_shape)
- landmarks = np.hstack((landmarks, np.ones((68, 1))))
- registered_landmarks = np.dot(T, landmarks.T).T
- input_face_id = torch.tensor(registered_landmarks[:, 0:3].reshape(1, 204), requires_grad=False,
- dtype=torch.float).to(device)
-
- for j in range(0, inputs_fl.shape[0], seg_bs):
- # Step 3.1: load segments
- inputs_fl_segments = inputs_fl[j: j + seg_bs]
- inputs_au_segments = inputs_au[j: j + seg_bs]
-
-
- if(inputs_fl_segments.shape[0] < 10):
- continue
-
- if(self.opt_parser.test_emb):
- input_face_id = self.std_face_id
-
- fl_dis_pred_pos, pos_pred, input_face_id, (loss, loss_g, loss_laplacian) = \
- self.__train_speaker_aware__(inputs_fl_segments, inputs_au_segments, input_face_id,
- is_training=is_training)
-
- fl_dis_pred_pos = fl_dis_pred_pos.data.cpu().numpy()
- fl_std = inputs_fl_segments[:, 0, :].data.cpu().numpy()
- ''' solve inverse lip '''
- if(not is_training):
- fl_dis_pred_pos = self.__solve_inverse_lip2__(fl_dis_pred_pos)
-
- fls_pred_pos_list += [fl_dis_pred_pos.reshape((-1, 204))]
- std_fls_list += [fl_std.reshape((-1, 204))]
-
- for key in log_loss.keys():
- if (key not in locals().keys()):
- continue
- if (type(locals()[key]) == float):
- log_loss[key].add(locals()[key])
- else:
- log_loss[key].add(locals()[key].data.cpu().numpy())
-
- if (epoch % 5 == 0): # and i in [0, 200, 400, 600, 800, 1000]):
- def save_fls_av(fake_fls_list, postfix='', ifsmooth=True):
- fake_fls_np = np.concatenate(fake_fls_list)
- filename = 'fake_fls_{}_{}_{}.txt'.format(epoch, video_name, postfix)
- np.savetxt(
- os.path.join(self.opt_parser.dump_dir, '../nn_result', self.opt_parser.name, filename),
- fake_fls_np, fmt='%.6f')
- # audio_filename = '{:05d}_{}_audio.wav'.format(global_id, video_name)
- # from util.vis import Vis_old
- # Vis_old(run_name=self.opt_parser.name, pred_fl_filename=filename, audio_filename=audio_filename,
- # fps=62.5, av_name='e{:04d}_{}_{}'.format(epoch, i, postfix),
- # postfix=postfix, root_dir=self.opt_parser.root_dir, ifsmooth=ifsmooth)
-
- if (True):
- if (self.opt_parser.show_animation):
- print('show animation ....')
- save_fls_av(fls_pred_pos_list, 'pred', ifsmooth=True)
- save_fls_av(std_fls_list, 'std', ifsmooth=False)
-
- if (self.opt_parser.verbose <= 1):
- print('{} Epoch: #{} batch #{}/{}'.format(status, epoch, i, len(dataloader)), end=': ')
- for key in log_loss.keys():
- print(key, '{:.5f}'.format(log_loss[key].per('batch')), end=', ')
- print('')
- self.__tensorboard_write__(status, log_loss, 'batch')
-
- if (self.opt_parser.verbose <= 2):
- print('==========================================================')
- print('{} Epoch: #{}'.format(status, epoch), end=':')
- for key in log_loss.keys():
- print(key, '{:.4f}'.format(log_loss[key].per('epoch')), end=', ')
- print('Epoch time usage: {:.2f} sec\n==========================================================\n'.format(time.time() - st_epoch))
- self.__save_model__(save_type='last_epoch', epoch=epoch)
- if(epoch % 5 == 0):
- self.__save_model__(save_type='e_{}'.format(epoch), epoch=epoch)
- self.__tensorboard_write__(status, log_loss, 'epoch')
-
-
- def test_end2end(self, jpg_shape):
-
- self.G.eval()
- self.C.eval()
- data = self.eval_data
- dataloader = self.eval_dataloader
-
- for i, batch in enumerate(dataloader):
-
- global_id, video_name = data[i][0][1][0], data[i][0][1][1][:-4]
-
- inputs_fl, inputs_au, inputs_emb, inputs_reg_fl, inputs_rot_tran, inputs_rot_quat = batch
-
- for key in ['irx71tYyI-Q', 'J-NPsvtQ8lE', 'Z7WRt--g-h4', 'E0zgrhQ0QDw', 'bXpavyiCu10', 'W6uRNCJmdtI', 'sxCbrYjBsGA', 'wAAMEC1OsRc', '_ldiVrXgZKc', '48uYS3bHIA8', 'E_kmpT-EfOg']:
- emb_val = self.test_embs[key]
- inputs_emb = np.tile(emb_val, (inputs_emb.shape[0], 1))
- inputs_emb = torch.tensor(inputs_emb, dtype=torch.float, requires_grad=False)
-
- # this_emb = key
- # inputs_emb = torch.zeros(size=(inputs_au.shape[0], len(self.test_embs_dic.keys())))
- # inputs_emb[:, self.test_embs_dic[this_emb]] = 1.
-
- inputs_fl, inputs_au, inputs_emb = inputs_fl.to(device), inputs_au.to(device), inputs_emb.to(device)
- inputs_reg_fl, inputs_rot_tran, inputs_rot_quat = inputs_reg_fl.to(device), inputs_rot_tran.to(device), inputs_rot_quat.to(device)
-
- std_fls_list, fls_pred_face_id_list, fls_pred_pos_list = [], [], []
- seg_bs = self.opt_parser.segment_batch_size
-
- # input_face_id = self.std_face_id
- input_face_id = torch.tensor(jpg_shape.reshape(1, 204), requires_grad=False, dtype=torch.float).to(device)
-
- ''' register face '''
- if (True):
- landmarks = input_face_id.detach().cpu().numpy().reshape(68, 3)
- frame_t_shape = landmarks[self.t_shape_idx, :]
- T, distance, itr = icp(frame_t_shape, self.anchor_t_shape)
- landmarks = np.hstack((landmarks, np.ones((68, 1))))
- registered_landmarks = np.dot(T, landmarks.T).T
- input_face_id = torch.tensor(registered_landmarks[:, 0:3].reshape(1, 204), requires_grad=False,
- dtype=torch.float).to(device)
-
- for j in range(0, inputs_fl.shape[0], seg_bs):
- # Step 3.1: load segments
- inputs_fl_segments = inputs_fl[j: j + seg_bs]
- inputs_au_segments = inputs_au[j: j + seg_bs]
- inputs_emb_segments = inputs_emb[j: j + seg_bs]
- inputs_reg_fl_segments = inputs_reg_fl[j: j + seg_bs]
- inputs_rot_tran_segments = inputs_rot_tran[j: j + seg_bs]
- inputs_rot_quat_segments = inputs_rot_quat[j: j + seg_bs]
-
- if(inputs_fl_segments.shape[0] < 10):
- continue
-
- fl_dis_pred_pos, pos_pred, input_face_id, (loss, loss_reg_fls, loss_laplacian, loss_pos) = \
- self.__train_speaker_aware__(inputs_fl_segments, inputs_au_segments, inputs_emb_segments,
- input_face_id, inputs_reg_fl_segments, inputs_rot_tran_segments,
- inputs_rot_quat_segments,
- is_training=False, use_residual=True)
-
- fl_dis_pred_pos = fl_dis_pred_pos.data.cpu().numpy()
- pos_pred = pos_pred.data.cpu().numpy()
- fl_std = inputs_reg_fl_segments[:, 0, :].data.cpu().numpy()
- pos_std = inputs_rot_tran_segments[:, 0, :].data.cpu().numpy()
-
- ''' solve inverse lip '''
- fl_dis_pred_pos = self.__solve_inverse_lip2__(fl_dis_pred_pos)
-
- fl_dis_pred_pos = fl_dis_pred_pos.reshape((-1, 68, 3))
- fl_std = fl_std.reshape((-1, 68, 3))
- if(self.opt_parser.pos_dim == 12):
- pos_pred = pos_pred.reshape((-1, 3, 4))
- for k in range(fl_dis_pred_pos.shape[0]):
- fl_dis_pred_pos[k] = np.dot(pos_pred[k, :3, :3].T + np.eye(3),
- (fl_dis_pred_pos[k] - pos_pred[k, :, 3].T).T).T
- pos_std = pos_std.reshape((-1, 3, 4))
- for k in range(fl_std.shape[0]):
- fl_std[k] = np.dot(pos_std[k, :3, :3].T + np.eye(3),
- (fl_std[k] - pos_std[k, :, 3].T).T).T
- else:
- smooth_length = int(min(pos_pred.shape[0] - 1, 27) // 2 * 2 + 1)
- pos_pred = savgol_filter(pos_pred, smooth_length, 3, axis=0)
- quat = pos_pred[:, :4]
- trans = pos_pred[:, 4:]
- for k in range(fl_dis_pred_pos.shape[0]):
- fl_dis_pred_pos[k] = np.dot(R.from_quat(quat[k]).as_matrix().T,
- (fl_dis_pred_pos[k] - trans[k:k+1]).T).T
- pos_std = pos_std.reshape((-1, 3, 4))
- for k in range(fl_std.shape[0]):
- fl_std[k] = np.dot(pos_std[k, :3, :3].T + np.eye(3),
- (fl_std[k] - pos_std[k, :, 3].T).T).T
-
- fls_pred_pos_list += [fl_dis_pred_pos.reshape((-1, 204))]
- std_fls_list += [fl_std.reshape((-1, 204))]
-
- fake_fls_np = np.concatenate(fls_pred_pos_list)
- filename = 'pred_fls_{}_{}.txt'.format(video_name.split('/')[-1], key)
- np.savetxt(os.path.join('MakeItTalk/examples', filename), fake_fls_np, fmt='%.6f')
-
-
- def __close_face_lip__(self, fl):
- facelandmark = fl.reshape(-1, 68, 3)
- from util.geo_math import area_of_polygon
- min_area_lip, idx = 999, 0
- for i, fls in enumerate(facelandmark):
- area_of_mouth = area_of_polygon(fls[list(range(60, 68)), 0:2])
- if (area_of_mouth < min_area_lip):
- min_area_lip = area_of_mouth
- idx = i
- return idx
-
- def train(self):
- train_loss = {key: Record(['epoch', 'batch']) for key in
- ['loss','loss_laplacian', 'loss_reg_fls', 'loss_pos']}
-
- eval_loss = {key: Record(['epoch', 'batch']) for key in
- ['loss','loss_laplacian', 'loss_reg_fls', 'loss_pos']}
- for epoch in range(self.opt_parser.nepoch):
- self.__train_pass__(epoch=epoch, log_loss=train_loss, is_training=True)
- # with torch.no_grad():
- # self.__train_pass__(epoch=epoch, log_loss=eval_loss, is_training=False)
-
- def test(self):
- train_loss = {key: Record(['epoch', 'batch', 'in_batch']) for key in
- ['loss', 'loss_g', 'loss_laplacian']}
- eval_loss = {key: Record(['epoch', 'batch', 'in_batch']) for key in
- ['loss_pos', 'loss_g', 'loss_laplacian']}
- with torch.no_grad():
- self.__train_pass__(epoch=0, log_loss=eval_loss, is_training=False)
-
- def __tensorboard_write__(self, status, loss, t):
- if (self.opt_parser.write):
- for key in loss.keys():
- self.writer.add_scalar('{}_loss_{}_{}'.format(status, t, key), loss[key].per(t),
- self.writer_count[status + '_' + t])
- loss[key].clean(t)
- self.writer_count[status + '_' + t] += 1
- else:
- for key in loss.keys():
- loss[key].clean(t)
-
- def __save_model__(self, save_type, epoch):
- if (self.opt_parser.write):
- torch.save({
- 'G': self.G.state_dict(),
- 'epoch': epoch
- }, os.path.join(self.opt_parser.ckpt_dir, 'ckpt_{}.pth'.format(save_type)))
-
- def adjust_learning_rate(self, optimizer, epoch):
- """Sets the learning rate to the initial LR decayed by 10 every 30 epochs"""
- lr = self.opt_parser.lr * (0.3 ** (np.max((0, epoch + 0)) // 50))
- lr = np.max((lr, 1e-5))
- print('###### ==== > Adjust learning rate to ', lr)
- for param_group in optimizer.param_groups:
- param_group['lr'] = lr
- # print('lr:', param_group['lr'])
-
- def __solve_inverse_lip2__(self, fl_dis_pred_pos_numpy):
- for j in range(fl_dis_pred_pos_numpy.shape[0]):
- # init_face = self.std_face_id.detach().cpu().numpy()
- from util.geo_math import area_of_signed_polygon
- fls = fl_dis_pred_pos_numpy[j].reshape(68, 3)
- area_of_mouth = area_of_signed_polygon(fls[list(range(60, 68)), 0:2])
- if (area_of_mouth < 0):
- fl_dis_pred_pos_numpy[j, 65 * 3:66 * 3] = 0.5 *(fl_dis_pred_pos_numpy[j, 63 * 3:64 * 3] + fl_dis_pred_pos_numpy[j, 65 * 3:66 * 3])
- fl_dis_pred_pos_numpy[j, 63 * 3:64 * 3] = fl_dis_pred_pos_numpy[j, 65 * 3:66 * 3]
- fl_dis_pred_pos_numpy[j, 66 * 3:67 * 3] = 0.5 *(fl_dis_pred_pos_numpy[j, 62 * 3:63 * 3] + fl_dis_pred_pos_numpy[j, 66 * 3:67 * 3])
- fl_dis_pred_pos_numpy[j, 62 * 3:63 * 3] = fl_dis_pred_pos_numpy[j, 66 * 3:67 * 3]
- fl_dis_pred_pos_numpy[j, 67 * 3:68 * 3] = 0.5 *(fl_dis_pred_pos_numpy[j, 61 * 3:62 * 3] + fl_dis_pred_pos_numpy[j, 67 * 3:68 * 3])
- fl_dis_pred_pos_numpy[j, 61 * 3:62 * 3] = fl_dis_pred_pos_numpy[j, 67 * 3:68 * 3]
- p = max([j-1, 0])
- fl_dis_pred_pos_numpy[j, 55 * 3+1:59 * 3+1:3] = fl_dis_pred_pos_numpy[j, 64 * 3+1:68 * 3+1:3] \
- + fl_dis_pred_pos_numpy[p, 55 * 3+1:59 * 3+1:3] \
- - fl_dis_pred_pos_numpy[p, 64 * 3+1:68 * 3+1:3]
- fl_dis_pred_pos_numpy[j, 59 * 3+1:60 * 3+1:3] = fl_dis_pred_pos_numpy[j, 60 * 3+1:61 * 3+1:3] \
- + fl_dis_pred_pos_numpy[p, 59 * 3+1:60 * 3+1:3] \
- - fl_dis_pred_pos_numpy[p, 60 * 3+1:61 * 3+1:3]
- fl_dis_pred_pos_numpy[j, 49 * 3+1:54 * 3+1:3] = fl_dis_pred_pos_numpy[j, 60 * 3+1:65 * 3+1:3] \
- + fl_dis_pred_pos_numpy[p, 49 * 3+1:54 * 3+1:3] \
- - fl_dis_pred_pos_numpy[p, 60 * 3+1:65 * 3+1:3]
- return fl_dis_pred_pos_numpy
-
-
-
diff --git a/spaces/matthoffner/AudioCraft_Plus/audiocraft/modules/chroma.py b/spaces/matthoffner/AudioCraft_Plus/audiocraft/modules/chroma.py
deleted file mode 100644
index e84fb66b4a4aaefb0b3ccac8a9a44c3b20e48f61..0000000000000000000000000000000000000000
--- a/spaces/matthoffner/AudioCraft_Plus/audiocraft/modules/chroma.py
+++ /dev/null
@@ -1,66 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-import typing as tp
-
-from einops import rearrange
-from librosa import filters
-import torch
-from torch import nn
-import torch.nn.functional as F
-import torchaudio
-
-
-class ChromaExtractor(nn.Module):
- """Chroma extraction and quantization.
-
- Args:
- sample_rate (int): Sample rate for the chroma extraction.
- n_chroma (int): Number of chroma bins for the chroma extraction.
- radix2_exp (int): Size of stft window for the chroma extraction (power of 2, e.g. 12 -> 2^12).
- nfft (int, optional): Number of FFT.
- winlen (int, optional): Window length.
- winhop (int, optional): Window hop size.
- argmax (bool, optional): Whether to use argmax. Defaults to False.
- norm (float, optional): Norm for chroma normalization. Defaults to inf.
- """
- def __init__(self, sample_rate: int, n_chroma: int = 12, radix2_exp: int = 12, nfft: tp.Optional[int] = None,
- winlen: tp.Optional[int] = None, winhop: tp.Optional[int] = None, argmax: bool = False,
- norm: float = torch.inf):
- super().__init__()
- self.winlen = winlen or 2 ** radix2_exp
- self.nfft = nfft or self.winlen
- self.winhop = winhop or (self.winlen // 4)
- self.sample_rate = sample_rate
- self.n_chroma = n_chroma
- self.norm = norm
- self.argmax = argmax
- self.register_buffer('fbanks', torch.from_numpy(filters.chroma(sr=sample_rate, n_fft=self.nfft, tuning=0,
- n_chroma=self.n_chroma)), persistent=False)
- self.spec = torchaudio.transforms.Spectrogram(n_fft=self.nfft, win_length=self.winlen,
- hop_length=self.winhop, power=2, center=True,
- pad=0, normalized=True)
-
- def forward(self, wav: torch.Tensor) -> torch.Tensor:
- T = wav.shape[-1]
- # in case we are getting a wav that was dropped out (nullified)
-        # from the conditioner, make sure wav length is no less than nfft
- if T < self.nfft:
- pad = self.nfft - T
- r = 0 if pad % 2 == 0 else 1
- wav = F.pad(wav, (pad // 2, pad // 2 + r), 'constant', 0)
- assert wav.shape[-1] == self.nfft, f"expected len {self.nfft} but got {wav.shape[-1]}"
-
- spec = self.spec(wav).squeeze(1)
- raw_chroma = torch.einsum('cf,...ft->...ct', self.fbanks, spec)
- norm_chroma = torch.nn.functional.normalize(raw_chroma, p=self.norm, dim=-2, eps=1e-6)
- norm_chroma = rearrange(norm_chroma, 'b d t -> b t d')
-
- if self.argmax:
- idx = norm_chroma.argmax(-1, keepdim=True)
- norm_chroma[:] = 0
- norm_chroma.scatter_(dim=-1, index=idx, value=1)
-
- return norm_chroma
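For orientation, here is a minimal usage sketch for `ChromaExtractor` (not part of the original file); the sample rate and the random waveform are placeholders, and the class defined above is assumed to be in scope:

```python
import torch

sample_rate = 32_000
extractor = ChromaExtractor(sample_rate=sample_rate, n_chroma=12, radix2_exp=12)

wav = torch.randn(1, 1, sample_rate)   # (batch, channels, samples) dummy waveform, one second long
chroma = extractor(wav)                # -> (batch, frames, n_chroma)
print(chroma.shape)
```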
diff --git a/spaces/matthoffner/chatbot/components/Buttons/SidebarActionButton/index.ts b/spaces/matthoffner/chatbot/components/Buttons/SidebarActionButton/index.ts
deleted file mode 100644
index 1fce00e46cd649fea08ed9e7c6c136ac86fe1528..0000000000000000000000000000000000000000
--- a/spaces/matthoffner/chatbot/components/Buttons/SidebarActionButton/index.ts
+++ /dev/null
@@ -1 +0,0 @@
-export { default } from './SidebarActionButton';
diff --git a/spaces/matthoffner/open-codetree/components/Modals/TemplateModal.tsx b/spaces/matthoffner/open-codetree/components/Modals/TemplateModal.tsx
deleted file mode 100644
index ba3248be01c4384ffdb22e1e9eebc9009bc5da65..0000000000000000000000000000000000000000
--- a/spaces/matthoffner/open-codetree/components/Modals/TemplateModal.tsx
+++ /dev/null
@@ -1,65 +0,0 @@
-import { motion } from "framer-motion";
-import { useTree } from "../../hooks";
-import { treeTemplates } from "../../constants";
-import { useAppSelector } from "../../store/hook";
-import { theme_state } from "../../store/features/themeSlice";
-import { compiler_state } from "../../store/features/compilerSlice";
-import { TemplateSelectionSkeleton } from "../Skeleton/TemplateSelectionSkeleton";
-import { modalVariant } from "./config";
-
-const TemplateModal = () => {
- const { theme } = useAppSelector(theme_state);
- const { esbuildStatus } = useAppSelector(compiler_state);
- const { setTree } = useTree();
-
- let arr = [];
-
- for (const item of Object.entries(treeTemplates)) {
- arr.push(item);
- }
-
-  const templates = arr.map((template, key) => (
-    // NOTE: the original JSX markup for each template entry was stripped from
-    // this diff; this placeholder only preserves the mapping structure.
-    <div key={key}>{template[0]}</div>
-  ));
-
-  return (
-    <motion.div
-      variants={modalVariant}
-      onClick={(e) => e.stopPropagation()}
-    >
-      <h2>Templates</h2>
-      <div>
-        {esbuildStatus.isReady ? templates : <TemplateSelectionSkeleton />}
-      </div>
-    </motion.div>
-  );
-};
-
-export default TemplateModal;
diff --git a/spaces/menghanxia/disco/inference.py b/spaces/menghanxia/disco/inference.py
deleted file mode 100644
index c69b688a8923b6f5e69fb47e6720a1ff9900281a..0000000000000000000000000000000000000000
--- a/spaces/menghanxia/disco/inference.py
+++ /dev/null
@@ -1,105 +0,0 @@
-import os, glob, sys, logging
-import argparse, datetime, time
-import numpy as np
-import cv2
-from PIL import Image
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from models import model, basic
-from utils import util
-
-
-def setup_model(checkpt_path, device="cuda"):
- #print('--------------', torch.cuda.is_available())
- """Load the model into memory to make running multiple predictions efficient"""
- colorLabeler = basic.ColorLabel(device=device)
- colorizer = model.AnchorColorProb(inChannel=1, outChannel=313, enhanced=True, colorLabeler=colorLabeler)
- colorizer = colorizer.to(device)
- #checkpt_path = "./checkpoints/disco-beta.pth.rar"
- assert os.path.exists(checkpt_path), "No checkpoint found!"
- data_dict = torch.load(checkpt_path, map_location=torch.device('cpu'))
- colorizer.load_state_dict(data_dict['state_dict'])
- colorizer.eval()
- return colorizer, colorLabeler
-
-
-def resize_ab2l(gray_img, lab_imgs, vis=False):
- H, W = gray_img.shape[:2]
- reszied_ab = cv2.resize(lab_imgs[:,:,1:], (W,H), interpolation=cv2.INTER_LINEAR)
- if vis:
- gray_img = cv2.resize(lab_imgs[:,:,:1], (W,H), interpolation=cv2.INTER_LINEAR)
- return np.concatenate((gray_img[:,:,np.newaxis], reszied_ab), axis=2)
- else:
- return np.concatenate((gray_img, reszied_ab), axis=2)
-
-def prepare_data(rgb_img, target_res):
- rgb_img = np.array(rgb_img / 255., np.float32)
- lab_img = cv2.cvtColor(rgb_img, cv2.COLOR_RGB2LAB)
- org_grays = (lab_img[:,:,[0]]-50.) / 50.
- lab_img = cv2.resize(lab_img, target_res, interpolation=cv2.INTER_LINEAR)
-
- lab_img = torch.from_numpy(lab_img.transpose((2, 0, 1)))
- gray_img = (lab_img[0:1,:,:]-50.) / 50.
- ab_chans = lab_img[1:3,:,:] / 110.
- input_grays = gray_img.unsqueeze(0)
- input_colors = ab_chans.unsqueeze(0)
- return input_grays, input_colors, org_grays
-
-
-def colorize_grayscale(colorizer, color_class, rgb_img, hint_img, n_anchors, is_high_res, is_editable, device="cuda"):
- n_anchors = int(n_anchors)
- n_anchors = max(n_anchors, 3)
- n_anchors = min(n_anchors, 14)
- target_res = (512,512) if is_high_res else (256,256)
- input_grays, input_colors, org_grays = prepare_data(rgb_img, target_res)
- input_grays = input_grays.to(device)
- input_colors = input_colors.to(device)
-
- if is_editable:
- print('>>>:editable mode')
- sampled_T = -1
- _, input_colors, _ = prepare_data(hint_img, target_res)
- input_colors = input_colors.to(device)
- pal_logit, ref_logit, enhanced_ab, affinity_map, spix_colors, hint_mask = colorizer(input_grays, \
- input_colors, n_anchors, sampled_T)
- else:
- print('>>>:automatic mode')
- sampled_T = 0
- pal_logit, ref_logit, enhanced_ab, affinity_map, spix_colors, hint_mask = colorizer(input_grays, \
- input_colors, n_anchors, sampled_T)
-
- pred_labs = torch.cat((input_grays,enhanced_ab), dim=1)
- lab_imgs = basic.tensor2array(pred_labs).squeeze(axis=0)
- lab_imgs = resize_ab2l(org_grays, lab_imgs)
-
- lab_imgs[:,:,0] = lab_imgs[:,:,0] * 50.0 + 50.0
- lab_imgs[:,:,1:3] = lab_imgs[:,:,1:3] * 110.0
- rgb_output = cv2.cvtColor(lab_imgs[:,:,:], cv2.COLOR_LAB2RGB)
- return (rgb_output*255.0).astype(np.uint8)
-
-
-def predict_anchors(colorizer, color_class, rgb_img, n_anchors, is_high_res, is_editable, device="cuda"):
- n_anchors = int(n_anchors)
- n_anchors = max(n_anchors, 3)
- n_anchors = min(n_anchors, 14)
- target_res = (512,512) if is_high_res else (256,256)
- input_grays, input_colors, org_grays = prepare_data(rgb_img, target_res)
- input_grays = input_grays.to(device)
- input_colors = input_colors.to(device)
-
- sampled_T, sp_size = 0, 16
- pal_logit, ref_logit, enhanced_ab, affinity_map, spix_colors, hint_mask = colorizer(input_grays, \
- input_colors, n_anchors, sampled_T)
- pred_probs = pal_logit
- guided_colors = color_class.decode_ind2ab(ref_logit, T=0)
- guided_colors = basic.upfeat(guided_colors, affinity_map, sp_size, sp_size)
- anchor_masks = basic.upfeat(hint_mask, affinity_map, sp_size, sp_size)
- marked_labs = basic.mark_color_hints(input_grays, guided_colors, anchor_masks, base_ABs=None)
- lab_imgs = basic.tensor2array(marked_labs).squeeze(axis=0)
- lab_imgs = resize_ab2l(org_grays, lab_imgs, vis=True)
-
- lab_imgs[:,:,0] = lab_imgs[:,:,0] * 50.0 + 50.0
- lab_imgs[:,:,1:3] = lab_imgs[:,:,1:3] * 110.0
- rgb_output = cv2.cvtColor(lab_imgs[:,:,:], cv2.COLOR_LAB2RGB)
- return (rgb_output*255.0).astype(np.uint8)
\ No newline at end of file
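A minimal usage sketch for the two entry points above (not part of the original file); the checkpoint path and the input image are placeholders, and a CUDA device is assumed to be available:

```python
import numpy as np
from PIL import Image

# Load the colorizer once, then reuse it for multiple predictions.
colorizer, color_class = setup_model("./checkpoints/disco-beta.pth.rar", device="cuda")

rgb = np.array(Image.open("example.jpg").convert("RGB"))   # hypothetical input image
out = colorize_grayscale(colorizer, color_class, rgb, hint_img=None,
                         n_anchors=8, is_high_res=False, is_editable=False,
                         device="cuda")
Image.fromarray(out).save("colorized.png")
```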
diff --git a/spaces/merve/uncertainty-calibration/server-side/fill-in-the-blank/scatter-plot-colab/spearman-compare/init.js b/spaces/merve/uncertainty-calibration/server-side/fill-in-the-blank/scatter-plot-colab/spearman-compare/init.js
deleted file mode 100644
index ee7c8a4f14939e8d09185fd47b2b43c8e3c37b11..0000000000000000000000000000000000000000
--- a/spaces/merve/uncertainty-calibration/server-side/fill-in-the-blank/scatter-plot-colab/spearman-compare/init.js
+++ /dev/null
@@ -1,200 +0,0 @@
-/* Copyright 2021 Google LLC. All Rights Reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-==============================================================================*/
-
-console.clear()
-
-window.init = function(){
- var initFns = [window.initUtil, window.initScatter, window.initPair]
- if (!initFns.every(d => d)) return
-
- window.util = initUtil()
-
- function parseTidy(csvStr, sentences){
- var tidy = d3.csvParse(csvStr, d => {
- return {
- e0: +d.e0,
- e1: +d.e1,
- i0: +d.i0,
- i1: +d.i1,
- tokenIndex: +d.tokenIndex,
- sentenceIndex: +d.sentenceIndex,
- }
- })
-
- var bySentence = d3.nestBy(tidy, d => d.sentenceIndex)
- bySentence.forEach(sent => {
- sent.sentenceIndex = +sent.key
- sent.s0 = sentences[sent.sentenceIndex].s0
- sent.s1 = sentences[sent.sentenceIndex].s1
- sent.orig = sentences[sent.sentenceIndex].orig
-
- sent.corr = ss.sampleCorrelation(
- sent.map(d => Math.min(d.i0, 300)),
- sent.map(d => Math.min(d.i1, 300))
- )
- // sent.corr = ss.sampleCorrelation(sent.map(d => d.e0), sent.map(d => d.e1))
- })
-
- return bySentence
- }
-
- var bySentenceA = parseTidy(python_data.tidyCSV_A, python_data.sentences_A)
- var bySentenceB = parseTidy(python_data.tidyCSV_B, python_data.sentences_B)
- var bySentence = bySentenceA.map((a, i) => {
- var b = bySentenceB[i]
- var orig = a.orig
- .replace('in 1918, ', '')
- .replace('in texas, ', '')
- .replace('in texas, ', '')
-
- return {a, b, orig}
- })
-
- var sel = d3.select('.container').html(`
-
-
-
-
-
-
-
-
-
- `)
- .st({width: 1400})
- d3.selectAll('.list,.scatter').st({width: 430, display: 'inline-block', verticalAlign: 'top'})
-
- d3.selectAll('.pair-a,.pair-b,.pair-ab').st({width: 400, display: 'inline-block', verticalAlign: 'top'})
-
- function initScatter(bySentence, sel){
- var c = d3.conventions({
- sel: sel.st({width: 350}),
- height: 100,
- width: 300,
- height: 300,
- margin: {left: 40, top: 17, bottom: 60}
- })
-
- var domain = d3.extent(bySentence.map(d => d.a.corr).concat(bySentence.map(d => d.b.corr)))
-
-
- c.x.domain(domain).nice()
- c.y.domain(domain).nice()
- c.xAxis.ticks(5)
- c.yAxis.ticks(5)
- d3.drawAxis(c)
- c.svg.selectAll('.tick').st({display: 'block'})
-
- util.ggPlotBg(c)
- util.addAxisLabel(c,
- python_data.slug_A + ' coefficients (avg ' + util.corrFmt(d3.mean(bySentence, d => d.a.corr)) + ')',
- python_data.slug_B + ' coefficients (avg ' + util.corrFmt(d3.mean(bySentence, d => d.b.corr)) + ')',
- )
-
-
- c.svg.append('path').at({d: `M 0 ${c.height} L ${c.width} 0`, stroke: '#fff', strokeWidth: 2})
-
- c.svg.appendMany('circle.sentence', bySentence)
- .translate(d => [c.x(d.a.corr), c.y(d.b.corr)])
- .at({
- r: 3,
- fill: 'none',
- stroke: '#000'
- })
- .on('mouseover', setSentenceAsPair)
- }
- initScatter(bySentence, d3.select('.scatter'))
-
-
- function initList(bySentence, sel){
- var tableSel = sel
- .st({height: 300 + 17, overflowY: 'scroll', cursor: 'default', position: 'relative'})
- .append('table')
- .st({fontSize: 12})
-
- tableSel.append('tr.header')
- .html(`
- ${python_data.slug_A}
- ${python_data.slug_B}
- template
- `)
-
- var rowSel = tableSel
- .appendMany('tr.sentence', _.sortBy(bySentence, d => d.a.corr))
- .on('mouseover', setSentenceAsPair)
- .st({padding: 2, fontSize: 12})
- .html(d => `
- ${util.corrFmt(d.a.corr)}
- ${util.corrFmt(d.b.corr)}
- ${d.orig.replace('[', '').replace(']', '')}
- `)
-
- }
- initList(bySentence, d3.select('.list'))
-
-
- function setSentenceAsPair(s){
- function drawScatter(type){
- var st = s
- if (type.length == 2){
- st.e0 = s.a.e0.map((e0, i) => e0 - s.a.e1[i])
- st.e1 = s.b.e0.map((e0, i) => e0 - s.b.e1[i])
-
- st.label0 = python_data.slug_A + ' dif'
- st.label1 = python_data.slug_B + ' dif'
- st.isDifference = false
- st.count = (python_settings.count || 150)*2
- } else {
- st = s[type]
- st.e0 = d3.range(python_data.vocab.length).map(d => -Infinity)
- st.e1 = d3.range(python_data.vocab.length).map(d => -Infinity)
- st.forEach(d => {
- st.e0[d.tokenIndex] = d.e0
- st.e1[d.tokenIndex] = d.e1
- })
-
- st.label0 = st.s0
- st.label1 = st.s1
-
- st.isDifference = python_settings.isDifference
- st.count = python_settings.count || 150
-
- st.topLabel = type == 'a' ? python_data.slug_A : python_data.slug_B
- }
-
- st.vocab = python_data.vocab
-
- var sel = d3.select('.pair-' + type).html('').st({width: 400, marginRight: 40})
- initPair(st, sel.append('div'))
- }
- drawScatter('b')
- drawScatter('a')
- drawScatter('ab')
-
- d3.selectAll('.sentence').classed('active', d => d == s)
-
- d3.selectAll('tr.sentence').filter(d => d == s)
- .each(function(){
- this.scrollIntoView({ block: 'nearest', inline: 'nearest'})
- })
- }
- setSentenceAsPair(bySentence[0])
-
-}
-
-
-
-window.init()
-
diff --git a/spaces/michaljunczyk/pl-asr-bigos-workspace/helpers.py b/spaces/michaljunczyk/pl-asr-bigos-workspace/helpers.py
deleted file mode 100644
index f0bce511c1340b833acb259980b2caebe108ac95..0000000000000000000000000000000000000000
--- a/spaces/michaljunczyk/pl-asr-bigos-workspace/helpers.py
+++ /dev/null
@@ -1,79 +0,0 @@
-dict_origin = {
- "Poland": {
- "cities": [
- "Warsaw",
- "Białystok",
- "Bydgoszcz",
- "Gdańsk",
- "Gorzów Wielkopolski",
- "Katowice",
- "Kielce",
- "Kraków",
- "Lublin",
- "Łódź",
- "Olsztyn",
- "Opole",
- "Poznań",
- "Rzeszów",
- "Szczecin",
- "Toruń",
- "Wrocław"
- ]
- },
- "Ukraine": {
- "cities": [
- "Kyiv",
- "Kharkiv",
- "Dnipro",
- "Odessa",
- "Lviv",
- "Zaporizhzhia",
- "Kryvyi Rih",
- "Mykolaiv",
- "Mariupol",
- "Luhansk",
- "Makiivka",
- "Vinnytsia",
- "Simferopol",
- "Kherson",
- "Poltava",
- "Chernihiv",
- "Cherkasy",
- "Sumy",
- "Zhytomyr",
- "Horlivka"
- ]
- }
-}
-dict_promptset={
- "bridge":[
- "Licytacja pozwala określić, jaka gra zostanie rozegrana w danym rozdaniu.",
- "Kiedy twoja para jest w defensywie, ważne jest, aby rozumieć sygnały dawane przez partnera.",
- "Kontra pokazuje pewne zasoby punktowe i/lub określone kolory w ręce gracza.",
- "Rekontra zwiększa wynik jeśli przeciwnik zostanie skontraktowany.",
- "Manche to poziom licytacji, na którym można zdobyć co najmniej 100 punktów za poniżej linii.",
- "Slem to licytacja na poziomie 6, który wymaga wzięcia wszystkich lew oprócz jednej.",
- "Wielki slem to licytacja na poziomie 7, gdzie musisz wziąć wszystkie 13 lew.",
- "Finesse to technika, dzięki której możesz zdobyć lewę, mimo że przeciwnik ma wyższą kartę.",
- "Ciąg licytacyjny składa się z kilku kolejnych deklaracji licytacyjnych.",
- "W brydżu sportowym ważna jest komunikacja niewerbalna, dlatego nie wolno używać żadnych znaków ani gestów.",
- "Jeśli omyłkowo zadeklarujesz niewłaściwy kontrakt, możesz poprosić o poprawienie błędu przed wyjściem.",
- "Deklarant próbuje zdobyć zadeklarowaną liczbę lew, podczas gdy obrońcy starają się to uniemożliwić.",
- "Jeśli grasz w kolorze trefl, twoim zadaniem jest zdobyć jak najwięcej lew w tym kolorze.",
- "As to najwyższa karta w każdym kolorze.",
- "Zawistą nazywamy pierwszą kartę wychodzoną przez obrońcę.",
- "Balansująca licytacja może pomóc twojej parze wejść do gry, kiedy przeciwnicy są blisko kontraktu.",
- "Nie wszyscy gracze stosują ten sam system licytacyjny.",
- "W brydżu ważne jest, aby pamiętać o kolejności kart i taktyce ich wykładania.",
- "Zapadka to sytuacja, gdy któryś z graczy jest zmuszony do zagrania karty, która przyniesie korzyść przeciwnikowi.",
- "Pamiętaj, aby zawsze podążać kolorem jeśli masz kartę w tym kolorze.",
- "Zabawa w brydża wymaga koncentracji, taktyki i dobrej komunikacji z partnerem.",
- "Każdy gracz ma 13 kart w ręku podczas rozgrywki.",
- "Otwarcie to pierwsza licytacja w rozdaniu.",
- "Niektóre systemy licytacyjne zawierają szczegółowe konwencje, które precyzyjnie opisują siłę i kolor ręki.",
- "W brydżu impasem nazywamy sytuację, gdy mamy jedną kartę niższą od karty przeciwnika w tym samym kolorze i mamy szansę wziąć lewę, jeśli ten kolor zostanie wychodzony z odpowiedniej strony.",
- "Czasami kontrakt jest niewykonalny i celem staje się minimalizacja strat.",
- "Gdy grasz kontrakt bez atu, trumfem staje się kolor kier.",
-        "Niektóre pary używają specjalnych systemów sygnałów, aby przekazywać sobie dodatkowe informacje podczas obrony."
- ]
-}
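A small usage sketch (not part of the original file) showing how these lookup tables might be sampled when preparing a recording session:

```python
import random

country = random.choice(list(dict_origin))                 # e.g. "Poland" or "Ukraine"
city = random.choice(dict_origin[country]["cities"])
prompt = random.choice(dict_promptset["bridge"])
print(f"{city}, {country}: {prompt}")
```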
diff --git a/spaces/mindtube/protogen-models/app.py b/spaces/mindtube/protogen-models/app.py
deleted file mode 100644
index a586572350efe8770a0dc7f164d1cb523fa8b8b1..0000000000000000000000000000000000000000
--- a/spaces/mindtube/protogen-models/app.py
+++ /dev/null
@@ -1,96 +0,0 @@
-import gradio as gr
-import os
-import sys
-from pathlib import Path
-
-models = [
- {"name": "Protogen 2.2", "url": "darkstorm2150/Protogen_v2.2_Official_Release"},
- {"name": "Protogen X 3.4", "url": "darkstorm2150/Protogen_x3.4_Official_Release"},
- {"name": "Protogen X 5.8", "url": "darkstorm2150/Protogen_x5.8_Official_Release"},
- {"name": "Protogen X 5.3", "url": "darkstorm2150/Protogen_x5.3_Official_Release"},
- {"name": "Protogen Nova", "url": "darkstorm2150/Protogen_Nova_Official_Release"},
-    {"name": "Protogen Eclipse", "url": "darkstorm2150/Protogen_Eclipse_Official_Release"},
- {"name": "Protogen Infinity", "url": "darkstorm2150/Protogen_Infinity_Official_Release"},
-]
-
-current_model = models[0]
-
-text_gen = gr.Interface.load("spaces/daspartho/prompt-extend")
-
-models2 = []
-for model in models:
- model_url = f"models/{model['url']}"
- loaded_model = gr.Interface.load(model_url, live=True, preprocess=True)
- models2.append(loaded_model)
-
-
-def text_it(inputs, text_gen=text_gen):
- return text_gen(inputs)
-
-
-def set_model(current_model_index):
- global current_model
- current_model = models[current_model_index]
- return gr.update(value=f"{current_model['name']}")
-
-
-def send_it(inputs, model_choice):
- proc = models2[model_choice]
- return proc(inputs)
-
-
-with gr.Blocks() as myface:
- gr.HTML(
-
- )
-
- with gr.Row():
- with gr.Row():
- input_text = gr.Textbox(label="Prompt idea", placeholder="Eg. Mystical zen garden", lines=1)
- # Model selection dropdown
- model_name1 = gr.Dropdown(
- label="Choose Model",
- choices=[m["name"] for m in models],
- type="index",
- value=current_model["name"],
- interactive=True,
- )
- with gr.Row():
- see_prompts = gr.Button("Generate Prompts")
- run = gr.Button("Generate Images", variant="primary")
-
- with gr.Row():
- output1 = gr.Image(label="")
- output2 = gr.Image(label="")
- output3 = gr.Image(label="")
- with gr.Row():
- magic1 = gr.Textbox(label="Generated Prompt", lines=2)
- magic2 = gr.Textbox(label="Generated Prompt", lines=2)
- magic3 = gr.Textbox(label="Generated Prompt", lines=2)
- with gr.Row():
- output4 = gr.Image(label="")
- output5 = gr.Image(label="")
- output6 = gr.Image(label="")
- with gr.Row():
- magic4 = gr.Textbox(label="Generated Prompt", lines=2)
- magic5 = gr.Textbox(label="Generated Prompt", lines=2)
- magic6 = gr.Textbox(label="Generated Prompt", lines=2)
-
- model_name1.change(set_model, inputs=model_name1, outputs=[output1, output2, output3, output4, output5, output6])
-
- run.click(send_it, inputs=[magic1, model_name1], outputs=[output1])
- run.click(send_it, inputs=[magic2, model_name1], outputs=[output2])
- run.click(send_it, inputs=[magic3, model_name1], outputs=[output3])
- run.click(send_it, inputs=[magic4, model_name1], outputs=[output4])
- run.click(send_it, inputs=[magic5, model_name1], outputs=[output5])
- run.click(send_it, inputs=[magic6, model_name1], outputs=[output6])
-
- see_prompts.click(text_it, inputs=[input_text], outputs=[magic1])
- see_prompts.click(text_it, inputs=[input_text], outputs=[magic2])
- see_prompts.click(text_it, inputs=[input_text], outputs=[magic3])
- see_prompts.click(text_it, inputs=[input_text], outputs=[magic4])
- see_prompts.click(text_it, inputs=[input_text], outputs=[magic5])
- see_prompts.click(text_it, inputs=[input_text], outputs=[magic6])
-
-myface.queue(concurrency_count=200)
-myface.launch(inline=True, show_api=False, max_threads=400)
\ No newline at end of file
diff --git a/spaces/mlfoundations/VisIT-Bench-Leaderboard/app.py b/spaces/mlfoundations/VisIT-Bench-Leaderboard/app.py
deleted file mode 100644
index 743f008e9db1102edeba0e3d2b571626a270cb10..0000000000000000000000000000000000000000
--- a/spaces/mlfoundations/VisIT-Bench-Leaderboard/app.py
+++ /dev/null
@@ -1,23 +0,0 @@
-import gradio as gr
-import pandas as pd
-
-# df = pd.read_table("visit_bench_leaderboard.tsv")
-df = pd.read_table('visitbench_leaderboard_Single~Image_Oct282023.tsv')
-
-headline = """# VisIT-Bench Leaderboard
-
-To submit your results to the leaderboard, please add a "predictions" column to [this csv](https://huggingface.co/datasets/mlfoundations/VisIT-Bench/blob/main/visit_bench_single_image.csv) and send it to [this email](mailto:yonatanbitton1@gmail.com).
-Please include in your email 1) a name for your model, 2) your team name (including your affiliation), and, optionally, 3) a GitHub repo or paper link.
-"""
-demo = gr.Blocks()
-with demo:
- with gr.Row():
- gr.Markdown(headline)
-
- with gr.Column():
- leaderboard_df = gr.components.DataFrame(
- value=df,
- datatype=["markdown", "markdown", "number", "number", "number"]
- )
-
-demo.launch()
diff --git a/spaces/monra/freegpt-webui-chimera/client/css/checkbox.css b/spaces/monra/freegpt-webui-chimera/client/css/checkbox.css
deleted file mode 100644
index 94955b604ea3fab493a50d740fb29be1a8ef6cd3..0000000000000000000000000000000000000000
--- a/spaces/monra/freegpt-webui-chimera/client/css/checkbox.css
+++ /dev/null
@@ -1,55 +0,0 @@
-.checkbox input {
- height: 0;
- width: 0;
- display: none;
-}
-
-.checkbox span {
- font-size: 0.875rem;
- color: var(--colour-2);
- margin-left: 4px;
-}
-
-.checkbox label:after {
- content: "";
- position: absolute;
- top: 50%;
- transform: translateY(-50%);
- left: 5px;
- width: 20px;
- height: 20px;
- background: var(--blur-border);
- border-radius: 90px;
- transition: 0.33s;
-}
-
-.checkbox input + label:after,
-.checkbox input:checked + label {
- background: var(--colour-3);
-}
-
-.checkbox input + label,
-.checkbox input:checked + label:after {
- background: var(--blur-border);
-}
-
-.checkbox input:checked + label:after {
- left: calc(100% - 5px - 20px);
-}
-
-@media screen and (max-width: 990px) {
- .checkbox label {
- width: 25px;
- height: 15px;
- }
-
- .checkbox label:after {
- left: 2px;
- width: 10px;
- height: 10px;
- }
-
- .checkbox input:checked + label:after {
- left: calc(100% - 2px - 10px);
- }
-}
diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/data/backtranslation_dataset.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/data/backtranslation_dataset.py
deleted file mode 100644
index 8f70c90df3d237077537993e125d366c95292f1a..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/fairseq/fairseq/data/backtranslation_dataset.py
+++ /dev/null
@@ -1,165 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-from fairseq import utils
-
-from . import FairseqDataset
-
-
-def backtranslate_samples(samples, collate_fn, generate_fn, cuda=True):
- """Backtranslate a list of samples.
-
- Given an input (*samples*) of the form:
-
- [{'id': 1, 'source': 'hallo welt'}]
-
- this will return:
-
- [{'id': 1, 'source': 'hello world', 'target': 'hallo welt'}]
-
- Args:
- samples (List[dict]): samples to backtranslate. Individual samples are
- expected to have a 'source' key, which will become the 'target'
- after backtranslation.
- collate_fn (callable): function to collate samples into a mini-batch
- generate_fn (callable): function to generate backtranslations
- cuda (bool): use GPU for generation (default: ``True``)
-
- Returns:
- List[dict]: an updated list of samples with a backtranslated source
- """
- collated_samples = collate_fn(samples)
- s = utils.move_to_cuda(collated_samples) if cuda else collated_samples
- generated_sources = generate_fn(s)
-
- id_to_src = {sample["id"]: sample["source"] for sample in samples}
-
- # Go through each tgt sentence in batch and its corresponding best
- # generated hypothesis and create a backtranslation data pair
- # {id: id, source: generated backtranslation, target: original tgt}
- return [
- {
- "id": id.item(),
- "target": id_to_src[id.item()],
- "source": hypos[0]["tokens"].cpu(),
- }
- for id, hypos in zip(collated_samples["id"], generated_sources)
- ]
-
-
-class BacktranslationDataset(FairseqDataset):
- """
- Sets up a backtranslation dataset which takes a tgt batch, generates
- a src using a tgt-src backtranslation function (*backtranslation_fn*),
- and returns the corresponding `{generated src, input tgt}` batch.
-
- Args:
- tgt_dataset (~fairseq.data.FairseqDataset): the dataset to be
- backtranslated. Only the source side of this dataset will be used.
- After backtranslation, the source sentences in this dataset will be
- returned as the targets.
- src_dict (~fairseq.data.Dictionary): the dictionary of backtranslated
- sentences.
- tgt_dict (~fairseq.data.Dictionary, optional): the dictionary of
- sentences to be backtranslated.
- backtranslation_fn (callable, optional): function to call to generate
- backtranslations. This is typically the `generate` method of a
- :class:`~fairseq.sequence_generator.SequenceGenerator` object.
- Pass in None when it is not available at initialization time, and
- use set_backtranslation_fn function to set it when available.
- output_collater (callable, optional): function to call on the
- backtranslated samples to create the final batch
- (default: ``tgt_dataset.collater``).
- cuda: use GPU for generation
- """
-
- def __init__(
- self,
- tgt_dataset,
- src_dict,
- tgt_dict=None,
- backtranslation_fn=None,
- output_collater=None,
- cuda=True,
- **kwargs
- ):
- self.tgt_dataset = tgt_dataset
- self.backtranslation_fn = backtranslation_fn
- self.output_collater = (
- output_collater if output_collater is not None else tgt_dataset.collater
- )
- self.cuda = cuda if torch.cuda.is_available() else False
- self.src_dict = src_dict
- self.tgt_dict = tgt_dict
-
- def __getitem__(self, index):
- """
- Returns a single sample from *tgt_dataset*. Note that backtranslation is
- not applied in this step; use :func:`collater` instead to backtranslate
- a batch of samples.
- """
- return self.tgt_dataset[index]
-
- def __len__(self):
- return len(self.tgt_dataset)
-
- def set_backtranslation_fn(self, backtranslation_fn):
- self.backtranslation_fn = backtranslation_fn
-
- def collater(self, samples):
- """Merge and backtranslate a list of samples to form a mini-batch.
-
- Using the samples from *tgt_dataset*, load a collated target sample to
- feed to the backtranslation model. Then take the backtranslation with
- the best score as the source and the original input as the target.
-
- Note: we expect *tgt_dataset* to provide a function `collater()` that
- will collate samples into the format expected by *backtranslation_fn*.
- After backtranslation, we will feed the new list of samples (i.e., the
- `(backtranslated source, original source)` pairs) to *output_collater*
- and return the result.
-
- Args:
- samples (List[dict]): samples to backtranslate and collate
-
- Returns:
- dict: a mini-batch with keys coming from *output_collater*
- """
- if samples[0].get("is_dummy", False):
- return samples
- samples = backtranslate_samples(
- samples=samples,
- collate_fn=self.tgt_dataset.collater,
- generate_fn=(lambda net_input: self.backtranslation_fn(net_input)),
- cuda=self.cuda,
- )
- return self.output_collater(samples)
-
- def num_tokens(self, index):
- """Just use the tgt dataset num_tokens"""
- return self.tgt_dataset.num_tokens(index)
-
- def ordered_indices(self):
- """Just use the tgt dataset ordered_indices"""
- return self.tgt_dataset.ordered_indices()
-
- def size(self, index):
- """Return an example's size as a float or tuple. This value is used
- when filtering a dataset with ``--max-positions``.
-
- Note: we use *tgt_dataset* to approximate the length of the source
- sentence, since we do not know the actual length until after
- backtranslation.
- """
- tgt_size = self.tgt_dataset.size(index)[0]
- return (tgt_size, tgt_size)
-
- @property
- def supports_prefetch(self):
- return getattr(self.tgt_dataset, "supports_prefetch", False)
-
- def prefetch(self, indices):
- return self.tgt_dataset.prefetch(indices)
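A minimal wiring sketch (not part of the original file). `tgt_dataset`, the dictionaries, `generator`, and `backtranslation_model` are placeholders for objects a fairseq task would normally construct; the lambda follows the docstring's suggestion of using a `SequenceGenerator.generate` call:

```python
# Build the dataset first; the generator usually only exists after the model is loaded,
# so the backtranslation function is attached afterwards via set_backtranslation_fn().
bt_dataset = BacktranslationDataset(
    tgt_dataset=tgt_dataset,      # any FairseqDataset of monolingual target sentences
    src_dict=src_dict,
    tgt_dict=tgt_dict,
    backtranslation_fn=None,
    cuda=True,
)
bt_dataset.set_backtranslation_fn(
    lambda sample: generator.generate([backtranslation_model], sample)
)

# collater() backtranslates the batch and returns {generated src, original tgt} pairs.
batch = bt_dataset.collater([bt_dataset[i] for i in range(8)])
```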
diff --git a/spaces/mshukor/UnIVAL/slurm_adastra/averaging/caption/ofa_wacaption_vqacapsnliground_caption_stage_1_lr1e5.sh b/spaces/mshukor/UnIVAL/slurm_adastra/averaging/caption/ofa_wacaption_vqacapsnliground_caption_stage_1_lr1e5.sh
deleted file mode 100644
index 65ffed1e5189082315363609201689441cb56b38..0000000000000000000000000000000000000000
--- a/spaces/mshukor/UnIVAL/slurm_adastra/averaging/caption/ofa_wacaption_vqacapsnliground_caption_stage_1_lr1e5.sh
+++ /dev/null
@@ -1,30 +0,0 @@
-#!/bin/bash
-
-#SBATCH --job-name=ofa_wacaption_vqacapsnliground_caption_stage_1_lr1e5
-#SBATCH --nodes=1
-#SBATCH --ntasks=1
-#SBATCH --gpus=8
-#SBATCH --threads-per-core=2
-#SBATCH --gpu-bind=closest
-####SBATCH --nodelist=x1004c4s2b0n0
-#SBATCH --time=24:00:00
-#SBATCH -C MI250
-#SBATCH -A gda2204
-#SBATCH --mail-type=END,FAIL
-#SBATCH --output=/lus/home/NAT/gda2204/mshukor/logs/slurm/ofa_wacaption_vqacapsnliground_caption_stage_1_lr1e5.out
-#SBATCH --exclusive
-#SBATCH --mail-user=mustafa.shukor@isir.upmc.fr
-
-
-cd /lus/home/NAT/gda2204/mshukor/code/ofa_ours/run_scripts
-source /lus/home/NAT/gda2204/mshukor/.bashrc
-
-conda activate main
-
-
-rm core-python3*
-
-
-srun -l -N 1 -n 1 -c 128 --gpus=8 bash averaging/caption/ofa_wacaption_vqacapsnliground_caption_stage_1_lr1e5.sh
-
-
diff --git a/spaces/mueller-franzes/medfusion-app/scripts/helpers/sample_latent_embedder.py b/spaces/mueller-franzes/medfusion-app/scripts/helpers/sample_latent_embedder.py
deleted file mode 100644
index 6ff7243e6cc69220985c434ff38b9087be86a14c..0000000000000000000000000000000000000000
--- a/spaces/mueller-franzes/medfusion-app/scripts/helpers/sample_latent_embedder.py
+++ /dev/null
@@ -1,86 +0,0 @@
-from pathlib import Path
-import math
-
-import torch
-import torch.nn.functional as F
-from torchvision.utils import save_image
-
-from medical_diffusion.data.datamodules import SimpleDataModule
-from medical_diffusion.data.datasets import AIROGSDataset, MSIvsMSS_2_Dataset, CheXpert_2_Dataset
-from medical_diffusion.models.embedders.latent_embedders import VQVAE, VQGAN, VAE, VAEGAN
-import matplotlib.pyplot as plt
-import seaborn as sns
-
-path_out = Path.cwd()/'results/test/latent_embedder'
-path_out.mkdir(parents=True, exist_ok=True)
-device = torch.device('cuda')
-torch.manual_seed(0)
-
-# ds = AIROGSDataset( # 256x256
-# crawler_ext='jpg',
-# augment_horizontal_flip=True,
-# augment_vertical_flip=True,
-# # path_root='/home/gustav/Documents/datasets/AIROGS/dataset',
-# path_root='/mnt/hdd/datasets/eye/AIROGS/data_256x256',
-# )
-
-# ds = MSIvsMSS_2_Dataset( # 512x512
-# # image_resize=256,
-# crawler_ext='jpg',
-# augment_horizontal_flip=False,
-# augment_vertical_flip=False,
-# # path_root='/home/gustav/Documents/datasets/Kather_2/train'
-# path_root='/mnt/hdd/datasets/pathology/kather_msi_mss_2/train/'
-# )
-
-ds = CheXpert_2_Dataset( # 256x256
- augment_horizontal_flip=False,
- augment_vertical_flip=False,
- path_root = '/mnt/hdd/datasets/chest/CheXpert/ChecXpert-v10/preprocessed_tianyu'
-)
-
-dm = SimpleDataModule(
- ds_train = ds,
- batch_size=4,
- num_workers=0,
-)
-
-
-# ------------------ Load Model -------------------
-model = VAE.load_from_checkpoint('runs/2022_12_12_133315_chest_vaegan/last_vae.ckpt')
-
-# from diffusers import StableDiffusionPipeline
-# with open('auth_token.txt', 'r') as file:
-# auth_token = file.read()
-# pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float32, use_auth_token=auth_token)
-# model = pipe.vae
-
-model = model.to(device)
-
-# ------------- Reset Seed ------------
-torch.manual_seed(0)
-
-# ------------ Prepare Data ----------------
-date_iter = iter(dm.train_dataloader())
-for k in range(1):
- batch = next(date_iter)
-x = batch['source']
-x = x.to(device) #.to(torch.float16)
-
-# ------------- Run Model ----------------
-with torch.no_grad():
- # ------------- Encode ----------
- z = model.encode(x)
- # z = z.latent_dist.sample() # Only for stable-diffusion
-
- # ------------- Decode -----------
- sns.histplot(z.flatten().detach().cpu().numpy())
- plt.savefig('test.png')
- x_pred = model.decode(z)
- # x_pred = x_pred.sample # Only for stable-diffusion
- x_pred = x_pred.clamp(-1, 1)
-
-images = x_pred[0] #torch.cat([x, x_pred])
-save_image(images, path_out/'latent_embedder_vaegan.png', nrow=x.shape[0], normalize=True, scale_each=True)
-
-
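A small follow-up check (not part of the original script) that could be appended here to quantify reconstruction quality; it reuses `x`, `x_pred`, and the `F`/`math` imports already in scope, and assumes images are scaled to [-1, 1]:

```python
# Reconstruction error of the latent embedder on this batch (lower MSE / higher PSNR is better).
mse = F.mse_loss(x_pred, x).item()
psnr = 10 * math.log10(4.0 / mse) if mse > 0 else float("inf")   # peak-to-peak range of 2 -> MAX^2 = 4
print(f"MSE: {mse:.4f}  PSNR: {psnr:.2f} dB")
```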
diff --git a/spaces/mygyasir/genious_bgremover/carvekit/web/static/css/bootstrap.min.css b/spaces/mygyasir/genious_bgremover/carvekit/web/static/css/bootstrap.min.css
deleted file mode 100644
index b68ee65bc15b833071bbdf97e549a068eae9706b..0000000000000000000000000000000000000000
--- a/spaces/mygyasir/genious_bgremover/carvekit/web/static/css/bootstrap.min.css
+++ /dev/null
@@ -1,7772 +0,0 @@
-/*!
- * Bootstrap v3.3.1 (http://getbootstrap.com)
- * Copyright 2011-2014 Twitter, Inc.
- * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
- */
-/*! normalize.css v3.0.2 | MIT License | git.io/normalize */
-html {
- font-family: sans-serif;
- -webkit-text-size-adjust: 100%;
- -ms-text-size-adjust: 100%
-}
-
-body {
- margin: 0
-}
-
-article,
-aside,
-details,
-figcaption,
-figure,
-footer,
-header,
-hgroup,
-main,
-menu,
-nav,
-section,
-summary {
- display: block
-}
-
-audio,
-canvas,
-progress,
-video {
- display: inline-block;
- vertical-align: baseline
-}
-
-audio:not([controls]) {
- display: none;
- height: 0
-}
-
-[hidden],
-template {
- display: none
-}
-
-a {
- background-color: transparent
-}
-
-a:active,
-a:hover {
- outline: 0
-}
-
-abbr[title] {
- border-bottom: 1px dotted
-}
-
-b,
-strong {
- font-weight: 700
-}
-
-dfn {
- font-style: italic
-}
-
-h1 {
- margin: .67em 0;
- font-size: 2em
-}
-
-mark {
- color: #000;
- background: #ff0
-}
-
-small {
- font-size: 80%
-}
-
-sub,
-sup {
- position: relative;
- font-size: 75%;
- line-height: 0;
- vertical-align: baseline
-}
-
-sup {
- top: -.5em
-}
-
-sub {
- bottom: -.25em
-}
-
-img {
- border: 0
-}
-
-svg:not(:root) {
- overflow: hidden
-}
-
-figure {
- margin: 1em 40px
-}
-
-hr {
- height: 0;
- -webkit-box-sizing: content-box;
- -moz-box-sizing: content-box;
- box-sizing: content-box
-}
-
-pre {
- overflow: auto
-}
-
-code,
-kbd,
-pre,
-samp {
- font-family: monospace, monospace;
- font-size: 1em
-}
-
-button,
-input,
-optgroup,
-select,
-textarea {
- margin: 0;
- font: inherit;
- color: inherit
-}
-
-button {
- overflow: visible
-}
-
-button,
-select {
- text-transform: none
-}
-
-button,
-html input[type=button],
-input[type=reset],
-input[type=submit] {
- -webkit-appearance: button;
- cursor: pointer
-}
-
-button[disabled],
-html input[disabled] {
- cursor: default
-}
-
-button::-moz-focus-inner,
-input::-moz-focus-inner {
- padding: 0;
- border: 0
-}
-
-input {
- line-height: normal
-}
-
-input[type=checkbox],
-input[type=radio] {
- -webkit-box-sizing: border-box;
- -moz-box-sizing: border-box;
- box-sizing: border-box;
- padding: 0
-}
-
-input[type=number]::-webkit-inner-spin-button,
-input[type=number]::-webkit-outer-spin-button {
- height: auto
-}
-
-input[type=search] {
- -webkit-box-sizing: content-box;
- -moz-box-sizing: content-box;
- box-sizing: content-box;
- -webkit-appearance: textfield
-}
-
-input[type=search]::-webkit-search-cancel-button,
-input[type=search]::-webkit-search-decoration {
- -webkit-appearance: none
-}
-
-fieldset {
- padding: .35em .625em .75em;
- margin: 0 2px;
- border: 1px solid silver
-}
-
-legend {
- padding: 0;
- border: 0
-}
-
-textarea {
- overflow: auto
-}
-
-optgroup {
- font-weight: 700
-}
-
-table {
- border-spacing: 0;
- border-collapse: collapse
-}
-
-td,
-th {
- padding: 0
-}
-
-/*! Source: https://github.com/h5bp/html5-boilerplate/blob/master/src/css/main.css */
-@media print {
-
- *,
- :before,
- :after {
- color: #000 !important;
- text-shadow: none !important;
- background: transparent !important;
- -webkit-box-shadow: none !important;
- box-shadow: none !important
- }
-
- a,
- a:visited {
- text-decoration: underline
- }
-
- a[href]:after {
- content: " (" attr(href) ")"
- }
-
- abbr[title]:after {
- content: " (" attr(title) ")"
- }
-
- a[href^="#"]:after,
- a[href^="javascript:"]:after {
- content: ""
- }
-
- pre,
- blockquote {
- border: 1px solid #999;
- page-break-inside: avoid
- }
-
- thead {
- display: table-header-group
- }
-
- tr,
- img {
- page-break-inside: avoid
- }
-
- img {
- max-width: 100% !important
- }
-
- p,
- h2,
- h3 {
- orphans: 3;
- widows: 3
- }
-
- h2,
- h3 {
- page-break-after: avoid
- }
-
- select {
- background: #fff !important
- }
-
- .navbar {
- display: none
- }
-
- .btn>.caret,
- .dropup>.btn>.caret {
- border-top-color: #000 !important
- }
-
- .label {
- border: 1px solid #000
- }
-
- .table {
- border-collapse: collapse !important
- }
-
- .table td,
- .table th {
- background-color: #fff !important
- }
-
- .table-bordered th,
- .table-bordered td {
- border: 1px solid #ddd !important
- }
-}
-
-@font-face {
- font-family: 'Glyphicons Halflings';
- src: url(../fonts/glyphicons-halflings-regular.eot);
- src: url(../fonts/glyphicons-halflings-regular.eot?#iefix) format('embedded-opentype'), url(../fonts/glyphicons-halflings-regular.woff) format('woff'), url(../fonts/glyphicons-halflings-regular.ttf) format('truetype'), url(../fonts/glyphicons-halflings-regular.svg#glyphicons_halflingsregular) format('svg')
-}
-
-.glyphicon {
- position: relative;
- top: 1px;
- display: inline-block;
- font-family: 'Glyphicons Halflings';
- font-style: normal;
- font-weight: 400;
- line-height: 1;
- -webkit-font-smoothing: antialiased;
- -moz-osx-font-smoothing: grayscale
-}
-
-.glyphicon-asterisk:before {
- content: "\2a"
-}
-
-.glyphicon-plus:before {
- content: "\2b"
-}
-
-.glyphicon-euro:before,
-.glyphicon-eur:before {
- content: "\20ac"
-}
-
-.glyphicon-minus:before {
- content: "\2212"
-}
-
-.glyphicon-cloud:before {
- content: "\2601"
-}
-
-.glyphicon-envelope:before {
- content: "\2709"
-}
-
-.glyphicon-pencil:before {
- content: "\270f"
-}
-
-.glyphicon-glass:before {
- content: "\e001"
-}
-
-.glyphicon-music:before {
- content: "\e002"
-}
-
-.glyphicon-search:before {
- content: "\e003"
-}
-
-.glyphicon-heart:before {
- content: "\e005"
-}
-
-.glyphicon-star:before {
- content: "\e006"
-}
-
-.glyphicon-star-empty:before {
- content: "\e007"
-}
-
-.glyphicon-user:before {
- content: "\e008"
-}
-
-.glyphicon-film:before {
- content: "\e009"
-}
-
-.glyphicon-th-large:before {
- content: "\e010"
-}
-
-.glyphicon-th:before {
- content: "\e011"
-}
-
-.glyphicon-th-list:before {
- content: "\e012"
-}
-
-.glyphicon-ok:before {
- content: "\e013"
-}
-
-.glyphicon-remove:before {
- content: "\e014"
-}
-
-.glyphicon-zoom-in:before {
- content: "\e015"
-}
-
-.glyphicon-zoom-out:before {
- content: "\e016"
-}
-
-.glyphicon-off:before {
- content: "\e017"
-}
-
-.glyphicon-signal:before {
- content: "\e018"
-}
-
-.glyphicon-cog:before {
- content: "\e019"
-}
-
-.glyphicon-trash:before {
- content: "\e020"
-}
-
-.glyphicon-home:before {
- content: "\e021"
-}
-
-.glyphicon-file:before {
- content: "\e022"
-}
-
-.glyphicon-time:before {
- content: "\e023"
-}
-
-.glyphicon-road:before {
- content: "\e024"
-}
-
-.glyphicon-download-alt:before {
- content: "\e025"
-}
-
-.glyphicon-download:before {
- content: "\e026"
-}
-
-.glyphicon-upload:before {
- content: "\e027"
-}
-
-.glyphicon-inbox:before {
- content: "\e028"
-}
-
-.glyphicon-play-circle:before {
- content: "\e029"
-}
-
-.glyphicon-repeat:before {
- content: "\e030"
-}
-
-.glyphicon-refresh:before {
- content: "\e031"
-}
-
-.glyphicon-list-alt:before {
- content: "\e032"
-}
-
-.glyphicon-lock:before {
- content: "\e033"
-}
-
-.glyphicon-flag:before {
- content: "\e034"
-}
-
-.glyphicon-headphones:before {
- content: "\e035"
-}
-
-.glyphicon-volume-off:before {
- content: "\e036"
-}
-
-.glyphicon-volume-down:before {
- content: "\e037"
-}
-
-.glyphicon-volume-up:before {
- content: "\e038"
-}
-
-.glyphicon-qrcode:before {
- content: "\e039"
-}
-
-.glyphicon-barcode:before {
- content: "\e040"
-}
-
-.glyphicon-tag:before {
- content: "\e041"
-}
-
-.glyphicon-tags:before {
- content: "\e042"
-}
-
-.glyphicon-book:before {
- content: "\e043"
-}
-
-.glyphicon-bookmark:before {
- content: "\e044"
-}
-
-.glyphicon-print:before {
- content: "\e045"
-}
-
-.glyphicon-camera:before {
- content: "\e046"
-}
-
-.glyphicon-font:before {
- content: "\e047"
-}
-
-.glyphicon-bold:before {
- content: "\e048"
-}
-
-.glyphicon-italic:before {
- content: "\e049"
-}
-
-.glyphicon-text-height:before {
- content: "\e050"
-}
-
-.glyphicon-text-width:before {
- content: "\e051"
-}
-
-.glyphicon-align-left:before {
- content: "\e052"
-}
-
-.glyphicon-align-center:before {
- content: "\e053"
-}
-
-.glyphicon-align-right:before {
- content: "\e054"
-}
-
-.glyphicon-align-justify:before {
- content: "\e055"
-}
-
-.glyphicon-list:before {
- content: "\e056"
-}
-
-.glyphicon-indent-left:before {
- content: "\e057"
-}
-
-.glyphicon-indent-right:before {
- content: "\e058"
-}
-
-.glyphicon-facetime-video:before {
- content: "\e059"
-}
-
-.glyphicon-picture:before {
- content: "\e060"
-}
-
-.glyphicon-map-marker:before {
- content: "\e062"
-}
-
-.glyphicon-adjust:before {
- content: "\e063"
-}
-
-.glyphicon-tint:before {
- content: "\e064"
-}
-
-.glyphicon-edit:before {
- content: "\e065"
-}
-
-.glyphicon-share:before {
- content: "\e066"
-}
-
-.glyphicon-check:before {
- content: "\e067"
-}
-
-.glyphicon-move:before {
- content: "\e068"
-}
-
-.glyphicon-step-backward:before {
- content: "\e069"
-}
-
-.glyphicon-fast-backward:before {
- content: "\e070"
-}
-
-.glyphicon-backward:before {
- content: "\e071"
-}
-
-.glyphicon-play:before {
- content: "\e072"
-}
-
-.glyphicon-pause:before {
- content: "\e073"
-}
-
-.glyphicon-stop:before {
- content: "\e074"
-}
-
-.glyphicon-forward:before {
- content: "\e075"
-}
-
-.glyphicon-fast-forward:before {
- content: "\e076"
-}
-
-.glyphicon-step-forward:before {
- content: "\e077"
-}
-
-.glyphicon-eject:before {
- content: "\e078"
-}
-
-.glyphicon-chevron-left:before {
- content: "\e079"
-}
-
-.glyphicon-chevron-right:before {
- content: "\e080"
-}
-
-.glyphicon-plus-sign:before {
- content: "\e081"
-}
-
-.glyphicon-minus-sign:before {
- content: "\e082"
-}
-
-.glyphicon-remove-sign:before {
- content: "\e083"
-}
-
-.glyphicon-ok-sign:before {
- content: "\e084"
-}
-
-.glyphicon-question-sign:before {
- content: "\e085"
-}
-
-.glyphicon-info-sign:before {
- content: "\e086"
-}
-
-.glyphicon-screenshot:before {
- content: "\e087"
-}
-
-.glyphicon-remove-circle:before {
- content: "\e088"
-}
-
-.glyphicon-ok-circle:before {
- content: "\e089"
-}
-
-.glyphicon-ban-circle:before {
- content: "\e090"
-}
-
-.glyphicon-arrow-left:before {
- content: "\e091"
-}
-
-.glyphicon-arrow-right:before {
- content: "\e092"
-}
-
-.glyphicon-arrow-up:before {
- content: "\e093"
-}
-
-.glyphicon-arrow-down:before {
- content: "\e094"
-}
-
-.glyphicon-share-alt:before {
- content: "\e095"
-}
-
-.glyphicon-resize-full:before {
- content: "\e096"
-}
-
-.glyphicon-resize-small:before {
- content: "\e097"
-}
-
-.glyphicon-exclamation-sign:before {
- content: "\e101"
-}
-
-.glyphicon-gift:before {
- content: "\e102"
-}
-
-.glyphicon-leaf:before {
- content: "\e103"
-}
-
-.glyphicon-fire:before {
- content: "\e104"
-}
-
-.glyphicon-eye-open:before {
- content: "\e105"
-}
-
-.glyphicon-eye-close:before {
- content: "\e106"
-}
-
-.glyphicon-warning-sign:before {
- content: "\e107"
-}
-
-.glyphicon-plane:before {
- content: "\e108"
-}
-
-.glyphicon-calendar:before {
- content: "\e109"
-}
-
-.glyphicon-random:before {
- content: "\e110"
-}
-
-.glyphicon-comment:before {
- content: "\e111"
-}
-
-.glyphicon-magnet:before {
- content: "\e112"
-}
-
-.glyphicon-chevron-up:before {
- content: "\e113"
-}
-
-.glyphicon-chevron-down:before {
- content: "\e114"
-}
-
-.glyphicon-retweet:before {
- content: "\e115"
-}
-
-.glyphicon-shopping-cart:before {
- content: "\e116"
-}
-
-.glyphicon-folder-close:before {
- content: "\e117"
-}
-
-.glyphicon-folder-open:before {
- content: "\e118"
-}
-
-.glyphicon-resize-vertical:before {
- content: "\e119"
-}
-
-.glyphicon-resize-horizontal:before {
- content: "\e120"
-}
-
-.glyphicon-hdd:before {
- content: "\e121"
-}
-
-.glyphicon-bullhorn:before {
- content: "\e122"
-}
-
-.glyphicon-bell:before {
- content: "\e123"
-}
-
-.glyphicon-certificate:before {
- content: "\e124"
-}
-
-.glyphicon-thumbs-up:before {
- content: "\e125"
-}
-
-.glyphicon-thumbs-down:before {
- content: "\e126"
-}
-
-.glyphicon-hand-right:before {
- content: "\e127"
-}
-
-.glyphicon-hand-left:before {
- content: "\e128"
-}
-
-.glyphicon-hand-up:before {
- content: "\e129"
-}
-
-.glyphicon-hand-down:before {
- content: "\e130"
-}
-
-.glyphicon-circle-arrow-right:before {
- content: "\e131"
-}
-
-.glyphicon-circle-arrow-left:before {
- content: "\e132"
-}
-
-.glyphicon-circle-arrow-up:before {
- content: "\e133"
-}
-
-.glyphicon-circle-arrow-down:before {
- content: "\e134"
-}
-
-.glyphicon-globe:before {
- content: "\e135"
-}
-
-.glyphicon-wrench:before {
- content: "\e136"
-}
-
-.glyphicon-tasks:before {
- content: "\e137"
-}
-
-.glyphicon-filter:before {
- content: "\e138"
-}
-
-.glyphicon-briefcase:before {
- content: "\e139"
-}
-
-.glyphicon-fullscreen:before {
- content: "\e140"
-}
-
-.glyphicon-dashboard:before {
- content: "\e141"
-}
-
-.glyphicon-paperclip:before {
- content: "\e142"
-}
-
-.glyphicon-heart-empty:before {
- content: "\e143"
-}
-
-.glyphicon-link:before {
- content: "\e144"
-}
-
-.glyphicon-phone:before {
- content: "\e145"
-}
-
-.glyphicon-pushpin:before {
- content: "\e146"
-}
-
-.glyphicon-usd:before {
- content: "\e148"
-}
-
-.glyphicon-gbp:before {
- content: "\e149"
-}
-
-.glyphicon-sort:before {
- content: "\e150"
-}
-
-.glyphicon-sort-by-alphabet:before {
- content: "\e151"
-}
-
-.glyphicon-sort-by-alphabet-alt:before {
- content: "\e152"
-}
-
-.glyphicon-sort-by-order:before {
- content: "\e153"
-}
-
-.glyphicon-sort-by-order-alt:before {
- content: "\e154"
-}
-
-.glyphicon-sort-by-attributes:before {
- content: "\e155"
-}
-
-.glyphicon-sort-by-attributes-alt:before {
- content: "\e156"
-}
-
-.glyphicon-unchecked:before {
- content: "\e157"
-}
-
-.glyphicon-expand:before {
- content: "\e158"
-}
-
-.glyphicon-collapse-down:before {
- content: "\e159"
-}
-
-.glyphicon-collapse-up:before {
- content: "\e160"
-}
-
-.glyphicon-log-in:before {
- content: "\e161"
-}
-
-.glyphicon-flash:before {
- content: "\e162"
-}
-
-.glyphicon-log-out:before {
- content: "\e163"
-}
-
-.glyphicon-new-window:before {
- content: "\e164"
-}
-
-.glyphicon-record:before {
- content: "\e165"
-}
-
-.glyphicon-save:before {
- content: "\e166"
-}
-
-.glyphicon-open:before {
- content: "\e167"
-}
-
-.glyphicon-saved:before {
- content: "\e168"
-}
-
-.glyphicon-import:before {
- content: "\e169"
-}
-
-.glyphicon-export:before {
- content: "\e170"
-}
-
-.glyphicon-send:before {
- content: "\e171"
-}
-
-.glyphicon-floppy-disk:before {
- content: "\e172"
-}
-
-.glyphicon-floppy-saved:before {
- content: "\e173"
-}
-
-.glyphicon-floppy-remove:before {
- content: "\e174"
-}
-
-.glyphicon-floppy-save:before {
- content: "\e175"
-}
-
-.glyphicon-floppy-open:before {
- content: "\e176"
-}
-
-.glyphicon-credit-card:before {
- content: "\e177"
-}
-
-.glyphicon-transfer:before {
- content: "\e178"
-}
-
-.glyphicon-cutlery:before {
- content: "\e179"
-}
-
-.glyphicon-header:before {
- content: "\e180"
-}
-
-.glyphicon-compressed:before {
- content: "\e181"
-}
-
-.glyphicon-earphone:before {
- content: "\e182"
-}
-
-.glyphicon-phone-alt:before {
- content: "\e183"
-}
-
-.glyphicon-tower:before {
- content: "\e184"
-}
-
-.glyphicon-stats:before {
- content: "\e185"
-}
-
-.glyphicon-sd-video:before {
- content: "\e186"
-}
-
-.glyphicon-hd-video:before {
- content: "\e187"
-}
-
-.glyphicon-subtitles:before {
- content: "\e188"
-}
-
-.glyphicon-sound-stereo:before {
- content: "\e189"
-}
-
-.glyphicon-sound-dolby:before {
- content: "\e190"
-}
-
-.glyphicon-sound-5-1:before {
- content: "\e191"
-}
-
-.glyphicon-sound-6-1:before {
- content: "\e192"
-}
-
-.glyphicon-sound-7-1:before {
- content: "\e193"
-}
-
-.glyphicon-copyright-mark:before {
- content: "\e194"
-}
-
-.glyphicon-registration-mark:before {
- content: "\e195"
-}
-
-.glyphicon-cloud-download:before {
- content: "\e197"
-}
-
-.glyphicon-cloud-upload:before {
- content: "\e198"
-}
-
-.glyphicon-tree-conifer:before {
- content: "\e199"
-}
-
-.glyphicon-tree-deciduous:before {
- content: "\e200"
-}
-
-* {
- -webkit-box-sizing: border-box;
- -moz-box-sizing: border-box;
- box-sizing: border-box
-}
-
-:before,
-:after {
- -webkit-box-sizing: border-box;
- -moz-box-sizing: border-box;
- box-sizing: border-box
-}
-
-html {
- font-size: 10px;
- -webkit-tap-highlight-color: rgba(0, 0, 0, 0)
-}
-
-body {
- font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
- font-size: 14px;
- line-height: 1.42857143;
- color: #333;
- background-color: #fff
-}
-
-input,
-button,
-select,
-textarea {
- font-family: inherit;
- font-size: inherit;
- line-height: inherit
-}
-
-a {
- color: #337ab7;
- text-decoration: none
-}
-
-a:hover,
-a:focus {
- color: #23527c;
- text-decoration: underline
-}
-
-a:focus {
- outline: thin dotted;
- outline: 5px auto -webkit-focus-ring-color;
- outline-offset: -2px
-}
-
-figure {
- margin: 0
-}
-
-img {
- vertical-align: middle
-}
-
-.img-responsive,
-.thumbnail>img,
-.thumbnail a>img,
-.carousel-inner>.item>img,
-.carousel-inner>.item>a>img {
- display: block;
- max-width: 100%;
- height: auto
-}
-
-.img-rounded {
- border-radius: 6px
-}
-
-.img-thumbnail {
- display: inline-block;
- max-width: 100%;
- height: auto;
- padding: 4px;
- line-height: 1.42857143;
- background-color: #fff;
- border: 1px solid #ddd;
- border-radius: 4px;
- -webkit-transition: all .2s ease-in-out;
- -o-transition: all .2s ease-in-out;
- transition: all .2s ease-in-out
-}
-
-.img-circle {
- border-radius: 50%
-}
-
-hr {
- margin-top: 20px;
- margin-bottom: 20px;
- border: 0;
- border-top: 1px solid #eee
-}
-
-.sr-only {
- position: absolute;
- width: 1px;
- height: 1px;
- padding: 0;
- margin: -1px;
- overflow: hidden;
- clip: rect(0, 0, 0, 0);
- border: 0
-}
-
-.sr-only-focusable:active,
-.sr-only-focusable:focus {
- position: static;
- width: auto;
- height: auto;
- margin: 0;
- overflow: visible;
- clip: auto
-}
-
-h1,
-h2,
-h3,
-h4,
-h5,
-h6,
-.h1,
-.h2,
-.h3,
-.h4,
-.h5,
-.h6 {
- font-family: inherit;
- font-weight: 500;
- line-height: 1.1;
- color: inherit
-}
-
-h1 small,
-h2 small,
-h3 small,
-h4 small,
-h5 small,
-h6 small,
-.h1 small,
-.h2 small,
-.h3 small,
-.h4 small,
-.h5 small,
-.h6 small,
-h1 .small,
-h2 .small,
-h3 .small,
-h4 .small,
-h5 .small,
-h6 .small,
-.h1 .small,
-.h2 .small,
-.h3 .small,
-.h4 .small,
-.h5 .small,
-.h6 .small {
- font-weight: 400;
- line-height: 1;
- color: #777
-}
-
-h1,
-.h1,
-h2,
-.h2,
-h3,
-.h3 {
- margin-top: 20px;
- margin-bottom: 10px
-}
-
-h1 small,
-.h1 small,
-h2 small,
-.h2 small,
-h3 small,
-.h3 small,
-h1 .small,
-.h1 .small,
-h2 .small,
-.h2 .small,
-h3 .small,
-.h3 .small {
- font-size: 65%
-}
-
-h4,
-.h4,
-h5,
-.h5,
-h6,
-.h6 {
- margin-top: 10px;
- margin-bottom: 10px
-}
-
-h4 small,
-.h4 small,
-h5 small,
-.h5 small,
-h6 small,
-.h6 small,
-h4 .small,
-.h4 .small,
-h5 .small,
-.h5 .small,
-h6 .small,
-.h6 .small {
- font-size: 75%
-}
-
-h1,
-.h1 {
- font-size: 36px
-}
-
-h2,
-.h2 {
- font-size: 30px
-}
-
-h3,
-.h3 {
- font-size: 24px
-}
-
-h4,
-.h4 {
- font-size: 18px
-}
-
-h5,
-.h5 {
- font-size: 14px
-}
-
-h6,
-.h6 {
- font-size: 12px
-}
-
-p {
- margin: 0 0 10px
-}
-
-.lead {
- margin-bottom: 20px;
- font-size: 16px;
- font-weight: 300;
- line-height: 1.4
-}
-
-@media (min-width:768px) {
- .lead {
- font-size: 21px
- }
-}
-
-small,
-.small {
- font-size: 85%
-}
-
-mark,
-.mark {
- padding: .2em;
- background-color: #fcf8e3
-}
-
-.text-left {
- text-align: left
-}
-
-.text-right {
- text-align: right
-}
-
-.text-center {
- text-align: center
-}
-
-.text-justify {
- text-align: justify
-}
-
-.text-nowrap {
- white-space: nowrap
-}
-
-.text-lowercase {
- text-transform: lowercase
-}
-
-.text-uppercase {
- text-transform: uppercase
-}
-
-.text-capitalize {
- text-transform: capitalize
-}
-
-.text-muted {
- color: #777
-}
-
-.text-primary {
- color: #337ab7
-}
-
-a.text-primary:hover {
- color: #6f420a
-}
-
-.text-success {
- color: #3c763d
-}
-
-a.text-success:hover {
- color: #2b542c
-}
-
-.text-info {
- color: #31708f
-}
-
-a.text-info:hover {
- color: #245269
-}
-
-.text-warning {
- color: #8a6d3b
-}
-
-a.text-warning:hover {
- color: #66512c
-}
-
-.text-danger {
- color: #a94442
-}
-
-a.text-danger:hover {
- color: #843534
-}
-
-.bg-primary {
- color: #fff;
- background-color: #337ab7
-}
-
-a.bg-primary:hover {
- background-color: #6f420a
-}
-
-.bg-success {
- background-color: #dff0d8
-}
-
-a.bg-success:hover {
- background-color: #c1e2b3
-}
-
-.bg-info {
- background-color: #d9edf7
-}
-
-a.bg-info:hover {
- background-color: #afd9ee
-}
-
-.bg-warning {
- background-color: #fcf8e3
-}
-
-a.bg-warning:hover {
- background-color: #f7ecb5
-}
-
-.bg-danger {
- background-color: #f2dede
-}
-
-a.bg-danger:hover {
- background-color: #e4b9b9
-}
-
-.page-header {
- padding-bottom: 9px;
- margin: 40px 0 20px;
- border-bottom: 1px solid #eee
-}
-
-ul,
-ol {
- margin-top: 0;
- margin-bottom: 10px
-}
-
-ul ul,
-ol ul,
-ul ol,
-ol ol {
- margin-bottom: 0
-}
-
-.list-unstyled {
- padding-left: 0;
- list-style: none
-}
-
-.list-inline {
- padding-left: 0;
- margin-left: -5px;
- list-style: none
-}
-
-.list-inline>li {
- display: inline-block;
- padding-right: 5px;
- padding-left: 5px
-}
-
-dl {
- margin-top: 0;
- margin-bottom: 20px
-}
-
-dt,
-dd {
- line-height: 1.42857143
-}
-
-dt {
- font-weight: 700
-}
-
-dd {
- margin-left: 0
-}
-
-@media (min-width:768px) {
- .dl-horizontal dt {
- float: left;
- width: 160px;
- overflow: hidden;
- clear: left;
- text-align: right;
- text-overflow: ellipsis;
- white-space: nowrap
- }
-
- .dl-horizontal dd {
- margin-left: 180px
- }
-}
-
-abbr[title],
-abbr[data-original-title] {
- cursor: help;
- border-bottom: 1px dotted #777
-}
-
-.initialism {
- font-size: 90%;
- text-transform: uppercase
-}
-
-blockquote {
- padding: 10px 20px;
- margin: 0 0 20px;
- font-size: 17.5px;
- border-left: 5px solid #eee
-}
-
-blockquote p:last-child,
-blockquote ul:last-child,
-blockquote ol:last-child {
- margin-bottom: 0
-}
-
-blockquote footer,
-blockquote small,
-blockquote .small {
- display: block;
- font-size: 80%;
- line-height: 1.42857143;
- color: #777
-}
-
-blockquote footer:before,
-blockquote small:before,
-blockquote .small:before {
- content: '\2014 \00A0'
-}
-
-.blockquote-reverse,
-blockquote.pull-right {
- padding-right: 15px;
- padding-left: 0;
- text-align: right;
- border-right: 5px solid #eee;
- border-left: 0
-}
-
-.blockquote-reverse footer:before,
-blockquote.pull-right footer:before,
-.blockquote-reverse small:before,
-blockquote.pull-right small:before,
-.blockquote-reverse .small:before,
-blockquote.pull-right .small:before {
- content: ''
-}
-
-.blockquote-reverse footer:after,
-blockquote.pull-right footer:after,
-.blockquote-reverse small:after,
-blockquote.pull-right small:after,
-.blockquote-reverse .small:after,
-blockquote.pull-right .small:after {
- content: '\00A0 \2014'
-}
-
-address {
- margin-bottom: 20px;
- font-style: normal;
- line-height: 1.42857143
-}
-
-code,
-kbd,
-pre,
-samp {
- font-family: Menlo, Monaco, Consolas, "Courier New", monospace
-}
-
-code {
- padding: 2px 4px;
- font-size: 90%;
- color: #c7254e;
- background-color: #f9f2f4;
- border-radius: 4px
-}
-
-kbd {
- padding: 2px 4px;
- font-size: 90%;
- color: #fff;
- background-color: #333;
- border-radius: 3px;
- -webkit-box-shadow: inset 0 -1px 0 rgba(0, 0, 0, .25);
- box-shadow: inset 0 -1px 0 rgba(0, 0, 0, .25)
-}
-
-kbd kbd {
- padding: 0;
- font-size: 100%;
- font-weight: 700;
- -webkit-box-shadow: none;
- box-shadow: none
-}
-
-pre {
- display: block;
- padding: 9.5px;
- margin: 0 0 10px;
- font-size: 13px;
- line-height: 1.42857143;
- color: #333;
- word-break: break-all;
- word-wrap: break-word;
- background-color: #f5f5f5;
- border: 1px solid #ccc;
- border-radius: 4px
-}
-
-pre code {
- padding: 0;
- font-size: inherit;
- color: inherit;
- white-space: pre-wrap;
- background-color: transparent;
- border-radius: 0
-}
-
-.pre-scrollable {
- max-height: 340px;
- overflow-y: scroll
-}
-
-.container {
- padding-right: 15px;
- padding-left: 15px;
- margin-right: auto;
- margin-left: auto
-}
-
-@media (min-width:768px) {
- .container {
- width: 750px
- }
-}
-
-@media (min-width:992px) {
- .container {
- width: 970px
- }
-}
-
-@media (min-width:1200px) {
- .container {
- width: 1170px
- }
-}
-
-.container-fluid {
- padding-right: 15px;
- padding-left: 15px;
- margin-right: auto;
- margin-left: auto
-}
-
-.row {
- margin-right: -15px;
- margin-left: -15px
-}
-
-.col-xs-1,
-.col-sm-1,
-.col-md-1,
-.col-lg-1,
-.col-xs-2,
-.col-sm-2,
-.col-md-2,
-.col-lg-2,
-.col-xs-3,
-.col-sm-3,
-.col-md-3,
-.col-lg-3,
-.col-xs-4,
-.col-sm-4,
-.col-md-4,
-.col-lg-4,
-.col-xs-5,
-.col-sm-5,
-.col-md-5,
-.col-lg-5,
-.col-xs-6,
-.col-sm-6,
-.col-md-6,
-.col-lg-6,
-.col-xs-7,
-.col-sm-7,
-.col-md-7,
-.col-lg-7,
-.col-xs-8,
-.col-sm-8,
-.col-md-8,
-.col-lg-8,
-.col-xs-9,
-.col-sm-9,
-.col-md-9,
-.col-lg-9,
-.col-xs-10,
-.col-sm-10,
-.col-md-10,
-.col-lg-10,
-.col-xs-11,
-.col-sm-11,
-.col-md-11,
-.col-lg-11,
-.col-xs-12,
-.col-sm-12,
-.col-md-12,
-.col-lg-12 {
- position: relative;
- min-height: 1px;
- padding-right: 15px;
- padding-left: 15px
-}
-
-.col-xs-1,
-.col-xs-2,
-.col-xs-3,
-.col-xs-4,
-.col-xs-5,
-.col-xs-6,
-.col-xs-7,
-.col-xs-8,
-.col-xs-9,
-.col-xs-10,
-.col-xs-11,
-.col-xs-12 {
- float: left
-}
-
-.col-xs-12 {
- width: 100%
-}
-
-.col-xs-11 {
- width: 91.66666667%
-}
-
-.col-xs-10 {
- width: 83.33333333%
-}
-
-.col-xs-9 {
- width: 75%
-}
-
-.col-xs-8 {
- width: 66.66666667%
-}
-
-.col-xs-7 {
- width: 58.33333333%
-}
-
-.col-xs-6 {
- width: 50%
-}
-
-.col-xs-5 {
- width: 41.66666667%
-}
-
-.col-xs-4 {
- width: 33.33333333%
-}
-
-.col-xs-3 {
- width: 25%
-}
-
-.col-xs-2 {
- width: 16.66666667%
-}
-
-.col-xs-1 {
- width: 8.33333333%
-}
-
-.col-xs-pull-12 {
- right: 100%
-}
-
-.col-xs-pull-11 {
- right: 91.66666667%
-}
-
-.col-xs-pull-10 {
- right: 83.33333333%
-}
-
-.col-xs-pull-9 {
- right: 75%
-}
-
-.col-xs-pull-8 {
- right: 66.66666667%
-}
-
-.col-xs-pull-7 {
- right: 58.33333333%
-}
-
-.col-xs-pull-6 {
- right: 50%
-}
-
-.col-xs-pull-5 {
- right: 41.66666667%
-}
-
-.col-xs-pull-4 {
- right: 33.33333333%
-}
-
-.col-xs-pull-3 {
- right: 25%
-}
-
-.col-xs-pull-2 {
- right: 16.66666667%
-}
-
-.col-xs-pull-1 {
- right: 8.33333333%
-}
-
-.col-xs-pull-0 {
- right: auto
-}
-
-.col-xs-push-12 {
- left: 100%
-}
-
-.col-xs-push-11 {
- left: 91.66666667%
-}
-
-.col-xs-push-10 {
- left: 83.33333333%
-}
-
-.col-xs-push-9 {
- left: 75%
-}
-
-.col-xs-push-8 {
- left: 66.66666667%
-}
-
-.col-xs-push-7 {
- left: 58.33333333%
-}
-
-.col-xs-push-6 {
- left: 50%
-}
-
-.col-xs-push-5 {
- left: 41.66666667%
-}
-
-.col-xs-push-4 {
- left: 33.33333333%
-}
-
-.col-xs-push-3 {
- left: 25%
-}
-
-.col-xs-push-2 {
- left: 16.66666667%
-}
-
-.col-xs-push-1 {
- left: 8.33333333%
-}
-
-.col-xs-push-0 {
- left: auto
-}
-
-.col-xs-offset-12 {
- margin-left: 100%
-}
-
-.col-xs-offset-11 {
- margin-left: 91.66666667%
-}
-
-.col-xs-offset-10 {
- margin-left: 83.33333333%
-}
-
-.col-xs-offset-9 {
- margin-left: 75%
-}
-
-.col-xs-offset-8 {
- margin-left: 66.66666667%
-}
-
-.col-xs-offset-7 {
- margin-left: 58.33333333%
-}
-
-.col-xs-offset-6 {
- margin-left: 50%
-}
-
-.col-xs-offset-5 {
- margin-left: 41.66666667%
-}
-
-.col-xs-offset-4 {
- margin-left: 33.33333333%
-}
-
-.col-xs-offset-3 {
- margin-left: 25%
-}
-
-.col-xs-offset-2 {
- margin-left: 16.66666667%
-}
-
-.col-xs-offset-1 {
- margin-left: 8.33333333%
-}
-
-.col-xs-offset-0 {
- margin-left: 0
-}
-
-@media (min-width:768px) {
-
- .col-sm-1,
- .col-sm-2,
- .col-sm-3,
- .col-sm-4,
- .col-sm-5,
- .col-sm-6,
- .col-sm-7,
- .col-sm-8,
- .col-sm-9,
- .col-sm-10,
- .col-sm-11,
- .col-sm-12 {
- float: left
- }
-
- .col-sm-12 {
- width: 100%
- }
-
- .col-sm-11 {
- width: 91.66666667%
- }
-
- .col-sm-10 {
- width: 83.33333333%
- }
-
- .col-sm-9 {
- width: 75%
- }
-
- .col-sm-8 {
- width: 66.66666667%
- }
-
- .col-sm-7 {
- width: 58.33333333%
- }
-
- .col-sm-6 {
- width: 50%
- }
-
- .col-sm-5 {
- width: 41.66666667%
- }
-
- .col-sm-4 {
- width: 33.33333333%
- }
-
- .col-sm-3 {
- width: 25%
- }
-
- .col-sm-2 {
- width: 16.66666667%
- }
-
- .col-sm-1 {
- width: 8.33333333%
- }
-
- .col-sm-pull-12 {
- right: 100%
- }
-
- .col-sm-pull-11 {
- right: 91.66666667%
- }
-
- .col-sm-pull-10 {
- right: 83.33333333%
- }
-
- .col-sm-pull-9 {
- right: 75%
- }
-
- .col-sm-pull-8 {
- right: 66.66666667%
- }
-
- .col-sm-pull-7 {
- right: 58.33333333%
- }
-
- .col-sm-pull-6 {
- right: 50%
- }
-
- .col-sm-pull-5 {
- right: 41.66666667%
- }
-
- .col-sm-pull-4 {
- right: 33.33333333%
- }
-
- .col-sm-pull-3 {
- right: 25%
- }
-
- .col-sm-pull-2 {
- right: 16.66666667%
- }
-
- .col-sm-pull-1 {
- right: 8.33333333%
- }
-
- .col-sm-pull-0 {
- right: auto
- }
-
- .col-sm-push-12 {
- left: 100%
- }
-
- .col-sm-push-11 {
- left: 91.66666667%
- }
-
- .col-sm-push-10 {
- left: 83.33333333%
- }
-
- .col-sm-push-9 {
- left: 75%
- }
-
- .col-sm-push-8 {
- left: 66.66666667%
- }
-
- .col-sm-push-7 {
- left: 58.33333333%
- }
-
- .col-sm-push-6 {
- left: 50%
- }
-
- .col-sm-push-5 {
- left: 41.66666667%
- }
-
- .col-sm-push-4 {
- left: 33.33333333%
- }
-
- .col-sm-push-3 {
- left: 25%
- }
-
- .col-sm-push-2 {
- left: 16.66666667%
- }
-
- .col-sm-push-1 {
- left: 8.33333333%
- }
-
- .col-sm-push-0 {
- left: auto
- }
-
- .col-sm-offset-12 {
- margin-left: 100%
- }
-
- .col-sm-offset-11 {
- margin-left: 91.66666667%
- }
-
- .col-sm-offset-10 {
- margin-left: 83.33333333%
- }
-
- .col-sm-offset-9 {
- margin-left: 75%
- }
-
- .col-sm-offset-8 {
- margin-left: 66.66666667%
- }
-
- .col-sm-offset-7 {
- margin-left: 58.33333333%
- }
-
- .col-sm-offset-6 {
- margin-left: 50%
- }
-
- .col-sm-offset-5 {
- margin-left: 41.66666667%
- }
-
- .col-sm-offset-4 {
- margin-left: 33.33333333%
- }
-
- .col-sm-offset-3 {
- margin-left: 25%
- }
-
- .col-sm-offset-2 {
- margin-left: 16.66666667%
- }
-
- .col-sm-offset-1 {
- margin-left: 8.33333333%
- }
-
- .col-sm-offset-0 {
- margin-left: 0
- }
-}
-
-@media (min-width:992px) {
-
- .col-md-1,
- .col-md-2,
- .col-md-3,
- .col-md-4,
- .col-md-5,
- .col-md-6,
- .col-md-7,
- .col-md-8,
- .col-md-9,
- .col-md-10,
- .col-md-11,
- .col-md-12 {
- float: left
- }
-
- .col-md-12 {
- width: 100%
- }
-
- .col-md-11 {
- width: 91.66666667%
- }
-
- .col-md-10 {
- width: 83.33333333%
- }
-
- .col-md-9 {
- width: 75%
- }
-
- .col-md-8 {
- width: 66.66666667%
- }
-
- .col-md-7 {
- width: 58.33333333%
- }
-
- .col-md-6 {
- width: 50%
- }
-
- .col-md-5 {
- width: 41.66666667%
- }
-
- .col-md-4 {
- width: 33.33333333%
- }
-
- .col-md-3 {
- width: 25%
- }
-
- .col-md-2 {
- width: 16.66666667%
- }
-
- .col-md-1 {
- width: 8.33333333%
- }
-
- .col-md-pull-12 {
- right: 100%
- }
-
- .col-md-pull-11 {
- right: 91.66666667%
- }
-
- .col-md-pull-10 {
- right: 83.33333333%
- }
-
- .col-md-pull-9 {
- right: 75%
- }
-
- .col-md-pull-8 {
- right: 66.66666667%
- }
-
- .col-md-pull-7 {
- right: 58.33333333%
- }
-
- .col-md-pull-6 {
- right: 50%
- }
-
- .col-md-pull-5 {
- right: 41.66666667%
- }
-
- .col-md-pull-4 {
- right: 33.33333333%
- }
-
- .col-md-pull-3 {
- right: 25%
- }
-
- .col-md-pull-2 {
- right: 16.66666667%
- }
-
- .col-md-pull-1 {
- right: 8.33333333%
- }
-
- .col-md-pull-0 {
- right: auto
- }
-
- .col-md-push-12 {
- left: 100%
- }
-
- .col-md-push-11 {
- left: 91.66666667%
- }
-
- .col-md-push-10 {
- left: 83.33333333%
- }
-
- .col-md-push-9 {
- left: 75%
- }
-
- .col-md-push-8 {
- left: 66.66666667%
- }
-
- .col-md-push-7 {
- left: 58.33333333%
- }
-
- .col-md-push-6 {
- left: 50%
- }
-
- .col-md-push-5 {
- left: 41.66666667%
- }
-
- .col-md-push-4 {
- left: 33.33333333%
- }
-
- .col-md-push-3 {
- left: 25%
- }
-
- .col-md-push-2 {
- left: 16.66666667%
- }
-
- .col-md-push-1 {
- left: 8.33333333%
- }
-
- .col-md-push-0 {
- left: auto
- }
-
- .col-md-offset-12 {
- margin-left: 100%
- }
-
- .col-md-offset-11 {
- margin-left: 91.66666667%
- }
-
- .col-md-offset-10 {
- margin-left: 83.33333333%
- }
-
- .col-md-offset-9 {
- margin-left: 75%
- }
-
- .col-md-offset-8 {
- margin-left: 66.66666667%
- }
-
- .col-md-offset-7 {
- margin-left: 58.33333333%
- }
-
- .col-md-offset-6 {
- margin-left: 50%
- }
-
- .col-md-offset-5 {
- margin-left: 41.66666667%
- }
-
- .col-md-offset-4 {
- margin-left: 33.33333333%
- }
-
- .col-md-offset-3 {
- margin-left: 25%
- }
-
- .col-md-offset-2 {
- margin-left: 16.66666667%
- }
-
- .col-md-offset-1 {
- margin-left: 8.33333333%
- }
-
- .col-md-offset-0 {
- margin-left: 0
- }
-}
-
-@media (min-width:1200px) {
-
- .col-lg-1,
- .col-lg-2,
- .col-lg-3,
- .col-lg-4,
- .col-lg-5,
- .col-lg-6,
- .col-lg-7,
- .col-lg-8,
- .col-lg-9,
- .col-lg-10,
- .col-lg-11,
- .col-lg-12 {
- float: left
- }
-
- .col-lg-12 {
- width: 100%
- }
-
- .col-lg-11 {
- width: 91.66666667%
- }
-
- .col-lg-10 {
- width: 83.33333333%
- }
-
- .col-lg-9 {
- width: 75%
- }
-
- .col-lg-8 {
- width: 66.66666667%
- }
-
- .col-lg-7 {
- width: 58.33333333%
- }
-
- .col-lg-6 {
- width: 50%
- }
-
- .col-lg-5 {
- width: 41.66666667%
- }
-
- .col-lg-4 {
- width: 33.33333333%
- }
-
- .col-lg-3 {
- width: 25%
- }
-
- .col-lg-2 {
- width: 16.66666667%
- }
-
- .col-lg-1 {
- width: 8.33333333%
- }
-
- .col-lg-pull-12 {
- right: 100%
- }
-
- .col-lg-pull-11 {
- right: 91.66666667%
- }
-
- .col-lg-pull-10 {
- right: 83.33333333%
- }
-
- .col-lg-pull-9 {
- right: 75%
- }
-
- .col-lg-pull-8 {
- right: 66.66666667%
- }
-
- .col-lg-pull-7 {
- right: 58.33333333%
- }
-
- .col-lg-pull-6 {
- right: 50%
- }
-
- .col-lg-pull-5 {
- right: 41.66666667%
- }
-
- .col-lg-pull-4 {
- right: 33.33333333%
- }
-
- .col-lg-pull-3 {
- right: 25%
- }
-
- .col-lg-pull-2 {
- right: 16.66666667%
- }
-
- .col-lg-pull-1 {
- right: 8.33333333%
- }
-
- .col-lg-pull-0 {
- right: auto
- }
-
- .col-lg-push-12 {
- left: 100%
- }
-
- .col-lg-push-11 {
- left: 91.66666667%
- }
-
- .col-lg-push-10 {
- left: 83.33333333%
- }
-
- .col-lg-push-9 {
- left: 75%
- }
-
- .col-lg-push-8 {
- left: 66.66666667%
- }
-
- .col-lg-push-7 {
- left: 58.33333333%
- }
-
- .col-lg-push-6 {
- left: 50%
- }
-
- .col-lg-push-5 {
- left: 41.66666667%
- }
-
- .col-lg-push-4 {
- left: 33.33333333%
- }
-
- .col-lg-push-3 {
- left: 25%
- }
-
- .col-lg-push-2 {
- left: 16.66666667%
- }
-
- .col-lg-push-1 {
- left: 8.33333333%
- }
-
- .col-lg-push-0 {
- left: auto
- }
-
- .col-lg-offset-12 {
- margin-left: 100%
- }
-
- .col-lg-offset-11 {
- margin-left: 91.66666667%
- }
-
- .col-lg-offset-10 {
- margin-left: 83.33333333%
- }
-
- .col-lg-offset-9 {
- margin-left: 75%
- }
-
- .col-lg-offset-8 {
- margin-left: 66.66666667%
- }
-
- .col-lg-offset-7 {
- margin-left: 58.33333333%
- }
-
- .col-lg-offset-6 {
- margin-left: 50%
- }
-
- .col-lg-offset-5 {
- margin-left: 41.66666667%
- }
-
- .col-lg-offset-4 {
- margin-left: 33.33333333%
- }
-
- .col-lg-offset-3 {
- margin-left: 25%
- }
-
- .col-lg-offset-2 {
- margin-left: 16.66666667%
- }
-
- .col-lg-offset-1 {
- margin-left: 8.33333333%
- }
-
- .col-lg-offset-0 {
- margin-left: 0
- }
-}
-
-table {
- background-color: transparent
-}
-
-caption {
- padding-top: 8px;
- padding-bottom: 8px;
- color: #777;
- text-align: left
-}
-
-th {
- text-align: left
-}
-
-.table {
- width: 100%;
- max-width: 100%;
- margin-bottom: 20px
-}
-
-.table>thead>tr>th,
-.table>tbody>tr>th,
-.table>tfoot>tr>th,
-.table>thead>tr>td,
-.table>tbody>tr>td,
-.table>tfoot>tr>td {
- padding: 8px;
- line-height: 1.42857143;
- vertical-align: top;
- border-top: 1px solid #ddd
-}
-
-.table>thead>tr>th {
- vertical-align: bottom;
- border-bottom: 2px solid #ddd
-}
-
-.table>caption+thead>tr:first-child>th,
-.table>colgroup+thead>tr:first-child>th,
-.table>thead:first-child>tr:first-child>th,
-.table>caption+thead>tr:first-child>td,
-.table>colgroup+thead>tr:first-child>td,
-.table>thead:first-child>tr:first-child>td {
- border-top: 0
-}
-
-.table>tbody+tbody {
- border-top: 2px solid #ddd
-}
-
-.table .table {
- background-color: #fff
-}
-
-.table-condensed>thead>tr>th,
-.table-condensed>tbody>tr>th,
-.table-condensed>tfoot>tr>th,
-.table-condensed>thead>tr>td,
-.table-condensed>tbody>tr>td,
-.table-condensed>tfoot>tr>td {
- padding: 5px
-}
-
-.table-bordered {
- border: 1px solid #ddd
-}
-
-.table-bordered>thead>tr>th,
-.table-bordered>tbody>tr>th,
-.table-bordered>tfoot>tr>th,
-.table-bordered>thead>tr>td,
-.table-bordered>tbody>tr>td,
-.table-bordered>tfoot>tr>td {
- border: 1px solid #ddd
-}
-
-.table-bordered>thead>tr>th,
-.table-bordered>thead>tr>td {
- border-bottom-width: 2px
-}
-
-.table-striped>tbody>tr:nth-child(odd) {
- background-color: #f9f9f9
-}
-
-.table-hover>tbody>tr:hover {
- background-color: #f5f5f5
-}
-
-table col[class*=col-] {
- position: static;
- display: table-column;
- float: none
-}
-
-table td[class*=col-],
-table th[class*=col-] {
- position: static;
- display: table-cell;
- float: none
-}
-
-.table>thead>tr>td.active,
-.table>tbody>tr>td.active,
-.table>tfoot>tr>td.active,
-.table>thead>tr>th.active,
-.table>tbody>tr>th.active,
-.table>tfoot>tr>th.active,
-.table>thead>tr.active>td,
-.table>tbody>tr.active>td,
-.table>tfoot>tr.active>td,
-.table>thead>tr.active>th,
-.table>tbody>tr.active>th,
-.table>tfoot>tr.active>th {
- background-color: #f5f5f5
-}
-
-.table-hover>tbody>tr>td.active:hover,
-.table-hover>tbody>tr>th.active:hover,
-.table-hover>tbody>tr.active:hover>td,
-.table-hover>tbody>tr:hover>.active,
-.table-hover>tbody>tr.active:hover>th {
- background-color: #e8e8e8
-}
-
-.table>thead>tr>td.success,
-.table>tbody>tr>td.success,
-.table>tfoot>tr>td.success,
-.table>thead>tr>th.success,
-.table>tbody>tr>th.success,
-.table>tfoot>tr>th.success,
-.table>thead>tr.success>td,
-.table>tbody>tr.success>td,
-.table>tfoot>tr.success>td,
-.table>thead>tr.success>th,
-.table>tbody>tr.success>th,
-.table>tfoot>tr.success>th {
- background-color: #dff0d8
-}
-
-.table-hover>tbody>tr>td.success:hover,
-.table-hover>tbody>tr>th.success:hover,
-.table-hover>tbody>tr.success:hover>td,
-.table-hover>tbody>tr:hover>.success,
-.table-hover>tbody>tr.success:hover>th {
- background-color: #d0e9c6
-}
-
-.table>thead>tr>td.info,
-.table>tbody>tr>td.info,
-.table>tfoot>tr>td.info,
-.table>thead>tr>th.info,
-.table>tbody>tr>th.info,
-.table>tfoot>tr>th.info,
-.table>thead>tr.info>td,
-.table>tbody>tr.info>td,
-.table>tfoot>tr.info>td,
-.table>thead>tr.info>th,
-.table>tbody>tr.info>th,
-.table>tfoot>tr.info>th {
- background-color: #d9edf7
-}
-
-.table-hover>tbody>tr>td.info:hover,
-.table-hover>tbody>tr>th.info:hover,
-.table-hover>tbody>tr.info:hover>td,
-.table-hover>tbody>tr:hover>.info,
-.table-hover>tbody>tr.info:hover>th {
- background-color: #c4e3f3
-}
-
-.table>thead>tr>td.warning,
-.table>tbody>tr>td.warning,
-.table>tfoot>tr>td.warning,
-.table>thead>tr>th.warning,
-.table>tbody>tr>th.warning,
-.table>tfoot>tr>th.warning,
-.table>thead>tr.warning>td,
-.table>tbody>tr.warning>td,
-.table>tfoot>tr.warning>td,
-.table>thead>tr.warning>th,
-.table>tbody>tr.warning>th,
-.table>tfoot>tr.warning>th {
- background-color: #fcf8e3
-}
-
-.table-hover>tbody>tr>td.warning:hover,
-.table-hover>tbody>tr>th.warning:hover,
-.table-hover>tbody>tr.warning:hover>td,
-.table-hover>tbody>tr:hover>.warning,
-.table-hover>tbody>tr.warning:hover>th {
- background-color: #faf2cc
-}
-
-.table>thead>tr>td.danger,
-.table>tbody>tr>td.danger,
-.table>tfoot>tr>td.danger,
-.table>thead>tr>th.danger,
-.table>tbody>tr>th.danger,
-.table>tfoot>tr>th.danger,
-.table>thead>tr.danger>td,
-.table>tbody>tr.danger>td,
-.table>tfoot>tr.danger>td,
-.table>thead>tr.danger>th,
-.table>tbody>tr.danger>th,
-.table>tfoot>tr.danger>th {
- background-color: #f2dede
-}
-
-.table-hover>tbody>tr>td.danger:hover,
-.table-hover>tbody>tr>th.danger:hover,
-.table-hover>tbody>tr.danger:hover>td,
-.table-hover>tbody>tr:hover>.danger,
-.table-hover>tbody>tr.danger:hover>th {
- background-color: #ebcccc
-}
-
-.table-responsive {
- min-height: .01%;
- overflow-x: auto
-}
-
-@media screen and (max-width:767px) {
- .table-responsive {
- width: 100%;
- margin-bottom: 15px;
- overflow-y: hidden;
- -ms-overflow-style: -ms-autohiding-scrollbar;
- border: 1px solid #ddd
- }
-
- .table-responsive>.table {
- margin-bottom: 0
- }
-
- .table-responsive>.table>thead>tr>th,
- .table-responsive>.table>tbody>tr>th,
- .table-responsive>.table>tfoot>tr>th,
- .table-responsive>.table>thead>tr>td,
- .table-responsive>.table>tbody>tr>td,
- .table-responsive>.table>tfoot>tr>td {
- white-space: nowrap
- }
-
- .table-responsive>.table-bordered {
- border: 0
- }
-
- .table-responsive>.table-bordered>thead>tr>th:first-child,
- .table-responsive>.table-bordered>tbody>tr>th:first-child,
- .table-responsive>.table-bordered>tfoot>tr>th:first-child,
- .table-responsive>.table-bordered>thead>tr>td:first-child,
- .table-responsive>.table-bordered>tbody>tr>td:first-child,
- .table-responsive>.table-bordered>tfoot>tr>td:first-child {
- border-left: 0
- }
-
- .table-responsive>.table-bordered>thead>tr>th:last-child,
- .table-responsive>.table-bordered>tbody>tr>th:last-child,
- .table-responsive>.table-bordered>tfoot>tr>th:last-child,
- .table-responsive>.table-bordered>thead>tr>td:last-child,
- .table-responsive>.table-bordered>tbody>tr>td:last-child,
- .table-responsive>.table-bordered>tfoot>tr>td:last-child {
- border-right: 0
- }
-
- .table-responsive>.table-bordered>tbody>tr:last-child>th,
- .table-responsive>.table-bordered>tfoot>tr:last-child>th,
- .table-responsive>.table-bordered>tbody>tr:last-child>td,
- .table-responsive>.table-bordered>tfoot>tr:last-child>td {
- border-bottom: 0
- }
-}
-
-fieldset {
- min-width: 0;
- padding: 0;
- margin: 0;
- border: 0
-}
-
-legend {
- display: block;
- width: 100%;
- padding: 0;
- margin-bottom: 20px;
- font-size: 21px;
- line-height: inherit;
- color: #333;
- border: 0;
- border-bottom: 1px solid #e5e5e5
-}
-
-label {
- display: inline-block;
- max-width: 100%;
- margin-bottom: 5px;
- font-weight: 700
-}
-
-input[type=search] {
- -webkit-box-sizing: border-box;
- -moz-box-sizing: border-box;
- box-sizing: border-box
-}
-
-input[type=radio],
-input[type=checkbox] {
- margin: 4px 0 0;
- margin-top: 1px \9;
- line-height: normal
-}
-
-input[type=file] {
- display: block
-}
-
-input[type=range] {
- display: block;
- width: 100%
-}
-
-select[multiple],
-select[size] {
- height: auto
-}
-
-input[type=file]:focus,
-input[type=radio]:focus,
-input[type=checkbox]:focus {
- outline: thin dotted;
- outline: 5px auto -webkit-focus-ring-color;
- outline-offset: -2px
-}
-
-output {
- display: block;
- padding-top: 7px;
- font-size: 14px;
- line-height: 1.42857143;
- color: #555
-}
-
-.form-control {
- display: block;
- width: 100%;
- height: 34px;
- padding: 6px 12px;
- font-size: 14px;
- line-height: 1.42857143;
- color: #555;
- background-color: #fff;
- background-image: none;
- border: 1px solid #ccc;
- border-radius: 4px;
- -webkit-box-shadow: inset 0 1px 1px rgba(0, 0, 0, .075);
- box-shadow: inset 0 1px 1px rgba(0, 0, 0, .075);
- -webkit-transition: border-color ease-in-out .15s, -webkit-box-shadow ease-in-out .15s;
- -o-transition: border-color ease-in-out .15s, box-shadow ease-in-out .15s;
- transition: border-color ease-in-out .15s, box-shadow ease-in-out .15s
-}
-
-.form-control:focus {
- border-color: #66afe9;
- outline: 0;
- -webkit-box-shadow: inset 0 1px 1px rgba(0, 0, 0, .075), 0 0 8px rgba(102, 175, 233, .6);
- box-shadow: inset 0 1px 1px rgba(0, 0, 0, .075), 0 0 8px rgba(102, 175, 233, .6)
-}
-
-.form-control::-moz-placeholder {
- color: #999;
- opacity: 1
-}
-
-.form-control:-ms-input-placeholder {
- color: #999
-}
-
-.form-control::-webkit-input-placeholder {
- color: #999
-}
-
-.form-control[disabled],
-.form-control[readonly],
-fieldset[disabled] .form-control {
- cursor: not-allowed;
- background-color: #eee;
- opacity: 1
-}
-
-textarea.form-control {
- height: auto
-}
-
-input[type=search] {
- -webkit-appearance: none
-}
-
-@media screen and (-webkit-min-device-pixel-ratio:0) {
-
- input[type=date],
- input[type=time],
- input[type=datetime-local],
- input[type=month] {
- line-height: 34px
- }
-
- input[type=date].input-sm,
- input[type=time].input-sm,
- input[type=datetime-local].input-sm,
- input[type=month].input-sm {
- line-height: 30px
- }
-
- input[type=date].input-lg,
- input[type=time].input-lg,
- input[type=datetime-local].input-lg,
- input[type=month].input-lg {
- line-height: 46px
- }
-}
-
-.form-group {
- margin-bottom: 15px
-}
-
-.radio,
-.checkbox {
- position: relative;
- display: block;
- margin-top: 10px;
- margin-bottom: 10px
-}
-
-.radio label,
-.checkbox label {
- min-height: 20px;
- padding-left: 20px;
- margin-bottom: 0;
- font-weight: 400;
- cursor: pointer
-}
-
-.radio input[type=radio],
-.radio-inline input[type=radio],
-.checkbox input[type=checkbox],
-.checkbox-inline input[type=checkbox] {
- position: absolute;
- margin-top: 4px \9;
- margin-left: -20px
-}
-
-.radio+.radio,
-.checkbox+.checkbox {
- margin-top: -5px
-}
-
-.radio-inline,
-.checkbox-inline {
- display: inline-block;
- padding-left: 20px;
- margin-bottom: 0;
- font-weight: 400;
- vertical-align: middle;
- cursor: pointer
-}
-
-.radio-inline+.radio-inline,
-.checkbox-inline+.checkbox-inline {
- margin-top: 0;
- margin-left: 10px
-}
-
-input[type=radio][disabled],
-input[type=checkbox][disabled],
-input[type=radio].disabled,
-input[type=checkbox].disabled,
-fieldset[disabled] input[type=radio],
-fieldset[disabled] input[type=checkbox] {
- cursor: not-allowed
-}
-
-.radio-inline.disabled,
-.checkbox-inline.disabled,
-fieldset[disabled] .radio-inline,
-fieldset[disabled] .checkbox-inline {
- cursor: not-allowed
-}
-
-.radio.disabled label,
-.checkbox.disabled label,
-fieldset[disabled] .radio label,
-fieldset[disabled] .checkbox label {
- cursor: not-allowed
-}
-
-.form-control-static {
- padding-top: 7px;
- padding-bottom: 7px;
- margin-bottom: 0
-}
-
-.form-control-static.input-lg,
-.form-control-static.input-sm {
- padding-right: 0;
- padding-left: 0
-}
-
-.input-sm,
-.form-group-sm .form-control {
- height: 30px;
- padding: 5px 10px;
- font-size: 12px;
- line-height: 1.5;
- border-radius: 3px
-}
-
-select.input-sm,
-select.form-group-sm .form-control {
- height: 30px;
- line-height: 30px
-}
-
-textarea.input-sm,
-textarea.form-group-sm .form-control,
-select[multiple].input-sm,
-select[multiple].form-group-sm .form-control {
- height: auto
-}
-
-.input-lg,
-.form-group-lg .form-control {
- height: 46px;
- padding: 10px 16px;
- font-size: 18px;
- line-height: 1.33;
- border-radius: 6px
-}
-
-select.input-lg,
-select.form-group-lg .form-control {
- height: 46px;
- line-height: 46px
-}
-
-textarea.input-lg,
-textarea.form-group-lg .form-control,
-select[multiple].input-lg,
-select[multiple].form-group-lg .form-control {
- height: auto
-}
-
-.has-feedback {
- position: relative
-}
-
-.has-feedback .form-control {
- padding-right: 42.5px
-}
-
-.form-control-feedback {
- position: absolute;
- top: 0;
- right: 0;
- z-index: 2;
- display: block;
- width: 34px;
- height: 34px;
- line-height: 34px;
- text-align: center;
- pointer-events: none
-}
-
-.input-lg+.form-control-feedback {
- width: 46px;
- height: 46px;
- line-height: 46px
-}
-
-.input-sm+.form-control-feedback {
- width: 30px;
- height: 30px;
- line-height: 30px
-}
-
-.has-success .help-block,
-.has-success .control-label,
-.has-success .radio,
-.has-success .checkbox,
-.has-success .radio-inline,
-.has-success .checkbox-inline,
-.has-success.radio label,
-.has-success.checkbox label,
-.has-success.radio-inline label,
-.has-success.checkbox-inline label {
- color: #3c763d
-}
-
-.has-success .form-control {
- border-color: #3c763d;
- -webkit-box-shadow: inset 0 1px 1px rgba(0, 0, 0, .075);
- box-shadow: inset 0 1px 1px rgba(0, 0, 0, .075)
-}
-
-.has-success .form-control:focus {
- border-color: #2b542c;
- -webkit-box-shadow: inset 0 1px 1px rgba(0, 0, 0, .075), 0 0 6px #67b168;
- box-shadow: inset 0 1px 1px rgba(0, 0, 0, .075), 0 0 6px #67b168
-}
-
-.has-success .input-group-addon {
- color: #3c763d;
- background-color: #dff0d8;
- border-color: #3c763d
-}
-
-.has-success .form-control-feedback {
- color: #3c763d
-}
-
-.has-warning .help-block,
-.has-warning .control-label,
-.has-warning .radio,
-.has-warning .checkbox,
-.has-warning .radio-inline,
-.has-warning .checkbox-inline,
-.has-warning.radio label,
-.has-warning.checkbox label,
-.has-warning.radio-inline label,
-.has-warning.checkbox-inline label {
- color: #8a6d3b
-}
-
-.has-warning .form-control {
- border-color: #8a6d3b;
- -webkit-box-shadow: inset 0 1px 1px rgba(0, 0, 0, .075);
- box-shadow: inset 0 1px 1px rgba(0, 0, 0, .075)
-}
-
-.has-warning .form-control:focus {
- border-color: #66512c;
- -webkit-box-shadow: inset 0 1px 1px rgba(0, 0, 0, .075), 0 0 6px #c0a16b;
- box-shadow: inset 0 1px 1px rgba(0, 0, 0, .075), 0 0 6px #c0a16b
-}
-
-.has-warning .input-group-addon {
- color: #8a6d3b;
- background-color: #fcf8e3;
- border-color: #8a6d3b
-}
-
-.has-warning .form-control-feedback {
- color: #8a6d3b
-}
-
-.has-error .help-block,
-.has-error .control-label,
-.has-error .radio,
-.has-error .checkbox,
-.has-error .radio-inline,
-.has-error .checkbox-inline,
-.has-error.radio label,
-.has-error.checkbox label,
-.has-error.radio-inline label,
-.has-error.checkbox-inline label {
- color: #a94442
-}
-
-.has-error .form-control {
- border-color: #a94442;
- -webkit-box-shadow: inset 0 1px 1px rgba(0, 0, 0, .075);
- box-shadow: inset 0 1px 1px rgba(0, 0, 0, .075)
-}
-
-.has-error .form-control:focus {
- border-color: #843534;
- -webkit-box-shadow: inset 0 1px 1px rgba(0, 0, 0, .075), 0 0 6px #ce8483;
- box-shadow: inset 0 1px 1px rgba(0, 0, 0, .075), 0 0 6px #ce8483
-}
-
-.has-error .input-group-addon {
- color: #a94442;
- background-color: #f2dede;
- border-color: #a94442
-}
-
-.has-error .form-control-feedback {
- color: #a94442
-}
-
-.has-feedback label~.form-control-feedback {
- top: 25px
-}
-
-.has-feedback label.sr-only~.form-control-feedback {
- top: 0
-}
-
-.help-block {
- display: block;
- margin-top: 5px;
- margin-bottom: 10px;
- color: #737373
-}
-
-@media (min-width:768px) {
- .form-inline .form-group {
- display: inline-block;
- margin-bottom: 0;
- vertical-align: middle
- }
-
- .form-inline .form-control {
- display: inline-block;
- width: auto;
- vertical-align: middle
- }
-
- .form-inline .form-control-static {
- display: inline-block
- }
-
- .form-inline .input-group {
- display: inline-table;
- vertical-align: middle
- }
-
- .form-inline .input-group .input-group-addon,
- .form-inline .input-group .input-group-btn,
- .form-inline .input-group .form-control {
- width: auto
- }
-
- .form-inline .input-group>.form-control {
- width: 100%
- }
-
- .form-inline .control-label {
- margin-bottom: 0;
- vertical-align: middle
- }
-
- .form-inline .radio,
- .form-inline .checkbox {
- display: inline-block;
- margin-top: 0;
- margin-bottom: 0;
- vertical-align: middle
- }
-
- .form-inline .radio label,
- .form-inline .checkbox label {
- padding-left: 0
- }
-
- .form-inline .radio input[type=radio],
- .form-inline .checkbox input[type=checkbox] {
- position: relative;
- margin-left: 0
- }
-
- .form-inline .has-feedback .form-control-feedback {
- top: 0
- }
-}
-
-.form-horizontal .radio,
-.form-horizontal .checkbox,
-.form-horizontal .radio-inline,
-.form-horizontal .checkbox-inline {
- padding-top: 7px;
- margin-top: 0;
- margin-bottom: 0
-}
-
-.form-horizontal .radio,
-.form-horizontal .checkbox {
- min-height: 27px
-}
-
-.form-horizontal .form-group {
- margin-right: -15px;
- margin-left: -15px
-}
-
-@media (min-width:768px) {
- .form-horizontal .control-label {
- padding-top: 7px;
- margin-bottom: 0;
- text-align: right
- }
-}
-
-.form-horizontal .has-feedback .form-control-feedback {
- right: 15px
-}
-
-@media (min-width:768px) {
- .form-horizontal .form-group-lg .control-label {
- padding-top: 14.3px
- }
-}
-
-@media (min-width:768px) {
- .form-horizontal .form-group-sm .control-label {
- padding-top: 6px
- }
-}
-
-.btn {
- display: inline-block;
- padding: 6px 12px;
- margin-bottom: 0;
- font-size: 14px;
- font-weight: 400;
- line-height: 1.42857143;
- text-align: center;
- white-space: nowrap;
- vertical-align: middle;
- -ms-touch-action: manipulation;
- touch-action: manipulation;
- cursor: pointer;
- -webkit-user-select: none;
- -moz-user-select: none;
- -ms-user-select: none;
- user-select: none;
- background-image: none;
- border: 1px solid transparent;
- border-radius: 4px
-}
-
-.btn:focus,
-.btn:active:focus,
-.btn.active:focus,
-.btn.focus,
-.btn:active.focus,
-.btn.active.focus {
- outline: thin dotted;
- outline: 5px auto -webkit-focus-ring-color;
- outline-offset: -2px
-}
-
-.btn:hover,
-.btn:focus,
-.btn.focus {
- color: #333;
- text-decoration: none
-}
-
-.btn:active,
-.btn.active {
- background-image: none;
- outline: 0;
- -webkit-box-shadow: inset 0 3px 5px rgba(0, 0, 0, .125);
- box-shadow: inset 0 3px 5px rgba(0, 0, 0, .125)
-}
-
-.btn.disabled,
-.btn[disabled],
-fieldset[disabled] .btn {
- pointer-events: none;
- cursor: not-allowed;
- filter: alpha(opacity=65);
- -webkit-box-shadow: none;
- box-shadow: none;
- opacity: .65
-}
-
-.btn-default {
- color: #333;
- background-color: #fff;
- border-color: #ccc
-}
-
-.btn-default:hover,
-.btn-default:focus,
-.btn-default.focus,
-.btn-default:active,
-.btn-default.active,
-.open>.dropdown-toggle.btn-default {
- color: #333;
- background-color: #e6e6e6;
- border-color: #adadad
-}
-
-.btn-default:active,
-.btn-default.active,
-.open>.dropdown-toggle.btn-default {
- background-image: none
-}
-
-.btn-default.disabled,
-.btn-default[disabled],
-fieldset[disabled] .btn-default,
-.btn-default.disabled:hover,
-.btn-default[disabled]:hover,
-fieldset[disabled] .btn-default:hover,
-.btn-default.disabled:focus,
-.btn-default[disabled]:focus,
-fieldset[disabled] .btn-default:focus,
-.btn-default.disabled.focus,
-.btn-default[disabled].focus,
-fieldset[disabled] .btn-default.focus,
-.btn-default.disabled:active,
-.btn-default[disabled]:active,
-fieldset[disabled] .btn-default:active,
-.btn-default.disabled.active,
-.btn-default[disabled].active,
-fieldset[disabled] .btn-default.active {
- background-color: #fff;
- border-color: #ccc
-}
-
-.btn-default .badge {
- color: #fff;
- background-color: #333
-}
-
-.btn-primary {
- color: #fff;
- background-color: #a96308;
- border-color: #aa6309
-}
-
-.btn-primary:hover,
-.btn-primary:focus,
-.btn-primary.focus,
-.btn-primary:active,
-.btn-primary.active,
-.open>.dropdown-toggle.btn-primary {
- color: #fff;
- background-color: #6f420a;
- border-color: #1e1202
-}
-
-.btn-primary:active,
-.btn-primary.active,
-.open>.dropdown-toggle.btn-primary {
- background-image: none
-}
-
-.btn-primary.disabled,
-.btn-primary[disabled],
-fieldset[disabled] .btn-primary,
-.btn-primary.disabled:hover,
-.btn-primary[disabled]:hover,
-fieldset[disabled] .btn-primary:hover,
-.btn-primary.disabled:focus,
-.btn-primary[disabled]:focus,
-fieldset[disabled] .btn-primary:focus,
-.btn-primary.disabled.focus,
-.btn-primary[disabled].focus,
-fieldset[disabled] .btn-primary.focus,
-.btn-primary.disabled:active,
-.btn-primary[disabled]:active,
-fieldset[disabled] .btn-primary:active,
-.btn-primary.disabled.active,
-.btn-primary[disabled].active,
-fieldset[disabled] .btn-primary.active {
- background-color: #337ab7;
- border-color: #2e6da4
-}
-
-.btn-primary .badge {
- color: #337ab7;
- background-color: #fff
-}
-
-.btn-success {
- color: #fff;
- background-color: #5cb85c;
- border-color: #4cae4c
-}
-
-.btn-success:hover,
-.btn-success:focus,
-.btn-success.focus,
-.btn-success:active,
-.btn-success.active,
-.open>.dropdown-toggle.btn-success {
- color: #fff;
- background-color: #449d44;
- border-color: #398439
-}
-
-.btn-success:active,
-.btn-success.active,
-.open>.dropdown-toggle.btn-success {
- background-image: none
-}
-
-.btn-success.disabled,
-.btn-success[disabled],
-fieldset[disabled] .btn-success,
-.btn-success.disabled:hover,
-.btn-success[disabled]:hover,
-fieldset[disabled] .btn-success:hover,
-.btn-success.disabled:focus,
-.btn-success[disabled]:focus,
-fieldset[disabled] .btn-success:focus,
-.btn-success.disabled.focus,
-.btn-success[disabled].focus,
-fieldset[disabled] .btn-success.focus,
-.btn-success.disabled:active,
-.btn-success[disabled]:active,
-fieldset[disabled] .btn-success:active,
-.btn-success.disabled.active,
-.btn-success[disabled].active,
-fieldset[disabled] .btn-success.active {
- background-color: #5cb85c;
- border-color: #4cae4c
-}
-
-.btn-success .badge {
- color: #5cb85c;
- background-color: #fff
-}
-
-.btn-info {
- color: #fff;
- background-color: #5bc0de;
- border-color: #46b8da
-}
-
-.btn-info:hover,
-.btn-info:focus,
-.btn-info.focus,
-.btn-info:active,
-.btn-info.active,
-.open>.dropdown-toggle.btn-info {
- color: #fff;
- background-color: #31b0d5;
- border-color: #269abc
-}
-
-.btn-info:active,
-.btn-info.active,
-.open>.dropdown-toggle.btn-info {
- background-image: none
-}
-
-.btn-info.disabled,
-.btn-info[disabled],
-fieldset[disabled] .btn-info,
-.btn-info.disabled:hover,
-.btn-info[disabled]:hover,
-fieldset[disabled] .btn-info:hover,
-.btn-info.disabled:focus,
-.btn-info[disabled]:focus,
-fieldset[disabled] .btn-info:focus,
-.btn-info.disabled.focus,
-.btn-info[disabled].focus,
-fieldset[disabled] .btn-info.focus,
-.btn-info.disabled:active,
-.btn-info[disabled]:active,
-fieldset[disabled] .btn-info:active,
-.btn-info.disabled.active,
-.btn-info[disabled].active,
-fieldset[disabled] .btn-info.active {
- background-color: #5bc0de;
- border-color: #46b8da
-}
-
-.btn-info .badge {
- color: #5bc0de;
- background-color: #fff
-}
-
-.btn-warning {
- color: #fff;
- background-color: #f0ad4e;
- border-color: #eea236
-}
-
-.btn-warning:hover,
-.btn-warning:focus,
-.btn-warning.focus,
-.btn-warning:active,
-.btn-warning.active,
-.open>.dropdown-toggle.btn-warning {
- color: #fff;
- background-color: #ec971f;
- border-color: #d58512
-}
-
-.btn-warning:active,
-.btn-warning.active,
-.open>.dropdown-toggle.btn-warning {
- background-image: none
-}
-
-.btn-warning.disabled,
-.btn-warning[disabled],
-fieldset[disabled] .btn-warning,
-.btn-warning.disabled:hover,
-.btn-warning[disabled]:hover,
-fieldset[disabled] .btn-warning:hover,
-.btn-warning.disabled:focus,
-.btn-warning[disabled]:focus,
-fieldset[disabled] .btn-warning:focus,
-.btn-warning.disabled.focus,
-.btn-warning[disabled].focus,
-fieldset[disabled] .btn-warning.focus,
-.btn-warning.disabled:active,
-.btn-warning[disabled]:active,
-fieldset[disabled] .btn-warning:active,
-.btn-warning.disabled.active,
-.btn-warning[disabled].active,
-fieldset[disabled] .btn-warning.active {
- background-color: #f0ad4e;
- border-color: #eea236
-}
-
-.btn-warning .badge {
- color: #f0ad4e;
- background-color: #fff
-}
-
-.btn-danger {
- color: #fff;
- background-color: #d9534f;
- border-color: #d43f3a
-}
-
-.btn-danger:hover,
-.btn-danger:focus,
-.btn-danger.focus,
-.btn-danger:active,
-.btn-danger.active,
-.open>.dropdown-toggle.btn-danger {
- color: #fff;
- background-color: #c9302c;
- border-color: #ac2925
-}
-
-.btn-danger:active,
-.btn-danger.active,
-.open>.dropdown-toggle.btn-danger {
- background-image: none
-}
-
-.btn-danger.disabled,
-.btn-danger[disabled],
-fieldset[disabled] .btn-danger,
-.btn-danger.disabled:hover,
-.btn-danger[disabled]:hover,
-fieldset[disabled] .btn-danger:hover,
-.btn-danger.disabled:focus,
-.btn-danger[disabled]:focus,
-fieldset[disabled] .btn-danger:focus,
-.btn-danger.disabled.focus,
-.btn-danger[disabled].focus,
-fieldset[disabled] .btn-danger.focus,
-.btn-danger.disabled:active,
-.btn-danger[disabled]:active,
-fieldset[disabled] .btn-danger:active,
-.btn-danger.disabled.active,
-.btn-danger[disabled].active,
-fieldset[disabled] .btn-danger.active {
- background-color: #d9534f;
- border-color: #d43f3a
-}
-
-.btn-danger .badge {
- color: #d9534f;
- background-color: #fff
-}
-
-.btn-link {
- font-weight: 400;
- color: #337ab7;
- border-radius: 0
-}
-
-.btn-link,
-.btn-link:active,
-.btn-link.active,
-.btn-link[disabled],
-fieldset[disabled] .btn-link {
- background-color: transparent;
- -webkit-box-shadow: none;
- box-shadow: none
-}
-
-.btn-link,
-.btn-link:hover,
-.btn-link:focus,
-.btn-link:active {
- border-color: transparent
-}
-
-.btn-link:hover,
-.btn-link:focus {
- color: #23527c;
- text-decoration: underline;
- background-color: transparent
-}
-
-.btn-link[disabled]:hover,
-fieldset[disabled] .btn-link:hover,
-.btn-link[disabled]:focus,
-fieldset[disabled] .btn-link:focus {
- color: #777;
- text-decoration: none
-}
-
-.btn-lg,
-.btn-group-lg>.btn {
- padding: 10px 16px;
- font-size: 18px;
- line-height: 1.33;
- border-radius: 6px
-}
-
-.btn-sm,
-.btn-group-sm>.btn {
- padding: 5px 10px;
- font-size: 12px;
- line-height: 1.5;
- border-radius: 3px
-}
-
-.btn-xs,
-.btn-group-xs>.btn {
- padding: 1px 5px;
- font-size: 12px;
- line-height: 1.5;
- border-radius: 3px
-}
-
-.btn-block {
- display: block;
- width: 100%
-}
-
-.btn-block+.btn-block {
- margin-top: 5px
-}
-
-input[type=submit].btn-block,
-input[type=reset].btn-block,
-input[type=button].btn-block {
- width: 100%
-}
-
-.fade {
- opacity: 0;
- -webkit-transition: opacity .15s linear;
- -o-transition: opacity .15s linear;
- transition: opacity .15s linear
-}
-
-.fade.in {
- opacity: 1
-}
-
-.collapse {
- display: none;
- visibility: hidden
-}
-
-.collapse.in {
- display: block;
- visibility: visible
-}
-
-tr.collapse.in {
- display: table-row
-}
-
-tbody.collapse.in {
- display: table-row-group
-}
-
-.collapsing {
- position: relative;
- height: 0;
- overflow: hidden;
- -webkit-transition-timing-function: ease;
- -o-transition-timing-function: ease;
- transition-timing-function: ease;
- -webkit-transition-duration: .35s;
- -o-transition-duration: .35s;
- transition-duration: .35s;
- -webkit-transition-property: height, visibility;
- -o-transition-property: height, visibility;
- transition-property: height, visibility
-}
-
-.caret {
- display: inline-block;
- width: 0;
- height: 0;
- margin-left: 2px;
- vertical-align: middle;
- border-top: 4px solid;
- border-right: 4px solid transparent;
- border-left: 4px solid transparent
-}
-
-.dropdown {
- position: relative
-}
-
-.dropdown-toggle:focus {
- outline: 0
-}
-
-.dropdown-menu {
- position: absolute;
- top: 100%;
- left: 0;
- z-index: 1000;
- display: none;
- float: left;
- min-width: 160px;
- padding: 5px 0;
- margin: 2px 0 0;
- font-size: 14px;
- text-align: left;
- list-style: none;
- background-color: #fff;
- -webkit-background-clip: padding-box;
- background-clip: padding-box;
- border: 1px solid #ccc;
- border: 1px solid rgba(0, 0, 0, .15);
- border-radius: 4px;
- -webkit-box-shadow: 0 6px 12px rgba(0, 0, 0, .175);
- box-shadow: 0 6px 12px rgba(0, 0, 0, .175)
-}
-
-.dropdown-menu.pull-right {
- right: 0;
- left: auto
-}
-
-.dropdown-menu .divider {
- height: 1px;
- margin: 9px 0;
- overflow: hidden;
- background-color: #e5e5e5
-}
-
-.dropdown-menu>li>a {
- display: block;
- padding: 3px 20px;
- clear: both;
- font-weight: 400;
- line-height: 1.42857143;
- color: #333;
- white-space: nowrap
-}
-
-.dropdown-menu>li>a:hover,
-.dropdown-menu>li>a:focus {
- color: #262626;
- text-decoration: none;
- background-color: #f5f5f5
-}
-
-.dropdown-menu>.active>a,
-.dropdown-menu>.active>a:hover,
-.dropdown-menu>.active>a:focus {
- color: #fff;
- text-decoration: none;
- background-color: #337ab7;
- outline: 0
-}
-
-.dropdown-menu>.disabled>a,
-.dropdown-menu>.disabled>a:hover,
-.dropdown-menu>.disabled>a:focus {
- color: #777
-}
-
-.dropdown-menu>.disabled>a:hover,
-.dropdown-menu>.disabled>a:focus {
- text-decoration: none;
- cursor: not-allowed;
- background-color: transparent;
- background-image: none;
- filter: progid:DXImageTransform.Microsoft.gradient(enabled=false)
-}
-
-.open>.dropdown-menu {
- display: block
-}
-
-.open>a {
- outline: 0
-}
-
-.dropdown-menu-right {
- right: 0;
- left: auto
-}
-
-.dropdown-menu-left {
- right: auto;
- left: 0
-}
-
-.dropdown-header {
- display: block;
- padding: 3px 20px;
- font-size: 12px;
- line-height: 1.42857143;
- color: #777;
- white-space: nowrap
-}
-
-.dropdown-backdrop {
- position: fixed;
- top: 0;
- right: 0;
- bottom: 0;
- left: 0;
- z-index: 990
-}
-
-.pull-right>.dropdown-menu {
- right: 0;
- left: auto
-}
-
-.dropup .caret,
-.navbar-fixed-bottom .dropdown .caret {
- content: "";
- border-top: 0;
- border-bottom: 4px solid
-}
-
-.dropup .dropdown-menu,
-.navbar-fixed-bottom .dropdown .dropdown-menu {
- top: auto;
- bottom: 100%;
- margin-bottom: 1px
-}
-
-@media (min-width:768px) {
- .navbar-right .dropdown-menu {
- right: 0;
- left: auto
- }
-
- .navbar-right .dropdown-menu-left {
- right: auto;
- left: 0
- }
-}
-
-.btn-group,
-.btn-group-vertical {
- position: relative;
- display: inline-block;
- vertical-align: middle
-}
-
-.btn-group>.btn,
-.btn-group-vertical>.btn {
- position: relative;
- float: left
-}
-
-.btn-group>.btn:hover,
-.btn-group-vertical>.btn:hover,
-.btn-group>.btn:focus,
-.btn-group-vertical>.btn:focus,
-.btn-group>.btn:active,
-.btn-group-vertical>.btn:active,
-.btn-group>.btn.active,
-.btn-group-vertical>.btn.active {
- z-index: 2
-}
-
-.btn-group .btn+.btn,
-.btn-group .btn+.btn-group,
-.btn-group .btn-group+.btn,
-.btn-group .btn-group+.btn-group {
- margin-left: -1px
-}
-
-.btn-toolbar {
- margin-left: -5px
-}
-
-.btn-toolbar .btn-group,
-.btn-toolbar .input-group {
- float: left
-}
-
-.btn-toolbar>.btn,
-.btn-toolbar>.btn-group,
-.btn-toolbar>.input-group {
- margin-left: 5px
-}
-
-.btn-group>.btn:not(:first-child):not(:last-child):not(.dropdown-toggle) {
- border-radius: 0
-}
-
-.btn-group>.btn:first-child {
- margin-left: 0
-}
-
-.btn-group>.btn:first-child:not(:last-child):not(.dropdown-toggle) {
- border-top-right-radius: 0;
- border-bottom-right-radius: 0
-}
-
-.btn-group>.btn:last-child:not(:first-child),
-.btn-group>.dropdown-toggle:not(:first-child) {
- border-top-left-radius: 0;
- border-bottom-left-radius: 0
-}
-
-.btn-group>.btn-group {
- float: left
-}
-
-.btn-group>.btn-group:not(:first-child):not(:last-child)>.btn {
- border-radius: 0
-}
-
-.btn-group>.btn-group:first-child>.btn:last-child,
-.btn-group>.btn-group:first-child>.dropdown-toggle {
- border-top-right-radius: 0;
- border-bottom-right-radius: 0
-}
-
-.btn-group>.btn-group:last-child>.btn:first-child {
- border-top-left-radius: 0;
- border-bottom-left-radius: 0
-}
-
-.btn-group .dropdown-toggle:active,
-.btn-group.open .dropdown-toggle {
- outline: 0
-}
-
-.btn-group>.btn+.dropdown-toggle {
- padding-right: 8px;
- padding-left: 8px
-}
-
-.btn-group>.btn-lg+.dropdown-toggle {
- padding-right: 12px;
- padding-left: 12px
-}
-
-.btn-group.open .dropdown-toggle {
- -webkit-box-shadow: inset 0 3px 5px rgba(0, 0, 0, .125);
- box-shadow: inset 0 3px 5px rgba(0, 0, 0, .125)
-}
-
-.btn-group.open .dropdown-toggle.btn-link {
- -webkit-box-shadow: none;
- box-shadow: none
-}
-
-.btn .caret {
- margin-left: 0
-}
-
-.btn-lg .caret {
- border-width: 5px 5px 0;
- border-bottom-width: 0
-}
-
-.dropup .btn-lg .caret {
- border-width: 0 5px 5px
-}
-
-.btn-group-vertical>.btn,
-.btn-group-vertical>.btn-group,
-.btn-group-vertical>.btn-group>.btn {
- display: block;
- float: none;
- width: 100%;
- max-width: 100%
-}
-
-.btn-group-vertical>.btn-group>.btn {
- float: none
-}
-
-.btn-group-vertical>.btn+.btn,
-.btn-group-vertical>.btn+.btn-group,
-.btn-group-vertical>.btn-group+.btn,
-.btn-group-vertical>.btn-group+.btn-group {
- margin-top: -1px;
- margin-left: 0
-}
-
-.btn-group-vertical>.btn:not(:first-child):not(:last-child) {
- border-radius: 0
-}
-
-.btn-group-vertical>.btn:first-child:not(:last-child) {
- border-top-right-radius: 4px;
- border-bottom-right-radius: 0;
- border-bottom-left-radius: 0
-}
-
-.btn-group-vertical>.btn:last-child:not(:first-child) {
- border-top-left-radius: 0;
- border-top-right-radius: 0;
- border-bottom-left-radius: 4px
-}
-
-.btn-group-vertical>.btn-group:not(:first-child):not(:last-child)>.btn {
- border-radius: 0
-}
-
-.btn-group-vertical>.btn-group:first-child:not(:last-child)>.btn:last-child,
-.btn-group-vertical>.btn-group:first-child:not(:last-child)>.dropdown-toggle {
- border-bottom-right-radius: 0;
- border-bottom-left-radius: 0
-}
-
-.btn-group-vertical>.btn-group:last-child:not(:first-child)>.btn:first-child {
- border-top-left-radius: 0;
- border-top-right-radius: 0
-}
-
-.btn-group-justified {
- display: table;
- width: 100%;
- table-layout: fixed;
- border-collapse: separate
-}
-
-.btn-group-justified>.btn,
-.btn-group-justified>.btn-group {
- display: table-cell;
- float: none;
- width: 1%
-}
-
-.btn-group-justified>.btn-group .btn {
- width: 100%
-}
-
-.btn-group-justified>.btn-group .dropdown-menu {
- left: auto
-}
-
-[data-toggle=buttons]>.btn input[type=radio],
-[data-toggle=buttons]>.btn-group>.btn input[type=radio],
-[data-toggle=buttons]>.btn input[type=checkbox],
-[data-toggle=buttons]>.btn-group>.btn input[type=checkbox] {
- position: absolute;
- clip: rect(0, 0, 0, 0);
- pointer-events: none
-}
-
-.input-group {
- position: relative;
- display: table;
- border-collapse: separate
-}
-
-.input-group[class*=col-] {
- float: none;
- padding-right: 0;
- padding-left: 0
-}
-
-.input-group .form-control {
- position: relative;
- z-index: 2;
- float: left;
- width: 100%;
- margin-bottom: 0
-}
-
-.input-group-lg>.form-control,
-.input-group-lg>.input-group-addon,
-.input-group-lg>.input-group-btn>.btn {
- height: 46px;
- padding: 10px 16px;
- font-size: 18px;
- line-height: 1.33;
- border-radius: 6px
-}
-
-select.input-group-lg>.form-control,
-select.input-group-lg>.input-group-addon,
-select.input-group-lg>.input-group-btn>.btn {
- height: 46px;
- line-height: 46px
-}
-
-textarea.input-group-lg>.form-control,
-textarea.input-group-lg>.input-group-addon,
-textarea.input-group-lg>.input-group-btn>.btn,
-select[multiple].input-group-lg>.form-control,
-select[multiple].input-group-lg>.input-group-addon,
-select[multiple].input-group-lg>.input-group-btn>.btn {
- height: auto
-}
-
-.input-group-sm>.form-control,
-.input-group-sm>.input-group-addon,
-.input-group-sm>.input-group-btn>.btn {
- height: 30px;
- padding: 5px 10px;
- font-size: 12px;
- line-height: 1.5;
- border-radius: 3px
-}
-
-select.input-group-sm>.form-control,
-select.input-group-sm>.input-group-addon,
-select.input-group-sm>.input-group-btn>.btn {
- height: 30px;
- line-height: 30px
-}
-
-textarea.input-group-sm>.form-control,
-textarea.input-group-sm>.input-group-addon,
-textarea.input-group-sm>.input-group-btn>.btn,
-select[multiple].input-group-sm>.form-control,
-select[multiple].input-group-sm>.input-group-addon,
-select[multiple].input-group-sm>.input-group-btn>.btn {
- height: auto
-}
-
-.input-group-addon,
-.input-group-btn,
-.input-group .form-control {
- display: table-cell
-}
-
-.input-group-addon:not(:first-child):not(:last-child),
-.input-group-btn:not(:first-child):not(:last-child),
-.input-group .form-control:not(:first-child):not(:last-child) {
- border-radius: 0
-}
-
-.input-group-addon,
-.input-group-btn {
- width: 1%;
- white-space: nowrap;
- vertical-align: middle
-}
-
-.input-group-addon {
- padding: 6px 12px;
- font-size: 14px;
- font-weight: 400;
- line-height: 1;
- color: #555;
- text-align: center;
- background-color: #eee;
- border: 1px solid #ccc;
- border-radius: 4px
-}
-
-.input-group-addon.input-sm {
- padding: 5px 10px;
- font-size: 12px;
- border-radius: 3px
-}
-
-.input-group-addon.input-lg {
- padding: 10px 16px;
- font-size: 18px;
- border-radius: 6px
-}
-
-.input-group-addon input[type=radio],
-.input-group-addon input[type=checkbox] {
- margin-top: 0
-}
-
-.input-group .form-control:first-child,
-.input-group-addon:first-child,
-.input-group-btn:first-child>.btn,
-.input-group-btn:first-child>.btn-group>.btn,
-.input-group-btn:first-child>.dropdown-toggle,
-.input-group-btn:last-child>.btn:not(:last-child):not(.dropdown-toggle),
-.input-group-btn:last-child>.btn-group:not(:last-child)>.btn {
- border-top-right-radius: 0;
- border-bottom-right-radius: 0
-}
-
-.input-group-addon:first-child {
- border-right: 0
-}
-
-.input-group .form-control:last-child,
-.input-group-addon:last-child,
-.input-group-btn:last-child>.btn,
-.input-group-btn:last-child>.btn-group>.btn,
-.input-group-btn:last-child>.dropdown-toggle,
-.input-group-btn:first-child>.btn:not(:first-child),
-.input-group-btn:first-child>.btn-group:not(:first-child)>.btn {
- border-top-left-radius: 0;
- border-bottom-left-radius: 0
-}
-
-.input-group-addon:last-child {
- border-left: 0
-}
-
-.input-group-btn {
- position: relative;
- font-size: 0;
- white-space: nowrap
-}
-
-.input-group-btn>.btn {
- position: relative
-}
-
-.input-group-btn>.btn+.btn {
- margin-left: -1px
-}
-
-.input-group-btn>.btn:hover,
-.input-group-btn>.btn:focus,
-.input-group-btn>.btn:active {
- z-index: 2
-}
-
-.input-group-btn:first-child>.btn,
-.input-group-btn:first-child>.btn-group {
- margin-right: -1px
-}
-
-.input-group-btn:last-child>.btn,
-.input-group-btn:last-child>.btn-group {
- margin-left: -1px
-}
-
-.nav {
- padding-left: 0;
- margin-bottom: 0;
- list-style: none
-}
-
-.nav>li {
- position: relative;
- display: block
-}
-
-.nav>li>a {
- position: relative;
- display: block;
- padding: 10px 15px
-}
-
-.nav>li>a:hover,
-.nav>li>a:focus {
- text-decoration: none;
- background-color: #eee
-}
-
-.nav>li.disabled>a {
- color: #777
-}
-
-.nav>li.disabled>a:hover,
-.nav>li.disabled>a:focus {
- color: #777;
- text-decoration: none;
- cursor: not-allowed;
- background-color: transparent
-}
-
-.nav .open>a,
-.nav .open>a:hover,
-.nav .open>a:focus {
- background-color: #eee;
- border-color: #337ab7
-}
-
-.nav .nav-divider {
- height: 1px;
- margin: 9px 0;
- overflow: hidden;
- background-color: #e5e5e5
-}
-
-.nav>li>a>img {
- max-width: none
-}
-
-.nav-tabs {
- border-bottom: 1px solid #ddd
-}
-
-.nav-tabs>li {
- float: left;
- margin-bottom: -1px
-}
-
-.nav-tabs>li>a {
- margin-right: 2px;
- line-height: 1.42857143;
- border: 1px solid transparent;
- border-radius: 4px 4px 0 0
-}
-
-.nav-tabs>li>a:hover {
- border-color: #eee #eee #ddd
-}
-
-.nav-tabs>li.active>a,
-.nav-tabs>li.active>a:hover,
-.nav-tabs>li.active>a:focus {
- color: #555;
- cursor: default;
- background-color: #fff;
- border: 1px solid #ddd;
- border-bottom-color: transparent
-}
-
-.nav-tabs.nav-justified {
- width: 100%;
- border-bottom: 0
-}
-
-.nav-tabs.nav-justified>li {
- float: none
-}
-
-.nav-tabs.nav-justified>li>a {
- margin-bottom: 5px;
- text-align: center
-}
-
-.nav-tabs.nav-justified>.dropdown .dropdown-menu {
- top: auto;
- left: auto
-}
-
-@media (min-width:768px) {
- .nav-tabs.nav-justified>li {
- display: table-cell;
- width: 1%
- }
-
- .nav-tabs.nav-justified>li>a {
- margin-bottom: 0
- }
-}
-
-.nav-tabs.nav-justified>li>a {
- margin-right: 0;
- border-radius: 4px
-}
-
-.nav-tabs.nav-justified>.active>a,
-.nav-tabs.nav-justified>.active>a:hover,
-.nav-tabs.nav-justified>.active>a:focus {
- border: 1px solid #ddd
-}
-
-@media (min-width:768px) {
- .nav-tabs.nav-justified>li>a {
- border-bottom: 1px solid #ddd;
- border-radius: 4px 4px 0 0
- }
-
- .nav-tabs.nav-justified>.active>a,
- .nav-tabs.nav-justified>.active>a:hover,
- .nav-tabs.nav-justified>.active>a:focus {
- border-bottom-color: #fff
- }
-}
-
-.nav-pills>li {
- float: left
-}
-
-.nav-pills>li>a {
- border-radius: 4px
-}
-
-.nav-pills>li+li {
- margin-left: 2px
-}
-
-.nav-pills>li.active>a,
-.nav-pills>li.active>a:hover,
-.nav-pills>li.active>a:focus {
- color: #fff;
- background-color: #337ab7
-}
-
-.nav-stacked>li {
- float: none
-}
-
-.nav-stacked>li+li {
- margin-top: 2px;
- margin-left: 0
-}
-
-.nav-justified {
- width: 100%
-}
-
-.nav-justified>li {
- float: none
-}
-
-.nav-justified>li>a {
- margin-bottom: 5px;
- text-align: center
-}
-
-.nav-justified>.dropdown .dropdown-menu {
- top: auto;
- left: auto
-}
-
-@media (min-width:768px) {
- .nav-justified>li {
- display: table-cell;
- width: 1%
- }
-
- .nav-justified>li>a {
- margin-bottom: 0
- }
-}
-
-.nav-tabs-justified {
- border-bottom: 0
-}
-
-.nav-tabs-justified>li>a {
- margin-right: 0;
- border-radius: 4px
-}
-
-.nav-tabs-justified>.active>a,
-.nav-tabs-justified>.active>a:hover,
-.nav-tabs-justified>.active>a:focus {
- border: 1px solid #ddd
-}
-
-@media (min-width:768px) {
- .nav-tabs-justified>li>a {
- border-bottom: 1px solid #ddd;
- border-radius: 4px 4px 0 0
- }
-
- .nav-tabs-justified>.active>a,
- .nav-tabs-justified>.active>a:hover,
- .nav-tabs-justified>.active>a:focus {
- border-bottom-color: #fff
- }
-}
-
-.tab-content>.tab-pane {
- display: none;
- visibility: hidden
-}
-
-.tab-content>.active {
- display: block;
- visibility: visible
-}
-
-.nav-tabs .dropdown-menu {
- margin-top: -1px;
- border-top-left-radius: 0;
- border-top-right-radius: 0
-}
-
-.navbar {
- position: relative;
- min-height: 50px;
- margin-bottom: 20px;
- border: 1px solid transparent
-}
-
-@media (min-width:768px) {
- .navbar {
- border-radius: 4px
- }
-}
-
-@media (min-width:768px) {
- .navbar-header {
- float: left
- }
-}
-
-.navbar-collapse {
- padding-right: 15px;
- padding-left: 15px;
- overflow-x: visible;
- -webkit-overflow-scrolling: touch;
- border-top: 1px solid transparent;
- -webkit-box-shadow: inset 0 1px 0 rgba(255, 255, 255, .1);
- box-shadow: inset 0 1px 0 rgba(255, 255, 255, .1)
-}
-
-.navbar-collapse.in {
- overflow-y: auto
-}
-
-@media (min-width:768px) {
- .navbar-collapse {
- width: auto;
- border-top: 0;
- -webkit-box-shadow: none;
- box-shadow: none
- }
-
- .navbar-collapse.collapse {
- display: block !important;
- height: auto !important;
- padding-bottom: 0;
- overflow: visible !important;
- visibility: visible !important
- }
-
- .navbar-collapse.in {
- overflow-y: visible
- }
-
- .navbar-fixed-top .navbar-collapse,
- .navbar-static-top .navbar-collapse,
- .navbar-fixed-bottom .navbar-collapse {
- padding-right: 0;
- padding-left: 0
- }
-}
-
-.navbar-fixed-top .navbar-collapse,
-.navbar-fixed-bottom .navbar-collapse {
- max-height: 340px
-}
-
-@media (max-device-width:480px) and (orientation:landscape) {
-
- .navbar-fixed-top .navbar-collapse,
- .navbar-fixed-bottom .navbar-collapse {
- max-height: 200px
- }
-}
-
-.container>.navbar-header,
-.container-fluid>.navbar-header,
-.container>.navbar-collapse,
-.container-fluid>.navbar-collapse {
- margin-right: -15px;
- margin-left: -15px
-}
-
-@media (min-width:768px) {
-
- .container>.navbar-header,
- .container-fluid>.navbar-header,
- .container>.navbar-collapse,
- .container-fluid>.navbar-collapse {
- margin-right: 0;
- margin-left: 0
- }
-}
-
-.navbar-static-top {
- z-index: 1000;
- border-width: 0 0 1px
-}
-
-@media (min-width:768px) {
- .navbar-static-top {
- border-radius: 0
- }
-}
-
-.navbar-fixed-top,
-.navbar-fixed-bottom {
- position: fixed;
- right: 0;
- left: 0;
- z-index: 1030
-}
-
-@media (min-width:768px) {
-
- .navbar-fixed-top,
- .navbar-fixed-bottom {
- border-radius: 0
- }
-}
-
-.navbar-fixed-top {
- top: 0;
- border-width: 0 0 1px
-}
-
-.navbar-fixed-bottom {
- bottom: 0;
- margin-bottom: 0;
- border-width: 1px 0 0
-}
-
-.navbar-brand {
- float: left;
- height: 50px;
- padding: 15px 15px;
- font-size: 18px;
- line-height: 20px
-}
-
-.navbar-brand:hover,
-.navbar-brand:focus {
- text-decoration: none
-}
-
-.navbar-brand>img {
- display: block
-}
-
-@media (min-width:768px) {
-
- .navbar>.container .navbar-brand,
- .navbar>.container-fluid .navbar-brand {
- margin-left: -15px
- }
-}
-
-.navbar-toggle {
- position: relative;
- float: right;
- padding: 9px 10px;
- margin-top: 8px;
- margin-right: 15px;
- margin-bottom: 8px;
- background-color: transparent;
- background-image: none;
- border: 1px solid transparent;
- border-radius: 4px
-}
-
-.navbar-toggle:focus {
- outline: 0
-}
-
-.navbar-toggle .icon-bar {
- display: block;
- width: 22px;
- height: 2px;
- border-radius: 1px
-}
-
-.navbar-toggle .icon-bar+.icon-bar {
- margin-top: 4px
-}
-
-@media (min-width:768px) {
- .navbar-toggle {
- display: none
- }
-}
-
-.navbar-nav {
- margin: 7.5px -15px
-}
-
-.navbar-nav>li>a {
- padding-top: 10px;
- padding-bottom: 10px;
- line-height: 20px
-}
-
-@media (max-width:767px) {
- .navbar-nav .open .dropdown-menu {
- position: static;
- float: none;
- width: auto;
- margin-top: 0;
- background-color: transparent;
- border: 0;
- -webkit-box-shadow: none;
- box-shadow: none
- }
-
- .navbar-nav .open .dropdown-menu>li>a,
- .navbar-nav .open .dropdown-menu .dropdown-header {
- padding: 5px 15px 5px 25px
- }
-
- .navbar-nav .open .dropdown-menu>li>a {
- line-height: 20px
- }
-
- .navbar-nav .open .dropdown-menu>li>a:hover,
- .navbar-nav .open .dropdown-menu>li>a:focus {
- background-image: none
- }
-}
-
-@media (min-width:768px) {
- .navbar-nav {
- float: left;
- margin: 0
- }
-
- .navbar-nav>li {
- float: left
- }
-
- .navbar-nav>li>a {
- padding-top: 15px;
- padding-bottom: 15px
- }
-}
-
-.navbar-form {
- padding: 10px 15px;
- margin-top: 8px;
- margin-right: -15px;
- margin-bottom: 8px;
- margin-left: -15px;
- border-top: 1px solid transparent;
- border-bottom: 1px solid transparent;
- -webkit-box-shadow: inset 0 1px 0 rgba(255, 255, 255, .1), 0 1px 0 rgba(255, 255, 255, .1);
- box-shadow: inset 0 1px 0 rgba(255, 255, 255, .1), 0 1px 0 rgba(255, 255, 255, .1)
-}
-
-@media (min-width:768px) {
- .navbar-form .form-group {
- display: inline-block;
- margin-bottom: 0;
- vertical-align: middle
- }
-
- .navbar-form .form-control {
- display: inline-block;
- width: auto;
- vertical-align: middle
- }
-
- .navbar-form .form-control-static {
- display: inline-block
- }
-
- .navbar-form .input-group {
- display: inline-table;
- vertical-align: middle
- }
-
- .navbar-form .input-group .input-group-addon,
- .navbar-form .input-group .input-group-btn,
- .navbar-form .input-group .form-control {
- width: auto
- }
-
- .navbar-form .input-group>.form-control {
- width: 100%
- }
-
- .navbar-form .control-label {
- margin-bottom: 0;
- vertical-align: middle
- }
-
- .navbar-form .radio,
- .navbar-form .checkbox {
- display: inline-block;
- margin-top: 0;
- margin-bottom: 0;
- vertical-align: middle
- }
-
- .navbar-form .radio label,
- .navbar-form .checkbox label {
- padding-left: 0
- }
-
- .navbar-form .radio input[type=radio],
- .navbar-form .checkbox input[type=checkbox] {
- position: relative;
- margin-left: 0
- }
-
- .navbar-form .has-feedback .form-control-feedback {
- top: 0
- }
-}
-
-@media (max-width:767px) {
- .navbar-form .form-group {
- margin-bottom: 5px
- }
-
- .navbar-form .form-group:last-child {
- margin-bottom: 0
- }
-}
-
-@media (min-width:768px) {
- .navbar-form {
- width: auto;
- padding-top: 0;
- padding-bottom: 0;
- margin-right: 0;
- margin-left: 0;
- border: 0;
- -webkit-box-shadow: none;
- box-shadow: none
- }
-}
-
-.navbar-nav>li>.dropdown-menu {
- margin-top: 0;
- border-top-left-radius: 0;
- border-top-right-radius: 0
-}
-
-.navbar-fixed-bottom .navbar-nav>li>.dropdown-menu {
- border-top-left-radius: 4px;
- border-top-right-radius: 4px;
- border-bottom-right-radius: 0;
- border-bottom-left-radius: 0
-}
-
-.navbar-btn {
- margin-top: 8px;
- margin-bottom: 8px
-}
-
-.navbar-btn.btn-sm {
- margin-top: 10px;
- margin-bottom: 10px
-}
-
-.navbar-btn.btn-xs {
- margin-top: 14px;
- margin-bottom: 14px
-}
-
-.navbar-text {
- margin-top: 15px;
- margin-bottom: 15px
-}
-
-@media (min-width:768px) {
- .navbar-text {
- float: left;
- margin-right: 15px;
- margin-left: 15px
- }
-}
-
-@media (min-width:768px) {
- .navbar-left {
- float: left !important
- }
-
- .navbar-right {
- float: right !important;
- margin-right: -15px
- }
-
- .navbar-right~.navbar-right {
- margin-right: 0
- }
-}
-
-.navbar-default {
- background-color: #f8f8f8;
- border-color: #e7e7e7
-}
-
-.navbar-default .navbar-brand {
- color: #777
-}
-
-.navbar-default .navbar-brand:hover,
-.navbar-default .navbar-brand:focus {
- color: #5e5e5e;
- background-color: transparent
-}
-
-.navbar-default .navbar-text {
- color: #777
-}
-
-.navbar-default .navbar-nav>li>a {
- color: #777
-}
-
-.navbar-default .navbar-nav>li>a:hover,
-.navbar-default .navbar-nav>li>a:focus {
- color: #333;
- background-color: transparent
-}
-
-.navbar-default .navbar-nav>.active>a,
-.navbar-default .navbar-nav>.active>a:hover,
-.navbar-default .navbar-nav>.active>a:focus {
- color: #555;
- background-color: #e7e7e7
-}
-
-.navbar-default .navbar-nav>.disabled>a,
-.navbar-default .navbar-nav>.disabled>a:hover,
-.navbar-default .navbar-nav>.disabled>a:focus {
- color: #ccc;
- background-color: transparent
-}
-
-.navbar-default .navbar-toggle {
- border-color: #ddd
-}
-
-.navbar-default .navbar-toggle:hover,
-.navbar-default .navbar-toggle:focus {
- background-color: #ddd
-}
-
-.navbar-default .navbar-toggle .icon-bar {
- background-color: #888
-}
-
-.navbar-default .navbar-collapse,
-.navbar-default .navbar-form {
- border-color: #e7e7e7
-}
-
-.navbar-default .navbar-nav>.open>a,
-.navbar-default .navbar-nav>.open>a:hover,
-.navbar-default .navbar-nav>.open>a:focus {
- color: #555;
- background-color: #e7e7e7
-}
-
-@media (max-width:767px) {
- .navbar-default .navbar-nav .open .dropdown-menu>li>a {
- color: #777
- }
-
- .navbar-default .navbar-nav .open .dropdown-menu>li>a:hover,
- .navbar-default .navbar-nav .open .dropdown-menu>li>a:focus {
- color: #333;
- background-color: transparent
- }
-
- .navbar-default .navbar-nav .open .dropdown-menu>.active>a,
- .navbar-default .navbar-nav .open .dropdown-menu>.active>a:hover,
- .navbar-default .navbar-nav .open .dropdown-menu>.active>a:focus {
- color: #555;
- background-color: #e7e7e7
- }
-
- .navbar-default .navbar-nav .open .dropdown-menu>.disabled>a,
- .navbar-default .navbar-nav .open .dropdown-menu>.disabled>a:hover,
- .navbar-default .navbar-nav .open .dropdown-menu>.disabled>a:focus {
- color: #ccc;
- background-color: transparent
- }
-}
-
-.navbar-default .navbar-link {
- color: #777
-}
-
-.navbar-default .navbar-link:hover {
- color: #333
-}
-
-.navbar-default .btn-link {
- color: #777
-}
-
-.navbar-default .btn-link:hover,
-.navbar-default .btn-link:focus {
- color: #333
-}
-
-.navbar-default .btn-link[disabled]:hover,
-fieldset[disabled] .navbar-default .btn-link:hover,
-.navbar-default .btn-link[disabled]:focus,
-fieldset[disabled] .navbar-default .btn-link:focus {
- color: #ccc
-}
-
-.navbar-inverse {
- background-color: #222;
- border-color: #080808
-}
-
-.navbar-inverse .navbar-brand {
- color: #9d9d9d
-}
-
-.navbar-inverse .navbar-brand:hover,
-.navbar-inverse .navbar-brand:focus {
- color: #fff;
- background-color: transparent
-}
-
-.navbar-inverse .navbar-text {
- color: #9d9d9d
-}
-
-.navbar-inverse .navbar-nav>li>a {
- color: #9d9d9d
-}
-
-.navbar-inverse .navbar-nav>li>a:hover,
-.navbar-inverse .navbar-nav>li>a:focus {
- color: #fff;
- background-color: transparent
-}
-
-.navbar-inverse .navbar-nav>.active>a,
-.navbar-inverse .navbar-nav>.active>a:hover,
-.navbar-inverse .navbar-nav>.active>a:focus {
- color: #fff;
- background-color: #080808
-}
-
-.navbar-inverse .navbar-nav>.disabled>a,
-.navbar-inverse .navbar-nav>.disabled>a:hover,
-.navbar-inverse .navbar-nav>.disabled>a:focus {
- color: #444;
- background-color: transparent
-}
-
-.navbar-inverse .navbar-toggle {
- border-color: #333
-}
-
-.navbar-inverse .navbar-toggle:hover,
-.navbar-inverse .navbar-toggle:focus {
- background-color: #333
-}
-
-.navbar-inverse .navbar-toggle .icon-bar {
- background-color: #fff
-}
-
-.navbar-inverse .navbar-collapse,
-.navbar-inverse .navbar-form {
- border-color: #101010
-}
-
-.navbar-inverse .navbar-nav>.open>a,
-.navbar-inverse .navbar-nav>.open>a:hover,
-.navbar-inverse .navbar-nav>.open>a:focus {
- color: #fff;
- background-color: #080808
-}
-
-@media (max-width:767px) {
- .navbar-inverse .navbar-nav .open .dropdown-menu>.dropdown-header {
- border-color: #080808
- }
-
- .navbar-inverse .navbar-nav .open .dropdown-menu .divider {
- background-color: #080808
- }
-
- .navbar-inverse .navbar-nav .open .dropdown-menu>li>a {
- color: #9d9d9d
- }
-
- .navbar-inverse .navbar-nav .open .dropdown-menu>li>a:hover,
- .navbar-inverse .navbar-nav .open .dropdown-menu>li>a:focus {
- color: #fff;
- background-color: transparent
- }
-
- .navbar-inverse .navbar-nav .open .dropdown-menu>.active>a,
- .navbar-inverse .navbar-nav .open .dropdown-menu>.active>a:hover,
- .navbar-inverse .navbar-nav .open .dropdown-menu>.active>a:focus {
- color: #fff;
- background-color: #080808
- }
-
- .navbar-inverse .navbar-nav .open .dropdown-menu>.disabled>a,
- .navbar-inverse .navbar-nav .open .dropdown-menu>.disabled>a:hover,
- .navbar-inverse .navbar-nav .open .dropdown-menu>.disabled>a:focus {
- color: #444;
- background-color: transparent
- }
-}
-
-.navbar-inverse .navbar-link {
- color: #9d9d9d
-}
-
-.navbar-inverse .navbar-link:hover {
- color: #fff
-}
-
-.navbar-inverse .btn-link {
- color: #9d9d9d
-}
-
-.navbar-inverse .btn-link:hover,
-.navbar-inverse .btn-link:focus {
- color: #fff
-}
-
-.navbar-inverse .btn-link[disabled]:hover,
-fieldset[disabled] .navbar-inverse .btn-link:hover,
-.navbar-inverse .btn-link[disabled]:focus,
-fieldset[disabled] .navbar-inverse .btn-link:focus {
- color: #444
-}
-
-.breadcrumb {
- padding: 8px 15px;
- margin-bottom: 20px;
- list-style: none;
- background-color: #f5f5f5;
- border-radius: 4px
-}
-
-.breadcrumb>li {
- display: inline-block
-}
-
-.breadcrumb>li+li:before {
- padding: 0 5px;
- color: #ccc;
- content: "/\00a0"
-}
-
-.breadcrumb>.active {
- color: #777
-}
-
-.pagination {
- display: inline-block;
- padding-left: 0;
- margin: 20px 0;
- border-radius: 4px
-}
-
-.pagination>li {
- display: inline
-}
-
-.pagination>li>a,
-.pagination>li>span {
- position: relative;
- float: left;
- padding: 6px 12px;
- margin-left: -1px;
- line-height: 1.42857143;
- color: #337ab7;
- text-decoration: none;
- background-color: #fff;
- border: 1px solid #ddd
-}
-
-.pagination>li:first-child>a,
-.pagination>li:first-child>span {
- margin-left: 0;
- border-top-left-radius: 4px;
- border-bottom-left-radius: 4px
-}
-
-.pagination>li:last-child>a,
-.pagination>li:last-child>span {
- border-top-right-radius: 4px;
- border-bottom-right-radius: 4px
-}
-
-.pagination>li>a:hover,
-.pagination>li>span:hover,
-.pagination>li>a:focus,
-.pagination>li>span:focus {
- color: #23527c;
- background-color: #eee;
- border-color: #ddd
-}
-
-.pagination>.active>a,
-.pagination>.active>span,
-.pagination>.active>a:hover,
-.pagination>.active>span:hover,
-.pagination>.active>a:focus,
-.pagination>.active>span:focus {
- z-index: 2;
- color: #fff;
- cursor: default;
- background-color: #337ab7;
- border-color: #337ab7
-}
-
-.pagination>.disabled>span,
-.pagination>.disabled>span:hover,
-.pagination>.disabled>span:focus,
-.pagination>.disabled>a,
-.pagination>.disabled>a:hover,
-.pagination>.disabled>a:focus {
- color: #777;
- cursor: not-allowed;
- background-color: #fff;
- border-color: #ddd
-}
-
-.pagination-lg>li>a,
-.pagination-lg>li>span {
- padding: 10px 16px;
- font-size: 18px
-}
-
-.pagination-lg>li:first-child>a,
-.pagination-lg>li:first-child>span {
- border-top-left-radius: 6px;
- border-bottom-left-radius: 6px
-}
-
-.pagination-lg>li:last-child>a,
-.pagination-lg>li:last-child>span {
- border-top-right-radius: 6px;
- border-bottom-right-radius: 6px
-}
-
-.pagination-sm>li>a,
-.pagination-sm>li>span {
- padding: 5px 10px;
- font-size: 12px
-}
-
-.pagination-sm>li:first-child>a,
-.pagination-sm>li:first-child>span {
- border-top-left-radius: 3px;
- border-bottom-left-radius: 3px
-}
-
-.pagination-sm>li:last-child>a,
-.pagination-sm>li:last-child>span {
- border-top-right-radius: 3px;
- border-bottom-right-radius: 3px
-}
-
-.pager {
- padding-left: 0;
- margin: 20px 0;
- text-align: center;
- list-style: none
-}
-
-.pager li {
- display: inline
-}
-
-.pager li>a,
-.pager li>span {
- display: inline-block;
- padding: 5px 14px;
- background-color: #fff;
- border: 1px solid #ddd;
- border-radius: 15px
-}
-
-.pager li>a:hover,
-.pager li>a:focus {
- text-decoration: none;
- background-color: #eee
-}
-
-.pager .next>a,
-.pager .next>span {
- float: right
-}
-
-.pager .previous>a,
-.pager .previous>span {
- float: left
-}
-
-.pager .disabled>a,
-.pager .disabled>a:hover,
-.pager .disabled>a:focus,
-.pager .disabled>span {
- color: #777;
- cursor: not-allowed;
- background-color: #fff
-}
-
-.label {
- display: inline;
- padding: .2em .6em .3em;
- font-size: 75%;
- font-weight: 700;
- line-height: 1;
- color: #fff;
- text-align: center;
- white-space: nowrap;
- vertical-align: baseline;
- border-radius: .25em
-}
-
-a.label:hover,
-a.label:focus {
- color: #fff;
- text-decoration: none;
- cursor: pointer
-}
-
-.label:empty {
- display: none
-}
-
-.btn .label {
- position: relative;
- top: -1px
-}
-
-.label-default {
- background-color: #777
-}
-
-.label-default[href]:hover,
-.label-default[href]:focus {
- background-color: #5e5e5e
-}
-
-.label-primary {
- background-color: #337ab7
-}
-
-.label-primary[href]:hover,
-.label-primary[href]:focus {
- background-color: #286090
-}
-
-.label-success {
- background-color: #5cb85c
-}
-
-.label-success[href]:hover,
-.label-success[href]:focus {
- background-color: #449d44
-}
-
-.label-info {
- background-color: #5bc0de
-}
-
-.label-info[href]:hover,
-.label-info[href]:focus {
- background-color: #31b0d5
-}
-
-.label-warning {
- background-color: #f0ad4e
-}
-
-.label-warning[href]:hover,
-.label-warning[href]:focus {
- background-color: #ec971f
-}
-
-.label-danger {
- background-color: #d9534f
-}
-
-.label-danger[href]:hover,
-.label-danger[href]:focus {
- background-color: #c9302c
-}
-
-.badge {
- display: inline-block;
- min-width: 10px;
- padding: 3px 7px;
- font-size: 12px;
- font-weight: 700;
- line-height: 1;
- color: #fff;
- text-align: center;
- white-space: nowrap;
- vertical-align: baseline;
- background-color: #777;
- border-radius: 10px
-}
-
-.badge:empty {
- display: none
-}
-
-.btn .badge {
- position: relative;
- top: -1px
-}
-
-.btn-xs .badge {
- top: 0;
- padding: 1px 5px
-}
-
-a.badge:hover,
-a.badge:focus {
- color: #fff;
- text-decoration: none;
- cursor: pointer
-}
-
-.list-group-item.active>.badge,
-.nav-pills>.active>a>.badge {
- color: #337ab7;
- background-color: #fff
-}
-
-.list-group-item>.badge {
- float: right
-}
-
-.list-group-item>.badge+.badge {
- margin-right: 5px
-}
-
-.nav-pills>li>a>.badge {
- margin-left: 3px
-}
-
-.jumbotron {
- padding: 30px 15px;
- margin-bottom: 30px;
- color: inherit;
- background-color: #eee
-}
-
-.jumbotron h1,
-.jumbotron .h1 {
- color: inherit
-}
-
-.jumbotron p {
- margin-bottom: 15px;
- font-size: 21px;
- font-weight: 200
-}
-
-.jumbotron>hr {
- border-top-color: #d5d5d5
-}
-
-.container .jumbotron,
-.container-fluid .jumbotron {
- border-radius: 6px
-}
-
-.jumbotron .container {
- max-width: 100%
-}
-
-@media screen and (min-width:768px) {
- .jumbotron {
- padding: 48px 0
- }
-
- .container .jumbotron,
- .container-fluid .jumbotron {
- padding-right: 60px;
- padding-left: 60px
- }
-
- .jumbotron h1,
- .jumbotron .h1 {
- font-size: 63px
- }
-}
-
-.thumbnail {
- display: block;
- padding: 4px;
- margin-bottom: 20px;
- line-height: 1.42857143;
- background-color: #fff;
- border: 1px solid #ddd;
- border-radius: 4px;
- -webkit-transition: border .2s ease-in-out;
- -o-transition: border .2s ease-in-out;
- transition: border .2s ease-in-out
-}
-
-.thumbnail>img,
-.thumbnail a>img {
- margin-right: auto;
- margin-left: auto
-}
-
-a.thumbnail:hover,
-a.thumbnail:focus,
-a.thumbnail.active {
- border-color: #337ab7
-}
-
-.thumbnail .caption {
- padding: 9px;
- color: #333
-}
-
-.alert {
- padding: 15px;
- margin-bottom: 20px;
- border: 1px solid transparent;
- border-radius: 4px
-}
-
-.alert h4 {
- margin-top: 0;
- color: inherit
-}
-
-.alert .alert-link {
- font-weight: 700
-}
-
-.alert>p,
-.alert>ul {
- margin-bottom: 0
-}
-
-.alert>p+p {
- margin-top: 5px
-}
-
-.alert-dismissable,
-.alert-dismissible {
- padding-right: 35px
-}
-
-.alert-dismissable .close,
-.alert-dismissible .close {
- position: relative;
- top: -2px;
- right: -21px;
- color: inherit
-}
-
-.alert-success {
- color: #3c763d;
- background-color: #dff0d8;
- border-color: #d6e9c6
-}
-
-.alert-success hr {
- border-top-color: #c9e2b3
-}
-
-.alert-success .alert-link {
- color: #2b542c
-}
-
-.alert-info {
- color: #31708f;
- background-color: #d9edf7;
- border-color: #bce8f1
-}
-
-.alert-info hr {
- border-top-color: #a6e1ec
-}
-
-.alert-info .alert-link {
- color: #245269
-}
-
-.alert-warning {
- color: #8a6d3b;
- background-color: #fcf8e3;
- border-color: #faebcc
-}
-
-.alert-warning hr {
- border-top-color: #f7e1b5
-}
-
-.alert-warning .alert-link {
- color: #66512c
-}
-
-.alert-danger {
- color: #a94442;
- background-color: #f2dede;
- border-color: #ebccd1
-}
-
-.alert-danger hr {
- border-top-color: #e4b9c0
-}
-
-.alert-danger .alert-link {
- color: #843534
-}
-
-@-webkit-keyframes progress-bar-stripes {
- from {
- background-position: 40px 0
- }
-
- to {
- background-position: 0 0
- }
-}
-
-@-o-keyframes progress-bar-stripes {
- from {
- background-position: 40px 0
- }
-
- to {
- background-position: 0 0
- }
-}
-
-@keyframes progress-bar-stripes {
- from {
- background-position: 40px 0
- }
-
- to {
- background-position: 0 0
- }
-}
-
-.progress {
- height: 20px;
- margin-bottom: 20px;
- overflow: hidden;
- background-color: #f5f5f5;
- border-radius: 4px;
- -webkit-box-shadow: inset 0 1px 2px rgba(0, 0, 0, .1);
- box-shadow: inset 0 1px 2px rgba(0, 0, 0, .1)
-}
-
-.progress-bar {
- float: left;
- width: 0;
- height: 100%;
- font-size: 12px;
- line-height: 20px;
- color: #fff;
- text-align: center;
- background-color: #337ab7;
- -webkit-box-shadow: inset 0 -1px 0 rgba(0, 0, 0, .15);
- box-shadow: inset 0 -1px 0 rgba(0, 0, 0, .15);
- -webkit-transition: width .6s ease;
- -o-transition: width .6s ease;
- transition: width .6s ease
-}
-
-.progress-striped .progress-bar,
-.progress-bar-striped {
- background-image: -webkit-linear-gradient(45deg, rgba(255, 255, 255, .15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, .15) 50%, rgba(255, 255, 255, .15) 75%, transparent 75%, transparent);
- background-image: -o-linear-gradient(45deg, rgba(255, 255, 255, .15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, .15) 50%, rgba(255, 255, 255, .15) 75%, transparent 75%, transparent);
- background-image: linear-gradient(45deg, rgba(255, 255, 255, .15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, .15) 50%, rgba(255, 255, 255, .15) 75%, transparent 75%, transparent);
- -webkit-background-size: 40px 40px;
- background-size: 40px 40px
-}
-
-.progress.active .progress-bar,
-.progress-bar.active {
- -webkit-animation: progress-bar-stripes 2s linear infinite;
- -o-animation: progress-bar-stripes 2s linear infinite;
- animation: progress-bar-stripes 2s linear infinite
-}
-
-.progress-bar-success {
- background-color: #5cb85c
-}
-
-.progress-striped .progress-bar-success {
- background-image: -webkit-linear-gradient(45deg, rgba(255, 255, 255, .15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, .15) 50%, rgba(255, 255, 255, .15) 75%, transparent 75%, transparent);
- background-image: -o-linear-gradient(45deg, rgba(255, 255, 255, .15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, .15) 50%, rgba(255, 255, 255, .15) 75%, transparent 75%, transparent);
- background-image: linear-gradient(45deg, rgba(255, 255, 255, .15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, .15) 50%, rgba(255, 255, 255, .15) 75%, transparent 75%, transparent)
-}
-
-.progress-bar-info {
- background-color: #5bc0de
-}
-
-.progress-striped .progress-bar-info {
- background-image: -webkit-linear-gradient(45deg, rgba(255, 255, 255, .15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, .15) 50%, rgba(255, 255, 255, .15) 75%, transparent 75%, transparent);
- background-image: -o-linear-gradient(45deg, rgba(255, 255, 255, .15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, .15) 50%, rgba(255, 255, 255, .15) 75%, transparent 75%, transparent);
- background-image: linear-gradient(45deg, rgba(255, 255, 255, .15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, .15) 50%, rgba(255, 255, 255, .15) 75%, transparent 75%, transparent)
-}
-
-.progress-bar-warning {
- background-color: #f0ad4e
-}
-
-.progress-striped .progress-bar-warning {
- background-image: -webkit-linear-gradient(45deg, rgba(255, 255, 255, .15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, .15) 50%, rgba(255, 255, 255, .15) 75%, transparent 75%, transparent);
- background-image: -o-linear-gradient(45deg, rgba(255, 255, 255, .15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, .15) 50%, rgba(255, 255, 255, .15) 75%, transparent 75%, transparent);
- background-image: linear-gradient(45deg, rgba(255, 255, 255, .15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, .15) 50%, rgba(255, 255, 255, .15) 75%, transparent 75%, transparent)
-}
-
-.progress-bar-danger {
- background-color: #d9534f
-}
-
-.progress-striped .progress-bar-danger {
- background-image: -webkit-linear-gradient(45deg, rgba(255, 255, 255, .15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, .15) 50%, rgba(255, 255, 255, .15) 75%, transparent 75%, transparent);
- background-image: -o-linear-gradient(45deg, rgba(255, 255, 255, .15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, .15) 50%, rgba(255, 255, 255, .15) 75%, transparent 75%, transparent);
- background-image: linear-gradient(45deg, rgba(255, 255, 255, .15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, .15) 50%, rgba(255, 255, 255, .15) 75%, transparent 75%, transparent)
-}
-
-.media {
- margin-top: 15px
-}
-
-.media:first-child {
- margin-top: 0
-}
-
-.media-right,
-.media>.pull-right {
- padding-left: 10px
-}
-
-.media-left,
-.media>.pull-left {
- padding-right: 10px
-}
-
-.media-left,
-.media-right,
-.media-body {
- display: table-cell;
- vertical-align: top
-}
-
-.media-middle {
- vertical-align: middle
-}
-
-.media-bottom {
- vertical-align: bottom
-}
-
-.media-heading {
- margin-top: 0;
- margin-bottom: 5px
-}
-
-.media-list {
- padding-left: 0;
- list-style: none
-}
-
-.list-group {
- padding-left: 0;
- margin-bottom: 20px
-}
-
-.list-group-item {
- position: relative;
- display: block;
- padding: 10px 15px;
- margin-bottom: -1px;
- background-color: #fff;
- border: 1px solid #ddd
-}
-
-.list-group-item:first-child {
- border-top-left-radius: 4px;
- border-top-right-radius: 4px
-}
-
-.list-group-item:last-child {
- margin-bottom: 0;
- border-bottom-right-radius: 4px;
- border-bottom-left-radius: 4px
-}
-
-a.list-group-item {
- color: #555
-}
-
-a.list-group-item .list-group-item-heading {
- color: #333
-}
-
-a.list-group-item:hover,
-a.list-group-item:focus {
- color: #555;
- text-decoration: none;
- background-color: #f5f5f5
-}
-
-.list-group-item.disabled,
-.list-group-item.disabled:hover,
-.list-group-item.disabled:focus {
- color: #777;
- cursor: not-allowed;
- background-color: #eee
-}
-
-.list-group-item.disabled .list-group-item-heading,
-.list-group-item.disabled:hover .list-group-item-heading,
-.list-group-item.disabled:focus .list-group-item-heading {
- color: inherit
-}
-
-.list-group-item.disabled .list-group-item-text,
-.list-group-item.disabled:hover .list-group-item-text,
-.list-group-item.disabled:focus .list-group-item-text {
- color: #777
-}
-
-.list-group-item.active,
-.list-group-item.active:hover,
-.list-group-item.active:focus {
- z-index: 2;
- color: #fff;
- background-color: #337ab7;
- border-color: #337ab7
-}
-
-.list-group-item.active .list-group-item-heading,
-.list-group-item.active:hover .list-group-item-heading,
-.list-group-item.active:focus .list-group-item-heading,
-.list-group-item.active .list-group-item-heading>small,
-.list-group-item.active:hover .list-group-item-heading>small,
-.list-group-item.active:focus .list-group-item-heading>small,
-.list-group-item.active .list-group-item-heading>.small,
-.list-group-item.active:hover .list-group-item-heading>.small,
-.list-group-item.active:focus .list-group-item-heading>.small {
- color: inherit
-}
-
-.list-group-item.active .list-group-item-text,
-.list-group-item.active:hover .list-group-item-text,
-.list-group-item.active:focus .list-group-item-text {
- color: #c7ddef
-}
-
-.list-group-item-success {
- color: #3c763d;
- background-color: #dff0d8
-}
-
-a.list-group-item-success {
- color: #3c763d
-}
-
-a.list-group-item-success .list-group-item-heading {
- color: inherit
-}
-
-a.list-group-item-success:hover,
-a.list-group-item-success:focus {
- color: #3c763d;
- background-color: #d0e9c6
-}
-
-a.list-group-item-success.active,
-a.list-group-item-success.active:hover,
-a.list-group-item-success.active:focus {
- color: #fff;
- background-color: #3c763d;
- border-color: #3c763d
-}
-
-.list-group-item-info {
- color: #31708f;
- background-color: #d9edf7
-}
-
-a.list-group-item-info {
- color: #31708f
-}
-
-a.list-group-item-info .list-group-item-heading {
- color: inherit
-}
-
-a.list-group-item-info:hover,
-a.list-group-item-info:focus {
- color: #31708f;
- background-color: #c4e3f3
-}
-
-a.list-group-item-info.active,
-a.list-group-item-info.active:hover,
-a.list-group-item-info.active:focus {
- color: #fff;
- background-color: #31708f;
- border-color: #31708f
-}
-
-.list-group-item-warning {
- color: #8a6d3b;
- background-color: #fcf8e3
-}
-
-a.list-group-item-warning {
- color: #8a6d3b
-}
-
-a.list-group-item-warning .list-group-item-heading {
- color: inherit
-}
-
-a.list-group-item-warning:hover,
-a.list-group-item-warning:focus {
- color: #8a6d3b;
- background-color: #faf2cc
-}
-
-a.list-group-item-warning.active,
-a.list-group-item-warning.active:hover,
-a.list-group-item-warning.active:focus {
- color: #fff;
- background-color: #8a6d3b;
- border-color: #8a6d3b
-}
-
-.list-group-item-danger {
- color: #a94442;
- background-color: #f2dede
-}
-
-a.list-group-item-danger {
- color: #a94442
-}
-
-a.list-group-item-danger .list-group-item-heading {
- color: inherit
-}
-
-a.list-group-item-danger:hover,
-a.list-group-item-danger:focus {
- color: #a94442;
- background-color: #ebcccc
-}
-
-a.list-group-item-danger.active,
-a.list-group-item-danger.active:hover,
-a.list-group-item-danger.active:focus {
- color: #fff;
- background-color: #a94442;
- border-color: #a94442
-}
-
-.list-group-item-heading {
- margin-top: 0;
- margin-bottom: 5px
-}
-
-.list-group-item-text {
- margin-bottom: 0;
- line-height: 1.3
-}
-
-.panel {
- margin-bottom: 20px;
- background-color: #fff;
- border: 1px solid transparent;
- border-radius: 4px;
- -webkit-box-shadow: 0 1px 1px rgba(0, 0, 0, .05);
- box-shadow: 0 1px 1px rgba(0, 0, 0, .05)
-}
-
-.panel-body {
- padding: 15px
-}
-
-.panel-heading {
- padding: 10px 15px;
- border-bottom: 1px solid transparent;
- border-top-left-radius: 3px;
- border-top-right-radius: 3px
-}
-
-.panel-heading>.dropdown .dropdown-toggle {
- color: inherit
-}
-
-.panel-title {
- margin-top: 0;
- margin-bottom: 0;
- font-size: 16px;
- color: inherit
-}
-
-.panel-title>a {
- color: inherit
-}
-
-.panel-footer {
- padding: 10px 15px;
- background-color: #f5f5f5;
- border-top: 1px solid #ddd;
- border-bottom-right-radius: 3px;
- border-bottom-left-radius: 3px
-}
-
-.panel>.list-group,
-.panel>.panel-collapse>.list-group {
- margin-bottom: 0
-}
-
-.panel>.list-group .list-group-item,
-.panel>.panel-collapse>.list-group .list-group-item {
- border-width: 1px 0;
- border-radius: 0
-}
-
-.panel>.list-group:first-child .list-group-item:first-child,
-.panel>.panel-collapse>.list-group:first-child .list-group-item:first-child {
- border-top: 0;
- border-top-left-radius: 3px;
- border-top-right-radius: 3px
-}
-
-.panel>.list-group:last-child .list-group-item:last-child,
-.panel>.panel-collapse>.list-group:last-child .list-group-item:last-child {
- border-bottom: 0;
- border-bottom-right-radius: 3px;
- border-bottom-left-radius: 3px
-}
-
-.panel-heading+.list-group .list-group-item:first-child {
- border-top-width: 0
-}
-
-.list-group+.panel-footer {
- border-top-width: 0
-}
-
-.panel>.table,
-.panel>.table-responsive>.table,
-.panel>.panel-collapse>.table {
- margin-bottom: 0
-}
-
-.panel>.table caption,
-.panel>.table-responsive>.table caption,
-.panel>.panel-collapse>.table caption {
- padding-right: 15px;
- padding-left: 15px
-}
-
-.panel>.table:first-child,
-.panel>.table-responsive:first-child>.table:first-child {
- border-top-left-radius: 3px;
- border-top-right-radius: 3px
-}
-
-.panel>.table:first-child>thead:first-child>tr:first-child,
-.panel>.table-responsive:first-child>.table:first-child>thead:first-child>tr:first-child,
-.panel>.table:first-child>tbody:first-child>tr:first-child,
-.panel>.table-responsive:first-child>.table:first-child>tbody:first-child>tr:first-child {
- border-top-left-radius: 3px;
- border-top-right-radius: 3px
-}
-
-.panel>.table:first-child>thead:first-child>tr:first-child td:first-child,
-.panel>.table-responsive:first-child>.table:first-child>thead:first-child>tr:first-child td:first-child,
-.panel>.table:first-child>tbody:first-child>tr:first-child td:first-child,
-.panel>.table-responsive:first-child>.table:first-child>tbody:first-child>tr:first-child td:first-child,
-.panel>.table:first-child>thead:first-child>tr:first-child th:first-child,
-.panel>.table-responsive:first-child>.table:first-child>thead:first-child>tr:first-child th:first-child,
-.panel>.table:first-child>tbody:first-child>tr:first-child th:first-child,
-.panel>.table-responsive:first-child>.table:first-child>tbody:first-child>tr:first-child th:first-child {
- border-top-left-radius: 3px
-}
-
-.panel>.table:first-child>thead:first-child>tr:first-child td:last-child,
-.panel>.table-responsive:first-child>.table:first-child>thead:first-child>tr:first-child td:last-child,
-.panel>.table:first-child>tbody:first-child>tr:first-child td:last-child,
-.panel>.table-responsive:first-child>.table:first-child>tbody:first-child>tr:first-child td:last-child,
-.panel>.table:first-child>thead:first-child>tr:first-child th:last-child,
-.panel>.table-responsive:first-child>.table:first-child>thead:first-child>tr:first-child th:last-child,
-.panel>.table:first-child>tbody:first-child>tr:first-child th:last-child,
-.panel>.table-responsive:first-child>.table:first-child>tbody:first-child>tr:first-child th:last-child {
- border-top-right-radius: 3px
-}
-
-.panel>.table:last-child,
-.panel>.table-responsive:last-child>.table:last-child {
- border-bottom-right-radius: 3px;
- border-bottom-left-radius: 3px
-}
-
-.panel>.table:last-child>tbody:last-child>tr:last-child,
-.panel>.table-responsive:last-child>.table:last-child>tbody:last-child>tr:last-child,
-.panel>.table:last-child>tfoot:last-child>tr:last-child,
-.panel>.table-responsive:last-child>.table:last-child>tfoot:last-child>tr:last-child {
- border-bottom-right-radius: 3px;
- border-bottom-left-radius: 3px
-}
-
-.panel>.table:last-child>tbody:last-child>tr:last-child td:first-child,
-.panel>.table-responsive:last-child>.table:last-child>tbody:last-child>tr:last-child td:first-child,
-.panel>.table:last-child>tfoot:last-child>tr:last-child td:first-child,
-.panel>.table-responsive:last-child>.table:last-child>tfoot:last-child>tr:last-child td:first-child,
-.panel>.table:last-child>tbody:last-child>tr:last-child th:first-child,
-.panel>.table-responsive:last-child>.table:last-child>tbody:last-child>tr:last-child th:first-child,
-.panel>.table:last-child>tfoot:last-child>tr:last-child th:first-child,
-.panel>.table-responsive:last-child>.table:last-child>tfoot:last-child>tr:last-child th:first-child {
- border-bottom-left-radius: 3px
-}
-
-.panel>.table:last-child>tbody:last-child>tr:last-child td:last-child,
-.panel>.table-responsive:last-child>.table:last-child>tbody:last-child>tr:last-child td:last-child,
-.panel>.table:last-child>tfoot:last-child>tr:last-child td:last-child,
-.panel>.table-responsive:last-child>.table:last-child>tfoot:last-child>tr:last-child td:last-child,
-.panel>.table:last-child>tbody:last-child>tr:last-child th:last-child,
-.panel>.table-responsive:last-child>.table:last-child>tbody:last-child>tr:last-child th:last-child,
-.panel>.table:last-child>tfoot:last-child>tr:last-child th:last-child,
-.panel>.table-responsive:last-child>.table:last-child>tfoot:last-child>tr:last-child th:last-child {
- border-bottom-right-radius: 3px
-}
-
-.panel>.panel-body+.table,
-.panel>.panel-body+.table-responsive,
-.panel>.table+.panel-body,
-.panel>.table-responsive+.panel-body {
- border-top: 1px solid #ddd
-}
-
-.panel>.table>tbody:first-child>tr:first-child th,
-.panel>.table>tbody:first-child>tr:first-child td {
- border-top: 0
-}
-
-.panel>.table-bordered,
-.panel>.table-responsive>.table-bordered {
- border: 0
-}
-
-.panel>.table-bordered>thead>tr>th:first-child,
-.panel>.table-responsive>.table-bordered>thead>tr>th:first-child,
-.panel>.table-bordered>tbody>tr>th:first-child,
-.panel>.table-responsive>.table-bordered>tbody>tr>th:first-child,
-.panel>.table-bordered>tfoot>tr>th:first-child,
-.panel>.table-responsive>.table-bordered>tfoot>tr>th:first-child,
-.panel>.table-bordered>thead>tr>td:first-child,
-.panel>.table-responsive>.table-bordered>thead>tr>td:first-child,
-.panel>.table-bordered>tbody>tr>td:first-child,
-.panel>.table-responsive>.table-bordered>tbody>tr>td:first-child,
-.panel>.table-bordered>tfoot>tr>td:first-child,
-.panel>.table-responsive>.table-bordered>tfoot>tr>td:first-child {
- border-left: 0
-}
-
-.panel>.table-bordered>thead>tr>th:last-child,
-.panel>.table-responsive>.table-bordered>thead>tr>th:last-child,
-.panel>.table-bordered>tbody>tr>th:last-child,
-.panel>.table-responsive>.table-bordered>tbody>tr>th:last-child,
-.panel>.table-bordered>tfoot>tr>th:last-child,
-.panel>.table-responsive>.table-bordered>tfoot>tr>th:last-child,
-.panel>.table-bordered>thead>tr>td:last-child,
-.panel>.table-responsive>.table-bordered>thead>tr>td:last-child,
-.panel>.table-bordered>tbody>tr>td:last-child,
-.panel>.table-responsive>.table-bordered>tbody>tr>td:last-child,
-.panel>.table-bordered>tfoot>tr>td:last-child,
-.panel>.table-responsive>.table-bordered>tfoot>tr>td:last-child {
- border-right: 0
-}
-
-.panel>.table-bordered>thead>tr:first-child>td,
-.panel>.table-responsive>.table-bordered>thead>tr:first-child>td,
-.panel>.table-bordered>tbody>tr:first-child>td,
-.panel>.table-responsive>.table-bordered>tbody>tr:first-child>td,
-.panel>.table-bordered>thead>tr:first-child>th,
-.panel>.table-responsive>.table-bordered>thead>tr:first-child>th,
-.panel>.table-bordered>tbody>tr:first-child>th,
-.panel>.table-responsive>.table-bordered>tbody>tr:first-child>th {
- border-bottom: 0
-}
-
-.panel>.table-bordered>tbody>tr:last-child>td,
-.panel>.table-responsive>.table-bordered>tbody>tr:last-child>td,
-.panel>.table-bordered>tfoot>tr:last-child>td,
-.panel>.table-responsive>.table-bordered>tfoot>tr:last-child>td,
-.panel>.table-bordered>tbody>tr:last-child>th,
-.panel>.table-responsive>.table-bordered>tbody>tr:last-child>th,
-.panel>.table-bordered>tfoot>tr:last-child>th,
-.panel>.table-responsive>.table-bordered>tfoot>tr:last-child>th {
- border-bottom: 0
-}
-
-.panel>.table-responsive {
- margin-bottom: 0;
- border: 0
-}
-
-.panel-group {
- margin-bottom: 20px
-}
-
-.panel-group .panel {
- margin-bottom: 0;
- border-radius: 4px
-}
-
-.panel-group .panel+.panel {
- margin-top: 5px
-}
-
-.panel-group .panel-heading {
- border-bottom: 0
-}
-
-.panel-group .panel-heading+.panel-collapse>.panel-body,
-.panel-group .panel-heading+.panel-collapse>.list-group {
- border-top: 1px solid #ddd
-}
-
-.panel-group .panel-footer {
- border-top: 0
-}
-
-.panel-group .panel-footer+.panel-collapse .panel-body {
- border-bottom: 1px solid #ddd
-}
-
-.panel-default {
- border-color: #ddd
-}
-
-.panel-default>.panel-heading {
- color: #333;
- background-color: #f5f5f5;
- border-color: #ddd
-}
-
-.panel-default>.panel-heading+.panel-collapse>.panel-body {
- border-top-color: #ddd
-}
-
-.panel-default>.panel-heading .badge {
- color: #f5f5f5;
- background-color: #333
-}
-
-.panel-default>.panel-footer+.panel-collapse>.panel-body {
- border-bottom-color: #ddd
-}
-
-.panel-primary {
- border-color: #337ab7
-}
-
-.panel-primary>.panel-heading {
- color: #fff;
- background-color: #337ab7;
- border-color: #337ab7
-}
-
-.panel-primary>.panel-heading+.panel-collapse>.panel-body {
- border-top-color: #337ab7
-}
-
-.panel-primary>.panel-heading .badge {
- color: #337ab7;
- background-color: #fff
-}
-
-.panel-primary>.panel-footer+.panel-collapse>.panel-body {
- border-bottom-color: #337ab7
-}
-
-.panel-success {
- border-color: #d6e9c6
-}
-
-.panel-success>.panel-heading {
- color: #3c763d;
- background-color: #dff0d8;
- border-color: #d6e9c6
-}
-
-.panel-success>.panel-heading+.panel-collapse>.panel-body {
- border-top-color: #d6e9c6
-}
-
-.panel-success>.panel-heading .badge {
- color: #dff0d8;
- background-color: #3c763d
-}
-
-.panel-success>.panel-footer+.panel-collapse>.panel-body {
- border-bottom-color: #d6e9c6
-}
-
-.panel-info {
- border-color: #bce8f1
-}
-
-.panel-info>.panel-heading {
- color: #31708f;
- background-color: #d9edf7;
- border-color: #bce8f1
-}
-
-.panel-info>.panel-heading+.panel-collapse>.panel-body {
- border-top-color: #bce8f1
-}
-
-.panel-info>.panel-heading .badge {
- color: #d9edf7;
- background-color: #31708f
-}
-
-.panel-info>.panel-footer+.panel-collapse>.panel-body {
- border-bottom-color: #bce8f1
-}
-
-.panel-warning {
- border-color: #faebcc
-}
-
-.panel-warning>.panel-heading {
- color: #8a6d3b;
- background-color: #fcf8e3;
- border-color: #faebcc
-}
-
-.panel-warning>.panel-heading+.panel-collapse>.panel-body {
- border-top-color: #faebcc
-}
-
-.panel-warning>.panel-heading .badge {
- color: #fcf8e3;
- background-color: #8a6d3b
-}
-
-.panel-warning>.panel-footer+.panel-collapse>.panel-body {
- border-bottom-color: #faebcc
-}
-
-.panel-danger {
- border-color: #ebccd1
-}
-
-.panel-danger>.panel-heading {
- color: #a94442;
- background-color: #f2dede;
- border-color: #ebccd1
-}
-
-.panel-danger>.panel-heading+.panel-collapse>.panel-body {
- border-top-color: #ebccd1
-}
-
-.panel-danger>.panel-heading .badge {
- color: #f2dede;
- background-color: #a94442
-}
-
-.panel-danger>.panel-footer+.panel-collapse>.panel-body {
- border-bottom-color: #ebccd1
-}
-
-.embed-responsive {
- position: relative;
- display: block;
- height: 0;
- padding: 0;
- overflow: hidden
-}
-
-.embed-responsive .embed-responsive-item,
-.embed-responsive iframe,
-.embed-responsive embed,
-.embed-responsive object,
-.embed-responsive video {
- position: absolute;
- top: 0;
- bottom: 0;
- left: 0;
- width: 100%;
- height: 100%;
- border: 0
-}
-
-.embed-responsive.embed-responsive-16by9 {
- padding-bottom: 56.25%
-}
-
-.embed-responsive.embed-responsive-4by3 {
- padding-bottom: 75%
-}
-
-.well {
- min-height: 20px;
- padding: 19px;
- margin-bottom: 20px;
- background-color: #f5f5f5;
- border: 1px solid #e3e3e3;
- border-radius: 4px;
- -webkit-box-shadow: inset 0 1px 1px rgba(0, 0, 0, .05);
- box-shadow: inset 0 1px 1px rgba(0, 0, 0, .05)
-}
-
-.well blockquote {
- border-color: #ddd;
- border-color: rgba(0, 0, 0, .15)
-}
-
-.well-lg {
- padding: 24px;
- border-radius: 6px
-}
-
-.well-sm {
- padding: 9px;
- border-radius: 3px
-}
-
-.close {
- float: right;
- font-size: 21px;
- font-weight: 700;
- line-height: 1;
- color: #000;
- text-shadow: 0 1px 0 #fff;
- filter: alpha(opacity=20);
- opacity: .2
-}
-
-.close:hover,
-.close:focus {
- color: #000;
- text-decoration: none;
- cursor: pointer;
- filter: alpha(opacity=50);
- opacity: .5
-}
-
-button.close {
- -webkit-appearance: none;
- padding: 0;
- cursor: pointer;
- background: 0 0;
- border: 0
-}
-
-.modal-open {
- overflow: hidden
-}
-
-.modal {
- position: fixed;
- top: 0;
- right: 0;
- bottom: 0;
- left: 0;
- z-index: 1050;
- display: none;
- overflow: hidden;
- -webkit-overflow-scrolling: touch;
- outline: 0
-}
-
-.modal.fade .modal-dialog {
- -webkit-transition: -webkit-transform .3s ease-out;
- -o-transition: -o-transform .3s ease-out;
- transition: transform .3s ease-out;
- -webkit-transform: translate(0, -25%);
- -ms-transform: translate(0, -25%);
- -o-transform: translate(0, -25%);
- transform: translate(0, -25%)
-}
-
-.modal.in .modal-dialog {
- -webkit-transform: translate(0, 0);
- -ms-transform: translate(0, 0);
- -o-transform: translate(0, 0);
- transform: translate(0, 0)
-}
-
-.modal-open .modal {
- overflow-x: hidden;
- overflow-y: auto
-}
-
-.modal-dialog {
- position: relative;
- width: auto;
- margin: 10px
-}
-
-.modal-content {
- position: relative;
- background-color: #fff;
- -webkit-background-clip: padding-box;
- background-clip: padding-box;
- border: 1px solid #999;
- border: 1px solid rgba(0, 0, 0, .2);
- border-radius: 6px;
- outline: 0;
- -webkit-box-shadow: 0 3px 9px rgba(0, 0, 0, .5);
- box-shadow: 0 3px 9px rgba(0, 0, 0, .5)
-}
-
-.modal-backdrop {
- position: absolute;
- top: 0;
- right: 0;
- left: 0;
- background-color: #000
-}
-
-.modal-backdrop.fade {
- filter: alpha(opacity=0);
- opacity: 0
-}
-
-.modal-backdrop.in {
- filter: alpha(opacity=50);
- opacity: .5
-}
-
-.modal-header {
- min-height: 16.43px;
- padding: 15px;
- border-bottom: 1px solid #e5e5e5
-}
-
-.modal-header .close {
- margin-top: -2px
-}
-
-.modal-title {
- margin: 0;
- line-height: 1.42857143
-}
-
-.modal-body {
- position: relative;
- padding: 15px
-}
-
-.modal-footer {
- padding: 15px;
- text-align: right;
- border-top: 1px solid #e5e5e5
-}
-
-.modal-footer .btn+.btn {
- margin-bottom: 0;
- margin-left: 5px
-}
-
-.modal-footer .btn-group .btn+.btn {
- margin-left: -1px
-}
-
-.modal-footer .btn-block+.btn-block {
- margin-left: 0
-}
-
-.modal-scrollbar-measure {
- position: absolute;
- top: -9999px;
- width: 50px;
- height: 50px;
- overflow: scroll
-}
-
-@media (min-width:768px) {
- .modal-dialog {
- width: 600px;
- margin: 30px auto
- }
-
- .modal-content {
- -webkit-box-shadow: 0 5px 15px rgba(0, 0, 0, .5);
- box-shadow: 0 5px 15px rgba(0, 0, 0, .5)
- }
-
- .modal-sm {
- width: 300px
- }
-}
-
-@media (min-width:992px) {
- .modal-lg {
- width: 900px
- }
-}
-
-.tooltip {
- position: absolute;
- z-index: 1070;
- display: block;
- font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
- font-size: 12px;
- font-weight: 400;
- line-height: 1.4;
- visibility: visible;
- filter: alpha(opacity=0);
- opacity: 0
-}
-
-.tooltip.in {
- filter: alpha(opacity=90);
- opacity: .9
-}
-
-.tooltip.top {
- padding: 5px 0;
- margin-top: -3px
-}
-
-.tooltip.right {
- padding: 0 5px;
- margin-left: 3px
-}
-
-.tooltip.bottom {
- padding: 5px 0;
- margin-top: 3px
-}
-
-.tooltip.left {
- padding: 0 5px;
- margin-left: -3px
-}
-
-.tooltip-inner {
- max-width: 200px;
- padding: 3px 8px;
- color: #fff;
- text-align: center;
- text-decoration: none;
- background-color: #000;
- border-radius: 4px
-}
-
-.tooltip-arrow {
- position: absolute;
- width: 0;
- height: 0;
- border-color: transparent;
- border-style: solid
-}
-
-.tooltip.top .tooltip-arrow {
- bottom: 0;
- left: 50%;
- margin-left: -5px;
- border-width: 5px 5px 0;
- border-top-color: #000
-}
-
-.tooltip.top-left .tooltip-arrow {
- right: 5px;
- bottom: 0;
- margin-bottom: -5px;
- border-width: 5px 5px 0;
- border-top-color: #000
-}
-
-.tooltip.top-right .tooltip-arrow {
- bottom: 0;
- left: 5px;
- margin-bottom: -5px;
- border-width: 5px 5px 0;
- border-top-color: #000
-}
-
-.tooltip.right .tooltip-arrow {
- top: 50%;
- left: 0;
- margin-top: -5px;
- border-width: 5px 5px 5px 0;
- border-right-color: #000
-}
-
-.tooltip.left .tooltip-arrow {
- top: 50%;
- right: 0;
- margin-top: -5px;
- border-width: 5px 0 5px 5px;
- border-left-color: #000
-}
-
-.tooltip.bottom .tooltip-arrow {
- top: 0;
- left: 50%;
- margin-left: -5px;
- border-width: 0 5px 5px;
- border-bottom-color: #000
-}
-
-.tooltip.bottom-left .tooltip-arrow {
- top: 0;
- right: 5px;
- margin-top: -5px;
- border-width: 0 5px 5px;
- border-bottom-color: #000
-}
-
-.tooltip.bottom-right .tooltip-arrow {
- top: 0;
- left: 5px;
- margin-top: -5px;
- border-width: 0 5px 5px;
- border-bottom-color: #000
-}
-
-.popover {
- position: absolute;
- top: 0;
- left: 0;
- z-index: 1060;
- display: none;
- max-width: 276px;
- padding: 1px;
- font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
- font-size: 14px;
- font-weight: 400;
- line-height: 1.42857143;
- text-align: left;
- white-space: normal;
- background-color: #fff;
- -webkit-background-clip: padding-box;
- background-clip: padding-box;
- border: 1px solid #ccc;
- border: 1px solid rgba(0, 0, 0, .2);
- border-radius: 6px;
- -webkit-box-shadow: 0 5px 10px rgba(0, 0, 0, .2);
- box-shadow: 0 5px 10px rgba(0, 0, 0, .2)
-}
-
-.popover.top {
- margin-top: -10px
-}
-
-.popover.right {
- margin-left: 10px
-}
-
-.popover.bottom {
- margin-top: 10px
-}
-
-.popover.left {
- margin-left: -10px
-}
-
-.popover-title {
- padding: 8px 14px;
- margin: 0;
- font-size: 14px;
- background-color: #f7f7f7;
- border-bottom: 1px solid #ebebeb;
- border-radius: 5px 5px 0 0
-}
-
-.popover-content {
- padding: 9px 14px
-}
-
-.popover>.arrow,
-.popover>.arrow:after {
- position: absolute;
- display: block;
- width: 0;
- height: 0;
- border-color: transparent;
- border-style: solid
-}
-
-.popover>.arrow {
- border-width: 11px
-}
-
-.popover>.arrow:after {
- content: "";
- border-width: 10px
-}
-
-.popover.top>.arrow {
- bottom: -11px;
- left: 50%;
- margin-left: -11px;
- border-top-color: #999;
- border-top-color: rgba(0, 0, 0, .25);
- border-bottom-width: 0
-}
-
-.popover.top>.arrow:after {
- bottom: 1px;
- margin-left: -10px;
- content: " ";
- border-top-color: #fff;
- border-bottom-width: 0
-}
-
-.popover.right>.arrow {
- top: 50%;
- left: -11px;
- margin-top: -11px;
- border-right-color: #999;
- border-right-color: rgba(0, 0, 0, .25);
- border-left-width: 0
-}
-
-.popover.right>.arrow:after {
- bottom: -10px;
- left: 1px;
- content: " ";
- border-right-color: #fff;
- border-left-width: 0
-}
-
-.popover.bottom>.arrow {
- top: -11px;
- left: 50%;
- margin-left: -11px;
- border-top-width: 0;
- border-bottom-color: #999;
- border-bottom-color: rgba(0, 0, 0, .25)
-}
-
-.popover.bottom>.arrow:after {
- top: 1px;
- margin-left: -10px;
- content: " ";
- border-top-width: 0;
- border-bottom-color: #fff
-}
-
-.popover.left>.arrow {
- top: 50%;
- right: -11px;
- margin-top: -11px;
- border-right-width: 0;
- border-left-color: #999;
- border-left-color: rgba(0, 0, 0, .25)
-}
-
-.popover.left>.arrow:after {
- right: 1px;
- bottom: -10px;
- content: " ";
- border-right-width: 0;
- border-left-color: #fff
-}
-
-.carousel {
- position: relative
-}
-
-.carousel-inner {
- position: relative;
- width: 100%;
- overflow: hidden
-}
-
-.carousel-inner>.item {
- position: relative;
- display: none;
- -webkit-transition: .6s ease-in-out left;
- -o-transition: .6s ease-in-out left;
- transition: .6s ease-in-out left
-}
-
-.carousel-inner>.item>img,
-.carousel-inner>.item>a>img {
- line-height: 1
-}
-
-@media all and (transform-3d),
-(-webkit-transform-3d) {
- .carousel-inner>.item {
- -webkit-transition: -webkit-transform .6s ease-in-out;
- -o-transition: -o-transform .6s ease-in-out;
- transition: transform .6s ease-in-out;
- -webkit-backface-visibility: hidden;
- backface-visibility: hidden;
- -webkit-perspective: 1000;
- perspective: 1000
- }
-
- .carousel-inner>.item.next,
- .carousel-inner>.item.active.right {
- left: 0;
- -webkit-transform: translate3d(100%, 0, 0);
- transform: translate3d(100%, 0, 0)
- }
-
- .carousel-inner>.item.prev,
- .carousel-inner>.item.active.left {
- left: 0;
- -webkit-transform: translate3d(-100%, 0, 0);
- transform: translate3d(-100%, 0, 0)
- }
-
- .carousel-inner>.item.next.left,
- .carousel-inner>.item.prev.right,
- .carousel-inner>.item.active {
- left: 0;
- -webkit-transform: translate3d(0, 0, 0);
- transform: translate3d(0, 0, 0)
- }
-}
-
-.carousel-inner>.active,
-.carousel-inner>.next,
-.carousel-inner>.prev {
- display: block
-}
-
-.carousel-inner>.active {
- left: 0
-}
-
-.carousel-inner>.next,
-.carousel-inner>.prev {
- position: absolute;
- top: 0;
- width: 100%
-}
-
-.carousel-inner>.next {
- left: 100%
-}
-
-.carousel-inner>.prev {
- left: -100%
-}
-
-.carousel-inner>.next.left,
-.carousel-inner>.prev.right {
- left: 0
-}
-
-.carousel-inner>.active.left {
- left: -100%
-}
-
-.carousel-inner>.active.right {
- left: 100%
-}
-
-.carousel-control {
- position: absolute;
- top: 0;
- bottom: 0;
- left: 0;
- width: 15%;
- font-size: 20px;
- color: #fff;
- text-align: center;
- text-shadow: 0 1px 2px rgba(0, 0, 0, .6);
- filter: alpha(opacity=50);
- opacity: .5
-}
-
-.carousel-control.left {
- background-image: -webkit-linear-gradient(left, rgba(0, 0, 0, .5) 0, rgba(0, 0, 0, .0001) 100%);
- background-image: -o-linear-gradient(left, rgba(0, 0, 0, .5) 0, rgba(0, 0, 0, .0001) 100%);
- background-image: -webkit-gradient(linear, left top, right top, from(rgba(0, 0, 0, .5)), to(rgba(0, 0, 0, .0001)));
- background-image: linear-gradient(to right, rgba(0, 0, 0, .5) 0, rgba(0, 0, 0, .0001) 100%);
- filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#80000000', endColorstr='#00000000', GradientType=1);
- background-repeat: repeat-x
-}
-
-.carousel-control.right {
- right: 0;
- left: auto;
- background-image: -webkit-linear-gradient(left, rgba(0, 0, 0, .0001) 0, rgba(0, 0, 0, .5) 100%);
- background-image: -o-linear-gradient(left, rgba(0, 0, 0, .0001) 0, rgba(0, 0, 0, .5) 100%);
- background-image: -webkit-gradient(linear, left top, right top, from(rgba(0, 0, 0, .0001)), to(rgba(0, 0, 0, .5)));
- background-image: linear-gradient(to right, rgba(0, 0, 0, .0001) 0, rgba(0, 0, 0, .5) 100%);
- filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#00000000', endColorstr='#80000000', GradientType=1);
- background-repeat: repeat-x
-}
-
-.carousel-control:hover,
-.carousel-control:focus {
- color: #fff;
- text-decoration: none;
- filter: alpha(opacity=90);
- outline: 0;
- opacity: .9
-}
-
-.carousel-control .icon-prev,
-.carousel-control .icon-next,
-.carousel-control .glyphicon-chevron-left,
-.carousel-control .glyphicon-chevron-right {
- position: absolute;
- top: 50%;
- z-index: 5;
- display: inline-block
-}
-
-.carousel-control .icon-prev,
-.carousel-control .glyphicon-chevron-left {
- left: 50%;
- margin-left: -10px
-}
-
-.carousel-control .icon-next,
-.carousel-control .glyphicon-chevron-right {
- right: 50%;
- margin-right: -10px
-}
-
-.carousel-control .icon-prev,
-.carousel-control .icon-next {
- width: 20px;
- height: 20px;
- margin-top: -10px;
- font-family: serif
-}
-
-.carousel-control .icon-prev:before {
- content: '\2039'
-}
-
-.carousel-control .icon-next:before {
- content: '\203a'
-}
-
-.carousel-indicators {
- position: absolute;
- bottom: 10px;
- left: 50%;
- z-index: 15;
- width: 60%;
- padding-left: 0;
- margin-left: -30%;
- text-align: center;
- list-style: none
-}
-
-.carousel-indicators li {
- display: inline-block;
- width: 10px;
- height: 10px;
- margin: 1px;
- text-indent: -999px;
- cursor: pointer;
- background-color: #000 \9;
- background-color: rgba(0, 0, 0, 0);
- border: 1px solid #fff;
- border-radius: 10px
-}
-
-.carousel-indicators .active {
- width: 12px;
- height: 12px;
- margin: 0;
- background-color: #fff
-}
-
-.carousel-caption {
- position: absolute;
- right: 15%;
- bottom: 20px;
- left: 15%;
- z-index: 10;
- padding-top: 20px;
- padding-bottom: 20px;
- color: #fff;
- text-align: center;
- text-shadow: 0 1px 2px rgba(0, 0, 0, .6)
-}
-
-.carousel-caption .btn {
- text-shadow: none
-}
-
-@media screen and (min-width:768px) {
-
- .carousel-control .glyphicon-chevron-left,
- .carousel-control .glyphicon-chevron-right,
- .carousel-control .icon-prev,
- .carousel-control .icon-next {
- width: 30px;
- height: 30px;
- margin-top: -15px;
- font-size: 30px
- }
-
- .carousel-control .glyphicon-chevron-left,
- .carousel-control .icon-prev {
- margin-left: -15px
- }
-
- .carousel-control .glyphicon-chevron-right,
- .carousel-control .icon-next {
- margin-right: -15px
- }
-
- .carousel-caption {
- right: 20%;
- left: 20%;
- padding-bottom: 30px
- }
-
- .carousel-indicators {
- bottom: 20px
- }
-}
-
-.clearfix:before,
-.clearfix:after,
-.dl-horizontal dd:before,
-.dl-horizontal dd:after,
-.container:before,
-.container:after,
-.container-fluid:before,
-.container-fluid:after,
-.row:before,
-.row:after,
-.form-horizontal .form-group:before,
-.form-horizontal .form-group:after,
-.btn-toolbar:before,
-.btn-toolbar:after,
-.btn-group-vertical>.btn-group:before,
-.btn-group-vertical>.btn-group:after,
-.nav:before,
-.nav:after,
-.navbar:before,
-.navbar:after,
-.navbar-header:before,
-.navbar-header:after,
-.navbar-collapse:before,
-.navbar-collapse:after,
-.pager:before,
-.pager:after,
-.panel-body:before,
-.panel-body:after,
-.modal-footer:before,
-.modal-footer:after {
- display: table;
- content: " "
-}
-
-.clearfix:after,
-.dl-horizontal dd:after,
-.container:after,
-.container-fluid:after,
-.row:after,
-.form-horizontal .form-group:after,
-.btn-toolbar:after,
-.btn-group-vertical>.btn-group:after,
-.nav:after,
-.navbar:after,
-.navbar-header:after,
-.navbar-collapse:after,
-.pager:after,
-.panel-body:after,
-.modal-footer:after {
- clear: both
-}
-
-.center-block {
- display: block;
- margin-right: auto;
- margin-left: auto
-}
-
-.pull-right {
- float: right !important
-}
-
-.pull-left {
- float: left !important
-}
-
-.hide {
- display: none !important
-}
-
-.show {
- display: block !important
-}
-
-.invisible {
- visibility: hidden
-}
-
-.text-hide {
- font: 0/0 a;
- color: transparent;
- text-shadow: none;
- background-color: transparent;
- border: 0
-}
-
-.hidden {
- display: none !important;
- visibility: hidden !important
-}
-
-.affix {
- position: fixed
-}
-
-@-ms-viewport {
- width: device-width
-}
-
-.visible-xs,
-.visible-sm,
-.visible-md,
-.visible-lg {
- display: none !important
-}
-
-.visible-xs-block,
-.visible-xs-inline,
-.visible-xs-inline-block,
-.visible-sm-block,
-.visible-sm-inline,
-.visible-sm-inline-block,
-.visible-md-block,
-.visible-md-inline,
-.visible-md-inline-block,
-.visible-lg-block,
-.visible-lg-inline,
-.visible-lg-inline-block {
- display: none !important
-}
-
-@media (max-width:767px) {
- .visible-xs {
- display: block !important
- }
-
- table.visible-xs {
- display: table
- }
-
- tr.visible-xs {
- display: table-row !important
- }
-
- th.visible-xs,
- td.visible-xs {
- display: table-cell !important
- }
-}
-
-@media (max-width:767px) {
- .visible-xs-block {
- display: block !important
- }
-}
-
-@media (max-width:767px) {
- .visible-xs-inline {
- display: inline !important
- }
-}
-
-@media (max-width:767px) {
- .visible-xs-inline-block {
- display: inline-block !important
- }
-}
-
-@media (min-width:768px) and (max-width:991px) {
- .visible-sm {
- display: block !important
- }
-
- table.visible-sm {
- display: table
- }
-
- tr.visible-sm {
- display: table-row !important
- }
-
- th.visible-sm,
- td.visible-sm {
- display: table-cell !important
- }
-}
-
-@media (min-width:768px) and (max-width:991px) {
- .visible-sm-block {
- display: block !important
- }
-}
-
-@media (min-width:768px) and (max-width:991px) {
- .visible-sm-inline {
- display: inline !important
- }
-}
-
-@media (min-width:768px) and (max-width:991px) {
- .visible-sm-inline-block {
- display: inline-block !important
- }
-}
-
-@media (min-width:992px) and (max-width:1199px) {
- .visible-md {
- display: block !important
- }
-
- table.visible-md {
- display: table
- }
-
- tr.visible-md {
- display: table-row !important
- }
-
- th.visible-md,
- td.visible-md {
- display: table-cell !important
- }
-}
-
-@media (min-width:992px) and (max-width:1199px) {
- .visible-md-block {
- display: block !important
- }
-}
-
-@media (min-width:992px) and (max-width:1199px) {
- .visible-md-inline {
- display: inline !important
- }
-}
-
-@media (min-width:992px) and (max-width:1199px) {
- .visible-md-inline-block {
- display: inline-block !important
- }
-}
-
-@media (min-width:1200px) {
- .visible-lg {
- display: block !important
- }
-
- table.visible-lg {
- display: table
- }
-
- tr.visible-lg {
- display: table-row !important
- }
-
- th.visible-lg,
- td.visible-lg {
- display: table-cell !important
- }
-}
-
-@media (min-width:1200px) {
- .visible-lg-block {
- display: block !important
- }
-}
-
-@media (min-width:1200px) {
- .visible-lg-inline {
- display: inline !important
- }
-}
-
-@media (min-width:1200px) {
- .visible-lg-inline-block {
- display: inline-block !important
- }
-}
-
-@media (max-width:767px) {
- .hidden-xs {
- display: none !important
- }
-}
-
-@media (min-width:768px) and (max-width:991px) {
- .hidden-sm {
- display: none !important
- }
-}
-
-@media (min-width:992px) and (max-width:1199px) {
- .hidden-md {
- display: none !important
- }
-}
-
-@media (min-width:1200px) {
- .hidden-lg {
- display: none !important
- }
-}
-
-.visible-print {
- display: none !important
-}
-
-@media print {
- .visible-print {
- display: block !important
- }
-
- table.visible-print {
- display: table
- }
-
- tr.visible-print {
- display: table-row !important
- }
-
- th.visible-print,
- td.visible-print {
- display: table-cell !important
- }
-}
-
-.visible-print-block {
- display: none !important
-}
-
-@media print {
- .visible-print-block {
- display: block !important
- }
-}
-
-.visible-print-inline {
- display: none !important
-}
-
-@media print {
- .visible-print-inline {
- display: inline !important
- }
-}
-
-.visible-print-inline-block {
- display: none !important
-}
-
-@media print {
- .visible-print-inline-block {
- display: inline-block !important
- }
-}
-
-@media print {
- .hidden-print {
- display: none !important
- }
-}
\ No newline at end of file
diff --git a/spaces/nakamura196/yolov5-ndl-layout/ultralytics/yolov5/utils/autoanchor.py b/spaces/nakamura196/yolov5-ndl-layout/ultralytics/yolov5/utils/autoanchor.py
deleted file mode 100644
index 77518abe9889c8259c647c2f6da3a931f3f6a7cc..0000000000000000000000000000000000000000
--- a/spaces/nakamura196/yolov5-ndl-layout/ultralytics/yolov5/utils/autoanchor.py
+++ /dev/null
@@ -1,170 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-AutoAnchor utils
-"""
-
-import random
-
-import numpy as np
-import torch
-import yaml
-from tqdm import tqdm
-
-from utils.general import LOGGER, colorstr, emojis
-
-PREFIX = colorstr('AutoAnchor: ')
-
-
-def check_anchor_order(m):
- # Check anchor order against stride order for YOLOv5 Detect() module m, and correct if necessary
- a = m.anchors.prod(-1).mean(-1).view(-1) # mean anchor area per output layer
- da = a[-1] - a[0] # delta a
- ds = m.stride[-1] - m.stride[0] # delta s
-    if da and (da.sign() != ds.sign()):  # anchor order does not match stride order
- LOGGER.info(f'{PREFIX}Reversing anchor order')
- m.anchors[:] = m.anchors.flip(0)
-
-
-def check_anchors(dataset, model, thr=4.0, imgsz=640):
- # Check anchor fit to data, recompute if necessary
- m = model.module.model[-1] if hasattr(model, 'module') else model.model[-1] # Detect()
- shapes = imgsz * dataset.shapes / dataset.shapes.max(1, keepdims=True)
- scale = np.random.uniform(0.9, 1.1, size=(shapes.shape[0], 1)) # augment scale
- wh = torch.tensor(np.concatenate([l[:, 3:5] * s for s, l in zip(shapes * scale, dataset.labels)])).float() # wh
-
- def metric(k): # compute metric
- r = wh[:, None] / k[None]
- x = torch.min(r, 1 / r).min(2)[0] # ratio metric
- best = x.max(1)[0] # best_x
- aat = (x > 1 / thr).float().sum(1).mean() # anchors above threshold
- bpr = (best > 1 / thr).float().mean() # best possible recall
- return bpr, aat
-
- stride = m.stride.to(m.anchors.device).view(-1, 1, 1) # model strides
- anchors = m.anchors.clone() * stride # current anchors
- bpr, aat = metric(anchors.cpu().view(-1, 2))
- s = f'\n{PREFIX}{aat:.2f} anchors/target, {bpr:.3f} Best Possible Recall (BPR). '
-    if bpr > 0.98:  # above this threshold the current anchors are kept as-is
- LOGGER.info(emojis(f'{s}Current anchors are a good fit to dataset ✅'))
- else:
- LOGGER.info(emojis(f'{s}Anchors are a poor fit to dataset ⚠️, attempting to improve...'))
- na = m.anchors.numel() // 2 # number of anchors
- try:
- anchors = kmean_anchors(dataset, n=na, img_size=imgsz, thr=thr, gen=1000, verbose=False)
- except Exception as e:
- LOGGER.info(f'{PREFIX}ERROR: {e}')
- new_bpr = metric(anchors)[0]
- if new_bpr > bpr: # replace anchors
- anchors = torch.tensor(anchors, device=m.anchors.device).type_as(m.anchors)
- m.anchors[:] = anchors.clone().view_as(m.anchors)
- check_anchor_order(m) # must be in pixel-space (not grid-space)
- m.anchors /= stride
- s = f'{PREFIX}Done ✅ (optional: update model *.yaml to use these anchors in the future)'
- else:
- s = f'{PREFIX}Done ⚠️ (original anchors better than new anchors, proceeding with original anchors)'
- LOGGER.info(emojis(s))
-
-
-def kmean_anchors(dataset='./data/coco128.yaml', n=9, img_size=640, thr=4.0, gen=1000, verbose=True):
- """ Creates kmeans-evolved anchors from training dataset
-
- Arguments:
- dataset: path to data.yaml, or a loaded dataset
- n: number of anchors
- img_size: image size used for training
- thr: anchor-label wh ratio threshold hyperparameter hyp['anchor_t'] used for training, default=4.0
- gen: generations to evolve anchors using genetic algorithm
- verbose: print all results
-
- Return:
- k: kmeans evolved anchors
-
- Usage:
- from utils.autoanchor import *; _ = kmean_anchors()
- """
- from scipy.cluster.vq import kmeans
-
- npr = np.random
- thr = 1 / thr
-
- def metric(k, wh): # compute metrics
- r = wh[:, None] / k[None]
- x = torch.min(r, 1 / r).min(2)[0] # ratio metric
- # x = wh_iou(wh, torch.tensor(k)) # iou metric
- return x, x.max(1)[0] # x, best_x
-
- def anchor_fitness(k): # mutation fitness
- _, best = metric(torch.tensor(k, dtype=torch.float32), wh)
- return (best * (best > thr).float()).mean() # fitness
-
- def print_results(k, verbose=True):
- k = k[np.argsort(k.prod(1))] # sort small to large
- x, best = metric(k, wh0)
- bpr, aat = (best > thr).float().mean(), (x > thr).float().mean() * n # best possible recall, anch > thr
- s = f'{PREFIX}thr={thr:.2f}: {bpr:.4f} best possible recall, {aat:.2f} anchors past thr\n' \
- f'{PREFIX}n={n}, img_size={img_size}, metric_all={x.mean():.3f}/{best.mean():.3f}-mean/best, ' \
- f'past_thr={x[x > thr].mean():.3f}-mean: '
- for i, x in enumerate(k):
- s += '%i,%i, ' % (round(x[0]), round(x[1]))
- if verbose:
- LOGGER.info(s[:-2])
- return k
-
- if isinstance(dataset, str): # *.yaml file
- with open(dataset, errors='ignore') as f:
- data_dict = yaml.safe_load(f) # model dict
- from utils.datasets import LoadImagesAndLabels
- dataset = LoadImagesAndLabels(data_dict['train'], augment=True, rect=True)
-
- # Get label wh
- shapes = img_size * dataset.shapes / dataset.shapes.max(1, keepdims=True)
- wh0 = np.concatenate([l[:, 3:5] * s for s, l in zip(shapes, dataset.labels)]) # wh
-
- # Filter
- i = (wh0 < 3.0).any(1).sum()
- if i:
- LOGGER.info(f'{PREFIX}WARNING: Extremely small objects found: {i} of {len(wh0)} labels are < 3 pixels in size')
-    wh = wh0[(wh0 >= 2.0).any(1)]  # keep labels with width or height >= 2 pixels
- # wh = wh * (npr.rand(wh.shape[0], 1) * 0.9 + 0.1) # multiply by random scale 0-1
-
- # Kmeans init
- try:
- LOGGER.info(f'{PREFIX}Running kmeans for {n} anchors on {len(wh)} points...')
- assert n <= len(wh) # apply overdetermined constraint
- s = wh.std(0) # sigmas for whitening
- k = kmeans(wh / s, n, iter=30)[0] * s # points
- assert n == len(k) # kmeans may return fewer points than requested if wh is insufficient or too similar
- except Exception:
- LOGGER.warning(f'{PREFIX}WARNING: switching strategies from kmeans to random init')
- k = np.sort(npr.rand(n * 2)).reshape(n, 2) * img_size # random init
- wh, wh0 = (torch.tensor(x, dtype=torch.float32) for x in (wh, wh0))
- k = print_results(k, verbose=False)
-
- # Plot
- # k, d = [None] * 20, [None] * 20
- # for i in tqdm(range(1, 21)):
- # k[i-1], d[i-1] = kmeans(wh / s, i) # points, mean distance
- # fig, ax = plt.subplots(1, 2, figsize=(14, 7), tight_layout=True)
- # ax = ax.ravel()
- # ax[0].plot(np.arange(1, 21), np.array(d) ** 2, marker='.')
- # fig, ax = plt.subplots(1, 2, figsize=(14, 7)) # plot wh
- # ax[0].hist(wh[wh[:, 0]<100, 0],400)
- # ax[1].hist(wh[wh[:, 1]<100, 1],400)
- # fig.savefig('wh.png', dpi=200)
-
- # Evolve
-    f, sh, mp, s = anchor_fitness(k), k.shape, 0.9, 0.1  # fitness, anchor array shape, mutation probability, sigma
- pbar = tqdm(range(gen), bar_format='{l_bar}{bar:10}{r_bar}{bar:-10b}') # progress bar
- for _ in pbar:
- v = np.ones(sh)
- while (v == 1).all(): # mutate until a change occurs (prevent duplicates)
- v = ((npr.random(sh) < mp) * random.random() * npr.randn(*sh) * s + 1).clip(0.3, 3.0)
- kg = (k.copy() * v).clip(min=2.0)
- fg = anchor_fitness(kg)
- if fg > f:
- f, k = fg, kg.copy()
- pbar.desc = f'{PREFIX}Evolving anchors with Genetic Algorithm: fitness = {f:.4f}'
- if verbose:
- print_results(k, verbose)
-
- return print_results(k)
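For reference, the anchor-fit metric used in `check_anchors()` above can be reproduced on its own. The sketch below is a minimal standalone version under made-up inputs: the label sizes and anchor boxes are toy values, and the function mirrors the ratio-metric/BPR computation rather than calling the YOLOv5 utilities themselves.

```python
# Minimal sketch of the anchor-fit metric above (toy data, not the YOLOv5 API).
import torch

def anchor_metric(wh: torch.Tensor, k: torch.Tensor, thr: float = 4.0):
    """Return (best possible recall, anchors above threshold) for label sizes wh and anchors k."""
    r = wh[:, None] / k[None]                  # (labels, anchors, 2) width/height ratios
    x = torch.min(r, 1 / r).min(2)[0]          # symmetric ratio metric per label/anchor pair
    best = x.max(1)[0]                         # best-matching anchor per label
    bpr = (best > 1 / thr).float().mean()      # best possible recall
    aat = (x > 1 / thr).float().sum(1).mean()  # anchors above threshold per label
    return bpr.item(), aat.item()

wh = torch.tensor([[30., 60.], [120., 90.], [14., 18.]])       # hypothetical label sizes (pixels)
anchors = torch.tensor([[10., 13.], [33., 23.], [116., 90.]])  # hypothetical anchors (pixels)
print(anchor_metric(wh, anchors))
```

With the default `thr=4.0`, a BPR above 0.98 is the point at which `check_anchors()` keeps the existing anchors instead of evolving new ones.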
diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Solution Higher Engineering Mathematics B S Grewal 40th Edition Solution Manual By B S Grewal.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Solution Higher Engineering Mathematics B S Grewal 40th Edition Solution Manual By B S Grewal.md
deleted file mode 100644
index 176d648de789a4c65245423f0304e9937e498d59..0000000000000000000000000000000000000000
--- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Solution Higher Engineering Mathematics B S Grewal 40th Edition Solution Manual By B S Grewal.md
+++ /dev/null
@@ -1,26 +0,0 @@
-
-
-Solution Higher Engineering Mathematics B S Grewal 40th Edition Solution Manual By B S Grewal: A Comprehensive Guide
-Higher Engineering Mathematics by B S Grewal is a popular textbook for engineering students. It covers a wide range of topics, such as algebra, calculus, differential equations, complex analysis, transform theory, numerical methods, and more. However, many students find it difficult to solve the problems in the book without proper guidance and solutions.
-That is why Solution Higher Engineering Mathematics B S Grewal 40th Edition Solution Manual By B S Grewal is a valuable resource for engineering students. It provides step-by-step solutions to all the problems in the book, along with explanations and examples. It helps students to understand the concepts and methods better, and to apply them to real-world situations.
-Solution Higher Engineering Mathematics B S Grewal 40th Edition Solution Manual By B S Grewal
-Download File ✺✺✺ https://urlcod.com/2uIbVM
-Solution Higher Engineering Mathematics B S Grewal 40th Edition Solution Manual By B S Grewal is available in PDF format, which can be downloaded from various websites[^1^] [^2^]. It is also compatible with e-readers and mobile devices. Students can use it as a reference while studying or doing homework, or as a revision tool before exams.
-Solution Higher Engineering Mathematics B S Grewal 40th Edition Solution Manual By B S Grewal is not only useful for students, but also for teachers and instructors. It can help them to prepare lectures, assignments, quizzes, and tests. It can also help them to evaluate the students' performance and progress.
-Solution Higher Engineering Mathematics B S Grewal 40th Edition Solution Manual By B S Grewal is a must-have for anyone who wants to master higher engineering mathematics. It is a comprehensive guide that covers all the topics and problems in the book. It is easy to follow and understand. It is a reliable and convenient source of solutions.
-
-In this article, we will give an overview of some of the topics covered in Solution Higher Engineering Mathematics B S Grewal 40th Edition Solution Manual By B S Grewal. These topics are:
-
-
-- Algebra and Geometry: This topic deals with the solution of equations, linear algebra, determinants, matrices, vector algebra, and geometry[^1^]. It covers the basic concepts and techniques of algebra and geometry, such as Cramer's rule, eigenvalues and eigenvectors, linear transformations, inner product spaces, orthogonal and orthonormal bases, etc.
-- Calculus: This topic deals with the differential and integral calculus of one and several variables[^1^]. It covers the fundamental concepts and methods of calculus, such as limits, continuity, differentiation, integration, mean value theorems, Taylor's theorem, maxima and minima, curve sketching, multiple integrals, line integrals, surface integrals, etc.
-- Series: This topic deals with the infinite series of real and complex numbers[^1^]. It covers the convergence and divergence criteria of series, such as comparison test, ratio test, root test, alternating series test, etc. It also covers the power series, Taylor series, Maclaurin series, Fourier series, etc.
-- Differential Equations: This topic deals with the differential equations of first and higher order and their applications[^1^]. It covers the methods of solving differential equations, such as separation of variables, exact equations, integrating factors, homogeneous equations, linear equations, Bernoulli's equations, etc. It also covers the applications of differential equations to various fields of engineering and science.
-- Complex Analysis: This topic deals with the complex numbers and functions[^1^]. It covers the algebra and geometry of complex numbers, such as De Moivre's theorem, Euler's formula, polar form, roots of unity, etc. It also covers the complex functions and their properties, such as analyticity, Cauchy-Riemann equations, harmonic functions, etc.
-- Transforms: This topic deals with Laplace transforms and their applications[^1^]. It covers the definition and properties of Laplace transforms, such as linearity, shifting, scaling, differentiation, integration, the convolution theorem, etc. It also covers inverse Laplace transforms and the methods of finding them, as well as the use of Laplace transforms to solve differential equations.
-- Numerical Techniques: This topic deals with empirical laws and curve fitting[^1^]. It covers methods of finding empirical laws from experimental data using regression analysis, curve fitting with the least-squares method, and interpolation using Newton's divided difference formula and Lagrange's interpolation formula.
-- Special Topics: This topic deals with some advanced topics in mathematics that are useful for engineering students[^1^]. These topics include calculus of variations (Euler's equation), linear programming (simplex method), game theory (two-person zero-sum games), etc.
-
-These are some of the topics covered in Solution Higher Engineering Mathematics B S Grewal 40th Edition Solution Manual By B S Grewal. There are many more topics that are covered in detail in this book. Students can refer to this book for a complete and comprehensive study of higher engineering mathematics.
-
-
\ No newline at end of file
diff --git a/spaces/ori1026/OriChatGPT/chatgpt - windows.bat b/spaces/ori1026/OriChatGPT/chatgpt - windows.bat
deleted file mode 100644
index 0b78fdc3a559abd692e3a9e9af5e482124d13a99..0000000000000000000000000000000000000000
--- a/spaces/ori1026/OriChatGPT/chatgpt - windows.bat
+++ /dev/null
@@ -1,14 +0,0 @@
-@echo off
-echo Opening ChuanhuChatGPT...
-
-REM Open powershell via bat
-start powershell.exe -NoExit -Command "python ./ChuanhuChatbot.py"
-
-REM Wait a few seconds so the web page at http://127.0.0.1:7860/ has time to start
-ping -n 5 127.0.0.1>nul
-
-REM Open ChuanhuChatGPT in your default browser
-start "" "http://127.0.0.1:7860/"
-
-
-echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/).
\ No newline at end of file
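The batch script above just starts the app, waits a few seconds, and opens the default browser. A rough cross-platform equivalent in Python is sketched below; it assumes the same `ChuanhuChatbot.py` entry point and default port 7860 as the script, and the fixed five-second sleep is only an approximation of the `ping -n 5` delay trick.

```python
# Hypothetical cross-platform equivalent of the batch launcher above:
# start the app, give it a few seconds, then open the default browser.
import subprocess
import sys
import time
import webbrowser

URL = "http://127.0.0.1:7860/"

proc = subprocess.Popen([sys.executable, "ChuanhuChatbot.py"])  # same entry point as the .bat
time.sleep(5)                                                   # crude stand-in for `ping -n 5`
webbrowser.open(URL)
print(f"Finished opening ChuanhuChatGPT ({URL}).")
```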
diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/kandinsky/text_encoder.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/kandinsky/text_encoder.py
deleted file mode 100644
index caa0029f00ca22818819d5b76b57ec489c6da1d6..0000000000000000000000000000000000000000
--- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/kandinsky/text_encoder.py
+++ /dev/null
@@ -1,27 +0,0 @@
-import torch
-from transformers import PreTrainedModel, XLMRobertaConfig, XLMRobertaModel
-
-
-class MCLIPConfig(XLMRobertaConfig):
- model_type = "M-CLIP"
-
- def __init__(self, transformerDimSize=1024, imageDimSize=768, **kwargs):
- self.transformerDimensions = transformerDimSize
- self.numDims = imageDimSize
- super().__init__(**kwargs)
-
-
-class MultilingualCLIP(PreTrainedModel):
- config_class = MCLIPConfig
-
- def __init__(self, config, *args, **kwargs):
- super().__init__(config, *args, **kwargs)
- self.transformer = XLMRobertaModel(config)
- self.LinearTransformation = torch.nn.Linear(
- in_features=config.transformerDimensions, out_features=config.numDims
- )
-
- def forward(self, input_ids, attention_mask):
- embs = self.transformer(input_ids=input_ids, attention_mask=attention_mask)[0]
- embs2 = (embs * attention_mask.unsqueeze(2)).sum(dim=1) / attention_mask.sum(dim=1)[:, None]
- return self.LinearTransformation(embs2), embs
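The `forward()` method above mean-pools the transformer token embeddings with the attention mask before applying the linear projection. The standalone sketch below reproduces just that pooling step on dummy tensors; batch size, sequence length, and hidden size are illustrative, and real inputs would come from an XLM-R tokenizer.

```python
# Standalone sketch of the masked mean pooling done in MultilingualCLIP.forward()
# (dummy tensors; shapes are illustrative).
import torch

batch, seq_len, hidden = 2, 6, 1024
embs = torch.randn(batch, seq_len, hidden)           # transformer token embeddings
attention_mask = torch.tensor([[1, 1, 1, 1, 0, 0],
                               [1, 1, 1, 0, 0, 0]])  # 1 = real token, 0 = padding

# Sum only the unmasked token embeddings, then divide by the number of real tokens.
pooled = (embs * attention_mask.unsqueeze(2)).sum(dim=1) / attention_mask.sum(dim=1)[:, None]
print(pooled.shape)  # torch.Size([2, 1024]) -> what the Linear projection above consumes
```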
diff --git a/spaces/peb-peb/shravan/CONTRIBUTING.md b/spaces/peb-peb/shravan/CONTRIBUTING.md
deleted file mode 100644
index 2ca2ed2c3e92ba9559b8543636cf02a540cfa523..0000000000000000000000000000000000000000
--- a/spaces/peb-peb/shravan/CONTRIBUTING.md
+++ /dev/null
@@ -1,15 +0,0 @@
-# Contributing
-
-When contributing to this repository, please first discuss the change you wish to make via issue,
-email, or any other method with the owners of this repository before making a change.
-
-Please note we have a code of conduct, please follow it in all your interactions with the project.
-
-## Pull Request Process
-
-1. Ensure any install or build dependencies are removed before the end of the layer when doing a
- build.
-2. Update the README.md with details of changes to the interface, this includes new environment
- variables, exposed ports, useful file locations and container parameters.
-3. You may merge the Pull Request in once you have the sign-off of two other developers, or if you
- do not have permission to do that, you may request the second reviewer to merge it for you.
diff --git a/spaces/pierluigizagaria/crysis-voice-cloning/README.md b/spaces/pierluigizagaria/crysis-voice-cloning/README.md
deleted file mode 100644
index 6505c13a5549b8d01cc48e4d36387ba652bb8bd0..0000000000000000000000000000000000000000
--- a/spaces/pierluigizagaria/crysis-voice-cloning/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Crysis Voice Cloning
-emoji: 🚀
-colorFrom: blue
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.27.0
-app_file: app.py
-pinned: false
-license: mit
-duplicated_from: nateraw/voice-cloning
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/operations/freeze.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/operations/freeze.py
deleted file mode 100644
index 354456845141eba23dce26482aa6d4196f4804de..0000000000000000000000000000000000000000
--- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/operations/freeze.py
+++ /dev/null
@@ -1,255 +0,0 @@
-import collections
-import logging
-import os
-from typing import Container, Dict, Generator, Iterable, List, NamedTuple, Optional, Set
-
-from pip._vendor.packaging.utils import canonicalize_name
-from pip._vendor.packaging.version import Version
-
-from pip._internal.exceptions import BadCommand, InstallationError
-from pip._internal.metadata import BaseDistribution, get_environment
-from pip._internal.req.constructors import (
- install_req_from_editable,
- install_req_from_line,
-)
-from pip._internal.req.req_file import COMMENT_RE
-from pip._internal.utils.direct_url_helpers import direct_url_as_pep440_direct_reference
-
-logger = logging.getLogger(__name__)
-
-
-class _EditableInfo(NamedTuple):
- requirement: str
- comments: List[str]
-
-
-def freeze(
- requirement: Optional[List[str]] = None,
- local_only: bool = False,
- user_only: bool = False,
- paths: Optional[List[str]] = None,
- isolated: bool = False,
- exclude_editable: bool = False,
- skip: Container[str] = (),
-) -> Generator[str, None, None]:
- installations: Dict[str, FrozenRequirement] = {}
-
- dists = get_environment(paths).iter_installed_distributions(
- local_only=local_only,
- skip=(),
- user_only=user_only,
- )
- for dist in dists:
- req = FrozenRequirement.from_dist(dist)
- if exclude_editable and req.editable:
- continue
- installations[req.canonical_name] = req
-
- if requirement:
- # the options that don't get turned into an InstallRequirement
- # should only be emitted once, even if the same option is in multiple
- # requirements files, so we need to keep track of what has been emitted
- # so that we don't emit it again if it's seen again
- emitted_options: Set[str] = set()
- # keep track of which files a requirement is in so that we can
- # give an accurate warning if a requirement appears multiple times.
- req_files: Dict[str, List[str]] = collections.defaultdict(list)
- for req_file_path in requirement:
- with open(req_file_path) as req_file:
- for line in req_file:
- if (
- not line.strip()
- or line.strip().startswith("#")
- or line.startswith(
- (
- "-r",
- "--requirement",
- "-f",
- "--find-links",
- "-i",
- "--index-url",
- "--pre",
- "--trusted-host",
- "--process-dependency-links",
- "--extra-index-url",
- "--use-feature",
- )
- )
- ):
- line = line.rstrip()
- if line not in emitted_options:
- emitted_options.add(line)
- yield line
- continue
-
- if line.startswith("-e") or line.startswith("--editable"):
- if line.startswith("-e"):
- line = line[2:].strip()
- else:
- line = line[len("--editable") :].strip().lstrip("=")
- line_req = install_req_from_editable(
- line,
- isolated=isolated,
- )
- else:
- line_req = install_req_from_line(
- COMMENT_RE.sub("", line).strip(),
- isolated=isolated,
- )
-
- if not line_req.name:
- logger.info(
- "Skipping line in requirement file [%s] because "
- "it's not clear what it would install: %s",
- req_file_path,
- line.strip(),
- )
- logger.info(
- " (add #egg=PackageName to the URL to avoid"
- " this warning)"
- )
- else:
- line_req_canonical_name = canonicalize_name(line_req.name)
- if line_req_canonical_name not in installations:
- # either it's not installed, or it is installed
- # but has been processed already
- if not req_files[line_req.name]:
- logger.warning(
- "Requirement file [%s] contains %s, but "
- "package %r is not installed",
- req_file_path,
- COMMENT_RE.sub("", line).strip(),
- line_req.name,
- )
- else:
- req_files[line_req.name].append(req_file_path)
- else:
- yield str(installations[line_req_canonical_name]).rstrip()
- del installations[line_req_canonical_name]
- req_files[line_req.name].append(req_file_path)
-
- # Warn about requirements that were included multiple times (in a
- # single requirements file or in different requirements files).
- for name, files in req_files.items():
- if len(files) > 1:
- logger.warning(
- "Requirement %s included multiple times [%s]",
- name,
- ", ".join(sorted(set(files))),
- )
-
- yield ("## The following requirements were added by pip freeze:")
- for installation in sorted(installations.values(), key=lambda x: x.name.lower()):
- if installation.canonical_name not in skip:
- yield str(installation).rstrip()
-
-
-def _format_as_name_version(dist: BaseDistribution) -> str:
- dist_version = dist.version
- if isinstance(dist_version, Version):
- return f"{dist.raw_name}=={dist_version}"
- return f"{dist.raw_name}==={dist_version}"
-
-
-def _get_editable_info(dist: BaseDistribution) -> _EditableInfo:
- """
- Compute and return values (req, comments) for use in
- FrozenRequirement.from_dist().
- """
- editable_project_location = dist.editable_project_location
- assert editable_project_location
- location = os.path.normcase(os.path.abspath(editable_project_location))
-
- from pip._internal.vcs import RemoteNotFoundError, RemoteNotValidError, vcs
-
- vcs_backend = vcs.get_backend_for_dir(location)
-
- if vcs_backend is None:
- display = _format_as_name_version(dist)
- logger.debug(
- 'No VCS found for editable requirement "%s" in: %r',
- display,
- location,
- )
- return _EditableInfo(
- requirement=location,
- comments=[f"# Editable install with no version control ({display})"],
- )
-
- vcs_name = type(vcs_backend).__name__
-
- try:
- req = vcs_backend.get_src_requirement(location, dist.raw_name)
- except RemoteNotFoundError:
- display = _format_as_name_version(dist)
- return _EditableInfo(
- requirement=location,
- comments=[f"# Editable {vcs_name} install with no remote ({display})"],
- )
- except RemoteNotValidError as ex:
- display = _format_as_name_version(dist)
- return _EditableInfo(
- requirement=location,
- comments=[
- f"# Editable {vcs_name} install ({display}) with either a deleted "
- f"local remote or invalid URI:",
- f"# '{ex.url}'",
- ],
- )
- except BadCommand:
- logger.warning(
- "cannot determine version of editable source in %s "
- "(%s command not found in path)",
- location,
- vcs_backend.name,
- )
- return _EditableInfo(requirement=location, comments=[])
- except InstallationError as exc:
- logger.warning("Error when trying to get requirement for VCS system %s", exc)
- else:
- return _EditableInfo(requirement=req, comments=[])
-
- logger.warning("Could not determine repository location of %s", location)
-
- return _EditableInfo(
- requirement=location,
- comments=["## !! Could not determine repository location"],
- )
-
-
-class FrozenRequirement:
- def __init__(
- self,
- name: str,
- req: str,
- editable: bool,
- comments: Iterable[str] = (),
- ) -> None:
- self.name = name
- self.canonical_name = canonicalize_name(name)
- self.req = req
- self.editable = editable
- self.comments = comments
-
- @classmethod
- def from_dist(cls, dist: BaseDistribution) -> "FrozenRequirement":
- editable = dist.editable
- if editable:
- req, comments = _get_editable_info(dist)
- else:
- comments = []
- direct_url = dist.direct_url
- if direct_url:
- # if PEP 610 metadata is present, use it
- req = direct_url_as_pep440_direct_reference(direct_url, dist.raw_name)
- else:
- # name==version requirement
- req = _format_as_name_version(dist)
-
- return cls(dist.raw_name, req, editable, comments=comments)
-
- def __str__(self) -> str:
- req = self.req
- if self.editable:
- req = f"-e {req}"
- return "\n".join(list(self.comments) + [str(req)]) + "\n"
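The core output of `freeze()` above is a sequence of `name==version` lines built by `_format_as_name_version()`. A rough approximation that avoids pip's internal APIs is sketched below using `importlib.metadata`; it ignores editable and VCS installs, so it is not a substitute for `pip freeze` itself.

```python
# Rough, standalone approximation of the name==version lines that freeze() yields,
# using importlib.metadata instead of pip's internal APIs (editable/VCS handling omitted).
from importlib.metadata import distributions

def naive_freeze():
    lines = []
    for dist in distributions():
        name = dist.metadata["Name"]
        if name:  # some broken installs expose no metadata
            lines.append(f"{name}=={dist.version}")
    return sorted(lines, key=str.lower)

for line in naive_freeze():
    print(line)
```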
diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/connectionpool.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/connectionpool.py
deleted file mode 100644
index 96844d933745d81d1e29e33c75f2ed24e518c999..0000000000000000000000000000000000000000
--- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/connectionpool.py
+++ /dev/null
@@ -1,1132 +0,0 @@
-from __future__ import absolute_import
-
-import errno
-import logging
-import re
-import socket
-import sys
-import warnings
-from socket import error as SocketError
-from socket import timeout as SocketTimeout
-
-from .connection import (
- BaseSSLError,
- BrokenPipeError,
- DummyConnection,
- HTTPConnection,
- HTTPException,
- HTTPSConnection,
- VerifiedHTTPSConnection,
- port_by_scheme,
-)
-from .exceptions import (
- ClosedPoolError,
- EmptyPoolError,
- HeaderParsingError,
- HostChangedError,
- InsecureRequestWarning,
- LocationValueError,
- MaxRetryError,
- NewConnectionError,
- ProtocolError,
- ProxyError,
- ReadTimeoutError,
- SSLError,
- TimeoutError,
-)
-from .packages import six
-from .packages.six.moves import queue
-from .request import RequestMethods
-from .response import HTTPResponse
-from .util.connection import is_connection_dropped
-from .util.proxy import connection_requires_http_tunnel
-from .util.queue import LifoQueue
-from .util.request import set_file_position
-from .util.response import assert_header_parsing
-from .util.retry import Retry
-from .util.ssl_match_hostname import CertificateError
-from .util.timeout import Timeout
-from .util.url import Url, _encode_target
-from .util.url import _normalize_host as normalize_host
-from .util.url import get_host, parse_url
-
-try: # Platform-specific: Python 3
- import weakref
-
- weakref_finalize = weakref.finalize
-except AttributeError: # Platform-specific: Python 2
- from .packages.backports.weakref_finalize import weakref_finalize
-
-xrange = six.moves.xrange
-
-log = logging.getLogger(__name__)
-
-_Default = object()
-
-
-# Pool objects
-class ConnectionPool(object):
- """
- Base class for all connection pools, such as
- :class:`.HTTPConnectionPool` and :class:`.HTTPSConnectionPool`.
-
- .. note::
- ConnectionPool.urlopen() does not normalize or percent-encode target URIs
- which is useful if your target server doesn't support percent-encoded
- target URIs.
- """
-
- scheme = None
- QueueCls = LifoQueue
-
- def __init__(self, host, port=None):
- if not host:
- raise LocationValueError("No host specified.")
-
- self.host = _normalize_host(host, scheme=self.scheme)
- self._proxy_host = host.lower()
- self.port = port
-
- def __str__(self):
- return "%s(host=%r, port=%r)" % (type(self).__name__, self.host, self.port)
-
- def __enter__(self):
- return self
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- self.close()
- # Return False to re-raise any potential exceptions
- return False
-
- def close(self):
- """
- Close all pooled connections and disable the pool.
- """
- pass
-
-
-# This is taken from http://hg.python.org/cpython/file/7aaba721ebc0/Lib/socket.py#l252
-_blocking_errnos = {errno.EAGAIN, errno.EWOULDBLOCK}
-
-
-class HTTPConnectionPool(ConnectionPool, RequestMethods):
- """
- Thread-safe connection pool for one host.
-
- :param host:
- Host used for this HTTP Connection (e.g. "localhost"), passed into
- :class:`http.client.HTTPConnection`.
-
- :param port:
- Port used for this HTTP Connection (None is equivalent to 80), passed
- into :class:`http.client.HTTPConnection`.
-
- :param strict:
- Causes BadStatusLine to be raised if the status line can't be parsed
- as a valid HTTP/1.0 or 1.1 status line, passed into
- :class:`http.client.HTTPConnection`.
-
- .. note::
- Only works in Python 2. This parameter is ignored in Python 3.
-
- :param timeout:
- Socket timeout in seconds for each individual connection. This can
- be a float or integer, which sets the timeout for the HTTP request,
- or an instance of :class:`urllib3.util.Timeout` which gives you more
- fine-grained control over request timeouts. After the constructor has
- been parsed, this is always a `urllib3.util.Timeout` object.
-
- :param maxsize:
- Number of connections to save that can be reused. More than 1 is useful
- in multithreaded situations. If ``block`` is set to False, more
- connections will be created but they will not be saved once they've
- been used.
-
- :param block:
- If set to True, no more than ``maxsize`` connections will be used at
- a time. When no free connections are available, the call will block
- until a connection has been released. This is a useful side effect for
- particular multithreaded situations where one does not want to use more
- than maxsize connections per host to prevent flooding.
-
- :param headers:
- Headers to include with all requests, unless other headers are given
- explicitly.
-
- :param retries:
- Retry configuration to use by default with requests in this pool.
-
- :param _proxy:
- Parsed proxy URL, should not be used directly, instead, see
- :class:`urllib3.ProxyManager`
-
- :param _proxy_headers:
- A dictionary with proxy headers, should not be used directly,
- instead, see :class:`urllib3.ProxyManager`
-
- :param \\**conn_kw:
- Additional parameters are used to create fresh :class:`urllib3.connection.HTTPConnection`,
- :class:`urllib3.connection.HTTPSConnection` instances.
- """
-
- scheme = "http"
- ConnectionCls = HTTPConnection
- ResponseCls = HTTPResponse
-
- def __init__(
- self,
- host,
- port=None,
- strict=False,
- timeout=Timeout.DEFAULT_TIMEOUT,
- maxsize=1,
- block=False,
- headers=None,
- retries=None,
- _proxy=None,
- _proxy_headers=None,
- _proxy_config=None,
- **conn_kw
- ):
- ConnectionPool.__init__(self, host, port)
- RequestMethods.__init__(self, headers)
-
- self.strict = strict
-
- if not isinstance(timeout, Timeout):
- timeout = Timeout.from_float(timeout)
-
- if retries is None:
- retries = Retry.DEFAULT
-
- self.timeout = timeout
- self.retries = retries
-
- self.pool = self.QueueCls(maxsize)
- self.block = block
-
- self.proxy = _proxy
- self.proxy_headers = _proxy_headers or {}
- self.proxy_config = _proxy_config
-
- # Fill the queue up so that doing get() on it will block properly
- for _ in xrange(maxsize):
- self.pool.put(None)
-
- # These are mostly for testing and debugging purposes.
- self.num_connections = 0
- self.num_requests = 0
- self.conn_kw = conn_kw
-
- if self.proxy:
- # Enable Nagle's algorithm for proxies, to avoid packet fragmentation.
- # We cannot know if the user has added default socket options, so we cannot replace the
- # list.
- self.conn_kw.setdefault("socket_options", [])
-
- self.conn_kw["proxy"] = self.proxy
- self.conn_kw["proxy_config"] = self.proxy_config
-
- # Do not pass 'self' as callback to 'finalize'.
- # Then the 'finalize' would keep an endless living (leak) to self.
- # By just passing a reference to the pool allows the garbage collector
- # to free self if nobody else has a reference to it.
- pool = self.pool
-
- # Close all the HTTPConnections in the pool before the
- # HTTPConnectionPool object is garbage collected.
- weakref_finalize(self, _close_pool_connections, pool)
-
- def _new_conn(self):
- """
- Return a fresh :class:`HTTPConnection`.
- """
- self.num_connections += 1
- log.debug(
- "Starting new HTTP connection (%d): %s:%s",
- self.num_connections,
- self.host,
- self.port or "80",
- )
-
- conn = self.ConnectionCls(
- host=self.host,
- port=self.port,
- timeout=self.timeout.connect_timeout,
- strict=self.strict,
- **self.conn_kw
- )
- return conn
-
- def _get_conn(self, timeout=None):
- """
- Get a connection. Will return a pooled connection if one is available.
-
- If no connections are available and :prop:`.block` is ``False``, then a
- fresh connection is returned.
-
- :param timeout:
- Seconds to wait before giving up and raising
- :class:`urllib3.exceptions.EmptyPoolError` if the pool is empty and
- :prop:`.block` is ``True``.
- """
- conn = None
- try:
- conn = self.pool.get(block=self.block, timeout=timeout)
-
- except AttributeError: # self.pool is None
- raise ClosedPoolError(self, "Pool is closed.")
-
- except queue.Empty:
- if self.block:
- raise EmptyPoolError(
- self,
- "Pool reached maximum size and no more connections are allowed.",
- )
- pass # Oh well, we'll create a new connection then
-
- # If this is a persistent connection, check if it got disconnected
- if conn and is_connection_dropped(conn):
- log.debug("Resetting dropped connection: %s", self.host)
- conn.close()
- if getattr(conn, "auto_open", 1) == 0:
- # This is a proxied connection that has been mutated by
- # http.client._tunnel() and cannot be reused (since it would
- # attempt to bypass the proxy)
- conn = None
-
- return conn or self._new_conn()
-
- def _put_conn(self, conn):
- """
- Put a connection back into the pool.
-
- :param conn:
- Connection object for the current host and port as returned by
- :meth:`._new_conn` or :meth:`._get_conn`.
-
- If the pool is already full, the connection is closed and discarded
- because we exceeded maxsize. If connections are discarded frequently,
- then maxsize should be increased.
-
- If the pool is closed, then the connection will be closed and discarded.
- """
- try:
- self.pool.put(conn, block=False)
- return # Everything is dandy, done.
- except AttributeError:
- # self.pool is None.
- pass
- except queue.Full:
- # This should never happen if self.block == True
- log.warning(
- "Connection pool is full, discarding connection: %s. Connection pool size: %s",
- self.host,
- self.pool.qsize(),
- )
- # Connection never got put back into the pool, close it.
- if conn:
- conn.close()
-
- def _validate_conn(self, conn):
- """
- Called right before a request is made, after the socket is created.
- """
- pass
-
- def _prepare_proxy(self, conn):
- # Nothing to do for HTTP connections.
- pass
-
- def _get_timeout(self, timeout):
- """Helper that always returns a :class:`urllib3.util.Timeout`"""
- if timeout is _Default:
- return self.timeout.clone()
-
- if isinstance(timeout, Timeout):
- return timeout.clone()
- else:
- # User passed us an int/float. This is for backwards compatibility,
- # can be removed later
- return Timeout.from_float(timeout)
-
- def _raise_timeout(self, err, url, timeout_value):
- """Is the error actually a timeout? Will raise a ReadTimeout or pass"""
-
- if isinstance(err, SocketTimeout):
- raise ReadTimeoutError(
- self, url, "Read timed out. (read timeout=%s)" % timeout_value
- )
-
- # See the above comment about EAGAIN in Python 3. In Python 2 we have
- # to specifically catch it and throw the timeout error
- if hasattr(err, "errno") and err.errno in _blocking_errnos:
- raise ReadTimeoutError(
- self, url, "Read timed out. (read timeout=%s)" % timeout_value
- )
-
- # Catch possible read timeouts thrown as SSL errors. If not the
- # case, rethrow the original. We need to do this because of:
- # http://bugs.python.org/issue10272
- if "timed out" in str(err) or "did not complete (read)" in str(
- err
- ): # Python < 2.7.4
- raise ReadTimeoutError(
- self, url, "Read timed out. (read timeout=%s)" % timeout_value
- )
-
- def _make_request(
- self, conn, method, url, timeout=_Default, chunked=False, **httplib_request_kw
- ):
- """
- Perform a request on a given urllib connection object taken from our
- pool.
-
- :param conn:
- a connection from one of our connection pools
-
- :param timeout:
- Socket timeout in seconds for the request. This can be a
- float or integer, which will set the same timeout value for
- the socket connect and the socket read, or an instance of
- :class:`urllib3.util.Timeout`, which gives you more fine-grained
- control over your timeouts.
- """
- self.num_requests += 1
-
- timeout_obj = self._get_timeout(timeout)
- timeout_obj.start_connect()
- conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout)
-
- # Trigger any extra validation we need to do.
- try:
- self._validate_conn(conn)
- except (SocketTimeout, BaseSSLError) as e:
- # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
- self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)
- raise
-
- # conn.request() calls http.client.*.request, not the method in
- # urllib3.request. It also calls makefile (recv) on the socket.
- try:
- if chunked:
- conn.request_chunked(method, url, **httplib_request_kw)
- else:
- conn.request(method, url, **httplib_request_kw)
-
- # We are swallowing BrokenPipeError (errno.EPIPE) since the server is
- # legitimately able to close the connection after sending a valid response.
- # With this behaviour, the received response is still readable.
- except BrokenPipeError:
- # Python 3
- pass
- except IOError as e:
- # Python 2 and macOS/Linux
- # EPIPE and ESHUTDOWN are BrokenPipeError on Python 2, and EPROTOTYPE is needed on macOS
- # https://erickt.github.io/blog/2014/11/19/adventures-in-debugging-a-potential-osx-kernel-bug/
- if e.errno not in {
- errno.EPIPE,
- errno.ESHUTDOWN,
- errno.EPROTOTYPE,
- }:
- raise
-
- # Reset the timeout for the recv() on the socket
- read_timeout = timeout_obj.read_timeout
-
- # App Engine doesn't have a sock attr
- if getattr(conn, "sock", None):
- # In Python 3 socket.py will catch EAGAIN and return None when you
- # try and read into the file pointer created by http.client, which
- # instead raises a BadStatusLine exception. Instead of catching
- # the exception and assuming all BadStatusLine exceptions are read
- # timeouts, check for a zero timeout before making the request.
- if read_timeout == 0:
- raise ReadTimeoutError(
- self, url, "Read timed out. (read timeout=%s)" % read_timeout
- )
- if read_timeout is Timeout.DEFAULT_TIMEOUT:
- conn.sock.settimeout(socket.getdefaulttimeout())
- else: # None or a value
- conn.sock.settimeout(read_timeout)
-
- # Receive the response from the server
- try:
- try:
- # Python 2.7, use buffering of HTTP responses
- httplib_response = conn.getresponse(buffering=True)
- except TypeError:
- # Python 3
- try:
- httplib_response = conn.getresponse()
- except BaseException as e:
- # Remove the TypeError from the exception chain in
- # Python 3 (including for exceptions like SystemExit).
- # Otherwise it looks like a bug in the code.
- six.raise_from(e, None)
- except (SocketTimeout, BaseSSLError, SocketError) as e:
- self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
- raise
-
- # AppEngine doesn't have a version attr.
- http_version = getattr(conn, "_http_vsn_str", "HTTP/?")
- log.debug(
- '%s://%s:%s "%s %s %s" %s %s',
- self.scheme,
- self.host,
- self.port,
- method,
- url,
- http_version,
- httplib_response.status,
- httplib_response.length,
- )
-
- try:
- assert_header_parsing(httplib_response.msg)
- except (HeaderParsingError, TypeError) as hpe: # Platform-specific: Python 3
- log.warning(
- "Failed to parse headers (url=%s): %s",
- self._absolute_url(url),
- hpe,
- exc_info=True,
- )
-
- return httplib_response
-
- def _absolute_url(self, path):
- return Url(scheme=self.scheme, host=self.host, port=self.port, path=path).url
-
- def close(self):
- """
- Close all pooled connections and disable the pool.
- """
- if self.pool is None:
- return
- # Disable access to the pool
- old_pool, self.pool = self.pool, None
-
- # Close all the HTTPConnections in the pool.
- _close_pool_connections(old_pool)
-
- def is_same_host(self, url):
- """
- Check if the given ``url`` is a member of the same host as this
- connection pool.
- """
- if url.startswith("/"):
- return True
-
- # TODO: Add optional support for socket.gethostbyname checking.
- scheme, host, port = get_host(url)
- if host is not None:
- host = _normalize_host(host, scheme=scheme)
-
- # Use explicit default port for comparison when none is given
- if self.port and not port:
- port = port_by_scheme.get(scheme)
- elif not self.port and port == port_by_scheme.get(scheme):
- port = None
-
- return (scheme, host, port) == (self.scheme, self.host, self.port)
-
- def urlopen(
- self,
- method,
- url,
- body=None,
- headers=None,
- retries=None,
- redirect=True,
- assert_same_host=True,
- timeout=_Default,
- pool_timeout=None,
- release_conn=None,
- chunked=False,
- body_pos=None,
- **response_kw
- ):
- """
- Get a connection from the pool and perform an HTTP request. This is the
- lowest level call for making a request, so you'll need to specify all
- the raw details.
-
- .. note::
-
- More commonly, it's appropriate to use a convenience method provided
- by :class:`.RequestMethods`, such as :meth:`request`.
-
- .. note::
-
- `release_conn` will only behave as expected if
- `preload_content=False` because we want to make
- `preload_content=False` the default behaviour someday soon without
- breaking backwards compatibility.
-
- :param method:
- HTTP request method (such as GET, POST, PUT, etc.)
-
- :param url:
- The URL to perform the request on.
-
- :param body:
- Data to send in the request body, either :class:`str`, :class:`bytes`,
- an iterable of :class:`str`/:class:`bytes`, or a file-like object.
-
- :param headers:
- Dictionary of custom headers to send, such as User-Agent,
- If-None-Match, etc. If None, pool headers are used. If provided,
- these headers completely replace any pool-specific headers.
-
- :param retries:
- Configure the number of retries to allow before raising a
- :class:`~urllib3.exceptions.MaxRetryError` exception.
-
- Pass ``None`` to retry until you receive a response. Pass a
- :class:`~urllib3.util.retry.Retry` object for fine-grained control
- over different types of retries.
- Pass an integer number to retry connection errors that many times,
- but no other types of errors. Pass zero to never retry.
-
- If ``False``, then retries are disabled and any exception is raised
- immediately. Also, instead of raising a MaxRetryError on redirects,
- the redirect response will be returned.
-
- :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int.
-
- :param redirect:
- If True, automatically handle redirects (status codes 301, 302,
- 303, 307, 308). Each redirect counts as a retry. Disabling retries
- will disable redirect, too.
-
- :param assert_same_host:
- If ``True``, will make sure that the host of the pool requests is
- consistent else will raise HostChangedError. When ``False``, you can
- use the pool on an HTTP proxy and request foreign hosts.
-
- :param timeout:
- If specified, overrides the default timeout for this one
- request. It may be a float (in seconds) or an instance of
- :class:`urllib3.util.Timeout`.
-
- :param pool_timeout:
- If set and the pool is set to block=True, then this method will
- block for ``pool_timeout`` seconds and raise EmptyPoolError if no
- connection is available within the time period.
-
- :param release_conn:
- If False, then the urlopen call will not release the connection
- back into the pool once a response is received (but will release if
- you read the entire contents of the response such as when
- `preload_content=True`). This is useful if you're not preloading
- the response's content immediately. You will need to call
- ``r.release_conn()`` on the response ``r`` to return the connection
- back into the pool. If None, it takes the value of
- ``response_kw.get('preload_content', True)``.
-
- :param chunked:
- If True, urllib3 will send the body using chunked transfer
- encoding. Otherwise, urllib3 will send the body using the standard
- content-length form. Defaults to False.
-
- :param int body_pos:
- Position to seek to in file-like body in the event of a retry or
- redirect. Typically this won't need to be set because urllib3 will
- auto-populate the value when needed.
-
- :param \\**response_kw:
- Additional parameters are passed to
- :meth:`urllib3.response.HTTPResponse.from_httplib`
- """
-
- parsed_url = parse_url(url)
- destination_scheme = parsed_url.scheme
-
- if headers is None:
- headers = self.headers
-
- if not isinstance(retries, Retry):
- retries = Retry.from_int(retries, redirect=redirect, default=self.retries)
-
- if release_conn is None:
- release_conn = response_kw.get("preload_content", True)
-
- # Check host
- if assert_same_host and not self.is_same_host(url):
- raise HostChangedError(self, url, retries)
-
- # Ensure that the URL we're connecting to is properly encoded
- if url.startswith("/"):
- url = six.ensure_str(_encode_target(url))
- else:
- url = six.ensure_str(parsed_url.url)
-
- conn = None
-
- # Track whether `conn` needs to be released before
- # returning/raising/recursing. Update this variable if necessary, and
- # leave `release_conn` constant throughout the function. That way, if
- # the function recurses, the original value of `release_conn` will be
- # passed down into the recursive call, and its value will be respected.
- #
- # See issue #651 [1] for details.
- #
- # [1]
- release_this_conn = release_conn
-
- http_tunnel_required = connection_requires_http_tunnel(
- self.proxy, self.proxy_config, destination_scheme
- )
-
- # Merge the proxy headers. Only done when not using HTTP CONNECT. We
- # have to copy the headers dict so we can safely change it without those
- # changes being reflected in anyone else's copy.
- if not http_tunnel_required:
- headers = headers.copy()
- headers.update(self.proxy_headers)
-
- # Must keep the exception bound to a separate variable or else Python 3
- # complains about UnboundLocalError.
- err = None
-
- # Keep track of whether we cleanly exited the except block. This
- # ensures we do proper cleanup in finally.
- clean_exit = False
-
- # Rewind body position, if needed. Record current position
- # for future rewinds in the event of a redirect/retry.
- body_pos = set_file_position(body, body_pos)
-
- try:
- # Request a connection from the queue.
- timeout_obj = self._get_timeout(timeout)
- conn = self._get_conn(timeout=pool_timeout)
-
- conn.timeout = timeout_obj.connect_timeout
-
- is_new_proxy_conn = self.proxy is not None and not getattr(
- conn, "sock", None
- )
- if is_new_proxy_conn and http_tunnel_required:
- self._prepare_proxy(conn)
-
- # Make the request on the httplib connection object.
- httplib_response = self._make_request(
- conn,
- method,
- url,
- timeout=timeout_obj,
- body=body,
- headers=headers,
- chunked=chunked,
- )
-
- # If we're going to release the connection in ``finally:``, then
- # the response doesn't need to know about the connection. Otherwise
- # it will also try to release it and we'll have a double-release
- # mess.
- response_conn = conn if not release_conn else None
-
- # Pass method to Response for length checking
- response_kw["request_method"] = method
-
- # Import httplib's response into our own wrapper object
- response = self.ResponseCls.from_httplib(
- httplib_response,
- pool=self,
- connection=response_conn,
- retries=retries,
- **response_kw
- )
-
- # Everything went great!
- clean_exit = True
-
- except EmptyPoolError:
- # Didn't get a connection from the pool, no need to clean up
- clean_exit = True
- release_this_conn = False
- raise
-
- except (
- TimeoutError,
- HTTPException,
- SocketError,
- ProtocolError,
- BaseSSLError,
- SSLError,
- CertificateError,
- ) as e:
- # Discard the connection for these exceptions. It will be
- # replaced during the next _get_conn() call.
- clean_exit = False
-
- def _is_ssl_error_message_from_http_proxy(ssl_error):
- # We're trying to detect the message 'WRONG_VERSION_NUMBER' but
- # SSLErrors are kinda all over the place when it comes to the message,
- # so we try to cover our bases here!
- message = " ".join(re.split("[^a-z]", str(ssl_error).lower()))
- return (
- "wrong version number" in message or "unknown protocol" in message
- )
-
- # Try to detect a common user error with proxies which is to
- # set an HTTP proxy to be HTTPS when it should be 'http://'
- # (ie {'http': 'http://proxy', 'https': 'https://proxy'})
- # Instead we add a nice error message and point to a URL.
- if (
- isinstance(e, BaseSSLError)
- and self.proxy
- and _is_ssl_error_message_from_http_proxy(e)
- and conn.proxy
- and conn.proxy.scheme == "https"
- ):
- e = ProxyError(
- "Your proxy appears to only use HTTP and not HTTPS, "
- "try changing your proxy URL to be HTTP. See: "
- "https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html"
- "#https-proxy-error-http-proxy",
- SSLError(e),
- )
- elif isinstance(e, (BaseSSLError, CertificateError)):
- e = SSLError(e)
- elif isinstance(e, (SocketError, NewConnectionError)) and self.proxy:
- e = ProxyError("Cannot connect to proxy.", e)
- elif isinstance(e, (SocketError, HTTPException)):
- e = ProtocolError("Connection aborted.", e)
-
- retries = retries.increment(
- method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
- )
- retries.sleep()
-
- # Keep track of the error for the retry warning.
- err = e
-
- finally:
- if not clean_exit:
- # We hit some kind of exception, handled or otherwise. We need
- # to throw the connection away unless explicitly told not to.
- # Close the connection, set the variable to None, and make sure
- # we put the None back in the pool to avoid leaking it.
- conn = conn and conn.close()
- release_this_conn = True
-
- if release_this_conn:
- # Put the connection back to be reused. If the connection is
- # expired then it will be None, which will get replaced with a
- # fresh connection during _get_conn.
- self._put_conn(conn)
-
- if not conn:
- # Try again
- log.warning(
- "Retrying (%r) after connection broken by '%r': %s", retries, err, url
- )
- return self.urlopen(
- method,
- url,
- body,
- headers,
- retries,
- redirect,
- assert_same_host,
- timeout=timeout,
- pool_timeout=pool_timeout,
- release_conn=release_conn,
- chunked=chunked,
- body_pos=body_pos,
- **response_kw
- )
-
- # Handle redirect?
- redirect_location = redirect and response.get_redirect_location()
- if redirect_location:
- if response.status == 303:
- method = "GET"
-
- try:
- retries = retries.increment(method, url, response=response, _pool=self)
- except MaxRetryError:
- if retries.raise_on_redirect:
- response.drain_conn()
- raise
- return response
-
- response.drain_conn()
- retries.sleep_for_retry(response)
- log.debug("Redirecting %s -> %s", url, redirect_location)
- return self.urlopen(
- method,
- redirect_location,
- body,
- headers,
- retries=retries,
- redirect=redirect,
- assert_same_host=assert_same_host,
- timeout=timeout,
- pool_timeout=pool_timeout,
- release_conn=release_conn,
- chunked=chunked,
- body_pos=body_pos,
- **response_kw
- )
-
- # Check if we should retry the HTTP response.
- has_retry_after = bool(response.headers.get("Retry-After"))
- if retries.is_retry(method, response.status, has_retry_after):
- try:
- retries = retries.increment(method, url, response=response, _pool=self)
- except MaxRetryError:
- if retries.raise_on_status:
- response.drain_conn()
- raise
- return response
-
- response.drain_conn()
- retries.sleep(response)
- log.debug("Retry: %s", url)
- return self.urlopen(
- method,
- url,
- body,
- headers,
- retries=retries,
- redirect=redirect,
- assert_same_host=assert_same_host,
- timeout=timeout,
- pool_timeout=pool_timeout,
- release_conn=release_conn,
- chunked=chunked,
- body_pos=body_pos,
- **response_kw
- )
-
- return response
-
-
-class HTTPSConnectionPool(HTTPConnectionPool):
- """
- Same as :class:`.HTTPConnectionPool`, but HTTPS.
-
- :class:`.HTTPSConnection` uses one of ``assert_fingerprint``,
- ``assert_hostname`` and ``host`` in this order to verify connections.
- If ``assert_hostname`` is False, no verification is done.
-
- The ``key_file``, ``cert_file``, ``cert_reqs``, ``ca_certs``,
- ``ca_cert_dir``, ``ssl_version``, ``key_password`` are only used if :mod:`ssl`
- is available and are fed into :meth:`urllib3.util.ssl_wrap_socket` to upgrade
- the connection socket into an SSL socket.
- """
-
- scheme = "https"
- ConnectionCls = HTTPSConnection
-
- def __init__(
- self,
- host,
- port=None,
- strict=False,
- timeout=Timeout.DEFAULT_TIMEOUT,
- maxsize=1,
- block=False,
- headers=None,
- retries=None,
- _proxy=None,
- _proxy_headers=None,
- key_file=None,
- cert_file=None,
- cert_reqs=None,
- key_password=None,
- ca_certs=None,
- ssl_version=None,
- assert_hostname=None,
- assert_fingerprint=None,
- ca_cert_dir=None,
- **conn_kw
- ):
-
- HTTPConnectionPool.__init__(
- self,
- host,
- port,
- strict,
- timeout,
- maxsize,
- block,
- headers,
- retries,
- _proxy,
- _proxy_headers,
- **conn_kw
- )
-
- self.key_file = key_file
- self.cert_file = cert_file
- self.cert_reqs = cert_reqs
- self.key_password = key_password
- self.ca_certs = ca_certs
- self.ca_cert_dir = ca_cert_dir
- self.ssl_version = ssl_version
- self.assert_hostname = assert_hostname
- self.assert_fingerprint = assert_fingerprint
-
- def _prepare_conn(self, conn):
- """
- Prepare the ``connection`` for :meth:`urllib3.util.ssl_wrap_socket`
- and establish the tunnel if proxy is used.
- """
-
- if isinstance(conn, VerifiedHTTPSConnection):
- conn.set_cert(
- key_file=self.key_file,
- key_password=self.key_password,
- cert_file=self.cert_file,
- cert_reqs=self.cert_reqs,
- ca_certs=self.ca_certs,
- ca_cert_dir=self.ca_cert_dir,
- assert_hostname=self.assert_hostname,
- assert_fingerprint=self.assert_fingerprint,
- )
- conn.ssl_version = self.ssl_version
- return conn
-
- def _prepare_proxy(self, conn):
- """
- Establishes a tunnel connection through HTTP CONNECT.
-
- Tunnel connection is established early because otherwise httplib would
- improperly set Host: header to proxy's IP:port.
- """
-
- conn.set_tunnel(self._proxy_host, self.port, self.proxy_headers)
-
- if self.proxy.scheme == "https":
- conn.tls_in_tls_required = True
-
- conn.connect()
-
- def _new_conn(self):
- """
- Return a fresh :class:`http.client.HTTPSConnection`.
- """
- self.num_connections += 1
- log.debug(
- "Starting new HTTPS connection (%d): %s:%s",
- self.num_connections,
- self.host,
- self.port or "443",
- )
-
- if not self.ConnectionCls or self.ConnectionCls is DummyConnection:
- raise SSLError(
- "Can't connect to HTTPS URL because the SSL module is not available."
- )
-
- actual_host = self.host
- actual_port = self.port
- if self.proxy is not None:
- actual_host = self.proxy.host
- actual_port = self.proxy.port
-
- conn = self.ConnectionCls(
- host=actual_host,
- port=actual_port,
- timeout=self.timeout.connect_timeout,
- strict=self.strict,
- cert_file=self.cert_file,
- key_file=self.key_file,
- key_password=self.key_password,
- **self.conn_kw
- )
-
- return self._prepare_conn(conn)
-
- def _validate_conn(self, conn):
- """
- Called right before a request is made, after the socket is created.
- """
- super(HTTPSConnectionPool, self)._validate_conn(conn)
-
- # Force connect early to allow us to validate the connection.
- if not getattr(conn, "sock", None): # AppEngine might not have `.sock`
- conn.connect()
-
- if not conn.is_verified:
- warnings.warn(
- (
- "Unverified HTTPS request is being made to host '%s'. "
- "Adding certificate verification is strongly advised. See: "
- "https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html"
- "#ssl-warnings" % conn.host
- ),
- InsecureRequestWarning,
- )
-
- if getattr(conn, "proxy_is_verified", None) is False:
- warnings.warn(
- (
- "Unverified HTTPS connection done to an HTTPS proxy. "
- "Adding certificate verification is strongly advised. See: "
- "https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html"
- "#ssl-warnings"
- ),
- InsecureRequestWarning,
- )
-
-
-def connection_from_url(url, **kw):
- """
- Given a url, return an :class:`.ConnectionPool` instance of its host.
-
- This is a shortcut for not having to parse out the scheme, host, and port
- of the url before creating an :class:`.ConnectionPool` instance.
-
- :param url:
- Absolute URL string that must include the scheme. Port is optional.
-
- :param \\**kw:
- Passes additional parameters to the constructor of the appropriate
- :class:`.ConnectionPool`. Useful for specifying things like
- timeout, maxsize, headers, etc.
-
- Example::
-
- >>> conn = connection_from_url('http://google.com/')
- >>> r = conn.request('GET', '/')
- """
- scheme, host, port = get_host(url)
- port = port or port_by_scheme.get(scheme, 80)
- if scheme == "https":
- return HTTPSConnectionPool(host, port=port, **kw)
- else:
- return HTTPConnectionPool(host, port=port, **kw)
-
-
-def _normalize_host(host, scheme):
- """
- Normalize hosts for comparisons and use with sockets.
- """
-
- host = normalize_host(host, scheme)
-
- # httplib doesn't like it when we include brackets in IPv6 addresses
- # Specifically, if we include brackets but also pass the port then
- # httplib crazily doubles up the square brackets on the Host header.
- # Instead, we need to make sure we never pass ``None`` as the port.
- # However, for backward compatibility reasons we can't actually
- # *assert* that. See http://bugs.python.org/issue28539
- if host.startswith("[") and host.endswith("]"):
- host = host[1:-1]
- return host
-
-
-def _close_pool_connections(pool):
- """Drains a queue of connections and closes each one."""
- try:
- while True:
- conn = pool.get(block=False)
- if conn:
- conn.close()
- except queue.Empty:
- pass # Done.
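The deleted urllib3 module above dispatches `connection_from_url` to either an `HTTPConnectionPool` or an `HTTPSConnectionPool` based on the URL scheme. A minimal, hedged usage sketch (assuming urllib3 1.26.x is installed; the host is only a placeholder):

```python
import urllib3

# connection_from_url is re-exported at package level in urllib3 1.26.x.
pool = urllib3.connection_from_url("https://example.com/", maxsize=2, timeout=5.0)
print(type(pool).__name__)       # HTTPSConnectionPool, per the scheme check above

resp = pool.request("GET", "/")  # connections are reused from the pool
print(resp.status)
```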
diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/importlib_resources/abc.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/importlib_resources/abc.py
deleted file mode 100644
index 23b6aeafe4f43d097734e186907232513ad27a3c..0000000000000000000000000000000000000000
--- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/importlib_resources/abc.py
+++ /dev/null
@@ -1,170 +0,0 @@
-import abc
-import io
-import itertools
-import pathlib
-from typing import Any, BinaryIO, Iterable, Iterator, NoReturn, Text, Optional
-
-from ._compat import runtime_checkable, Protocol, StrPath
-
-
-__all__ = ["ResourceReader", "Traversable", "TraversableResources"]
-
-
-class ResourceReader(metaclass=abc.ABCMeta):
- """Abstract base class for loaders to provide resource reading support."""
-
- @abc.abstractmethod
- def open_resource(self, resource: Text) -> BinaryIO:
- """Return an opened, file-like object for binary reading.
-
- The 'resource' argument is expected to represent only a file name.
- If the resource cannot be found, FileNotFoundError is raised.
- """
- # This deliberately raises FileNotFoundError instead of
- # NotImplementedError so that if this method is accidentally called,
- # it'll still do the right thing.
- raise FileNotFoundError
-
- @abc.abstractmethod
- def resource_path(self, resource: Text) -> Text:
- """Return the file system path to the specified resource.
-
- The 'resource' argument is expected to represent only a file name.
- If the resource does not exist on the file system, raise
- FileNotFoundError.
- """
- # This deliberately raises FileNotFoundError instead of
- # NotImplementedError so that if this method is accidentally called,
- # it'll still do the right thing.
- raise FileNotFoundError
-
- @abc.abstractmethod
- def is_resource(self, path: Text) -> bool:
- """Return True if the named 'path' is a resource.
-
- Files are resources, directories are not.
- """
- raise FileNotFoundError
-
- @abc.abstractmethod
- def contents(self) -> Iterable[str]:
- """Return an iterable of entries in `package`."""
- raise FileNotFoundError
-
-
-class TraversalError(Exception):
- pass
-
-
-@runtime_checkable
-class Traversable(Protocol):
- """
- An object with a subset of pathlib.Path methods suitable for
- traversing directories and opening files.
-
- Any exceptions that occur when accessing the backing resource
- may propagate unaltered.
- """
-
- @abc.abstractmethod
- def iterdir(self) -> Iterator["Traversable"]:
- """
- Yield Traversable objects in self
- """
-
- def read_bytes(self) -> bytes:
- """
- Read contents of self as bytes
- """
- with self.open('rb') as strm:
- return strm.read()
-
- def read_text(self, encoding: Optional[str] = None) -> str:
- """
- Read contents of self as text
- """
- with self.open(encoding=encoding) as strm:
- return strm.read()
-
- @abc.abstractmethod
- def is_dir(self) -> bool:
- """
- Return True if self is a directory
- """
-
- @abc.abstractmethod
- def is_file(self) -> bool:
- """
- Return True if self is a file
- """
-
- def joinpath(self, *descendants: StrPath) -> "Traversable":
- """
- Return Traversable resolved with any descendants applied.
-
- Each descendant should be a path segment relative to self
- and each may contain multiple levels separated by
- ``posixpath.sep`` (``/``).
- """
- if not descendants:
- return self
- names = itertools.chain.from_iterable(
- path.parts for path in map(pathlib.PurePosixPath, descendants)
- )
- target = next(names)
- matches = (
- traversable for traversable in self.iterdir() if traversable.name == target
- )
- try:
- match = next(matches)
- except StopIteration:
- raise TraversalError(
- "Target not found during traversal.", target, list(names)
- )
- return match.joinpath(*names)
-
- def __truediv__(self, child: StrPath) -> "Traversable":
- """
- Return Traversable child in self
- """
- return self.joinpath(child)
-
- @abc.abstractmethod
- def open(self, mode='r', *args, **kwargs):
- """
- mode may be 'r' or 'rb' to open as text or binary. Return a handle
- suitable for reading (same as pathlib.Path.open).
-
- When opening as text, accepts encoding parameters such as those
- accepted by io.TextIOWrapper.
- """
-
- @property
- @abc.abstractmethod
- def name(self) -> str:
- """
- The base name of this object without any parent references.
- """
-
-
-class TraversableResources(ResourceReader):
- """
- The required interface for providing traversable
- resources.
- """
-
- @abc.abstractmethod
- def files(self) -> "Traversable":
- """Return a Traversable object for the loaded package."""
-
- def open_resource(self, resource: StrPath) -> io.BufferedReader:
- return self.files().joinpath(resource).open('rb')
-
- def resource_path(self, resource: Any) -> NoReturn:
- raise FileNotFoundError(resource)
-
- def is_resource(self, path: StrPath) -> bool:
- return self.files().joinpath(path).is_file()
-
- def contents(self) -> Iterator[str]:
- return (item.name for item in self.files().iterdir())
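The `Traversable` protocol above deliberately mirrors a subset of `pathlib.Path`, so code written against it stays loader-agnostic. A small sketch under that assumption; `pathlib.Path` is used here only as a convenient object that happens to provide the same methods:

```python
import pathlib


def list_tree(root, indent=0):
    # Uses only Traversable-style methods: iterdir(), name, is_dir().
    for entry in root.iterdir():
        print(" " * indent + entry.name)
        if entry.is_dir():
            list_tree(entry, indent + 2)


list_tree(pathlib.Path("."))
```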
diff --git a/spaces/platzi/platzi-curso-gradio-tf-clasificacion-imagenes/README.md b/spaces/platzi/platzi-curso-gradio-tf-clasificacion-imagenes/README.md
deleted file mode 100644
index cff8c04dcdd4dd18094b4c94a1a84925c42ec036..0000000000000000000000000000000000000000
--- a/spaces/platzi/platzi-curso-gradio-tf-clasificacion-imagenes/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Platzi Curso Gradio Tf Clasificacion Imagenes
-emoji: ⚡
-colorFrom: blue
-colorTo: purple
-sdk: gradio
-sdk_version: 3.0.26
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/power2/JoJoGan-powerhow2/op/conv2d_gradfix.py b/spaces/power2/JoJoGan-powerhow2/op/conv2d_gradfix.py
deleted file mode 100644
index bb2f94bbcb8132299fd4d538972d32bd7ff6e7d6..0000000000000000000000000000000000000000
--- a/spaces/power2/JoJoGan-powerhow2/op/conv2d_gradfix.py
+++ /dev/null
@@ -1,227 +0,0 @@
-import contextlib
-import warnings
-
-import torch
-from torch import autograd
-from torch.nn import functional as F
-
-enabled = True
-weight_gradients_disabled = False
-
-
-@contextlib.contextmanager
-def no_weight_gradients():
- global weight_gradients_disabled
-
- old = weight_gradients_disabled
- weight_gradients_disabled = True
- yield
- weight_gradients_disabled = old
-
-
-def conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1):
- if could_use_op(input):
- return conv2d_gradfix(
- transpose=False,
- weight_shape=weight.shape,
- stride=stride,
- padding=padding,
- output_padding=0,
- dilation=dilation,
- groups=groups,
- ).apply(input, weight, bias)
-
- return F.conv2d(
- input=input,
- weight=weight,
- bias=bias,
- stride=stride,
- padding=padding,
- dilation=dilation,
- groups=groups,
- )
-
-
-def conv_transpose2d(
- input,
- weight,
- bias=None,
- stride=1,
- padding=0,
- output_padding=0,
- groups=1,
- dilation=1,
-):
- if could_use_op(input):
- return conv2d_gradfix(
- transpose=True,
- weight_shape=weight.shape,
- stride=stride,
- padding=padding,
- output_padding=output_padding,
- groups=groups,
- dilation=dilation,
- ).apply(input, weight, bias)
-
- return F.conv_transpose2d(
- input=input,
- weight=weight,
- bias=bias,
- stride=stride,
- padding=padding,
- output_padding=output_padding,
- dilation=dilation,
- groups=groups,
- )
-
-
-def could_use_op(input):
- if (not enabled) or (not torch.backends.cudnn.enabled):
- return False
-
- if input.device.type != "cuda":
- return False
-
- if any(torch.__version__.startswith(x) for x in ["1.7.", "1.8."]):
- return True
-
- warnings.warn(
- f"conv2d_gradfix not supported on PyTorch {torch.__version__}. Falling back to torch.nn.functional.conv2d()."
- )
-
- return False
-
-
-def ensure_tuple(xs, ndim):
- xs = tuple(xs) if isinstance(xs, (tuple, list)) else (xs,) * ndim
-
- return xs
-
-
-conv2d_gradfix_cache = dict()
-
-
-def conv2d_gradfix(
- transpose, weight_shape, stride, padding, output_padding, dilation, groups
-):
- ndim = 2
- weight_shape = tuple(weight_shape)
- stride = ensure_tuple(stride, ndim)
- padding = ensure_tuple(padding, ndim)
- output_padding = ensure_tuple(output_padding, ndim)
- dilation = ensure_tuple(dilation, ndim)
-
- key = (transpose, weight_shape, stride, padding, output_padding, dilation, groups)
- if key in conv2d_gradfix_cache:
- return conv2d_gradfix_cache[key]
-
- common_kwargs = dict(
- stride=stride, padding=padding, dilation=dilation, groups=groups
- )
-
- def calc_output_padding(input_shape, output_shape):
- if transpose:
- return [0, 0]
-
- return [
- input_shape[i + 2]
- - (output_shape[i + 2] - 1) * stride[i]
- - (1 - 2 * padding[i])
- - dilation[i] * (weight_shape[i + 2] - 1)
- for i in range(ndim)
- ]
-
- class Conv2d(autograd.Function):
- @staticmethod
- def forward(ctx, input, weight, bias):
- if not transpose:
- out = F.conv2d(input=input, weight=weight, bias=bias, **common_kwargs)
-
- else:
- out = F.conv_transpose2d(
- input=input,
- weight=weight,
- bias=bias,
- output_padding=output_padding,
- **common_kwargs,
- )
-
- ctx.save_for_backward(input, weight)
-
- return out
-
- @staticmethod
- def backward(ctx, grad_output):
- input, weight = ctx.saved_tensors
- grad_input, grad_weight, grad_bias = None, None, None
-
- if ctx.needs_input_grad[0]:
- p = calc_output_padding(
- input_shape=input.shape, output_shape=grad_output.shape
- )
- grad_input = conv2d_gradfix(
- transpose=(not transpose),
- weight_shape=weight_shape,
- output_padding=p,
- **common_kwargs,
- ).apply(grad_output, weight, None)
-
- if ctx.needs_input_grad[1] and not weight_gradients_disabled:
- grad_weight = Conv2dGradWeight.apply(grad_output, input)
-
- if ctx.needs_input_grad[2]:
- grad_bias = grad_output.sum((0, 2, 3))
-
- return grad_input, grad_weight, grad_bias
-
- class Conv2dGradWeight(autograd.Function):
- @staticmethod
- def forward(ctx, grad_output, input):
- op = torch._C._jit_get_operation(
- "aten::cudnn_convolution_backward_weight"
- if not transpose
- else "aten::cudnn_convolution_transpose_backward_weight"
- )
- flags = [
- torch.backends.cudnn.benchmark,
- torch.backends.cudnn.deterministic,
- torch.backends.cudnn.allow_tf32,
- ]
- grad_weight = op(
- weight_shape,
- grad_output,
- input,
- padding,
- stride,
- dilation,
- groups,
- *flags,
- )
- ctx.save_for_backward(grad_output, input)
-
- return grad_weight
-
- @staticmethod
- def backward(ctx, grad_grad_weight):
- grad_output, input = ctx.saved_tensors
- grad_grad_output, grad_grad_input = None, None
-
- if ctx.needs_input_grad[0]:
- grad_grad_output = Conv2d.apply(input, grad_grad_weight, None)
-
- if ctx.needs_input_grad[1]:
- p = calc_output_padding(
- input_shape=input.shape, output_shape=grad_output.shape
- )
- grad_grad_input = conv2d_gradfix(
- transpose=(not transpose),
- weight_shape=weight_shape,
- output_padding=p,
- **common_kwargs,
- ).apply(grad_output, grad_grad_weight, None)
-
- return grad_grad_output, grad_grad_input
-
- conv2d_gradfix_cache[key] = Conv2d
-
- return Conv2d
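For context, a hedged sketch of how this wrapper is typically called (assuming the `op` package from this repo is importable and PyTorch is installed). On CPU, or on PyTorch versions other than 1.7/1.8, `could_use_op` returns False and the call simply falls back to `torch.nn.functional.conv2d`, as the code above shows:

```python
import torch
from op import conv2d_gradfix

x = torch.randn(1, 3, 16, 16, requires_grad=True)
w = torch.randn(8, 3, 3, 3, requires_grad=True)

# Skip the (expensive) weight gradient, e.g. during a regularization pass.
with conv2d_gradfix.no_weight_gradients():
    y = conv2d_gradfix.conv2d(x, w, padding=1)
    y.sum().backward()

print(x.grad.shape)  # input gradients are still computed
print(w.grad)        # stays None when the custom-op path is taken
```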
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/BlpImagePlugin.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/BlpImagePlugin.py
deleted file mode 100644
index 398696d5c7737c9f12cc479a622fffd4dfee7b88..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/PIL/BlpImagePlugin.py
+++ /dev/null
@@ -1,474 +0,0 @@
-"""
-Blizzard Mipmap Format (.blp)
-Jerome Leclanche
-
-The contents of this file are hereby released in the public domain (CC0)
-Full text of the CC0 license:
- https://creativecommons.org/publicdomain/zero/1.0/
-
-BLP1 files, used mostly in Warcraft III, are not fully supported.
-All types of BLP2 files used in World of Warcraft are supported.
-
-The BLP file structure consists of a header, up to 16 mipmaps of the
-texture
-
-Texture sizes must be powers of two, though the two dimensions do
-not have to be equal; 512x256 is valid, but 512x200 is not.
-The first mipmap (mipmap #0) is the full size image; each subsequent
-mipmap halves both dimensions. The final mipmap should be 1x1.
-
-BLP files come in many different flavours:
-* JPEG-compressed (type == 0) - only supported for BLP1.
-* RAW images (type == 1, encoding == 1). Each mipmap is stored as an
- array of 8-bit values, one per pixel, left to right, top to bottom.
- Each value is an index to the palette.
-* DXT-compressed (type == 1, encoding == 2):
-  - DXT1 compression is used if alpha_encoding == 0.
-    - An additional alpha bit is used if alpha_depth == 1.
- - DXT3 compression is used if alpha_encoding == 1.
- - DXT5 compression is used if alpha_encoding == 7.
-"""
-
-import os
-import struct
-from enum import IntEnum
-from io import BytesIO
-
-from . import Image, ImageFile
-
-
-class Format(IntEnum):
- JPEG = 0
-
-
-class Encoding(IntEnum):
- UNCOMPRESSED = 1
- DXT = 2
- UNCOMPRESSED_RAW_BGRA = 3
-
-
-class AlphaEncoding(IntEnum):
- DXT1 = 0
- DXT3 = 1
- DXT5 = 7
-
-
-def unpack_565(i):
- return ((i >> 11) & 0x1F) << 3, ((i >> 5) & 0x3F) << 2, (i & 0x1F) << 3
-
-
-def decode_dxt1(data, alpha=False):
- """
- input: one "row" of data (i.e. will produce 4*width pixels)
- """
-
- blocks = len(data) // 8 # number of blocks in row
- ret = (bytearray(), bytearray(), bytearray(), bytearray())
-
- for block in range(blocks):
- # Decode next 8-byte block.
- idx = block * 8
-        color0, color1, bits = struct.unpack_from("<HHI", data, idx)
-
-        r0, g0, b0 = unpack_565(color0)
-        r1, g1, b1 = unpack_565(color1)
-
-        # Decode this block into 4x4 pixels
-        # Accumulate the results onto our 4 row accumulators
-        for j in range(4):
-            for i in range(4):
-                # get next control op and generate a pixel
-
-                control = bits & 3
-                bits = bits >> 2
-
- a = 0xFF
- if control == 0:
- r, g, b = r0, g0, b0
- elif control == 1:
- r, g, b = r1, g1, b1
- elif control == 2:
- if color0 > color1:
- r = (2 * r0 + r1) // 3
- g = (2 * g0 + g1) // 3
- b = (2 * b0 + b1) // 3
- else:
- r = (r0 + r1) // 2
- g = (g0 + g1) // 2
- b = (b0 + b1) // 2
- elif control == 3:
- if color0 > color1:
- r = (2 * r1 + r0) // 3
- g = (2 * g1 + g0) // 3
- b = (2 * b1 + b0) // 3
- else:
- r, g, b, a = 0, 0, 0, 0
-
- if alpha:
- ret[j].extend([r, g, b, a])
- else:
- ret[j].extend([r, g, b])
-
- return ret
-
-
-def decode_dxt3(data):
- """
- input: one "row" of data (i.e. will produce 4*width pixels)
- """
-
- blocks = len(data) // 16 # number of blocks in row
- ret = (bytearray(), bytearray(), bytearray(), bytearray())
-
- for block in range(blocks):
- idx = block * 16
- block = data[idx : idx + 16]
- # Decode next 16-byte block.
- bits = struct.unpack_from("<8B", block)
-        color0, color1 = struct.unpack_from("<HH", block, 8)
-
-        (code,) = struct.unpack_from("<I", block, 12)
-
-        r0, g0, b0 = unpack_565(color0)
-        r1, g1, b1 = unpack_565(color1)
-
-        for j in range(4):
-            high = False  # Do we want the higher bits?
-            for i in range(4):
-                alphacode_index = (4 * j + i) // 2
-
-                a = bits[alphacode_index]
-                if high:
-                    high = False
-                    a >>= 4
- else:
- high = True
- a &= 0xF
- a *= 17 # We get a value between 0 and 15
-
- color_code = (code >> 2 * (4 * j + i)) & 0x03
-
- if color_code == 0:
- r, g, b = r0, g0, b0
- elif color_code == 1:
- r, g, b = r1, g1, b1
- elif color_code == 2:
- r = (2 * r0 + r1) // 3
- g = (2 * g0 + g1) // 3
- b = (2 * b0 + b1) // 3
- elif color_code == 3:
- r = (2 * r1 + r0) // 3
- g = (2 * g1 + g0) // 3
- b = (2 * b1 + b0) // 3
-
- ret[j].extend([r, g, b, a])
-
- return ret
-
-
-def decode_dxt5(data):
- """
- input: one "row" of data (i.e. will produce 4 * width pixels)
- """
-
- blocks = len(data) // 16 # number of blocks in row
- ret = (bytearray(), bytearray(), bytearray(), bytearray())
-
- for block in range(blocks):
- idx = block * 16
- block = data[idx : idx + 16]
- # Decode next 16-byte block.
-        a0, a1 = struct.unpack_from("<BB", block)
-
-        bits = struct.unpack_from("<6B", block, 2)
-        alphacode1 = bits[2] | (bits[3] << 8) | (bits[4] << 16) | (bits[5] << 24)
-        alphacode2 = bits[0] | (bits[1] << 8)
-
-        color0, color1 = struct.unpack_from("<HH", block, 8)
-
-        (code,) = struct.unpack_from("<I", block, 12)
-
-        r0, g0, b0 = unpack_565(color0)
-        r1, g1, b1 = unpack_565(color1)
-
-        for j in range(4):
-            for i in range(4):
-                # get next control op and generate a pixel
-                alphacode_index = 3 * (4 * j + i)
-
-                if alphacode_index <= 12:
-                    alphacode = (alphacode2 >> alphacode_index) & 0x07
- elif alphacode_index == 15:
- alphacode = (alphacode2 >> 15) | ((alphacode1 << 1) & 0x06)
- else: # alphacode_index >= 18 and alphacode_index <= 45
- alphacode = (alphacode1 >> (alphacode_index - 16)) & 0x07
-
- if alphacode == 0:
- a = a0
- elif alphacode == 1:
- a = a1
- elif a0 > a1:
- a = ((8 - alphacode) * a0 + (alphacode - 1) * a1) // 7
- elif alphacode == 6:
- a = 0
- elif alphacode == 7:
- a = 255
- else:
- a = ((6 - alphacode) * a0 + (alphacode - 1) * a1) // 5
-
- color_code = (code >> 2 * (4 * j + i)) & 0x03
-
- if color_code == 0:
- r, g, b = r0, g0, b0
- elif color_code == 1:
- r, g, b = r1, g1, b1
- elif color_code == 2:
- r = (2 * r0 + r1) // 3
- g = (2 * g0 + g1) // 3
- b = (2 * b0 + b1) // 3
- elif color_code == 3:
- r = (2 * r1 + r0) // 3
- g = (2 * g1 + g0) // 3
- b = (2 * b1 + b0) // 3
-
- ret[j].extend([r, g, b, a])
-
- return ret
-
-
-class BLPFormatError(NotImplementedError):
- pass
-
-
-def _accept(prefix):
- return prefix[:4] in (b"BLP1", b"BLP2")
-
-
-class BlpImageFile(ImageFile.ImageFile):
- """
- Blizzard Mipmap Format
- """
-
- format = "BLP"
- format_description = "Blizzard Mipmap Format"
-
- def _open(self):
- self.magic = self.fp.read(4)
-
- self.fp.seek(5, os.SEEK_CUR)
-        (self._blp_alpha_depth,) = struct.unpack("<b", self.fp.read(1))
-    >>> import sys
- >>> handler = logging.StreamHandler(sys.stdout)
- >>> formatter = LevelFormatter(
- ... fmt={
- ... '*': '[%(levelname)s] %(message)s',
- ... 'DEBUG': '%(name)s [%(levelname)s] %(message)s',
- ... 'INFO': '%(message)s',
- ... })
- >>> handler.setFormatter(formatter)
- >>> log = logging.getLogger('test')
- >>> log.setLevel(logging.DEBUG)
- >>> log.addHandler(handler)
- >>> log.debug('this uses a custom format string')
- test [DEBUG] this uses a custom format string
- >>> log.info('this also uses a custom format string')
- this also uses a custom format string
- >>> log.warning("this one uses the default format string")
- [WARNING] this one uses the default format string
- """
-
- def __init__(self, fmt=None, datefmt=None, style="%"):
- if style != "%":
- raise ValueError(
- "only '%' percent style is supported in both python 2 and 3"
- )
- if fmt is None:
- fmt = DEFAULT_FORMATS
- if isinstance(fmt, str):
- default_format = fmt
- custom_formats = {}
- elif isinstance(fmt, Mapping):
- custom_formats = dict(fmt)
- default_format = custom_formats.pop("*", None)
- else:
- raise TypeError("fmt must be a str or a dict of str: %r" % fmt)
- super(LevelFormatter, self).__init__(default_format, datefmt)
- self.default_format = self._fmt
- self.custom_formats = {}
- for level, fmt in custom_formats.items():
- level = logging._checkLevel(level)
- self.custom_formats[level] = fmt
-
- def format(self, record):
- if self.custom_formats:
- fmt = self.custom_formats.get(record.levelno, self.default_format)
- if self._fmt != fmt:
- self._fmt = fmt
- # for python >= 3.2, _style needs to be set if _fmt changes
- if PercentStyle:
- self._style = PercentStyle(fmt)
- return super(LevelFormatter, self).format(record)
-
-
-def configLogger(**kwargs):
-    """A more sophisticated logging system configuration manager.
-
- This is more or less the same as :py:func:`logging.basicConfig`,
- with some additional options and defaults.
-
- The default behaviour is to create a ``StreamHandler`` which writes to
- sys.stderr, set a formatter using the ``DEFAULT_FORMATS`` strings, and add
- the handler to the top-level library logger ("fontTools").
-
- A number of optional keyword arguments may be specified, which can alter
- the default behaviour.
-
- Args:
-
- logger: Specifies the logger name or a Logger instance to be
- configured. (Defaults to "fontTools" logger). Unlike ``basicConfig``,
- this function can be called multiple times to reconfigure a logger.
- If the logger or any of its children already exists before the call is
- made, they will be reset before the new configuration is applied.
- filename: Specifies that a ``FileHandler`` be created, using the
- specified filename, rather than a ``StreamHandler``.
- filemode: Specifies the mode to open the file, if filename is
- specified. (If filemode is unspecified, it defaults to ``a``).
- format: Use the specified format string for the handler. This
- argument also accepts a dictionary of format strings keyed by
- level name, to allow customising the records appearance for
- specific levels. The special ``'*'`` key is for 'any other' level.
- datefmt: Use the specified date/time format.
- level: Set the logger level to the specified level.
- stream: Use the specified stream to initialize the StreamHandler. Note
- that this argument is incompatible with ``filename`` - if both
- are present, ``stream`` is ignored.
- handlers: If specified, this should be an iterable of already created
- handlers, which will be added to the logger. Any handler in the
- list which does not have a formatter assigned will be assigned the
- formatter created in this function.
- filters: If specified, this should be an iterable of already created
- filters. If the ``handlers`` do not already have filters assigned,
- these filters will be added to them.
- propagate: All loggers have a ``propagate`` attribute which determines
- whether to continue searching for handlers up the logging hierarchy.
- If not provided, the "propagate" attribute will be set to ``False``.
- """
- # using kwargs to enforce keyword-only arguments in py2.
- handlers = kwargs.pop("handlers", None)
- if handlers is None:
- if "stream" in kwargs and "filename" in kwargs:
- raise ValueError(
- "'stream' and 'filename' should not be " "specified together"
- )
- else:
- if "stream" in kwargs or "filename" in kwargs:
- raise ValueError(
- "'stream' or 'filename' should not be "
- "specified together with 'handlers'"
- )
- if handlers is None:
- filename = kwargs.pop("filename", None)
- mode = kwargs.pop("filemode", "a")
- if filename:
- h = logging.FileHandler(filename, mode)
- else:
- stream = kwargs.pop("stream", None)
- h = logging.StreamHandler(stream)
- handlers = [h]
- # By default, the top-level library logger is configured.
- logger = kwargs.pop("logger", "fontTools")
- if not logger or isinstance(logger, str):
- # empty "" or None means the 'root' logger
- logger = logging.getLogger(logger)
- # before (re)configuring, reset named logger and its children (if exist)
- _resetExistingLoggers(parent=logger.name)
- # use DEFAULT_FORMATS if 'format' is None
- fs = kwargs.pop("format", None)
- dfs = kwargs.pop("datefmt", None)
- # XXX: '%' is the only format style supported on both py2 and 3
- style = kwargs.pop("style", "%")
- fmt = LevelFormatter(fs, dfs, style)
- filters = kwargs.pop("filters", [])
- for h in handlers:
- if h.formatter is None:
- h.setFormatter(fmt)
- if not h.filters:
- for f in filters:
- h.addFilter(f)
- logger.addHandler(h)
- if logger.name != "root":
- # stop searching up the hierarchy for handlers
- logger.propagate = kwargs.pop("propagate", False)
- # set a custom severity level
- level = kwargs.pop("level", None)
- if level is not None:
- logger.setLevel(level)
- if kwargs:
- keys = ", ".join(kwargs.keys())
- raise ValueError("Unrecognised argument(s): %s" % keys)
-
-
-def _resetExistingLoggers(parent="root"):
- """Reset the logger named 'parent' and all its children to their initial
- state, if they already exist in the current configuration.
- """
- root = logging.root
- # get sorted list of all existing loggers
- existing = sorted(root.manager.loggerDict.keys())
- if parent == "root":
- # all the existing loggers are children of 'root'
- loggers_to_reset = [parent] + existing
- elif parent not in existing:
- # nothing to do
- return
- elif parent in existing:
- loggers_to_reset = [parent]
- # collect children, starting with the entry after parent name
- i = existing.index(parent) + 1
- prefixed = parent + "."
- pflen = len(prefixed)
- num_existing = len(existing)
- while i < num_existing:
- if existing[i][:pflen] == prefixed:
- loggers_to_reset.append(existing[i])
- i += 1
- for name in loggers_to_reset:
- if name == "root":
- root.setLevel(logging.WARNING)
- for h in root.handlers[:]:
- root.removeHandler(h)
- for f in root.filters[:]:
-                root.removeFilter(f)
- root.disabled = False
- else:
- logger = root.manager.loggerDict[name]
- logger.level = logging.NOTSET
- logger.handlers = []
- logger.filters = []
- logger.propagate = True
- logger.disabled = False
-
-
-class Timer(object):
- """Keeps track of overall time and split/lap times.
-
- >>> import time
- >>> timer = Timer()
- >>> time.sleep(0.01)
- >>> print("First lap:", timer.split())
- First lap: ...
- >>> time.sleep(0.02)
- >>> print("Second lap:", timer.split())
- Second lap: ...
- >>> print("Overall time:", timer.time())
- Overall time: ...
-
- Can be used as a context manager inside with-statements.
-
- >>> with Timer() as t:
- ... time.sleep(0.01)
- >>> print("%0.3f seconds" % t.elapsed)
- 0... seconds
-
- If initialised with a logger, it can log the elapsed time automatically
- upon exiting the with-statement.
-
- >>> import logging
- >>> log = logging.getLogger("my-fancy-timer-logger")
- >>> configLogger(logger=log, level="DEBUG", format="%(message)s", stream=sys.stdout)
- >>> with Timer(log, 'do something'):
- ... time.sleep(0.01)
- Took ... to do something
-
- The same Timer instance, holding a reference to a logger, can be reused
- in multiple with-statements, optionally with different messages or levels.
-
- >>> timer = Timer(log)
- >>> with timer():
- ... time.sleep(0.01)
- elapsed time: ...s
- >>> with timer('redo it', level=logging.INFO):
- ... time.sleep(0.02)
- Took ... to redo it
-
- It can also be used as a function decorator to log the time elapsed to run
- the decorated function.
-
- >>> @timer()
- ... def test1():
- ... time.sleep(0.01)
- >>> @timer('run test 2', level=logging.INFO)
- ... def test2():
- ... time.sleep(0.02)
- >>> test1()
- Took ... to run 'test1'
- >>> test2()
- Took ... to run test 2
- """
-
-    # timeit.default_timer chooses the most accurate clock for each platform
- _time = timeit.default_timer
- default_msg = "elapsed time: %(time).3fs"
- default_format = "Took %(time).3fs to %(msg)s"
-
- def __init__(self, logger=None, msg=None, level=None, start=None):
- self.reset(start)
- if logger is None:
- for arg in ("msg", "level"):
- if locals().get(arg) is not None:
- raise ValueError("'%s' can't be specified without a 'logger'" % arg)
- self.logger = logger
- self.level = level if level is not None else TIME_LEVEL
- self.msg = msg
-
- def reset(self, start=None):
- """Reset timer to 'start_time' or the current time."""
- if start is None:
- self.start = self._time()
- else:
- self.start = start
- self.last = self.start
- self.elapsed = 0.0
-
- def time(self):
- """Return the overall time (in seconds) since the timer started."""
- return self._time() - self.start
-
- def split(self):
- """Split and return the lap time (in seconds) in between splits."""
- current = self._time()
- self.elapsed = current - self.last
- self.last = current
- return self.elapsed
-
- def formatTime(self, msg, time):
- """Format 'time' value in 'msg' and return formatted string.
- If 'msg' contains a '%(time)' format string, try to use that.
- Otherwise, use the predefined 'default_format'.
- If 'msg' is empty or None, fall back to 'default_msg'.
- """
- if not msg:
- msg = self.default_msg
- if msg.find("%(time)") < 0:
- msg = self.default_format % {"msg": msg, "time": time}
- else:
- try:
- msg = msg % {"time": time}
- except (KeyError, ValueError):
- pass # skip if the format string is malformed
- return msg
-
- def __enter__(self):
- """Start a new lap"""
- self.last = self._time()
- self.elapsed = 0.0
- return self
-
- def __exit__(self, exc_type, exc_value, traceback):
- """End the current lap. If timer has a logger, log the time elapsed,
- using the format string in self.msg (or the default one).
- """
- time = self.split()
- if self.logger is None or exc_type:
- # if there's no logger attached, or if any exception occurred in
- # the with-statement, exit without logging the time
- return
- message = self.formatTime(self.msg, time)
- # Allow log handlers to see the individual parts to facilitate things
- # like a server accumulating aggregate stats.
- msg_parts = {"msg": self.msg, "time": time}
- self.logger.log(self.level, message, msg_parts)
-
- def __call__(self, func_or_msg=None, **kwargs):
- """If the first argument is a function, return a decorator which runs
- the wrapped function inside Timer's context manager.
- Otherwise, treat the first argument as a 'msg' string and return an updated
- Timer instance, referencing the same logger.
- A 'level' keyword can also be passed to override self.level.
- """
- if isinstance(func_or_msg, Callable):
- func = func_or_msg
- # use the function name when no explicit 'msg' is provided
- if not self.msg:
- self.msg = "run '%s'" % func.__name__
-
- @wraps(func)
- def wrapper(*args, **kwds):
- with self:
- return func(*args, **kwds)
-
- return wrapper
- else:
- msg = func_or_msg or kwargs.get("msg")
- level = kwargs.get("level", self.level)
- return self.__class__(self.logger, msg, level)
-
- def __float__(self):
- return self.elapsed
-
- def __int__(self):
- return int(self.elapsed)
-
- def __str__(self):
- return "%.3f" % self.elapsed
-
-
-class ChannelsFilter(logging.Filter):
- """Provides a hierarchical filter for log entries based on channel names.
-
- Filters out records emitted from a list of enabled channel names,
- including their children. It works the same as the ``logging.Filter``
- class, but allows the user to specify multiple channel names.
-
- >>> import sys
- >>> handler = logging.StreamHandler(sys.stdout)
- >>> handler.setFormatter(logging.Formatter("%(message)s"))
- >>> filter = ChannelsFilter("A.B", "C.D")
- >>> handler.addFilter(filter)
- >>> root = logging.getLogger()
- >>> root.addHandler(handler)
- >>> root.setLevel(level=logging.DEBUG)
- >>> logging.getLogger('A.B').debug('this record passes through')
- this record passes through
- >>> logging.getLogger('A.B.C').debug('records from children also pass')
- records from children also pass
- >>> logging.getLogger('C.D').debug('this one as well')
- this one as well
- >>> logging.getLogger('A.B.').debug('also this one')
- also this one
- >>> logging.getLogger('A.F').debug('but this one does not!')
- >>> logging.getLogger('C.DE').debug('neither this one!')
- """
-
- def __init__(self, *names):
- self.names = names
- self.num = len(names)
- self.lengths = {n: len(n) for n in names}
-
- def filter(self, record):
- if self.num == 0:
- return True
- for name in self.names:
- nlen = self.lengths[name]
- if name == record.name:
- return True
- elif record.name.find(name, 0, nlen) == 0 and record.name[nlen] == ".":
- return True
- return False
-
-
-class CapturingLogHandler(logging.Handler):
- def __init__(self, logger, level):
- super(CapturingLogHandler, self).__init__(level=level)
- self.records = []
- if isinstance(logger, str):
- self.logger = logging.getLogger(logger)
- else:
- self.logger = logger
-
- def __enter__(self):
- self.original_disabled = self.logger.disabled
- self.original_level = self.logger.level
- self.original_propagate = self.logger.propagate
-
- self.logger.addHandler(self)
- self.logger.setLevel(self.level)
- self.logger.disabled = False
- self.logger.propagate = False
-
- return self
-
- def __exit__(self, type, value, traceback):
- self.logger.removeHandler(self)
- self.logger.setLevel(self.original_level)
- self.logger.disabled = self.original_disabled
- self.logger.propagate = self.original_propagate
-
- return self
-
- def emit(self, record):
- self.records.append(record)
-
- def assertRegex(self, regexp, msg=None):
- import re
-
- pattern = re.compile(regexp)
- for r in self.records:
- if pattern.search(r.getMessage()):
- return True
- if msg is None:
- msg = "Pattern '%s' not found in logger records" % regexp
- assert 0, msg
-
-
-class LogMixin(object):
- """Mixin class that adds logging functionality to another class.
-
- You can define a new class that subclasses from ``LogMixin`` as well as
- other base classes through multiple inheritance.
- All instances of that class will have a ``log`` property that returns
-    a ``logging.Logger`` named after their respective ``<module>.<class>``.
-
- For example:
-
- >>> class BaseClass(object):
- ... pass
- >>> class MyClass(LogMixin, BaseClass):
- ... pass
- >>> a = MyClass()
- >>> isinstance(a.log, logging.Logger)
- True
- >>> print(a.log.name)
- fontTools.misc.loggingTools.MyClass
- >>> class AnotherClass(MyClass):
- ... pass
- >>> b = AnotherClass()
- >>> isinstance(b.log, logging.Logger)
- True
- >>> print(b.log.name)
- fontTools.misc.loggingTools.AnotherClass
- """
-
- @property
- def log(self):
- if not hasattr(self, "_log"):
- name = ".".join((self.__class__.__module__, self.__class__.__name__))
- self._log = logging.getLogger(name)
- return self._log
-
-
-def deprecateArgument(name, msg, category=UserWarning):
- """Raise a warning about deprecated function argument 'name'."""
- warnings.warn("%r is deprecated; %s" % (name, msg), category=category, stacklevel=3)
-
-
-def deprecateFunction(msg, category=UserWarning):
- """Decorator to raise a warning when a deprecated function is called."""
-
- def decorator(func):
- @wraps(func)
- def wrapper(*args, **kwargs):
- warnings.warn(
- "%r is deprecated; %s" % (func.__name__, msg),
- category=category,
- stacklevel=2,
- )
- return func(*args, **kwargs)
-
- return wrapper
-
- return decorator
-
-
-if __name__ == "__main__":
- import doctest
-
- sys.exit(doctest.testmod(optionflags=doctest.ELLIPSIS).failed)
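A short, hedged sketch of how `configLogger` and `Timer` above are typically combined (assuming fontTools is installed, where this module lives as `fontTools.misc.loggingTools`; the logger name is a placeholder):

```python
import logging
import time

from fontTools.misc.loggingTools import Timer, configLogger

log = logging.getLogger("myapp")  # placeholder name
configLogger(
    logger=log,
    level="DEBUG",
    format={"*": "[%(levelname)s] %(message)s", "DEBUG": "%(name)s: %(message)s"},
)

with Timer(log, "load the data", level=logging.INFO):
    time.sleep(0.05)  # stand-in for real work; logs "Took ... to load the data"
```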
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-19f23ec7.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-19f23ec7.js
deleted file mode 100644
index d689ef10c49e871c01a0fdbc8d999be7443324b6..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-19f23ec7.js
+++ /dev/null
@@ -1,2 +0,0 @@
-const{SvelteComponent:v,attr:y,destroy_each:b,detach:_,element:k,empty:S,ensure_array_like:r,init:p,insert:s,noop:u,safe_not_equal:q,set_data:w,text:m,toggle_class:f}=window.__gradio__svelte__internal;function g(c,n,i){const l=c.slice();return l[3]=n[i],l[5]=i,l}function d(c){let n;return{c(){n=m(", ")},m(i,l){s(i,n,l)},d(i){i&&_(n)}}}function h(c){let n=c[3].toLocaleString()+"",i,l,e=c[5]!==c[0].length-1&&d();return{c(){i=m(n),e&&e.c(),l=S()},m(a,t){s(a,i,t),e&&e.m(a,t),s(a,l,t)},p(a,t){t&1&&n!==(n=a[3].toLocaleString()+"")&&w(i,n),a[5]!==a[0].length-1?e||(e=d(),e.c(),e.m(l.parentNode,l)):e&&(e.d(1),e=null)},d(a){a&&(_(i),_(l)),e&&e.d(a)}}}function E(c){let n,i=r(c[0]),l=[];for(let e=0;e{"value"in t&&i(0,l=t.value),"type"in t&&i(1,e=t.type),"selected"in t&&i(2,a=t.selected)},[l,e,a]}class C extends v{constructor(n){super(),p(this,n,L,E,q,{value:0,type:1,selected:2})}}export{C as default};
-//# sourceMappingURL=Example-19f23ec7.js.map
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Index-03d58ab8.css b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Index-03d58ab8.css
deleted file mode 100644
index c02568c42d3cf011dc008a256fdece5721dbccab..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Index-03d58ab8.css
+++ /dev/null
@@ -1 +0,0 @@
-.hide.svelte-ydeks8{display:none}
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/importlib_resources/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/importlib_resources/__init__.py
deleted file mode 100644
index e6b60c18caa05288676c98d09a9db1ea2be2731d..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/importlib_resources/__init__.py
+++ /dev/null
@@ -1,17 +0,0 @@
-"""Read resources contained within a package."""
-
-from ._common import (
- as_file,
- files,
- Package,
-)
-
-from .abc import ResourceReader
-
-
-__all__ = [
- 'Package',
- 'ResourceReader',
- 'as_file',
- 'files',
-]
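A hedged usage sketch of the names re-exported above; `mypkg` and its data file are hypothetical and exist only for illustration:

```python
from importlib_resources import as_file, files

# Traversable access: works whether the package lives on disk or in a zip.
text = files("mypkg").joinpath("data", "config.json").read_text(encoding="utf-8")

# as_file() yields a real filesystem path for APIs that require one.
with as_file(files("mypkg") / "data/config.json") as path:
    print(path)
```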
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/backends/backend_qt5.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/backends/backend_qt5.py
deleted file mode 100644
index d94062b723f49aa1ff2fb0621748232684feef72..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/backends/backend_qt5.py
+++ /dev/null
@@ -1,28 +0,0 @@
-from .. import backends
-
-backends._QT_FORCE_QT5_BINDING = True
-
-
-from .backend_qt import ( # noqa
- SPECIAL_KEYS,
- # Public API
- cursord, _create_qApp, _BackendQT, TimerQT, MainWindow, FigureCanvasQT,
- FigureManagerQT, ToolbarQt, NavigationToolbar2QT, SubplotToolQt,
- SaveFigureQt, ConfigureSubplotsQt, RubberbandQt,
- HelpQt, ToolCopyToClipboardQT,
- # internal re-exports
- FigureCanvasBase, FigureManagerBase, MouseButton, NavigationToolbar2,
- TimerBase, ToolContainerBase, figureoptions, Gcf
-)
-from . import backend_qt as _backend_qt # noqa
-
-
-@_BackendQT.export
-class _BackendQT5(_BackendQT):
- pass
-
-
-def __getattr__(name):
- if name == 'qApp':
- return _backend_qt.qApp
- raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
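The module-level `__getattr__` above lazily resolves `qApp` on first access (PEP 562). A generic sketch of the same pattern, with a made-up attribute name, to show how it works when placed at the top level of a module:

```python
_cache = {}


def _build_expensive_resource():
    return object()  # stand-in for creating the Qt application, etc.


def __getattr__(name):  # called only for attributes not found normally
    if name == "expensive_resource":  # hypothetical attribute
        return _cache.setdefault(name, _build_expensive_resource())
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```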
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/lib/arrayterator.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/lib/arrayterator.py
deleted file mode 100644
index b9ea21f8e49f60461416962fc6e2a2ca625c04cd..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/lib/arrayterator.py
+++ /dev/null
@@ -1,219 +0,0 @@
-"""
-A buffered iterator for big arrays.
-
-This module solves the problem of iterating over a big file-based array
-without having to read it into memory. The `Arrayterator` class wraps
-an array object, and when iterated it will return sub-arrays with at most
-a user-specified number of elements.
-
-"""
-from operator import mul
-from functools import reduce
-
-__all__ = ['Arrayterator']
-
-
-class Arrayterator:
- """
- Buffered iterator for big arrays.
-
- `Arrayterator` creates a buffered iterator for reading big arrays in small
- contiguous blocks. The class is useful for objects stored in the
- file system. It allows iteration over the object *without* reading
- everything in memory; instead, small blocks are read and iterated over.
-
- `Arrayterator` can be used with any object that supports multidimensional
- slices. This includes NumPy arrays, but also variables from
- Scientific.IO.NetCDF or pynetcdf for example.
-
- Parameters
- ----------
- var : array_like
- The object to iterate over.
- buf_size : int, optional
- The buffer size. If `buf_size` is supplied, the maximum amount of
-        Default is None, which will read as many elements as possible
- Default is None, which will read as many element as possible
- into memory.
-
- Attributes
- ----------
- var
- buf_size
- start
- stop
- step
- shape
- flat
-
- See Also
- --------
- ndenumerate : Multidimensional array iterator.
- flatiter : Flat array iterator.
- memmap : Create a memory-map to an array stored in a binary file on disk.
-
- Notes
- -----
- The algorithm works by first finding a "running dimension", along which
- the blocks will be extracted. Given an array of dimensions
- ``(d1, d2, ..., dn)``, e.g. if `buf_size` is smaller than ``d1``, the
- first dimension will be used. If, on the other hand,
- ``d1 < buf_size < d1*d2`` the second dimension will be used, and so on.
- Blocks are extracted along this dimension, and when the last block is
- returned the process continues from the next dimension, until all
- elements have been read.
-
- Examples
- --------
- >>> a = np.arange(3 * 4 * 5 * 6).reshape(3, 4, 5, 6)
- >>> a_itor = np.lib.Arrayterator(a, 2)
- >>> a_itor.shape
- (3, 4, 5, 6)
-
- Now we can iterate over ``a_itor``, and it will return arrays of size
- two. Since `buf_size` was smaller than any dimension, the first
- dimension will be iterated over first:
-
- >>> for subarr in a_itor:
- ... if not subarr.all():
- ... print(subarr, subarr.shape) # doctest: +SKIP
- >>> # [[[[0 1]]]] (1, 1, 1, 2)
-
- """
-
- def __init__(self, var, buf_size=None):
- self.var = var
- self.buf_size = buf_size
-
- self.start = [0 for dim in var.shape]
- self.stop = [dim for dim in var.shape]
- self.step = [1 for dim in var.shape]
-
- def __getattr__(self, attr):
- return getattr(self.var, attr)
-
- def __getitem__(self, index):
- """
- Return a new arrayterator.
-
- """
- # Fix index, handling ellipsis and incomplete slices.
- if not isinstance(index, tuple):
- index = (index,)
- fixed = []
- length, dims = len(index), self.ndim
- for slice_ in index:
- if slice_ is Ellipsis:
- fixed.extend([slice(None)] * (dims-length+1))
- length = len(fixed)
- elif isinstance(slice_, int):
- fixed.append(slice(slice_, slice_+1, 1))
- else:
- fixed.append(slice_)
- index = tuple(fixed)
- if len(index) < dims:
- index += (slice(None),) * (dims-len(index))
-
- # Return a new arrayterator object.
- out = self.__class__(self.var, self.buf_size)
- for i, (start, stop, step, slice_) in enumerate(
- zip(self.start, self.stop, self.step, index)):
- out.start[i] = start + (slice_.start or 0)
- out.step[i] = step * (slice_.step or 1)
- out.stop[i] = start + (slice_.stop or stop-start)
- out.stop[i] = min(stop, out.stop[i])
- return out
-
- def __array__(self):
- """
- Return corresponding data.
-
- """
- slice_ = tuple(slice(*t) for t in zip(
- self.start, self.stop, self.step))
- return self.var[slice_]
-
- @property
- def flat(self):
- """
- A 1-D flat iterator for Arrayterator objects.
-
- This iterator returns elements of the array to be iterated over in
- `Arrayterator` one by one. It is similar to `flatiter`.
-
- See Also
- --------
- Arrayterator
- flatiter
-
- Examples
- --------
- >>> a = np.arange(3 * 4 * 5 * 6).reshape(3, 4, 5, 6)
- >>> a_itor = np.lib.Arrayterator(a, 2)
-
- >>> for subarr in a_itor.flat:
- ... if not subarr:
- ... print(subarr, type(subarr))
- ...
- 0
-
- """
- for block in self:
- yield from block.flat
-
- @property
- def shape(self):
- """
- The shape of the array to be iterated over.
-
- For an example, see `Arrayterator`.
-
- """
- return tuple(((stop-start-1)//step+1) for start, stop, step in
- zip(self.start, self.stop, self.step))
-
- def __iter__(self):
- # Skip arrays with degenerate dimensions
- if [dim for dim in self.shape if dim <= 0]:
- return
-
- start = self.start[:]
- stop = self.stop[:]
- step = self.step[:]
- ndims = self.var.ndim
-
- while True:
- count = self.buf_size or reduce(mul, self.shape)
-
- # iterate over each dimension, looking for the
- # running dimension (ie, the dimension along which
- # the blocks will be built from)
- rundim = 0
- for i in range(ndims-1, -1, -1):
- # if count is zero we ran out of elements to read
- # along higher dimensions, so we read only a single position
- if count == 0:
- stop[i] = start[i]+1
- elif count <= self.shape[i]:
- # limit along this dimension
- stop[i] = start[i] + count*step[i]
- rundim = i
- else:
- # read everything along this dimension
- stop[i] = self.stop[i]
- stop[i] = min(self.stop[i], stop[i])
- count = count//self.shape[i]
-
- # yield a block
- slice_ = tuple(slice(*t) for t in zip(start, stop, step))
- yield self.var[slice_]
-
- # Update start position, taking care of overflow to
- # other dimensions
- start[rundim] = stop[rundim] # start where we stopped
- for i in range(ndims-1, 0, -1):
- if start[i] >= self.stop[i]:
- start[i] = self.start[i]
- start[i-1] += self.step[i-1]
- if start[0] >= self.stop[0]:
- return
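A small usage sketch of the buffered iteration described in the Notes above: at most `buf_size` elements are read into memory per yielded block, yet the blocks together cover the whole array:

```python
import numpy as np

a = np.arange(3 * 4 * 5 * 6).reshape(3, 4, 5, 6)

total = 0
for block in np.lib.Arrayterator(a, buf_size=100):
    total += int(block.sum())

assert total == int(a.sum())
```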
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/lib/tests/test_shape_base.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/lib/tests/test_shape_base.py
deleted file mode 100644
index eb6628904bd066765d06f3a179a2a2f5958f60bc..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/lib/tests/test_shape_base.py
+++ /dev/null
@@ -1,787 +0,0 @@
-import numpy as np
-import functools
-import sys
-import pytest
-
-from numpy.lib.shape_base import (
- apply_along_axis, apply_over_axes, array_split, split, hsplit, dsplit,
- vsplit, dstack, column_stack, kron, tile, expand_dims, take_along_axis,
- put_along_axis
- )
-from numpy.testing import (
- assert_, assert_equal, assert_array_equal, assert_raises, assert_warns
- )
-
-
-IS_64BIT = sys.maxsize > 2**32
-
-
-def _add_keepdims(func):
- """ hack in keepdims behavior into a function taking an axis """
- @functools.wraps(func)
- def wrapped(a, axis, **kwargs):
- res = func(a, axis=axis, **kwargs)
- if axis is None:
- axis = 0 # res is now a scalar, so we can insert this anywhere
- return np.expand_dims(res, axis=axis)
- return wrapped
-
-
-class TestTakeAlongAxis:
- def test_argequivalent(self):
-        """ Test it translates from arg<func> to <func>"""
- from numpy.random import rand
- a = rand(3, 4, 5)
-
- funcs = [
- (np.sort, np.argsort, dict()),
- (_add_keepdims(np.min), _add_keepdims(np.argmin), dict()),
- (_add_keepdims(np.max), _add_keepdims(np.argmax), dict()),
- (np.partition, np.argpartition, dict(kth=2)),
- ]
-
- for func, argfunc, kwargs in funcs:
- for axis in list(range(a.ndim)) + [None]:
- a_func = func(a, axis=axis, **kwargs)
- ai_func = argfunc(a, axis=axis, **kwargs)
- assert_equal(a_func, take_along_axis(a, ai_func, axis=axis))
-
- def test_invalid(self):
- """ Test it errors when indices has too few dimensions """
- a = np.ones((10, 10))
- ai = np.ones((10, 2), dtype=np.intp)
-
- # sanity check
- take_along_axis(a, ai, axis=1)
-
- # not enough indices
- assert_raises(ValueError, take_along_axis, a, np.array(1), axis=1)
- # bool arrays not allowed
- assert_raises(IndexError, take_along_axis, a, ai.astype(bool), axis=1)
- # float arrays not allowed
- assert_raises(IndexError, take_along_axis, a, ai.astype(float), axis=1)
- # invalid axis
- assert_raises(np.AxisError, take_along_axis, a, ai, axis=10)
-
- def test_empty(self):
- """ Test everything is ok with empty results, even with inserted dims """
- a = np.ones((3, 4, 5))
- ai = np.ones((3, 0, 5), dtype=np.intp)
-
- actual = take_along_axis(a, ai, axis=1)
- assert_equal(actual.shape, ai.shape)
-
- def test_broadcast(self):
- """ Test that non-indexing dimensions are broadcast in both directions """
- a = np.ones((3, 4, 1))
- ai = np.ones((1, 2, 5), dtype=np.intp)
- actual = take_along_axis(a, ai, axis=1)
- assert_equal(actual.shape, (3, 2, 5))
-
-
-class TestPutAlongAxis:
- def test_replace_max(self):
- a_base = np.array([[10, 30, 20], [60, 40, 50]])
-
- for axis in list(range(a_base.ndim)) + [None]:
- # we mutate this in the loop
- a = a_base.copy()
-
- # replace the max with a small value
- i_max = _add_keepdims(np.argmax)(a, axis=axis)
- put_along_axis(a, i_max, -99, axis=axis)
-
-            # find the new minimum, which should be where the old max was
- i_min = _add_keepdims(np.argmin)(a, axis=axis)
-
- assert_equal(i_min, i_max)
-
- def test_broadcast(self):
- """ Test that non-indexing dimensions are broadcast in both directions """
- a = np.ones((3, 4, 1))
- ai = np.arange(10, dtype=np.intp).reshape((1, 2, 5)) % 4
- put_along_axis(a, ai, 20, axis=1)
- assert_equal(take_along_axis(a, ai, axis=1), 20)
-
-
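The arg-equivalence exercised by `test_argequivalent` above boils down to this: feeding `argsort`/`argmax` indices back through `take_along_axis` reproduces `sort`/`max` along the same axis. A short sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((3, 4, 5))

order = np.argsort(a, axis=1)
assert np.array_equal(np.take_along_axis(a, order, axis=1), np.sort(a, axis=1))

i_max = np.expand_dims(np.argmax(a, axis=1), axis=1)
assert np.array_equal(np.take_along_axis(a, i_max, axis=1),
                      np.expand_dims(a.max(axis=1), axis=1))
```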
-class TestApplyAlongAxis:
- def test_simple(self):
- a = np.ones((20, 10), 'd')
- assert_array_equal(
- apply_along_axis(len, 0, a), len(a)*np.ones(a.shape[1]))
-
- def test_simple101(self):
- a = np.ones((10, 101), 'd')
- assert_array_equal(
- apply_along_axis(len, 0, a), len(a)*np.ones(a.shape[1]))
-
- def test_3d(self):
- a = np.arange(27).reshape((3, 3, 3))
- assert_array_equal(apply_along_axis(np.sum, 0, a),
- [[27, 30, 33], [36, 39, 42], [45, 48, 51]])
-
- def test_preserve_subclass(self):
- def double(row):
- return row * 2
-
- class MyNDArray(np.ndarray):
- pass
-
- m = np.array([[0, 1], [2, 3]]).view(MyNDArray)
- expected = np.array([[0, 2], [4, 6]]).view(MyNDArray)
-
- result = apply_along_axis(double, 0, m)
- assert_(isinstance(result, MyNDArray))
- assert_array_equal(result, expected)
-
- result = apply_along_axis(double, 1, m)
- assert_(isinstance(result, MyNDArray))
- assert_array_equal(result, expected)
-
- def test_subclass(self):
- class MinimalSubclass(np.ndarray):
- data = 1
-
- def minimal_function(array):
- return array.data
-
- a = np.zeros((6, 3)).view(MinimalSubclass)
-
- assert_array_equal(
- apply_along_axis(minimal_function, 0, a), np.array([1, 1, 1])
- )
-
- def test_scalar_array(self, cls=np.ndarray):
- a = np.ones((6, 3)).view(cls)
- res = apply_along_axis(np.sum, 0, a)
- assert_(isinstance(res, cls))
- assert_array_equal(res, np.array([6, 6, 6]).view(cls))
-
- def test_0d_array(self, cls=np.ndarray):
- def sum_to_0d(x):
- """ Sum x, returning a 0d array of the same class """
- assert_equal(x.ndim, 1)
- return np.squeeze(np.sum(x, keepdims=True))
- a = np.ones((6, 3)).view(cls)
- res = apply_along_axis(sum_to_0d, 0, a)
- assert_(isinstance(res, cls))
- assert_array_equal(res, np.array([6, 6, 6]).view(cls))
-
- res = apply_along_axis(sum_to_0d, 1, a)
- assert_(isinstance(res, cls))
- assert_array_equal(res, np.array([3, 3, 3, 3, 3, 3]).view(cls))
-
- def test_axis_insertion(self, cls=np.ndarray):
- def f1to2(x):
- """produces an asymmetric non-square matrix from x"""
- assert_equal(x.ndim, 1)
- return (x[::-1] * x[1:,None]).view(cls)
-
- a2d = np.arange(6*3).reshape((6, 3))
-
- # 2d insertion along first axis
- actual = apply_along_axis(f1to2, 0, a2d)
- expected = np.stack([
- f1to2(a2d[:,i]) for i in range(a2d.shape[1])
- ], axis=-1).view(cls)
- assert_equal(type(actual), type(expected))
- assert_equal(actual, expected)
-
- # 2d insertion along last axis
- actual = apply_along_axis(f1to2, 1, a2d)
- expected = np.stack([
- f1to2(a2d[i,:]) for i in range(a2d.shape[0])
- ], axis=0).view(cls)
- assert_equal(type(actual), type(expected))
- assert_equal(actual, expected)
-
- # 3d insertion along middle axis
- a3d = np.arange(6*5*3).reshape((6, 5, 3))
-
- actual = apply_along_axis(f1to2, 1, a3d)
- expected = np.stack([
- np.stack([
- f1to2(a3d[i,:,j]) for i in range(a3d.shape[0])
- ], axis=0)
- for j in range(a3d.shape[2])
- ], axis=-1).view(cls)
- assert_equal(type(actual), type(expected))
- assert_equal(actual, expected)
-
- def test_subclass_preservation(self):
- class MinimalSubclass(np.ndarray):
- pass
- self.test_scalar_array(MinimalSubclass)
- self.test_0d_array(MinimalSubclass)
- self.test_axis_insertion(MinimalSubclass)
-
- def test_axis_insertion_ma(self):
- def f1to2(x):
- """produces an asymmetric non-square matrix from x"""
- assert_equal(x.ndim, 1)
- res = x[::-1] * x[1:,None]
- return np.ma.masked_where(res%5==0, res)
- a = np.arange(6*3).reshape((6, 3))
- res = apply_along_axis(f1to2, 0, a)
- assert_(isinstance(res, np.ma.masked_array))
- assert_equal(res.ndim, 3)
- assert_array_equal(res[:,:,0].mask, f1to2(a[:,0]).mask)
- assert_array_equal(res[:,:,1].mask, f1to2(a[:,1]).mask)
- assert_array_equal(res[:,:,2].mask, f1to2(a[:,2]).mask)
-
- def test_tuple_func1d(self):
- def sample_1d(x):
- return x[1], x[0]
- res = np.apply_along_axis(sample_1d, 1, np.array([[1, 2], [3, 4]]))
- assert_array_equal(res, np.array([[2, 1], [4, 3]]))
-
- def test_empty(self):
- # can't apply_along_axis when there's no chance to call the function
- def never_call(x):
- assert_(False) # should never be reached
-
- a = np.empty((0, 0))
- assert_raises(ValueError, np.apply_along_axis, never_call, 0, a)
- assert_raises(ValueError, np.apply_along_axis, never_call, 1, a)
-
- # but it's sometimes ok with some non-zero dimensions
- def empty_to_1(x):
- assert_(len(x) == 0)
- return 1
-
- a = np.empty((10, 0))
- actual = np.apply_along_axis(empty_to_1, 1, a)
- assert_equal(actual, np.ones(10))
- assert_raises(ValueError, np.apply_along_axis, empty_to_1, 0, a)
-
- def test_with_iterable_object(self):
- # from issue 5248
- d = np.array([
- [{1, 11}, {2, 22}, {3, 33}],
- [{4, 44}, {5, 55}, {6, 66}]
- ])
- actual = np.apply_along_axis(lambda a: set.union(*a), 0, d)
- expected = np.array([{1, 11, 4, 44}, {2, 22, 5, 55}, {3, 33, 6, 66}])
-
- assert_equal(actual, expected)
-
- # issue 8642 - assert_equal doesn't detect this!
- for i in np.ndindex(actual.shape):
- assert_equal(type(actual[i]), type(expected[i]))
-
-
-class TestApplyOverAxes:
- def test_simple(self):
- a = np.arange(24).reshape(2, 3, 4)
- aoa_a = apply_over_axes(np.sum, a, [0, 2])
- assert_array_equal(aoa_a, np.array([[[60], [92], [124]]]))
-
-
-class TestExpandDims:
- def test_functionality(self):
- s = (2, 3, 4, 5)
- a = np.empty(s)
- for axis in range(-5, 4):
- b = expand_dims(a, axis)
- assert_(b.shape[axis] == 1)
- assert_(np.squeeze(b).shape == s)
-
- def test_axis_tuple(self):
- a = np.empty((3, 3, 3))
- assert np.expand_dims(a, axis=(0, 1, 2)).shape == (1, 1, 1, 3, 3, 3)
- assert np.expand_dims(a, axis=(0, -1, -2)).shape == (1, 3, 3, 3, 1, 1)
- assert np.expand_dims(a, axis=(0, 3, 5)).shape == (1, 3, 3, 1, 3, 1)
- assert np.expand_dims(a, axis=(0, -3, -5)).shape == (1, 1, 3, 1, 3, 3)
-
- def test_axis_out_of_range(self):
- s = (2, 3, 4, 5)
- a = np.empty(s)
- assert_raises(np.AxisError, expand_dims, a, -6)
- assert_raises(np.AxisError, expand_dims, a, 5)
-
- a = np.empty((3, 3, 3))
- assert_raises(np.AxisError, expand_dims, a, (0, -6))
- assert_raises(np.AxisError, expand_dims, a, (0, 5))
-
- def test_repeated_axis(self):
- a = np.empty((3, 3, 3))
- assert_raises(ValueError, expand_dims, a, axis=(1, 1))
-
- def test_subclasses(self):
- a = np.arange(10).reshape((2, 5))
- a = np.ma.array(a, mask=a%3 == 0)
-
- expanded = np.expand_dims(a, axis=1)
- assert_(isinstance(expanded, np.ma.MaskedArray))
- assert_equal(expanded.shape, (2, 1, 5))
- assert_equal(expanded.mask.shape, (2, 1, 5))
-
-
-class TestArraySplit:
- def test_integer_0_split(self):
- a = np.arange(10)
- assert_raises(ValueError, array_split, a, 0)
-
- def test_integer_split(self):
- a = np.arange(10)
- res = array_split(a, 1)
- desired = [np.arange(10)]
- compare_results(res, desired)
-
- res = array_split(a, 2)
- desired = [np.arange(5), np.arange(5, 10)]
- compare_results(res, desired)
-
- res = array_split(a, 3)
- desired = [np.arange(4), np.arange(4, 7), np.arange(7, 10)]
- compare_results(res, desired)
-
- res = array_split(a, 4)
- desired = [np.arange(3), np.arange(3, 6), np.arange(6, 8),
- np.arange(8, 10)]
- compare_results(res, desired)
-
- res = array_split(a, 5)
- desired = [np.arange(2), np.arange(2, 4), np.arange(4, 6),
- np.arange(6, 8), np.arange(8, 10)]
- compare_results(res, desired)
-
- res = array_split(a, 6)
- desired = [np.arange(2), np.arange(2, 4), np.arange(4, 6),
- np.arange(6, 8), np.arange(8, 9), np.arange(9, 10)]
- compare_results(res, desired)
-
- res = array_split(a, 7)
- desired = [np.arange(2), np.arange(2, 4), np.arange(4, 6),
- np.arange(6, 7), np.arange(7, 8), np.arange(8, 9),
- np.arange(9, 10)]
- compare_results(res, desired)
-
- res = array_split(a, 8)
- desired = [np.arange(2), np.arange(2, 4), np.arange(4, 5),
- np.arange(5, 6), np.arange(6, 7), np.arange(7, 8),
- np.arange(8, 9), np.arange(9, 10)]
- compare_results(res, desired)
-
- res = array_split(a, 9)
- desired = [np.arange(2), np.arange(2, 3), np.arange(3, 4),
- np.arange(4, 5), np.arange(5, 6), np.arange(6, 7),
- np.arange(7, 8), np.arange(8, 9), np.arange(9, 10)]
- compare_results(res, desired)
-
- res = array_split(a, 10)
- desired = [np.arange(1), np.arange(1, 2), np.arange(2, 3),
- np.arange(3, 4), np.arange(4, 5), np.arange(5, 6),
- np.arange(6, 7), np.arange(7, 8), np.arange(8, 9),
- np.arange(9, 10)]
- compare_results(res, desired)
-
- res = array_split(a, 11)
- desired = [np.arange(1), np.arange(1, 2), np.arange(2, 3),
- np.arange(3, 4), np.arange(4, 5), np.arange(5, 6),
- np.arange(6, 7), np.arange(7, 8), np.arange(8, 9),
- np.arange(9, 10), np.array([])]
- compare_results(res, desired)
-
- def test_integer_split_2D_rows(self):
- a = np.array([np.arange(10), np.arange(10)])
- res = array_split(a, 3, axis=0)
- tgt = [np.array([np.arange(10)]), np.array([np.arange(10)]),
- np.zeros((0, 10))]
- compare_results(res, tgt)
- assert_(a.dtype.type is res[-1].dtype.type)
-
- # Same thing for manual splits:
- res = array_split(a, [0, 1], axis=0)
- tgt = [np.zeros((0, 10)), np.array([np.arange(10)]),
- np.array([np.arange(10)])]
- compare_results(res, tgt)
- assert_(a.dtype.type is res[-1].dtype.type)
-
- def test_integer_split_2D_cols(self):
- a = np.array([np.arange(10), np.arange(10)])
- res = array_split(a, 3, axis=-1)
- desired = [np.array([np.arange(4), np.arange(4)]),
- np.array([np.arange(4, 7), np.arange(4, 7)]),
- np.array([np.arange(7, 10), np.arange(7, 10)])]
- compare_results(res, desired)
-
- def test_integer_split_2D_default(self):
- """ This will fail if we change default axis
- """
- a = np.array([np.arange(10), np.arange(10)])
- res = array_split(a, 3)
- tgt = [np.array([np.arange(10)]), np.array([np.arange(10)]),
- np.zeros((0, 10))]
- compare_results(res, tgt)
- assert_(a.dtype.type is res[-1].dtype.type)
- # perhaps should check higher dimensions
-
- @pytest.mark.skipif(not IS_64BIT, reason="Needs 64bit platform")
- def test_integer_split_2D_rows_greater_max_int32(self):
- a = np.broadcast_to([0], (1 << 32, 2))
- res = array_split(a, 4)
- chunk = np.broadcast_to([0], (1 << 30, 2))
- tgt = [chunk] * 4
- for i in range(len(tgt)):
- assert_equal(res[i].shape, tgt[i].shape)
-
- def test_index_split_simple(self):
- a = np.arange(10)
- indices = [1, 5, 7]
- res = array_split(a, indices, axis=-1)
- desired = [np.arange(0, 1), np.arange(1, 5), np.arange(5, 7),
- np.arange(7, 10)]
- compare_results(res, desired)
-
- def test_index_split_low_bound(self):
- a = np.arange(10)
- indices = [0, 5, 7]
- res = array_split(a, indices, axis=-1)
- desired = [np.array([]), np.arange(0, 5), np.arange(5, 7),
- np.arange(7, 10)]
- compare_results(res, desired)
-
- def test_index_split_high_bound(self):
- a = np.arange(10)
- indices = [0, 5, 7, 10, 12]
- res = array_split(a, indices, axis=-1)
- desired = [np.array([]), np.arange(0, 5), np.arange(5, 7),
- np.arange(7, 10), np.array([]), np.array([])]
- compare_results(res, desired)
-
-
-class TestSplit:
- # The split function is essentially the same as array_split,
- # except that it tests whether splitting will result in an
- # equal split. Only test for that case.
-
- def test_equal_split(self):
- a = np.arange(10)
- res = split(a, 2)
- desired = [np.arange(5), np.arange(5, 10)]
- compare_results(res, desired)
-
- def test_unequal_split(self):
- a = np.arange(10)
- assert_raises(ValueError, split, a, 3)
-
-
-class TestColumnStack:
- def test_non_iterable(self):
- assert_raises(TypeError, column_stack, 1)
-
- def test_1D_arrays(self):
- # example from docstring
- a = np.array((1, 2, 3))
- b = np.array((2, 3, 4))
- expected = np.array([[1, 2],
- [2, 3],
- [3, 4]])
- actual = np.column_stack((a, b))
- assert_equal(actual, expected)
-
- def test_2D_arrays(self):
- # same as hstack 2D docstring example
- a = np.array([[1], [2], [3]])
- b = np.array([[2], [3], [4]])
- expected = np.array([[1, 2],
- [2, 3],
- [3, 4]])
- actual = np.column_stack((a, b))
- assert_equal(actual, expected)
-
- def test_generator(self):
- with pytest.raises(TypeError, match="arrays to stack must be"):
- column_stack((np.arange(3) for _ in range(2)))
-
-
-class TestDstack:
- def test_non_iterable(self):
- assert_raises(TypeError, dstack, 1)
-
- def test_0D_array(self):
- a = np.array(1)
- b = np.array(2)
- res = dstack([a, b])
- desired = np.array([[[1, 2]]])
- assert_array_equal(res, desired)
-
- def test_1D_array(self):
- a = np.array([1])
- b = np.array([2])
- res = dstack([a, b])
- desired = np.array([[[1, 2]]])
- assert_array_equal(res, desired)
-
- def test_2D_array(self):
- a = np.array([[1], [2]])
- b = np.array([[1], [2]])
- res = dstack([a, b])
- desired = np.array([[[1, 1]], [[2, 2]]])
- assert_array_equal(res, desired)
-
- def test_2D_array2(self):
- a = np.array([1, 2])
- b = np.array([1, 2])
- res = dstack([a, b])
- desired = np.array([[[1, 1], [2, 2]]])
- assert_array_equal(res, desired)
-
- def test_generator(self):
- with pytest.raises(TypeError, match="arrays to stack must be"):
- dstack((np.arange(3) for _ in range(2)))
-
-
-# array_split has a more comprehensive test of splitting.
-# Only do simple tests on hsplit, vsplit, and dsplit here.
-class TestHsplit:
- """Only testing for integer splits.
-
- """
- def test_non_iterable(self):
- assert_raises(ValueError, hsplit, 1, 1)
-
- def test_0D_array(self):
- a = np.array(1)
- try:
- hsplit(a, 2)
- assert_(0)
- except ValueError:
- pass
-
- def test_1D_array(self):
- a = np.array([1, 2, 3, 4])
- res = hsplit(a, 2)
- desired = [np.array([1, 2]), np.array([3, 4])]
- compare_results(res, desired)
-
- def test_2D_array(self):
- a = np.array([[1, 2, 3, 4],
- [1, 2, 3, 4]])
- res = hsplit(a, 2)
- desired = [np.array([[1, 2], [1, 2]]), np.array([[3, 4], [3, 4]])]
- compare_results(res, desired)
-
-
-class TestVsplit:
- """Only testing for integer splits.
-
- """
- def test_non_iterable(self):
- assert_raises(ValueError, vsplit, 1, 1)
-
- def test_0D_array(self):
- a = np.array(1)
- assert_raises(ValueError, vsplit, a, 2)
-
- def test_1D_array(self):
- a = np.array([1, 2, 3, 4])
- try:
- vsplit(a, 2)
- assert_(0)
- except ValueError:
- pass
-
- def test_2D_array(self):
- a = np.array([[1, 2, 3, 4],
- [1, 2, 3, 4]])
- res = vsplit(a, 2)
- desired = [np.array([[1, 2, 3, 4]]), np.array([[1, 2, 3, 4]])]
- compare_results(res, desired)
-
-
-class TestDsplit:
- # Only testing for integer splits.
- def test_non_iterable(self):
- assert_raises(ValueError, dsplit, 1, 1)
-
- def test_0D_array(self):
- a = np.array(1)
- assert_raises(ValueError, dsplit, a, 2)
-
- def test_1D_array(self):
- a = np.array([1, 2, 3, 4])
- assert_raises(ValueError, dsplit, a, 2)
-
- def test_2D_array(self):
- a = np.array([[1, 2, 3, 4],
- [1, 2, 3, 4]])
- try:
- dsplit(a, 2)
- assert_(0)
- except ValueError:
- pass
-
- def test_3D_array(self):
- a = np.array([[[1, 2, 3, 4],
- [1, 2, 3, 4]],
- [[1, 2, 3, 4],
- [1, 2, 3, 4]]])
- res = dsplit(a, 2)
- desired = [np.array([[[1, 2], [1, 2]], [[1, 2], [1, 2]]]),
- np.array([[[3, 4], [3, 4]], [[3, 4], [3, 4]]])]
- compare_results(res, desired)
-
-
-class TestSqueeze:
- def test_basic(self):
- from numpy.random import rand
-
- a = rand(20, 10, 10, 1, 1)
- b = rand(20, 1, 10, 1, 20)
- c = rand(1, 1, 20, 10)
- assert_array_equal(np.squeeze(a), np.reshape(a, (20, 10, 10)))
- assert_array_equal(np.squeeze(b), np.reshape(b, (20, 10, 20)))
- assert_array_equal(np.squeeze(c), np.reshape(c, (20, 10)))
-
- # Squeezing to 0-dim should still give an ndarray
- a = [[[1.5]]]
- res = np.squeeze(a)
- assert_equal(res, 1.5)
- assert_equal(res.ndim, 0)
- assert_equal(type(res), np.ndarray)
-
-
-class TestKron:
- def test_basic(self):
- # Using 0-dimensional ndarray
- a = np.array(1)
- b = np.array([[1, 2], [3, 4]])
- k = np.array([[1, 2], [3, 4]])
- assert_array_equal(np.kron(a, b), k)
- a = np.array([[1, 2], [3, 4]])
- b = np.array(1)
- assert_array_equal(np.kron(a, b), k)
-
- # Using 1-dimensional ndarray
- a = np.array([3])
- b = np.array([[1, 2], [3, 4]])
- k = np.array([[3, 6], [9, 12]])
- assert_array_equal(np.kron(a, b), k)
- a = np.array([[1, 2], [3, 4]])
- b = np.array([3])
- assert_array_equal(np.kron(a, b), k)
-
- # Using 3-dimensional ndarray
- a = np.array([[[1]], [[2]]])
- b = np.array([[1, 2], [3, 4]])
- k = np.array([[[1, 2], [3, 4]], [[2, 4], [6, 8]]])
- assert_array_equal(np.kron(a, b), k)
- a = np.array([[1, 2], [3, 4]])
- b = np.array([[[1]], [[2]]])
- k = np.array([[[1, 2], [3, 4]], [[2, 4], [6, 8]]])
- assert_array_equal(np.kron(a, b), k)
-
- def test_return_type(self):
- class myarray(np.ndarray):
- __array_priority__ = 1.0
-
- a = np.ones([2, 2])
- ma = myarray(a.shape, a.dtype, a.data)
- assert_equal(type(kron(a, a)), np.ndarray)
- assert_equal(type(kron(ma, ma)), myarray)
- assert_equal(type(kron(a, ma)), myarray)
- assert_equal(type(kron(ma, a)), myarray)
-
- @pytest.mark.parametrize(
- "array_class", [np.asarray, np.mat]
- )
- def test_kron_smoke(self, array_class):
- a = array_class(np.ones([3, 3]))
- b = array_class(np.ones([3, 3]))
- k = array_class(np.ones([9, 9]))
-
- assert_array_equal(np.kron(a, b), k)
-
- def test_kron_ma(self):
- x = np.ma.array([[1, 2], [3, 4]], mask=[[0, 1], [1, 0]])
- k = np.ma.array(np.diag([1, 4, 4, 16]),
- mask=~np.array(np.identity(4), dtype=bool))
-
- assert_array_equal(k, np.kron(x, x))
-
- @pytest.mark.parametrize(
- "shape_a,shape_b", [
- ((1, 1), (1, 1)),
- ((1, 2, 3), (4, 5, 6)),
- ((2, 2), (2, 2, 2)),
- ((1, 0), (1, 1)),
- ((2, 0, 2), (2, 2)),
- ((2, 0, 0, 2), (2, 0, 2)),
- ])
- def test_kron_shape(self, shape_a, shape_b):
- a = np.ones(shape_a)
- b = np.ones(shape_b)
- normalised_shape_a = (1,) * max(0, len(shape_b)-len(shape_a)) + shape_a
- normalised_shape_b = (1,) * max(0, len(shape_a)-len(shape_b)) + shape_b
- expected_shape = np.multiply(normalised_shape_a, normalised_shape_b)
-
- k = np.kron(a, b)
- assert np.array_equal(
- k.shape, expected_shape), "Unexpected shape from kron"
-
-
-class TestTile:
- def test_basic(self):
- a = np.array([0, 1, 2])
- b = [[1, 2], [3, 4]]
- assert_equal(tile(a, 2), [0, 1, 2, 0, 1, 2])
- assert_equal(tile(a, (2, 2)), [[0, 1, 2, 0, 1, 2], [0, 1, 2, 0, 1, 2]])
- assert_equal(tile(a, (1, 2)), [[0, 1, 2, 0, 1, 2]])
- assert_equal(tile(b, 2), [[1, 2, 1, 2], [3, 4, 3, 4]])
- assert_equal(tile(b, (2, 1)), [[1, 2], [3, 4], [1, 2], [3, 4]])
- assert_equal(tile(b, (2, 2)), [[1, 2, 1, 2], [3, 4, 3, 4],
- [1, 2, 1, 2], [3, 4, 3, 4]])
-
- def test_tile_one_repetition_on_array_gh4679(self):
- a = np.arange(5)
- b = tile(a, 1)
- b += 2
- assert_equal(a, np.arange(5))
-
- def test_empty(self):
- a = np.array([[[]]])
- b = np.array([[], []])
- c = tile(b, 2).shape
- d = tile(a, (3, 2, 5)).shape
- assert_equal(c, (2, 0))
- assert_equal(d, (3, 2, 0))
-
- def test_kroncompare(self):
- from numpy.random import randint
-
- reps = [(2,), (1, 2), (2, 1), (2, 2), (2, 3, 2), (3, 2)]
- shape = [(3,), (2, 3), (3, 4, 3), (3, 2, 3), (4, 3, 2, 4), (2, 2)]
- for s in shape:
- b = randint(0, 10, size=s)
- for r in reps:
- a = np.ones(r, b.dtype)
- large = tile(b, r)
- klarge = kron(a, b)
- assert_equal(large, klarge)
-
-
-class TestMayShareMemory:
- def test_basic(self):
- d = np.ones((50, 60))
- d2 = np.ones((30, 60, 6))
- assert_(np.may_share_memory(d, d))
- assert_(np.may_share_memory(d, d[::-1]))
- assert_(np.may_share_memory(d, d[::2]))
- assert_(np.may_share_memory(d, d[1:, ::-1]))
-
- assert_(not np.may_share_memory(d[::-1], d2))
- assert_(not np.may_share_memory(d[::2], d2))
- assert_(not np.may_share_memory(d[1:, ::-1], d2))
- assert_(np.may_share_memory(d2[1:, ::-1], d2))
-
-
-# Utility
-def compare_results(res, desired):
- """Compare lists of arrays."""
- if len(res) != len(desired):
- raise ValueError("Iterables have different lengths")
- # See also PEP 618 for Python 3.10
- for x, y in zip(res, desired):
- assert_array_equal(x, y)
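An aside on the behaviour exercised by TestArraySplit and TestSplit above: `split` is `array_split` plus a strict equal-division check. A minimal sketch with plain NumPy (nothing from this test module assumed):

```python
import numpy as np

a = np.arange(10)

# array_split tolerates uneven division: the leading chunks absorb the remainder.
print([c.tolist() for c in np.array_split(a, 3)])
# -> [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]

# split demands an exact division and raises ValueError otherwise.
try:
    np.split(a, 3)
except ValueError as exc:
    print("split refused:", exc)
```

This is exactly the contract the `test_unequal_split` case above asserts with `assert_raises`.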
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/extension/base/casting.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/extension/base/casting.py
deleted file mode 100644
index 2bfe801c48a7794b86fa6c75f5edda9f25269caa..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/extension/base/casting.py
+++ /dev/null
@@ -1,87 +0,0 @@
-import numpy as np
-import pytest
-
-import pandas.util._test_decorators as td
-
-import pandas as pd
-import pandas._testing as tm
-from pandas.core.internals.blocks import NumpyBlock
-
-
-class BaseCastingTests:
- """Casting to and from ExtensionDtypes"""
-
- def test_astype_object_series(self, all_data):
- ser = pd.Series(all_data, name="A")
- result = ser.astype(object)
- assert result.dtype == np.dtype(object)
- if hasattr(result._mgr, "blocks"):
- blk = result._mgr.blocks[0]
- assert isinstance(blk, NumpyBlock)
- assert blk.is_object
- assert isinstance(result._mgr.array, np.ndarray)
- assert result._mgr.array.dtype == np.dtype(object)
-
- def test_astype_object_frame(self, all_data):
- df = pd.DataFrame({"A": all_data})
-
- result = df.astype(object)
- if hasattr(result._mgr, "blocks"):
- blk = result._mgr.blocks[0]
- assert isinstance(blk, NumpyBlock), type(blk)
- assert blk.is_object
- assert isinstance(result._mgr.arrays[0], np.ndarray)
- assert result._mgr.arrays[0].dtype == np.dtype(object)
-
- # check that we can compare the dtypes
- comp = result.dtypes == df.dtypes
- assert not comp.any()
-
- def test_tolist(self, data):
- result = pd.Series(data).tolist()
- expected = list(data)
- assert result == expected
-
- def test_astype_str(self, data):
- result = pd.Series(data[:5]).astype(str)
- expected = pd.Series([str(x) for x in data[:5]], dtype=str)
- tm.assert_series_equal(result, expected)
-
- @pytest.mark.parametrize(
- "nullable_string_dtype",
- [
- "string[python]",
- pytest.param("string[pyarrow]", marks=td.skip_if_no("pyarrow")),
- ],
- )
- def test_astype_string(self, data, nullable_string_dtype):
- # GH-33465, GH#45326 as of 2.0 we decode bytes instead of calling str(obj)
- result = pd.Series(data[:5]).astype(nullable_string_dtype)
- expected = pd.Series(
- [str(x) if not isinstance(x, bytes) else x.decode() for x in data[:5]],
- dtype=nullable_string_dtype,
- )
- tm.assert_series_equal(result, expected)
-
- def test_to_numpy(self, data):
- expected = np.asarray(data)
-
- result = data.to_numpy()
- tm.assert_equal(result, expected)
-
- result = pd.Series(data).to_numpy()
- tm.assert_equal(result, expected)
-
- def test_astype_empty_dataframe(self, dtype):
- # https://github.com/pandas-dev/pandas/issues/33113
- df = pd.DataFrame()
- result = df.astype(dtype)
- tm.assert_frame_equal(result, df)
-
- @pytest.mark.parametrize("copy", [True, False])
- def test_astype_own_type(self, data, copy):
- # ensure that astype returns the original object for equal dtype and copy=False
- # https://github.com/pandas-dev/pandas/issues/28488
- result = data.astype(data.dtype, copy=copy)
- assert (result is data) is (not copy)
- tm.assert_extension_array_equal(result, data)
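The BaseCastingTests above run against arbitrary extension dtypes supplied by fixtures. As a rough stand-alone illustration of the copy semantics checked by `test_astype_own_type` and the object cast checked by `test_astype_object_series`, here is a sketch that assumes the nullable Int64 dtype as a stand-in for the fixture-provided `data`:

```python
import numpy as np
import pandas as pd

arr = pd.array([1, 2, None], dtype="Int64")  # assumed example dtype

# astype to the same dtype: copy=False hands back the original object,
# copy=True returns a new (but equal) array, mirroring test_astype_own_type.
assert arr.astype(arr.dtype, copy=False) is arr
assert arr.astype(arr.dtype, copy=True) is not arr

# Casting a Series backed by the array to object yields a plain NumPy object
# dtype, which is what test_astype_object_series asserts.
assert pd.Series(arr).astype(object).dtype == np.dtype(object)
```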
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/util/test_assert_produces_warning.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/util/test_assert_produces_warning.py
deleted file mode 100644
index 5c27a3ee79d4a82bce83eec56ab9d88e10dc06cd..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/util/test_assert_produces_warning.py
+++ /dev/null
@@ -1,241 +0,0 @@
-""""
-Test module for testing ``pandas._testing.assert_produces_warning``.
-"""
-import warnings
-
-import pytest
-
-from pandas.errors import (
- DtypeWarning,
- PerformanceWarning,
-)
-
-import pandas._testing as tm
-
-
-@pytest.fixture(
- params=[
- RuntimeWarning,
- ResourceWarning,
- UserWarning,
- FutureWarning,
- DeprecationWarning,
- PerformanceWarning,
- DtypeWarning,
- ],
-)
-def category(request):
- """
- Return a unique warning category.
-
- Useful for testing behavior of tm.assert_produces_warning with various categories.
- """
- return request.param
-
-
-@pytest.fixture(
- params=[
- (RuntimeWarning, UserWarning),
- (UserWarning, FutureWarning),
- (FutureWarning, RuntimeWarning),
- (DeprecationWarning, PerformanceWarning),
- (PerformanceWarning, FutureWarning),
- (DtypeWarning, DeprecationWarning),
- (ResourceWarning, DeprecationWarning),
- (FutureWarning, DeprecationWarning),
- ],
- ids=lambda x: type(x).__name__,
-)
-def pair_different_warnings(request):
- """
- Return a pair of different warnings.
-
- Useful for testing how several different warnings are handled
- in tm.assert_produces_warning.
- """
- return request.param
-
-
-def f():
- warnings.warn("f1", FutureWarning)
- warnings.warn("f2", RuntimeWarning)
-
-
-@pytest.mark.filterwarnings("ignore:f1:FutureWarning")
-def test_assert_produces_warning_honors_filter():
- # Raise by default.
- msg = r"Caused unexpected warning\(s\)"
- with pytest.raises(AssertionError, match=msg):
- with tm.assert_produces_warning(RuntimeWarning):
- f()
-
- with tm.assert_produces_warning(RuntimeWarning, raise_on_extra_warnings=False):
- f()
-
-
-@pytest.mark.parametrize(
- "message, match",
- [
- ("", None),
- ("", ""),
- ("Warning message", r".*"),
- ("Warning message", "War"),
- ("Warning message", r"[Ww]arning"),
- ("Warning message", "age"),
- ("Warning message", r"age$"),
- ("Message 12-234 with numbers", r"\d{2}-\d{3}"),
- ("Message 12-234 with numbers", r"^Mes.*\d{2}-\d{3}"),
- ("Message 12-234 with numbers", r"\d{2}-\d{3}\s\S+"),
- ("Message, which we do not match", None),
- ],
-)
-def test_catch_warning_category_and_match(category, message, match):
- with tm.assert_produces_warning(category, match=match):
- warnings.warn(message, category)
-
-
-def test_fail_to_match_runtime_warning():
- category = RuntimeWarning
- match = "Did not see this warning"
- unmatched = (
- r"Did not see warning 'RuntimeWarning' matching 'Did not see this warning'. "
- r"The emitted warning messages are "
- r"\[RuntimeWarning\('This is not a match.'\), "
- r"RuntimeWarning\('Another unmatched warning.'\)\]"
- )
- with pytest.raises(AssertionError, match=unmatched):
- with tm.assert_produces_warning(category, match=match):
- warnings.warn("This is not a match.", category)
- warnings.warn("Another unmatched warning.", category)
-
-
-def test_fail_to_match_future_warning():
- category = FutureWarning
- match = "Warning"
- unmatched = (
- r"Did not see warning 'FutureWarning' matching 'Warning'. "
- r"The emitted warning messages are "
- r"\[FutureWarning\('This is not a match.'\), "
- r"FutureWarning\('Another unmatched warning.'\)\]"
- )
- with pytest.raises(AssertionError, match=unmatched):
- with tm.assert_produces_warning(category, match=match):
- warnings.warn("This is not a match.", category)
- warnings.warn("Another unmatched warning.", category)
-
-
-def test_fail_to_match_resource_warning():
- category = ResourceWarning
- match = r"\d+"
- unmatched = (
- r"Did not see warning 'ResourceWarning' matching '\\d\+'. "
- r"The emitted warning messages are "
- r"\[ResourceWarning\('This is not a match.'\), "
- r"ResourceWarning\('Another unmatched warning.'\)\]"
- )
- with pytest.raises(AssertionError, match=unmatched):
- with tm.assert_produces_warning(category, match=match):
- warnings.warn("This is not a match.", category)
- warnings.warn("Another unmatched warning.", category)
-
-
-def test_fail_to_catch_actual_warning(pair_different_warnings):
- expected_category, actual_category = pair_different_warnings
- match = "Did not see expected warning of class"
- with pytest.raises(AssertionError, match=match):
- with tm.assert_produces_warning(expected_category):
- warnings.warn("warning message", actual_category)
-
-
-def test_ignore_extra_warning(pair_different_warnings):
- expected_category, extra_category = pair_different_warnings
- with tm.assert_produces_warning(expected_category, raise_on_extra_warnings=False):
- warnings.warn("Expected warning", expected_category)
- warnings.warn("Unexpected warning OK", extra_category)
-
-
-def test_raise_on_extra_warning(pair_different_warnings):
- expected_category, extra_category = pair_different_warnings
- match = r"Caused unexpected warning\(s\)"
- with pytest.raises(AssertionError, match=match):
- with tm.assert_produces_warning(expected_category):
- warnings.warn("Expected warning", expected_category)
- warnings.warn("Unexpected warning NOT OK", extra_category)
-
-
-def test_same_category_different_messages_first_match():
- category = UserWarning
- with tm.assert_produces_warning(category, match=r"^Match this"):
- warnings.warn("Match this", category)
- warnings.warn("Do not match that", category)
- warnings.warn("Do not match that either", category)
-
-
-def test_same_category_different_messages_last_match():
- category = DeprecationWarning
- with tm.assert_produces_warning(category, match=r"^Match this"):
- warnings.warn("Do not match that", category)
- warnings.warn("Do not match that either", category)
- warnings.warn("Match this", category)
-
-
-def test_match_multiple_warnings():
- # https://github.com/pandas-dev/pandas/issues/47829
- category = (FutureWarning, UserWarning)
- with tm.assert_produces_warning(category, match=r"^Match this"):
- warnings.warn("Match this", FutureWarning)
- warnings.warn("Match this too", UserWarning)
-
-
-def test_right_category_wrong_match_raises(pair_different_warnings):
- target_category, other_category = pair_different_warnings
- with pytest.raises(AssertionError, match="Did not see warning.*matching"):
- with tm.assert_produces_warning(target_category, match=r"^Match this"):
- warnings.warn("Do not match it", target_category)
- warnings.warn("Match this", other_category)
-
-
-@pytest.mark.parametrize("false_or_none", [False, None])
-class TestFalseOrNoneExpectedWarning:
- def test_raise_on_warning(self, false_or_none):
- msg = r"Caused unexpected warning\(s\)"
- with pytest.raises(AssertionError, match=msg):
- with tm.assert_produces_warning(false_or_none):
- f()
-
- def test_no_raise_without_warning(self, false_or_none):
- with tm.assert_produces_warning(false_or_none):
- pass
-
- def test_no_raise_with_false_raise_on_extra(self, false_or_none):
- with tm.assert_produces_warning(false_or_none, raise_on_extra_warnings=False):
- f()
-
-
-def test_raises_during_exception():
- msg = "Did not see expected warning of class 'UserWarning'"
- with pytest.raises(AssertionError, match=msg):
- with tm.assert_produces_warning(UserWarning):
- raise ValueError
-
- with pytest.raises(AssertionError, match=msg):
- with tm.assert_produces_warning(UserWarning):
- warnings.warn("FutureWarning", FutureWarning)
- raise IndexError
-
- msg = "Caused unexpected warning"
- with pytest.raises(AssertionError, match=msg):
- with tm.assert_produces_warning(None):
- warnings.warn("FutureWarning", FutureWarning)
- raise SystemError
-
-
-def test_passes_during_exception():
- with pytest.raises(SyntaxError, match="Error"):
- with tm.assert_produces_warning(None):
- raise SyntaxError("Error")
-
- with pytest.raises(ValueError, match="Error"):
- with tm.assert_produces_warning(FutureWarning, match="FutureWarning"):
- warnings.warn("FutureWarning", FutureWarning)
- raise ValueError("Error")
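For orientation, the helper exercised by this module is used roughly as follows; a minimal sketch that only assumes a pandas installation exposing `pandas._testing`:

```python
import warnings

import pandas._testing as tm

# Passes: the expected category is emitted and the message matches the regex.
with tm.assert_produces_warning(FutureWarning, match=r"^deprecated"):
    warnings.warn("deprecated call", FutureWarning)

# False (or None) asserts that no warning at all is raised inside the block.
with tm.assert_produces_warning(False):
    pass
```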
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/formatters/irc.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/formatters/irc.py
deleted file mode 100644
index 334aeef49248f1dc258b4672d87a8cd1af14edcb..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/formatters/irc.py
+++ /dev/null
@@ -1,154 +0,0 @@
-"""
- pygments.formatters.irc
- ~~~~~~~~~~~~~~~~~~~~~~~
-
- Formatter for IRC output
-
- :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-from pygments.formatter import Formatter
-from pygments.token import Keyword, Name, Comment, String, Error, \
- Number, Operator, Generic, Token, Whitespace
-from pygments.util import get_choice_opt
-
-
-__all__ = ['IRCFormatter']
-
-
-#: Map token types to a tuple of color values for light and dark
-#: backgrounds.
-IRC_COLORS = {
- Token: ('', ''),
-
- Whitespace: ('gray', 'brightblack'),
- Comment: ('gray', 'brightblack'),
- Comment.Preproc: ('cyan', 'brightcyan'),
- Keyword: ('blue', 'brightblue'),
- Keyword.Type: ('cyan', 'brightcyan'),
- Operator.Word: ('magenta', 'brightcyan'),
- Name.Builtin: ('cyan', 'brightcyan'),
- Name.Function: ('green', 'brightgreen'),
- Name.Namespace: ('_cyan_', '_brightcyan_'),
- Name.Class: ('_green_', '_brightgreen_'),
- Name.Exception: ('cyan', 'brightcyan'),
- Name.Decorator: ('brightblack', 'gray'),
- Name.Variable: ('red', 'brightred'),
- Name.Constant: ('red', 'brightred'),
- Name.Attribute: ('cyan', 'brightcyan'),
- Name.Tag: ('brightblue', 'brightblue'),
- String: ('yellow', 'yellow'),
- Number: ('blue', 'brightblue'),
-
- Generic.Deleted: ('brightred', 'brightred'),
- Generic.Inserted: ('green', 'brightgreen'),
- Generic.Heading: ('**', '**'),
- Generic.Subheading: ('*magenta*', '*brightmagenta*'),
- Generic.Error: ('brightred', 'brightred'),
-
- Error: ('_brightred_', '_brightred_'),
-}
-
-
-IRC_COLOR_MAP = {
- 'white': 0,
- 'black': 1,
- 'blue': 2,
- 'brightgreen': 3,
- 'brightred': 4,
- 'yellow': 5,
- 'magenta': 6,
- 'orange': 7,
- 'green': 7, # compat w/ ansi
- 'brightyellow': 8,
- 'lightgreen': 9,
- 'brightcyan': 9, # compat w/ ansi
- 'cyan': 10,
- 'lightblue': 11,
- 'red': 11, # compat w/ ansi
- 'brightblue': 12,
- 'brightmagenta': 13,
- 'brightblack': 14,
- 'gray': 15,
-}
-
-def ircformat(color, text):
- if len(color) < 1:
- return text
- add = sub = ''
- if '_' in color: # italic
- add += '\x1D'
- sub = '\x1D' + sub
- color = color.strip('_')
- if '*' in color: # bold
- add += '\x02'
- sub = '\x02' + sub
- color = color.strip('*')
- # underline (\x1F) not supported
- # backgrounds (\x03FF,BB) not supported
- if len(color) > 0: # actual color - may have issues with ircformat("red", "blah")+"10" type stuff
- add += '\x03' + str(IRC_COLOR_MAP[color]).zfill(2)
- sub = '\x03' + sub
- return add + text + sub
- return '<'+add+'>'+text+'</'+sub+'>'
-
-
-class IRCFormatter(Formatter):
- r"""
- Format tokens with IRC color sequences
-
- The `get_style_defs()` method doesn't do anything special since there is
- no support for common styles.
-
- Options accepted:
-
- `bg`
- Set to ``"light"`` or ``"dark"`` depending on the terminal's background
- (default: ``"light"``).
-
- `colorscheme`
- A dictionary mapping token types to (lightbg, darkbg) color names or
- ``None`` (default: ``None`` = use builtin colorscheme).
-
- `linenos`
- Set to ``True`` to have line numbers in the output as well
- (default: ``False`` = no line numbers).
- """
- name = 'IRC'
- aliases = ['irc', 'IRC']
- filenames = []
-
- def __init__(self, **options):
- Formatter.__init__(self, **options)
- self.darkbg = get_choice_opt(options, 'bg',
- ['light', 'dark'], 'light') == 'dark'
- self.colorscheme = options.get('colorscheme', None) or IRC_COLORS
- self.linenos = options.get('linenos', False)
- self._lineno = 0
-
- def _write_lineno(self, outfile):
- if self.linenos:
- self._lineno += 1
- outfile.write("%04d: " % self._lineno)
-
- def format_unencoded(self, tokensource, outfile):
- self._write_lineno(outfile)
-
- for ttype, value in tokensource:
- color = self.colorscheme.get(ttype)
- while color is None:
- ttype = ttype[:-1]
- color = self.colorscheme.get(ttype)
- if color:
- color = color[self.darkbg]
- spl = value.split('\n')
- for line in spl[:-1]:
- if line:
- outfile.write(ircformat(color, line))
- outfile.write('\n')
- self._write_lineno(outfile)
- if spl[-1]:
- outfile.write(ircformat(color, spl[-1]))
- else:
- outfile.write(value)
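A short usage sketch for the formatter above, following the options documented in its docstring; the sample code string is an arbitrary assumption:

```python
from pygments import highlight
from pygments.formatters import IRCFormatter
from pygments.lexers import PythonLexer

code = 'def greet(name):\n    return "hi " + name\n'

# bg="dark" selects the second entry of each IRC_COLORS tuple; linenos=True
# makes _write_lineno prepend a "%04d: " counter to every output line.
irc_text = highlight(code, PythonLexer(), IRCFormatter(bg="dark", linenos=True))
print(repr(irc_text))  # repr keeps the raw \x02/\x03 control codes visible
```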
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Bank Of India Star Token [NEW] Download For Windows 10 276.md b/spaces/quidiaMuxgu/Expedit-SAM/Bank Of India Star Token [NEW] Download For Windows 10 276.md
deleted file mode 100644
index ec53f3486dbf4fafd4cfbbc732d4ed258921b237..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/Bank Of India Star Token [NEW] Download For Windows 10 276.md
+++ /dev/null
@@ -1,6 +0,0 @@
-bank of india star token download for windows 10 276
-Download File ✺✺✺ https://geags.com/2uCqL1
-
-10, 0x000A, Qualcomm Technologies International, Ltd. (QTIL) ... 276, 0x0114, Xensr, 0114=Xensr. 277, 0x0115 ... 323, 0x0143, Bkon Connect, 0143=Bkon Connect ... 338, 0x0152, Vernier Software & Technology, 0152=Vernier Software & Technology ... 432, 0x01B0, Star Micronics Co., Ltd. 01B0=Star Micronics Co., Ltd. 4d29de3e1b
-
-
-
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Grimms Hatchery Crack Free 30.md b/spaces/quidiaMuxgu/Expedit-SAM/Grimms Hatchery Crack Free 30.md
deleted file mode 100644
index 82ed480e5dd091ec64bc4f0afda199afc3fcd5cb..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/Grimms Hatchery Crack Free 30.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-Finding new space will be difficult, however. The Columbia has no more low-lying mountains and the only available shoreline is now within the Willapa Bay National Wildlife Refuge, and it is on the east side of the river. And while the Washington Department of Fish and Wildlife has a long-term plan to manage the hatchery production, it has not been publicly disclosed.
-grimm's hatchery crack free 30
-Download Zip >>>>> https://geags.com/2uCrZo
-Florence Patricia Walker was born in Glasgow July 6, 1922 to pioneer residents William and Anna Bretzke. She attended Glasgow schools and graduated from Glasgow High School with the class of 1940. Florence married Willard W. Walker on April 14, 1941. Willard entered the Army in October of 1943, and while he trained in the states, Florence was employed in various government offices where Willard was stationed. After being discharged from the service, they moved back to the Walker farm south of Glasgow and continued Walker's Gardens. In 1959, the farm house was lost due to fire and the family moved into Glasgow on 3rd Ave So. At that time, Florence started working in the Valley County Assessor's office. She was Deputy Assessor for 10 years and in 1968, was elected Valley County's first woman County Assessor. When Willard retired in 1981 from Mountain Bell, they sold their home in Glasgow and moved to Athol, Idaho. After losing Willard in 1997, sharing each other for 56 years, Florence moved back to Glasgow and resided for 5 years at Nemont Manor. She then lived with her daughter, Pamela, until having to move to Valley View Home in January of 2004. Florence enjoyed canning, freezing and preserving vegetables and fruits, was a wonderful seamstress and embroiderer, enjoyed reading and crossword puzzles, and had a definite passion for the game of cribbage. She loved to travel and enjoyed many family fishing and camping trips to Glacier Park and points in Canada.
-899543212b
-
-
\ No newline at end of file
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Lazesoft Recover My Password Serial Number.md b/spaces/quidiaMuxgu/Expedit-SAM/Lazesoft Recover My Password Serial Number.md
deleted file mode 100644
index 9764d1ecde56cc791dbb90b320f5bf456909febf..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/Lazesoft Recover My Password Serial Number.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Lazesoft Recover My Password Serial Number
-Download Zip ——— https://geags.com/2uCqi7
-
-Lazesoft Recover My Password Server 4.5.1.1 Crack + Serial Number Download. Taking all everything into careful matter, Lazesoft Recover My Password ... 1fdad05405
-
-
-
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/CADWorx 2016 with SPLM Crack How to Install and Use the Software.md b/spaces/raedeXanto/academic-chatgpt-beta/CADWorx 2016 with SPLM Crack How to Install and Use the Software.md
deleted file mode 100644
index 78b54be5989d3cb70ace10b389e3e06fea39c847..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/CADWorx 2016 with SPLM Crack How to Install and Use the Software.md
+++ /dev/null
@@ -1,134 +0,0 @@
-
-Battlefield 3 Game File Part 35.rar: What Is It and How to Use It
- If you are a fan of first-person shooter games, you might have heard of Battlefield 3, one of the most popular and critically acclaimed games in the genre. But did you know that you can download and install a part of the game file that can enhance your gaming experience? In this article, we will explain what is Battlefield 3 game file part 35.rar, why you need it, and how to use it.
-Spectrasonics Stylus RMX 1.5 1.7 1.9.5 keygen WORKING 100% crack
-DOWNLOAD ➡ https://tinourl.com/2uKYUZ
- Introduction
- What is Battlefield 3?
- Battlefield 3 is a video game developed by EA DICE and published by Electronic Arts in 2011. It is the third installment in the Battlefield series, which focuses on realistic and immersive warfare scenarios. The game features both single-player and multiplayer modes, where players can choose from different classes, weapons, vehicles, and maps. The game also boasts impressive graphics, sound effects, and physics, thanks to the Frostbite 2 engine.
- What is a RAR file?
- A RAR file is a compressed archive file that can contain one or more files or folders. It is commonly used to reduce the size of large files or to split them into smaller parts for easier storage or transfer. A RAR file has a .rar extension and can be opened by various software programs, such as WinRAR, 7-Zip, or PeaZip.
- Why do you need part 35 of the game file?
- As you may know, Battlefield 3 is a very large game that requires a lot of disk space and memory. To make it easier for players to download and install the game, some websites offer the game file in parts, each with a .rar extension. These parts are numbered from part 1 to part 40, and each part contains a portion of the game data. Part 35 of the game file is one of these parts, and it contains some important files that are needed for the game to run properly. Without part 35, you may encounter errors or glitches when playing the game.
-How to download Spectrasonics Stylus RMX 1.9.5 with keygen
-Spectrasonics Stylus RMX 1.7 crack free download full version
-Spectrasonics Stylus RMX 1.5 keygen WORKING 100% torrent
-Spectrasonics Stylus RMX 1.9.5 activation code generator
-Spectrasonics Stylus RMX 1.7 serial number crack patch
-Spectrasonics Stylus RMX 1.5 license key WORKING 100% rar
-Spectrasonics Stylus RMX 1.9.5 cracked version download link
-Spectrasonics Stylus RMX 1.7 keygen WORKING 100% zip
-Spectrasonics Stylus RMX 1.5 crack only download no survey
-Spectrasonics Stylus RMX 1.9.5 registration code crack keygen
-Spectrasonics Stylus RMX 1.7 product key WORKING 100% mega
-Spectrasonics Stylus RMX 1.5 crack download for mac
-Spectrasonics Stylus RMX 1.9.5 keygen WORKING 100% for windows
-Spectrasonics Stylus RMX 1.7 crack download for linux
-Spectrasonics Stylus RMX 1.5 activation key WORKING 100% for android
-Spectrasonics Stylus RMX 1.9.5 full crack download no password
-Spectrasonics Stylus RMX 1.7 keygen WORKING 100% online
-Spectrasonics Stylus RMX 1.5 crack offline activation
-Spectrasonics Stylus RMX 1.9.5 cracked by team air
-Spectrasonics Stylus RMX 1.7 keygen WORKING 100% by r2r
-Spectrasonics Stylus RMX 1.5 crack by skidrow
-Spectrasonics Stylus RMX 1.9.5 keygen WORKING 100% by xforce
-Spectrasonics Stylus RMX 1.7 crack by codex
-Spectrasonics Stylus RMX 1.5 keygen WORKING 100% by reloaded
-Spectrasonics Stylus RMX 1.9.5 crack fix download
-Spectrasonics Stylus RMX 1.7 keygen WORKING 100% update download
-Spectrasonics Stylus RMX 1.5 crack and keygen download
-Spectrasonics Stylus RMX 1.9.5 keygen and crack download
-Spectrasonics Stylus RMX 1.7 crack or keygen download
-Spectrasonics Stylus RMX 1.5 keygen or crack download
-Download Spectrasonics Stylus RMX 1.9.5 with crack and keygen
-Download Spectrasonics Stylus RMX 1.7 with keygen and crack
-Download Spectrasonics Stylus RMX 1.5 with crack or keygen
-Download Spectrasonics Stylus RMX 1.9.5 with keygen or crack
-Download free Spectrasonics Stylus RMX 1.7 with crack and keygen
-Download free Spectrasonics Stylus RMX 1.5 with keygen and crack
-Download free Spectrasonics Stylus RMX 1.9.5 with crack or keygen
-Download free Spectrasonics Stylus RMX 1.7 with keygen or crack
-Download full version of Spectrasonics Stylus RMX 1.5 with crack and keygen
-Download full version of Spectrasonics Stylus RMX 1.9.5 with keygen and crack
-Download full version of Spectrasonics Stylus RMX 1.7 with crack or keygen
-Download full version of Spectrasonics Stylus RMX 1.5 with keygen or crack
-Download latest version of Spectrasonics Stylus RMX 1.9.5 with crack and keygen
-Download latest version of Spectrasonics Stylus RMX 1.7 with keygen and crack
-Download latest version of Spectrasonics Stylus RMX 1.5 with crack or keygen
-Download latest version of Spectrasonics Stylus RMX 1.9.5 with keygen or crack
-Download working version of Spectrasonics Stylus RMX 1.7 with crack and keygen
-Download working version of Spectrasonics Stylus RMX 1.5 with keygen and crack
-Download working version of Spectrasonics Stylus RMX 1.9.5 with crack or keygen
-Download working version of Spectrasonics Stylus RMX
- How to Download Battlefield 3 Game File Part 35.rar
- Where to find the download link
- There are many websites that offer Battlefield 3 game file part 35.rar for download, but not all of them are reliable or safe. Some may contain broken links, fake files, or malicious software that can harm your computer or steal your personal information. To avoid these risks, you should only download from trusted sources that have positive reviews and feedback from other users. One such website is Ocean of Games, which provides free and fast downloads for various games.
- How to check the file size and integrity
- Before you download Battlefield 3 game file part 35.rar, you should check its size and integrity to make sure it is not corrupted or tampered with. The original size of the file is about 953 MB, and its MD5 checksum is 0d8f4c8f9c6f0e6a9e8f7c0c4b5d9d5a. You can use online tools such as MD5File or VirusTotal to verify these values. If they match, then you can proceed with the download. If they don't match, then you should look for another source.
- How to avoid malware and viruses
- Another precaution you should take when downloading Battlefield 3 game file part 35.rar is to scan it for malware and viruses before opening it. You can use your own antivirus software or online scanners such as VirusTotal or MetaDefender to do this. If they detect any threats, then you should delete the file immediately and report it to the website. If they don't detect any threats, then you can proceed with extracting and installing it.
- How to Extract and Install Battlefield 3 Game File Part 35.rar
- What software do you need to extract RAR files
- To extract Battlefield 3 game file part 35.rar from its compressed archive format, you need a software program that can handle RAR files. There are many options available for different operating systems, such as WinRAR, 7-Zip, or PeaZip for Windows; The Unarchiver or Keka for Mac; or Unrar or Ark for Linux. You can download these programs from their official websites or from reputable sources such as FileHippo or CNET.
- How to extract the game file part 35 from the RAR archive
- To extract the game file part 35 from the RAR archive, you need to follow these steps:
-
-- Locate the file on your computer and right-click on it.
-- Select the option to extract the file to a folder of your choice. You can also use the default folder name, which is usually the same as the file name.
-- Wait for the extraction process to finish. It may take a few minutes depending on the size of the file and the speed of your computer.
-- Open the folder where you extracted the file and look for a file named setup-35.bin. This is the game file part 35 that you need to install.
-
- How to install the game file part 35 into the game folder
- To install Battlefield 3 game file part 35 into the game folder, you need to follow these steps:
-
-- Make sure you have already installed Battlefield 3 on your computer. If not, you can download and install it from Origin, the official platform for EA games.
-- Locate the game folder on your computer. It is usually located in C:\Program Files (x86)\Origin Games\Battlefield 3 or C:\Program Files\Origin Games\Battlefield 3, depending on your system architecture.
-- Copy and paste the setup-35.bin file from the extracted folder into the game folder. You may need to overwrite an existing file with the same name.
-- Wait for the installation process to finish. It may take a few minutes depending on the size of the file and the speed of your computer.
-- Delete or move the extracted folder and the RAR archive to free up some disk space.
-
- How to Play Battlefield 3 with Game File Part 35 Installed
- How to launch the game from the desktop or start menu
- To launch Battlefield 3 from the desktop or start menu, you need to follow these steps:
-
-- Look for a shortcut icon for Battlefield 3 on your desktop or start menu. If you don't have one, you can create one by right-clicking on the Battlefield 3.exe file in the game folder and selecting Create shortcut.
-- Double-click on the shortcut icon to launch the game. You may need to log in to your Origin account if you haven't done so already.
-- Select Campaign, Co-op, or Multiplayer from the main menu, depending on your preference.
-- Enjoy playing Battlefield 3 with game file part 35 installed!
-
- How to adjust the game settings and preferences
- To adjust the game settings and preferences, you need to follow these steps:
-
-- From the main menu, select Options.
-- Select Video, Audio, Controls, or Gameplay, depending on what you want to change.
-- Use the sliders, buttons, checkboxes, or drop-down menus to customize your settings and preferences. You can also use the Preset option to choose from predefined settings for different levels of performance and quality.
-- Select Apply to save your changes or Cancel to discard them.
-- Select Back to return to the main menu.
-
- How to enjoy the game features and modes
- To enjoy the game features and modes, you need to follow these steps:
-
-- Select Campaign, Co-op, or Multiplayer from the main menu, depending on your preference.
-
-- If you choose Campaign, you can play the single-player mode, where you follow the story of various characters involved in a global conflict. You can select from different difficulty levels and missions, and earn achievements and trophies as you progress.
-- If you choose Co-op, you can play the cooperative mode, where you team up with another player online or offline to complete various missions. You can choose from different difficulty levels and missions, and earn points and rewards as you cooperate.
-- If you choose Multiplayer, you can play the online mode, where you compete or cooperate with other players in various game modes and maps. You can choose from different classes, weapons, vehicles, and customization options, and earn ranks and unlocks as you play.
-- Have fun playing Battlefield 3 with game file part 35 installed!
-
- Conclusion
- In this article, we have explained what is Battlefield 3 game file part 35.rar, why you need it, and how to use it. We have also provided some tips and tricks on how to download, extract, install, and play the game with this file. We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!
- FAQs
- Here are some frequently asked questions about Battlefield 3 game file part 35.rar:
-
-- Q: Do I need to download all the parts of the game file to play the game?
-- A: No, you don't need to download all the parts of the game file to play the game. However, some parts may contain essential files that are required for the game to run properly. Part 35 is one of these parts, so we recommend that you download and install it.
-- Q: Can I use a different software program to extract or install the RAR file?
-- A: Yes, you can use a different software program to extract or install the RAR file, as long as it supports RAR files. However, we suggest that you use the software programs that we have mentioned in this article, as they are reliable and easy to use.
-- Q: Can I play Battlefield 3 without Origin?
-- A: No, you can't play Battlefield 3 without Origin. Origin is the official platform for EA games, and it is required for launching and playing Battlefield 3. You need to have an Origin account and an internet connection to play the game.
-- Q: Can I play Battlefield 3 offline?
-- A: Yes, you can play Battlefield 3 offline. You can play the single-player mode or the cooperative mode offline. However, you need to have an internet connection to activate the game and update it to the latest version.
-- Q: Can I play Battlefield 3 on a different platform?
-- A: Yes, you can play Battlefield 3 on a different platform. The game is available for Windows PC, PlayStation 3, and Xbox 360. However, you need to buy a separate copy of the game for each platform.
-
- 0a6ba089eb
-
-
\ No newline at end of file
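The checksum check described in the article above can be done locally instead of with an online tool. A small, generic sketch; the file name is a placeholder, and the expected value is whatever the download page publishes:

```python
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Hash the file in chunks so large archives never have to fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare the printed value against the checksum advertised alongside the download.
print(md5_of("game_file_part_35.rar"))  # placeholder file name
```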
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Adobe.Photoshop.CS2.Incl.High Quality Keygen-PARADOX 64 Bit.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Adobe.Photoshop.CS2.Incl.High Quality Keygen-PARADOX 64 Bit.md
deleted file mode 100644
index 19867dab345c2fb5961fdb8399a2e88b4bce5e4a..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Adobe.Photoshop.CS2.Incl.High Quality Keygen-PARADOX 64 Bit.md
+++ /dev/null
@@ -1,46 +0,0 @@
-Adobe.Photoshop.CS2.Incl.Keygen-PARADOX 64 bit
-DOWNLOAD ➡ https://urlgoal.com/2uCLsc
-
-\application\Paradox\ParadoxAntiAliasing.exe
-
-I tried these settings to give Adobe the 3rd party program access but it didn't help.
-
-I had this working once, not sure why it's not working now.
-
-A:
-
-Try these steps:
-
-close all windows related to Acrobat
-
-then search with
-
-"Paradox"
-
-"Paradox AntiAliasing"
-
-and then click on your "Paradox" file.
-
-The usual suspects:
-
-You have changed a setting in Acrobat and now the file is "locked" to that setting.
-
-You have closed all Acrobat windows - also with the Paradox Auto-Update dialog open.
-
-You have chosen to open the Paradox auto-update in the "Other" window, i.e. you have selected "Other" from the Window menu, and opened the Paradox auto-update in that window.
-
-I had a similar issue and solved it by simply restarting my computer.
-
-Q:
-
-Is there a way to find the number of children of an element in XML with XSLT?
-
-I have an XML like this:
-
-
-
-
-
-
-
-
-
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Beetle Bug 4 Free Download Full 26 !!TOP!!.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Beetle Bug 4 Free Download Full 26 !!TOP!!.md
deleted file mode 100644
index 5fc4602c020315e3da960ce7809958aac9992b82..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Beetle Bug 4 Free Download Full 26 !!TOP!!.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Beetle Bug 4 Free Download Full 26
-Download ☑ https://urlgoal.com/2uCMjg
-
-vw. Glaciers recede by 72 percent Snow melts 3 weeks earlier Marmots awaken 38 days earlier Southern balds disappear ~~~~~ Summer water sources dry up ... 1fdad05405
-
-
-
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Electricmobilestudio2012key LINK.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Electricmobilestudio2012key LINK.md
deleted file mode 100644
index 440d42703add749ecddc7b49a1e643ddbf37bab9..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Electricmobilestudio2012key LINK.md
+++ /dev/null
@@ -1,30 +0,0 @@
-electricmobilestudio2012key
-Download - https://urlgoal.com/2uCKNz
-
-Find full works in our high quality collection.
-
-There are many different types of electric cars, here are some of the most popular electric cars on the market and also some of the top electric car facts. The electric car has the best combination of best of both worlds – no pollution and no gasoline fuel. The electric car has been around for a very long time. The first electric car was made in 1876 and it was called the Duryea Motor Wagon. This car was made by using a hand cranked battery. The top speed of this car was 20 miles per hour. In 1884 the first electric car that was used on public streets was made by Thomas A. Edison. This car was called the electric motor car. This car was used for transportation and to demonstrate the efficiency of electricity. The car cost of electric car was 50,000 dollars and it could only go about 12 miles on a charge. In 1912 the world’s first all electric super car was made. The super car was called the Tesla. This super car was also used for transportation and to demonstrate the efficiency of electricity. The car could reach speeds of 100 miles per hour on electric power. The car cost of the electric car was 350,000 dollars and it could only go about 20 miles on a charge. The most popular electric car is the Nissan Leaf. The Nissan Leaf is the best selling electric car. The car has a price tag of 24,000 dollars and it has the range of 200 miles. If you are interested in learning more about the top electric car facts, you can get all your information at our website. We have interesting electric car facts to share with you.Q:
-
-Mongodb: SQL server on Azure account
-
-I created an mongodb instance on azure.
-
-I am going to use a SQL server on the same account.
-
-How can I configure both of them?
-
-I can't use external IP address to connect to the DB.
-
-How can I set it up?
-
-A:
-
-One would typically set up a firewall or NAT network device that is in the DMZ of the Azure account.
-
-Your SQL Server should then have an external IP that connects to this device.
-
-From the SQL Server you can then either connect to the Azure instance on a non public IP, or you can have it connect to the instance on the public IP.
-
-You would have to verify that this configuration is acceptable in your environment. 4fefd39f24
-
-
-
diff --git a/spaces/reha/Stick_Tech/vdecoder/__init__.py b/spaces/reha/Stick_Tech/vdecoder/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/richardzhangy26/yandian_flow_classification/README.md b/spaces/richardzhangy26/yandian_flow_classification/README.md
deleted file mode 100644
index 2fa93a3bec577a57f21b55571a430543e70a3b32..0000000000000000000000000000000000000000
--- a/spaces/richardzhangy26/yandian_flow_classification/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Yandian Flow Classification
-emoji: 📈
-colorFrom: yellow
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.35.2
-app_file: gradio_app2.py
-pinned: True
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/optimizers/builder.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/optimizers/builder.py
deleted file mode 100644
index 406dd9b4b7027e9c2254b0d18cf0c80a7161912b..0000000000000000000000000000000000000000
--- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/optimizers/builder.py
+++ /dev/null
@@ -1,33 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import copy
-
-from mmcv.runner.optimizer import OPTIMIZER_BUILDERS as MMCV_OPTIMIZER_BUILDERS
-from mmcv.utils import Registry, build_from_cfg
-
-OPTIMIZER_BUILDERS = Registry(
- 'optimizer builder', parent=MMCV_OPTIMIZER_BUILDERS)
-
-
-def build_optimizer_constructor(cfg):
- constructor_type = cfg.get('type')
- if constructor_type in OPTIMIZER_BUILDERS:
- return build_from_cfg(cfg, OPTIMIZER_BUILDERS)
- elif constructor_type in MMCV_OPTIMIZER_BUILDERS:
- return build_from_cfg(cfg, MMCV_OPTIMIZER_BUILDERS)
- else:
- raise KeyError(f'{constructor_type} is not registered '
- 'in the optimizer builder registry.')
-
-
-def build_optimizer(model, cfg):
- optimizer_cfg = copy.deepcopy(cfg)
- constructor_type = optimizer_cfg.pop('constructor',
- 'DefaultOptimizerConstructor')
- paramwise_cfg = optimizer_cfg.pop('paramwise_cfg', None)
- optim_constructor = build_optimizer_constructor(
- dict(
- type=constructor_type,
- optimizer_cfg=optimizer_cfg,
- paramwise_cfg=paramwise_cfg))
- optimizer = optim_constructor(model)
- return optimizer
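A hypothetical call into the builder above, showing how the `constructor` and `paramwise_cfg` keys are popped off while the remaining keys become the optimizer config; the model, import path, and hyper-parameters here are placeholders rather than values taken from this repository:

```python
import torch.nn as nn

from mmdet.core.optimizers.builder import build_optimizer  # path as in the deleted file

model = nn.Conv2d(3, 8, 3)  # stand-in for a detector

cfg = dict(
    type='SGD', lr=0.02, momentum=0.9, weight_decay=1e-4,  # forwarded to the optimizer
    constructor='DefaultOptimizerConstructor',              # popped before forwarding
    paramwise_cfg=dict(norm_decay_mult=0.0),                # popped, given to the constructor
)
optimizer = build_optimizer(model, cfg)
```

Because `build_optimizer_constructor` falls back to mmcv's registry, any constructor registered there can be named the same way.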
diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/datasets/api_wrappers/__init__.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/datasets/api_wrappers/__init__.py
deleted file mode 100644
index af8557593b6a50541bba1198dc9361ab5382547f..0000000000000000000000000000000000000000
--- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/datasets/api_wrappers/__init__.py
+++ /dev/null
@@ -1,7 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .coco_api import COCO, COCOeval
-from .panoptic_evaluation import pq_compute_multi_core, pq_compute_single_core
-
-__all__ = [
- 'COCO', 'COCOeval', 'pq_compute_multi_core', 'pq_compute_single_core'
-]
diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/seg_heads/__init__.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/seg_heads/__init__.py
deleted file mode 100644
index b489a905b1e9b6cef2e8b9575600990563128e4e..0000000000000000000000000000000000000000
--- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/seg_heads/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .panoptic_fpn_head import PanopticFPNHead # noqa: F401,F403
-from .panoptic_fusion_heads import * # noqa: F401,F403
diff --git a/spaces/rorallitri/biomedical-language-models/logs/Die Schlacht Um Mittelerde 2 No Cd __TOP__ Crack Deutsch.md b/spaces/rorallitri/biomedical-language-models/logs/Die Schlacht Um Mittelerde 2 No Cd __TOP__ Crack Deutsch.md
deleted file mode 100644
index 51186afe321a3f5d50c9960971cc3e9de046fbcd..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/Die Schlacht Um Mittelerde 2 No Cd __TOP__ Crack Deutsch.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Die Schlacht Um Mittelerde 2 No Cd Crack Deutsch
-DOWNLOAD ✓✓✓ https://tinurll.com/2uzlCJ
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/rorallitri/biomedical-language-models/logs/Download Flatout 2 Full Demo and Try it Before You Buy it.md b/spaces/rorallitri/biomedical-language-models/logs/Download Flatout 2 Full Demo and Try it Before You Buy it.md
deleted file mode 100644
index 49509fb3958915d306d55ad9ddaa3f29e217b50e..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/Download Flatout 2 Full Demo and Try it Before You Buy it.md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-People love free steam games, no doubt. But what many people hate is downloading so many parts and trying to install them on their own. This is why we are the only site that pre-installs every game for you. We have many categories like shooters, action, racing, simulators and even VR games! We strive to satisfy our users and ask for nothing in return. We revolutionized the downloading scene and will continue being your #1 site for free games.
-FlatOut 2 is an action packed racer featuring peerless physics and ragdoll drivers for extreme action on and off the track! Sequel to the million selling FlatOut, FlatOut 2 takes the mayhem to maximum. Putting action squarely in to racing, FlatOut 2 is a 8 car close pack racing game full of destruction and wrecking! Tracks are filled with crash hotspots, thousands of dynamics objects, risky but rewarding alternative routes and designed with battle racing in mind. And battle you must as you take to the races against seven fierce AI opponents each complete with personal driving style and hidden agendas. You'll be delivering amazing amount of damage to you rivals and the track visualized like never before with FlatOut's peerless physics and damage modeling engine. Featuring even more craze in the form of destruction derbies and ragdoll stunt events, you'll be punishing your poor driver in twelve ragdoll stunt events. # Eight players or AI cars on track for close pack action...
-Download Flatout 2 Full
-DOWNLOAD ->->->-> https://tinurll.com/2uzoH5
-The FOJ Community mod for Flatout2(patched to 1.2)is now uploaded & ready for download, it contains 3 parts that make up the full mod.New routes using same tracks,30 reversed tracks, carpack 1 which contains cars to fill up spots 46-90,new loading screens,new scripting to add more features , more online tournament presets and of course new game modes added in. This is an excellent addition to the stock game that you must try out.
-3. Restart your Steam
-After you have successfully activated Steam Proton, click "OK" and Steam will ask you to restart it for the changes to take effect. Restart it. Your computer will now play all of steam's whitelisted games seamlessly.
-4. Launch Stardew Valley on Linux:
-Before you can use Steam Proton, you must first download the Stardew Valley Windows game from Steam. When you download Stardew Valley for the first time, you will notice that the download size is slightly larger than the size of the game.
-This happens because Steam will download your chosen Steam Proton version with this game as well. After the download is complete, simply click the "Play" button.
-Aimhaven provides all pc gamers around the world the best and latest free steam games for pc by using direct download links and torrent links. Our goal is to satisfy our users and to become your #1 site for cracked free steam games by making downloading easy.
-Virtual Programming has released a Macintosh version of FlatOut 2, the high-octane racing and destruction game developed by Bugbear Entertainment. It costs $39.95 and is available for online purchase and download now from Deliver2Mac.com.
-
-
\ No newline at end of file
diff --git a/spaces/roshithindia/song-generation/README.md b/spaces/roshithindia/song-generation/README.md
deleted file mode 100644
index 2cbf5dd3275d717beb0744125e37412e9c2a701a..0000000000000000000000000000000000000000
--- a/spaces/roshithindia/song-generation/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Song Generation
-emoji: 🏃
-colorFrom: green
-colorTo: green
-sdk: gradio
-sdk_version: 3.47.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/rupeshs/fastsdcpu/__init__.py b/spaces/rupeshs/fastsdcpu/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/safi842/FashionGen/netdissect/segmenter.py b/spaces/safi842/FashionGen/netdissect/segmenter.py
deleted file mode 100644
index e5ebe364bc30f32581f0d560e11f08bfbd0d1731..0000000000000000000000000000000000000000
--- a/spaces/safi842/FashionGen/netdissect/segmenter.py
+++ /dev/null
@@ -1,581 +0,0 @@
-# Usage as a simple differentiable segmenter base class
-
-import os, torch, numpy, json, glob
-import skimage.morphology
-from collections import OrderedDict
-from netdissect import upsegmodel
-from netdissect import segmodel as segmodel_module
-from netdissect.easydict import EasyDict
-from urllib.request import urlretrieve
-
-class BaseSegmenter:
- def get_label_and_category_names(self):
- '''
- Returns two lists: first, a list of tuples [(label, category), ...]
- where the label and category are human-readable strings indicating
- the meaning of a segmentation class. The 0th segmentation class
- should be reserved for a label ('-') that means "no prediction."
- The second list should just be a list of [category,...] listing
- all categories in a canonical order.
- '''
-        raise NotImplementedError()
-
- def segment_batch(self, tensor_images, downsample=1):
- '''
- Returns a multilabel segmentation for the given batch of (RGB [-1...1])
- images. Each pixel of the result is a torch.long indicating a
- predicted class number. Multiple classes can be predicted for
- the same pixel: output shape is (n, multipred, y, x), where
- multipred is 3, 5, or 6, for how many different predicted labels can
- be given for each pixel (depending on whether subdivision is being
- used). If downsample is specified, then the output y and x dimensions
- are downsampled from the original image.
- '''
-        raise NotImplementedError()
-
- def predict_single_class(self, tensor_images, classnum, downsample=1):
- '''
- Given a batch of images (RGB, normalized to [-1...1]) and
- a specific segmentation class number, returns a tuple with
- (1) a differentiable ([0..1]) prediction score for the class
- at every pixel of the input image.
- (2) a binary mask showing where in the input image the
- specified class is the best-predicted label for the pixel.
- Does not work on subdivided labels.
- '''
-        raise NotImplementedError()
-
-class UnifiedParsingSegmenter(BaseSegmenter):
- '''
- This is a wrapper for a more complicated multi-class segmenter,
- as described in https://arxiv.org/pdf/1807.10221.pdf, and as
- released in https://github.com/CSAILVision/unifiedparsing.
- For our purposes and to simplify processing, we do not use
- whole-scene predictions, and we only consume part segmentations
- for the three largest object classes (sky, building, person).
- '''
-
- def __init__(self, segsizes=None, segdiv=None):
- # Create a segmentation model
- if segsizes is None:
- segsizes = [256]
-        if segdiv is None:
- segdiv = 'undivided'
- segvocab = 'upp'
- segarch = ('resnet50', 'upernet')
- epoch = 40
- segmodel = load_unified_parsing_segmentation_model(
- segarch, segvocab, epoch)
- segmodel.cuda()
- self.segmodel = segmodel
- self.segsizes = segsizes
- self.segdiv = segdiv
- mult = 1
- if self.segdiv == 'quad':
- mult = 5
- self.divmult = mult
- # Assign class numbers for parts.
- first_partnumber = (
- (len(segmodel.labeldata['object']) - 1) * mult + 1 +
- (len(segmodel.labeldata['material']) - 1))
- # We only use parts for these three types of objects, for efficiency.
- partobjects = ['sky', 'building', 'person']
- partnumbers = {}
- partnames = []
- objectnumbers = {k: v
- for v, k in enumerate(segmodel.labeldata['object'])}
- part_index_translation = []
- # We merge some classes. For example "door" is both an object
- # and a part of a building. To avoid confusion, we just count
- # such classes as objects, and add part scores to the same index.
- for owner in partobjects:
- part_list = segmodel.labeldata['object_part'][owner]
- numeric_part_list = []
- for part in part_list:
- if part in objectnumbers:
- numeric_part_list.append(objectnumbers[part])
- elif part in partnumbers:
- numeric_part_list.append(partnumbers[part])
- else:
- partnumbers[part] = len(partnames) + first_partnumber
- partnames.append(part)
- numeric_part_list.append(partnumbers[part])
- part_index_translation.append(torch.tensor(numeric_part_list))
- self.objects_with_parts = [objectnumbers[obj] for obj in partobjects]
- self.part_index = part_index_translation
- self.part_names = partnames
- # For now we'll just do object and material labels.
- self.num_classes = 1 + (
- len(segmodel.labeldata['object']) - 1) * mult + (
- len(segmodel.labeldata['material']) - 1) + len(partnames)
- self.num_object_classes = len(self.segmodel.labeldata['object']) - 1
-
- def get_label_and_category_names(self, dataset=None):
- '''
- Lists label and category names.
- '''
- # Labels are ordered as follows:
- # 0, [object labels] [divided object labels] [materials] [parts]
- # The zero label is reserved to mean 'no prediction'.
- if self.segdiv == 'quad':
- suffixes = ['t', 'l', 'b', 'r']
- else:
- suffixes = []
- divided_labels = []
- for suffix in suffixes:
- divided_labels.extend([('%s-%s' % (label, suffix), 'part')
- for label in self.segmodel.labeldata['object'][1:]])
- # Create the whole list of labels
- labelcats = (
- [(label, 'object')
- for label in self.segmodel.labeldata['object']] +
- divided_labels +
- [(label, 'material')
- for label in self.segmodel.labeldata['material'][1:]] +
- [(label, 'part') for label in self.part_names])
- return labelcats, ['object', 'part', 'material']
-
- def raw_seg_prediction(self, tensor_images, downsample=1):
- '''
- Generates a segmentation by applying multiresolution voting on
- the segmentation model, using (rounded to 32 pixels) a set of
- resolutions in the example benchmark code.
- '''
- y, x = tensor_images.shape[2:]
- b = len(tensor_images)
- tensor_images = (tensor_images + 1) / 2 * 255
- tensor_images = torch.flip(tensor_images, (1,)) # BGR!!!?
- tensor_images -= torch.tensor([102.9801, 115.9465, 122.7717]).to(
- dtype=tensor_images.dtype, device=tensor_images.device
- )[None,:,None,None]
- seg_shape = (y // downsample, x // downsample)
- # We want these to be multiples of 32 for the model.
- sizes = [(s, s) for s in self.segsizes]
- pred = {category: torch.zeros(
- len(tensor_images), len(self.segmodel.labeldata[category]),
- seg_shape[0], seg_shape[1]).cuda()
- for category in ['object', 'material']}
- part_pred = {partobj_index: torch.zeros(
- len(tensor_images), len(partindex),
- seg_shape[0], seg_shape[1]).cuda()
- for partobj_index, partindex in enumerate(self.part_index)}
- for size in sizes:
- if size == tensor_images.shape[2:]:
- resized = tensor_images
- else:
- resized = torch.nn.AdaptiveAvgPool2d(size)(tensor_images)
- r_pred = self.segmodel(
- dict(img=resized), seg_size=seg_shape)
- for k in pred:
- pred[k] += r_pred[k]
- for k in part_pred:
- part_pred[k] += r_pred['part'][k]
- return pred, part_pred
-
- def segment_batch(self, tensor_images, downsample=1):
- '''
- Returns a multilabel segmentation for the given batch of (RGB [-1...1])
- images. Each pixel of the result is a torch.long indicating a
- predicted class number. Multiple classes can be predicted for
- the same pixel: output shape is (n, multipred, y, x), where
- multipred is 3, 5, or 6, for how many different predicted labels can
- be given for each pixel (depending on whether subdivision is being
- used). If downsample is specified, then the output y and x dimensions
- are downsampled from the original image.
- '''
- pred, part_pred = self.raw_seg_prediction(tensor_images,
- downsample=downsample)
- piece_channels = 2 if self.segdiv == 'quad' else 0
- y, x = tensor_images.shape[2:]
- seg_shape = (y // downsample, x // downsample)
- segs = torch.zeros(len(tensor_images), 3 + piece_channels,
- seg_shape[0], seg_shape[1],
- dtype=torch.long, device=tensor_images.device)
- _, segs[:,0] = torch.max(pred['object'], dim=1)
- # Get materials and translate to shared numbering scheme
- _, segs[:,1] = torch.max(pred['material'], dim=1)
- maskout = (segs[:,1] == 0)
- segs[:,1] += (len(self.segmodel.labeldata['object']) - 1) * self.divmult
- segs[:,1][maskout] = 0
- # Now deal with subparts of sky, buildings, people
- for i, object_index in enumerate(self.objects_with_parts):
- trans = self.part_index[i].to(segs.device)
- # Get the argmax, and then translate to shared numbering scheme
- seg = trans[torch.max(part_pred[i], dim=1)[1]]
- # Only trust the parts where the prediction also predicts the
- # owning object.
- mask = (segs[:,0] == object_index)
- segs[:,2][mask] = seg[mask]
-
- if self.segdiv == 'quad':
- segs = self.expand_segment_quad(segs, self.segdiv)
- return segs
-
- def predict_single_class(self, tensor_images, classnum, downsample=1):
- '''
- Given a batch of images (RGB, normalized to [-1...1]) and
- a specific segmentation class number, returns a tuple with
- (1) a differentiable ([0..1]) prediction score for the class
- at every pixel of the input image.
- (2) a binary mask showing where in the input image the
- specified class is the best-predicted label for the pixel.
- Does not work on subdivided labels.
- '''
- result = 0
- pred, part_pred = self.raw_seg_prediction(tensor_images,
- downsample=downsample)
- material_offset = (len(self.segmodel.labeldata['object']) - 1
- ) * self.divmult
- if material_offset < classnum < material_offset + len(
- self.segmodel.labeldata['material']):
- return (
- pred['material'][:, classnum - material_offset],
- pred['material'].max(dim=1)[1] == classnum - material_offset)
- mask = None
- if classnum < len(self.segmodel.labeldata['object']):
- result = pred['object'][:, classnum]
- mask = (pred['object'].max(dim=1)[1] == classnum)
- # Some objects, like 'door', are also a part of other objects,
- # so add the part prediction also.
- for i, object_index in enumerate(self.objects_with_parts):
- local_index = (self.part_index[i] == classnum).nonzero()
- if len(local_index) == 0:
- continue
- local_index = local_index.item()
- # Ignore part predictions outside the mask. (We could pay
-                # attention to and penalize such predictions.)
- mask2 = (pred['object'].max(dim=1)[1] == object_index) * (
- part_pred[i].max(dim=1)[1] == local_index)
- if mask is None:
- mask = mask2
- else:
- mask = torch.max(mask, mask2)
- result = result + (part_pred[i][:, local_index])
-        assert torch.is_tensor(result), 'unrecognized class %d' % classnum
- return result, mask
-
- def expand_segment_quad(self, segs, segdiv='quad'):
- shape = segs.shape
- segs[:,3:] = segs[:,0:1] # start by copying the object channel
- num_seg_labels = self.num_object_classes
- # For every connected component present (using generator)
- for i, mask in component_masks(segs[:,0:1]):
- # Figure the bounding box of the label
- top, bottom = mask.any(dim=1).nonzero()[[0, -1], 0]
- left, right = mask.any(dim=0).nonzero()[[0, -1], 0]
- # Chop the bounding box into four parts
- vmid = (top + bottom + 1) // 2
- hmid = (left + right + 1) // 2
- # Construct top, bottom, right, left masks
- quad_mask = mask[None,:,:].repeat(4, 1, 1)
- quad_mask[0, vmid:, :] = 0 # top
- quad_mask[1, :, hmid:] = 0 # right
- quad_mask[2, :vmid, :] = 0 # bottom
- quad_mask[3, :, :hmid] = 0 # left
- quad_mask = quad_mask.long()
- # Modify extra segmentation labels by offsetting
- segs[i,3,:,:] += quad_mask[0] * num_seg_labels
- segs[i,4,:,:] += quad_mask[1] * (2 * num_seg_labels)
- segs[i,3,:,:] += quad_mask[2] * (3 * num_seg_labels)
- segs[i,4,:,:] += quad_mask[3] * (4 * num_seg_labels)
- # remove any components that were too small to subdivide
- mask = segs[:,3:] <= self.num_object_classes
- segs[:,3:][mask] = 0
- return segs
-
-class SemanticSegmenter(BaseSegmenter):
- def __init__(self, modeldir=None, segarch=None, segvocab=None,
- segsizes=None, segdiv=None, epoch=None):
- # Create a segmentation model
-        if modeldir is None:
-            modeldir = 'dataset/segmodel'
-        if segvocab is None:
-            segvocab = 'baseline'
-        if segarch is None:
-            segarch = ('resnet50_dilated8', 'ppm_bilinear_deepsup')
-        elif isinstance(segarch, str):
-            segarch = segarch.split(',')
-        if segdiv is None:
-            segdiv = 'undivided'
- segmodel = load_segmentation_model(modeldir, segarch, segvocab, epoch)
- if segsizes is None:
- segsizes = getattr(segmodel.meta, 'segsizes', [256])
- self.segsizes = segsizes
-        # Verify that the segmentation model has every out_channel labeled.
- assert len(segmodel.meta.labels) == list(c for c in segmodel.modules()
- if isinstance(c, torch.nn.Conv2d))[-1].out_channels
- segmodel.cuda()
- self.segmodel = segmodel
- self.segdiv = segdiv
- # Image normalization
- self.bgr = (segmodel.meta.imageformat.byteorder == 'BGR')
- self.imagemean = torch.tensor(segmodel.meta.imageformat.mean)
- self.imagestd = torch.tensor(segmodel.meta.imageformat.stdev)
- # Map from labels to external indexes, and labels to channel sets.
- self.labelmap = {'-': 0}
- self.channelmap = {'-': []}
- self.labels = [('-', '-')]
- num_labels = 1
- self.num_underlying_classes = len(segmodel.meta.labels)
- # labelmap maps names to external indexes.
- for i, label in enumerate(segmodel.meta.labels):
- if label.name not in self.channelmap:
- self.channelmap[label.name] = []
- self.channelmap[label.name].append(i)
- if getattr(label, 'internal', None) or label.name in self.labelmap:
- continue
- self.labelmap[label.name] = num_labels
- num_labels += 1
- self.labels.append((label.name, label.category))
- # Each category gets its own independent softmax.
- self.category_indexes = { category.name:
- [i for i, label in enumerate(segmodel.meta.labels)
- if label.category == category.name]
- for category in segmodel.meta.categories }
- # catindexmap maps names to category internal indexes
- self.catindexmap = {}
- for catname, indexlist in self.category_indexes.items():
- for index, i in enumerate(indexlist):
- self.catindexmap[segmodel.meta.labels[i].name] = (
- (catname, index))
- # After the softmax, each category is mapped to external indexes.
- self.category_map = { catname:
- torch.tensor([
- self.labelmap.get(segmodel.meta.labels[ind].name, 0)
- for ind in catindex])
- for catname, catindex in self.category_indexes.items()}
- self.category_rules = segmodel.meta.categories
- # Finally, naive subdivision can be applied.
- mult = 1
- if self.segdiv == 'quad':
- mult = 5
- suffixes = ['t', 'l', 'b', 'r']
- divided_labels = []
- for suffix in suffixes:
- divided_labels.extend([('%s-%s' % (label, suffix), cat)
- for label, cat in self.labels[1:]])
- self.channelmap.update({
- '%s-%s' % (label, suffix): self.channelmap[label]
- for label, cat in self.labels[1:] })
- self.labels.extend(divided_labels)
- # For examining a single class
- self.channellist = [self.channelmap[name] for name, _ in self.labels]
-
- def get_label_and_category_names(self, dataset=None):
- return self.labels, self.segmodel.categories
-
- def segment_batch(self, tensor_images, downsample=1):
- return self.raw_segment_batch(tensor_images, downsample)[0]
-
- def raw_segment_batch(self, tensor_images, downsample=1):
- pred = self.raw_seg_prediction(tensor_images, downsample)
- catsegs = {}
- for catkey, catindex in self.category_indexes.items():
- _, segs = torch.max(pred[:, catindex], dim=1)
- catsegs[catkey] = segs
- masks = {}
- segs = torch.zeros(len(tensor_images), len(self.category_rules),
-                pred.shape[2], pred.shape[3], device=pred.device,
- dtype=torch.long)
- for i, cat in enumerate(self.category_rules):
- catmap = self.category_map[cat.name].to(pred.device)
- translated = catmap[catsegs[cat.name]]
- if getattr(cat, 'mask', None) is not None:
- if cat.mask not in masks:
- maskcat, maskind = self.catindexmap[cat.mask]
- masks[cat.mask] = (catsegs[maskcat] == maskind)
- translated *= masks[cat.mask].long()
- segs[:,i] = translated
- if self.segdiv == 'quad':
- segs = self.expand_segment_quad(segs,
- self.num_underlying_classes, self.segdiv)
- return segs, pred
-
- def raw_seg_prediction(self, tensor_images, downsample=1):
- '''
- Generates a segmentation by applying multiresolution voting on
- the segmentation model, using (rounded to 32 pixels) a set of
- resolutions in the example benchmark code.
- '''
- y, x = tensor_images.shape[2:]
- b = len(tensor_images)
- # Flip the RGB order if specified.
- if self.bgr:
- tensor_images = torch.flip(tensor_images, (1,))
- # Transform from our [-1..1] range to torch standard [0..1] range
- # and then apply normalization.
- tensor_images = ((tensor_images + 1) / 2
- ).sub_(self.imagemean[None,:,None,None].to(tensor_images.device)
- ).div_(self.imagestd[None,:,None,None].to(tensor_images.device))
- # Output shape can be downsampled.
- seg_shape = (y // downsample, x // downsample)
- # We want these to be multiples of 32 for the model.
- sizes = [(s, s) for s in self.segsizes]
- pred = torch.zeros(
- len(tensor_images), (self.num_underlying_classes),
- seg_shape[0], seg_shape[1]).cuda()
- for size in sizes:
- if size == tensor_images.shape[2:]:
- resized = tensor_images
- else:
- resized = torch.nn.AdaptiveAvgPool2d(size)(tensor_images)
- raw_pred = self.segmodel(
- dict(img_data=resized), segSize=seg_shape)
- softmax_pred = torch.empty_like(raw_pred)
- for catindex in self.category_indexes.values():
- softmax_pred[:, catindex] = torch.nn.functional.softmax(
- raw_pred[:, catindex], dim=1)
- pred += softmax_pred
- return pred
-
- def expand_segment_quad(self, segs, num_seg_labels, segdiv='quad'):
- shape = segs.shape
- output = segs.repeat(1, 3, 1, 1)
- # For every connected component present (using generator)
- for i, mask in component_masks(segs):
- # Figure the bounding box of the label
- top, bottom = mask.any(dim=1).nonzero()[[0, -1], 0]
- left, right = mask.any(dim=0).nonzero()[[0, -1], 0]
- # Chop the bounding box into four parts
- vmid = (top + bottom + 1) // 2
- hmid = (left + right + 1) // 2
- # Construct top, bottom, right, left masks
- quad_mask = mask[None,:,:].repeat(4, 1, 1)
- quad_mask[0, vmid:, :] = 0 # top
- quad_mask[1, :, hmid:] = 0 # right
- quad_mask[2, :vmid, :] = 0 # bottom
- quad_mask[3, :, :hmid] = 0 # left
- quad_mask = quad_mask.long()
- # Modify extra segmentation labels by offsetting
- output[i,1,:,:] += quad_mask[0] * num_seg_labels
- output[i,2,:,:] += quad_mask[1] * (2 * num_seg_labels)
- output[i,1,:,:] += quad_mask[2] * (3 * num_seg_labels)
- output[i,2,:,:] += quad_mask[3] * (4 * num_seg_labels)
- return output
-
- def predict_single_class(self, tensor_images, classnum, downsample=1):
- '''
- Given a batch of images (RGB, normalized to [-1...1]) and
- a specific segmentation class number, returns a tuple with
- (1) a differentiable ([0..1]) prediction score for the class
- at every pixel of the input image.
- (2) a binary mask showing where in the input image the
- specified class is the best-predicted label for the pixel.
- Does not work on subdivided labels.
- '''
- seg, pred = self.raw_segment_batch(tensor_images,
- downsample=downsample)
- result = pred[:,self.channellist[classnum]].sum(dim=1)
- mask = (seg == classnum).max(1)[0]
- return result, mask
-
-def component_masks(segmentation_batch):
- '''
- Splits connected components into regions (slower, requires cpu).
- '''
- npbatch = segmentation_batch.cpu().numpy()
- for i in range(segmentation_batch.shape[0]):
- labeled, num = skimage.morphology.label(npbatch[i][0], return_num=True)
- labeled = torch.from_numpy(labeled).to(segmentation_batch.device)
-        for label in range(1, num + 1):
- yield i, (labeled == label)
-
-def load_unified_parsing_segmentation_model(segmodel_arch, segvocab, epoch):
- segmodel_dir = 'dataset/segmodel/%s-%s-%s' % ((segvocab,) + segmodel_arch)
- # Load json of class names and part/object structure
- with open(os.path.join(segmodel_dir, 'labels.json')) as f:
- labeldata = json.load(f)
- nr_classes={k: len(labeldata[k])
- for k in ['object', 'scene', 'material']}
- nr_classes['part'] = sum(len(p) for p in labeldata['object_part'].values())
- # Create a segmentation model
- segbuilder = upsegmodel.ModelBuilder()
- # example segmodel_arch = ('resnet101', 'upernet')
- seg_encoder = segbuilder.build_encoder(
- arch=segmodel_arch[0],
- fc_dim=2048,
- weights=os.path.join(segmodel_dir, 'encoder_epoch_%d.pth' % epoch))
- seg_decoder = segbuilder.build_decoder(
- arch=segmodel_arch[1],
- fc_dim=2048, use_softmax=True,
- nr_classes=nr_classes,
- weights=os.path.join(segmodel_dir, 'decoder_epoch_%d.pth' % epoch))
- segmodel = upsegmodel.SegmentationModule(
- seg_encoder, seg_decoder, labeldata)
- segmodel.categories = ['object', 'part', 'material']
- segmodel.eval()
- return segmodel
-
-def load_segmentation_model(modeldir, segmodel_arch, segvocab, epoch=None):
- # Load csv of class names
- segmodel_dir = 'dataset/segmodel/%s-%s-%s' % ((segvocab,) + segmodel_arch)
- with open(os.path.join(segmodel_dir, 'labels.json')) as f:
- labeldata = EasyDict(json.load(f))
- # Automatically pick the last epoch available.
- if epoch is None:
- choices = [os.path.basename(n)[14:-4] for n in
- glob.glob(os.path.join(segmodel_dir, 'encoder_epoch_*.pth'))]
- epoch = max([int(c) for c in choices if c.isdigit()])
- # Create a segmentation model
- segbuilder = segmodel_module.ModelBuilder()
- # example segmodel_arch = ('resnet101', 'upernet')
- seg_encoder = segbuilder.build_encoder(
- arch=segmodel_arch[0],
- fc_dim=2048,
- weights=os.path.join(segmodel_dir, 'encoder_epoch_%d.pth' % epoch))
- seg_decoder = segbuilder.build_decoder(
- arch=segmodel_arch[1],
- fc_dim=2048, inference=True, num_class=len(labeldata.labels),
- weights=os.path.join(segmodel_dir, 'decoder_epoch_%d.pth' % epoch))
- segmodel = segmodel_module.SegmentationModule(seg_encoder, seg_decoder,
- torch.nn.NLLLoss(ignore_index=-1))
- segmodel.categories = [cat.name for cat in labeldata.categories]
- segmodel.labels = [label.name for label in labeldata.labels]
- categories = OrderedDict()
- label_category = numpy.zeros(len(segmodel.labels), dtype=int)
- for i, label in enumerate(labeldata.labels):
- label_category[i] = segmodel.categories.index(label.category)
- segmodel.meta = labeldata
- segmodel.eval()
- return segmodel
-
-def ensure_upp_segmenter_downloaded(directory):
- baseurl = 'http://netdissect.csail.mit.edu/data/segmodel'
- dirname = 'upp-resnet50-upernet'
- files = ['decoder_epoch_40.pth', 'encoder_epoch_40.pth', 'labels.json']
- download_dir = os.path.join(directory, dirname)
- os.makedirs(download_dir, exist_ok=True)
- for fn in files:
- if os.path.isfile(os.path.join(download_dir, fn)):
- continue # Skip files already downloaded
- url = '%s/%s/%s' % (baseurl, dirname, fn)
- print('Downloading %s' % url)
- urlretrieve(url, os.path.join(download_dir, fn))
- assert os.path.isfile(os.path.join(directory, dirname, 'labels.json'))
-
-def test_main():
- '''
- Test the unified segmenter.
- '''
- from PIL import Image
- testim = Image.open('script/testdata/test_church_242.jpg')
- tensor_im = (torch.from_numpy(numpy.asarray(testim)).permute(2, 0, 1)
- .float() / 255 * 2 - 1)[None, :, :, :].cuda()
- segmenter = UnifiedParsingSegmenter()
- seg = segmenter.segment_batch(tensor_im)
- bc = torch.bincount(seg.view(-1))
- labels, cats = segmenter.get_label_and_category_names()
- for label in bc.nonzero()[:,0]:
- if label.item():
- # What is the prediction for this class?
- pred, mask = segmenter.predict_single_class(tensor_im, label.item())
- assert mask.sum().item() == bc[label].item()
- assert len(((seg == label).max(1)[0] - mask).nonzero()) == 0
- inside_pred = pred[mask].mean().item()
- outside_pred = pred[~mask].mean().item()
- print('%s (%s, #%d): %d pixels, pred %.2g inside %.2g outside' %
- (labels[label.item()] + (label.item(), bc[label].item(),
- inside_pred, outside_pred)))
-
-if __name__ == '__main__':
- test_main()
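For orientation, the deleted `netdissect/segmenter.py` above is typically driven as in the following sketch: construct a `UnifiedParsingSegmenter`, fetch its label table, and call `segment_batch` on RGB tensors scaled to [-1, 1]. This assumes a CUDA device and the pretrained UPP weights (fetched by the module's own download helper); the random input batch is only there to illustrate shapes and the multilabel output layout.

```python
import torch
from netdissect.segmenter import (UnifiedParsingSegmenter,
                                  ensure_upp_segmenter_downloaded)

# Fetch the resnet50/upernet weights into the default location the module expects.
ensure_upp_segmenter_downloaded('dataset/segmodel')

segmenter = UnifiedParsingSegmenter(segsizes=[256])
labels, categories = segmenter.get_label_and_category_names()

# A random batch stands in for real images; values must lie in [-1, 1].
images = torch.rand(2, 3, 256, 256).cuda() * 2 - 1
seg = segmenter.segment_batch(images, downsample=4)  # (n, multipred, y, x)

# Report every class predicted anywhere in the batch.
for class_id in torch.bincount(seg.view(-1)).nonzero()[:, 0]:
    if class_id.item():
        name, category = labels[class_id.item()]
        print('%s (%s): %d pixels' % (name, category, (seg == class_id).sum().item()))
```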
diff --git a/spaces/saicmsaicm/pet-breed/README.md b/spaces/saicmsaicm/pet-breed/README.md
deleted file mode 100644
index 89c4b63b6b18a654db937714435211ff66de2bb5..0000000000000000000000000000000000000000
--- a/spaces/saicmsaicm/pet-breed/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Pet Breed
-emoji: 🌍
-colorFrom: indigo
-colorTo: red
-sdk: gradio
-sdk_version: 3.41.2
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/sarinam/speaker-anonymization/IMSToucan/Layers/MultiSequential.py b/spaces/sarinam/speaker-anonymization/IMSToucan/Layers/MultiSequential.py
deleted file mode 100644
index bccf8cd18bf94a42fcc1ef94f3fb23e86a114394..0000000000000000000000000000000000000000
--- a/spaces/sarinam/speaker-anonymization/IMSToucan/Layers/MultiSequential.py
+++ /dev/null
@@ -1,33 +0,0 @@
-# Written by Shigeki Karita, 2019
-# Published under Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
-# Adapted by Florian Lux, 2021
-
-import torch
-
-
-class MultiSequential(torch.nn.Sequential):
- """
- Multi-input multi-output torch.nn.Sequential.
- """
-
- def forward(self, *args):
- """
- Repeat.
- """
- for m in self:
- args = m(*args)
- return args
-
-
-def repeat(N, fn):
- """
- Repeat module N times.
-
- Args:
- N (int): Number of repeat time.
- fn (Callable): Function to generate module.
-
- Returns:
- MultiSequential: Repeated model instance.
- """
- return MultiSequential(*[fn(n) for n in range(N)])
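As a quick illustration of the deleted helper above, here is a hedged sketch of `repeat` stacking a multi-input layer; `AddAndMask` is a made-up module for demonstration only, and the import path assumes the `IMSToucan` package root is importable.

```python
import torch
from IMSToucan.Layers.MultiSequential import repeat

class AddAndMask(torch.nn.Module):
    """Toy layer: any module whose forward accepts and returns the same
    tuple of arguments composes cleanly inside MultiSequential."""
    def __init__(self, dim=8):
        super().__init__()
        self.proj = torch.nn.Linear(dim, dim)

    def forward(self, x, mask):
        return self.proj(x) * mask, mask

stack = repeat(4, lambda layer_index: AddAndMask(dim=8))
x, mask = stack(torch.randn(2, 8), torch.ones(2, 8))
print(x.shape)  # torch.Size([2, 8]); mask is passed through unchanged
```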
diff --git a/spaces/sarthakrw/web-query/README.md b/spaces/sarthakrw/web-query/README.md
deleted file mode 100644
index af6f62950ddf324f1c20a563d397fda3ddf72f94..0000000000000000000000000000000000000000
--- a/spaces/sarthakrw/web-query/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Web Query
-emoji: 🔥
-colorFrom: purple
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.41.1
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/sasha/WinoBiasCheck/honestbias.py b/spaces/sasha/WinoBiasCheck/honestbias.py
deleted file mode 100644
index 5b32c95f85a69b68aab20cc38cc50463ce23a8f4..0000000000000000000000000000000000000000
--- a/spaces/sasha/WinoBiasCheck/honestbias.py
+++ /dev/null
@@ -1,100 +0,0 @@
-# -*- coding: utf-8 -*-
-"""Honest
-
-Source: https://github.com/MilaNLProc/honest
-"""
-
-from transformers import AutoTokenizer, AutoModel , AutoModelForMaskedLM
-from transformers import pipeline
-import plotly.graph_objects as go
-from honest import honest
-from pathlib import Path
-import pandas as pd
-import pickle
-import ast
-# Initialize HONEST evaluator
-# data_set can be "binary" as in (Nozza et al., 2021) or "queer_nonqueer" (Nozza et al., 2022) or combined as "all"
-
-
-
-def get_completion(sentence, template, tokenizer):
- start = sentence.find(template[:-4])+len(template[:-4])
-    if tokenizer.eos_token is not None and tokenizer.eos_token in sentence:
-        end = sentence.find(tokenizer.eos_token)
-    elif tokenizer.sep_token is not None and tokenizer.sep_token in sentence:
-        end = sentence.find(tokenizer.sep_token)
-    else:
-        end = len(sentence)-1
-
- return sentence[start:end]
-
-
-
-# Define nlp_fill pipeline
-def calculate_HONEST(modelname, modeltype):
- honestdatapath = modelname.replace('/','')+'_HONESTdata.csv'
- honestscorepath = modelname.replace('/','')+'_HONESTscore.pkl'
- if Path(honestdatapath).is_file():
- print("loading local data")
- honest_df = pd.read_csv(honestdatapath)
- honest_score = pickle.load(open(honestscorepath, 'rb'))
- else:
- evaluator = honest.HonestEvaluator("en")
- masked_templates = evaluator.templates(data_set="binary")
- if modeltype == "CLM":
- generator = pipeline('text-generation', model=modelname)
- filled_templates = [[fill['generated_text'][len(masked_sentence.replace(' [M]','')):] for fill in generator(masked_sentence.replace(' [M].',''), max_length=len(masked_sentence.replace(' [M].','').split(" "))+10, num_return_sequences=5)] for masked_sentence in masked_templates.keys()]
- honest_score, honest_df = evaluator.honest_dataframe(filled_templates, masked_templates)
- honest_df.to_csv(modelname.replace('/','')+'_HONESTdata.csv')
- pickle.dump(honest_score,open(honestscorepath,'wb'))
- elif modeltype == "MLM":
- tokenizer = AutoTokenizer.from_pretrained(modelname, add_special_tokens = False)
- model = AutoModelForMaskedLM.from_pretrained(modelname)
- nlp_fill = pipeline('fill-mask', model=model, tokenizer=tokenizer, top_k=5)
- filled_templates = [[get_completion(fill['sequence'],masked_sentence,tokenizer) for fill in nlp_fill(masked_sentence.replace('[M]',tokenizer.mask_token))] for masked_sentence in masked_templates.keys()]
- honest_score, honest_df = evaluator.honest_dataframe(filled_templates, masked_templates)
- honest_df.to_csv(modelname.replace('/','')+'_HONESTdata.csv')
- pickle.dump(honest_score,open(honestscorepath,'wb'))
- return(honest_score, honest_df)
-
-
-"""Analyze completion dataframe"""
-
-def analyze_honest(honest_df):
- hurtlex_cat = ['animals', 'crime and immoral behavior', 'derogatory words', 'female genitalia', 'male genitalia', 'homosexuality', 'potential negative connotations', 'professions and occupations', 'prostitution', 'social and economic disadvantage']
- hurtlex_cat_names = ['an', 're', 'cds', 'asf', 'asm', 'om', 'qas', 'pa', 'pr', 'is']
- df_identity = honest_df.groupby('category')[hurtlex_cat_names].sum()
- df_count = honest_df.groupby('category')[hurtlex_cat_names].count()*5
- df_perc = df_identity/df_count*100
-
- plots = [go.Scatterpolar(r=df_perc.loc[i], theta=hurtlex_cat, fill='toself',
- name=i) for i, row in df_perc.iterrows()]
-
- honest_fig = go.Figure(
- data=plots,
- layout=go.Layout(
- polar={'radialaxis': {'visible': True}}
- )
- )
- return(honest_fig)
-
-"""Show filled terms"""
-
-def show_filled_terms(honest_df):
- grouped_df = honest_df.groupby(['raw', 'identity'])
- filled_terms = []
- for key, item in grouped_df:
- all_terms = []
- key_group = grouped_df.get_group(key)
- for l in key_group.filled_words:
- terms = ast.literal_eval(str(l))
- all_terms = all_terms + terms
- all_terms = list(set(all_terms))
- filled_terms.append([key[0].replace('[I]',key[1]).replace('[M]',''), key_group.category.values[0], all_terms])
- filled_terms_df = pd.DataFrame(filled_terms)
- female_df, male_df = [x for _, x in filled_terms_df.groupby([1])]
- female_df.columns = ['prompt','category','filled_words']
- female_df = female_df.drop(['category'],axis=1)
- male_df.columns = ['prompt','category','filled_words']
- male_df = male_df.drop(['category'],axis=1)
- return(female_df, male_df)
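To show how the deleted helpers were meant to compose, here is a minimal sketch; `distilbert-base-uncased` is only an example checkpoint, and the first run downloads the HONEST templates and the model before `calculate_HONEST` caches its scores to CSV/pickle as shown above.

```python
from honestbias import calculate_HONEST, analyze_honest, show_filled_terms

# Score a masked language model ("CLM" would be used for generative models).
honest_score, honest_df = calculate_HONEST('distilbert-base-uncased', 'MLM')
print('HONEST score:', honest_score)

# Radar chart of hurtful-completion rates per identity group, plus the raw fills.
radar_fig = analyze_honest(honest_df)
female_df, male_df = show_filled_terms(honest_df)
radar_fig.show()
```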
diff --git a/spaces/scedlatioru/img-to-music/Luxion KeyShot Pro 9.0.289 Crack License Key With Keygen 2020 (Win Mac) 2021.md b/spaces/scedlatioru/img-to-music/Luxion KeyShot Pro 9.0.289 Crack License Key With Keygen 2020 (Win Mac) 2021.md
deleted file mode 100644
index 2f2a5b9b0b01c7670f2d5c9b7875080c85317246..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/Luxion KeyShot Pro 9.0.289 Crack License Key With Keygen 2020 (Win Mac) 2021.md
+++ /dev/null
@@ -1,120 +0,0 @@
-## Luxion KeyShot Pro 9.0.289 Crack License Key With Keygen 2020 (Win Mac)
-
-
-
-
-
-
-
-
-
-**LINK ===== [https://urlca.com/2txvPW](https://urlca.com/2txvPW)**
-
-
-
-
-
-
-
-
-
-
-
-
-
-# Luxion KeyShot Pro 9.0.289 Crack License Key With Keygen 2020 (Win Mac)
-
-
-
-Luxion KeyShot Pro 9.0.289 Crack is a powerful and easy-to-use 3D rendering application that lets you create stunning visuals in real time. With KeyShot, you can create photorealistic images and animations of your 3D models with just a few clicks. Whether you need to showcase a product, design, or concept, KeyShot can help you bring your ideas to life.
-
-
-
-KeyShot Pro 9.0.289 License Key is the latest version of the software that comes with many new features and improvements. Some of the highlights include:
-
-
-
-- A new RealCloth™ material that lets you create realistic fabrics and textiles with complex weave patterns and advanced control.
-
-- A new Fuzz geometry shader that adds realistic hair, fur, and grass to any object.
-
-- A new Denoise feature that reduces noise and improves image quality in both the viewport and the final render.
-
-- A new 3D model library that gives you access to hundreds of free models from various categories.
-
-- A new web configurator that allows you to create interactive web presentations of your scenes and models.
-
-- A new GPU mode that leverages the power of NVIDIA RTX graphics cards to accelerate ray tracing and rendering.
-
-
-
-KeyShot Pro 9.0.289 Keygen is a tool that generates valid serial numbers for activating the software. By using this keygen, you can enjoy all the features of KeyShot Pro without any limitations. However, this keygen is only for educational purposes and we do not recommend using it for commercial or illegal purposes. If you like the software, please support the developers by purchasing a legitimate license.
-
-
-
-KeyShot Pro 9.0.289 Crack is a file that modifies the original executable of the software to bypass the license verification process. By using this crack, you can run the software without entering any serial number or license key. However, this crack is also only for educational purposes and we do not recommend using it for commercial or illegal purposes. If you like the software, please support the developers by purchasing a legitimate license.
-
-
-
-Luxion KeyShot Pro 9.0.289 Crack License Key With Keygen 2020 (Win Mac) is a complete package that gives you everything you need to create stunning 3D renders and animations with ease and speed. Download it now and unleash your creativity!
-
-
-
-How to Install Luxion KeyShot Pro 9.0.289 Crack License Key With Keygen 2020 (Win Mac)
-
-
-
-Installing Luxion KeyShot Pro 9.0.289 Crack License Key With Keygen 2020 (Win Mac) is very easy and straightforward. Just follow these simple steps:
-
-
-
-1. Download the software from the link provided below.
-
-2. Extract the zip file and run the setup.exe file.
-
-3. Follow the instructions on the screen and complete the installation process.
-
-4. Do not launch the software yet. Close it if it runs automatically.
-
-5. Copy the crack file from the crack folder and paste it into the installation directory.
-
-6. Run the keygen file and generate a serial number.
-
-7. Launch the software and enter the serial number when prompted.
-
-8. Enjoy Luxion KeyShot Pro 9.0.289 Crack License Key With Keygen 2020 (Win Mac)!
-
-
-
-Note: You may need to disable your antivirus or firewall before installing or running the software. This is because some antivirus or firewall programs may detect the crack or keygen as malicious files and block them. However, we assure you that they are safe and clean and do not contain any viruses or malware.
-
-
-
-Why Choose Luxion KeyShot Pro 9.0.289 Crack License Key With Keygen 2020 (Win Mac)
-
-
-
-Luxion KeyShot Pro 9.0.289 Crack License Key With Keygen 2020 (Win Mac) is one of the best 3D rendering software in the market. It has many advantages and benefits that make it stand out from other similar software. Here are some of the reasons why you should choose Luxion KeyShot Pro 9.0.289 Crack License Key With Keygen 2020 (Win Mac):
-
-
-
-- It is very easy to use and learn. You do not need any prior experience or knowledge of 3D rendering to use KeyShot. You can simply drag and drop your 3D models into the software and start rendering them with a few clicks.
-
-- It is very fast and efficient. You can see the results of your changes in real-time in the viewport without waiting for long rendering times. You can also use the GPU mode to speed up the rendering process even more.
-
-- It is very versatile and flexible. You can import and export your 3D models from various formats and applications such as SolidWorks, SketchUp, Maya, Blender, etc. You can also customize and adjust your materials, lighting, camera, environment, etc. to suit your needs and preferences.
-
-- It is very realistic and accurate. You can create photorealistic images and animations of your 3D models with high-quality details and effects such as shadows, reflections, refractions, depth of field, motion blur, etc. You can also use the RealCloth™ and Fuzz features to add realistic fabrics and hair to your models.
-
-- It is very creative and fun. You can explore different possibilities and options with your 3D models and scenes using the web configurator and the 3D model library. You can also generate unique and original content using the creative tools such as patterns, textures, colors, gradients, etc.
-
-
-
-Luxion KeyShot Pro 9.0.289 Crack License Key With Keygen 2020 (Win Mac) is a software that will help you create amazing 3D renders and animations with ease and speed. It is a software that will make you enjoy your work and unleash your creativity!
-
-
-
-
-
-
diff --git a/spaces/scedlatioru/img-to-music/example/Busy39fullcracksoftware.md b/spaces/scedlatioru/img-to-music/example/Busy39fullcracksoftware.md
deleted file mode 100644
index 00cf640aeccfc569cc3d3653854db9ea54bdc472..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/example/Busy39fullcracksoftware.md
+++ /dev/null
@@ -1,12 +0,0 @@
-
-https://coub.com/stories/2728629-upd-busy39fullcracksoftware https://coub.com/stories/2728629-upd-busy39fullcracksoftware (Tuesday, 18 January 2022 05:30). pgupvoo (Wednesday, 19 January 2022 00:30). https://coub.com/stories/2728629-upd-busy39fullcracksoftware
-busy39fullcracksoftware
Download ····· https://gohhs.com/2uEzfg
-https://coub.com/stories/17761899-busy39fullcracksoftware https://coub.com/stories/17761899-busy39fullcracksoftware (Wednesday, 19 January 2022 05:30). .com Marrakesh (Wednesday, 19 January 2022 05:30).
-. https://coub.com/stories/3064099-repack-busy39fullcracksoftware (Wednesday, 19 January 2022 01:15). . com Marrakesh (Wednesday, 19 January 2022 01:12). Matix. Matix https://coub.com/stories/17761899-busy39fullcracksoftware
-BusyFeetWallet 2015-15-28 15:53:03 [Message] InvalidSender :: InvalidSender Your URL: https://coub.com/stories/2728629-upd-busy39fullcracksoftware Link: https://coub.com/stories/2728629-upd-busy39fullcracksoftware Response: [] Tue Jan 18 15:48:32 2016
-2018-12-18 02:23:03 [Message] InvalidReceiver :: InvalidReceiver Your URL: https://coub.com/stories/2728629-upd-busy39fullcracksoftware Link: https://coub.com/stories/2728629-upd-busy39fullcracksoftware Response: [] Tue Jan 18 15:48:32 2016
-BusyFeetWallet 2015-15-28 15:50:33 [Message] InvalidSender :: InvalidSender Your URL: https://coub.com/stories/2728629-upd-busy39fullcracksoftware Link: https://coub.com/stories/2728629-upd-busy39fullcracksoftware Response: [] Tue Jan 18 15:48:32 2016
-
-BusyFeetWallet 2015-15-28 15:59:47 [Message] InvalidSender :: InvalidSender Your URL: https://coub.com/stories/2728629-upd-busy39fullcracksoftware Link: https://coub.com/stories/2728629-upd-busy39fullcracksoftware Response: [] Tue Jan 18 15:48:32 2016
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/scedlatioru/img-to-music/example/Windows Phone Support Tool 4.8 2345 Download Skype.md b/spaces/scedlatioru/img-to-music/example/Windows Phone Support Tool 4.8 2345 Download Skype.md
deleted file mode 100644
index 735fe7481c75e3e223299b427f689699b378d1b7..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/example/Windows Phone Support Tool 4.8 2345 Download Skype.md
+++ /dev/null
@@ -1,10 +0,0 @@
-windows phone support tool 4.8 2345 download skype
Download ✵ https://gohhs.com/2uEzG1
-
-Windows Phone 4.8 Support Tool 2345 Download Skype. 5 item. windows phone support tool 4.8 2345 download skype DOWNLOAD: phone support tool 4.8 2345 download skype. By downloading Skype.
-The program: Skype, for computers running Windows XP, 7, 8, and 10.
-You can download Skype for the Windows operating system, in Russian, for free from the link below.
-You can download Skype for your computer for free, in Russian, at any time from the official website.
-Skype, available as a free download for Windows 7 and 8 (8.1) as well as other operating systems (Windows XP, Vista, and Windows 10), is a free program for voice calls, text messaging and, of course, video.
-
-
-
diff --git a/spaces/sciling/Face_and_Plate_License_Blur/utils/infer_utils.py b/spaces/sciling/Face_and_Plate_License_Blur/utils/infer_utils.py
deleted file mode 100644
index 9dc428cd4c570dde404453aafae512602b72cc80..0000000000000000000000000000000000000000
--- a/spaces/sciling/Face_and_Plate_License_Blur/utils/infer_utils.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import torch
-
-
-
-def decode_infer(output, stride, gt_per_grid, numclass):
-    """Decode raw grid predictions into (x1, y1, x2, y2, conf, class probs).
-
-    ``gt_per_grid`` and ``numclass`` were previously read from ``self``; they
-    are now explicit arguments so the function can run standalone.
-    """
-    bz = output.shape[0]          # batch size
-    gridsize = output.shape[-1]   # feature-map width/height
-
-    output = output.permute(0, 2, 3, 1)
-    output = output.view(bz, gridsize, gridsize, gt_per_grid, 5 + numclass)
-    x1y1, x2y2, conf, prob = torch.split(
-        output, [2, 2, 1, numclass], dim=4)
-
-    shiftx = torch.arange(0, gridsize, dtype=torch.float32)
-    shifty = torch.arange(0, gridsize, dtype=torch.float32)
-    shifty, shiftx = torch.meshgrid([shiftx, shifty])
-    shiftx = shiftx.unsqueeze(-1).repeat(bz, 1, 1, gt_per_grid)
-    shifty = shifty.unsqueeze(-1).repeat(bz, 1, 1, gt_per_grid)
-
-    # Grid-cell offsets, kept on the same device as the network output.
-    xy_grid = torch.stack([shiftx, shifty], dim=4).to(output.device)
-    x1y1 = (xy_grid + 0.5 - torch.exp(x1y1)) * stride
-    x2y2 = (xy_grid + 0.5 + torch.exp(x2y2)) * stride
-
-    xyxy = torch.cat((x1y1, x2y2), dim=4)
-    conf = torch.sigmoid(conf)
-    prob = torch.sigmoid(prob)
-    output = torch.cat((xyxy, conf, prob), 4)
-    output = output.view(bz, -1, 5 + numclass)
-    return output
\ No newline at end of file
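With the decoder above now taking its grid parameters explicitly, a quick sanity check might look like this; the head shape, stride, anchor count, and class count are invented purely for illustration.

```python
import torch
from utils.infer_utils import decode_infer  # module path within this Space

# Hypothetical detection-head output: 3 anchors per cell and 2 classes,
# i.e. 3 * (5 + 2) = 21 channels on a 13x13 grid with stride 8.
dummy = torch.randn(1, 21, 13, 13)
boxes = decode_infer(dummy, stride=8, gt_per_grid=3, numclass=2)
print(boxes.shape)  # torch.Size([1, 507, 7]) -> 13*13*3 candidate boxes
```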
diff --git a/spaces/sczhou/CodeFormer/CodeFormer/inference_codeformer.py b/spaces/sczhou/CodeFormer/CodeFormer/inference_codeformer.py
deleted file mode 100644
index fdfe8b301cc7c20c2fb653618e379d243603a108..0000000000000000000000000000000000000000
--- a/spaces/sczhou/CodeFormer/CodeFormer/inference_codeformer.py
+++ /dev/null
@@ -1,189 +0,0 @@
-# Modified by Shangchen Zhou from: https://github.com/TencentARC/GFPGAN/blob/master/inference_gfpgan.py
-import os
-import cv2
-import argparse
-import glob
-import torch
-from torchvision.transforms.functional import normalize
-from basicsr.utils import imwrite, img2tensor, tensor2img
-from basicsr.utils.download_util import load_file_from_url
-from facelib.utils.face_restoration_helper import FaceRestoreHelper
-import torch.nn.functional as F
-
-from basicsr.utils.registry import ARCH_REGISTRY
-
-pretrain_model_url = {
- 'restoration': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth',
-}
-
-def set_realesrgan():
- if not torch.cuda.is_available(): # CPU
- import warnings
- warnings.warn('The unoptimized RealESRGAN is slow on CPU. We do not use it. '
- 'If you really want to use it, please modify the corresponding codes.',
- category=RuntimeWarning)
- bg_upsampler = None
- else:
- from basicsr.archs.rrdbnet_arch import RRDBNet
- from basicsr.utils.realesrgan_utils import RealESRGANer
- model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=2)
- bg_upsampler = RealESRGANer(
- scale=2,
- model_path='https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth',
- model=model,
- tile=args.bg_tile,
- tile_pad=40,
- pre_pad=0,
- half=True) # need to set False in CPU mode
- return bg_upsampler
-
-if __name__ == '__main__':
- device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
- parser = argparse.ArgumentParser()
-
- parser.add_argument('--w', type=float, default=0.5, help='Balance the quality and fidelity')
- parser.add_argument('--upscale', type=int, default=2, help='The final upsampling scale of the image. Default: 2')
- parser.add_argument('--test_path', type=str, default='./inputs/cropped_faces')
- parser.add_argument('--has_aligned', action='store_true', help='Input are cropped and aligned faces')
- parser.add_argument('--only_center_face', action='store_true', help='Only restore the center face')
- # large det_model: 'YOLOv5l', 'retinaface_resnet50'
- # small det_model: 'YOLOv5n', 'retinaface_mobile0.25'
- parser.add_argument('--detection_model', type=str, default='retinaface_resnet50')
- parser.add_argument('--draw_box', action='store_true')
- parser.add_argument('--bg_upsampler', type=str, default='None', help='background upsampler. Optional: realesrgan')
- parser.add_argument('--face_upsample', action='store_true', help='face upsampler after enhancement.')
- parser.add_argument('--bg_tile', type=int, default=400, help='Tile size for background sampler. Default: 400')
-
- args = parser.parse_args()
-
- # ------------------------ input & output ------------------------
- if args.test_path.endswith('/'): # solve when path ends with /
- args.test_path = args.test_path[:-1]
-
- w = args.w
- result_root = f'results/{os.path.basename(args.test_path)}_{w}'
-
- # ------------------ set up background upsampler ------------------
- if args.bg_upsampler == 'realesrgan':
- bg_upsampler = set_realesrgan()
- else:
- bg_upsampler = None
-
- # ------------------ set up face upsampler ------------------
- if args.face_upsample:
- if bg_upsampler is not None:
- face_upsampler = bg_upsampler
- else:
- face_upsampler = set_realesrgan()
- else:
- face_upsampler = None
-
- # ------------------ set up CodeFormer restorer -------------------
- net = ARCH_REGISTRY.get('CodeFormer')(dim_embd=512, codebook_size=1024, n_head=8, n_layers=9,
- connect_list=['32', '64', '128', '256']).to(device)
-
- # ckpt_path = 'weights/CodeFormer/codeformer.pth'
- ckpt_path = load_file_from_url(url=pretrain_model_url['restoration'],
- model_dir='weights/CodeFormer', progress=True, file_name=None)
- checkpoint = torch.load(ckpt_path)['params_ema']
- net.load_state_dict(checkpoint)
- net.eval()
-
- # ------------------ set up FaceRestoreHelper -------------------
- # large det_model: 'YOLOv5l', 'retinaface_resnet50'
- # small det_model: 'YOLOv5n', 'retinaface_mobile0.25'
- if not args.has_aligned:
- print(f'Face detection model: {args.detection_model}')
- if bg_upsampler is not None:
- print(f'Background upsampling: True, Face upsampling: {args.face_upsample}')
- else:
- print(f'Background upsampling: False, Face upsampling: {args.face_upsample}')
-
- face_helper = FaceRestoreHelper(
- args.upscale,
- face_size=512,
- crop_ratio=(1, 1),
- det_model = args.detection_model,
- save_ext='png',
- use_parse=True,
- device=device)
-
- # -------------------- start to processing ---------------------
- # scan all the jpg and png images
- for img_path in sorted(glob.glob(os.path.join(args.test_path, '*.[jp][pn]g'))):
- # clean all the intermediate results to process the next image
- face_helper.clean_all()
-
- img_name = os.path.basename(img_path)
- print(f'Processing: {img_name}')
- basename, ext = os.path.splitext(img_name)
- img = cv2.imread(img_path, cv2.IMREAD_COLOR)
-
- if args.has_aligned:
- # the input faces are already cropped and aligned
- img = cv2.resize(img, (512, 512), interpolation=cv2.INTER_LINEAR)
- face_helper.cropped_faces = [img]
- else:
- face_helper.read_image(img)
- # get face landmarks for each face
- num_det_faces = face_helper.get_face_landmarks_5(
- only_center_face=args.only_center_face, resize=640, eye_dist_threshold=5)
- print(f'\tdetect {num_det_faces} faces')
- # align and warp each face
- face_helper.align_warp_face()
-
- # face restoration for each cropped face
- for idx, cropped_face in enumerate(face_helper.cropped_faces):
- # prepare data
- cropped_face_t = img2tensor(cropped_face / 255., bgr2rgb=True, float32=True)
- normalize(cropped_face_t, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True)
- cropped_face_t = cropped_face_t.unsqueeze(0).to(device)
-
- try:
- with torch.no_grad():
- output = net(cropped_face_t, w=w, adain=True)[0]
- restored_face = tensor2img(output, rgb2bgr=True, min_max=(-1, 1))
- del output
- torch.cuda.empty_cache()
- except Exception as error:
- print(f'\tFailed inference for CodeFormer: {error}')
- restored_face = tensor2img(cropped_face_t, rgb2bgr=True, min_max=(-1, 1))
-
- restored_face = restored_face.astype('uint8')
- face_helper.add_restored_face(restored_face)
-
- # paste_back
- if not args.has_aligned:
- # upsample the background
- if bg_upsampler is not None:
- # Now only support RealESRGAN for upsampling background
- bg_img = bg_upsampler.enhance(img, outscale=args.upscale)[0]
- else:
- bg_img = None
- face_helper.get_inverse_affine(None)
- # paste each restored face to the input image
- if args.face_upsample and face_upsampler is not None:
- restored_img = face_helper.paste_faces_to_input_image(upsample_img=bg_img, draw_box=args.draw_box, face_upsampler=face_upsampler)
- else:
- restored_img = face_helper.paste_faces_to_input_image(upsample_img=bg_img, draw_box=args.draw_box)
-
- # save faces
- for idx, (cropped_face, restored_face) in enumerate(zip(face_helper.cropped_faces, face_helper.restored_faces)):
- # save cropped face
- if not args.has_aligned:
- save_crop_path = os.path.join(result_root, 'cropped_faces', f'{basename}_{idx:02d}.png')
- imwrite(cropped_face, save_crop_path)
- # save restored face
- if args.has_aligned:
- save_face_name = f'{basename}.png'
- else:
- save_face_name = f'{basename}_{idx:02d}.png'
- save_restore_path = os.path.join(result_root, 'restored_faces', save_face_name)
- imwrite(restored_face, save_restore_path)
-
- # save restored img
- if not args.has_aligned and restored_img is not None:
- save_restore_path = os.path.join(result_root, 'final_results', f'{basename}.png')
- imwrite(restored_img, save_restore_path)
-
- print(f'\nAll results are saved in {result_root}')
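Distilled from the loop above, the per-face restoration reduces to the sketch below; it assumes `net` is the loaded CodeFormer model and `cropped_face` is a 512x512 BGR uint8 crop from `FaceRestoreHelper`, and the `w` value is just an example.

```python
import torch
from torchvision.transforms.functional import normalize
from basicsr.utils import img2tensor, tensor2img

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# BGR uint8 crop -> normalized RGB tensor in [-1, 1].
face_t = img2tensor(cropped_face / 255., bgr2rgb=True, float32=True)
normalize(face_t, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True)
face_t = face_t.unsqueeze(0).to(device)

with torch.no_grad():
    # w in [0, 1] balances quality against fidelity (see the --w argument above).
    output = net(face_t, w=0.7, adain=True)[0]
restored_face = tensor2img(output, rgb2bgr=True, min_max=(-1, 1)).astype('uint8')
```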
diff --git a/spaces/segments/panoptic-segment-anything-api/GroundingDINO/groundingdino/models/GroundingDINO/bertwarper.py b/spaces/segments/panoptic-segment-anything-api/GroundingDINO/groundingdino/models/GroundingDINO/bertwarper.py
deleted file mode 100644
index f0cf9779b270e1aead32845006f8b881fcba37ad..0000000000000000000000000000000000000000
--- a/spaces/segments/panoptic-segment-anything-api/GroundingDINO/groundingdino/models/GroundingDINO/bertwarper.py
+++ /dev/null
@@ -1,273 +0,0 @@
-# ------------------------------------------------------------------------
-# Grounding DINO
-# url: https://github.com/IDEA-Research/GroundingDINO
-# Copyright (c) 2023 IDEA. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-
-import torch
-import torch.nn.functional as F
-import torch.utils.checkpoint as checkpoint
-from torch import Tensor, nn
-from torchvision.ops.boxes import nms
-from transformers import BertConfig, BertModel, BertPreTrainedModel
-from transformers.modeling_outputs import BaseModelOutputWithPoolingAndCrossAttentions
-
-
-class BertModelWarper(nn.Module):
- def __init__(self, bert_model):
- super().__init__()
- # self.bert = bert_modelc
-
- self.config = bert_model.config
- self.embeddings = bert_model.embeddings
- self.encoder = bert_model.encoder
- self.pooler = bert_model.pooler
-
- self.get_extended_attention_mask = bert_model.get_extended_attention_mask
- self.invert_attention_mask = bert_model.invert_attention_mask
- self.get_head_mask = bert_model.get_head_mask
-
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- token_type_ids=None,
- position_ids=None,
- head_mask=None,
- inputs_embeds=None,
- encoder_hidden_states=None,
- encoder_attention_mask=None,
- past_key_values=None,
- use_cache=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- ):
- r"""
- encoder_hidden_states (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`):
- Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
- the model is configured as a decoder.
- encoder_attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
- Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
- the cross-attention if the model is configured as a decoder. Mask values selected in ``[0, 1]``:
-
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
- past_key_values (:obj:`tuple(tuple(torch.FloatTensor))` of length :obj:`config.n_layers` with each tuple having 4 tensors of shape :obj:`(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
- Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
-
- If :obj:`past_key_values` are used, the user can optionally input only the last :obj:`decoder_input_ids`
- (those that don't have their past key value states given to this model) of shape :obj:`(batch_size, 1)`
- instead of all :obj:`decoder_input_ids` of shape :obj:`(batch_size, sequence_length)`.
- use_cache (:obj:`bool`, `optional`):
- If set to :obj:`True`, :obj:`past_key_values` key value states are returned and can be used to speed up
- decoding (see :obj:`past_key_values`).
- """
- output_attentions = (
- output_attentions if output_attentions is not None else self.config.output_attentions
- )
- output_hidden_states = (
- output_hidden_states
- if output_hidden_states is not None
- else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- if self.config.is_decoder:
- use_cache = use_cache if use_cache is not None else self.config.use_cache
- else:
- use_cache = False
-
- if input_ids is not None and inputs_embeds is not None:
- raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
- elif input_ids is not None:
- input_shape = input_ids.size()
- batch_size, seq_length = input_shape
- elif inputs_embeds is not None:
- input_shape = inputs_embeds.size()[:-1]
- batch_size, seq_length = input_shape
- else:
- raise ValueError("You have to specify either input_ids or inputs_embeds")
-
- device = input_ids.device if input_ids is not None else inputs_embeds.device
-
- # past_key_values_length
- past_key_values_length = (
- past_key_values[0][0].shape[2] if past_key_values is not None else 0
- )
-
- if attention_mask is None:
- attention_mask = torch.ones(
- ((batch_size, seq_length + past_key_values_length)), device=device
- )
- if token_type_ids is None:
- token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device)
-
- # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
- # ourselves in which case we just need to make it broadcastable to all heads.
- extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(
- attention_mask, input_shape, device
- )
-
- # If a 2D or 3D attention mask is provided for the cross-attention
- # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length]
- if self.config.is_decoder and encoder_hidden_states is not None:
- encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size()
- encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)
- if encoder_attention_mask is None:
- encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device)
- encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask)
- else:
- encoder_extended_attention_mask = None
- # if os.environ.get('IPDB_SHILONG_DEBUG', None) == 'INFO':
- # import ipdb; ipdb.set_trace()
-
- # Prepare head mask if needed
- # 1.0 in head_mask indicate we keep the head
- # attention_probs has shape bsz x n_heads x N x N
- # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
- # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
- head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
-
- embedding_output = self.embeddings(
- input_ids=input_ids,
- position_ids=position_ids,
- token_type_ids=token_type_ids,
- inputs_embeds=inputs_embeds,
- past_key_values_length=past_key_values_length,
- )
-
- encoder_outputs = self.encoder(
- embedding_output,
- attention_mask=extended_attention_mask,
- head_mask=head_mask,
- encoder_hidden_states=encoder_hidden_states,
- encoder_attention_mask=encoder_extended_attention_mask,
- past_key_values=past_key_values,
- use_cache=use_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
- sequence_output = encoder_outputs[0]
- pooled_output = self.pooler(sequence_output) if self.pooler is not None else None
-
- if not return_dict:
- return (sequence_output, pooled_output) + encoder_outputs[1:]
-
- return BaseModelOutputWithPoolingAndCrossAttentions(
- last_hidden_state=sequence_output,
- pooler_output=pooled_output,
- past_key_values=encoder_outputs.past_key_values,
- hidden_states=encoder_outputs.hidden_states,
- attentions=encoder_outputs.attentions,
- cross_attentions=encoder_outputs.cross_attentions,
- )
-
-
-class TextEncoderShell(nn.Module):
- def __init__(self, text_encoder):
- super().__init__()
- self.text_encoder = text_encoder
- self.config = self.text_encoder.config
-
- def forward(self, **kw):
- # feed into text encoder
- return self.text_encoder(**kw)
-
-
-def generate_masks_with_special_tokens(tokenized, special_tokens_list, tokenizer):
- """Generate an attention mask between each pair of special tokens.
- Args:
- tokenized (dict): tokenizer output; its "input_ids" has shape [bs, num_token].
- special_tokens_list (list): token ids treated as span delimiters.
- Returns:
- torch.Tensor: attention mask restricted to tokens within each delimited span.
- torch.Tensor: position ids that restart from 0 inside each span.
- """
- input_ids = tokenized["input_ids"]
- bs, num_token = input_ids.shape
- # special_tokens_mask: bs, num_token. 1 for special tokens. 0 for normal tokens
- special_tokens_mask = torch.zeros((bs, num_token), device=input_ids.device).bool()
- for special_token in special_tokens_list:
- special_tokens_mask |= input_ids == special_token
-
- # idxs: each row is a list of indices of special tokens
- idxs = torch.nonzero(special_tokens_mask)
-
- # generate attention mask and positional ids
- attention_mask = (
- torch.eye(num_token, device=input_ids.device).bool().unsqueeze(0).repeat(bs, 1, 1)
- )
- position_ids = torch.zeros((bs, num_token), device=input_ids.device)
- previous_col = 0
- for i in range(idxs.shape[0]):
- row, col = idxs[i]
- if (col == 0) or (col == num_token - 1):
- attention_mask[row, col, col] = True
- position_ids[row, col] = 0
- else:
- attention_mask[row, previous_col + 1 : col + 1, previous_col + 1 : col + 1] = True
- position_ids[row, previous_col + 1 : col + 1] = torch.arange(
- 0, col - previous_col, device=input_ids.device
- )
-
- previous_col = col
-
- # # padding mask
- # padding_mask = tokenized['attention_mask']
- # attention_mask = attention_mask & padding_mask.unsqueeze(1).bool() & padding_mask.unsqueeze(2).bool()
-
- return attention_mask, position_ids.to(torch.long)
-
-
-def generate_masks_with_special_tokens_and_transfer_map(tokenized, special_tokens_list, tokenizer):
- """Generate an attention mask between each pair of special tokens, plus a category-to-token map.
- Args:
- tokenized (dict): tokenizer output; its "input_ids" has shape [bs, num_token].
- special_tokens_list (list): token ids treated as span delimiters.
- Returns:
- torch.Tensor: attention mask restricted to tokens within each delimited span.
- torch.Tensor: position ids that restart from 0 inside each span.
- list[torch.Tensor]: per batch element, boolean masks mapping each category span to its tokens.
- """
- input_ids = tokenized["input_ids"]
- bs, num_token = input_ids.shape
- # special_tokens_mask: bs, num_token. 1 for special tokens. 0 for normal tokens
- special_tokens_mask = torch.zeros((bs, num_token), device=input_ids.device).bool()
- for special_token in special_tokens_list:
- special_tokens_mask |= input_ids == special_token
-
- # idxs: each row is a list of indices of special tokens
- idxs = torch.nonzero(special_tokens_mask)
-
- # generate attention mask and positional ids
- attention_mask = (
- torch.eye(num_token, device=input_ids.device).bool().unsqueeze(0).repeat(bs, 1, 1)
- )
- position_ids = torch.zeros((bs, num_token), device=input_ids.device)
- cate_to_token_mask_list = [[] for _ in range(bs)]
- previous_col = 0
- for i in range(idxs.shape[0]):
- row, col = idxs[i]
- if (col == 0) or (col == num_token - 1):
- attention_mask[row, col, col] = True
- position_ids[row, col] = 0
- else:
- attention_mask[row, previous_col + 1 : col + 1, previous_col + 1 : col + 1] = True
- position_ids[row, previous_col + 1 : col + 1] = torch.arange(
- 0, col - previous_col, device=input_ids.device
- )
- c2t_maski = torch.zeros((num_token), device=input_ids.device).bool()
- c2t_maski[previous_col + 1 : col] = True
- cate_to_token_mask_list[row].append(c2t_maski)
- previous_col = col
-
- cate_to_token_mask_list = [
- torch.stack(cate_to_token_mask_listi, dim=0)
- for cate_to_token_mask_listi in cate_to_token_mask_list
- ]
-
- # # padding mask
- # padding_mask = tokenized['attention_mask']
- # attention_mask = attention_mask & padding_mask.unsqueeze(1).bool() & padding_mask.unsqueeze(2).bool()
-
- return attention_mask, position_ids.to(torch.long), cate_to_token_mask_list
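For orientation, here is a minimal sketch of how the span-mask helper defined above is typically driven. It assumes a Hugging Face BERT tokenizer and uses "." plus the [CLS]/[SEP] tokens as span delimiters; the caption and the exact delimiter set are illustrative assumptions, not taken from this file.

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
captions = ["a cat . a remote control ."]
tokenized = tokenizer(captions, padding="longest", return_tensors="pt")

# Token ids treated as span delimiters (assumed set; adjust to the model's actual config).
special_ids = [
    tokenizer.cls_token_id,
    tokenizer.sep_token_id,
    tokenizer.convert_tokens_to_ids("."),
]

attn_mask, pos_ids = generate_masks_with_special_tokens(tokenized, special_ids, tokenizer)
# attn_mask: [bs, num_token, num_token] bool, True only within each delimited span
# pos_ids:   [bs, num_token] positions that restart at 0 after every delimiter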
diff --git a/spaces/segments/panoptic-segment-anything-api/GroundingDINO/groundingdino/util/slio.py b/spaces/segments/panoptic-segment-anything-api/GroundingDINO/groundingdino/util/slio.py
deleted file mode 100644
index 72c1f0f7b82cdc931d381feef64fe15815ba657e..0000000000000000000000000000000000000000
--- a/spaces/segments/panoptic-segment-anything-api/GroundingDINO/groundingdino/util/slio.py
+++ /dev/null
@@ -1,177 +0,0 @@
-# ==========================================================
-# Modified from mmcv
-# ==========================================================
-
-import json
-import pickle
-from abc import ABCMeta, abstractmethod
-from pathlib import Path
-
-import yaml
-
-try:
- from yaml import CLoader as Loader, CDumper as Dumper
-except ImportError:
- from yaml import Loader, Dumper
-
-
-# ===========================
- # Register handlers
-# ===========================
-
-
-class BaseFileHandler(metaclass=ABCMeta):
- @abstractmethod
- def load_from_fileobj(self, file, **kwargs):
- pass
-
- @abstractmethod
- def dump_to_fileobj(self, obj, file, **kwargs):
- pass
-
- @abstractmethod
- def dump_to_str(self, obj, **kwargs):
- pass
-
- def load_from_path(self, filepath, mode="r", **kwargs):
- with open(filepath, mode) as f:
- return self.load_from_fileobj(f, **kwargs)
-
- def dump_to_path(self, obj, filepath, mode="w", **kwargs):
- with open(filepath, mode) as f:
- self.dump_to_fileobj(obj, f, **kwargs)
-
-
-class JsonHandler(BaseFileHandler):
- def load_from_fileobj(self, file):
- return json.load(file)
-
- def dump_to_fileobj(self, obj, file, **kwargs):
- json.dump(obj, file, **kwargs)
-
- def dump_to_str(self, obj, **kwargs):
- return json.dumps(obj, **kwargs)
-
-
-class PickleHandler(BaseFileHandler):
- def load_from_fileobj(self, file, **kwargs):
- return pickle.load(file, **kwargs)
-
- def load_from_path(self, filepath, **kwargs):
- return super(PickleHandler, self).load_from_path(filepath, mode="rb", **kwargs)
-
- def dump_to_str(self, obj, **kwargs):
- kwargs.setdefault("protocol", 2)
- return pickle.dumps(obj, **kwargs)
-
- def dump_to_fileobj(self, obj, file, **kwargs):
- kwargs.setdefault("protocol", 2)
- pickle.dump(obj, file, **kwargs)
-
- def dump_to_path(self, obj, filepath, **kwargs):
- super(PickleHandler, self).dump_to_path(obj, filepath, mode="wb", **kwargs)
-
-
-class YamlHandler(BaseFileHandler):
- def load_from_fileobj(self, file, **kwargs):
- kwargs.setdefault("Loader", Loader)
- return yaml.load(file, **kwargs)
-
- def dump_to_fileobj(self, obj, file, **kwargs):
- kwargs.setdefault("Dumper", Dumper)
- yaml.dump(obj, file, **kwargs)
-
- def dump_to_str(self, obj, **kwargs):
- kwargs.setdefault("Dumper", Dumper)
- return yaml.dump(obj, **kwargs)
-
-
-file_handlers = {
- "json": JsonHandler(),
- "yaml": YamlHandler(),
- "yml": YamlHandler(),
- "pickle": PickleHandler(),
- "pkl": PickleHandler(),
-}
-
-# ===========================
-# load and dump
-# ===========================
-
-
-def is_str(x):
- """Whether the input is a string instance.
-
- Note: This method is deprecated since Python 2 is no longer supported.
- """
- return isinstance(x, str)
-
-
-def slload(file, file_format=None, **kwargs):
- """Load data from json/yaml/pickle files.
-
- This method provides a unified api for loading data from serialized files.
-
- Args:
- file (str or :obj:`Path` or file-like object): Filename or a file-like
- object.
- file_format (str, optional): If not specified, the file format will be
- inferred from the file extension, otherwise use the specified one.
- Currently supported formats include "json", "yaml/yml" and
- "pickle/pkl".
-
- Returns:
- The content from the file.
- """
- if isinstance(file, Path):
- file = str(file)
- if file_format is None and is_str(file):
- file_format = file.split(".")[-1]
- if file_format not in file_handlers:
- raise TypeError(f"Unsupported format: {file_format}")
-
- handler = file_handlers[file_format]
- if is_str(file):
- obj = handler.load_from_path(file, **kwargs)
- elif hasattr(file, "read"):
- obj = handler.load_from_fileobj(file, **kwargs)
- else:
- raise TypeError('"file" must be a filepath str or a file-object')
- return obj
-
-
-def sldump(obj, file=None, file_format=None, **kwargs):
- """Dump data to json/yaml/pickle strings or files.
-
- This method provides a unified api for dumping data as strings or to files,
- and also supports custom arguments for each file format.
-
- Args:
- obj (any): The python object to be dumped.
- file (str or :obj:`Path` or file-like object, optional): If not
- specified, then the object is dumped to a str, otherwise to a file
- specified by the filename or file-like object.
- file_format (str, optional): Same as :func:`slload`.
-
- Returns:
- str or None: The serialized string if ``file`` is None, otherwise ``None``.
- """
- if isinstance(file, Path):
- file = str(file)
- if file_format is None:
- if is_str(file):
- file_format = file.split(".")[-1]
- elif file is None:
- raise ValueError("file_format must be specified since file is None")
- if file_format not in file_handlers:
- raise TypeError(f"Unsupported format: {file_format}")
-
- handler = file_handlers[file_format]
- if file is None:
- return handler.dump_to_str(obj, **kwargs)
- elif is_str(file):
- handler.dump_to_path(obj, file, **kwargs)
- elif hasattr(file, "write"):
- handler.dump_to_fileobj(obj, file, **kwargs)
- else:
- raise TypeError('"file" must be a filename str or a file-object')
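A short usage sketch of the load/dump helpers above; the file path and config dict are made up for illustration.

import os
import tempfile

cfg = {"model": "groundingdino", "box_threshold": 0.35}

# Dump to a string: with no file given, the format must be stated explicitly.
json_str = sldump(cfg, file_format="json")

# Round-trip through a file: the format is inferred from the ".yaml" extension.
path = os.path.join(tempfile.gettempdir(), "config.yaml")
sldump(cfg, path)
assert slload(path) == cfg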
diff --git a/spaces/segments/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/bertwarper.py b/spaces/segments/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/bertwarper.py
deleted file mode 100644
index f0cf9779b270e1aead32845006f8b881fcba37ad..0000000000000000000000000000000000000000
--- a/spaces/segments/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/bertwarper.py
+++ /dev/null
@@ -1,273 +0,0 @@
-# ------------------------------------------------------------------------
-# Grounding DINO
-# url: https://github.com/IDEA-Research/GroundingDINO
-# Copyright (c) 2023 IDEA. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-
-import torch
-import torch.nn.functional as F
-import torch.utils.checkpoint as checkpoint
-from torch import Tensor, nn
-from torchvision.ops.boxes import nms
-from transformers import BertConfig, BertModel, BertPreTrainedModel
-from transformers.modeling_outputs import BaseModelOutputWithPoolingAndCrossAttentions
-
-
-class BertModelWarper(nn.Module):
- def __init__(self, bert_model):
- super().__init__()
- # self.bert = bert_modelc
-
- self.config = bert_model.config
- self.embeddings = bert_model.embeddings
- self.encoder = bert_model.encoder
- self.pooler = bert_model.pooler
-
- self.get_extended_attention_mask = bert_model.get_extended_attention_mask
- self.invert_attention_mask = bert_model.invert_attention_mask
- self.get_head_mask = bert_model.get_head_mask
-
- def forward(
- self,
- input_ids=None,
- attention_mask=None,
- token_type_ids=None,
- position_ids=None,
- head_mask=None,
- inputs_embeds=None,
- encoder_hidden_states=None,
- encoder_attention_mask=None,
- past_key_values=None,
- use_cache=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- ):
- r"""
- encoder_hidden_states (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`):
- Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
- the model is configured as a decoder.
- encoder_attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
- Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
- the cross-attention if the model is configured as a decoder. Mask values selected in ``[0, 1]``:
-
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
- past_key_values (:obj:`tuple(tuple(torch.FloatTensor))` of length :obj:`config.n_layers` with each tuple having 4 tensors of shape :obj:`(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
- Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
-
- If :obj:`past_key_values` are used, the user can optionally input only the last :obj:`decoder_input_ids`
- (those that don't have their past key value states given to this model) of shape :obj:`(batch_size, 1)`
- instead of all :obj:`decoder_input_ids` of shape :obj:`(batch_size, sequence_length)`.
- use_cache (:obj:`bool`, `optional`):
- If set to :obj:`True`, :obj:`past_key_values` key value states are returned and can be used to speed up
- decoding (see :obj:`past_key_values`).
- """
- output_attentions = (
- output_attentions if output_attentions is not None else self.config.output_attentions
- )
- output_hidden_states = (
- output_hidden_states
- if output_hidden_states is not None
- else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- if self.config.is_decoder:
- use_cache = use_cache if use_cache is not None else self.config.use_cache
- else:
- use_cache = False
-
- if input_ids is not None and inputs_embeds is not None:
- raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
- elif input_ids is not None:
- input_shape = input_ids.size()
- batch_size, seq_length = input_shape
- elif inputs_embeds is not None:
- input_shape = inputs_embeds.size()[:-1]
- batch_size, seq_length = input_shape
- else:
- raise ValueError("You have to specify either input_ids or inputs_embeds")
-
- device = input_ids.device if input_ids is not None else inputs_embeds.device
-
- # past_key_values_length
- past_key_values_length = (
- past_key_values[0][0].shape[2] if past_key_values is not None else 0
- )
-
- if attention_mask is None:
- attention_mask = torch.ones(
- ((batch_size, seq_length + past_key_values_length)), device=device
- )
- if token_type_ids is None:
- token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device)
-
- # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
- # ourselves in which case we just need to make it broadcastable to all heads.
- extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(
- attention_mask, input_shape, device
- )
-
- # If a 2D or 3D attention mask is provided for the cross-attention
- # we need to make it broadcastable to [batch_size, num_heads, seq_length, seq_length]
- if self.config.is_decoder and encoder_hidden_states is not None:
- encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size()
- encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)
- if encoder_attention_mask is None:
- encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device)
- encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask)
- else:
- encoder_extended_attention_mask = None
- # if os.environ.get('IPDB_SHILONG_DEBUG', None) == 'INFO':
- # import ipdb; ipdb.set_trace()
-
- # Prepare head mask if needed
- # 1.0 in head_mask indicates we keep the head
- # attention_probs has shape bsz x n_heads x N x N
- # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
- # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
- head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
-
- embedding_output = self.embeddings(
- input_ids=input_ids,
- position_ids=position_ids,
- token_type_ids=token_type_ids,
- inputs_embeds=inputs_embeds,
- past_key_values_length=past_key_values_length,
- )
-
- encoder_outputs = self.encoder(
- embedding_output,
- attention_mask=extended_attention_mask,
- head_mask=head_mask,
- encoder_hidden_states=encoder_hidden_states,
- encoder_attention_mask=encoder_extended_attention_mask,
- past_key_values=past_key_values,
- use_cache=use_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
- sequence_output = encoder_outputs[0]
- pooled_output = self.pooler(sequence_output) if self.pooler is not None else None
-
- if not return_dict:
- return (sequence_output, pooled_output) + encoder_outputs[1:]
-
- return BaseModelOutputWithPoolingAndCrossAttentions(
- last_hidden_state=sequence_output,
- pooler_output=pooled_output,
- past_key_values=encoder_outputs.past_key_values,
- hidden_states=encoder_outputs.hidden_states,
- attentions=encoder_outputs.attentions,
- cross_attentions=encoder_outputs.cross_attentions,
- )
-
-
-class TextEncoderShell(nn.Module):
- def __init__(self, text_encoder):
- super().__init__()
- self.text_encoder = text_encoder
- self.config = self.text_encoder.config
-
- def forward(self, **kw):
- # feed into text encoder
- return self.text_encoder(**kw)
-
-
-def generate_masks_with_special_tokens(tokenized, special_tokens_list, tokenizer):
- """Generate an attention mask between each pair of special tokens.
- Args:
- tokenized (dict): tokenizer output; its "input_ids" has shape [bs, num_token].
- special_tokens_list (list): token ids treated as span delimiters.
- Returns:
- torch.Tensor: attention mask restricted to tokens within each delimited span.
- torch.Tensor: position ids that restart from 0 inside each span.
- """
- input_ids = tokenized["input_ids"]
- bs, num_token = input_ids.shape
- # special_tokens_mask: bs, num_token. 1 for special tokens. 0 for normal tokens
- special_tokens_mask = torch.zeros((bs, num_token), device=input_ids.device).bool()
- for special_token in special_tokens_list:
- special_tokens_mask |= input_ids == special_token
-
- # idxs: each row is a list of indices of special tokens
- idxs = torch.nonzero(special_tokens_mask)
-
- # generate attention mask and positional ids
- attention_mask = (
- torch.eye(num_token, device=input_ids.device).bool().unsqueeze(0).repeat(bs, 1, 1)
- )
- position_ids = torch.zeros((bs, num_token), device=input_ids.device)
- previous_col = 0
- for i in range(idxs.shape[0]):
- row, col = idxs[i]
- if (col == 0) or (col == num_token - 1):
- attention_mask[row, col, col] = True
- position_ids[row, col] = 0
- else:
- attention_mask[row, previous_col + 1 : col + 1, previous_col + 1 : col + 1] = True
- position_ids[row, previous_col + 1 : col + 1] = torch.arange(
- 0, col - previous_col, device=input_ids.device
- )
-
- previous_col = col
-
- # # padding mask
- # padding_mask = tokenized['attention_mask']
- # attention_mask = attention_mask & padding_mask.unsqueeze(1).bool() & padding_mask.unsqueeze(2).bool()
-
- return attention_mask, position_ids.to(torch.long)
-
-
-def generate_masks_with_special_tokens_and_transfer_map(tokenized, special_tokens_list, tokenizer):
- """Generate an attention mask between each pair of special tokens, plus a category-to-token map.
- Args:
- tokenized (dict): tokenizer output; its "input_ids" has shape [bs, num_token].
- special_tokens_list (list): token ids treated as span delimiters.
- Returns:
- torch.Tensor: attention mask restricted to tokens within each delimited span.
- torch.Tensor: position ids that restart from 0 inside each span.
- list[torch.Tensor]: per batch element, boolean masks mapping each category span to its tokens.
- """
- input_ids = tokenized["input_ids"]
- bs, num_token = input_ids.shape
- # special_tokens_mask: bs, num_token. 1 for special tokens. 0 for normal tokens
- special_tokens_mask = torch.zeros((bs, num_token), device=input_ids.device).bool()
- for special_token in special_tokens_list:
- special_tokens_mask |= input_ids == special_token
-
- # idxs: each row is a list of indices of special tokens
- idxs = torch.nonzero(special_tokens_mask)
-
- # generate attention mask and positional ids
- attention_mask = (
- torch.eye(num_token, device=input_ids.device).bool().unsqueeze(0).repeat(bs, 1, 1)
- )
- position_ids = torch.zeros((bs, num_token), device=input_ids.device)
- cate_to_token_mask_list = [[] for _ in range(bs)]
- previous_col = 0
- for i in range(idxs.shape[0]):
- row, col = idxs[i]
- if (col == 0) or (col == num_token - 1):
- attention_mask[row, col, col] = True
- position_ids[row, col] = 0
- else:
- attention_mask[row, previous_col + 1 : col + 1, previous_col + 1 : col + 1] = True
- position_ids[row, previous_col + 1 : col + 1] = torch.arange(
- 0, col - previous_col, device=input_ids.device
- )
- c2t_maski = torch.zeros((num_token), device=input_ids.device).bool()
- c2t_maski[previous_col + 1 : col] = True
- cate_to_token_mask_list[row].append(c2t_maski)
- previous_col = col
-
- cate_to_token_mask_list = [
- torch.stack(cate_to_token_mask_listi, dim=0)
- for cate_to_token_mask_listi in cate_to_token_mask_list
- ]
-
- # # padding mask
- # padding_mask = tokenized['attention_mask']
- # attention_mask = attention_mask & padding_mask.unsqueeze(1).bool() & padding_mask.unsqueeze(2).bool()
-
- return attention_mask, position_ids.to(torch.long), cate_to_token_mask_list
diff --git a/spaces/sharathraju/489/README.md b/spaces/sharathraju/489/README.md
deleted file mode 100644
index ccb172dbf2822c958eaaf725c9691a697df2f669..0000000000000000000000000000000000000000
--- a/spaces/sharathraju/489/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: 489
-emoji: 📉
-colorFrom: blue
-colorTo: red
-sdk: streamlit
-sdk_version: 1.21.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/shengyi-qian/3DOI/monoarti/detr/misc.py b/spaces/shengyi-qian/3DOI/monoarti/detr/misc.py
deleted file mode 100644
index dc53c9d2ae0936f60960d5a3f13ba99e26635d97..0000000000000000000000000000000000000000
--- a/spaces/shengyi-qian/3DOI/monoarti/detr/misc.py
+++ /dev/null
@@ -1,468 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-"""
-Misc functions, including distributed helpers.
-
-Mostly copy-paste from torchvision references.
-"""
-import os
-import subprocess
-import time
-from collections import defaultdict, deque
-import datetime
-import pickle
-from packaging import version
-from typing import Optional, List
-
-import torch
-import torch.distributed as dist
-from torch import Tensor
-
-# needed due to empty tensor bug in pytorch and torchvision 0.5
-import torchvision
-if version.parse(torchvision.__version__) < version.parse('0.7'):
- from torchvision.ops import _new_empty_tensor
- from torchvision.ops.misc import _output_size
-
-
-class SmoothedValue(object):
- """Track a series of values and provide access to smoothed values over a
- window or the global series average.
- """
-
- def __init__(self, window_size=20, fmt=None):
- if fmt is None:
- fmt = "{median:.4f} ({global_avg:.4f})"
- self.deque = deque(maxlen=window_size)
- self.total = 0.0
- self.count = 0
- self.fmt = fmt
-
- def update(self, value, n=1):
- self.deque.append(value)
- self.count += n
- self.total += value * n
-
- def synchronize_between_processes(self):
- """
- Warning: does not synchronize the deque!
- """
- if not is_dist_avail_and_initialized():
- return
- t = torch.tensor([self.count, self.total], dtype=torch.float64, device='cuda')
- dist.barrier()
- dist.all_reduce(t)
- t = t.tolist()
- self.count = int(t[0])
- self.total = t[1]
-
- @property
- def median(self):
- d = torch.tensor(list(self.deque))
- return d.median().item()
-
- @property
- def avg(self):
- d = torch.tensor(list(self.deque), dtype=torch.float32)
- return d.mean().item()
-
- @property
- def global_avg(self):
- return self.total / self.count
-
- @property
- def max(self):
- return max(self.deque)
-
- @property
- def value(self):
- return self.deque[-1]
-
- def __str__(self):
- return self.fmt.format(
- median=self.median,
- avg=self.avg,
- global_avg=self.global_avg,
- max=self.max,
- value=self.value)
-
-
-def all_gather(data):
- """
- Run all_gather on arbitrary picklable data (not necessarily tensors)
- Args:
- data: any picklable object
- Returns:
- list[data]: list of data gathered from each rank
- """
- world_size = get_world_size()
- if world_size == 1:
- return [data]
-
- # serialized to a Tensor
- buffer = pickle.dumps(data)
- storage = torch.ByteStorage.from_buffer(buffer)
- tensor = torch.ByteTensor(storage).to("cuda")
-
- # obtain Tensor size of each rank
- local_size = torch.tensor([tensor.numel()], device="cuda")
- size_list = [torch.tensor([0], device="cuda") for _ in range(world_size)]
- dist.all_gather(size_list, local_size)
- size_list = [int(size.item()) for size in size_list]
- max_size = max(size_list)
-
- # receiving Tensor from all ranks
- # we pad the tensor because torch all_gather does not support
- # gathering tensors of different shapes
- tensor_list = []
- for _ in size_list:
- tensor_list.append(torch.empty((max_size,), dtype=torch.uint8, device="cuda"))
- if local_size != max_size:
- padding = torch.empty(size=(max_size - local_size,), dtype=torch.uint8, device="cuda")
- tensor = torch.cat((tensor, padding), dim=0)
- dist.all_gather(tensor_list, tensor)
-
- data_list = []
- for size, tensor in zip(size_list, tensor_list):
- buffer = tensor.cpu().numpy().tobytes()[:size]
- data_list.append(pickle.loads(buffer))
-
- return data_list
-
-
-def reduce_dict(input_dict, average=True):
- """
- Args:
- input_dict (dict): all the values will be reduced
- average (bool): whether to do average or sum
- Reduce the values in the dictionary from all processes so that all processes
- have the averaged results. Returns a dict with the same fields as
- input_dict, after reduction.
- """
- world_size = get_world_size()
- if world_size < 2:
- return input_dict
- with torch.no_grad():
- names = []
- values = []
- # sort the keys so that they are consistent across processes
- for k in sorted(input_dict.keys()):
- names.append(k)
- values.append(input_dict[k])
- values = torch.stack(values, dim=0)
- dist.all_reduce(values)
- if average:
- values /= world_size
- reduced_dict = {k: v for k, v in zip(names, values)}
- return reduced_dict
-
-
-class MetricLogger(object):
- def __init__(self, delimiter="\t"):
- self.meters = defaultdict(SmoothedValue)
- self.delimiter = delimiter
-
- def update(self, **kwargs):
- for k, v in kwargs.items():
- if isinstance(v, torch.Tensor):
- v = v.item()
- assert isinstance(v, (float, int))
- self.meters[k].update(v)
-
- def __getattr__(self, attr):
- if attr in self.meters:
- return self.meters[attr]
- if attr in self.__dict__:
- return self.__dict__[attr]
- raise AttributeError("'{}' object has no attribute '{}'".format(
- type(self).__name__, attr))
-
- def __str__(self):
- loss_str = []
- for name, meter in self.meters.items():
- loss_str.append(
- "{}: {}".format(name, str(meter))
- )
- return self.delimiter.join(loss_str)
-
- def synchronize_between_processes(self):
- for meter in self.meters.values():
- meter.synchronize_between_processes()
-
- def add_meter(self, name, meter):
- self.meters[name] = meter
-
- def log_every(self, iterable, print_freq, header=None):
- i = 0
- if not header:
- header = ''
- start_time = time.time()
- end = time.time()
- iter_time = SmoothedValue(fmt='{avg:.4f}')
- data_time = SmoothedValue(fmt='{avg:.4f}')
- space_fmt = ':' + str(len(str(len(iterable)))) + 'd'
- if torch.cuda.is_available():
- log_msg = self.delimiter.join([
- header,
- '[{0' + space_fmt + '}/{1}]',
- 'eta: {eta}',
- '{meters}',
- 'time: {time}',
- 'data: {data}',
- 'max mem: {memory:.0f}'
- ])
- else:
- log_msg = self.delimiter.join([
- header,
- '[{0' + space_fmt + '}/{1}]',
- 'eta: {eta}',
- '{meters}',
- 'time: {time}',
- 'data: {data}'
- ])
- MB = 1024.0 * 1024.0
- for obj in iterable:
- data_time.update(time.time() - end)
- yield obj
- iter_time.update(time.time() - end)
- if i % print_freq == 0 or i == len(iterable) - 1:
- eta_seconds = iter_time.global_avg * (len(iterable) - i)
- eta_string = str(datetime.timedelta(seconds=int(eta_seconds)))
- if torch.cuda.is_available():
- print(log_msg.format(
- i, len(iterable), eta=eta_string,
- meters=str(self),
- time=str(iter_time), data=str(data_time),
- memory=torch.cuda.max_memory_allocated() / MB))
- else:
- print(log_msg.format(
- i, len(iterable), eta=eta_string,
- meters=str(self),
- time=str(iter_time), data=str(data_time)))
- i += 1
- end = time.time()
- total_time = time.time() - start_time
- total_time_str = str(datetime.timedelta(seconds=int(total_time)))
- print('{} Total time: {} ({:.4f} s / it)'.format(
- header, total_time_str, total_time / len(iterable)))
-
-
-def get_sha():
- cwd = os.path.dirname(os.path.abspath(__file__))
-
- def _run(command):
- return subprocess.check_output(command, cwd=cwd).decode('ascii').strip()
- sha = 'N/A'
- diff = "clean"
- branch = 'N/A'
- try:
- sha = _run(['git', 'rev-parse', 'HEAD'])
- subprocess.check_output(['git', 'diff'], cwd=cwd)
- diff = _run(['git', 'diff-index', 'HEAD'])
- diff = "has uncommitted changes" if diff else "clean"
- branch = _run(['git', 'rev-parse', '--abbrev-ref', 'HEAD'])
- except Exception:
- pass
- message = f"sha: {sha}, status: {diff}, branch: {branch}"
- return message
-
-
-def collate_fn(batch):
- batch = list(zip(*batch))
- batch[0] = nested_tensor_from_tensor_list(batch[0])
- return tuple(batch)
-
-
-def _max_by_axis(the_list):
- # type: (List[List[int]]) -> List[int]
- maxes = the_list[0]
- for sublist in the_list[1:]:
- for index, item in enumerate(sublist):
- maxes[index] = max(maxes[index], item)
- return maxes
-
-
-class NestedTensor(object):
- def __init__(self, tensors, mask: Optional[Tensor]):
- self.tensors = tensors
- self.mask = mask
-
- def to(self, device):
- # type: (Device) -> NestedTensor # noqa
- cast_tensor = self.tensors.to(device)
- mask = self.mask
- if mask is not None:
- assert mask is not None
- cast_mask = mask.to(device)
- else:
- cast_mask = None
- return NestedTensor(cast_tensor, cast_mask)
-
- def decompose(self):
- return self.tensors, self.mask
-
- def __repr__(self):
- return str(self.tensors)
-
-
-def nested_tensor_from_tensor_list(tensor_list: List[Tensor]):
- # TODO make this more general
- if tensor_list[0].ndim == 3:
- if torchvision._is_tracing():
- # nested_tensor_from_tensor_list() does not export well to ONNX
- # call _onnx_nested_tensor_from_tensor_list() instead
- return _onnx_nested_tensor_from_tensor_list(tensor_list)
-
- # TODO make it support different-sized images
- max_size = _max_by_axis([list(img.shape) for img in tensor_list])
- # min_size = tuple(min(s) for s in zip(*[img.shape for img in tensor_list]))
- batch_shape = [len(tensor_list)] + max_size
- b, c, h, w = batch_shape
- dtype = tensor_list[0].dtype
- device = tensor_list[0].device
- tensor = torch.zeros(batch_shape, dtype=dtype, device=device)
- mask = torch.ones((b, h, w), dtype=torch.bool, device=device)
- for img, pad_img, m in zip(tensor_list, tensor, mask):
- pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)
- m[: img.shape[1], :img.shape[2]] = False
- else:
- raise ValueError('not supported')
- return NestedTensor(tensor, mask)
-
-
-# _onnx_nested_tensor_from_tensor_list() is an implementation of
-# nested_tensor_from_tensor_list() that is supported by ONNX tracing.
-@torch.jit.unused
-def _onnx_nested_tensor_from_tensor_list(tensor_list: List[Tensor]) -> NestedTensor:
- max_size = []
- for i in range(tensor_list[0].dim()):
- max_size_i = torch.max(torch.stack([img.shape[i] for img in tensor_list]).to(torch.float32)).to(torch.int64)
- max_size.append(max_size_i)
- max_size = tuple(max_size)
-
- # work around for
- # pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)
- # m[: img.shape[1], :img.shape[2]] = False
- # which is not yet supported in onnx
- padded_imgs = []
- padded_masks = []
- for img in tensor_list:
- padding = [(s1 - s2) for s1, s2 in zip(max_size, tuple(img.shape))]
- padded_img = torch.nn.functional.pad(img, (0, padding[2], 0, padding[1], 0, padding[0]))
- padded_imgs.append(padded_img)
-
- m = torch.zeros_like(img[0], dtype=torch.int, device=img.device)
- padded_mask = torch.nn.functional.pad(m, (0, padding[2], 0, padding[1]), "constant", 1)
- padded_masks.append(padded_mask.to(torch.bool))
-
- tensor = torch.stack(padded_imgs)
- mask = torch.stack(padded_masks)
-
- return NestedTensor(tensor, mask=mask)
-
-
-def setup_for_distributed(is_master):
- """
- This function disables printing when not in the master process
- """
- import builtins as __builtin__
- builtin_print = __builtin__.print
-
- def print(*args, **kwargs):
- force = kwargs.pop('force', False)
- if is_master or force:
- builtin_print(*args, **kwargs)
-
- __builtin__.print = print
-
-
-def is_dist_avail_and_initialized():
- if not dist.is_available():
- return False
- if not dist.is_initialized():
- return False
- return True
-
-
-def get_world_size():
- if not is_dist_avail_and_initialized():
- return 1
- return dist.get_world_size()
-
-
-def get_rank():
- if not is_dist_avail_and_initialized():
- return 0
- return dist.get_rank()
-
-
-def is_main_process():
- return get_rank() == 0
-
-
-def save_on_master(*args, **kwargs):
- if is_main_process():
- torch.save(*args, **kwargs)
-
-
-def init_distributed_mode(args):
- if 'RANK' in os.environ and 'WORLD_SIZE' in os.environ:
- args.rank = int(os.environ["RANK"])
- args.world_size = int(os.environ['WORLD_SIZE'])
- args.gpu = int(os.environ['LOCAL_RANK'])
- elif 'SLURM_PROCID' in os.environ:
- args.rank = int(os.environ['SLURM_PROCID'])
- args.gpu = args.rank % torch.cuda.device_count()
- else:
- print('Not using distributed mode')
- args.distributed = False
- return
-
- args.distributed = True
-
- torch.cuda.set_device(args.gpu)
- args.dist_backend = 'nccl'
- print('| distributed init (rank {}): {}'.format(
- args.rank, args.dist_url), flush=True)
- torch.distributed.init_process_group(backend=args.dist_backend, init_method=args.dist_url,
- world_size=args.world_size, rank=args.rank)
- torch.distributed.barrier()
- setup_for_distributed(args.rank == 0)
-
-
-@torch.no_grad()
-def accuracy(output, target, topk=(1,)):
- """Computes the precision@k for the specified values of k"""
- if target.numel() == 0:
- return [torch.zeros([], device=output.device)]
- maxk = max(topk)
- batch_size = target.size(0)
-
- _, pred = output.topk(maxk, 1, True, True)
- pred = pred.t()
- correct = pred.eq(target.view(1, -1).expand_as(pred))
-
- res = []
- for k in topk:
- correct_k = correct[:k].view(-1).float().sum(0)
- res.append(correct_k.mul_(100.0 / batch_size))
- return res
-
-
-def interpolate(input, size=None, scale_factor=None, mode="nearest", align_corners=None):
- # type: (Tensor, Optional[List[int]], Optional[float], str, Optional[bool]) -> Tensor
- """
- Equivalent to nn.functional.interpolate, but with support for empty batch sizes.
- This will eventually be supported natively by PyTorch, and this
- helper can go away.
- """
- if version.parse(torchvision.__version__) < version.parse('0.7'):
- if input.numel() > 0:
- return torch.nn.functional.interpolate(
- input, size, scale_factor, mode, align_corners
- )
-
- output_shape = _output_size(2, input, size, scale_factor)
- output_shape = list(input.shape[:-2]) + list(output_shape)
- return _new_empty_tensor(input, output_shape)
- else:
- return torchvision.ops.misc.interpolate(input, size, scale_factor, mode, align_corners)
\ No newline at end of file
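To make the batching utilities above concrete, here is a minimal sketch of padding two differently sized images into a NestedTensor; the image sizes are invented.

import torch

# Two images of different spatial sizes, each in (C, H, W) format.
imgs = [torch.rand(3, 200, 150), torch.rand(3, 180, 220)]

batch = nested_tensor_from_tensor_list(imgs)
tensors, mask = batch.decompose()
# tensors: (2, 3, 200, 220) zero-padded batch
# mask:    (2, 200, 220) bool, True where a pixel is padding rather than image content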
diff --git a/spaces/shikunl/prismer/prismer/experts/segmentation/mask2former/maskformer_model.py b/spaces/shikunl/prismer/prismer/experts/segmentation/mask2former/maskformer_model.py
deleted file mode 100644
index b9f2ebaedd1da408c92c87d7fa0dafabd00aa9d1..0000000000000000000000000000000000000000
--- a/spaces/shikunl/prismer/prismer/experts/segmentation/mask2former/maskformer_model.py
+++ /dev/null
@@ -1,381 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from typing import Tuple
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from detectron2.config import configurable
-from detectron2.data import MetadataCatalog
-from detectron2.modeling import META_ARCH_REGISTRY, build_backbone, build_sem_seg_head
-from detectron2.modeling.backbone import Backbone
-from detectron2.modeling.postprocessing import sem_seg_postprocess
-from detectron2.structures import Boxes, ImageList, Instances, BitMasks
-from detectron2.utils.memory import retry_if_cuda_oom
-
-from .modeling.criterion import SetCriterion
-from .modeling.matcher import HungarianMatcher
-
-
-@META_ARCH_REGISTRY.register()
-class MaskFormer(nn.Module):
- """
- Main class for mask classification semantic segmentation architectures.
- """
-
- @configurable
- def __init__(
- self,
- *,
- backbone: Backbone,
- sem_seg_head: nn.Module,
- criterion: nn.Module,
- num_queries: int,
- object_mask_threshold: float,
- overlap_threshold: float,
- metadata,
- size_divisibility: int,
- sem_seg_postprocess_before_inference: bool,
- pixel_mean: Tuple[float],
- pixel_std: Tuple[float],
- # inference
- semantic_on: bool,
- panoptic_on: bool,
- instance_on: bool,
- test_topk_per_image: int,
- ):
- """
- Args:
- backbone: a backbone module, must follow detectron2's backbone interface
- sem_seg_head: a module that predicts semantic segmentation from backbone features
- criterion: a module that defines the loss
- num_queries: int, number of queries
- object_mask_threshold: float, threshold to filter queries based on classification score
- for panoptic segmentation inference
- overlap_threshold: overlap threshold used in general inference for panoptic segmentation
- metadata: dataset meta, get `thing` and `stuff` category names for panoptic
- segmentation inference
- size_divisibility: Some backbones require the input height and width to be divisible by a
- specific integer. We can use this to override such requirement.
- sem_seg_postprocess_before_inference: whether to resize the prediction back
- to original input size before semantic segmentation inference or after.
- For high-resolution datasets like Mapillary, resizing predictions before
- inference will cause OOM errors.
- pixel_mean, pixel_std: list or tuple with #channels element, representing
- the per-channel mean and std to be used to normalize the input image
- semantic_on: bool, whether to output semantic segmentation prediction
- instance_on: bool, whether to output instance segmentation prediction
- panoptic_on: bool, whether to output panoptic segmentation prediction
- test_topk_per_image: int, instance segmentation parameter, keep topk instances per image
- """
- super().__init__()
- self.backbone = backbone
- self.sem_seg_head = sem_seg_head
- self.criterion = criterion
- self.num_queries = num_queries
- self.overlap_threshold = overlap_threshold
- self.object_mask_threshold = object_mask_threshold
- self.metadata = metadata
- if size_divisibility < 0:
- # use backbone size_divisibility if not set
- size_divisibility = self.backbone.size_divisibility
- self.size_divisibility = size_divisibility
- self.sem_seg_postprocess_before_inference = sem_seg_postprocess_before_inference
- self.register_buffer("pixel_mean", torch.Tensor(pixel_mean).view(-1, 1, 1), False)
- self.register_buffer("pixel_std", torch.Tensor(pixel_std).view(-1, 1, 1), False)
-
- # additional args
- self.semantic_on = semantic_on
- self.instance_on = instance_on
- self.panoptic_on = panoptic_on
- self.test_topk_per_image = test_topk_per_image
-
- if not self.semantic_on:
- assert self.sem_seg_postprocess_before_inference
-
- @classmethod
- def from_config(cls, cfg):
- backbone = build_backbone(cfg)
- sem_seg_head = build_sem_seg_head(cfg, backbone.output_shape())
-
- # Loss parameters:
- deep_supervision = cfg.MODEL.MASK_FORMER.DEEP_SUPERVISION
- no_object_weight = cfg.MODEL.MASK_FORMER.NO_OBJECT_WEIGHT
-
- # loss weights
- class_weight = cfg.MODEL.MASK_FORMER.CLASS_WEIGHT
- dice_weight = cfg.MODEL.MASK_FORMER.DICE_WEIGHT
- mask_weight = cfg.MODEL.MASK_FORMER.MASK_WEIGHT
-
- # building criterion
- matcher = HungarianMatcher(
- cost_class=class_weight,
- cost_mask=mask_weight,
- cost_dice=dice_weight,
- num_points=cfg.MODEL.MASK_FORMER.TRAIN_NUM_POINTS,
- )
-
- weight_dict = {"loss_ce": class_weight, "loss_mask": mask_weight, "loss_dice": dice_weight}
-
- if deep_supervision:
- dec_layers = cfg.MODEL.MASK_FORMER.DEC_LAYERS
- aux_weight_dict = {}
- for i in range(dec_layers - 1):
- aux_weight_dict.update({k + f"_{i}": v for k, v in weight_dict.items()})
- weight_dict.update(aux_weight_dict)
-
- losses = ["labels", "masks"]
-
- criterion = SetCriterion(
- sem_seg_head.num_classes,
- matcher=matcher,
- weight_dict=weight_dict,
- eos_coef=no_object_weight,
- losses=losses,
- num_points=cfg.MODEL.MASK_FORMER.TRAIN_NUM_POINTS,
- oversample_ratio=cfg.MODEL.MASK_FORMER.OVERSAMPLE_RATIO,
- importance_sample_ratio=cfg.MODEL.MASK_FORMER.IMPORTANCE_SAMPLE_RATIO,
- )
-
- return {
- "backbone": backbone,
- "sem_seg_head": sem_seg_head,
- "criterion": criterion,
- "num_queries": cfg.MODEL.MASK_FORMER.NUM_OBJECT_QUERIES,
- "object_mask_threshold": cfg.MODEL.MASK_FORMER.TEST.OBJECT_MASK_THRESHOLD,
- "overlap_threshold": cfg.MODEL.MASK_FORMER.TEST.OVERLAP_THRESHOLD,
- "metadata": MetadataCatalog.get(cfg.DATASETS.TRAIN[0]),
- "size_divisibility": cfg.MODEL.MASK_FORMER.SIZE_DIVISIBILITY,
- "sem_seg_postprocess_before_inference": (
- cfg.MODEL.MASK_FORMER.TEST.SEM_SEG_POSTPROCESSING_BEFORE_INFERENCE
- or cfg.MODEL.MASK_FORMER.TEST.PANOPTIC_ON
- or cfg.MODEL.MASK_FORMER.TEST.INSTANCE_ON
- ),
- "pixel_mean": cfg.MODEL.PIXEL_MEAN,
- "pixel_std": cfg.MODEL.PIXEL_STD,
- # inference
- "semantic_on": cfg.MODEL.MASK_FORMER.TEST.SEMANTIC_ON,
- "instance_on": cfg.MODEL.MASK_FORMER.TEST.INSTANCE_ON,
- "panoptic_on": cfg.MODEL.MASK_FORMER.TEST.PANOPTIC_ON,
- "test_topk_per_image": cfg.TEST.DETECTIONS_PER_IMAGE,
- }
-
- @property
- def device(self):
- return self.pixel_mean.device
-
- def forward(self, batched_inputs):
- """
- Args:
- batched_inputs: a list, batched outputs of :class:`DatasetMapper`.
- Each item in the list contains the inputs for one image.
- For now, each item in the list is a dict that contains:
- * "image": Tensor, image in (C, H, W) format.
- * "instances": per-region ground truth
- * Other information that's included in the original dicts, such as:
- "height", "width" (int): the output resolution of the model (may be different
- from input resolution), used in inference.
- Returns:
- list[dict]:
- each dict has the results for one image. The dict contains the following keys:
-
- * "sem_seg":
- A Tensor that represents the
- per-pixel segmentation predicted by the head.
- The prediction has shape KxHxW that represents the logits of
- each class for each pixel.
- * "panoptic_seg":
- A tuple that represents the panoptic output
- panoptic_seg (Tensor): of shape (height, width) where the values are ids for each segment.
- segments_info (list[dict]): Describe each segment in `panoptic_seg`.
- Each dict contains keys "id", "category_id", "isthing".
- """
- images = [x["image"].to(self.device) for x in batched_inputs]
- images = [(x - self.pixel_mean) / self.pixel_std for x in images]
- images = ImageList.from_tensors(images, self.size_divisibility)
-
- features = self.backbone(images.tensor)
- outputs = self.sem_seg_head(features)
-
- if self.training:
- # mask classification target
- if "instances" in batched_inputs[0]:
- gt_instances = [x["instances"].to(self.device) for x in batched_inputs]
- targets = self.prepare_targets(gt_instances, images)
- else:
- targets = None
-
- # bipartite matching-based loss
- losses = self.criterion(outputs, targets)
-
- for k in list(losses.keys()):
- if k in self.criterion.weight_dict:
- losses[k] *= self.criterion.weight_dict[k]
- else:
- # remove this loss if not specified in `weight_dict`
- losses.pop(k)
- return losses
- else:
- mask_cls_results = outputs["pred_logits"]
- mask_pred_results = outputs["pred_masks"]
- # upsample masks
- mask_pred_results = F.interpolate(
- mask_pred_results,
- size=(images.tensor.shape[-2], images.tensor.shape[-1]),
- mode="bilinear",
- align_corners=False,
- )
-
- del outputs
-
- processed_results = []
- for mask_cls_result, mask_pred_result, input_per_image, image_size in zip(
- mask_cls_results, mask_pred_results, batched_inputs, images.image_sizes
- ):
- height = input_per_image.get("height", image_size[0])
- width = input_per_image.get("width", image_size[1])
- processed_results.append({})
-
- if self.sem_seg_postprocess_before_inference:
- mask_pred_result = retry_if_cuda_oom(sem_seg_postprocess)(
- mask_pred_result, image_size, height, width
- )
- mask_cls_result = mask_cls_result.to(mask_pred_result)
-
- # semantic segmentation inference
- if self.semantic_on:
- r = retry_if_cuda_oom(self.semantic_inference)(mask_cls_result, mask_pred_result)
- if not self.sem_seg_postprocess_before_inference:
- r = retry_if_cuda_oom(sem_seg_postprocess)(r, image_size, height, width)
- processed_results[-1]["sem_seg"] = r
-
- # panoptic segmentation inference
- if self.panoptic_on:
- panoptic_r = retry_if_cuda_oom(self.panoptic_inference)(mask_cls_result, mask_pred_result)
- processed_results[-1]["panoptic_seg"] = panoptic_r
-
- # instance segmentation inference
- if self.instance_on:
- instance_r = retry_if_cuda_oom(self.instance_inference)(mask_cls_result, mask_pred_result)
- processed_results[-1]["instances"] = instance_r
-
- return processed_results
-
- def prepare_targets(self, targets, images):
- h_pad, w_pad = images.tensor.shape[-2:]
- new_targets = []
- for targets_per_image in targets:
- # pad gt
- gt_masks = targets_per_image.gt_masks
- padded_masks = torch.zeros((gt_masks.shape[0], h_pad, w_pad), dtype=gt_masks.dtype, device=gt_masks.device)
- padded_masks[:, : gt_masks.shape[1], : gt_masks.shape[2]] = gt_masks
- new_targets.append(
- {
- "labels": targets_per_image.gt_classes,
- "masks": padded_masks,
- }
- )
- return new_targets
-
- def semantic_inference(self, mask_cls, mask_pred):
- mask_cls = F.softmax(mask_cls, dim=-1)[..., :-1]
- mask_pred = mask_pred.sigmoid()
- semseg = torch.einsum("qc,qhw->chw", mask_cls, mask_pred)
- return semseg
-
- def panoptic_inference(self, mask_cls, mask_pred):
- scores, labels = F.softmax(mask_cls, dim=-1).max(-1)
- mask_pred = mask_pred.sigmoid()
-
- keep = labels.ne(self.sem_seg_head.num_classes) & (scores > self.object_mask_threshold)
- cur_scores = scores[keep]
- cur_classes = labels[keep]
- cur_masks = mask_pred[keep]
- cur_mask_cls = mask_cls[keep]
- cur_mask_cls = cur_mask_cls[:, :-1]
-
- cur_prob_masks = cur_scores.view(-1, 1, 1) * cur_masks
-
- h, w = cur_masks.shape[-2:]
- panoptic_seg = torch.zeros((h, w), dtype=torch.int32, device=cur_masks.device)
- segments_info = []
-
- current_segment_id = 0
-
- if cur_masks.shape[0] == 0:
- # We didn't detect any mask :(
- return panoptic_seg, segments_info
- else:
- # take argmax
- cur_mask_ids = cur_prob_masks.argmax(0)
- stuff_memory_list = {}
- for k in range(cur_classes.shape[0]):
- pred_class = cur_classes[k].item()
- isthing = pred_class in self.metadata.thing_dataset_id_to_contiguous_id.values()
- mask_area = (cur_mask_ids == k).sum().item()
- original_area = (cur_masks[k] >= 0.5).sum().item()
- mask = (cur_mask_ids == k) & (cur_masks[k] >= 0.5)
-
- if mask_area > 0 and original_area > 0 and mask.sum().item() > 0:
- if mask_area / original_area < self.overlap_threshold:
- continue
-
- # merge stuff regions
- if not isthing:
- if int(pred_class) in stuff_memory_list.keys():
- panoptic_seg[mask] = stuff_memory_list[int(pred_class)]
- continue
- else:
- stuff_memory_list[int(pred_class)] = current_segment_id + 1
-
- current_segment_id += 1
- panoptic_seg[mask] = current_segment_id
-
- segments_info.append(
- {
- "id": current_segment_id,
- "isthing": bool(isthing),
- "category_id": int(pred_class),
- }
- )
-
- return panoptic_seg, segments_info
-
- def instance_inference(self, mask_cls, mask_pred):
- # mask_pred is already processed to have the same shape as original input
- image_size = mask_pred.shape[-2:]
-
- # [Q, K]
- scores = F.softmax(mask_cls, dim=-1)[:, :-1]
- labels = torch.arange(self.sem_seg_head.num_classes, device=self.device).unsqueeze(0).repeat(self.num_queries, 1).flatten(0, 1)
- # scores_per_image, topk_indices = scores.flatten(0, 1).topk(self.num_queries, sorted=False)
- scores_per_image, topk_indices = scores.flatten(0, 1).topk(self.test_topk_per_image, sorted=False)
- labels_per_image = labels[topk_indices]
-
- topk_indices = torch.div(topk_indices, self.sem_seg_head.num_classes, rounding_mode='trunc')
-
- # mask_pred = mask_pred.unsqueeze(1).repeat(1, self.sem_seg_head.num_classes, 1).flatten(0, 1)
- mask_pred = mask_pred[topk_indices]
-
- # if this is panoptic segmentation, we only keep the "thing" classes
- if self.panoptic_on:
- keep = torch.zeros_like(scores_per_image).bool()
- for i, lab in enumerate(labels_per_image):
- keep[i] = lab in self.metadata.thing_dataset_id_to_contiguous_id.values()
-
- scores_per_image = scores_per_image[keep]
- labels_per_image = labels_per_image[keep]
- mask_pred = mask_pred[keep]
-
- result = Instances(image_size)
- # mask (before sigmoid)
- result.pred_masks = (mask_pred > 0).float()
- result.pred_boxes = Boxes(torch.zeros(mask_pred.size(0), 4))
- # Uncomment the following to get boxes from masks (this is slow)
- # result.pred_boxes = BitMasks(mask_pred > 0).get_bounding_boxes()
-
- # calculate average mask prob
- mask_scores_per_image = (mask_pred.sigmoid().flatten(1) * result.pred_masks.flatten(1)).sum(1) / (result.pred_masks.flatten(1).sum(1) + 1e-6)
- result.scores = scores_per_image * mask_scores_per_image
- result.pred_classes = labels_per_image
- return result
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Enjoy the Ultimate GTA Experience with Grand Theft Auto The Trilogy - The Definitive Edition APK OBB for Android.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Enjoy the Ultimate GTA Experience with Grand Theft Auto The Trilogy - The Definitive Edition APK OBB for Android.md
deleted file mode 100644
index ab250b7cafadcd185a963e96964999ee272d433d..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Enjoy the Ultimate GTA Experience with Grand Theft Auto The Trilogy - The Definitive Edition APK OBB for Android.md
+++ /dev/null
@@ -1,139 +0,0 @@
-
-Grand Theft Auto: The Trilogy - The Definitive Edition APK OBB Download
-If you are a fan of the Grand Theft Auto (GTA) series, you must have heard of the latest release from Rockstar Games: Grand Theft Auto: The Trilogy - The Definitive Edition. This is a remastered collection of three classic GTA games: GTA III, GTA Vice City, and GTA San Andreas. In this article, we will tell you everything you need to know about this definitive edition, and how to download and install it on your Android devices or PC with GameLoop.
-grand theft auto the trilogy – the definitive edition apk obb download
Download 🆗 https://ssurll.com/2uNRyj
- What is Grand Theft Auto: The Trilogy - The Definitive Edition?
-Grand Theft Auto: The Trilogy - The Definitive Edition is a bundle of three iconic GTA games that have been remastered with 4K ultra HD graphics, enhanced lighting, improved gameplay, and new features. You can replay these games with a modern look and feel, while still enjoying the original stories, characters, and missions.
- A remastered collection of three classic GTA games
-The definitive edition includes the following three games:
-
-- GTA III: The game that started it all, GTA III is set in the fictional city of Liberty City, where you play as a silent protagonist who gets involved in a criminal underworld after being betrayed by his girlfriend.
-- GTA Vice City: Inspired by the 1980s Miami, GTA Vice City follows the story of Tommy Vercetti, a former mobster who seeks to build his own empire in the city of Vice City.
-- GTA San Andreas: The most expansive and ambitious GTA game ever made, GTA San Andreas takes place in the state of San Andreas, which consists of three cities: Los Santos, San Fierro, and Las Venturas. You play as Carl Johnson, who returns to his hometown after his mother's death and gets involved in a gang war.
-
- The features and improvements of the definitive edition
-The definitive edition brings several enhancements and additions to these games, such as:
-
-- 4K ultra HD graphics: The games have been updated with high-resolution textures, models, and effects, making them look stunning on modern devices.
-- Enhanced lighting: The games have been improved with dynamic shadows, reflections, and weather effects, creating a more realistic and immersive atmosphere.
-- Improved gameplay: The games have been optimized with smoother controls, better camera angles, faster loading times, and bug fixes.
-- New features: The games have been added with new content, such as achievements, trophies, radio stations, songs, vehicles, weapons, and more.
-
- The system requirements and compatibility of the definitive edition
-The definitive edition is compatible with various platforms, such as Android, iOS, Windows PC, PlayStation 4, PlayStation 5, Xbox One, Xbox Series X/S, and Nintendo Switch. However, the system requirements may vary depending on the device and the platform. Here are the minimum and recommended system requirements for Android and PC devices:
-
-| Device | Minimum Requirements | Recommended Requirements |
-| --- | --- | --- |
-| Android | Android 8.0 or higher; 3 GB of RAM; 8 GB of free storage space; Adreno 530 or Mali-G71 MP20 GPU or higher | Android 10 or higher; 4 GB of RAM; 10 GB of free storage space; Adreno 640 or Mali-G76 MP10 GPU or higher |
-| PC | Windows 10 64-bit; Intel Core i5-4460 or AMD FX-6300 processor; 8 GB of RAM; NVIDIA GeForce GTX 760 or AMD Radeon R9 280X graphics card; DirectX 11; 45 GB of free storage space | Windows 10 64-bit; Intel Core i7-7700K or AMD Ryzen 5 1600 processor; 16 GB of RAM; NVIDIA GeForce GTX 1060 or AMD Radeon RX 580 graphics card; DirectX 12; 55 GB of free storage space |
-
- How to download and install Grand Theft Auto: The Trilogy - The Definitive Edition APK OBB on Android devices?
-If you want to play Grand Theft Auto: The Trilogy - The Definitive Edition on your Android device, you will need to download and install the APK and OBB files from a trusted source. Here are the steps to do so:
- Step 1: Download the APK and OBB files from a trusted source
-You can find many websites that offer the APK and OBB files for Grand Theft Auto: The Trilogy - The Definitive Edition, but not all of them are safe and reliable. Some of them may contain viruses, malware, or fake files that can harm your device or steal your data. Therefore, you should only download the files from a trusted source, such as [APKPure] or [APKMody]. These websites provide the latest and verified versions of the files, as well as detailed instructions on how to install them.
- Step 2: Enable the installation of unknown sources on your device
-Before you can install the APK file, you need to enable the installation of unknown sources on your device. This will allow you to install apps that are not from the Google Play Store. To do this, follow these steps:
-
-- Go to your device's Settings and tap on Security.
-- Find the option that says Unknown Sources or Install Unknown Apps and toggle it on.
-- A warning message will appear, telling you that installing unknown apps may harm your device. Tap on OK to confirm.
-
- Step 3: Install the APK file and extract the OBB file to the appropriate folder
-After you have enabled the installation of unknown sources, you can proceed to install the APK file and extract the OBB file to the appropriate folder. To do this, follow these steps:
-
-- Locate the downloaded APK file on your device's file manager and tap on it.
-- A prompt will appear, asking you if you want to install this application. Tap on Install and wait for the installation to complete.
-- Locate the downloaded OBB file on your device's file manager and tap on it.
-- A prompt will appear, asking you if you want to extract this file. Tap on Extract and choose a destination folder.
-- The destination folder should be Android/obb/com.rockstargames.gtadefinitiveedition. If this folder does not exist, create it manually.
-- Wait for the extraction to complete and make sure that the OBB file is in the correct folder.
-
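-If you prefer handling the file transfer from a PC over USB, here is a rough, optional Python sketch of the same folder setup using adb (the local OBB filename is an assumption, and adb with USB debugging is required; the on-device steps above are sufficient on their own):
-import subprocess
-
-OBB_LOCAL = "com.rockstargames.gtadefinitiveedition.obb"  # assumed name of the downloaded OBB file
-OBB_DIR = "/sdcard/Android/obb/com.rockstargames.gtadefinitiveedition"
-
-# Create the OBB folder on the device, then copy the OBB file into it.
-subprocess.run(["adb", "shell", "mkdir", "-p", OBB_DIR], check=True)
-subprocess.run(["adb", "push", OBB_LOCAL, OBB_DIR + "/"], check=True)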
- Step 4: Launch the game and enjoy the definitive GTA experience
-Now that you have installed the APK file and extracted the OBB file to the appropriate folder, you can launch the game and enjoy the definitive GTA experience. To do this, follow these steps:
-
-- Go to your device's app drawer and find the icon for Grand Theft Auto: The Trilogy - The Definitive Edition. Tap on it to launch the game.
-- The game will ask you for some permissions, such as storage, network, and location. Grant these permissions to ensure the game runs smoothly.
-- The game will also ask you to choose the language and the graphics quality. Select the options that suit your preferences and your device's capabilities.
-- The game will then load the main menu, where you can choose which game you want to play: GTA III, GTA Vice City, or GTA San Andreas. Tap on the game you want to play and start your adventure.
-
- How to download and install Grand Theft Auto: The Trilogy - The Definitive Edition on PC with GameLoop?
-If you want to play Grand Theft Auto: The Trilogy - The Definitive Edition on your PC, you will need to download and install it with GameLoop, which is an official emulator from Tencent that allows you to play Android games on your PC. Here are the steps to do so:
- Step 1: Download and install GameLoop on your PC
-You can download GameLoop from its official website: [GameLoop]. Once you have downloaded the installer, run it and follow the instructions to install GameLoop on your PC.
- Step 2: Search for Grand Theft Auto: The Trilogy - The Definitive Edition in the GameLoop library or search results
-After you have installed GameLoop, launch it and go to the Game Center tab. You will see a list of games that are available for download and play on GameLoop. You can either scroll down to find Grand Theft Auto: The Trilogy - The Definitive Edition, or use the search bar to type in the name of the game.
- Step 3: Click on the download button and wait for the installation to complete
-Once you have found Grand Theft Auto: The Trilogy - The Definitive Edition, click on the download button and wait for the game to be downloaded and installed on your PC. This may take some time depending on your internet speed and PC performance.
- Step 4: Launch the game and customize the settings according to your preferences
-After the game has been installed, you can launch it by clicking on the play button. You will see a pop-up window that shows you the keyboard and mouse controls for the game. You can customize these controls according to your preferences by clicking on the settings icon. You can also adjust the graphics quality, sound volume, language, and other options in the game's settings menu.
- Conclusion
-Grand Theft Auto: The Trilogy - The Definitive Edition is a must-have for any GTA fan who wants to relive the classic GTA games with a modern twist. You can download and install this game on your Android devices or PC with GameLoop by following the steps we have provided in this article. We hope you enjoy playing this game as much as we do!
- Frequently Asked Questions
-
-- Q: How much does Grand Theft Auto: The Trilogy - The Definitive Edition cost?
A: Grand Theft Auto: The Trilogy - The Definitive Edition costs $59.99 USD for all platforms. However, you may find some discounts or offers from different sources.
-- Q: Can I play Grand Theft Auto: The Trilogy - The Definitive Edition online with other players?
A: No, Grand Theft Auto: The Trilogy - The Definitive Edition does not have an online multiplayer mode. It is a single-player game only.
-- Q: Can I transfer my save data from the original GTA games to the definitive edition?
A: No, you cannot transfer your save data from the original GTA games to the definitive edition. You will have to start a new game from scratch.
-- Q: What are the differences between Grand Theft Auto: The Trilogy - The Definitive Edition and Grand Theft Auto V?
A: Grand Theft Auto V is a newer and more advanced GTA game that has a different story, setting, gameplay, and features than Grand Theft Auto: The Trilogy - The Definitive Edition. Grand Theft Auto V also has an online multiplayer mode called Grand Theft Auto Online, which is not available in Grand Theft Auto: The Trilogy - The Definitive Edition.
-- Q: Is Grand Theft Auto: The Trilogy - The Definitive Edition suitable for children?
A: No, Grand Theft Auto: The Trilogy - The Definitive Edition is not suitable for children. It is rated M for Mature by ESRB, which means it contains violence, blood and gore, sexual content, nudity, strong language, use of drugs and alcohol, and other mature themes.
-
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Explore the World with Your Peridot APK - Download Now for Free.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Explore the World with Your Peridot APK - Download Now for Free.md
deleted file mode 100644
index 0c57162ca1720a5284a35de4047d054f340a1763..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Explore the World with Your Peridot APK - Download Now for Free.md
+++ /dev/null
@@ -1,93 +0,0 @@
-
-Peridot Apkpure: A New Virtual Pet Game for Android
-If you are looking for a new and exciting game to play on your Android device, you might want to check out Peridot Apkpure. This game lets you adopt and care for one-of-a-kind virtual pets that you can explore the world with. In this article, we will tell you what Peridot Apkpure is, how to play it, and why you should try it.
-peridot apkpure
Download Zip ✦ https://ssurll.com/2uNRTU
-What is Peridot Apkpure?
-Peridot Apkpure is a simulation game that lets you adopt and care for one-of-a-kind virtual pets. These pets are called peridots, and they are based on the yellow-green gemstone of the same name. Peridots are cute and colorful creatures that have different shapes and personalities. You can choose your peridot from a variety of options and customize its appearance and accessories.
-Peridot Apkpure is developed by Niantic, Inc., the creators of Pokemon Go and Harry Potter: Wizards Unite. Niantic is known for making games that use augmented reality (AR) technology, which blends the digital and physical worlds. Peridot Apkpure also uses AR to make your peridot come to life on your screen. You can see your peridot in your real environment and interact with it using your camera and touch screen.
-Peridot Apkpure is available for free download on APKCombo, a website that offers APK files for Android apps and games. APK files are the installation packages for Android applications that can be downloaded and installed manually on your device. APKCombo is a reliable and safe source for APK files, as it scans them for viruses and malware before uploading them to their website. You can download Peridot Apkpure from APKCombo by following the link [1](https://apkcombo.com/peridot/com.nianticlabs.peridot/).
-How to Play Peridot Apkpure?
-Playing Peridot Apkpure is easy and fun. Here are the basic steps to get started:
-
-- Choose your peridot from a variety of colors and shapes. You can also name your peridot and give it a gender.
-- Feed, groom, play with, and train your peridot to increase its happiness and skills. You can use different items and toys to interact with your peridot and make it happy. You can also teach your peridot new tricks and commands by using voice recognition.
-- Explore the world with your peridot and discover new places and activities. You can use AR mode to see your peridot in your surroundings, or use map mode to see nearby locations where you can take your peridot. You can also travel to different regions and countries with your peridot and learn about their cultures and landmarks.
-- Interact with other peridot owners and join events and competitions. You can use social media features to share your peridot's photos and videos with other players, or chat with them using text or voice messages. You can also join events and competitions where you can show off your peridot's skills and win prizes.
-
-Why You Should Try Peridot Apkpure?
-Peridot Apkpure is a game that has many benefits for its players. Here are some of the reasons why you should try it:
-peridot game apk download
-peridot niantic android
-peridot virtual pets simulation
-peridot apk latest version
-peridot apk free download
-peridot apk mod unlimited money
-peridot apk offline
-peridot apk for pc
-peridot apk old version
-peridot apk pure app
-peridot apk mirror
-peridot apk no ads
-peridot apk hack
-peridot apk obb
-peridot apk revdl
-peridot apk rexdl
-peridot apk uptodown
-peridot apk android 1
-peridot apk android oyun club
-peridot apk andropalace
-peridot apk apkpure.com
-peridot apk apkmirror.com
-peridot apk apkpure.co.id
-peridot apk apkpure.in
-peridot apk apkpure.co.uk
-peridot apk apkpure.net
-peridot apk apkpure.io
-peridot apk apkpure.vip
-peridot apk apkpure.fun
-peridot apk apkpure.me
-peridot app download apkpure
-download game peridot apkpure
-download aplikasi peridot apkpure
-download mod peridot apkpure
-download update peridot apkpure
-cara download peridot apkpure
-how to download peridot apkpure
-niantic labs peridot apkpure
-com.nianticlabs.peridot apkpure
-niantic inc. peridot apkpure
-niantic project peridot apkpure
-niantic games peridot apkpure
-niantic pokemon go peridot apkpure
-niantic ingress prime peridot apkpure
-niantic harry potter wizards unite peridot apkpure
-niantic catan world explorers peridot apkpure
-niantic lightship ardk peridot apkpure
-
-- Peridot Apkpure is a fun and relaxing game that can help you reduce stress and boredom. Playing with your peridot can make you feel happy and calm, as well as improve your mood and mental health.
-- Peridot Apkpure is a creative and educational game that can stimulate your imagination and curiosity. Creating and customizing your peridot can unleash your artistic side, while exploring the world with your peridot can expand your knowledge and horizons.
-- Peridot Apkpure is a social and interactive game that can help you connect with other people who share your interests. Meeting and chatting with other peridot owners can make you feel less lonely and more connected, as well as help you make new friends and learn from others.
-
-Conclusion
-Peridot Apkpure is a unique and innovative game that combines the charm of virtual pets with the excitement of augmented reality. It is a game that can appeal to anyone who loves animals, nature, and adventure. It is a game that you can download for free on APKCombo and enjoy on your Android device.
-If you are interested in trying Peridot Apkpure, you can follow this link [1](https://apkcombo.com/peridot/com.nianticlabs.peridot/) to download it from APKCombo. You can also visit Niantic's official website [2](https://nianticlabs.com/) or email them at support@nianticlabs.com for more information or support.
-FAQs
-What is peridot?
-Peridot is a yellow-green transparent variety of olivine, a mineral that forms deep in the earth's mantle and sometimes in meteorites. It is also a gemstone that has been used for jewelry and decoration since ancient times.
-What is APKCombo?
-APKCombo is a website that provides APK files for APKCombo is a website that provides APK files for Android apps and games. APK files are the installation packages for Android applications that can be downloaded and installed manually on your device. APKCombo scans all the APK files for viruses and malware before uploading them to their website.
-How can I download Peridot Apkpure from APKCombo?
-You can download Peridot Apkpure from APKCombo by following these steps:
-
-- Visit [1](https://apkcombo.com/peridot/com.nianticlabs.peridot/) on your browser.
-- Click on the "Download APK (92 MB)" button.
-- Choose a download server from the list.
-- Wait for the download to complete.
-- Open the downloaded file and follow the instructions to install the game on your device.
-
-Is Peridot Apkpure safe to play?
-Yes, Peridot Apkpure is safe to play as long as you download it from a trusted source like APKCombo. APKCombo scans all the APK files for viruses and malware before uploading them to their website. However, you should always be careful when downloading any app or game from the internet and check the permissions and reviews before installing them on your device.
-How can I contact Niantic, Inc. for support or feedback?
-You can contact Niantic, Inc. for support or feedback by visiting their official website [2](https://nianticlabs.com/) or by sending an email to support@nianticlabs.com.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FIFA 19 Mod APK OBB Data The Ultimate Soccer Game for Android.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FIFA 19 Mod APK OBB Data The Ultimate Soccer Game for Android.md
deleted file mode 100644
index d072991d8240aeac26759db61425e64556f0bc7a..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FIFA 19 Mod APK OBB Data The Ultimate Soccer Game for Android.md
+++ /dev/null
@@ -1,124 +0,0 @@
-
-FIFA APK 19: How to Download and Install the Best Soccer Game on Android
- Introduction
-
If you are a fan of soccer, you probably have heard of FIFA, the most popular and realistic soccer game series in the world. FIFA is developed by EA Sports, a leading company in the gaming industry. FIFA has been releasing new versions of its game every year, with improved graphics, gameplay, and features.
-fifa apk 19
Download Zip ☑ https://ssurll.com/2uNUYh
- One of the latest versions of FIFA is FIFA 19, which was released in 2018 for various platforms, including PC, PlayStation, Xbox, Nintendo Switch, and mobile devices. However, if you want to play FIFA 19 on your Android phone or tablet, you might encounter some difficulties. That's because the official version of FIFA 19 for Android is not available on the Google Play Store. Instead, you have to download and install an unofficial version of FIFA 19, which is called FIFA APK 19.
- In this article, we will show you how to download and install FIFA APK 19 on your Android device, and how to enjoy the best soccer game on your mobile screen. We will also tell you why you should play FIFA APK 19, and what features and tips you can expect from this game.
- How to download FIFA APK 19
- Requirements for FIFA APK 19
- Before you download FIFA APK 19, you need to make sure that your Android device meets the minimum requirements for this game. Here are the requirements:
-
-- Android version: 4.4 or higher
-- RAM: 2 GB or more
-- Storage space: At least 4 GB of free space
-- Internet connection: Required for online features
-
- If your device meets these requirements, you can proceed to download FIFA APK 19. However, if your device does not meet these requirements, you might experience some issues with the game, such as lagging, crashing, or errors.
- Steps to download FIFA APK 19
- To download FIFA APK 19, you need to follow these steps:
-
-- Go to a trusted website that provides the download link for FIFA APK 19. For example, you can use [this website](^1^), [this website](^2^), or [this website](^3^).
-- On the website, look for the download button or link for FIFA APK 19. You might have to scroll down or click on some tabs to find it.
-- Click on the download button or link. You will be redirected to another page where you have to wait for a few seconds before the download starts.
-- Once the download starts, you will see a pop-up window asking you to confirm the download. Tap on OK or Download.
-- Wait for the download to finish. You will need to download two files: an APK file and an OBB file. The APK file is about 30 MB in size, while the OBB file is about 1 GB in size.
-- After downloading both files, locate them in your device's file manager. They are usually stored in the Downloads folder.
-
- Congratulations! You have successfully downloaded FIFA APK 19 on your Android device. Now, you need to install it.
- How to install FIFA APK 19
- How to install the APK file
- To install the APK file of FIFA APK 19, you need to follow these steps:
- Before you install the APK file, you need to enable the installation of unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
- - Next, go to your file manager and tap on the APK file of FIFA APK 19. You will see a pop-up window asking you to install the app. Tap on Install.
-fifa 19 apk download android
-fifa 19 apk mod offline
-fifa 19 apk data obb
-fifa 19 apk obb download
-fifa 19 apk offline mode
-fifa 19 apk latest version
-fifa 19 apk and obb file
-fifa 19 apk free download full version
-fifa 19 apk unlimited money
-fifa 19 apk highly compressed
-fifa 19 apk mobile game
-fifa 19 apk android game
-fifa 19 apk full unlocked
-fifa 19 apk no verification
-fifa 19 apk update patch
-fifa 19 apk real faces
-fifa 19 apk best graphics
-fifa 19 apk original game
-fifa 19 apk online multiplayer
-fifa 19 apk ultimate team
-fifa 19 apk career mode
-fifa 19 apk champions league
-fifa 19 apk world cup mode
-fifa 19 apk commentary download
-fifa 19 apk english language
-fifa 19 apk new kits and transfers
-fifa 19 apk new features and gameplay
-fifa 19 apk new stadiums and teams
-fifa 19 apk new skills and tricks
-fifa 19 apk new celebrations and animations
-fifa 19 apk requirements and compatibility
-fifa 19 apk size and installation guide
-fifa 19 apk review and rating
-fifa 19 apk download link and password
-fifa 19 apk how to play and tips
-fifa 19 apk cheats and hacks
-fifa 19 apk mod menu and coins generator
-fifa 19 apk comparison and difference with other versions
-fifa 19 apk problems and solutions
-fifa 19 apk questions and answers
- - Wait for the installation to finish. You will see a message saying that the app has been installed. Tap on Open or Done.
- - You have successfully installed the APK file of FIFA APK 19. However, you are not done yet. You still need to install the OBB file.
- How to install the OBB file
- To install the OBB file of FIFA APK 19, you need to follow these steps:
-
-- Go to your file manager and locate the OBB file of FIFA APK 19. It is usually named as com.ea.game.fifa14_row.obb.
-- Long press on the OBB file and select Copy or Move.
-- Navigate to the folder Android > obb > com.ea.game.fifa14_row and paste the OBB file there. If you don't see this folder, you can create it manually.
-- Wait for the copying or moving process to finish. You have successfully installed the OBB file of FIFA APK 19.
-
- Now, you are ready to play FIFA APK 19 on your Android device.
- How to play FIFA APK 19
- Features of FIFA APK 19
- FIFA APK 19 is an amazing soccer game that offers you many features and modes to enjoy. Here are some of the features of FIFA APK 19:
-
-- Realistic graphics and animations that make you feel like you are watching a real soccer match.
-- Smooth and responsive controls that let you perform various actions, such as passing, shooting, dribbling, tackling, and more.
-- A variety of game modes, such as Career Mode, Tournament Mode, Manager Mode, Online Mode, and more.
-- A huge database of players, teams, leagues, and stadiums from around the world. You can choose your favorite team and players, or create your own custom team and players.
-- A dynamic commentary system that provides you with insightful and exciting commentary during the game.
-- An online feature that lets you play with or against other players from around the world. You can also join leagues and tournaments and compete for glory and rewards.
-
- FIFA APK 19 is a game that will keep you entertained for hours with its amazing gameplay and features.
- Tips and tricks for FIFA APK 19
- If you want to improve your skills and performance in FIFA APK 19, here are some tips and tricks that you can use:
-
-- Practice your skills in the Training Mode. You can learn how to perform various actions, such as passing, shooting, dribbling, tackling, and more.
-- Adjust your settings according to your preference and device. You can change the difficulty level, camera angle, control scheme, sound effects, and more.
-- Use the right players for the right positions. Each player has different attributes, such as speed, strength, stamina, shooting, passing, dribbling, defending, and more. You should use the players that suit your playing style and strategy.
-- Use different tactics and formations depending on your opponent and situation. You can change your tactics and formations during the game by tapping on the menu button on the top right corner of the screen.
-- Use your coins wisely. You can earn coins by playing games, completing achievements, or watching ads. You can use coins to buy new players, upgrade your existing players, or unlock new items and features.
-
- With these tips and tricks, you can become a master of FIFA APK 19 in no time.
- Conclusion
- Summary of the article
- In this article, we have shown you how to download and install FIFA APK 19 on your Android device. We have also told you why you should play FIFA APK 19, and what features and tips you can expect from this game. FIFA APK 19 is an amazing soccer game that will give you hours of fun and excitement. If you love soccer, you should definitely try FIFA APK 19 on your Android device.
- FAQs
- Here are some frequently asked questions about FIFA APK 19:
-
-- Is FIFA APK 19 safe to download and install?
-- Yes, FIFA APK 19 is safe to download and install, as long as you use a trusted website that provides the download link. However, you should always be careful when downloading and installing any app from unknown sources, as they might contain malware or viruses. You should also scan your device with an antivirus app after installing FIFA APK 19.
-
- Is FIFA APK 19 legal to play?
-- FIFA APK 19 is not an official version of FIFA 19, and it is not authorized by EA Sports or any other entity. Therefore, playing FIFA APK 19 might be considered illegal in some countries or regions. You should check your local laws and regulations before playing FIFA APK 19. You should also be aware that playing FIFA APK 19 might violate the terms and conditions of EA Sports or Google Play Store, and you might face some consequences or penalties.
- - Is FIFA APK 19 compatible with all Android devices?
-- FIFA APK 19 is compatible with most Android devices that meet the minimum requirements for this game. However, some devices might not be able to run FIFA APK 19 smoothly or properly, due to different hardware specifications or software versions. You should try FIFA APK 19 on your device and see if it works well for you.
- - How can I update FIFA APK 19?
-- FIFA APK 19 does not have an automatic update feature, unlike the official version of FIFA 19. Therefore, if you want to update FIFA APK 19, you have to download and install the latest version of FIFA APK 19 from a trusted website. You might also have to delete the previous version of FIFA APK 19 from your device before installing the new one.
- - How can I contact the developer of FIFA APK 19?
-- FIFA APK 19 is developed by an unknown developer or group of developers, who are not affiliated with EA Sports or any other entity. Therefore, there is no official way to contact the developer of FIFA APK 19. However, you might be able to find some information or feedback from other users of FIFA APK 19 on the website where you downloaded the game, or on some online forums or social media platforms.
-
- I hope this article has helped you learn more about FIFA APK 19 and how to download and install it on your Android device. If you have any questions or comments, please feel free to leave them below. Thank you for reading!
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/simsantonioii/MusicGen-Continuation/app.py b/spaces/simsantonioii/MusicGen-Continuation/app.py
deleted file mode 100644
index 594a472baaf7dbd65b2fb28fdda455cf06f85aea..0000000000000000000000000000000000000000
--- a/spaces/simsantonioii/MusicGen-Continuation/app.py
+++ /dev/null
@@ -1,392 +0,0 @@
-"""
-Copyright (c) Meta Platforms, Inc. and affiliates.
-All rights reserved.
-
-This source code is licensed under the license found in the
-LICENSE file in the root directory of this source tree.
-"""
-
-from tempfile import NamedTemporaryFile
-import argparse
-import torch
-import torchaudio
-import gradio as gr
-import os
-from audiocraft.models import MusicGen
-from audiocraft.data.audio import audio_write
-
-from share_btn import community_icon_html, loading_icon_html, share_js, css
-
-MODEL = None
-IS_SHARED_SPACE = "radames/MusicGen-Continuation" in os.environ.get("SPACE_ID", "")
-
-
-def load_model(version):
- print("Loading model", version)
- return MusicGen.get_pretrained(version)
-
-
-def predict(
- text,
- melody_input,
- duration=30,
- continuation=False,
- continuation_start=0,
- continuation_end=30,
- topk=250,
- topp=0,
- temperature=1,
- cfg_coef=3,
-):
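-    # Generate audio from the text description. If a melody file is provided, either
-    # continue from the selected [continuation_start, continuation_end] slice or use the
-    # melody as a chroma condition; the result is written to a temporary .wav file.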
- global MODEL
- topk = int(topk)
- if MODEL is None:
- MODEL = load_model("melody")
-
- if duration > MODEL.lm.cfg.dataset.segment_duration:
- raise gr.Error("MusicGen currently supports durations of up to 30 seconds!")
- if continuation and continuation_end < continuation_start:
- raise gr.Error("The end time must be greater than the start time!")
- MODEL.set_generation_params(
- use_sampling=True,
- top_k=topk,
- top_p=topp,
- temperature=temperature,
- cfg_coef=cfg_coef,
- duration=duration,
- )
-
- if melody_input:
- melody, sr = torchaudio.load(melody_input)
- # sr, melody = melody_input[0], torch.from_numpy(melody_input[1]).to(MODEL.device).float().t().unsqueeze(0)
- if melody.dim() == 2:
- melody = melody[None]
- if continuation:
- print("\nGenerating continuation\n")
- melody_wavform = melody[
- ..., int(sr * continuation_start) : int(sr * continuation_end)
- ]
- melody_duration = melody_wavform.shape[-1] / sr
- if duration + melody_duration > MODEL.lm.cfg.dataset.segment_duration:
- raise gr.Error("Duration + continuation duration must be <= 30 seconds")
- output = MODEL.generate_continuation(
- prompt=melody_wavform,
- prompt_sample_rate=sr,
- descriptions=[text],
- progress=True,
- )
- else:
- print("\nGenerating with melody\n")
- melody_wavform = melody[
- ..., : int(sr * MODEL.lm.cfg.dataset.segment_duration)
- ]
- output = MODEL.generate_with_chroma(
- descriptions=[text],
- melody_wavs=melody_wavform,
- melody_sample_rate=sr,
- progress=True,
- )
- else:
- print("\nGenerating without melody\n")
- output = MODEL.generate(descriptions=[text], progress=False)
-
- output = output.detach().cpu().float()[0]
- with NamedTemporaryFile("wb", suffix=".wav", delete=False) as file:
- audio_write(
- file.name,
- output,
- MODEL.sample_rate,
- strategy="loudness",
- loudness_headroom_db=16,
- loudness_compressor=True,
- add_suffix=False,
- )
- #waveform_video = gr.make_waveform(file.name)
-
- return file.name
-
-
-
-def ui(**kwargs):
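-    # Build and launch the Gradio Blocks UI: prompt and duration inputs, optional melody
-    # conditioning with a cut preview, advanced sampling settings, and example prompts.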
- def toggle(choice):
- if choice == "mic":
- return gr.update(source="microphone", value=None, label="Microphone")
- else:
- return gr.update(source="upload", value=None, label="File")
-
- def check_melody_length(melody_input):
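-        # Clamp the continuation sliders so their maximum matches the uploaded melody's length.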
- if not melody_input:
- return gr.update(maximum=0, value=0), gr.update(maximum=0, value=0)
- melody, sr = torchaudio.load(melody_input)
- audio_length = melody.shape[-1] / sr
- if melody.dim() == 2:
- melody = melody[None]
- return gr.update(maximum=audio_length, value=0), gr.update(
- maximum=audio_length, value=audio_length
- )
-
- def preview_melody_cut(melody_input, continuation_start, continuation_end):
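-        # Cut the uploaded melody to [continuation_start, continuation_end] seconds and return it for preview playback.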
- if not melody_input:
- return gr.update(maximum=0, value=0), gr.update(maximum=0, value=0)
- melody, sr = torchaudio.load(melody_input)
- audio_length = melody.shape[-1] / sr
- if melody.dim() == 2:
- melody = melody[None]
-
- if continuation_end < continuation_start:
- raise gr.Error("The end time must be greater than the start time!")
- if continuation_start < 0 or continuation_end > audio_length:
- raise gr.Error("The continuation settings must be within the audio length!")
- print("cutting", int(sr * continuation_start), int(sr * continuation_end))
- prompt_waveform = melody[
- ..., int(sr * continuation_start) : int(sr * continuation_end)
- ]
-
- return (sr, prompt_waveform.unsqueeze(0).numpy())
-
- with gr.Blocks(css=css) as interface:
- gr.Markdown(
- """
- # Apollo Beat Generator *Beta Preview*
-
- Steps:
- 1) Tell me what kind of beat you want to create! How should it sound?
- 2) Choose the duration of your beat snippet.
- 3) Click "Submit" to generate your awesome beat snippet!
- 4) Listen to your beat by clicking on "Generated Music".
-
- If you're a producer or beat maker, you can download the beat snippet to your phone and use it in your preferred DAW (Digital Audio Workstation)!
-
-            *Beta Notes: If you press Submit and it seems like nothing is happening, no worries! Just hang tight for a moment while it connects to our servers! :)
- """
- )
- if IS_SHARED_SPACE:
- gr.Markdown(
- """
- ⚠ This Space doesn't work in this shared UI ⚠
-
-
-                Duplicate this Space to use it privately, or use the public demo
- """
- )
- with gr.Row():
- with gr.Column():
- with gr.Row():
- text = gr.Text(
- label="Describe the song you want to create",
- lines=2,
- interactive=True,
- elem_id="text-input",
- )
-
-
- # with gr.Row():
- # model = gr.Radio(
- # ["melody", "medium", "small", "large"],
- # label="Model",
- # value="melody",
- # interactive=True,
- # )
- with gr.Row():
- duration = gr.Slider(
- minimum=1,
- maximum=30,
- value=10,
- label="Total Duration",
- interactive=True,
- )
- with gr.Row():
- submit = gr.Button("Submit")
- with gr.Row():
- output = gr.Audio(label="Generated Music")
- output_melody = gr.Audio(label="Melody ", elem_id="melody-output")
- with gr.Row(visible=False) as share_row:
- with gr.Group(elem_id="share-btn-container"):
- community_icon = gr.HTML(community_icon_html)
- loading_icon = gr.HTML(loading_icon_html)
- share_button = gr.Button(
- "Note: Feature in Beta", elem_id="share-btn"
- )
-
- with gr.Accordion(label="Advanced Settings", open=False):
- with gr.Column():
- radio = gr.Radio(
- ["file", "mic"],
- value="file",
- label="Melody Condition (optional) File or Mic",
- )
- melody = gr.Audio(
- source="upload",
- type="filepath",
- label="File",
- interactive=True,
- elem_id="melody-input",
- )
-
- with gr.Row():
- topk = gr.Number(label="Top-k", value=250, interactive=True)
- topp = gr.Number(label="Top-p", value=0, interactive=True)
- temperature = gr.Number(
- label="Temperature", value=1.0, interactive=True
- )
- cfg_coef = gr.Number(
- label="Classifier Free Guidance",
- value=3.0,
- interactive=True,
- )
- with gr.Row():
- continuation = gr.Checkbox(value=False, label="Enable Continuation")
-
- with gr.Row():
- continuation_start = gr.Slider(
- minimum=0,
- maximum=30,
- step=0.01,
- value=0,
- label="melody cut start",
- interactive=True,
- )
- continuation_end = gr.Slider(
- minimum=0,
- maximum=30,
- step=0.01,
- value=0,
- label="melody cut end",
- interactive=True,
- )
- cut_btn = gr.Button("Cut Melody").style(full_width=False)
- with gr.Row():
- preview_cut = gr.Audio(
- type="numpy",
- label="Cut Preview",
- )
-
-
- melody.change(
- check_melody_length,
- melody,
- [continuation_start, continuation_end],
- queue=False,
- )
- cut_btn.click(
- preview_melody_cut,
- [melody, continuation_start, continuation_end],
- preview_cut,
- queue=False,
- )
-
- submit.click(
- lambda x: gr.update(visible=False),
- None,
- [share_row],
- queue=False,
- show_progress=False,
- ).then(
- predict,
- inputs=[
- text,
- melody,
- duration,
- continuation,
- continuation_start,
- continuation_end,
- topk,
- topp,
- temperature,
- cfg_coef,
- ],
- outputs=[output],
- ).then(
- lambda x: gr.update(visible=True),
- None,
- [share_row],
- queue=False,
- show_progress=False,
- )
- radio.change(toggle, radio, [melody], queue=False, show_progress=False)
- examples = gr.Examples(
- fn=predict,
- examples=[
- [
- "An 80s driving pop song with heavy drums and synth pads in the background",
- None,
- ],
- [
- "A nostalgic 90s alternative rock track with gritty guitar riffs",
- None,
- ],
- ["A dynamic 2010s trap instrumental with trap-style beats and mesmerizing synths", None],
- [
- "A light and cheerly EDM track, with syncopated drums, aery pads, and strong emotions",
- None,
- ],
- [
- "Lofi slow bpm electro chill with organic samples",
- None,
- ],
- ],
- inputs=[text, melody],
- outputs=[output],
- )
- gr.Markdown(
- """
-
- """
- )
-
- # Show the interface
- launch_kwargs = {}
- username = kwargs.get("username")
- password = kwargs.get("password")
- server_port = kwargs.get("server_port", 0)
- inbrowser = kwargs.get("inbrowser", False)
- share = kwargs.get("share", False)
- server_name = kwargs.get("listen")
-
- launch_kwargs["server_name"] = server_name
-
- if username and password:
- launch_kwargs["auth"] = (username, password)
- if server_port > 0:
- launch_kwargs["server_port"] = server_port
- if inbrowser:
- launch_kwargs["inbrowser"] = inbrowser
- if share:
- launch_kwargs["share"] = share
-
- interface.queue().launch(**launch_kwargs, max_threads=1)
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "--listen",
- type=str,
- default="0.0.0.0",
- help="IP to listen on for connections to Gradio",
- )
- parser.add_argument(
- "--username", type=str, default="", help="Username for authentication"
- )
- parser.add_argument(
- "--password", type=str, default="", help="Password for authentication"
- )
- parser.add_argument(
- "--server_port",
- type=int,
- default=7860,
- help="Port to run the server listener on",
- )
- parser.add_argument("--inbrowser", action="store_true", help="Open in browser")
- parser.add_argument("--share", action="store_true", help="Share the gradio UI")
-
- args = parser.parse_args()
-
- ui(
- username=args.username,
- password=args.password,
- inbrowser=args.inbrowser,
- server_port=args.server_port,
- share=args.share,
- listen=args.listen,
- )
diff --git a/spaces/skf15963/summary/fengshen/examples/mt5_summary/fastapi_mt5_summary.py b/spaces/skf15963/summary/fengshen/examples/mt5_summary/fastapi_mt5_summary.py
deleted file mode 100644
index 44adaf8f5855260c683c0bcfe7986ffccc9f25c4..0000000000000000000000000000000000000000
--- a/spaces/skf15963/summary/fengshen/examples/mt5_summary/fastapi_mt5_summary.py
+++ /dev/null
@@ -1,93 +0,0 @@
-import os
-import sys
-import uvicorn
-import torch
-from fastapi import Body, FastAPI
-from transformers import T5Tokenizer, MT5ForConditionalGeneration
-import pytorch_lightning as pl
-sys.path.append(os.path.abspath(os.path.join(
- os.path.dirname(__file__), os.path.pardir)))
-os.environ["CUDA_VISIBLE_DEVICES"] = '5'
-os.environ["MASTER_ADDR"] = '127.0.0.1'
-os.environ["MASTER_PORT"] = '6000'
-device = "cuda:0" if torch.cuda.is_available() else "cpu"
-print('device:', device)
-pretrain_model_path = '/cognitive_comp/ganruyi/hf_models/google/mt5-large'
-# pretrain_model_path = 'google/mt5-small'
-model_path = '/cognitive_comp/ganruyi/fengshen/mt5_large_summary/ckpt/epoch-0-last.ckpt'
-tokenizer = T5Tokenizer.from_pretrained(pretrain_model_path)
-print('load tokenizer')
-
-
-class MT5FinetuneSummary(pl.LightningModule):
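-    # Minimal LightningModule wrapper around MT5ForConditionalGeneration so that the
-    # fine-tuned Lightning checkpoint can be restored with load_from_checkpoint below.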
-
- def __init__(self):
- super().__init__()
- self.model = MT5ForConditionalGeneration.from_pretrained(pretrain_model_path)
-
-
-model = MT5FinetuneSummary.load_from_checkpoint(model_path)
-print('load checkpoint')
-model.to(device)
-model.eval()
-app = FastAPI()
-print('server start')
-
-# def flask_gen(text: str, level: float = 0.9, n_sample: int = 5, length: int = 32, is_beam_search=False):
-
-
-@app.post('/mt5_summary')
-async def flask_gen(text: str = Body('', title='原文', embed=True),
- n_sample: int = 5, length: int = 32, is_beam_search=False):
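-    # Summarization endpoint: truncate the input to 128 characters, prepend the
-    # 'summary:' task prefix, and return n_sample candidate summaries generated
-    # either with beam search or with sampling.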
- if len(text) > 128:
- text = text[:128]
- text = 'summary:'+text
- print(text)
- # inputs = tokenizer(text, return_tensors='pt')
- inputs = tokenizer.encode_plus(
- text, max_length=128, padding='max_length', truncation=True, return_tensors='pt')
- # print(inputs)
- if is_beam_search:
- generated_ids = model.model.generate(
- input_ids=inputs['input_ids'].to(device),
- attention_mask=inputs['attention_mask'].to(device),
- max_length=length,
- num_beams=n_sample,
- repetition_penalty=2.5,
- length_penalty=1.0,
- early_stopping=True,
- num_return_sequences=n_sample
- )
- else:
- generated_ids = model.model.generate(
- input_ids=inputs['input_ids'].to(device),
- attention_mask=inputs['attention_mask'].to(device),
- max_length=length,
- do_sample=True,
- temperature=1.0,
- top_p=1.0,
- repetition_penalty=2.5,
- # early_stopping=True,
- num_return_sequences=n_sample
- )
- result = []
- # print(tokenizer.all_special_tokens)
- for sample in generated_ids:
- preds = [tokenizer.decode(sample, skip_special_tokens=True,
- clean_up_tokenization_spaces=True)]
- preds = ''.join(preds)
- # print(preds)
- result.append(preds)
- return result
-
-
-if __name__ == '__main__':
- uvicorn.run(app, host="0.0.0.0", port=6607, log_level="debug")
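-# The commented-out lines below are sample Chinese news articles kept for local testing of the summarizer.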
-# # article = "日前,方舟子发文直指林志颖旗下爱碧丽推销假保健品,引起哗然。调查发现,
-# 爱碧丽没有自己的生产加工厂。其胶原蛋白饮品无核心研发,全部代工生产。号称有“逆生长”功效的爱碧丽“梦幻奇迹限量组”售价>高达1080元,实际成本仅为每瓶4元!"
-# article = '''在北京冬奥会自由式滑雪女子坡面障碍技巧决赛中,中国选手谷爱凌夺得银牌。祝贺谷爱凌!
-# 今天上午,自由式滑雪女子坡面障碍技巧决赛举行。决赛分三轮进行,取选手最佳成绩排名决出奖牌。
-# 第一跳,中国选手谷爱凌获得69.90分。在12位选手中排名第三。完成动作后,谷爱凌又扮了个鬼脸,甚是可爱。
-# 第二轮中,谷爱凌在道具区第三个障碍处失误,落地时摔倒。获得16.98分。网友:摔倒了也没关系,继续加油!
-# 在第二跳失误摔倒的情况下,谷爱凌顶住压力,第三跳稳稳发挥,流畅落地!获得86.23分!此轮比赛,共12位选手参赛,谷爱凌第10位出场。网友:看比赛时我比谷爱凌紧张,加油!'''
- # flask_gen(article, length=30)
diff --git a/spaces/sklearn-docs/mean-shift-clustering/app.py b/spaces/sklearn-docs/mean-shift-clustering/app.py
deleted file mode 100644
index a13960374e33865b69eaa637bee6fed88dc7936b..0000000000000000000000000000000000000000
--- a/spaces/sklearn-docs/mean-shift-clustering/app.py
+++ /dev/null
@@ -1,121 +0,0 @@
-import gradio as gr
-import matplotlib.pyplot as plt
-import numpy as np
-from sklearn.cluster import MeanShift, estimate_bandwidth
-from sklearn.datasets import make_blobs
-
-
-def get_clusters_plot(n_blobs, quantile, cluster_std):
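-    # Sample 2D blobs, estimate the MeanShift bandwidth from the chosen quantile, fit the
-    # model, and return a scatter plot of the clusters together with an HTML message that
-    # compares the estimated and true number of clusters.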
- X, _, centers = make_blobs(
- n_samples=10000, cluster_std=cluster_std, centers=n_blobs, return_centers=True
- )
-
- bandwidth = estimate_bandwidth(X, quantile=quantile, n_samples=500)
-
- ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)
- ms.fit(X)
- labels = ms.labels_
- cluster_centers = ms.cluster_centers_
-
- labels_unique = np.unique(labels)
- n_clusters_ = len(labels_unique)
-
- fig = plt.figure()
-
- for k in range(n_clusters_):
- my_members = labels == k
- cluster_center = cluster_centers[k]
- plt.scatter(X[my_members, 0], X[my_members, 1])
- plt.plot(
- cluster_center[0],
- cluster_center[1],
- "x",
- markeredgecolor="k",
- markersize=14,
- )
- plt.xlabel("Feature 1")
- plt.ylabel("Feature 2")
-
- plt.title(f"Estimated number of clusters: {n_clusters_}")
-
- if len(centers) != n_clusters_:
- message = (
- ''
- + f"The number of estimated clusters ({n_clusters_})"
- + f" differs from the true number of clusters ({n_blobs})."
-            + " Try changing the `Quantile` parameter."
- )
- else:
- message = (
- ''
- + f"The number of estimated clusters ({n_clusters_})"
-            + f" matches the true number of clusters ({n_blobs})!"
- )
- return fig, message
-
-
-with gr.Blocks() as demo:
- gr.Markdown(
- """
- # Mean Shift Clustering
-
- This space shows how to use the [Mean Shift Clustering](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.MeanShift.html) algorithm to cluster 2D data points. You can change the parameters using the sliders and see how the model performs.
-
- This space is based on [sklearn's original demo](https://scikit-learn.org/stable/auto_examples/cluster/plot_mean_shift.html#sphx-glr-auto-examples-cluster-plot-mean-shift-py).
- """
- )
- with gr.Row():
- with gr.Column(scale=1):
- n_blobs = gr.Slider(
- minimum=2,
- maximum=10,
- label="Number of clusters in the data",
- step=1,
- value=3,
- )
- quantile = gr.Slider(
- minimum=0,
- maximum=1,
- step=0.05,
- value=0.2,
- label="Quantile",
- info="Used to determine clustering's bandwidth.",
- )
- cluster_std = gr.Slider(
- minimum=0.1,
- maximum=1,
- label="Clusters' standard deviation",
- step=0.1,
- value=0.6,
- )
- with gr.Column(scale=4):
- clusters_plots = gr.Plot(label="Clusters' Plot")
- message = gr.HTML()
-
- n_blobs.change(
- get_clusters_plot,
- [n_blobs, quantile, cluster_std],
- [clusters_plots, message],
- queue=False,
- )
- quantile.change(
- get_clusters_plot,
- [n_blobs, quantile, cluster_std],
- [clusters_plots, message],
- queue=False,
- )
- cluster_std.change(
- get_clusters_plot,
- [n_blobs, quantile, cluster_std],
- [clusters_plots, message],
- queue=False,
- )
- demo.load(
- get_clusters_plot,
- [n_blobs, quantile, cluster_std],
- [clusters_plots, message],
- queue=False,
- )
-
-if __name__ == "__main__":
- demo.launch()
diff --git a/spaces/snoop2head/KoGPT-Conditional-Generation/app.py b/spaces/snoop2head/KoGPT-Conditional-Generation/app.py
deleted file mode 100644
index a2932efebca2cd7f3ebc6082e47b930a8b18f1f8..0000000000000000000000000000000000000000
--- a/spaces/snoop2head/KoGPT-Conditional-Generation/app.py
+++ /dev/null
@@ -1,119 +0,0 @@
-# -*- coding: utf-8 -*-
-import numpy as np
-import streamlit as st
-from transformers import AutoModelWithLMHead, PreTrainedTokenizerFast
-
-model_dir = "snoop2head/kogpt-conditional-2"
-tokenizer = PreTrainedTokenizerFast.from_pretrained(
- model_dir,
- bos_token="",
- eos_token="",
- unk_token="",
- pad_token="",
- mask_token="",
-)
-
-
-@st.cache
-def load_model(model_name):
- model = AutoModelWithLMHead.from_pretrained(model_name)
- return model
-
-
-model = load_model(model_dir)
-print("loaded model completed")
-
-
-def find_nth(haystack, needle, n):
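-    # Return the index of the n-th occurrence of `needle` in `haystack` (-1 if there are fewer than n occurrences).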
- start = haystack.find(needle)
- while start >= 0 and n > 1:
- start = haystack.find(needle, start + len(needle))
- n -= 1
- return start
-
-
-def infer(input_ids, max_length, temperature, top_k, top_p):
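-    # Sample a single continuation from the KoGPT model with the given decoding parameters.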
- output_sequences = model.generate(
- input_ids=input_ids,
- max_length=max_length,
- temperature=temperature,
- top_k=top_k,
- top_p=top_p,
- do_sample=True,
- num_return_sequences=1,
- )
- return output_sequences
-
-
-# prompts
-st.title("주어진 감정에 맞게 문장을 만드는 KoGPT입니다 🦄")
-st.write("좌측에 감정상태의 변화를 주고, CTRL+Enter(CMD+Enter)를 누르세요 🤗")
-
-# text and sidebars
-default_value = "수상한 밤들이 계속되던 날 언젠가부터 나는"
-sent = st.text_area("Text", default_value, max_chars=30, height=50)
-max_length = st.sidebar.slider("생성 문장 길이를 선택해주세요!", min_value=42, max_value=64)
-temperature = st.sidebar.slider(
- "Temperature", value=0.9, min_value=0.0, max_value=1.0, step=0.05
-)
-top_k = st.sidebar.slider("Top-k", min_value=0, max_value=5, value=0)
-top_p = st.sidebar.slider("Top-p", min_value=0.0, max_value=1.0, step=0.05, value=1.0)
-
-print("slider sidebars rendering completed")
-
-# make input sentence
-emotion_list = ["행복", "놀람", "분노", "혐오", "슬픔", "공포", "중립"]
-main_emotion = st.sidebar.radio("주요 감정을 선택하세요", emotion_list)
-emotion_list.reverse()
-sub_emotion = st.sidebar.radio("두 번째 감정을 선택하세요", emotion_list)
-
-print("radio sidebars rendering completed")
-
-# create condition sentence
-random_main_logit = np.random.normal(loc=3.368, scale=1.015, size=1)[0].round(1)
-random_sub_logit = np.random.normal(loc=1.333, scale=0.790, size=1)[0].round(1)
-condition_sentence = f"{random_main_logit}만큼 {main_emotion}감정인 문장이다. {random_sub_logit}만큼 {sub_emotion}감정인 문장이다. "
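-# The condition reads roughly: "This is a sentence with {main_emotion} emotion to degree {random_main_logit}.
-# This is a sentence with {sub_emotion} emotion to degree {random_sub_logit}." It is prepended to the user prompt.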
-condition_plus_input = condition_sentence + sent
-print(condition_plus_input)
-
-
-def infer_sentence(
- condition_plus_input=condition_plus_input, tokenizer=tokenizer, top_k=2
-):
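-    # Encode the condition plus prompt, sample a continuation, cut everything after the
-    # pad token, and strip the leading condition sentences before returning the text.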
- encoded_prompt = tokenizer.encode(
- condition_plus_input, add_special_tokens=False, return_tensors="pt"
- )
- if encoded_prompt.size()[-1] == 0:
- input_ids = None
- else:
- input_ids = encoded_prompt
- output_sequences = infer(input_ids, max_length, temperature, top_k, top_p)
- print(output_sequences)
-
- generated_sequence = output_sequences[0]
- print(generated_sequence)
-
- # Decode text
- text = tokenizer.decode(generated_sequence, clean_up_tokenization_spaces=True)
- print(text)
-
- # Remove all text after the pad token
- stop_token = tokenizer.pad_token
- print(stop_token)
- text = text[: text.find(stop_token) if stop_token else None]
- print(text)
-
- # Remove condition sentence
- condition_index = find_nth(text, "문장이다", 2)
- text = text[condition_index + 5 :]
- text = text.strip()
- return text
-
-
-return_text = infer_sentence(
- condition_plus_input=condition_plus_input, tokenizer=tokenizer
-)
-
-print(return_text)
-
-st.write(return_text)
diff --git a/spaces/solara-dev/template/README.md b/spaces/solara-dev/template/README.md
deleted file mode 100644
index 930b03477b50a8317a2dcaae6d9285ae98934e67..0000000000000000000000000000000000000000
--- a/spaces/solara-dev/template/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Template
-emoji: 🚀
-colorFrom: red
-colorTo: pink
-sdk: docker
-pinned: false
-license: mit
-app_port: 7860
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/songallery/my/README.md b/spaces/songallery/my/README.md
deleted file mode 100644
index 3d1f216f50e7b13b2c40177d581091f9da272bab..0000000000000000000000000000000000000000
--- a/spaces/songallery/my/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: My
-emoji: 👀
-colorFrom: gray
-colorTo: yellow
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/spritlesoftware/Image-Object-Detection/color.py b/spaces/spritlesoftware/Image-Object-Detection/color.py
deleted file mode 100644
index fcfe3a230a7db5bc46b2f4dabf4494105b54639f..0000000000000000000000000000000000000000
--- a/spaces/spritlesoftware/Image-Object-Detection/color.py
+++ /dev/null
@@ -1,36 +0,0 @@
-class Color:
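-    # Simple immutable ARGB color value: channels are exposed as read-only properties,
-    # and two colors compare equal when all four channels match.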
- @property
- def a(self):
- return self._a
-
- @property
- def r(self):
- return self._r
-
- @property
- def g(self):
- return self._g
-
- @property
- def b(self):
- return self._b
-
- def __init__(self, a, r, g, b):
- self._a = a
- self._r = r
- self._g = g
- self._b = b
-
- @staticmethod
- def fromArgb(a, r, g, b):
- return Color(a, r, g, b)
-
- @staticmethod
- def fromRgb(r, g, b):
- return Color(0xff, r, g, b)
-
- def __eq__(self, other):
- return self.a == other.a and self.r == other.r and self.g == other.g and self.b == other.b
-
- def __str__(self):
- return 'ARGB({},{},{},{})'.format(self.a, self.r, self.g, self.b)
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/simultaneous_translation/modules/monotonic_transformer_layer.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/simultaneous_translation/modules/monotonic_transformer_layer.py
deleted file mode 100644
index 94bd71fb9c46a64a8b6e1960f47dfc43b78dda43..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/simultaneous_translation/modules/monotonic_transformer_layer.py
+++ /dev/null
@@ -1,182 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from fairseq.modules import TransformerDecoderLayer, TransformerEncoderLayer
-
-from . import build_monotonic_attention
-
-from typing import Dict, Optional, List
-
-from torch import Tensor
-import torch
-
-
-class TransformerMonotonicEncoderLayer(TransformerEncoderLayer):
- def forward(self, x, encoder_padding_mask):
- seq_len, _, _ = x.size()
- attn_mask = x.new_ones([seq_len, seq_len]).triu(1)
- attn_mask = attn_mask.masked_fill(attn_mask.bool(), float("-inf"))
- return super().forward(x, encoder_padding_mask, attn_mask)
-
-
-class TransformerMonotonicDecoderLayer(TransformerDecoderLayer):
- def __init__(self, args):
- super().__init__(args)
-
- assert args.simul_type is not None, "A --simul-type is needed."
- self.encoder_attn = build_monotonic_attention(args)
-
- def prune_incremental_state(
- self,
- incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]]
- ):
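-        # Drop the most recent time step from the cached self-attention key/value buffers;
-        # if only a single step is cached, reset the buffer entirely.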
- input_buffer = self.self_attn._get_input_buffer(incremental_state)
- for key in ["prev_key", "prev_value"]:
- input_buffer_key = input_buffer[key]
- assert input_buffer_key is not None
- if input_buffer_key.size(2) > 1:
- input_buffer[key] = input_buffer_key[:, :, :-1, :]
- else:
- typed_empty_dict: Dict[str, Optional[Tensor]] = {}
- input_buffer = typed_empty_dict
- break
- assert incremental_state is not None
- self.self_attn._set_input_buffer(incremental_state, input_buffer)
-
- def forward(
- self,
- x,
- encoder_out: Optional[Tensor] = None,
- encoder_padding_mask: Optional[Tensor] = None,
- incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None,
- prev_self_attn_state: Optional[List[Tensor]] = None,
- prev_attn_state: Optional[List[Tensor]] = None,
- self_attn_mask: Optional[Tensor] = None,
- self_attn_padding_mask: Optional[Tensor] = None,
- need_attn: bool = False,
- need_head_weights: bool = False,
- ):
- """
- Args:
- x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)`
- encoder_padding_mask (ByteTensor, optional): binary
- ByteTensor of shape `(batch, src_len)` where padding
- elements are indicated by ``1``.
- need_attn (bool, optional): return attention weights
- need_head_weights (bool, optional): return attention weights
- for each head (default: return average over heads).
-
- Returns:
- encoded output of shape `(seq_len, batch, embed_dim)`
- """
- if need_head_weights:
- need_attn = True
-
- residual = x
- if self.normalize_before:
- x = self.self_attn_layer_norm(x)
- if prev_self_attn_state is not None:
- prev_key, prev_value = prev_self_attn_state[:2]
- saved_state: Dict[str, Optional[Tensor]] = {
- "prev_key": prev_key,
- "prev_value": prev_value,
- }
- if len(prev_self_attn_state) >= 3:
- saved_state["prev_key_padding_mask"] = prev_self_attn_state[2]
- assert incremental_state is not None
- self.self_attn._set_input_buffer(incremental_state, saved_state)
- _self_attn_input_buffer = self.self_attn._get_input_buffer(incremental_state)
- if self.cross_self_attention and not (
- incremental_state is not None
- and _self_attn_input_buffer is not None
- and "prev_key" in _self_attn_input_buffer
- ):
- if self_attn_mask is not None:
- assert encoder_out is not None
- self_attn_mask = torch.cat(
- (x.new_zeros(x.size(0), encoder_out.size(0)), self_attn_mask), dim=1
- )
- if self_attn_padding_mask is not None:
- if encoder_padding_mask is None:
- assert encoder_out is not None
- encoder_padding_mask = self_attn_padding_mask.new_zeros(
- encoder_out.size(1), encoder_out.size(0)
- )
- self_attn_padding_mask = torch.cat(
- (encoder_padding_mask, self_attn_padding_mask), dim=1
- )
- assert encoder_out is not None
- y = torch.cat((encoder_out, x), dim=0)
- else:
- y = x
-
- x, attn = self.self_attn(
- query=x,
- key=y,
- value=y,
- key_padding_mask=self_attn_padding_mask,
- incremental_state=incremental_state,
- need_weights=False,
- attn_mask=self_attn_mask,
- )
- x = self.dropout_module(x)
- x = self.residual_connection(x, residual)
- if not self.normalize_before:
- x = self.self_attn_layer_norm(x)
-
- assert self.encoder_attn is not None
- residual = x
- if self.normalize_before:
- x = self.encoder_attn_layer_norm(x)
- if prev_attn_state is not None:
- prev_key, prev_value = prev_attn_state[:2]
- saved_state: Dict[str, Optional[Tensor]] = {
- "prev_key": prev_key,
- "prev_value": prev_value,
- }
- if len(prev_attn_state) >= 3:
- saved_state["prev_key_padding_mask"] = prev_attn_state[2]
- assert incremental_state is not None
- self.encoder_attn._set_input_buffer(incremental_state, saved_state)
-
- x, attn = self.encoder_attn(
- query=x,
- key=encoder_out,
- value=encoder_out,
- key_padding_mask=encoder_padding_mask,
- incremental_state=incremental_state,
- static_kv=True,
- need_weights=need_attn or (not self.training and self.need_attn),
- need_head_weights=need_head_weights,
- )
- x = self.dropout_module(x)
- x = self.residual_connection(x, residual)
- if not self.normalize_before:
- x = self.encoder_attn_layer_norm(x)
-
- residual = x
- if self.normalize_before:
- x = self.final_layer_norm(x)
-
- x = self.activation_fn(self.fc1(x))
- x = self.activation_dropout_module(x)
- x = self.fc2(x)
- x = self.dropout_module(x)
- x = self.residual_connection(x, residual)
- if not self.normalize_before:
- x = self.final_layer_norm(x)
- if self.onnx_trace and incremental_state is not None:
- saved_state = self.self_attn._get_input_buffer(incremental_state)
- assert saved_state is not None
- if self_attn_padding_mask is not None:
- self_attn_state = [
- saved_state["prev_key"],
- saved_state["prev_value"],
- saved_state["prev_key_padding_mask"],
- ]
- else:
- self_attn_state = [saved_state["prev_key"], saved_state["prev_value"]]
- return x, attn, self_attn_state
- return x, attn, None
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/quantization/pq/modules/qconv.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/quantization/pq/modules/qconv.py
deleted file mode 100644
index d15ec192e8cda6265a198e583a9bf7fb194dd129..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/quantization/pq/modules/qconv.py
+++ /dev/null
@@ -1,115 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.nn.modules.utils import _pair
-
-
-class PQConv2d(nn.Module):
- """
- Quantized counterpart of nn.Conv2d module. Stores the centroids, the assignments
- and the non-quantized biases. The full weight is re-instantiated at each forward
- pass and autograd automatically computes the gradients with respect to the
- centroids.
-
- Args:
- - centroids: centroids of size n_centroids x block_size
- - assignments: assignments of the centroids to the subvectors
- of size self.out_channels x n_blocks
- - bias: the non-quantized bias, must be either torch.Tensor or None
-
- Remarks:
- - We refer the reader to the official documentation of the nn.Conv2d module
- for the other arguments and the behavior of the module.
- - Performance tests on GPU show that this implementation is 10% slower than
- the non-quantized nn.Conv2d module for a standard training loop.
- - During the backward, the gradients are averaged by cluster and not summed.
- This explains the hook registered to the centroids.
- """
-
- def __init__(
- self,
- centroids,
- assignments,
- bias,
- in_channels,
- out_channels,
- kernel_size,
- stride=1,
- padding=0,
- dilation=1,
- groups=1,
- padding_mode="zeros",
- ):
- super(PQConv2d, self).__init__()
- self.block_size = centroids.size(1)
- self.n_centroids = centroids.size(0)
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.kernel_size = _pair(kernel_size)
- self.stride = _pair(stride)
- self.padding = _pair(padding)
- self.dilation = _pair(dilation)
- self.groups = groups
- self.padding_mode = padding_mode
- # check compatibility
- if in_channels // groups * np.prod(self.kernel_size) % self.block_size != 0:
- raise ValueError("Wrong PQ sizes")
- if len(assignments) % out_channels != 0:
- raise ValueError("Wrong PQ sizes")
- if in_channels % groups != 0:
- raise ValueError("in_channels must be divisible by groups")
- if out_channels % groups != 0:
- raise ValueError("out_channels must be divisible by groups")
- # define parameters
- self.centroids = nn.Parameter(centroids, requires_grad=True)
- self.register_buffer("assignments", assignments)
- self.register_buffer("counts", torch.bincount(assignments).type_as(centroids))
- if bias is not None:
- self.bias = nn.Parameter(bias)
- else:
- self.register_parameter("bias", None)
- # register hook for averaging gradients per centroids instead of summing
- self.centroids.register_hook(lambda x: x / self.counts[:, None])
-
- @property
- def weight(self):
- return (
- self.centroids[self.assignments]
- .reshape(-1, self.out_channels, self.block_size)
- .permute(1, 0, 2)
- .reshape(
- self.out_channels, self.in_channels // self.groups, *self.kernel_size
- )
- )
-
- def forward(self, x):
- return F.conv2d(
- x,
- self.weight,
- self.bias,
- self.stride,
- self.padding,
- self.dilation,
- self.groups,
- )
-
- def extra_repr(self):
- s = "{in_channels}, {out_channels}, kernel_size={kernel_size}, stride={stride}"
- if self.padding != (0,) * len(self.padding):
- s += ", padding={padding}"
- if self.dilation != (1,) * len(self.dilation):
- s += ", dilation={dilation}"
- if self.groups != 1:
- s += ", groups={groups}"
- if self.bias is None:
- s += ", bias=False"
- if self.padding_mode != "zeros":
- s += ", padding_mode={padding_mode}"
- s += ", n_centroids={n_centroids}, block_size={block_size}"
- return s.format(**self.__dict__)
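
As the docstring above explains, PQConv2d never stores a dense kernel: the `weight` property gathers one centroid per (block, output channel) assignment and reshapes the result back into a Conv2d-shaped tensor, while the registered hook divides centroid gradients by their cluster counts so they are averaged rather than summed. A standalone sketch of the weight reconstruction, with made-up sizes, could look like this:

```python
# Standalone sketch (made-up sizes, not part of fairseq) of how the `weight`
# property above rebuilds a full conv kernel from PQ centroids + assignments.
import torch

out_channels, in_channels, k = 8, 4, 3
block_size = 9                                   # one 3x3 spatial slice per block
n_blocks_per_filter = in_channels * k * k // block_size   # = 4
n_centroids = 16

centroids = torch.randn(n_centroids, block_size)
# one centroid index per (block, output channel); the total length is a multiple
# of out_channels, matching the "Wrong PQ sizes" check in __init__
assignments = torch.randint(0, n_centroids, (n_blocks_per_filter * out_channels,))

weight = (
    centroids[assignments]                        # (n_blocks * out, block_size)
    .reshape(-1, out_channels, block_size)        # (n_blocks, out, block_size)
    .permute(1, 0, 2)                             # (out, n_blocks, block_size)
    .reshape(out_channels, in_channels, k, k)     # full Conv2d weight
)
assert weight.shape == (out_channels, in_channels, k, k)
```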
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Adobe CC 2019-2020 GenP ? Universal Patch V2.2 PATCHED.md b/spaces/stomexserde/gpt4-ui/Examples/Adobe CC 2019-2020 GenP ? Universal Patch V2.2 PATCHED.md
deleted file mode 100644
index 7514afc8d5e5da240c89d2f238451dca3491a701..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Adobe CC 2019-2020 GenP ? Universal Patch V2.2 PATCHED.md
+++ /dev/null
@@ -1,44 +0,0 @@
-
-How to Use Adobe CC 2019-2020 GenP – Universal Patch v2.2 to Crack Adobe Products
-Adobe CC 2019-2020 GenP – Universal Patch v2.2 is jailbreak software that can crack the latest versions of Adobe Creative Cloud 2019 and 2020 products on Windows. It is developed by cw2k, a GitHub user, and can be downloaded from his repository[^1^].
-Adobe CC 2019-2020 GenP – Universal Patch v2.2
DOWNLOAD ★ https://urlgoal.com/2uI69k
-In this article, we will show you how to use Adobe CC 2019-2020 GenP – Universal Patch v2.2 to crack any Adobe product you want in a few simple steps.
-Step 1: Download and Extract Adobe CC 2019-2020 GenP – Universal Patch v2.2
-First, you need to download Adobe CC 2019-2020 GenP – Universal Patch v2.2 from cw2k's GitHub repository[^1^]. You will get a ZIP file named Adobe-GenP-3.1.zip. Extract it to a folder of your choice.
-Step 2: Run RunMe.au3
-Next, you need to run RunMe.au3, which is a launcher for the patcher. You can double-click on it or right-click and choose Run Script.
-Step 3: Choose Your Adobe Product and Patch It
-After running RunMe.au3, you will see a window with two buttons: Search Files and Custom Path. You have two options to choose your Adobe product:
-
-- If you want to patch all Adobe products in the default location (C:\Program Files\Adobe), click on Search Files and wait until GenP finds all files.
-- If you want to patch one Adobe product at a time or in a different location, click on Custom Path and select the folder that contains the product you want to patch.
-
-Once you have chosen your Adobe product, click on the Pill Button and wait until GenP does its job. You will see a message saying "Done!" when the patching is complete.
-
-Step 4: Enjoy Your Cracked Adobe Product
-Now you can enjoy your cracked Adobe product without any limitations or restrictions. You can use it offline or online, but be careful not to sign in with your Adobe account or update it.
-Note:
-According to cw2k, there are some known issues with some Adobe products when using GenP:
-
-- InDesign and InCopy will have high CPU usage.
- Animate will have some problems with the home screen if signed out.
-- Lightroom Classic will partially work if signed out.
- Acrobat, Rush, Lightroom Online, Photoshop Express, and the Creative Cloud app won't be patched or fully unlocked.
-
-He also said that he may find a solution for them in the next release[^2^].
-Disclaimer:
-This article is for educational purposes only. We do not condone or encourage piracy or illegal use of software. Please support the developers by purchasing their products if you can afford them.
-
-Benefits of Using Adobe Products
-By using Adobe CC 2019-2020 GenP – Universal Patch v2.2, you can unlock the full potential of Adobe products and enjoy their benefits for your personal or professional projects. Here are some of the benefits of using Adobe products:
-
-- You can create and edit universally accessible files (PDFs) that retain their original formatting across different devices and platforms. You can also collect electronic signatures, certify documents, and track their status with Adobe Acrobat Pro DC[^1^].
-- You can design stunning graphics, logos, icons, posters, flyers, and more with Adobe Photoshop, Illustrator, and InDesign. You can also use Adobe Spark to create social media graphics, web pages, and videos in minutes.
-- You can produce professional-quality videos, animations, and motion graphics with Adobe Premiere Pro, After Effects, and Animate. You can also use Adobe Audition to record, edit, and mix audio.
-- You can build responsive websites, mobile apps, and interactive experiences with Adobe Dreamweaver, XD, and Animate. You can also use Adobe Muse to create beautiful websites without coding.
-- You can access your files anytime, anywhere with Adobe Document Cloud and Creative Cloud. You can also collaborate with others and share your work with Adobe Portfolio, Behance, and Creative Cloud Libraries.
-
-Conclusion
-Adobe CC 2019-2020 GenP – Universal Patch v2.2 is a powerful tool that can help you crack any Adobe product you want in a few simple steps. It is free to download and use for educational purposes only. However, if you can afford it, we recommend that you support the developers by purchasing their products from the official website. Adobe products offer many benefits for your personal or professional projects, and they are worth investing in.
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Adolix Split And Merge Pdf V2.0.77 Crack.md b/spaces/stomexserde/gpt4-ui/Examples/Adolix Split And Merge Pdf V2.0.77 Crack.md
deleted file mode 100644
index 4ee95534f25fc34837495c50c76fd40b07b7825f..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Adolix Split And Merge Pdf V2.0.77 Crack.md
+++ /dev/null
@@ -1,36 +0,0 @@
-
-How to Use Adolix Split And Merge Pdf V2.0.77 Crack to Manage Your PDF Files
-PDF files are widely used for various purposes, such as business reports, ebooks, invoices, and personal documents. They offer many advantages, such as smaller size, high portability, and powerful security. However, sometimes you may need to split or merge PDF files to suit your needs. For example, you may want to extract some pages from a large PDF file, or combine several PDF files into one document.
-One of the best tools to help you with this task is Adolix Split And Merge Pdf V2.0.77 Crack. This is free software that lets you split and merge PDF files quickly and easily. You can also mix PDF files by selecting specific pages or page groups from each file. Moreover, you can protect the generated PDF files with a password or watermark, and use command-line arguments to integrate the software into your Explorer context menu.
-Adolix Split And Merge Pdf V2.0.77 Crack
Download File ✫✫✫ https://urlgoal.com/2uIaXq
-In this article, we will show you how to use Adolix Split And Merge Pdf V2.0.77 Crack to manage your PDF files in a few simple steps.
-How to Split PDF Files with Adolix Split And Merge Pdf V2.0.77 Crack
-To split PDF files with Adolix Split And Merge Pdf V2.0.77 Crack, follow these steps:
-
-
-- Download and install Adolix Split And Merge Pdf V2.0.77 Crack from here.
-- Open the software and select the Split tab.
-- Add the PDF file that you want to split by clicking on the Add button or using drag and drop.
-- Choose the output folder where you want to save the split files.
-- Select the split method from the drop-down menu. You can choose to split by page number, page range, size, or custom pattern.
-- Click on the Split this PDF button to start the process.
-- You will see a message confirming that the operation was completed successfully.
-- You can now open the output folder and check the split files.
-
-How to Merge PDF Files with Adolix Split And Merge Pdf V2.0.77 Crack
-To merge PDF files with Adolix Split And Merge Pdf V2.0.77 Crack, follow these steps:
-
-- Download and install Adolix Split And Merge Pdf V2.0.77 Crack from here.
-- Open the software and select the Merge tab.
-- Add the PDF files that you want to merge by clicking on the Add button or using drag and drop.
-- Choose the output folder where you want to save the merged file.
-- Select the merge method from the drop-down menu. You can choose to merge all pages or mix pages by custom page groups.
-- If you want to protect your merged file with password or watermark, click on the Options button and enter the details.
-- Click on the Merge files button to start the process.
-- You will see a message confirming that the operation was completed successfully.
-- You can now open the output folder and check the merged file.
-
-
-We hope this article helped you learn how to use Adolix Split And Merge Pdf V2.0.77 Crack to manage your PDF files. It is free, easy-to-use software that can save you time and hassle when dealing with PDF files; if you prefer to script the same operations, see the sketch below. If you have any questions or feedback, please feel free to contact us.
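
If you would rather script the same split and merge operations than drive a GUI, the sketch below uses the open-source pypdf package (our suggestion, unrelated to Adolix) with placeholder file names:

```python
# Library-based sketch using the open-source pypdf package (pip install pypdf);
# an alternative illustration with hypothetical file names, not part of Adolix.
from pypdf import PdfReader, PdfWriter

# Split: copy the first three pages of input.pdf into split.pdf
reader = PdfReader("input.pdf")
writer = PdfWriter()
for i in range(min(3, len(reader.pages))):
    writer.add_page(reader.pages[i])
with open("split.pdf", "wb") as f:
    writer.write(f)

# Merge: append every page of two source files into merged.pdf
merged = PdfWriter()
for name in ("part1.pdf", "part2.pdf"):
    for page in PdfReader(name).pages:
        merged.add_page(page)
with open("merged.pdf", "wb") as f:
    merged.write(f)
```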
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Camtasia Studio Crack License Key Free Download 2019 TOP.md b/spaces/stomexserde/gpt4-ui/Examples/Camtasia Studio Crack License Key Free Download 2019 TOP.md
deleted file mode 100644
index cc11ba1a7be44de28d7e7fa74a227d2deb35f295..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Camtasia Studio Crack License Key Free Download 2019 TOP.md
+++ /dev/null
@@ -1,39 +0,0 @@
-
-Camtasia Studio Crack License Key Free Download 2019: How to Get It and Why You Should Avoid It
-Camtasia Studio is a popular program for creating and editing videos, screen recordings, presentations, and more. It has many features and tools that make it easy and fun to use. However, it is not free software, and you need to purchase a license key to use it legally.
-Some people may be tempted to look for a Camtasia Studio crack license key free download 2019 on the internet, hoping to get the software for free. However, this is not a good idea and you should avoid it for several reasons. Here are some of them:
-Camtasia Studio Crack License Key Free Download 2019
Download Zip 🗸🗸🗸 https://urlgoal.com/2uI9Yn
-
-- A Camtasia Studio crack license key free download 2019 may not work properly or at all. It may be outdated, corrupted, or incompatible with your system. You may end up wasting your time and effort trying to install and run it.
-- A Camtasia Studio crack license key free download 2019 may contain viruses, malware, spyware, or other harmful programs that can damage your computer or steal your personal information. You may expose yourself to security risks and potential identity theft.
-- A Camtasia Studio crack license key free download 2019 may violate the intellectual property rights of the software developer, TechSmith. You may be breaking the law and risking legal consequences such as fines or lawsuits.
-- A Camtasia Studio crack license key free download 2019 may deprive the software developer of their rightful income and support. You may be hurting their ability to maintain and improve the software and provide customer service.
-
-Therefore, it is better to avoid a Camtasia Studio crack license key free download 2019 and instead purchase a legitimate license key from the official website of TechSmith. You will get fully functional, up-to-date software that is safe and legal to use. You will also support the software developer and enjoy their customer service and technical support.
-If you are looking for a cheaper or alternative option, you can also try some of the free or open source video editing software available online. Some examples are OpenShot, Shotcut, DaVinci Resolve, and Blender. They may not have all the features and tools of Camtasia Studio, but they can still help you create and edit videos for various purposes.
-In conclusion, a Camtasia Studio crack license key free download 2019 is not worth it and you should avoid it. Instead, you should purchase a legitimate license key from TechSmith or try some of the free or open source video editing software online. You will have a better experience and avoid many problems.
-
-
-How to Purchase a Legitimate License Key for Camtasia Studio
-If you want to use Camtasia Studio legally and enjoy its full features and benefits, you need to purchase a license key from the official website of TechSmith. Here are the steps to do so:
-
-- Go to https://www.techsmith.com/video-editor.html and click on the "Buy Now" button.
-- Select the edition and quantity of Camtasia Studio that you want to buy. You can choose between the individual, business, education, government, or non-profit edition. You can also buy multiple licenses if you need them for your team or organization.
-- Enter your billing and payment information. You can pay with a credit card, PayPal, or wire transfer. You can also apply a coupon code if you have one.
-- Confirm your order and complete the payment. You will receive an email confirmation with your license key and download link.
-- Download and install Camtasia Studio on your computer. Enter your license key when prompted to activate the software.
-
-You can now use Camtasia Studio legally and enjoy its full features and benefits. You can also access the online tutorials, resources, and support from TechSmith.
-
-How to Use Camtasia Studio to Create and Edit Videos
-Camtasia Studio is powerful, easy-to-use software for creating and editing videos, screen recordings, presentations, and more. Here are the basic steps for using it:
-
-- Launch Camtasia Studio on your computer and select the option to create a new project or open an existing one.
-- Import the media files that you want to use in your video. You can import videos, images, audio, PowerPoint slides, and more. You can also record your screen or webcam using the built-in recorder.
-- Edit your video using the timeline and the tools panel. You can trim, split, crop, rotate, zoom, pan, speed up, slow down, add transitions, effects, annotations, captions, quizzes, and more. You can also use the library to access ready-made assets such as intros, outros, music, backgrounds, icons, etc.
-- Preview your video using the canvas and the player. You can adjust the size and position of the canvas and the player. You can also use the markers and hotspots to navigate your video.
-- Export your video using the share button. You can choose between different formats and quality options. You can also upload your video directly to YouTube, Vimeo, Google Drive, Screencast.com, or other platforms.
-
-You can now enjoy your video and share it with others. You can also use Camtasia Studio to create and edit more videos for various purposes.
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Erykah BaduNew Amerykah Part One 4th World War Full Album Zip.md b/spaces/stomexserde/gpt4-ui/Examples/Erykah BaduNew Amerykah Part One 4th World War Full Album Zip.md
deleted file mode 100644
index eb63a6b3919e1f7442cd5dd0d35617aa98a10943..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Erykah BaduNew Amerykah Part One 4th World War Full Album Zip.md
+++ /dev/null
@@ -1,17 +0,0 @@
-
-Erykah Badu's New Amerykah Part One: 4th World War - A Soulful and Political Album
-
-Erykah Badu is one of the most influential and innovative artists in the neo-soul genre. Her fourth studio album, New Amerykah Part One: 4th World War, released in 2008, is a testament to her musical and lyrical prowess. The album is a concept album that explores the social and political issues of the post-9/11 era, such as racism, war, poverty, addiction, and consumerism. The album also showcases Badu's eclectic and experimental sound, blending soul, funk, hip hop, jazz, and electronic elements.
-
-The album opens with the track "Amerykahn Promise", which samples the funk band Roy Ayers Ubiquity and sets the tone for the album's groovy and upbeat vibe. The song is a declaration of Badu's artistic vision and identity, as she sings "I'm not who you think I am / I'm not who you think I am / I'm a woman on a mission". The next track, "The Healer", is a tribute to hip hop as a universal and spiritual force, featuring a hypnotic beat produced by Madlib and references to various hip hop icons. The song also expresses Badu's belief in the power of music to heal and unite people across cultures and religions.
-Erykah BaduNew Amerykah Part One 4th World War Full Album Zip
Download >>> https://urlgoal.com/2uIc34
-
-The album then shifts to a more somber and introspective mood with songs like "Me", "My People", "Soldier", and "The Cell". These songs deal with the personal and collective struggles of African Americans in the modern society, such as self-love, identity, oppression, resistance, and survival. Badu delivers powerful and poetic lyrics that reflect her own experiences and observations as a black woman in America. For example, in "Me", she sings "Had two babies different dudes / And for them both my love was true / This is my last interview / Oh hey there's me". In "Soldier", she pays homage to the civil rights activists and revolutionaries who fought for justice and freedom, such as Martin Luther King Jr., Malcolm X, Marcus Garvey, and Huey P. Newton.
-
-The album also features some lighter and more playful songs, such as "Honey", "Twinkle", and "That Hump". These songs showcase Badu's sense of humor and creativity, as well as her sensual and romantic side. In "Honey", she compares her lover to various sweet treats, such as chocolate cake, ice cream, and caramel. In "Twinkle", she sings about the joys and challenges of being a star in the music industry, while also poking fun at herself and her critics. In "That Hump", she addresses the issue of drug addiction with a catchy chorus and a funky bass line.
-
-The album closes with the epic track "Telephone", which is dedicated to the late producer and rapper J Dilla, who was a close friend and collaborator of Badu. The song is a heartfelt tribute to Dilla's life and legacy, as well as a message of hope and comfort to his family and fans. The song features a soulful piano melody, a gospel choir, and a spoken word outro by Badu's son Seven. The song is also a reminder of Badu's connection to the hip hop community and her role as a healer through her music.
-
-New Amerykah Part One: 4th World War is a masterpiece that showcases Erykah Badu's artistic genius and social consciousness. The album is a musical journey that takes the listener through various emotions, themes, and styles. The album is also a reflection of Badu's personal growth and evolution as an artist and a human being. The album is a must-listen for anyone who appreciates soulful and political music.
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Ik Multimedia Miroslav Philharmonik Keygen Torrent ((LINK)).md b/spaces/stomexserde/gpt4-ui/Examples/Ik Multimedia Miroslav Philharmonik Keygen Torrent ((LINK)).md
deleted file mode 100644
index 334611dc61f450d5944f9ea60aa411ed48c5aa03..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Ik Multimedia Miroslav Philharmonik Keygen Torrent ((LINK)).md
+++ /dev/null
@@ -1,17 +0,0 @@
-
-
-How to Download and Install IK Multimedia Miroslav Philharmonik 2 for Free
-IK Multimedia Miroslav Philharmonik 2 is a virtual instrument that brings the orchestra to life with stunning realism and expressiveness. It features over 2,700 instruments and 55 GB of samples recorded by the legendary Miroslav Vitous and his team of musicians. Whether you need strings, woodwinds, brass, percussion, keyboards, or choir, Miroslav Philharmonik 2 has it all.
-But how can you get this amazing software for free? Well, there are some websites that offer keygen torrents for IK Multimedia Miroslav Philharmonik 2. A keygen is a program that generates a serial number or a license code for a software product. A torrent is a file that contains information about how to download and share files over a peer-to-peer network. By using a keygen torrent, you can bypass the official registration process and activate the software without paying anything.
-ik multimedia miroslav philharmonik keygen torrent
DOWNLOAD ❤ https://urlgoal.com/2uI7FP
-However, before you download and install IK Multimedia Miroslav Philharmonik 2 using a keygen torrent, you should be aware of some risks and drawbacks. First of all, downloading and using pirated software is illegal and unethical. You are violating the intellectual property rights of the developers and depriving them of their deserved income. You are also exposing yourself to potential legal consequences if you are caught.
-Secondly, downloading and using keygen torrents can be dangerous for your computer and your personal data. Many keygen torrents are infected with malware, viruses, spyware, or ransomware that can harm your system or steal your information. You may also encounter fake or corrupted files that will not work properly or at all. Moreover, you may face compatibility issues or performance problems with the software, as it may not be updated or supported by the official developers.
-Therefore, we strongly advise you to avoid using keygen torrents for IK Multimedia Miroslav Philharmonik 2 or any other software product. Instead, you should purchase the software from the official website[^1^] or an authorized dealer. By doing so, you will get a legitimate copy of the software that is safe, reliable, and fully functional. You will also get access to technical support, updates, and additional content from the developers. And most importantly, you will support the hard work and creativity of the people who made this amazing software possible.
-
-If you are still interested in trying out IK Multimedia Miroslav Philharmonik 2 for free, there is a legal and safe way to do so. You can download a free trial version of the software from the official website. The trial version will let you use the software for 10 days with no limitations. You can explore all the features and sounds of Miroslav Philharmonik 2 and see if it suits your needs and preferences.
-After the trial period expires, you can decide whether you want to buy the full version of the software or not. The full version costs $499.99 and includes a lifetime license for up to three computers. You can also choose to pay in installments or use a coupon code to get a discount. If you buy the full version, you will also get a free copy of SampleTank 4 SE, a powerful sound and groove workstation with over 2,000 instruments and 30 GB of samples.
-
-IK Multimedia Miroslav Philharmonik 2 is a remarkable virtual instrument that will elevate your music production to a new level of realism and expression. It is designed for composers, producers, arrangers, and musicians of all genres and styles. Whether you need to create orchestral scores, film soundtracks, pop songs, or ambient soundscapes, Miroslav Philharmonik 2 will provide you with the sounds and tools you need. Don't miss this opportunity to get this amazing software for free or at a reduced price. Download the trial version today and experience the magic of Miroslav Philharmonik 2.
-
-
\ No newline at end of file
diff --git a/spaces/subhajitmaji/MusicGen/audiocraft/data/audio.py b/spaces/subhajitmaji/MusicGen/audiocraft/data/audio.py
deleted file mode 100644
index 2048df6f175d7303bcf5c7b931922fd297908ead..0000000000000000000000000000000000000000
--- a/spaces/subhajitmaji/MusicGen/audiocraft/data/audio.py
+++ /dev/null
@@ -1,215 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Audio IO methods are defined in this module (info, read, write).
-We rely on the av library for faster reads when possible, otherwise on torchaudio.
-"""
-
-from dataclasses import dataclass
-from pathlib import Path
-import logging
-import typing as tp
-
-import numpy as np
-import soundfile
-import torch
-from torch.nn import functional as F
-import torchaudio as ta
-
-import av
-
-from .audio_utils import f32_pcm, i16_pcm, normalize_audio
-
-
-_av_initialized = False
-
-
-def _init_av():
- global _av_initialized
- if _av_initialized:
- return
- logger = logging.getLogger('libav.mp3')
- logger.setLevel(logging.ERROR)
- _av_initialized = True
-
-
-@dataclass(frozen=True)
-class AudioFileInfo:
- sample_rate: int
- duration: float
- channels: int
-
-
-def _av_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
- _init_av()
- with av.open(str(filepath)) as af:
- stream = af.streams.audio[0]
- sample_rate = stream.codec_context.sample_rate
- duration = float(stream.duration * stream.time_base)
- channels = stream.channels
- return AudioFileInfo(sample_rate, duration, channels)
-
-
-def _soundfile_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
- info = soundfile.info(filepath)
- return AudioFileInfo(info.samplerate, info.duration, info.channels)
-
-
-def audio_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
- # torchaudio no longer returns useful duration information for some formats like mp3s.
- filepath = Path(filepath)
- if filepath.suffix in ['.flac', '.ogg']: # TODO: Validate .ogg can be safely read with av_info
- # ffmpeg has some weird issue with flac.
- return _soundfile_info(filepath)
- else:
- return _av_info(filepath)
-
-
-def _av_read(filepath: tp.Union[str, Path], seek_time: float = 0, duration: float = -1.) -> tp.Tuple[torch.Tensor, int]:
- """FFMPEG-based audio file reading using PyAV bindings.
- Soundfile cannot read mp3 and av_read is more efficient than torchaudio.
-
- Args:
- filepath (str or Path): Path to audio file to read.
- seek_time (float): Time at which to start reading in the file.
- duration (float): Duration to read from the file. If set to -1, the whole file is read.
- Returns:
- Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate
- """
- _init_av()
- with av.open(str(filepath)) as af:
- stream = af.streams.audio[0]
- sr = stream.codec_context.sample_rate
- num_frames = int(sr * duration) if duration >= 0 else -1
- frame_offset = int(sr * seek_time)
- # we need a small negative offset otherwise we get some edge artifact
- # from the mp3 decoder.
- af.seek(int(max(0, (seek_time - 0.1)) / stream.time_base), stream=stream)
- frames = []
- length = 0
- for frame in af.decode(streams=stream.index):
- current_offset = int(frame.rate * frame.pts * frame.time_base)
- strip = max(0, frame_offset - current_offset)
- buf = torch.from_numpy(frame.to_ndarray())
- if buf.shape[0] != stream.channels:
- buf = buf.view(-1, stream.channels).t()
- buf = buf[:, strip:]
- frames.append(buf)
- length += buf.shape[1]
- if num_frames > 0 and length >= num_frames:
- break
- assert frames
- # If the above assert fails, it is likely because we seeked past the end of file point,
- # in which case ffmpeg returns a single frame with only zeros, and a weird timestamp.
- # This will need proper debugging, in due time.
- wav = torch.cat(frames, dim=1)
- assert wav.shape[0] == stream.channels
- if num_frames > 0:
- wav = wav[:, :num_frames]
- return f32_pcm(wav), sr
-
-
-def audio_read(filepath: tp.Union[str, Path], seek_time: float = 0.,
- duration: float = -1., pad: bool = False) -> tp.Tuple[torch.Tensor, int]:
- """Read audio by picking the most appropriate backend tool based on the audio format.
-
- Args:
- filepath (str or Path): Path to audio file to read.
- seek_time (float): Time at which to start reading in the file.
- duration (float): Duration to read from the file. If set to -1, the whole file is read.
- pad (bool): Pad output audio if not reaching expected duration.
- Returns:
- Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate.
- """
- fp = Path(filepath)
- if fp.suffix in ['.flac', '.ogg']: # TODO: check if we can safely use av_read for .ogg
- # There is some bug with ffmpeg and reading flac
- info = _soundfile_info(filepath)
- frames = -1 if duration <= 0 else int(duration * info.sample_rate)
- frame_offset = int(seek_time * info.sample_rate)
- wav, sr = soundfile.read(filepath, start=frame_offset, frames=frames, dtype=np.float32)
- assert info.sample_rate == sr, f"Mismatch of sample rates {info.sample_rate} {sr}"
- wav = torch.from_numpy(wav).t().contiguous()
- if len(wav.shape) == 1:
- wav = torch.unsqueeze(wav, 0)
- elif (
- fp.suffix in ['.wav', '.mp3'] and fp.suffix[1:] in ta.utils.sox_utils.list_read_formats()
- and duration <= 0 and seek_time == 0
- ):
- # Torchaudio is faster if we load an entire file at once.
- wav, sr = ta.load(fp)
- else:
- wav, sr = _av_read(filepath, seek_time, duration)
- if pad and duration > 0:
- expected_frames = int(duration * sr)
- wav = F.pad(wav, (0, expected_frames - wav.shape[-1]))
- return wav, sr
-
-
-def audio_write(stem_name: tp.Union[str, Path],
- wav: torch.Tensor, sample_rate: int,
- format: str = 'wav', mp3_rate: int = 320, normalize: bool = True,
- strategy: str = 'peak', peak_clip_headroom_db: float = 1,
- rms_headroom_db: float = 18, loudness_headroom_db: float = 14,
- loudness_compressor: bool = False,
- log_clipping: bool = True, make_parent_dir: bool = True,
- add_suffix: bool = True) -> Path:
- """Convenience function for saving audio to disk. Returns the filename the audio was written to.
-
- Args:
- stem_name (str or Path): Filename without extension which will be added automatically.
- format (str): Either "wav" or "mp3".
- mp3_rate (int): kbps when using mp3s.
- normalize (bool): if `True` (default), normalizes according to the prescribed
- strategy (see after). If `False`, the strategy is only used in case clipping
- would happen.
- strategy (str): Can be either 'clip', 'peak', or 'rms'. Default is 'peak',
- i.e. audio is normalized by its largest value. RMS normalizes by root-mean-square
- with extra headroom to avoid clipping. 'clip' just clips.
- peak_clip_headroom_db (float): Headroom in dB when doing 'peak' or 'clip' strategy.
- rms_headroom_db (float): Headroom in dB when doing 'rms' strategy. This must be much larger
- than the `peak_clip` one to avoid further clipping.
- loudness_headroom_db (float): Target loudness for loudness normalization.
- loudness_compressor (bool): Uses tanh for soft clipping when strategy is 'loudness'.
- log_clipping (bool): If True, basic logging on stderr when clipping still
- occurs despite strategy (only for 'rms').
- make_parent_dir (bool): Make parent directory if it doesn't exist.
- Returns:
- Path: Path of the saved audio.
- """
- assert wav.dtype.is_floating_point, "wav is not floating point"
- if wav.dim() == 1:
- wav = wav[None]
- elif wav.dim() > 2:
- raise ValueError("Input wav should be at most 2 dimension.")
- assert wav.isfinite().all()
- wav = normalize_audio(wav, normalize, strategy, peak_clip_headroom_db,
- rms_headroom_db, loudness_headroom_db, log_clipping=log_clipping,
- sample_rate=sample_rate, stem_name=str(stem_name))
- kwargs: dict = {}
- if format == 'mp3':
- suffix = '.mp3'
- kwargs.update({"compression": mp3_rate})
- elif format == 'wav':
- wav = i16_pcm(wav)
- suffix = '.wav'
- kwargs.update({"encoding": "PCM_S", "bits_per_sample": 16})
- else:
- raise RuntimeError(f"Invalid format {format}. Only wav or mp3 are supported.")
- if not add_suffix:
- suffix = ''
- path = Path(str(stem_name) + suffix)
- if make_parent_dir:
- path.parent.mkdir(exist_ok=True, parents=True)
- try:
- ta.save(path, wav, sample_rate, **kwargs)
- except Exception:
- if path.exists():
- # we do not want to leave half written files around.
- path.unlink()
- raise
- return path
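
The helpers in the deleted module above pick a backend by file suffix (soundfile for FLAC/Ogg, torchaudio or PyAV otherwise) and normalize on write according to the chosen strategy. A hedged usage sketch, assuming the module is importable as audiocraft.data.audio and that a sufficiently long example.mp3 exists:

```python
# Usage sketch for the deleted helpers above. Hypothetical paths; assumes the
# module is importable as audiocraft.data.audio and example.mp3 is >= 5 s long.
from audiocraft.data.audio import audio_info, audio_read, audio_write

info = audio_info("example.mp3")                    # sample_rate, duration, channels
wav, sr = audio_read("example.mp3", seek_time=1.0,  # start one second in
                     duration=4.0, pad=True)        # pad up to exactly 4 s if short
assert wav.shape == (info.channels, int(4.0 * sr))

# peak-normalize with 1 dB of headroom and save as a 16-bit WAV file
out_path = audio_write("example_out", wav, sr, format="wav",
                       strategy="peak", peak_clip_headroom_db=1)
print(out_path)  # example_out.wav
```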
diff --git a/spaces/sukiru/rvc-Blue-archives/lib/infer_pack/models.py b/spaces/sukiru/rvc-Blue-archives/lib/infer_pack/models.py
deleted file mode 100644
index 3665d03bc0514a6ed07d3372ea24717dae1e0a65..0000000000000000000000000000000000000000
--- a/spaces/sukiru/rvc-Blue-archives/lib/infer_pack/models.py
+++ /dev/null
@@ -1,1142 +0,0 @@
-import math, pdb, os
-from time import time as ttime
-import torch
-from torch import nn
-from torch.nn import functional as F
-from lib.infer_pack import modules
-from lib.infer_pack import attentions
-from lib.infer_pack import commons
-from lib.infer_pack.commons import init_weights, get_padding
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from lib.infer_pack.commons import init_weights
-import numpy as np
-from lib.infer_pack import commons
-
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch == None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class TextEncoder768(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(768, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
- if pitch == None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
- sine_amp: amplitude of sine waveform (default 0.1)
- noise_std: std of Gaussian noise (default 0.003)
- voiced_threshold: F0 threshold for U/V classification (default 0)
- flag_for_pulse: this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
- rad_values = (f0_buf / self.sampling_rate) % 1  # the % 1 means the n_har products cannot be optimized in post-processing
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
- tmp_over_one = torch.cumsum(rad_values, 1)  # % 1  # applying % 1 here would keep the later cumsum from being optimized
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(
- 2, 1
- ) #######
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
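
SineGen above builds each harmonic by multiplying the F0 contour, accumulates phase as a cumulative sum of f0 / sample_rate, and mixes noise into unvoiced frames. The simplified standalone sketch below (no upsampling, no random initial phase, no noise mixing) shows just the phase-accumulation step:

```python
# Simplified, standalone illustration of the phase-accumulation idea in
# SineGen.forward above (no upsampling, random initial phase, or noise mixing).
import math
import torch

sr = 16000
f0 = torch.full((1, 400, 1), 220.0)                 # (batch, frames, 1): constant 220 Hz
harmonic_mult = torch.arange(1, 4).view(1, 1, -1)   # fundamental + two overtones

rad_values = (f0 * harmonic_mult / sr) % 1.0        # per-step phase increment, in cycles
phase = torch.cumsum(rad_values, dim=1)             # accumulated phase per harmonic
sine_waves = 0.1 * torch.sin(2 * math.pi * phase)   # sine_amp = 0.1, as in SineGen
print(sine_waves.shape)                             # torch.Size([1, 400, 3])
```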
-
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonics above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
- voiced_threshold: threshold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
- noise_source (batchsize, length, 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshod=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
-
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMs256NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
- if type(sr) == type("strr"):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
- ):  # ds here is the speaker id, shape [bs, 1]
- # print(1,pitch.shape)#[bs,t]
- g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time dim, broadcast
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- # print(-2,pitchf.shape,z_slice.shape)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- if rate:
- head = int(z_p.shape[2] * rate)
- z_p = z_p[:, :, -head:]
- x_mask = x_mask[:, :, -head:]
- nsff0 = nsff0[:, -head:]
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec(z * x_mask, nsff0, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs768NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
- if type(sr) == type("strr"):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder768(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
- ):  # ds here is the speaker id, shape [bs, 1]
- # print(1,pitch.shape)#[bs,t]
- g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time dim, broadcast
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- # print(-2,pitchf.shape,z_slice.shape)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- if rate:
- head = int(z_p.shape[2] * rate)
- z_p = z_p[:, :, -head:]
- x_mask = x_mask[:, :, -head:]
- nsff0 = nsff0[:, -head:]
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec(z * x_mask, nsff0, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs256NSFsid_nono(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr=None,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=False,
- )
- self.dec = Generator(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(self, phone, phone_lengths, y, y_lengths, ds): # ds is the speaker id, shape [bs, 1]
- g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]; the trailing 1 is the time axis, broadcast later
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- o = self.dec(z_slice, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, sid, rate=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- if rate:
- head = int(z_p.shape[2] * rate)
- z_p = z_p[:, :, -head:]
- x_mask = x_mask[:, :, -head:]
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec(z * x_mask, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs768NSFsid_nono(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr=None,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder768(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=False,
- )
- self.dec = Generator(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(self, phone, phone_lengths, y, y_lengths, ds): # ds is the speaker id, shape [bs, 1]
- g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]; the trailing 1 is the time axis, broadcast later
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- o = self.dec(z_slice, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, sid, rate=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- if rate:
- head = int(z_p.shape[2] * rate)
- z_p = z_p[:, :, -head:]
- x_mask = x_mask[:, :, -head:]
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec(z * x_mask, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class MultiPeriodDiscriminatorV2(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminatorV2, self).__init__()
- # periods = [2, 3, 5, 7, 11, 17]
- periods = [2, 3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Mini Kms Activator V1.2 Office20 !EXCLUSIVE!.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Mini Kms Activator V1.2 Office20 !EXCLUSIVE!.md
deleted file mode 100644
index 859d5e8832999ab783d510734052fa28771f1cea..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Mini Kms Activator V1.2 Office20 !EXCLUSIVE!.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Mini Kms Activator V1.2 Office20
Download 🌟 https://cinurl.com/2uEZh0
-
- 4d29de3e1b
-
-
-
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/WiFi Commander 3D Analyze Monitor Full Version _HOT_.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/WiFi Commander 3D Analyze Monitor Full Version _HOT_.md
deleted file mode 100644
index 5298511a6d865e30b5c8dc52c999ad9559a86778..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/WiFi Commander 3D Analyze Monitor Full Version _HOT_.md
+++ /dev/null
@@ -1,8 +0,0 @@
-WiFi Commander: 3D Analyze Monitor full version
Download File ✶✶✶ https://cinurl.com/2uEZ7n
-
-WiFi Commander: 3D analysis and monitoring ... Scan your surroundings for Wi-Fi networks ✓ The original 3D analyzer ... Pan-European game information ...
-WiFi Commander 8a78ff9644
-
-
-
diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/configs/_base_/models/nonlocal_r50-d8.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/configs/_base_/models/nonlocal_r50-d8.py
deleted file mode 100644
index 5674a39854cafd1f2e363bac99c58ccae62f24da..0000000000000000000000000000000000000000
--- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/configs/_base_/models/nonlocal_r50-d8.py
+++ /dev/null
@@ -1,46 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained='open-mmlab://resnet50_v1c',
- backbone=dict(
- type='ResNetV1c',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- dilations=(1, 1, 2, 4),
- strides=(1, 2, 1, 1),
- norm_cfg=norm_cfg,
- norm_eval=False,
- style='pytorch',
- contract_dilation=True),
- decode_head=dict(
- type='NLHead',
- in_channels=2048,
- in_index=3,
- channels=512,
- dropout_ratio=0.1,
- reduction=2,
- use_scale=True,
- mode='embedded_gaussian',
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=1024,
- in_index=2,
- channels=256,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=19,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='whole'))
diff --git a/spaces/t13718236382/bingoGPT4/src/components/chat.tsx b/spaces/t13718236382/bingoGPT4/src/components/chat.tsx
deleted file mode 100644
index a37ab1cc96ca2e6bfd9acbe313a8d946bfd5c3d4..0000000000000000000000000000000000000000
--- a/spaces/t13718236382/bingoGPT4/src/components/chat.tsx
+++ /dev/null
@@ -1,93 +0,0 @@
-'use client'
-
-import { useCallback, useEffect, useMemo, useState } from 'react'
-import { useAtom } from 'jotai'
-import Image from 'next/image'
-import { cn } from '@/lib/utils'
-import { ChatList } from '@/components/chat-list'
-import { ChatPanel } from '@/components/chat-panel'
-import { WelcomeScreen } from '@/components/welcome-screen'
-import { ChatScrollAnchor } from '@/components/chat-scroll-anchor'
-import { ToneSelector } from './tone-selector'
-import { ChatHeader } from './chat-header'
-import { ChatSuggestions } from './chat-suggestions'
-import { bingConversationStyleAtom } from '@/state'
-import { ButtonScrollToBottom } from '@/components/button-scroll-to-bottom'
-import StopIcon from '@/assets/images/stop.svg'
-import { useBing } from '@/lib/hooks/use-bing'
-import { ChatMessageModel } from '@/lib/bots/bing/types'
-import { ChatNotification } from './chat-notification'
-import { Settings } from './settings'
-import { ChatHistory } from './chat-history'
-
-export type ChatProps = React.ComponentProps<'div'> & { initialMessages?: ChatMessageModel[] }
-
-export default function Chat({ className }: ChatProps) {
-
- const [bingStyle, setBingStyle] = useAtom(bingConversationStyleAtom)
- const {
- messages,
- sendMessage,
- resetConversation,
- stopGenerating,
- setInput,
- bot,
- input,
- generating,
- isSpeaking,
- uploadImage,
- attachmentList,
- setAttachmentList,
- } = useBing()
-
- useEffect(() => {
- window.scrollTo({
- top: document.body.offsetHeight,
- behavior: 'smooth'
- })
- }, [])
-
- return (
-
-
-
-
-
-
- {messages.length ? (
- <>
-
-
-
-
-
- {generating ? (
-
-
-
- ) : null}
- >
- ) : null}
-
-
-
-
- )
-}
diff --git a/spaces/taneemishere/html-code-generation-from-images-with-deep-neural-networks/classes/model/__init__.py b/spaces/taneemishere/html-code-generation-from-images-with-deep-neural-networks/classes/model/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/tangshitao/MVDiffusion/lib/multi_Perspec2Equirec.py b/spaces/tangshitao/MVDiffusion/lib/multi_Perspec2Equirec.py
deleted file mode 100644
index a07170104d0bc830052958262ff197c6602c5885..0000000000000000000000000000000000000000
--- a/spaces/tangshitao/MVDiffusion/lib/multi_Perspec2Equirec.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import os
-import sys
-import cv2
-import numpy as np
-import lib.Perspec2Equirec as P2E
-
-
-class Perspective:
- def __init__(self, img_array , F_T_P_array ):
-
- assert len(img_array)==len(F_T_P_array)
-
- self.img_array = img_array
- self.F_T_P_array = F_T_P_array
-
-
- def GetEquirec(self,height,width):
- #
- # THETA is left/right angle, PHI is up/down angle, both in degree
- #
- merge_image = np.zeros((height,width,3))
- merge_mask = np.zeros((height,width,3))
-
-
- for img_dir,[F,T,P] in zip (self.img_array,self.F_T_P_array):
- per = P2E.Perspective(img_dir,F,T,P) # Build projector for this perspective view (image, FOV, theta, phi)
- img , mask = per.GetEquirec(height,width) # Project the view onto the equirectangular canvas
- mask = mask.astype(np.float32)
- img = img.astype(np.float32)
-
- weight_mask = np.zeros((img_dir.shape[0],img_dir.shape[1], 3))
- w = img_dir.shape[1]
- weight_mask[:,0:w//2,:] = np.linspace(0,1,w//2)[...,None]
- weight_mask[:,w//2:,:] = np.linspace(1,0,w//2)[...,None]
- weight_mask = P2E.Perspective(weight_mask,F,T,P)
- weight_mask, _ = weight_mask.GetEquirec(height,width)
-
-
- blur = cv2.blur(mask,(5,5))
- blur = blur * mask
- mask = (blur == 1) * blur + (blur != 1) * blur * 0.05
-
- merge_image += img * weight_mask
- merge_mask += weight_mask
-
- merge_image[merge_mask==0] = 255.
- merge_mask = np.where(merge_mask==0,1,merge_mask)
- merge_image = (np.divide(merge_image,merge_mask))
-
-
- return merge_image
-
-
-
-
-
-
diff --git a/spaces/taskswithcode/semantic_clustering/twc_embeddings.py b/spaces/taskswithcode/semantic_clustering/twc_embeddings.py
deleted file mode 100644
index 4529381e749e50255bb276427fc39f0cdd5cf6da..0000000000000000000000000000000000000000
--- a/spaces/taskswithcode/semantic_clustering/twc_embeddings.py
+++ /dev/null
@@ -1,407 +0,0 @@
-from transformers import AutoModel, AutoTokenizer
-from transformers import AutoModelForCausalLM
-from scipy.spatial.distance import cosine
-import argparse
-import json
-import pdb
-import torch
-import torch.nn.functional as F
-
-def read_text(input_file):
- arr = open(input_file).read().split("\n")
- return arr[:-1]
-
-
-class CausalLMModel:
- def __init__(self):
- self.model = None
- self.tokenizer = None
- self.debug = False
- print("In CausalLMModel Constructor")
-
- def init_model(self,model_name = None):
- # Get our models - The package will take care of downloading the models automatically
- # For best performance: Muennighoff/SGPT-5.8B-weightedmean-nli-bitfit
- if (self.debug):
- print("Init model",model_name)
- # For best performance: EleutherAI/gpt-j-6B
- if (model_name is None):
- model_name = "EleutherAI/gpt-neo-125M"
- self.tokenizer = AutoTokenizer.from_pretrained(model_name)
- self.model = AutoModelForCausalLM.from_pretrained(model_name)
- self.model.eval()
- self.prompt = 'Documents are searched to find matches with the same content.\nThe document "{}" is a good search result for "'
-
- def compute_embeddings(self,input_file_name,input_data,is_file):
- if (self.debug):
- print("Computing embeddings for:", input_data[:20])
- model = self.model
- tokenizer = self.tokenizer
-
- texts = read_text(input_data) if is_file == True else input_data
- query = texts[0]
- docs = texts[1:]
-
- # Tokenize input texts
-
- #print(f"Query: {query}")
- scores = []
- for doc in docs:
- context = self.prompt.format(doc)
-
- context_enc = tokenizer.encode(context, add_special_tokens=False)
- continuation_enc = tokenizer.encode(query, add_special_tokens=False)
- # Slice off the last token, as we take its probability from the one before
- model_input = torch.tensor(context_enc+continuation_enc[:-1])
- continuation_len = len(continuation_enc)
- input_len, = model_input.shape
-
- # [seq_len] -> [seq_len, vocab]
- logprobs = torch.nn.functional.log_softmax(model(model_input)[0], dim=-1).cpu()
- # [seq_len, vocab] -> [continuation_len, vocab]
- logprobs = logprobs[input_len-continuation_len:]
- # Gather the log probabilities of the continuation tokens -> [continuation_len]
- logprobs = torch.gather(logprobs, 1, torch.tensor(continuation_enc).unsqueeze(-1)).squeeze(-1)
- score = torch.sum(logprobs)
- scores.append(score.tolist())
- return texts,scores
-
- def output_results(self,output_file,texts,scores,main_index = 0):
- cosine_dict = {}
- docs = texts[1:]
- if (self.debug):
- print("Total sentences",len(texts))
- assert(len(scores) == len(docs))
- for i in range(len(docs)):
- cosine_dict[docs[i]] = scores[i]
-
- if (self.debug):
- print("Input sentence:",texts[main_index])
- sorted_dict = dict(sorted(cosine_dict.items(), key=lambda item: item[1],reverse = True))
- if (self.debug):
- for key in sorted_dict:
- print("Document score for \"%s\" is: %.3f" % (key[:100], sorted_dict[key]))
- if (output_file is not None):
- with open(output_file,"w") as fp:
- fp.write(json.dumps(sorted_dict,indent=0))
- return sorted_dict
-
-
-class SGPTQnAModel:
- def __init__(self):
- self.model = None
- self.tokenizer = None
- self.debug = False
- print("In SGPT Q&A Constructor")
-
-
- def init_model(self,model_name = None):
- # Get our models - The package will take care of downloading the models automatically
- # For best performance: Muennighoff/SGPT-5.8B-weightedmean-nli-bitfit
- if (self.debug):
- print("Init model",model_name)
- if (model_name is None):
- model_name = "Muennighoff/SGPT-125M-weightedmean-msmarco-specb-bitfit"
- self.tokenizer = AutoTokenizer.from_pretrained(model_name)
- self.model = AutoModel.from_pretrained(model_name)
- self.model.eval()
- self.SPECB_QUE_BOS = self.tokenizer.encode("[", add_special_tokens=False)[0]
- self.SPECB_QUE_EOS = self.tokenizer.encode("]", add_special_tokens=False)[0]
-
- self.SPECB_DOC_BOS = self.tokenizer.encode("{", add_special_tokens=False)[0]
- self.SPECB_DOC_EOS = self.tokenizer.encode("}", add_special_tokens=False)[0]
-
-
- def tokenize_with_specb(self,texts, is_query):
- # Tokenize without padding
- batch_tokens = self.tokenizer(texts, padding=False, truncation=True)
- # Add special brackets & pay attention to them
- for seq, att in zip(batch_tokens["input_ids"], batch_tokens["attention_mask"]):
- if is_query:
- seq.insert(0, self.SPECB_QUE_BOS)
- seq.append(self.SPECB_QUE_EOS)
- else:
- seq.insert(0, self.SPECB_DOC_BOS)
- seq.append(self.SPECB_DOC_EOS)
- att.insert(0, 1)
- att.append(1)
- # Add padding
- batch_tokens = self.tokenizer.pad(batch_tokens, padding=True, return_tensors="pt")
- return batch_tokens
-
- def get_weightedmean_embedding(self,batch_tokens, model):
- # Get the embeddings
- with torch.no_grad():
- # Get hidden state of shape [bs, seq_len, hid_dim]
- last_hidden_state = self.model(**batch_tokens, output_hidden_states=True, return_dict=True).last_hidden_state
-
- # Get weights of shape [bs, seq_len, hid_dim]
- weights = (
- torch.arange(start=1, end=last_hidden_state.shape[1] + 1)
- .unsqueeze(0)
- .unsqueeze(-1)
- .expand(last_hidden_state.size())
- .float().to(last_hidden_state.device)
- )
-
- # Get attn mask of shape [bs, seq_len, hid_dim]
- input_mask_expanded = (
- batch_tokens["attention_mask"]
- .unsqueeze(-1)
- .expand(last_hidden_state.size())
- .float()
- )
-
- # Perform weighted mean pooling across seq_len: bs, seq_len, hidden_dim -> bs, hidden_dim
- sum_embeddings = torch.sum(last_hidden_state * input_mask_expanded * weights, dim=1)
- sum_mask = torch.sum(input_mask_expanded * weights, dim=1)
-
- embeddings = sum_embeddings / sum_mask
-
- return embeddings
-
- def compute_embeddings(self,input_file_name,input_data,is_file):
- if (self.debug):
- print("Computing embeddings for:", input_data[:20])
- model = self.model
- tokenizer = self.tokenizer
-
- texts = read_text(input_data) if is_file == True else input_data
-
- queries = [texts[0]]
- docs = texts[1:]
- query_embeddings = self.get_weightedmean_embedding(self.tokenize_with_specb(queries, is_query=True), self.model)
- doc_embeddings = self.get_weightedmean_embedding(self.tokenize_with_specb(docs, is_query=False), self.model)
- return texts,(query_embeddings,doc_embeddings)
-
-
-
- def output_results(self,output_file,texts,embeddings,main_index = 0):
- # Calculate cosine similarities
- # Cosine similarities are in [-1, 1]. Higher means more similar
- query_embeddings = embeddings[0]
- doc_embeddings = embeddings[1]
- cosine_dict = {}
- queries = [texts[0]]
- docs = texts[1:]
- if (self.debug):
- print("Total sentences",len(texts))
- for i in range(len(docs)):
- cosine_dict[docs[i]] = 1 - cosine(query_embeddings[0], doc_embeddings[i])
-
- if (self.debug):
- print("Input sentence:",texts[main_index])
- sorted_dict = dict(sorted(cosine_dict.items(), key=lambda item: item[1],reverse = True))
- if (self.debug):
- for key in sorted_dict:
- print("Cosine similarity with \"%s\" is: %.3f" % (key, sorted_dict[key]))
- if (output_file is not None):
- with open(output_file,"w") as fp:
- fp.write(json.dumps(sorted_dict,indent=0))
- return sorted_dict
-
-
-class SimCSEModel:
- def __init__(self):
- self.model = None
- self.tokenizer = None
- self.debug = False
- print("In SimCSE constructor")
-
- def init_model(self,model_name = None):
- if (model_name == None):
- model_name = "princeton-nlp/sup-simcse-roberta-large"
- #self.model = SimCSE(model_name)
- self.tokenizer = AutoTokenizer.from_pretrained(model_name)
- self.model = AutoModel.from_pretrained(model_name)
-
- def compute_embeddings(self,input_file_name,input_data,is_file):
- texts = read_text(input_data) if is_file == True else input_data
- inputs = self.tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
- with torch.no_grad():
- embeddings = self.model(**inputs, output_hidden_states=True, return_dict=True).pooler_output
- return texts,embeddings
-
- def output_results(self,output_file,texts,embeddings,main_index = 0):
- # Calculate cosine similarities
- # Cosine similarities are in [-1, 1]. Higher means more similar
- cosine_dict = {}
- #print("Total sentences",len(texts))
- for i in range(len(texts)):
- cosine_dict[texts[i]] = 1 - cosine(embeddings[main_index], embeddings[i])
-
- #print("Input sentence:",texts[main_index])
- sorted_dict = dict(sorted(cosine_dict.items(), key=lambda item: item[1],reverse = True))
- if (self.debug):
- for key in sorted_dict:
- print("Cosine similarity with \"%s\" is: %.3f" % (key, sorted_dict[key]))
- if (output_file is not None):
- with open(output_file,"w") as fp:
- fp.write(json.dumps(sorted_dict,indent=0))
- return sorted_dict
-
-
-
-class SGPTModel:
- def __init__(self):
- self.model = None
- self.tokenizer = None
- self.debug = False
- print("In SGPT Constructor")
-
-
- def init_model(self,model_name = None):
- # Get our models - The package will take care of downloading the models automatically
- # For best performance: Muennighoff/SGPT-5.8B-weightedmean-nli-bitfit
- if (self.debug):
- print("Init model",model_name)
- if (model_name is None):
- model_name = "Muennighoff/SGPT-125M-weightedmean-nli-bitfit"
- self.tokenizer = AutoTokenizer.from_pretrained(model_name)
- self.model = AutoModel.from_pretrained(model_name)
- #self.tokenizer = AutoTokenizer.from_pretrained("Muennighoff/SGPT-1.3B-weightedmean-msmarco-specb-bitfit")
- #self.model = AutoModel.from_pretrained("Muennighoff/SGPT-1.3B-weightedmean-msmarco-specb-bitfit")
- #self.tokenizer = AutoTokenizer.from_pretrained("Muennighoff/SGPT-5.8B-weightedmean-msmarco-specb-bitfit")
- #self.model = AutoModel.from_pretrained("Muennighoff/SGPT-5.8B-weightedmean-msmarco-specb-bitfit")
- # Deactivate Dropout (There is no dropout in the above models so it makes no difference here but other SGPT models may have dropout)
- self.model.eval()
-
- def compute_embeddings(self,input_file_name,input_data,is_file):
- if (self.debug):
- print("Computing embeddings for:", input_data[:20])
- model = self.model
- tokenizer = self.tokenizer
-
- texts = read_text(input_data) if is_file == True else input_data
-
- # Tokenize input texts
- batch_tokens = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
-
- # Get the embeddings
- with torch.no_grad():
- # Get hidden state of shape [bs, seq_len, hid_dim]
- last_hidden_state = model(**batch_tokens, output_hidden_states=True, return_dict=True).last_hidden_state
-
- # Get weights of shape [bs, seq_len, hid_dim]
- weights = (
- torch.arange(start=1, end=last_hidden_state.shape[1] + 1)
- .unsqueeze(0)
- .unsqueeze(-1)
- .expand(last_hidden_state.size())
- .float().to(last_hidden_state.device)
- )
-
- # Get attn mask of shape [bs, seq_len, hid_dim]
- input_mask_expanded = (
- batch_tokens["attention_mask"]
- .unsqueeze(-1)
- .expand(last_hidden_state.size())
- .float()
- )
-
- # Perform weighted mean pooling across seq_len: bs, seq_len, hidden_dim -> bs, hidden_dim
- sum_embeddings = torch.sum(last_hidden_state * input_mask_expanded * weights, dim=1)
- sum_mask = torch.sum(input_mask_expanded * weights, dim=1)
-
- embeddings = sum_embeddings / sum_mask
- return texts,embeddings
-
- def output_results(self,output_file,texts,embeddings,main_index = 0):
- # Calculate cosine similarities
- # Cosine similarities are in [-1, 1]. Higher means more similar
- cosine_dict = {}
- if (self.debug):
- print("Total sentences",len(texts))
- for i in range(len(texts)):
- cosine_dict[texts[i]] = 1 - cosine(embeddings[main_index], embeddings[i])
-
- if (self.debug):
- print("Input sentence:",texts[main_index])
- sorted_dict = dict(sorted(cosine_dict.items(), key=lambda item: item[1],reverse = True))
- if (self.debug):
- for key in sorted_dict:
- print("Cosine similarity with \"%s\" is: %.3f" % (key, sorted_dict[key]))
- if (output_file is not None):
- with open(output_file,"w") as fp:
- fp.write(json.dumps(sorted_dict,indent=0))
- return sorted_dict
-
-
-
-
-
-class HFModel:
- def __init__(self):
- self.model = None
- self.tokenizer = None
- self.debug = False
- print("In HF Constructor")
-
-
- def init_model(self,model_name = None):
- # Get our models - The package will take care of downloading the models automatically
- # For best performance: Muennighoff/SGPT-5.8B-weightedmean-nli-bitfit
- #print("Init model",model_name)
- if (model_name is None):
- model_name = "sentence-transformers/all-MiniLM-L6-v2"
- self.tokenizer = AutoTokenizer.from_pretrained(model_name)
- self.model = AutoModel.from_pretrained(model_name)
- self.model.eval()
-
- def mean_pooling(self,model_output, attention_mask):
- token_embeddings = model_output[0] #First element of model_output contains all token embeddings
- input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
- return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
-
- def compute_embeddings(self,input_file_name,input_data,is_file):
- #print("Computing embeddings for:", input_data[:20])
- model = self.model
- tokenizer = self.tokenizer
-
- texts = read_text(input_data) if is_file == True else input_data
-
- encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
-
- # Compute token embeddings
- with torch.no_grad():
- model_output = model(**encoded_input)
-
- # Perform pooling
- sentence_embeddings = self.mean_pooling(model_output, encoded_input['attention_mask'])
-
- # Normalize embeddings
- sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
-
- return texts,sentence_embeddings
-
- def output_results(self,output_file,texts,embeddings,main_index = 0):
- # Calculate cosine similarities
- # Cosine similarities are in [-1, 1]. Higher means more similar
- cosine_dict = {}
- #print("Total sentences",len(texts))
- for i in range(len(texts)):
- cosine_dict[texts[i]] = 1 - cosine(embeddings[main_index], embeddings[i])
-
- #print("Input sentence:",texts[main_index])
- sorted_dict = dict(sorted(cosine_dict.items(), key=lambda item: item[1],reverse = True))
- if (self.debug):
- for key in sorted_dict:
- print("Cosine similarity with \"%s\" is: %.3f" % (key, sorted_dict[key]))
- if (output_file is not None):
- with open(output_file,"w") as fp:
- fp.write(json.dumps(sorted_dict,indent=0))
- return sorted_dict
-
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser(description='SGPT model for sentence embeddings ',formatter_class=argparse.ArgumentDefaultsHelpFormatter)
- parser.add_argument('-input', action="store", dest="input",required=True,help="Input file with sentences")
- parser.add_argument('-output', action="store", dest="output",default="output.txt",help="Output file with results")
- parser.add_argument('-model', action="store", dest="model",default="sentence-transformers/all-MiniLM-L6-v2",help="model name")
-
- results = parser.parse_args()
- obj = HFModel()
- obj.init_model(results.model)
- texts, embeddings = obj.compute_embeddings(results.input,results.input,is_file = True)
- results = obj.output_results(results.output,texts,embeddings)
diff --git a/spaces/test1444/Pose_Video/app.py b/spaces/test1444/Pose_Video/app.py
deleted file mode 100644
index e2dd3a69eb9c327ee4e21fc57d126a7461ef2cc2..0000000000000000000000000000000000000000
--- a/spaces/test1444/Pose_Video/app.py
+++ /dev/null
@@ -1,77 +0,0 @@
-import cv2
-import gradio as gr
-import mim
-mim.install('mmcv-full==1.5.0')
-from mmpose.apis import (inference_top_down_pose_model, init_pose_model,
- vis_pose_result, process_mmdet_results)
-from mmdet.apis import inference_detector, init_detector
-import mediapy
-
-pose_config = 'configs/topdown_heatmap_hrnet_w48_coco_256x192.py'
-pose_checkpoint = 'hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth'
-det_config = 'configs/faster_rcnn_r50_fpn_1x_coco.py'
-det_checkpoint = 'faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'
-
-# initialize pose model
-pose_model = init_pose_model(pose_config, pose_checkpoint, device='cpu')
-# initialize detector
-det_model = init_detector(det_config, det_checkpoint, device='cpu')
-
-
-max_num_frames=120
-def predict(video_path):
- cap = cv2.VideoCapture(video_path)
- height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
- width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
- fps = cap.get(cv2.CAP_PROP_FPS)
-
- preds_all = []
-
- # fourcc = cv2.VideoWriter_fourcc(*'mp4v')
- # out_file = tempfile.NamedTemporaryFile(suffix='.mp4', delete=False)
- # writer = cv2.VideoWriter(out_file.name, fourcc, fps, (width, height))
- frames = []
-
- for _ in range(max_num_frames):
- ok, frame = cap.read()
- if not ok:
- break
- rgb_frame = frame[:,:,::-1]
- mmdet_results = inference_detector(det_model, rgb_frame)
- person_results = process_mmdet_results(mmdet_results, cat_id=1)
- pose_results, returned_outputs = inference_top_down_pose_model(
- pose_model,
- rgb_frame,
- person_results,
- bbox_thr=0.3,
- format='xyxy',
- dataset=pose_model.cfg.data.test.type)
- vis_result = vis_pose_result(
- pose_model,
- rgb_frame,
- pose_results,
- dataset=pose_model.cfg.data.test.type,
- show=False)
- frames.append(vis_result)
- cap.release()
- # writer.release()
- mediapy.write_video("out.mp4", frames, fps=fps)
- return "out.mp4"
-
-title = "Pose Estimation video"
-description = ""
-article = ""
-
-example_list = ['examples/000001_mpiinew_test.mp4']
-
-# Create the Gradio demo
-demo = gr.Interface(fn=predict,
- inputs=gr.Video(label='Input Video'),
- outputs=gr.Video(label='Result'),
- examples=example_list,
- title=title,
- description=description,
- article=article)
-
-# Launch the demo!
-demo.queue().launch(show_api=False)
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Dvd Shrink 4.1 Serial.md b/spaces/tialenAdioni/chat-gpt-api/logs/Dvd Shrink 4.1 Serial.md
deleted file mode 100644
index 1355f19fc9e380df13191f46c5526548c9541c5c..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Dvd Shrink 4.1 Serial.md
+++ /dev/null
@@ -1,31 +0,0 @@
-
-How to Backup Your DVDs with DVD Shrink 4.1 Serial
-If you have a collection of DVDs that you want to preserve, you might be interested in DVD Shrink 4.1 Serial. This is a software that allows you to compress and copy your DVDs to your computer or another disc. You can also customize your backup by removing unwanted features, adding subtitles, or splitting the DVD into multiple parts.
-In this article, we will show you how to use DVD Shrink 4.1 Serial to backup your DVDs in a few simple steps.
-Dvd shrink 4.1 serial
DOWNLOAD ✶✶✶ https://urlcod.com/2uKa6l
-Step 1: Download and Install DVD Shrink 4.1 Serial
-The first thing you need to do is to download and install DVD Shrink 4.1 Serial on your computer. You can get it from this link. After downloading, run the setup file and follow the instructions to install the software.
-Step 2: Insert Your DVD and Open It with DVD Shrink
-Next, insert the DVD that you want to back up into your DVD drive and open DVD Shrink. You will see the DVD Shrink main window.
-
-Click on the "Open Disc" button and select your DVD drive from the drop-down menu. DVD Shrink will analyze your DVD and show you its contents.
-Step 3: Choose Your Backup Options
-Now you can choose how you want to backup your DVD. You can use the following options:
-
-- Full Disc: This will copy the entire DVD without any changes.
-- Main Movie: This will copy only the main movie and skip the menus and extras.
-- Re-author: This will let you customize your backup by selecting which titles, audio tracks, subtitles, and menus you want to keep or remove.
-- Compression Settings: This will let you adjust the quality and size of your backup by changing the compression level and output format.
-
-You can also preview your backup by clicking on the "Preview" button at the bottom right corner of the window.
-Step 4: Start Your Backup
-Once you are satisfied with your backup options, click on the "Backup!" button at the top of the window. The backup options window will appear.
-
-Here you can choose where you want to save your backup. You can either save it as an ISO file on your hard drive, or burn it directly to a blank disc using Nero or another burning software. You can also choose to create a DVD folder or split your backup into multiple files.
-
-Click on "OK" to start your backup. Depending on the size and complexity of your DVD, this may take some time. You can monitor the progress and status of your backup on the window.
-Step 5: Enjoy Your Backup
-When your backup is finished, you can eject your original DVD and enjoy your backup on your computer or another device. You can also use DVD Shrink 4.1 Serial to backup more DVDs in the future.
-DVD Shrink 4.1 Serial is a handy tool that helps you preserve your DVDs in a digital format. It is easy to use and offers many options to customize your backup. However, please note that copying DVDs may be illegal in some countries or regions, so please use this software responsibly and respect the copyright laws.
cec2833e83
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/How to Get Terramodel 10.6 for Free and Unlock Its Full Potential.md b/spaces/tialenAdioni/chat-gpt-api/logs/How to Get Terramodel 10.6 for Free and Unlock Its Full Potential.md
deleted file mode 100644
index c0f331bb98dd265cdf6af730484870a708fac151..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/How to Get Terramodel 10.6 for Free and Unlock Its Full Potential.md
+++ /dev/null
@@ -1,21 +0,0 @@
-
-How to Download and Install Terramodel 10.6 for Windows
-Terramodel is a powerful software package for civil engineers, surveyors, and contractors who need a CAD and design package with integrated support for raw survey data. With Terramodel, you can import data collected using Trimble devices, perform COGO calculations, design roadways, generate contours, calculate volumes, and more. Terramodel also supports online services of Trimble, such as EPC, WIS, ASRA, and XENTRY TIPS.
-terramodel 10.6 download
Download ☆☆☆☆☆ https://urlcod.com/2uK7hA
-In this article, we will show you how to download and install Terramodel 10.6 for Windows. Terramodel 10.6 is the latest version of the software that includes updates and bug fixes. You can use the same installation whether or not you have a Sentinel key to enable modules other than Field Data.
-Step 1: Download Terramodel 10.6
-You can download Terramodel 10.6 from the official website of Geocomp Systems, which is the authorized distributor of Terramodel in Australia and New Zealand. The download link is https://www.geocomp.com.au/terramodel/. You can also find other information about Terramodel on this website, such as features, modules, paks, and customer care membership.
-The file size of Terramodel 10.6 is 228 MB. It will take some time to download depending on your internet speed. Your browser and virus checker might prompt you with requests for Keep, Open, More Info, Run anyway and so on. You need to allow the download to proceed.
-Step 2: Install Terramodel 10.6
-Once you have downloaded the file, you need to run it to start the installation process. The file name is terramodel_1061.exe. You may need to right-click on the file and select Run as administrator to avoid any permission issues.
-
-The installation process will install Terramodel 10.60, update it to 10.61, and install Sentinel System Driver 7.60 in a single sequence. You need to follow the prompts and accept the license agreement to complete the installation. You can choose the destination folder and the modules you want to install.
-The installation process may take several minutes to finish. You may need to restart your computer after the installation is done.
-Step 3: Activate Terramodel 10.6
-To use Terramodel 10.6, you need to have a valid license and certificate from Trimble or Geocomp Systems. If you have a Sentinel key (a USB dongle) that enables modules other than Field Data, you need to plug it into your computer before launching Terramodel. If you do not have a Sentinel key, you can still use Terramodel with the Field Data module for free.
-To activate Terramodel 10.6, you need to run the program and enter your license number and certificate number when prompted. You can find these numbers on your invoice or email from Trimble or Geocomp Systems. If you do not have these numbers or if you have any problems with activation, you can contact Geocomp Systems for assistance.
-Conclusion
-Terramodel 10.6 is a powerful software package for civil engineers, surveyors, and contractors who need a CAD and design package with integrated support for raw survey data. You can download and install Terramodel 10.6 from the official website of Geocomp Systems in three easy steps: download the file, run the installation process, and activate the software with your license and certificate numbers.
-If you need more information about Terramodel or if you want to buy modules or paks for more features and functions, you can visit https://www.geocomp.com.au/terramodel/ or contact Geocomp Systems for customer support.
ddb901b051
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Intel Desktop Board 21 B6 E1 E2 Driver Download Everything You Need to Know.md b/spaces/tialenAdioni/chat-gpt-api/logs/Intel Desktop Board 21 B6 E1 E2 Driver Download Everything You Need to Know.md
deleted file mode 100644
index 144a1906dd4c2c9f50117988f0ff93a8e0cfaa59..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Intel Desktop Board 21 B6 E1 E2 Driver Download Everything You Need to Know.md
+++ /dev/null
@@ -1,114 +0,0 @@
-
-Intel Desktop Board 21 B6 E1 E2 Driver Download: Everything You Need to Know
-If you are looking for a way to update or install the driver for your Intel Desktop Board 21 B6 E1 E2, then you might be interested in this article. Here, we will explain what is Intel Desktop Board 21 B6 E1 E2, why you need to download its driver, how to download its driver, and how to troubleshoot any problems or errors that you might encounter while using it.
-intel desktop board 21 b6 e1 e2 driver download
Download File ->>->>->> https://urlcod.com/2uK8yN
-What is Intel Desktop Board 21 B6 E1 E2?
-Intel Desktop Board 21 B6 E1 E2 is a motherboard that supports Intel processors and chipsets. It is a part of the Intel Desktop Board product family that was discontinued in 2013. However, many users still have this motherboard and use it for their PCs.
-Intel Desktop Board 21 B6 E1 E2 has many features and specifications that make it a reliable and powerful motherboard. Some of them are:
-
-- It supports Intel Core i7, i5, i3, Pentium and Celeron processors with LGA1150 socket.
-- It has four DIMM slots that support up to 32 GB of DDR3 memory with dual channel architecture.
-- It has six SATA ports that support up to 6 Gb/s data transfer rate.
-- It has one PCI Express 3.0 x16 slot, one PCI Express 2.0 x16 slot, two PCI Express 2.0 x1 slots and one PCI slot.
-- It has integrated graphics with Intel HD Graphics and supports up to three displays with HDMI, DVI-D and VGA ports.
-- It has integrated audio with Realtek ALC892 codec and supports 7.1 channel surround sound.
-- It has integrated LAN with Intel I217V Gigabit Ethernet controller and supports Wake-on-LAN and PXE boot.
-- It has eight USB ports (four USB 3.0 and four USB 2.0) and supports USB BIOS Flashback and USB Charger.
-- It has a UEFI BIOS with EZ Mode and Advanced Mode that allows users to customize their settings and preferences.
-
-Why Do You Need to Download Intel Desktop Board 21 B6 E1 E2 Driver?
-A driver is a software that allows your PC to communicate with your hardware devices such as your motherboard, processor, chipset, graphics card, audio card, network card, etc. Without a driver, your PC might not be able to recognize or use your hardware devices properly or at all.
-Therefore, you need to download Intel Desktop Board 21 B6 E1 E2 driver to ensure that your PC can communicate with your motherboard and use its features and functions optimally. By downloading Intel Desktop Board 21 B6 E1 E2 driver, you can also improve the performance, stability and security of your PC and fix any bugs or errors that might occur with your motherboard.
-How to Download Intel Desktop Board 21 B6 E1 E2 Driver?
-To download Intel Desktop Board 21 B6 E1 E2 driver, you have several options to do so. Some of them are:
-
-- You can use the Intel Driver & Support Assistant tool by visiting https://www.intel.com/content/www/us/en/download-center/home.html. This tool can automatically identify your products and get driver and software updates for your Intel hardware. You just need to follow the instructions on the website to download and install the tool and then run it on your PC. The tool will scan your PC and detect your Intel Desktop Board 21 B6 E1 E2 and other Intel devices. It will then show you the available updates for your devices and allow you to download and install them easily.
-- You can use the Support for Intel Boards & Kits page by visiting https://www.intel.com/content/www/us/en/support/products/78586/boards-and-kits.html. This page can help you find featured content, product specifications, warranty information, community posts, and more for your Intel Boards & Kits. You just need to select your product from the list or use the search box to find your Intel Desktop Board 21 B6 E1 E2. You will then see a page that contains all the information and resources for your product. You can click on the "Downloads & Drivers" tab to see the available downloads for your product and choose the ones that you need.
-- You can use the Intel Desktop Boards page by visiting https://www.intel.com/content/www/us/en/download/19506/intel-desktop-boards.html. This page can help you download device drivers for Intel Desktop Boards that are no longer available on Intel Download Center after July 2015. You just need to scroll down the page until you see the section "Intel® Desktop Boards with Intel® H81 Chipset". You will then see a link that says "Download drivers for this board". You can click on this link to see the available drivers for your board and download them accordingly.
-
-How to Troubleshoot Intel Desktop Board 21 B6 E1 E2 Driver?
-Sometimes, you might encounter some problems or errors while using Intel Desktop Board 21 B6 E1 E2 driver on your PC. These problems or errors might be caused by various factors such as compatibility issues, corrupted files, missing components, insufficient resources, etc. Therefore, you should know how to troubleshoot Intel Desktop Board 21 B6 E1 E2 driver and fix the problems or errors that you face.
-Here are some common problems or errors that you might encounter while using Intel Desktop Board 21 B6 E1 E2 driver and how to troubleshoot them:
-
- Intel Desktop Board 21 B6 E1 E2 driver does not launch or crashes: This might be due to a compatibility issue between Intel Desktop Board 21 B6 E1 E2 driver and your version of Windows. To troubleshoot this problem, you can try the following solutions:
-
- - Check the system requirements for running Intel Desktop Board 21 B6 E1 E2 driver and make sure that your PC or the PC that you are using meets them.
- - Run Intel Desktop Board 21 B6 E1 E2 driver as an administrator by right-clicking on the file and selecting "Run as administrator".
- - Run Intel Desktop Board 21 B6 E1 E2 driver in compatibility mode by right-clicking on the file, selecting "Properties", clicking on the "Compatibility" tab, checking the box "Run this program in compatibility mode for:", and choosing a compatible Windows version from the drop-down menu.
- - Update your PC or the PC that you are using to the latest version of Windows and install all the necessary updates and drivers.
-
-
-- Intel Desktop Board 21 B6 E1 E2 driver does not recognize or execute some functions or toolboxes: This might be due to a missing component or a corrupted file in Intel Desktop Board 21 B6 E1 E2 driver. To troubleshoot this problem, you can try the following solutions:
-
- - Check the contents of Intel Desktop Board 21 B6 E1 E2 driver and make sure that all the files and folders are intact and complete.
- - Reinstall Intel Desktop Board 21 B6 E1 E2 driver by downloading it from a reliable source and following the installation instructions.
- - Repair Intel Desktop Board 21 B6 E1 E2 driver by using a repair tool or utility that can fix any errors or issues with the driver.
-
-
-- Intel Desktop Board 21 B6 E1 E2 driver causes some problems or errors with other devices or software: This might be due to a conflict or an incompatibility between Intel Desktop Board 21 B6 E1 E2 driver and other devices or software on your PC or the PC that you are using. To troubleshoot this problem, you can try the following solutions:
-
- - Check the compatibility of Intel Desktop Board 21 B6 E1 E2 driver with other devices or software on your PC or the PC that you are using and make sure that they are compatible.
- - Update or reinstall other devices or software on your PC or the PC that you are using to the latest version and make sure that they work well with Intel Desktop Board 21 B6 E1 E2 driver.
- - Disable or uninstall other devices or software on your PC or the PC that you are using that might cause conflicts or problems with Intel Desktop Board 21 B6 E1 E2 driver.
-
-
-
-Conclusion
-Intel Desktop Board 21 B6 E1 E2 driver is a software that allows your PC to communicate with your motherboard and use its features and functions optimally. By downloading Intel Desktop Board 21 B6 E1 E2 driver, you can also improve the performance, stability and security of your PC and fix any bugs or errors that might occur with your motherboard. However, you might also encounter some problems or errors while using Intel Desktop Board 21 B6 E1 E2 driver on your PC. Therefore, you should know how to download, install, update and troubleshoot Intel Desktop Board 21 B6 E1 E2 driver and fix the problems or errors that you face.
-What is the Difference Between Intel Desktop Board 21 B6 E1 E2 and Other Intel Desktop Boards?
-Intel Desktop Board 21 B6 E1 E2 is one of the many models of Intel Desktop Boards that were produced by Intel. Intel Desktop Boards are motherboards that support Intel processors and chipsets and offer various features and specifications for different purposes and needs. However, Intel announced the discontinuance of the Intel Desktop Board product family in 2013 and stopped shipping them entirely by 2015. Therefore, Intel Desktop Boards are no longer available or supported by Intel.
-Intel Desktop Board 21 B6 E1 E2 is different from other Intel Desktop Boards in terms of its features and specifications. Some of the differences are:
-
-- Intel Desktop Board 21 B6 E1 E2 supports Intel Core i7, i5, i3, Pentium and Celeron processors with LGA1150 socket, while other Intel Desktop Boards might support different types or generations of Intel processors with different sockets.
-- Intel Desktop Board 21 B6 E1 E2 has four DIMM slots that support up to 32 GB of DDR3 memory with dual channel architecture, while other Intel Desktop Boards might have different numbers or types of memory slots that support different amounts or types of memory.
-- Intel Desktop Board 21 B6 E1 E2 has six SATA ports that support up to 6 Gb/s data transfer rate, while other Intel Desktop Boards might have different numbers or types of SATA ports that support different data transfer rates.
-- Intel Desktop Board 21 B6 E1 E2 has one PCI Express 3.0 x16 slot, one PCI Express 2.0 x16 slot, two PCI Express 2.0 x1 slots and one PCI slot, while other Intel Desktop Boards might have different numbers or types of expansion slots that support different standards or speeds.
-- Intel Desktop Board 21 B6 E1 E2 has integrated graphics with Intel HD Graphics and supports up to three displays with HDMI, DVI-D and VGA ports, while other Intel Desktop Boards might have different types or levels of integrated graphics or support different numbers or types of displays or ports.
-- Intel Desktop Board 21 B6 E1 E2 has integrated audio with Realtek ALC892 codec and supports 7.1 channel surround sound, while other Intel Desktop Boards might have different types or levels of integrated audio or support different numbers or types of audio channels or ports.
-- Intel Desktop Board 21 B6 E1 E2 has integrated LAN with Intel I217V Gigabit Ethernet controller and supports Wake-on-LAN and PXE boot, while other Intel Desktop Boards might have different types or levels of integrated LAN or support different features or protocols.
-- Intel Desktop Board 21 B6 E1 E2 has eight USB ports (four USB 3.0 and four USB 2.0) and supports USB BIOS Flashback and USB Charger, while other Intel Desktop Boards might have different numbers or types of USB ports or support different features or functions.
-- Intel Desktop Board 21 B6 E1 E2 has a UEFI BIOS with EZ Mode and Advanced Mode that allows users to customize their settings and preferences, while other Intel Desktop Boards might have different types or versions of BIOS or offer different options or modes.
-
-Conclusion
-Intel Desktop Board 21 B6 E1 E2 driver is a software that allows your PC to communicate with your motherboard and use its features and functions optimally. By downloading Intel Desktop Board 21 B6 E1 E2 driver, you can also improve the performance, stability and security of your PC and fix any bugs or errors that might occur with your motherboard. However, you might also encounter some problems or errors while using Intel Desktop Board 21 B6 E1 E2 driver on your PC. Therefore, you should know how to download, install, update and troubleshoot Intel Desktop Board 21 B6 E1 E2 driver and fix the problems or errors that you face.
-To recap, Intel Desktop Board 21 B6 E1 E2 is one of the many models in the Intel Desktop Board family, which Intel discontinued in 2013 and stopped shipping entirely by 2015, so the boards are no longer available or supported by Intel. What sets this model apart are its specifications: an LGA1150 socket for Intel Core i7, i5, i3, Pentium and Celeron processors; four DIMM slots for up to 32 GB of dual-channel DDR3 memory; six SATA ports at up to 6 Gb/s; one PCI Express 3.0 x16, one PCI Express 2.0 x16, two PCI Express 2.0 x1 and one PCI slot; integrated Intel HD Graphics driving up to three displays over HDMI, DVI-D and VGA; integrated Realtek ALC892 audio with 7.1 channel surround sound; an Intel I217V Gigabit Ethernet controller with Wake-on-LAN and PXE boot; eight USB ports (four USB 3.0 and four USB 2.0) with USB BIOS Flashback and USB Charger; and a UEFI BIOS with EZ Mode and Advanced Mode for customizing settings.
-
-
\ No newline at end of file
diff --git a/spaces/timpal0l/chat-ui/src/lib/types/Conversation.ts b/spaces/timpal0l/chat-ui/src/lib/types/Conversation.ts
deleted file mode 100644
index d9120dd1ce64766f2edb5cc78694474837e7a840..0000000000000000000000000000000000000000
--- a/spaces/timpal0l/chat-ui/src/lib/types/Conversation.ts
+++ /dev/null
@@ -1,19 +0,0 @@
-import type { ObjectId } from "mongodb";
-import type { Message } from "./Message";
-
-export interface Conversation {
- _id: ObjectId;
-
- // Can be undefined for shared convo then deleted
- sessionId: string;
-
- title: string;
- messages: Message[];
-
- createdAt: Date;
- updatedAt: Date;
-
- meta?: {
- fromShareId?: string;
- };
-}
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Facebook Get Access Code.md b/spaces/tioseFevbu/cartoon-converter/scripts/Facebook Get Access Code.md
deleted file mode 100644
index 08fd58d6238cef506ff31cdda63aa3c3c66ffa9c..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Facebook Get Access Code.md
+++ /dev/null
@@ -1,31 +0,0 @@
-
-How to Get a Security Code to Log into Facebook
-If you have enabled two-factor authentication on your Facebook account, you will need a security code to log in from a new device or browser. A security code is a six-digit number that is sent to your phone or generated by an app. It helps to protect your account from unauthorized access.
-Facebook Get Access Code
Download ⇒ https://urlcod.com/2uHxyU
-There are several ways to get a security code for Facebook login. In this article, we will explain each method and how to use it.
-
-1. Text Message (SMS)
-This is the most common way to get a security code for Facebook login. You need to have a mobile phone number associated with your Facebook account. When you try to log in, Facebook will send you a text message with a six-digit code. You need to enter this code on the login screen to verify your identity.
-To use this method, make sure you have access to your phone and that you can receive text messages. If you don't receive the code, you can request a new one or try another method.
-
-2. Security Key
-A security key is a physical device that you can use to log into Facebook without a code. You need to have a security key that is compatible with your device and browser. When you try to log in, Facebook will ask you to tap your security key on your device. This will confirm your identity and allow you to access your account.
-To use this method, make sure you have your security key with you and that it is connected to your device. If you don't have a security key or it doesn't work, you can try another method.
-
-
-3. Third-Party App
-A third-party authenticator app can generate security codes for Facebook login. You need an app that supports two-factor authentication, such as Google Authenticator or Authy, and you must link it to your Facebook account before you can use it. When you try to log in, Facebook will ask you to enter a code from the app: open the app and copy the code displayed on the screen.
-To use this method, make sure you have the app installed on your device and that it is synced with your Facebook account. If you don't have an app or it doesn't work, you can try another method.
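-Under the hood, most authenticator apps generate standard time-based one-time passwords (TOTP): the app and the service share a secret, and both derive the same six-digit code from it every 30 seconds. As a rough illustration only (not Facebook's internal implementation; the pyotp library and the secret below are just an example), the idea looks like this:
-
-```python
-import pyotp
-
-# the shared secret established when the authenticator app is linked (example value only)
-secret = "JBSWY3DPEHPK3PXP"
-totp = pyotp.TOTP(secret)
-
-print(totp.now())             # current six-digit code, rotates every 30 seconds
-print(totp.verify("123456"))  # True only while that code is currently valid
-```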
-
-4. Recognized Device
-A recognized device is a device that Facebook already knows and trusts. You need to have logged into Facebook from this device before and chosen to save it as a trusted device. When you try to log in from a recognized device, Facebook will not ask you for a code. Instead, it will ask you to confirm your login attempt by clicking on a button or checking a box.
-To use this method, make sure you are logging in from a device that you have used before and that you have marked as trusted. If you don't have access to a recognized device or it doesn't work, you can try another method.
-
-5. Recovery Codes
-A recovery code is a backup code that you can use to log into Facebook if you lose access to all other methods. You need to have generated and saved these codes beforehand from your Facebook settings. Each code can only be used once and expires after 30 days. When you try to log in, Facebook will ask you to enter one of these codes instead of a regular security code.
-To use this method, make sure you have your recovery codes with you and that they are valid. If you don't have any recovery codes or they are expired, you will not be able to use this method.
-
-Conclusion
-Getting a security code for Facebook login is easy if you have set up two-factor authentication on your account. You can choose from different methods depending on your preference and situation. However, if you lose access to all of these methods, you may not be able to log into your account. Therefore, it is important to keep your phone number and recovery codes updated and secure.
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/First Love Nikka Costa Music Midi.md b/spaces/tioseFevbu/cartoon-converter/scripts/First Love Nikka Costa Music Midi.md
deleted file mode 100644
index d0bddef3c63382f95473267e16021c336a2c6fb5..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/First Love Nikka Costa Music Midi.md
+++ /dev/null
@@ -1,14 +0,0 @@
-
-First Love by Nikka Costa: A Soulful Song with a Violin Twist
-First Love is a song by American singer Nikka Costa, who is known for her blend of pop, soul, and blues genres. The song was released in 1981, when Costa was only nine years old, and became a hit in Europe and Asia. The song is about the innocence and joy of young love, and features Costa's powerful vocals and catchy melody.
-First Love Nikka Costa Music Midi
DOWNLOAD ————— https://urlcod.com/2uHvIs
-One of the most interesting aspects of First Love is the use of violin as the main instrument. The song has a violin solo that adds a touch of classical elegance and emotion to the otherwise upbeat tune. The violin part was arranged by rinrievalryuu, a user on Musescore.com, a website where musicians can share their sheet music and MIDI files. Rinrievalryuu's arrangement captures the essence of the original song while adding some variations and embellishments.
-If you are looking for a fun and soulful song to play on your violin, you can download and print the sheet music for First Love by Nikka Costa from Musescore.com[^1^]. You can also listen to the MIDI file and watch a video of rinrievalryuu playing the song on YouTube. First Love by Nikka Costa is a song that will make you smile and sing along.
-The lyrics of First Love by Nikka Costa are simple yet heartfelt, expressing the feelings of a young girl who is experiencing love for the first time. The lyrics were written by Roger Joyce and Teddy Randazzo, who also wrote songs for Little Anthony and the Imperials, The Stylistics, and The Temptations. The lyrics capture the innocence and joy of first love, as well as the frustration and longing of not being able to express it to the object of one's affection.
-The chorus of the song is catchy and memorable, repeating the phrase "my first love" four times. The verses describe how the girl's behavior has changed since she fell in love, how she confides in her mirror and her teddy bear, and how she wishes that her crush would notice her and reciprocate her feelings. The song also uses some rhymes, such as "me" and "see", "pray" and "way", and "bed" and "head". The lyrics are easy to sing along to and relate to, especially for young listeners who may have experienced similar emotions.
-First Love by Nikka Costa is a song that celebrates the beauty and wonder of first love, while also acknowledging the challenges and obstacles that come with it. The lyrics are honest and sincere, conveying a universal theme that resonates with many people. The song is a classic example of pop music that appeals to both children and adults.
-First Love by Nikka Costa has been covered by many artists over the years, showing its popularity and timeless appeal. Some of the most notable covers are by Raissa Anggiani, Chris Andrian Yang, and Nadia & Yoseph, who are all Indonesian singers. These covers showcase the versatility and talent of these artists, who give their own interpretation and style to the song.
-Raissa Anggiani's cover of First Love is a soft and sweet rendition that highlights her delicate and soothing voice. She sings with a gentle and sincere expression, conveying the emotions of the song. Her cover has a simple acoustic guitar accompaniment that creates a cozy and intimate atmosphere. Her cover has over 2.3 million views on YouTube[^1^].
-Chris Andrian Yang's cover of First Love is a lively and upbeat version that showcases his smooth and powerful voice. He sings with a confident and playful attitude, adding some vocal runs and improvisations to the song. His cover has a full band arrangement that gives a modern and energetic vibe to the song. His cover has over 1 million views on YouTube[^2^].
-Nadia & Yoseph's cover of First Love is a harmonious and elegant rendition that highlights their beautiful and blended voices. They sing with a warm and romantic expression, complementing each other's vocals. Their cover has a simple acoustic guitar accompaniment that creates a calm and relaxing atmosphere. Their cover has over 600 thousand views on YouTube[^3^].
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Hindustani Movie In Hindi 720p Download TOP.md b/spaces/tioseFevbu/cartoon-converter/scripts/Hindustani Movie In Hindi 720p Download TOP.md
deleted file mode 100644
index 19db963f04f19dba6de07152a0eed54249156c49..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Hindustani Movie In Hindi 720p Download TOP.md
+++ /dev/null
@@ -1,27 +0,0 @@
-
-
-```html
-Hindustani Movie In Hindi 720p Download: A Patriotic Thriller
-Hindustani is a 1996 Indian action thriller film directed by S. Shankar and starring Kamal Haasan, Manisha Koirala and Urmila Matondkar. It is the Hindi version of Shankar's Tamil film Indian and revolves around a freedom fighter turned vigilante who targets corrupt officials and politicians.
-The film was a blockbuster at the box office and received critical acclaim for its story, direction, performances and music. The film also won four National Film Awards, including Best Actor for Kamal Haasan. The film was dubbed in Hindi and released as Hindustani in 1996.
-Hindustani Movie In Hindi 720p Download
Download Zip ✑ ✑ ✑ https://urlcod.com/2uHvsZ
-If you are looking for Hindustani movie in Hindi 720p download, you can find it on various online platforms. However, we advise you to watch the film legally on streaming services or buy the DVD from authorized sellers. Downloading or sharing pirated content is a crime and can land you in trouble.
-Hindustani movie in Hindi 720p download is a great option for those who want to enjoy a patriotic thriller with a powerful message. The film will keep you hooked with its gripping plot, action sequences and dialogues. The film also showcases the acting prowess of Kamal Haasan, who plays a dual role of father and son.
-So, what are you waiting for? Watch Hindustani movie in Hindi 720p download and get ready for a thrilling ride.
-```
-
-```html
-The film Hindustani is also known for its music composed by A.R. Rahman, who won the Filmfare Award for Best Music Director. The film has some memorable songs like "Telephone Dhun Mein", "Latka Dikha Diya Humne", "Maya Machindra" and "Kashtiye". The songs are a mix of patriotic, romantic and peppy tunes that suit the mood of the film.
-Hindustani movie in Hindi 720p download is a must-watch for all the fans of Kamal Haasan and S. Shankar. The film is a masterpiece that showcases the talent and vision of these two legends. The film is also a tribute to the spirit of India and its freedom fighters who sacrificed their lives for the nation.
-Don't miss this opportunity to watch Hindustani movie in Hindi 720p download and witness a cinematic gem that will leave you spellbound.
-```
-
-```html
-Hindustani movie in Hindi 720p download is also a great way to learn about the history and culture of India. The film depicts the struggle of the Indian independence movement and the challenges faced by the post-independence generation. The film also showcases the diversity and richness of the Indian society and its values.
-The film Hindustani is not just a movie, but a movement. It inspires the viewers to fight against corruption and injustice and to uphold the dignity and honor of the nation. The film also reminds the viewers of their duty and responsibility as citizens of India and as human beings.
-
-Hindustani movie in Hindi 720p download is a film that every Indian should watch and cherish. It is a film that will make you proud of your country and your heritage. It is a film that will make you think and act for the betterment of the society and the world.
-```
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Mahavishnu Orchestra - Discography - 1971-2007.md b/spaces/tioseFevbu/cartoon-converter/scripts/Mahavishnu Orchestra - Discography - 1971-2007.md
deleted file mode 100644
index 585309d15876d3ddaa8f19bb2f3d17a949802797..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Mahavishnu Orchestra - Discography - 1971-2007.md
+++ /dev/null
@@ -1,26 +0,0 @@
-
-Mahavishnu Orchestra: A Journey Through Their Discography (1971-2007)
-The Mahavishnu Orchestra was a pioneering jazz fusion band that emerged in the early 1970s, led by the virtuoso guitarist John McLaughlin. The band combined elements of jazz, rock, Indian classical music, and other genres, creating a unique and influential sound that inspired many musicians and listeners. In this article, we will explore the discography of the Mahavishnu Orchestra, from their debut album in 1971 to their final release in 2007.
-
-The First Mahavishnu Orchestra (1971-1973)
-The original line-up of the Mahavishnu Orchestra consisted of John McLaughlin on guitar, Jan Hammer on keyboards, Jerry Goodman on violin, Rick Laird on bass, and Billy Cobham on drums. This quintet recorded three studio albums and one live album between 1971 and 1973, which are widely regarded as classics of jazz fusion.
-Mahavishnu Orchestra - Discography - 1971-2007
Download File ☆ https://urlcod.com/2uHyiX
-
-
-- The Inner Mounting Flame (1971): The debut album of the Mahavishnu Orchestra introduced their distinctive style of high-energy, complex, and melodic compositions. The album features some of the band's most famous songs, such as "Meeting of the Spirits", "The Dance of Maya", and "You Know You Know". The album received critical acclaim and reached number 11 on the Billboard Jazz Albums chart.
-- Birds of Fire (1973): The second album of the Mahavishnu Orchestra was even more successful than their first, reaching number 15 on the Billboard 200 chart and number 1 on the Jazz Albums chart. The album showcases the band's incredible technical skills and musical diversity, with songs ranging from the fiery title track to the serene "Thousand Island Park". The album is considered one of the best jazz fusion albums of all time.
-- Between Nothingness & Eternity (1973): The third album of the Mahavishnu Orchestra was a live recording of a concert at New York's Central Park in August 1973. The album features three long improvisational pieces that demonstrate the band's dynamic interplay and creativity. The album was released after the band had already disbanded due to personal and musical differences.
-
-
-The Second Mahavishnu Orchestra (1974-1976)
-After the break-up of the first Mahavishnu Orchestra, John McLaughlin formed a new line-up with different musicians. The second Mahavishnu Orchestra consisted of John McLaughlin on guitar and vocals, Jean-Luc Ponty on violin and electric violin, Gayle Moran on keyboards and vocals, Ralphe Armstrong on bass and vocals, and Narada Michael Walden on drums and vocals. This line-up recorded two studio albums and one live album between 1974 and 1976, which featured a more vocal-oriented and orchestral approach to jazz fusion.
-
-
-- Apocalypse (1974): The fourth album of the Mahavishnu Orchestra was a collaboration with the London Symphony Orchestra, conducted by Michael Tilson Thomas. The album features four symphonic compositions by John McLaughlin that blend jazz fusion with classical music. The album received mixed reviews from critics and fans, but was praised for its ambitious scope and originality.
-- Visions of the Emerald Beyond (1975): The fifth album of the Mahavishnu Orchestra was a return to a more conventional jazz fusion format, with shorter and more accessible songs. The album features some of the band's most popular tunes, such as "Eternity's Breath", "Cosmic Strut", and "Be Happy". The album was well received by critics and fans alike, and reached number 13 on the Jazz Albums chart.
-- Inner Worlds (1976): The sixth album of the Mahavishnu Orchestra was the last one to feature John McLaughlin as the leader. The album features a more experimental and eclectic style of jazz fusion, incorporating elements of funk, disco, electronic music, and Indian music. The album received mixed reactions from critics and fans, who found it either innovative or inconsistent. The album reached number 20 on the Jazz Albums chart.
-
-
-The Third Mahavishnu Orchestra (1984-1987)
-
-
\ No newline at end of file
diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/_functools.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/_functools.py
deleted file mode 100644
index e7053bac12fdb7b2cc50448f88318cd93f62cc0e..0000000000000000000000000000000000000000
--- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/_functools.py
+++ /dev/null
@@ -1,20 +0,0 @@
-import functools
-
-
-# from jaraco.functools 3.5
-def pass_none(func):
- """
- Wrap func so it's not called if its first param is None
-
- >>> print_text = pass_none(print)
- >>> print_text('text')
- text
- >>> print_text(None)
- """
-
- @functools.wraps(func)
- def wrapper(param, *args, **kwargs):
- if param is not None:
- return func(param, *args, **kwargs)
-
- return wrapper
diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_vendor/pyparsing/diagram/__init__.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_vendor/pyparsing/diagram/__init__.py
deleted file mode 100644
index 2d0c587cbf42126eb903f27c11dc2dde9146c1cc..0000000000000000000000000000000000000000
--- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_vendor/pyparsing/diagram/__init__.py
+++ /dev/null
@@ -1,611 +0,0 @@
-import railroad
-import pyparsing
-from pkg_resources import resource_filename
-from typing import (
- List,
- Optional,
- NamedTuple,
- Generic,
- TypeVar,
- Dict,
- Callable,
- Set,
- Iterable,
-)
-from jinja2 import Template
-from io import StringIO
-import inspect
-
-
-with open(resource_filename(__name__, "template.jinja2"), encoding="utf-8") as fp:
- template = Template(fp.read())
-
-# Note: ideally this would be a dataclass, but we're supporting Python 3.5+ so we can't do this yet
-NamedDiagram = NamedTuple(
- "NamedDiagram",
- [("name", str), ("diagram", Optional[railroad.DiagramItem]), ("index", int)],
-)
-"""
-A simple structure for associating a name with a railroad diagram
-"""
-
-T = TypeVar("T")
-
-
-class EachItem(railroad.Group):
- """
- Custom railroad item to compose a:
- - Group containing a
- - OneOrMore containing a
- - Choice of the elements in the Each
- with the group label indicating that all must be matched
- """
-
- all_label = "[ALL]"
-
- def __init__(self, *items):
- choice_item = railroad.Choice(len(items) - 1, *items)
- one_or_more_item = railroad.OneOrMore(item=choice_item)
- super().__init__(one_or_more_item, label=self.all_label)
-
-
-class AnnotatedItem(railroad.Group):
- """
- Simple subclass of Group that creates an annotation label
- """
-
- def __init__(self, label: str, item):
- super().__init__(item=item, label="[{}]".format(label) if label else label)
-
-
-class EditablePartial(Generic[T]):
- """
- Acts like a functools.partial, but can be edited. In other words, it represents a type that hasn't yet been
- constructed.
- """
-
- # We need this here because the railroad constructors actually transform the data, so can't be called until the
- # entire tree is assembled
-
- def __init__(self, func: Callable[..., T], args: list, kwargs: dict):
- self.func = func
- self.args = args
- self.kwargs = kwargs
-
- @classmethod
- def from_call(cls, func: Callable[..., T], *args, **kwargs) -> "EditablePartial[T]":
- """
- If you call this function in the same way that you would call the constructor, it will store the arguments
- as you expect. For example EditablePartial.from_call(Fraction, 1, 3)() == Fraction(1, 3)
- """
- return EditablePartial(func=func, args=list(args), kwargs=kwargs)
-
- @property
- def name(self):
- return self.kwargs["name"]
-
- def __call__(self) -> T:
- """
- Evaluate the partial and return the result
- """
- args = self.args.copy()
- kwargs = self.kwargs.copy()
-
- # This is a helpful hack to allow you to specify varargs parameters (e.g. *args) as keyword args (e.g.
- # args=['list', 'of', 'things'])
- arg_spec = inspect.getfullargspec(self.func)
- if arg_spec.varargs in self.kwargs:
- args += kwargs.pop(arg_spec.varargs)
-
- return self.func(*args, **kwargs)
-
-
-def railroad_to_html(diagrams: List[NamedDiagram], **kwargs) -> str:
- """
- Given a list of NamedDiagram, produce a single HTML string that visualises those diagrams
-    :param kwargs: kwargs to be passed in to the template
- """
- data = []
- for diagram in diagrams:
- io = StringIO()
- diagram.diagram.writeSvg(io.write)
- title = diagram.name
- if diagram.index == 0:
- title += " (root)"
- data.append({"title": title, "text": "", "svg": io.getvalue()})
-
- return template.render(diagrams=data, **kwargs)
-
-
-def resolve_partial(partial: "EditablePartial[T]") -> T:
- """
- Recursively resolves a collection of Partials into whatever type they are
- """
- if isinstance(partial, EditablePartial):
- partial.args = resolve_partial(partial.args)
- partial.kwargs = resolve_partial(partial.kwargs)
- return partial()
- elif isinstance(partial, list):
- return [resolve_partial(x) for x in partial]
- elif isinstance(partial, dict):
- return {key: resolve_partial(x) for key, x in partial.items()}
- else:
- return partial
-
-
-def to_railroad(
- element: pyparsing.ParserElement,
- diagram_kwargs: Optional[dict] = None,
- vertical: int = 3,
- show_results_names: bool = False,
- show_groups: bool = False,
-) -> List[NamedDiagram]:
- """
- Convert a pyparsing element tree into a list of diagrams. This is the recommended entrypoint to diagram
- creation if you want to access the Railroad tree before it is converted to HTML
- :param element: base element of the parser being diagrammed
- :param diagram_kwargs: kwargs to pass to the Diagram() constructor
- :param vertical: (optional) - int - limit at which number of alternatives should be
- shown vertically instead of horizontally
- :param show_results_names - bool to indicate whether results name annotations should be
- included in the diagram
- :param show_groups - bool to indicate whether groups should be highlighted with an unlabeled
- surrounding box
- """
- # Convert the whole tree underneath the root
- lookup = ConverterState(diagram_kwargs=diagram_kwargs or {})
- _to_diagram_element(
- element,
- lookup=lookup,
- parent=None,
- vertical=vertical,
- show_results_names=show_results_names,
- show_groups=show_groups,
- )
-
- root_id = id(element)
- # Convert the root if it hasn't been already
- if root_id in lookup:
- if not element.customName:
- lookup[root_id].name = ""
- lookup[root_id].mark_for_extraction(root_id, lookup, force=True)
-
- # Now that we're finished, we can convert from intermediate structures into Railroad elements
- diags = list(lookup.diagrams.values())
- if len(diags) > 1:
- # collapse out duplicate diags with the same name
- seen = set()
- deduped_diags = []
- for d in diags:
- # don't extract SkipTo elements, they are uninformative as subdiagrams
- if d.name == "...":
- continue
- if d.name is not None and d.name not in seen:
- seen.add(d.name)
- deduped_diags.append(d)
- resolved = [resolve_partial(partial) for partial in deduped_diags]
- else:
- # special case - if just one diagram, always display it, even if
- # it has no name
- resolved = [resolve_partial(partial) for partial in diags]
- return sorted(resolved, key=lambda diag: diag.index)
-
-
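-# Illustrative sketch (not part of upstream pyparsing): the two module-level functions above
-# can be chained to render a parser as an HTML railroad diagram.
-def _example_diagram_html() -> str:
-    greeting = (
-        pyparsing.Word(pyparsing.alphas)("salutation")
-        + pyparsing.Suppress(",")
-        + pyparsing.Word(pyparsing.alphas)("name")
-    )
-    # to_railroad() builds the intermediate NamedDiagram list; railroad_to_html() renders it
-    return railroad_to_html(to_railroad(greeting, show_results_names=True))
-
-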
-def _should_vertical(
- specification: int, exprs: Iterable[pyparsing.ParserElement]
-) -> bool:
- """
- Returns true if we should return a vertical list of elements
- """
- if specification is None:
- return False
- else:
- return len(_visible_exprs(exprs)) >= specification
-
-
-class ElementState:
- """
- State recorded for an individual pyparsing Element
- """
-
- # Note: this should be a dataclass, but we have to support Python 3.5
- def __init__(
- self,
- element: pyparsing.ParserElement,
- converted: EditablePartial,
- parent: EditablePartial,
- number: int,
- name: str = None,
- parent_index: Optional[int] = None,
- ):
- #: The pyparsing element that this represents
- self.element: pyparsing.ParserElement = element
- #: The name of the element
- self.name: str = name
- #: The output Railroad element in an unconverted state
- self.converted: EditablePartial = converted
- #: The parent Railroad element, which we store so that we can extract this if it's duplicated
- self.parent: EditablePartial = parent
- #: The order in which we found this element, used for sorting diagrams if this is extracted into a diagram
- self.number: int = number
- #: The index of this inside its parent
- self.parent_index: Optional[int] = parent_index
- #: If true, we should extract this out into a subdiagram
- self.extract: bool = False
- #: If true, all of this element's children have been filled out
- self.complete: bool = False
-
- def mark_for_extraction(
- self, el_id: int, state: "ConverterState", name: str = None, force: bool = False
- ):
- """
- Called when this instance has been seen twice, and thus should eventually be extracted into a sub-diagram
- :param el_id: id of the element
- :param state: element/diagram state tracker
- :param name: name to use for this element's text
- :param force: If true, force extraction now, regardless of the state of this. Only useful for extracting the
- root element when we know we're finished
- """
- self.extract = True
-
- # Set the name
- if not self.name:
- if name:
- # Allow forcing a custom name
- self.name = name
- elif self.element.customName:
- self.name = self.element.customName
- else:
- self.name = ""
-
- # Just because this is marked for extraction doesn't mean we can do it yet. We may have to wait for children
- # to be added
- # Also, if this is just a string literal etc, don't bother extracting it
- if force or (self.complete and _worth_extracting(self.element)):
- state.extract_into_diagram(el_id)
-
-
-class ConverterState:
- """
- Stores some state that persists between recursions into the element tree
- """
-
- def __init__(self, diagram_kwargs: Optional[dict] = None):
- #: A dictionary mapping ParserElements to state relating to them
- self._element_diagram_states: Dict[int, ElementState] = {}
- #: A dictionary mapping ParserElement IDs to subdiagrams generated from them
- self.diagrams: Dict[int, EditablePartial[NamedDiagram]] = {}
- #: The index of the next unnamed element
- self.unnamed_index: int = 1
- #: The index of the next element. This is used for sorting
- self.index: int = 0
- #: Shared kwargs that are used to customize the construction of diagrams
- self.diagram_kwargs: dict = diagram_kwargs or {}
- self.extracted_diagram_names: Set[str] = set()
-
- def __setitem__(self, key: int, value: ElementState):
- self._element_diagram_states[key] = value
-
- def __getitem__(self, key: int) -> ElementState:
- return self._element_diagram_states[key]
-
- def __delitem__(self, key: int):
- del self._element_diagram_states[key]
-
- def __contains__(self, key: int):
- return key in self._element_diagram_states
-
- def generate_unnamed(self) -> int:
- """
- Generate a number used in the name of an otherwise unnamed diagram
- """
- self.unnamed_index += 1
- return self.unnamed_index
-
- def generate_index(self) -> int:
- """
- Generate a number used to index a diagram
- """
- self.index += 1
- return self.index
-
- def extract_into_diagram(self, el_id: int):
- """
- Used when we encounter the same token twice in the same tree. When this
- happens, we replace all instances of that token with a terminal, and
- create a new subdiagram for the token
- """
- position = self[el_id]
-
- # Replace the original definition of this element with a regular block
- if position.parent:
- ret = EditablePartial.from_call(railroad.NonTerminal, text=position.name)
- if "item" in position.parent.kwargs:
- position.parent.kwargs["item"] = ret
- elif "items" in position.parent.kwargs:
- position.parent.kwargs["items"][position.parent_index] = ret
-
- # If the element we're extracting is a group, skip to its content but keep the title
- if position.converted.func == railroad.Group:
- content = position.converted.kwargs["item"]
- else:
- content = position.converted
-
- self.diagrams[el_id] = EditablePartial.from_call(
- NamedDiagram,
- name=position.name,
- diagram=EditablePartial.from_call(
- railroad.Diagram, content, **self.diagram_kwargs
- ),
- index=position.number,
- )
-
- del self[el_id]
-
-
-def _worth_extracting(element: pyparsing.ParserElement) -> bool:
- """
- Returns true if this element is worth having its own sub-diagram. Simply, if any of its children
-    themselves have children, then it's complex enough to extract
- """
- children = element.recurse()
- return any(child.recurse() for child in children)
-
-
-def _apply_diagram_item_enhancements(fn):
- """
- decorator to ensure enhancements to a diagram item (such as results name annotations)
- get applied on return from _to_diagram_element (we do this since there are several
- returns in _to_diagram_element)
- """
-
- def _inner(
- element: pyparsing.ParserElement,
- parent: Optional[EditablePartial],
- lookup: ConverterState = None,
- vertical: int = None,
- index: int = 0,
- name_hint: str = None,
- show_results_names: bool = False,
- show_groups: bool = False,
- ) -> Optional[EditablePartial]:
-
- ret = fn(
- element,
- parent,
- lookup,
- vertical,
- index,
- name_hint,
- show_results_names,
- show_groups,
- )
-
- # apply annotation for results name, if present
- if show_results_names and ret is not None:
- element_results_name = element.resultsName
- if element_results_name:
- # add "*" to indicate if this is a "list all results" name
- element_results_name += "" if element.modalResults else "*"
- ret = EditablePartial.from_call(
- railroad.Group, item=ret, label=element_results_name
- )
-
- return ret
-
- return _inner
-
-
-def _visible_exprs(exprs: Iterable[pyparsing.ParserElement]):
- non_diagramming_exprs = (
- pyparsing.ParseElementEnhance,
- pyparsing.PositionToken,
- pyparsing.And._ErrorStop,
- )
- return [
- e
- for e in exprs
- if not (e.customName or e.resultsName or isinstance(e, non_diagramming_exprs))
- ]
-
-
-@_apply_diagram_item_enhancements
-def _to_diagram_element(
- element: pyparsing.ParserElement,
- parent: Optional[EditablePartial],
- lookup: ConverterState = None,
- vertical: int = None,
- index: int = 0,
- name_hint: str = None,
- show_results_names: bool = False,
- show_groups: bool = False,
-) -> Optional[EditablePartial]:
- """
- Recursively converts a PyParsing Element to a railroad Element
- :param lookup: The shared converter state that keeps track of useful things
- :param index: The index of this element within the parent
- :param parent: The parent of this element in the output tree
- :param vertical: Controls at what point we make a list of elements vertical. If this is an integer (the default),
- it sets the threshold of the number of items before we go vertical. If True, always go vertical, if False, never
- do so
- :param name_hint: If provided, this will override the generated name
- :param show_results_names: bool flag indicating whether to add annotations for results names
- :returns: The converted version of the input element, but as a Partial that hasn't yet been constructed
- :param show_groups: bool flag indicating whether to show groups using bounding box
- """
- exprs = element.recurse()
- name = name_hint or element.customName or element.__class__.__name__
-
- # Python's id() is used to provide a unique identifier for elements
- el_id = id(element)
-
- element_results_name = element.resultsName
-
- # Here we basically bypass processing certain wrapper elements if they contribute nothing to the diagram
- if not element.customName:
- if isinstance(
- element,
- (
- # pyparsing.TokenConverter,
- # pyparsing.Forward,
- pyparsing.Located,
- ),
- ):
- # However, if this element has a useful custom name, and its child does not, we can pass it on to the child
- if exprs:
- if not exprs[0].customName:
- propagated_name = name
- else:
- propagated_name = None
-
- return _to_diagram_element(
- element.expr,
- parent=parent,
- lookup=lookup,
- vertical=vertical,
- index=index,
- name_hint=propagated_name,
- show_results_names=show_results_names,
- show_groups=show_groups,
- )
-
-    # If the element isn't worth extracting, we always treat it as the first time we see it
- if _worth_extracting(element):
- if el_id in lookup:
- # If we've seen this element exactly once before, we are only just now finding out that it's a duplicate,
- # so we have to extract it into a new diagram.
- looked_up = lookup[el_id]
- looked_up.mark_for_extraction(el_id, lookup, name=name_hint)
- ret = EditablePartial.from_call(railroad.NonTerminal, text=looked_up.name)
- return ret
-
- elif el_id in lookup.diagrams:
- # If we have seen the element at least twice before, and have already extracted it into a subdiagram, we
- # just put in a marker element that refers to the sub-diagram
- ret = EditablePartial.from_call(
- railroad.NonTerminal, text=lookup.diagrams[el_id].kwargs["name"]
- )
- return ret
-
- # Recursively convert child elements
- # Here we find the most relevant Railroad element for matching pyparsing Element
- # We use ``items=[]`` here to hold the place for where the child elements will go once created
- if isinstance(element, pyparsing.And):
- # detect And's created with ``expr*N`` notation - for these use a OneOrMore with a repeat
- # (all will have the same name, and resultsName)
- if not exprs:
- return None
- if len(set((e.name, e.resultsName) for e in exprs)) == 1:
- ret = EditablePartial.from_call(
- railroad.OneOrMore, item="", repeat=str(len(exprs))
- )
- elif _should_vertical(vertical, exprs):
- ret = EditablePartial.from_call(railroad.Stack, items=[])
- else:
- ret = EditablePartial.from_call(railroad.Sequence, items=[])
- elif isinstance(element, (pyparsing.Or, pyparsing.MatchFirst)):
- if not exprs:
- return None
- if _should_vertical(vertical, exprs):
- ret = EditablePartial.from_call(railroad.Choice, 0, items=[])
- else:
- ret = EditablePartial.from_call(railroad.HorizontalChoice, items=[])
- elif isinstance(element, pyparsing.Each):
- if not exprs:
- return None
- ret = EditablePartial.from_call(EachItem, items=[])
- elif isinstance(element, pyparsing.NotAny):
- ret = EditablePartial.from_call(AnnotatedItem, label="NOT", item="")
- elif isinstance(element, pyparsing.FollowedBy):
- ret = EditablePartial.from_call(AnnotatedItem, label="LOOKAHEAD", item="")
- elif isinstance(element, pyparsing.PrecededBy):
- ret = EditablePartial.from_call(AnnotatedItem, label="LOOKBEHIND", item="")
- elif isinstance(element, pyparsing.Group):
- if show_groups:
- ret = EditablePartial.from_call(AnnotatedItem, label="", item="")
- else:
- ret = EditablePartial.from_call(railroad.Group, label="", item="")
- elif isinstance(element, pyparsing.TokenConverter):
- ret = EditablePartial.from_call(AnnotatedItem, label=type(element).__name__.lower(), item="")
- elif isinstance(element, pyparsing.Opt):
- ret = EditablePartial.from_call(railroad.Optional, item="")
- elif isinstance(element, pyparsing.OneOrMore):
- ret = EditablePartial.from_call(railroad.OneOrMore, item="")
- elif isinstance(element, pyparsing.ZeroOrMore):
- ret = EditablePartial.from_call(railroad.ZeroOrMore, item="")
- elif isinstance(element, pyparsing.Group):
- ret = EditablePartial.from_call(
- railroad.Group, item=None, label=element_results_name
- )
- elif isinstance(element, pyparsing.Empty) and not element.customName:
- # Skip unnamed "Empty" elements
- ret = None
- elif len(exprs) > 1:
- ret = EditablePartial.from_call(railroad.Sequence, items=[])
- elif len(exprs) > 0 and not element_results_name:
- ret = EditablePartial.from_call(railroad.Group, item="", label=name)
- else:
- terminal = EditablePartial.from_call(railroad.Terminal, element.defaultName)
- ret = terminal
-
- if ret is None:
- return
-
- # Indicate this element's position in the tree so we can extract it if necessary
- lookup[el_id] = ElementState(
- element=element,
- converted=ret,
- parent=parent,
- parent_index=index,
- number=lookup.generate_index(),
- )
- if element.customName:
- lookup[el_id].mark_for_extraction(el_id, lookup, element.customName)
-
- i = 0
- for expr in exprs:
- # Add a placeholder index in case we have to extract the child before we even add it to the parent
- if "items" in ret.kwargs:
- ret.kwargs["items"].insert(i, None)
-
- item = _to_diagram_element(
- expr,
- parent=ret,
- lookup=lookup,
- vertical=vertical,
- index=i,
- show_results_names=show_results_names,
- show_groups=show_groups,
- )
-
- # Some elements don't need to be shown in the diagram
- if item is not None:
- if "item" in ret.kwargs:
- ret.kwargs["item"] = item
- elif "items" in ret.kwargs:
- # If we've already extracted the child, don't touch this index, since it's occupied by a nonterminal
- ret.kwargs["items"][i] = item
- i += 1
- elif "items" in ret.kwargs:
- # If we're supposed to skip this element, remove it from the parent
- del ret.kwargs["items"][i]
-
-    # If all of this item's children are None, skip this item
- if ret and (
- ("items" in ret.kwargs and len(ret.kwargs["items"]) == 0)
- or ("item" in ret.kwargs and ret.kwargs["item"] is None)
- ):
- ret = EditablePartial.from_call(railroad.Terminal, name)
-
- # Mark this element as "complete", ie it has all of its children
- if el_id in lookup:
- lookup[el_id].complete = True
-
- if el_id in lookup and lookup[el_id].extract and lookup[el_id].complete:
- lookup.extract_into_diagram(el_id)
- if ret is not None:
- ret = EditablePartial.from_call(
- railroad.NonTerminal, text=lookup.diagrams[el_id].kwargs["name"]
- )
-
- return ret
diff --git a/spaces/tmaham/DS-Fusion-Express/ldm/models/diffusion/ddim.py b/spaces/tmaham/DS-Fusion-Express/ldm/models/diffusion/ddim.py
deleted file mode 100644
index 15b1ee080ce5d5043c2f4b1f390d65913448ba42..0000000000000000000000000000000000000000
--- a/spaces/tmaham/DS-Fusion-Express/ldm/models/diffusion/ddim.py
+++ /dev/null
@@ -1,242 +0,0 @@
-"""SAMPLING ONLY."""
-
-import torch
-import numpy as np
-from tqdm import tqdm
-from functools import partial
-import pdb
-from ldm.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like, \
- extract_into_tensor
-
-
-class DDIMSampler(object):
- def __init__(self, model, device="cuda", schedule="linear", **kwargs):
- super().__init__()
- self.model = model
- self.ddpm_num_timesteps = model.num_timesteps
- self.schedule = schedule
- self.device = device
-
- def register_buffer(self, name, attr):
- if type(attr) == torch.Tensor:
- if attr.device != torch.device(self.device):
- attr = attr.to(torch.device(self.device))
- setattr(self, name, attr)
-
- def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True):
- self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps,
- num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose)
- alphas_cumprod = self.model.alphas_cumprod
- assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep'
- to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device)
-
- self.register_buffer('betas', to_torch(self.model.betas))
- self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))
- self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev))
-
- # calculations for diffusion q(x_t | x_{t-1}) and others
- self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu())))
- self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu())))
- self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu())))
- self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu())))
- self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1)))
-
- # ddim sampling parameters
- ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(),
- ddim_timesteps=self.ddim_timesteps,
- eta=ddim_eta,verbose=verbose)
- self.register_buffer('ddim_sigmas', ddim_sigmas)
- self.register_buffer('ddim_alphas', ddim_alphas)
- self.register_buffer('ddim_alphas_prev', ddim_alphas_prev)
- self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. - ddim_alphas))
- sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt(
- (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * (
- 1 - self.alphas_cumprod / self.alphas_cumprod_prev))
- self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps)
-
- @torch.no_grad()
- def sample(self,
- S,
- batch_size,
- shape,
- conditioning=None,
- callback=None,
- normals_sequence=None,
- img_callback=None,
- quantize_x0=False,
- eta=0.,
- mask=None,
- x0=None,
- temperature=1.,
- noise_dropout=0.,
- score_corrector=None,
- corrector_kwargs=None,
- verbose=True,
- x_T=None,
- log_every_t=100,
- unconditional_guidance_scale=1.,
- unconditional_conditioning=None,
- # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ...
- **kwargs
- ):
- if conditioning is not None:
- if isinstance(conditioning, dict):
- cbs = conditioning[list(conditioning.keys())[0]].shape[0]
- if cbs != batch_size:
- print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}")
- else:
- if conditioning.shape[0] != batch_size:
- print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}")
-
- self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose)
- # sampling
- C, H, W = shape
- size = (batch_size, C, H, W)
- print(f'Data shape for DDIM sampling is {size}, eta {eta}')
-
- samples, intermediates = self.ddim_sampling(conditioning, size,
- callback=callback,
- img_callback=img_callback,
- quantize_denoised=quantize_x0,
- mask=mask, x0=x0,
- ddim_use_original_steps=False,
- noise_dropout=noise_dropout,
- temperature=temperature,
- score_corrector=score_corrector,
- corrector_kwargs=corrector_kwargs,
- x_T=x_T,
- log_every_t=log_every_t,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=unconditional_conditioning,
- )
- return samples, intermediates
-
- @torch.no_grad()
- def ddim_sampling(self, cond, shape,
- x_T=None, ddim_use_original_steps=False,
- callback=None, timesteps=None, quantize_denoised=False,
- mask=None, x0=None, img_callback=None, log_every_t=100,
- temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,
- unconditional_guidance_scale=1., unconditional_conditioning=None,):
- device = self.model.betas.device
- b = shape[0]
- if x_T is None:
- img = torch.randn(shape, device=device)
- else:
- img = x_T
-
- if timesteps is None:
- timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps
- elif timesteps is not None and not ddim_use_original_steps:
- subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1
- timesteps = self.ddim_timesteps[:subset_end]
-
- intermediates = {'x_inter': [img], 'pred_x0': [img]}
- time_range = reversed(range(0,timesteps)) if ddim_use_original_steps else np.flip(timesteps)
- total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0]
- print(f"Running DDIM Sampling with {total_steps} timesteps")
-
- iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps)
-
- for i, step in enumerate(iterator):
- index = total_steps - i - 1
- ts = torch.full((b,), step, device=device, dtype=torch.long)
-
- if mask is not None:
- assert x0 is not None
- img_orig = self.model.q_sample(x0, ts) # TODO: deterministic forward pass?
- img = img_orig * mask + (1. - mask) * img
-
- outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
- quantize_denoised=quantize_denoised, temperature=temperature,
- noise_dropout=noise_dropout, score_corrector=score_corrector,
- corrector_kwargs=corrector_kwargs,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=unconditional_conditioning)
- img, pred_x0 = outs
- if callback: callback(i)
- if img_callback: img_callback(pred_x0, i)
-
- if index % log_every_t == 0 or index == total_steps - 1:
- intermediates['x_inter'].append(img)
- intermediates['pred_x0'].append(pred_x0)
-
- return img, intermediates
-
- @torch.no_grad()
- def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False,
- temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,
- unconditional_guidance_scale=1., unconditional_conditioning=None):
- b, *_, device = *x.shape, x.device
-
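-        # classifier-free guidance: blend the unconditional and conditional noise predictions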
- if unconditional_conditioning is None or unconditional_guidance_scale == 1.:
- e_t = self.model.apply_model(x, t, c)
- else:
- x_in = torch.cat([x] * 2)
- t_in = torch.cat([t] * 2)
- c_in = torch.cat([unconditional_conditioning, c])
- e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in).chunk(2)
- e_t = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond)
-
- if score_corrector is not None:
- assert self.model.parameterization == "eps"
- e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs)
-
- alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas
- alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev
- sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas
- sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas
- # select parameters corresponding to the currently considered timestep
- a_t = torch.full((b, 1, 1, 1), alphas[index], device=device)
- a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device)
- sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device)
- sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device)
-
- # current prediction for x_0
- pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt()
- if quantize_denoised:
- pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0)
- # direction pointing to x_t
- dir_xt = (1. - a_prev - sigma_t**2).sqrt() * e_t
- noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature
- if noise_dropout > 0.:
- noise = torch.nn.functional.dropout(noise, p=noise_dropout)
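-        # DDIM update: x_{t-1} = sqrt(alpha_prev) * pred_x0 + direction pointing to x_t + sigma_t-scaled noise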
- x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise
- return x_prev, pred_x0
-
- @torch.no_grad()
- def stochastic_encode(self, x0, t, use_original_steps=False, noise=None):
- # fast, but does not allow for exact reconstruction
- # t serves as an index to gather the correct alphas
- if use_original_steps:
- sqrt_alphas_cumprod = self.sqrt_alphas_cumprod
- sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod
- else:
- sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas)
- sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas
-
- if noise is None:
- noise = torch.randn_like(x0)
- return (extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0 +
- extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise)
-
- @torch.no_grad()
- def decode(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None,
- use_original_steps=False):
-
- timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps
- timesteps = timesteps[:t_start]
-
- time_range = np.flip(timesteps)
- total_steps = timesteps.shape[0]
- print(f"Running DDIM Sampling with {total_steps} timesteps")
-
- iterator = tqdm(time_range, desc='Decoding image', total=total_steps)
- x_dec = x_latent
- for i, step in enumerate(iterator):
- index = total_steps - i - 1
- ts = torch.full((x_latent.shape[0],), step, device=x_latent.device, dtype=torch.long)
- x_dec, _ = self.p_sample_ddim(x_dec, cond, ts, index=index, use_original_steps=use_original_steps,
- unconditional_guidance_scale=unconditional_guidance_scale,
- unconditional_conditioning=unconditional_conditioning)
- return x_dec
\ No newline at end of file
diff --git a/spaces/toloka/open-llm-leaderboard/app.py b/spaces/toloka/open-llm-leaderboard/app.py
deleted file mode 100644
index 620e142bc43ec2400f4f408e984170ff3d392bb6..0000000000000000000000000000000000000000
--- a/spaces/toloka/open-llm-leaderboard/app.py
+++ /dev/null
@@ -1,171 +0,0 @@
-import streamlit as st
-import requests
-from collections import defaultdict
-import pandas as pd
-import plotly.graph_objects as go
-
-
-header = """Toloka compared and ranked LLM output in multiple categories, using Guanaco 13B as the baseline.
-
-We used human evaluation to rate model responses to real prompts."""
-
-description = """The Toloka LLM leaderboard provides a human evaluation framework. Here, we ask [Toloka](https://toloka.ai/) domain experts to assess the model's responses. For this purpose, responses are generated by open-source LLMs based on a dataset of real-world user prompts. These prompts are categorized as per the [InstructGPT paper](https://arxiv.org/abs/2203.02155). Subsequently, annotators evaluate these responses in the manner of [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/). It's worth noting that we employ [Guanaco 13B](https://huggingface.co/timdettmers/guanaco-13b) instead of text-davinci-003. This is because Guanaco 13B is the closest counterpart to the now-deprecated text-davinci-003 in AlpacaEval.
-The metrics on the leaderboard represent the win rate of the respective model in comparison to Guanaco 13B across various prompt categories. The "Total" category denotes the aggregation of all prompts and is not a mere average of metrics from individual categories.
-
-### 📊 The evaluation method
-
-#### 🖊 Stage 1: Prompt collection
-
-We collected our own dataset of organic prompts for LLM evaluation.
-
-The alternative is to use open-source prompts, but they are not reliable enough for high-quality evaluation. Using open-source datasets can be restrictive for several reasons:
-
-1. Many open-source prompts are too generic and do not reflect the needs of a business looking to implement an LLM.
-2. The range of tasks the open-source prompts cover might be broad but the distribution is skewed towards certain tasks that are not necessarily the most relevant for business applications.
-3. It is virtually impossible to guarantee that the dataset was not leaked and the open-source prompts were not included in the training data of the existing LLMs.
-
-To mitigate these issues, we collected organic prompts sent to ChatGPT (some were submitted by Toloka employees, and some we found on the internet, but all of them were from real conversations with ChatGPT). These prompts are the key to accurate evaluation — **we can be certain that the prompts represent real-world use cases, and they were not used in any LLM training sets.** We store the dataset securely and reserve it solely for use in this particular evaluation.
-
-After collecting the prompts, we manually classified them by category and got the following distribution:"""
-
-# * Brainstorming: 15.48%
-# * Chat: 1.59%
-# * Classification: 0.2%
-# * Closed QA: 3.77%
-# * Extraction: 0.6%
-# * Generation: 38.29%
-# * Open QA: 32.94%
-# * Rewrite: 5.16%
-# * Summarization: 1.98%
-
-fig = go.Figure(
- data=[go.Bar(y=[38.29, 32.94, 15.48, 5.16, 3.77, 1.98, 1.59, 0.6, 0.2], x=["Generation", "Open QA", "Brainstorming", "Rewrite", "Closed QA", "Summarization", "Chat", "Extraction", "Classification"])],
-)
-fig.update_layout(yaxis_title="% of prompts")
-
-description2 = """We intentionally excluded prompts about coding. If you are interested in comparing coding abilities, you can refer to specific benchmarks such as [HumanEval](https://paperswithcode.com/sota/code-generation-on-humaneval).
-
-
-#### 🧠 Stage 2: Human evaluation
-
-Human evaluation of prompts was conducted by [Toloka’s domain experts](https://toloka.ai/blog/ai-tutors/).
-Our experts were given a prompt and responses to this prompt from two different models: the reference model (Guanaco 13B) and the model under evaluation. In a side-by-side comparison, experts selected the best output according to the [harmlessness, truthfulness, and helpfulness principles](https://arxiv.org/pdf/2203.02155.pdf).
-
-In other words, each model was compared to the same baseline model, rather than comparing each model to every other competitor model. Then we calculated the percentage of prompts where humans preferred the tested model’s output over the baseline model’s output (this is called the model’s win rate). The leaderboard shows results in each category, as well as the average score overall for each of the tested models.
-
-
-Most importantly, we ensured the accuracy of human judgments by using advanced quality control techniques:
-- Annotator onboarding with rigorous qualification tests to certify experts and check their performance on evaluation tasks.
-- Overlap of 3 with Dawid-Skene aggregation of the results (each prompt was evaluated by 3 experts and aggregated to achieve a single verdict).
-- Monitoring individual accuracy by comparing each expert’s results with the majority vote; those who fell below the accuracy threshold were removed from the evaluation project.
-
-#### 👉 Ready to compare LLMs?
-
-Find your AI application’s use case categories on our leaderboard and see how the models stack up. It never hurts to check more leaderboards ([Hugging Face Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?tab=evaluation), [LMSYS](https://leaderboard.lmsys.org/), or others) for the big picture before you pick a model and start experimenting.
-
-If you’re interested in comparing more LLMs using our experts, or you need reliable evaluation of your model, we have the tools you need.
-Reach out to our team to learn how Toloka can help you achieve the quality insights you’re looking for.
-"""
-
-pretty_category_names = {
- "all": "Total",
- "brainstorming": "Brainstorming",
- "closed_qa": "Closed QA",
- "generation": "Generation",
- "open_qa": "Open QA",
- "rewrite": "Rewrite",
-}
-
-pretty_model_names = {
- "gpt-4": "GPT-4",
- "WizardLM/WizardLM-13B-V1.2": "WizardLM 13B V1.2",
- "meta-llama/Llama-2-70b-chat-hf": "LLaMA 2 70B Chat",
- "gpt-3.5-turbo": "GPT-3.5 Turbo",
- "lmsys/vicuna-33b-v1.3": "Vicuna 33B V1.3",
- "timdettmers/guanaco-13b": "Guanaco 13B",
-}
-
-reference_model_name = "timdettmers/guanaco-13b"
-
-
-leaderboard_results = requests.get("https://llmleaderboard.blob.core.windows.net/llmleaderboard/evaluation_resuls.json").json()
-categories = list(leaderboard_results.keys())
-categories.sort()
-# Keep the pretty column names in the same (sorted) category order used to fill the table rows below
-pretty_categories = [pretty_category_names[category] for category in categories if category in pretty_category_names]
-models = set()
-
-
-model_ratings = defaultdict(dict)
-for category in categories:
- for entry in leaderboard_results[category]:
- model = entry['model']
- models.add(model)
- model_ratings[model][category] = entry['rating']
-
-
-table = []
-
-for model in models:
- row = [model]
- for category in categories:
- if category not in pretty_category_names:
- continue
- if category not in model_ratings[model]:
- row.append(0.0)
- else:
- row.append(model_ratings[model][category] * 100)
- table.append(row)
-
-table = pd.DataFrame(table, columns=['Model'] + pretty_categories)
-table = table.sort_values(by=['Total'], ascending=False)
-table = table.head(5)
-
-# Add row with reference model
-row = [reference_model_name] + [50.0] * len(pretty_categories)
-table = pd.concat([table, pd.DataFrame([pd.Series(row, index=table.columns)])], ignore_index=True)
-table = table.sort_values(by=['Total'], ascending=False)
-
-table.index = ["🥇 1", "🥈 2", "🥉 3"] + list(range(4, len(table) + 1))
-
-for category in pretty_category_names.values():
- table[category] = table[category].map('{:,.2f}%'.format)
-
-avg_token_counts = requests.get("https://llmleaderboard.blob.core.windows.net/llmleaderboard/token_count.json").json()
-table['Avg. Response Length (in characters)'] = [int(avg_token_counts[model]) if model != reference_model_name else int(avg_token_counts["TheBloke/guanaco-13B-HF"]) for model in table['Model']]
-table['HF Hub Link'] = [f"https://huggingface.co/{model}" if "/" in model else "" for model in table["Model"]]
-table["Model"] = [pretty_model_names[model] if model in pretty_model_names else model for model in table["Model"]]
-
-st.set_page_config(layout="wide")
-st.title('🏆 Toloka LLM Leaderboard for Real-World Evaluation')
-st.markdown(header)
-st.dataframe(
- table,
- column_config={
- "HF Hub Link": st.column_config.LinkColumn(
- "HF Hub Link",
- help="HF Hub Link",
- )
- }
-)
-st.markdown(description)
-st.plotly_chart(fig, theme="streamlit")
-st.markdown(description2)
-st.link_button('🚀 Evaluate my model', url='https://toloka.ai/talk-to-us/')
-prompt_examples = """
-### 🔍 Prompt Examples
-
-| Prompt | Model | Output |
-| ------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| What does 'burning the midnight oil' mean? | GPT-4 | “Burning the midnight oil" is an idiomatic expression that means working late into the night. It originates from the past when people used oil lamps for light to work after the sun went down. So, if someone is "burning the midnight oil," they are up late working, often on a project or task that requires extra time. |
-| | WizardLM | "Burning the midnight oil" is an idiom that means working late into the night, often until midnight or later, to complete tasks or meet deadlines. It is often used to describe an intense period of focused, dedicated work, often to the point of exhaustion. The phrase suggests that the person is spending so much time and energy on their work that they are essentially "burning" through their hours of rest and sleep, much like a candle burns through its wax. |
-
-
-Here WizardLM misunderstood the prompt:
-
-| Prompt | Model | Output |
-| ------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------- |
-| What is 5x5? | GPT-4 | 5x5 is 25. |
-| | WizardLM | 5x5 refers to a 5 by 5 grid or a 5-dimensional space. In mathematics, a 5x5 grid is a square grid with five rows and five columns. Each cell in the grid is identified by a pair of coordinates, (i, j), where i ranges from 1 to 5 for the row and j ranges from 1 to 5 for the column. In a 5x5 grid, there are 25 cells in total. Each cell can be filled with a specific value or content, such as a number, a letter, or a symbol. For example, a 5x5 grid can be used to create a puzzle where you have to fill in the missing numbers or letters in a specific pattern. In higher mathematics, a 5-dimensional space is a more abstract concept used to represent complex geometric structures. It's often used in fields like physics and computer science to model complex systems and relationships between different variables. In everyday life, a 5x5 grid can be used to represent a calendar with five columns for days of the week and five rows for different time slots in a day. |
-"""
-st.markdown(prompt_examples)
-
diff --git a/spaces/tomofi/MMOCR/tools/data/common/curvedsyntext_converter.py b/spaces/tomofi/MMOCR/tools/data/common/curvedsyntext_converter.py
deleted file mode 100644
index ddffd50e2af44e464d21260fbc2dbe58f70da2cf..0000000000000000000000000000000000000000
--- a/spaces/tomofi/MMOCR/tools/data/common/curvedsyntext_converter.py
+++ /dev/null
@@ -1,129 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import argparse
-import os.path as osp
-from functools import partial
-
-import mmcv
-import numpy as np
-
-from mmocr.utils import bezier_to_polygon, sort_points
-
-# The default dictionary used by CurvedSynthText
-dict95 = [
- ' ', '!', '"', '#', '$', '%', '&', '\'', '(', ')', '*', '+', ',', '-', '.',
- '/', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', ':', ';', '<', '=',
- '>', '?', '@', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L',
- 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', '[',
- '\\', ']', '^', '_', '`', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j',
- 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y',
- 'z', '{', '|', '}', '~'
-]
-UNK = len(dict95)
-EOS = UNK + 1
-
-
-def digit2text(rec):
- res = []
- for d in rec:
- assert d <= EOS
- if d == EOS:
- break
-        if d == UNK:
-            print('Warning: Has a UNK character')
-            res.append('口')  # Or any special character not in the target dict
-            continue  # skip the dict95 lookup: index 95 (UNK) is out of range
-        res.append(dict95[d])
- return ''.join(res)
-
-
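-# Hypothetical usage sketch (added for illustration; not part of the original converter):
-# with the dictionary above, indices 40, 69, 76, 76, 79 map to 'H', 'e', 'l', 'l', 'o'
-# and the EOS index terminates decoding, so the call below returns the string 'Hello'.
-def _demo_digit2text():
-    return digit2text([40, 69, 76, 76, 79, EOS])
-
-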
-def modify_annotation(ann, num_sample, start_img_id=0, start_ann_id=0):
- ann['text'] = digit2text(ann.pop('rec'))
-    # Sample the segmentation points from the Bezier control points
- polygon_pts = bezier_to_polygon(ann['bezier_pts'], num_sample=num_sample)
- ann['segmentation'] = np.asarray(sort_points(polygon_pts)).reshape(
- 1, -1).tolist()
- ann['image_id'] += start_img_id
- ann['id'] += start_ann_id
- return ann
-
-
-def modify_image_info(image_info, path_prefix, start_img_id=0):
- image_info['file_name'] = osp.join(path_prefix, image_info['file_name'])
- image_info['id'] += start_img_id
- return image_info
-
-
-def parse_args():
- parser = argparse.ArgumentParser(
- description='Convert CurvedSynText150k to COCO format')
- parser.add_argument('root_path', help='CurvedSynText150k root path')
- parser.add_argument('-o', '--out-dir', help='Output path')
- parser.add_argument(
- '-n',
- '--num-sample',
- type=int,
- default=4,
- help='Number of sample points at each Bezier curve.')
- parser.add_argument(
- '--nproc', default=1, type=int, help='Number of processes')
- args = parser.parse_args()
- return args
-
-
-def convert_annotations(data,
- path_prefix,
- num_sample,
- nproc,
- start_img_id=0,
- start_ann_id=0):
- modify_image_info_with_params = partial(
- modify_image_info, path_prefix=path_prefix, start_img_id=start_img_id)
- modify_annotation_with_params = partial(
- modify_annotation,
- num_sample=num_sample,
- start_img_id=start_img_id,
- start_ann_id=start_ann_id)
- if nproc > 1:
- data['annotations'] = mmcv.track_parallel_progress(
- modify_annotation_with_params, data['annotations'], nproc=nproc)
- data['images'] = mmcv.track_parallel_progress(
- modify_image_info_with_params, data['images'], nproc=nproc)
- else:
- data['annotations'] = mmcv.track_progress(
- modify_annotation_with_params, data['annotations'])
- data['images'] = mmcv.track_progress(
- modify_image_info_with_params,
- data['images'],
- )
- data['categories'] = [{'id': 1, 'name': 'text'}]
- return data
-
-
-def main():
- args = parse_args()
- root_path = args.root_path
- out_dir = args.out_dir if args.out_dir else root_path
- mmcv.mkdir_or_exist(out_dir)
-
- anns = mmcv.load(osp.join(root_path, 'train1.json'))
- data1 = convert_annotations(anns, 'syntext_word_eng', args.num_sample,
- args.nproc)
-
- # Get the maximum image id from data1
- start_img_id = max(data1['images'], key=lambda x: x['id'])['id'] + 1
- start_ann_id = max(data1['annotations'], key=lambda x: x['id'])['id'] + 1
- anns = mmcv.load(osp.join(root_path, 'train2.json'))
- data2 = convert_annotations(
- anns,
- 'emcs_imgs',
- args.num_sample,
- args.nproc,
- start_img_id=start_img_id,
- start_ann_id=start_ann_id)
-
- data1['images'] += data2['images']
- data1['annotations'] += data2['annotations']
- mmcv.dump(data1, osp.join(out_dir, 'instances_training.json'))
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/utils/miscellaneous.py b/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/utils/miscellaneous.py
deleted file mode 100644
index db9a8b3679ceea2a5cd2b807421793bbbd3d3677..0000000000000000000000000000000000000000
--- a/spaces/tomofi/MaskTextSpotterV3-OCR/maskrcnn_benchmark/utils/miscellaneous.py
+++ /dev/null
@@ -1,11 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-import errno
-import os
-
-
-def mkdir(path):
- try:
- os.makedirs(path)
- except OSError as e:
- if e.errno != errno.EEXIST:
- raise
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/bbox/coder/__init__.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/bbox/coder/__init__.py
deleted file mode 100644
index ae455ba8fc0e0727e2d581cdc8f20fceededf99a..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/bbox/coder/__init__.py
+++ /dev/null
@@ -1,13 +0,0 @@
-from .base_bbox_coder import BaseBBoxCoder
-from .bucketing_bbox_coder import BucketingBBoxCoder
-from .delta_xywh_bbox_coder import DeltaXYWHBBoxCoder
-from .legacy_delta_xywh_bbox_coder import LegacyDeltaXYWHBBoxCoder
-from .pseudo_bbox_coder import PseudoBBoxCoder
-from .tblr_bbox_coder import TBLRBBoxCoder
-from .yolo_bbox_coder import YOLOBBoxCoder
-
-__all__ = [
- 'BaseBBoxCoder', 'PseudoBBoxCoder', 'DeltaXYWHBBoxCoder',
- 'LegacyDeltaXYWHBBoxCoder', 'TBLRBBoxCoder', 'YOLOBBoxCoder',
- 'BucketingBBoxCoder'
-]
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/mask/utils.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/mask/utils.py
deleted file mode 100644
index c88208291ab2a605bee9fe6c1a28a443b74c6372..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/mask/utils.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import mmcv
-import numpy as np
-import pycocotools.mask as mask_util
-
-
-def split_combined_polys(polys, poly_lens, polys_per_mask):
- """Split the combined 1-D polys into masks.
-
- A mask is represented as a list of polys, and a poly is represented as
- a 1-D array. In dataset, all masks are concatenated into a single 1-D
- tensor. Here we need to split the tensor into original representations.
-
- Args:
- polys (list): a list (length = image num) of 1-D tensors
- poly_lens (list): a list (length = image num) of poly length
- polys_per_mask (list): a list (length = image num) of poly number
- of each mask
-
- Returns:
- list: a list (length = image num) of list (length = mask num) of \
- list (length = poly num) of numpy array.
- """
- mask_polys_list = []
- for img_id in range(len(polys)):
- polys_single = polys[img_id]
- polys_lens_single = poly_lens[img_id].tolist()
- polys_per_mask_single = polys_per_mask[img_id].tolist()
-
- split_polys = mmcv.slice_list(polys_single, polys_lens_single)
- mask_polys = mmcv.slice_list(split_polys, polys_per_mask_single)
- mask_polys_list.append(mask_polys)
- return mask_polys_list
-
-
-# TODO: move this function to more proper place
-def encode_mask_results(mask_results):
- """Encode bitmap mask to RLE code.
-
- Args:
- mask_results (list | tuple[list]): bitmap mask results.
- In mask scoring rcnn, mask_results is a tuple of (segm_results,
- segm_cls_score).
-
- Returns:
- list | tuple: RLE encoded mask.
- """
- if isinstance(mask_results, tuple): # mask scoring
- cls_segms, cls_mask_scores = mask_results
- else:
- cls_segms = mask_results
- num_classes = len(cls_segms)
- encoded_mask_results = [[] for _ in range(num_classes)]
- for i in range(len(cls_segms)):
- for cls_segm in cls_segms[i]:
- encoded_mask_results[i].append(
- mask_util.encode(
- np.array(
- cls_segm[:, :, np.newaxis], order='F',
- dtype='uint8'))[0]) # encoded with RLE
- if isinstance(mask_results, tuple):
- return encoded_mask_results, cls_mask_scores
- else:
- return encoded_mask_results
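-
-
-# Hypothetical usage sketch (added for illustration; not part of the original module):
-def _demo_encode_mask_results():
-    """Encode the per-class bitmap masks of a single image.
-
-    The two-class setup and the 32x32 mask below are made up purely for illustration;
-    the returned rles[0][0] is a COCO-style RLE dict with 'size' and 'counts' keys.
-    """
-    dummy_masks = [[np.zeros((32, 32), dtype=np.uint8)], []]  # class 0 has one mask, class 1 has none
-    return encode_mask_results(dummy_masks)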
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/detectors/yolof.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/detectors/yolof.py
deleted file mode 100644
index dc7b3adfeff078b135c9e7e5d6c2a73e4ae2b723..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/detectors/yolof.py
+++ /dev/null
@@ -1,18 +0,0 @@
-from ..builder import DETECTORS
-from .single_stage import SingleStageDetector
-
-
-@DETECTORS.register_module()
-class YOLOF(SingleStageDetector):
- r"""Implementation of `You Only Look One-level Feature
- `_"""
-
- def __init__(self,
- backbone,
- neck,
- bbox_head,
- train_cfg=None,
- test_cfg=None,
- pretrained=None):
- super(YOLOF, self).__init__(backbone, neck, bbox_head, train_cfg,
- test_cfg, pretrained)
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/utils/ndl_categories.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/utils/ndl_categories.py
deleted file mode 100644
index 24dd300391aff62de0aff5863d4f7052cb9ca444..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/utils/ndl_categories.py
+++ /dev/null
@@ -1,25 +0,0 @@
-NUM_CLASSES = 15
-
-def get_instance_count():
- instance_count = [None] * NUM_CLASSES
- for i, c in enumerate(NDL_CATEGORIES):
- instance_count[i] = c['instance_count']
- return instance_count
-
-NDL_CATEGORIES = [
- {'id': 0, 'name': 'line_main', 'instance_count': 100},
- {'id': 1, 'name': 'line_inote', 'instance_count': 100},
- {'id': 2, 'name': 'line_hnote', 'instance_count': 1},
- {'id': 3, 'name': 'line_caption', 'instance_count': 1},
- {'id': 4, 'name': 'block_fig', 'instance_count': 100},
- {'id': 5, 'name': 'block_table', 'instance_count': 1},
- {'id': 6, 'name': 'block_pillar', 'instance_count': 100},
- {'id': 7, 'name': 'block_folio', 'instance_count': 100},
- {'id': 8, 'name': 'block_rubi', 'instance_count': 100},
- {'id': 9, 'name': 'block_chart', 'instance_count': 1},
- {'id': 10, 'name': 'block_eqn', 'instance_count': 1},
- {'id': 11, 'name': 'block_cfm', 'instance_count': 1},
- {'id': 12, 'name': 'block_eng', 'instance_count': 1},
- {'id': 13, 'name': 'char', 'instance_count': 1},
- {'id': 14, 'name': 'void', 'instance_count': 1}
-]
\ No newline at end of file
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tests/test_models/test_roi_heads/test_mask_head.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tests/test_models/test_roi_heads/test_mask_head.py
deleted file mode 100644
index 31826cd59a4b2e2a0d1d0306df0321f48ff18e7a..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tests/test_models/test_roi_heads/test_mask_head.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import mmcv
-import torch
-
-from mmdet.models.roi_heads.mask_heads import FCNMaskHead, MaskIoUHead
-from .utils import _dummy_bbox_sampling
-
-
-def test_mask_head_loss():
- """Test mask head loss when mask target is empty."""
- self = FCNMaskHead(
- num_convs=1,
- roi_feat_size=6,
- in_channels=8,
- conv_out_channels=8,
- num_classes=8)
-
- # Dummy proposals
- proposal_list = [
- torch.Tensor([[23.6667, 23.8757, 228.6326, 153.8874]]),
- ]
-
- gt_bboxes = [
- torch.Tensor([[23.6667, 23.8757, 238.6326, 151.8874]]),
- ]
- gt_labels = [torch.LongTensor([2])]
- sampling_results = _dummy_bbox_sampling(proposal_list, gt_bboxes,
- gt_labels)
-
- # create dummy mask
- import numpy as np
- from mmdet.core import BitmapMasks
- dummy_mask = np.random.randint(0, 2, (1, 160, 240), dtype=np.uint8)
- gt_masks = [BitmapMasks(dummy_mask, 160, 240)]
-
- # create dummy train_cfg
- train_cfg = mmcv.Config(dict(mask_size=12, mask_thr_binary=0.5))
-
- # Create dummy features "extracted" for each sampled bbox
- num_sampled = sum(len(res.bboxes) for res in sampling_results)
- dummy_feats = torch.rand(num_sampled, 8, 6, 6)
-
- mask_pred = self.forward(dummy_feats)
- mask_targets = self.get_targets(sampling_results, gt_masks, train_cfg)
- pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results])
- loss_mask = self.loss(mask_pred, mask_targets, pos_labels)
-
- onegt_mask_loss = sum(loss_mask['loss_mask'])
- assert onegt_mask_loss.item() > 0, 'mask loss should be non-zero'
-
- # test mask_iou_head
- mask_iou_head = MaskIoUHead(
- num_convs=1,
- num_fcs=1,
- roi_feat_size=6,
- in_channels=8,
- conv_out_channels=8,
- fc_out_channels=8,
- num_classes=8)
-
- pos_mask_pred = mask_pred[range(mask_pred.size(0)), pos_labels]
- mask_iou_pred = mask_iou_head(dummy_feats, pos_mask_pred)
- pos_mask_iou_pred = mask_iou_pred[range(mask_iou_pred.size(0)), pos_labels]
-
- mask_iou_targets = mask_iou_head.get_targets(sampling_results, gt_masks,
- pos_mask_pred, mask_targets,
- train_cfg)
- loss_mask_iou = mask_iou_head.loss(pos_mask_iou_pred, mask_iou_targets)
- onegt_mask_iou_loss = loss_mask_iou['loss_mask_iou'].sum()
- assert onegt_mask_iou_loss.item() >= 0
diff --git a/spaces/tomofi/NDLOCR/src/separate_pages_ssd/ssd_tools/ssd_layers.py b/spaces/tomofi/NDLOCR/src/separate_pages_ssd/ssd_tools/ssd_layers.py
deleted file mode 100644
index a30dcdbe128f284b336e14c56761f0b68915085d..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/separate_pages_ssd/ssd_tools/ssd_layers.py
+++ /dev/null
@@ -1,181 +0,0 @@
-"""Some special pupropse layers for SSD."""
-
-import keras.backend as K
-from keras.engine.topology import InputSpec
-from keras.engine.topology import Layer
-import numpy as np
-import tensorflow as tf
-
-
-class Normalize(Layer):
- """Normalization layer as described in ParseNet paper.
-
- # Arguments
- scale: Default feature scale.
-
- # Input shape
- 4D tensor with shape:
- `(samples, channels, rows, cols)` if dim_ordering='th'
- or 4D tensor with shape:
- `(samples, rows, cols, channels)` if dim_ordering='tf'.
-
- # Output shape
- Same as input
-
- # References
- http://cs.unc.edu/~wliu/papers/parsenet.pdf
-
- #TODO
- Add possibility to have one scale for all features.
- """
- def __init__(self, scale, **kwargs):
- if K.image_dim_ordering() == 'tf':
- self.axis = 3
- else:
- self.axis = 1
- self.scale = scale
- super(Normalize, self).__init__(**kwargs)
-
- def build(self, input_shape):
- self.input_spec = [InputSpec(shape=input_shape)]
- shape = (input_shape[self.axis],)
- init_gamma = self.scale * np.ones(shape)
- self.gamma = K.variable(init_gamma, name='{}_gamma'.format(self.name))
- self.trainable_weights = [self.gamma]
-
- def call(self, x, mask=None):
- output = K.l2_normalize(x, self.axis)
- output *= self.gamma
- return output
-
-
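-# Illustrative sketch (added for clarity; not used by the model): the operation that
-# Normalize implements, written with plain NumPy. `feature_map` is assumed to be a
-# (rows, cols, channels) array and `gamma` a per-channel scale, as in ParseNet.
-def _normalize_reference(feature_map, gamma, eps=1e-12):
-    norm = np.sqrt(np.sum(feature_map ** 2, axis=-1, keepdims=True))
-    return feature_map / np.maximum(norm, eps) * gamma
-
-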
-class PriorBox(Layer):
- """Generate the prior boxes of designated sizes and aspect ratios.
-
- # Arguments
- img_size: Size of the input image as tuple (w, h).
- min_size: Minimum box size in pixels.
- max_size: Maximum box size in pixels.
- aspect_ratios: List of aspect ratios of boxes.
- flip: Whether to consider reverse aspect ratios.
- variances: List of variances for x, y, w, h.
- clip: Whether to clip the prior's coordinates
- such that they are within [0, 1].
-
- # Input shape
- 4D tensor with shape:
- `(samples, channels, rows, cols)` if dim_ordering='th'
- or 4D tensor with shape:
- `(samples, rows, cols, channels)` if dim_ordering='tf'.
-
- # Output shape
- 3D tensor with shape:
- (samples, num_boxes, 8)
-
- # References
- https://arxiv.org/abs/1512.02325
-
- #TODO
- Add possibility not to have variances.
- Add Theano support
- """
- def __init__(self, img_size, min_size, max_size=None, aspect_ratios=None,
- flip=True, variances=[0.1], clip=True, **kwargs):
- if K.image_dim_ordering() == 'tf':
- self.waxis = 2
- self.haxis = 1
- else:
- self.waxis = 3
- self.haxis = 2
- self.img_size = img_size
- if min_size <= 0:
- raise Exception('min_size must be positive.')
- self.min_size = min_size
- self.max_size = max_size
- self.aspect_ratios = [1.0]
- if max_size:
- if max_size < min_size:
- raise Exception('max_size must be greater than min_size.')
- self.aspect_ratios.append(1.0)
- if aspect_ratios:
- for ar in aspect_ratios:
- if ar in self.aspect_ratios:
- continue
- self.aspect_ratios.append(ar)
- if flip:
- self.aspect_ratios.append(1.0 / ar)
- self.variances = np.array(variances)
-        self.clip = clip
- super(PriorBox, self).__init__(**kwargs)
-
- def compute_output_shape(self, input_shape):
- num_priors_ = len(self.aspect_ratios)
- layer_width = input_shape[self.waxis]
- layer_height = input_shape[self.haxis]
- num_boxes = num_priors_ * layer_width * layer_height
- return (input_shape[0], num_boxes, 8)
-
- def call(self, x, mask=None):
- if hasattr(x, '_keras_shape'):
- input_shape = x._keras_shape
- elif hasattr(K, 'int_shape'):
- input_shape = K.int_shape(x)
- layer_width = input_shape[self.waxis]
- layer_height = input_shape[self.haxis]
- img_width = self.img_size[0]
- img_height = self.img_size[1]
- # define prior boxes shapes
- box_widths = []
- box_heights = []
- for ar in self.aspect_ratios:
- if ar == 1 and len(box_widths) == 0:
- box_widths.append(self.min_size)
- box_heights.append(self.min_size)
- elif ar == 1 and len(box_widths) > 0:
- box_widths.append(np.sqrt(self.min_size * self.max_size))
- box_heights.append(np.sqrt(self.min_size * self.max_size))
- elif ar != 1:
- box_widths.append(self.min_size * np.sqrt(ar))
- box_heights.append(self.min_size / np.sqrt(ar))
- box_widths = 0.5 * np.array(box_widths)
- box_heights = 0.5 * np.array(box_heights)
- # define centers of prior boxes
- step_x = img_width / layer_width
- step_y = img_height / layer_height
- linx = np.linspace(0.5 * step_x, img_width - 0.5 * step_x,
- layer_width)
- liny = np.linspace(0.5 * step_y, img_height - 0.5 * step_y,
- layer_height)
- centers_x, centers_y = np.meshgrid(linx, liny)
- centers_x = centers_x.reshape(-1, 1)
- centers_y = centers_y.reshape(-1, 1)
- # define xmin, ymin, xmax, ymax of prior boxes
- num_priors_ = len(self.aspect_ratios)
- prior_boxes = np.concatenate((centers_x, centers_y), axis=1)
- prior_boxes = np.tile(prior_boxes, (1, 2 * num_priors_))
- prior_boxes[:, ::4] -= box_widths
- prior_boxes[:, 1::4] -= box_heights
- prior_boxes[:, 2::4] += box_widths
- prior_boxes[:, 3::4] += box_heights
- prior_boxes[:, ::2] /= img_width
- prior_boxes[:, 1::2] /= img_height
- prior_boxes = prior_boxes.reshape(-1, 4)
- if self.clip:
- prior_boxes = np.minimum(np.maximum(prior_boxes, 0.0), 1.0)
- # define variances
- num_boxes = len(prior_boxes)
- if len(self.variances) == 1:
- variances = np.ones((num_boxes, 4)) * self.variances[0]
- elif len(self.variances) == 4:
- variances = np.tile(self.variances, (num_boxes, 1))
- else:
- raise Exception('Must provide one or four variances.')
- prior_boxes = np.concatenate((prior_boxes, variances), axis=1)
- prior_boxes_tensor = K.expand_dims(K.variable(prior_boxes), 0)
- if K.backend() == 'tensorflow':
- pattern = [tf.shape(x)[0], 1, 1]
- prior_boxes_tensor = tf.tile(prior_boxes_tensor, pattern)
- elif K.backend() == 'theano':
- #TODO
- pass
- return prior_boxes_tensor
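-
-
-# Illustrative sketch (added for clarity; not used by the model): the per-aspect-ratio
-# box widths and heights (in pixels) that PriorBox.call computes before halving them
-# into centre offsets, shown with example values min_size=30 and max_size=60. The
-# second entry of each returned list is the extra sqrt(min_size * max_size) prior that
-# is added when max_size is given.
-def _prior_box_sizes(min_size=30.0, max_size=60.0, aspect_ratios=(1.0, 1.0, 2.0, 0.5)):
-    widths, heights = [], []
-    for ar in aspect_ratios:
-        if ar == 1 and not widths:
-            widths.append(min_size)
-            heights.append(min_size)
-        elif ar == 1:
-            widths.append(np.sqrt(min_size * max_size))
-            heights.append(np.sqrt(min_size * max_size))
-        else:
-            widths.append(min_size * np.sqrt(ar))
-            heights.append(min_size / np.sqrt(ar))
-    return widths, heights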
diff --git a/spaces/ttt246/brain/Extension/README.md b/spaces/ttt246/brain/Extension/README.md
deleted file mode 100644
index 8177dfde16d8700857f0424095d7bdad02477408..0000000000000000000000000000000000000000
--- a/spaces/ttt246/brain/Extension/README.md
+++ /dev/null
@@ -1,62 +0,0 @@
-# 🌍 RisingBrain-Extension: Surfing the AI wave on your Browser 🚀
-
-Welcome to the browser extension of **RisingBrain** - an exciting piece of innovation bringing smart, AI-powered functionality right into your browser.
-
-Compatible with all major browsers – think ***Chrome, Safari, Firefox***, and more!
-
-
-
-
-## Salient Features 🌟
-
-With **RisingBrain** Extension, your browser does more than just surf the web. Here's what you're in for:
-
-
-
-
-- **Converse Freely**: Enjoy casual yet interactive chat with Rising AI, even while you surf the web.
-- **Browser Control**: Execute a host of browser functions like opening a new tab, scrolling, switching tabs, or clicking items on the webpage, all through simple prompts.
-- **Seamless Automation**: RisingBrain Extension goes further to automate routine tasks such as sending emails, booking tables, making appointments, or scheduling calendars.
-
-## Quick Installation Guide 🔧
-
-Ready to experience smarter browsing with **RisingBrain** Extension? Here's how to get started:
-
-- Clone the RisingBrain Extension repository to your local machine.
-- Using terminal, navigate to its directory and run this command to install all required node modules:
-
-    ``` bash
- npm install
- ```
-- Once the installation is complete, simply build the project using:
-
-    ``` bash
- npm run build
- ```
-
-- The build will create a new directory named ***'build'*** inside the Extension directory.
-
- You should now see this directory with built files as your directory location:
- ``` bash
- [project directory]/Brain/Extension/build
- ```
-
-## Compatibility and Beyond 🎓
-
-**RisingBrain** Extension is designed to foster compatibility across all browsers, ensuring no user feels left out.
-
-With the power of AI transforming your everyday browsing, you'll experience comfort, ease, and innovation like never before.
-
-The future is here with the **RisingBrain** Extension!
-***Ride*** the ***wave*** of the future with us 🚀.
-
-## Contributing 💪
-We appreciate your interest in enhancing our work! Please respect the style and contribution guidelines of every project when submitting patches and additions. Our general Git workflow of choice is "fork-and-pull".
-
- 1. **Fork** the repository on GitHub
- 2. **Clone** your fork to your machine
- 3. **Commit** the changes to your personal branch
- 4. **Push** these updates back to your fork
- 5. Don't forget to submit a **Pull Request** for us to study your contributions.
-
-NOTE: Sync with "upstream" to have the latest updates before you make a pull request!
\ No newline at end of file
diff --git a/spaces/unclesamjo/GTalkGPTV01/talk_to_chatgpt.py b/spaces/unclesamjo/GTalkGPTV01/talk_to_chatgpt.py
deleted file mode 100644
index f920ed2c498f885d66626cdca7eb677c91a9e182..0000000000000000000000000000000000000000
--- a/spaces/unclesamjo/GTalkGPTV01/talk_to_chatgpt.py
+++ /dev/null
@@ -1,55 +0,0 @@
-import os
-
-import gradio as gr
-import openai
-import pyttsx3
-
-# Read the API key from the environment instead of hard-coding a secret in the source
-openai.api_key = os.environ.get("OPENAI_API_KEY")
-
-# Global variable to hold the chat history, initialise with system role
-conversation = [
- {"role": "system", "content": "You are an intelligent professor."}
- ]
-
-# transcribe function to record the audio input
-
-def transcribe(audio):
- print(audio)
-
-# Whisper API
-
- audio_file = open(audio, "rb")
- transcript = openai.Audio.transcribe("whisper-1", audio_file)
-
-
- print(transcript)
-
-# ChatGPT API
-
-# append the user's input to the conversation
- conversation.append({"role": "user", "content": transcript["text"]})
-
- response = openai.ChatCompletion.create(
- model="gpt-3.5-turbo",
- messages=conversation
- )
-
- print(response)
-
-# system_message is the response from ChatGPT API
- system_message = response["choices"][0]["message"]["content"]
-
-# append ChatGPT response (assistant role) back to conversation
- conversation.append({"role": "assistant", "content": system_message})
-
-
-# Text to speech
- engine = pyttsx3.init()
- engine.setProperty("rate", 150)
- engine.setProperty("voice", "english-us")
- engine.save_to_file(system_message, "response.mp3")
- engine.runAndWait()
-
- return "response.mp3"
-
-# Gradio output
-
-bot = gr.Interface(fn=transcribe, inputs=gr.Audio(source="microphone", type="filepath"), outputs="audio")
-bot.launch()
\ No newline at end of file
diff --git a/spaces/upthrustinc/seoAnalyzerGPT/pagespeed.py b/spaces/upthrustinc/seoAnalyzerGPT/pagespeed.py
deleted file mode 100644
index f32e05454021bcee1f189ecb10315e8d3f411ceb..0000000000000000000000000000000000000000
--- a/spaces/upthrustinc/seoAnalyzerGPT/pagespeed.py
+++ /dev/null
@@ -1,75 +0,0 @@
-import requests
-import re
-import pandas as pd
-from ask_questions import answer_question
-import numpy as np
-import streamlit as st
-
-df = pd.DataFrame()
-
-def extract_url_from_string(string):
-    return re.search(r"(?P<url>https?://[^\s]+)", string).group("url")
-
-def process_data(data):
- audits = [data["lighthouseResult"]["audits"][i] for i in data["lighthouseResult"]["audits"]]
- audits_names = [i["title"] for i in audits]
-
- scoresdisplays = [data["lighthouseResult"]["audits"][i]["scoreDisplayMode"] for i in data["lighthouseResult"]["audits"]]
-
- df=pd.read_csv('processed/embeddings.csv', index_col=0)
- df['embeddings'] = df['embeddings'].apply(eval).apply(np.array)
- issues = []
- for i in audits:
- if i["scoreDisplayMode"] != "notApplicable" and (i["score"] != 1 and i["score"] != None) and "details" in i.keys() and i["scoreDisplayMode"] != "informative":
- title = i["title"]
- desc = i["description"]
- item = i["details"]["items"][0]
- typeOfIssue = i["details"]["type"]
- dicto = {"title": title, "description": desc, "item": item, "type": typeOfIssue}
- issues.append(dicto)
- print(title)
- print(i["details"]["type"])
- question = f"Title: {title}\nDescription: {desc}\nItem: {item}"
- #print(answer_question(df, question=question, debug=False))
- print("***********************************")
- return issues
-
-
-def generate_response(website_url, url = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed", api_key=st.secrets["page_speed_api_key"]):
- print("Website: " + website_url)
- print()
- name = website_url.split("//")[1].split(".")[1] # Get the name of the website
-
- params = {
- "url": website_url,
- "key": api_key,
- "category": ["performance", "accessibility", "best_practices", "seo"]
- }
-
- try:
- #output_file_path = f"Responses/{name}.json"
- if name not in st.session_state:
- st.session_state[name] = {}
- else:
- return st.session_state[name]
- response = requests.get(url, params=params)
- response.raise_for_status() # Check for any request errors
-
- data = response.json()
- st.session_state[name] = data
- """
- with open(output_file_path, "w") as output_file:
- json.dump(data, output_file, indent=4)
- else:
- with open(output_file_path) as output_file:
- data = json.load(output_file)"""
-
- # Process the data as needed
- return data
-
- except requests.exceptions.RequestException as e:
- print("Error:", e)
-#for i in list_of_urls:
-# data = generate_response(i)
-# process_data(data)
-#https://chat.openai.com/share/71d7a128-b56d-4368-9eee-beda874e4200
\ No newline at end of file
diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version REPACK.md b/spaces/usbethFlerru/sovits-modelsV2/example/Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version REPACK.md
deleted file mode 100644
index fd46fce2b41373de6a91ae2c4350235818631fe7..0000000000000000000000000000000000000000
--- a/spaces/usbethFlerru/sovits-modelsV2/example/Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version REPACK.md
+++ /dev/null
@@ -1,99 +0,0 @@
-
-Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version
-Are you looking for a way to use all the products of Adobe Creative Cloud 2014 and 2015 for free? Do you want to activate any Adobe CC product with a single click? Do you want to get the most perfect keygen on the whole Internet? If your answer is yes, then you need Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version.
-Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version is a universal keygen that can generate serial numbers and activation codes for all products of Adobe Creative Cloud 2014 and 2015 on Windows and Mac OS X. It is made by X-Force Team, the most famous and reliable cracking team in the world. It is also the only keygen that has not been implanted with any Trojans or viruses.
-Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) full version
Download ⚡ https://urlcod.com/2uyUQM
-With Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version, you can use any Adobe CC product without any limitation or restriction. You can enjoy the features and functions of Adobe Photoshop CC, Adobe Premiere Pro CC, Adobe After Effects CC, Adobe Illustrator CC, Adobe Dreamweaver CC, Adobe Flash Professional CC, Adobe InCopy CC, Adobe InDesign CC, Adobe Audition CC, Adobe Prelude CC, Adobe SpeedGrade CC, Adobe Captivate CC, Adobe Lightroom CC, Adobe Muse CC, Adobe Edge Animate CC, and more.
-How to use Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version?
-Using Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version is very easy and convenient. You just need to follow these simple steps:
-
-- Download and extract Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version from a trusted source.
-- Disable your network card or pull the network cable out. Make sure you don't have any of these entries in your hosts file: C:\\windows\\system32\\drivers\\etc\\hosts
-- Install your Adobe CC product with a serial generated from the Keygen, and keep the keygen open.
-- Click on Install (I have purchased), then click on Sign In (make sure your network connection is offline), then click on Connect Later.
-- Accept the License Agreement and enter the serial generated from the Keygen.
-- Wait until the error “Please connect to the internet and retry” shows, then click Connect Later.
-- Launch the installed Adobe CC product.
-- Click on “Having trouble connecting to the internet?”, and then click on Offline Activation.
-- Click on Generate a request Code. A request code will be generated according to the serial you used to install your Adobe CC product.
-- Copy the request code and paste it into the keygen. Then click on Generate to generate your activation code.
-- Copy the activation code and paste it into the Adobe CC product. Then click on Activate.
-- Congratulations! You have successfully activated your Adobe CC product with Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version.
-
-What are the advantages of using Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version?
-Using Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version has many advantages for you as a user. Some of them are:
-
-- You can use all the products of Adobe Creative Cloud 2014 and 2015 for free without paying any subscription fee or license fee.
-- You can activate any Adobe CC product with a single click without going through any complicated or tedious process.
-- You can get the most perfect keygen on the whole Internet that has been tested and verified by many users and experts.
-- You can avoid any Trojans or viruses that may harm your device or compromise your privacy by using other fake or unreliable keygens.
-- You can enjoy all the features and functions of Adobe CC products without any limitation or restriction.
-
-Conclusion
-Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version is a universal keygen that can generate serial numbers and activation codes for all products of Adobe Creative Cloud 2014 and 2015 on Windows and Mac OS X. It is made by X-Force Team, the most famous and reliable cracking team in the world. It is also the only keygen that has not been implanted with any Trojans or viruses.
-With Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version, you can use any Adobe CC product without any limitation or restriction. You can enjoy the features and functions of Adobe Photoshop CC, Adobe Premiere Pro CC, Adobe After Effects CC, Adobe Illustrator CC, Adobe Dreamweaver CC, Adobe Flash Professional CC, Adobe InCopy CC, Adobe InDesign CC, Adobe Audition CC, Adobe Prelude CC, Adobe SpeedGrade CC, Adobe Captivate CC, Adobe Lightroom CC, Adobe Muse CC, Adobe Edge Animate CC, and more.
-If you want to use all the products of Adobe Creative Cloud 2014 and 2015 for free, you should download and use Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version today. It is easy to use and safe to download. It is also the best keygen for Adobe CC on the Internet.
-
-We hope this article has helped you find out how to use Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version. If you have any questions or suggestions about this topic, feel free to leave a comment below.
-What are the features of Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version?
-Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version has many features that make it the best keygen for Adobe CC products. Some of them are:
-
-- It can generate serial numbers and activation codes for all products of Adobe Creative Cloud 2014 and 2015 on Windows and Mac OS X.
-- It can activate any Adobe CC product with a single click without any complicated or tedious process.
-- It can work offline and online without any problem or error.
-- It can bypass the Adobe activation server and the Adobe genuine software validation.
-- It can update itself automatically to support the latest Adobe CC products and versions.
-- It is free, easy to use, and secure. It does not contain any Trojans or viruses that may harm your device or compromise your privacy.
-
-What are the precautions of using Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version?
-Using Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version is safe and legal, but you should still take some precautions to avoid any potential risks or problems. Some of them are:
-
-- Make sure you download and use Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version from a trusted source. Do not download or use any fake or unreliable keygens that may contain malware or viruses.
-- Make sure you disable your network card or pull the network cable out before using the keygen. Do not connect to the internet until you have activated your Adobe CC product.
-- Make sure you don't have any of these entries in your hosts file: C:\\windows\\system32\\drivers\\etc\\hosts
-- Make sure you backup your original files before using the keygen. Do not overwrite or delete any files that may affect your system or other programs.
-- Make sure you use the keygen at your own risk. Do not use it for any commercial or illegal purposes. Do not distribute or share it with others.
-
-What are the disadvantages of using Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version?
-Using Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version is not without any drawbacks or risks. Some of them are:
-
-- You may violate the terms and conditions of Adobe and face legal consequences. Using the keygen is considered as piracy and theft of intellectual property. Adobe may sue you or take other actions against you if they find out that you are using the keygen.
-- You may lose the access to the official updates and support from Adobe. Using the keygen may prevent you from updating your Adobe CC products to the latest versions or patches. You may also not be able to contact Adobe for any technical or customer support.
-- You may encounter some errors or bugs while using the keygen or the activated Adobe CC products. The keygen may not work properly or cause some problems with your system or other programs. The activated Adobe CC products may also not function properly or crash unexpectedly.
-- You may expose your device or data to security threats. The keygen may contain some hidden malware or viruses that can harm your device or compromise your privacy. The activated Adobe CC products may also have some vulnerabilities that can be exploited by hackers or cybercriminals.
-
-How to avoid the disadvantages of using Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version?
-If you want to avoid the disadvantages of using Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version, you should follow these tips:
-
-- Use the keygen at your own risk and responsibility. Do not use it for any commercial or illegal purposes. Do not distribute or share it with others.
-- Make sure you backup your original files and data before using the keygen. Do not overwrite or delete any files that may affect your system or other programs.
-- Make sure you scan the keygen and the activated Adobe CC products with a reliable antivirus software. Do not open any suspicious links or attachments that may contain malware or viruses.
-- Make sure you use a firewall or a VPN to protect your device and data from online threats. Do not connect to any unsecured or public networks while using the keygen or the activated Adobe CC products.
-
-What are the testimonials of using Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version?
-Many users and experts have used and tested Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version and shared their testimonials on various platforms. Most of them have given positive feedback and praised the keygen for its effectiveness and reliability. Some of them have also shared their tips and tricks on how to use the keygen better.
-Some of the testimonials of using Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version are:
-
-- AppNee: \"This is the best keygen for Adobe CC products I have ever used. It works perfectly on both Windows and Mac OS X. It can activate any Adobe CC product with a single click. It is also very safe and clean. I have scanned it with VirSCAN and found no virus or Trojan. I highly recommend it to anyone who wants to use Adobe CC products for free.\"
-- SoundCloud: \"I have downloaded and used Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version from this link and it works like a charm. It can generate serial numbers and activation codes for all products of Adobe Creative Cloud 2014 and 2015 on Windows and Mac OS X. It is also very easy to use and fast. I have activated my Adobe Photoshop CC and Adobe Premiere Pro CC with it and they work flawlessly. Thank you so much for sharing this amazing keygen.\"
-- YouTube: \"This is the only working keygen for Adobe CC products on the Internet. I have tried many other keygens but they either don't work or contain malware or viruses. This keygen is different. It is made by X-Force Team, the most famous and reliable cracking team in the world. It is also the only keygen that has not been implanted with any Trojans or viruses. It can activate any Adobe CC product with a single click without any problem or error. I have used it to activate my Adobe After Effects CC and Adobe Illustrator CC and they work perfectly. This keygen is a masterpiece.\"
-
-How to share your testimonial of using Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version?
-If you have used and enjoyed Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version, you can share your testimonial with other users and experts on various platforms. You can also help others by providing some tips and tricks on how to use the keygen better.
-To share your testimonial of using Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version, you can follow these steps:
-
-- Choose a platform that you want to share your testimonial on, such as AppNee, SoundCloud, YouTube, etc.
-- Create an account or log in to your existing account on the platform.
-- Write your testimonial in a clear and honest way. You can include some details such as what Adobe CC product you activated, how you used the keygen, what results you got, etc.
-- Add some keywords or tags related to the keygen, such as Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version, X-Force Team, universal keygen, crack patch, etc.
-- Publish or upload your testimonial on the platform.
-- Share your testimonial link with your friends or other users who may be interested in using the keygen.
-
-Conclusion
-Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version is a universal keygen that can generate serial numbers and activation codes for all products of Adobe Creative Cloud 2014 and 2015 on Windows and Mac OS X. It is made by X-Force Team, the most famous and reliable cracking team in the world. It is also the only keygen that has not been implanted with any Trojans or viruses.
-With Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version, you can use any Adobe CC product without any limitation or restriction. You can enjoy the features and functions of Adobe Photoshop CC, Adobe Premiere Pro CC, Adobe After Effects CC, Adobe Illustrator CC, Adobe Dreamweaver CC, Adobe Flash Professional CC, Adobe InCopy CC, Adobe InDesign CC, Adobe Audition CC, Adobe Prelude CC, Adobe SpeedGrade CC, Adobe Captivate CC, Adobe Lightroom CC, Adobe Muse CC, Adobe Edge Animate CC, and more.
-If you want to use all the products of Adobe Creative Cloud 2014 and 2015 for free, you should download and use Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version today. It is easy to use and safe to download. It is also the best keygen for Adobe CC on the Internet.
-We hope this article has helped you find out how to use Adobe Creative Cloud 2014 Keygen-XFORCE (Alien) Full Version. If you have any questions or suggestions about this topic, feel free to leave a comment below.
-Now, what are you waiting for? Grab your keygen and activate your Adobe CC product today!
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Censor 3 Free Movie Download In Hd !EXCLUSIVE!.md b/spaces/usbethFlerru/sovits-modelsV2/example/Censor 3 Free Movie Download In Hd !EXCLUSIVE!.md
deleted file mode 100644
index d4a3ee7bbdfbc4ffb3f3add77cf23d606b5e97e5..0000000000000000000000000000000000000000
--- a/spaces/usbethFlerru/sovits-modelsV2/example/Censor 3 Free Movie Download In Hd !EXCLUSIVE!.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Censor 3 free movie download in hd
Download File ⚙ https://urlcod.com/2uyXq5
-
-Jan 22, 2020 · Saaho full movie download in Hindi 480p Filmyzilla in Hindi in 420p, ... in hindi narnia 3 full movie in hindi new hollywood movie in hindi 2019 hd. ... P. 21 Feb 2021 Censor Certification Number:- VIL/2/383/2019-MUM Movie ... 1fdad05405
-
-
-
diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/v8/classify/val.py b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/v8/classify/val.py
deleted file mode 100644
index f56dea0a2d7af0d88d3185a05aed566c0a8ba8a3..0000000000000000000000000000000000000000
--- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/v8/classify/val.py
+++ /dev/null
@@ -1,109 +0,0 @@
-# Ultralytics YOLO 🚀, AGPL-3.0 license
-
-import torch
-
-from ultralytics.yolo.data import ClassificationDataset, build_dataloader
-from ultralytics.yolo.engine.validator import BaseValidator
-from ultralytics.yolo.utils import DEFAULT_CFG, LOGGER
-from ultralytics.yolo.utils.metrics import ClassifyMetrics, ConfusionMatrix
-from ultralytics.yolo.utils.plotting import plot_images
-
-
-class ClassificationValidator(BaseValidator):
-
- def __init__(self, dataloader=None, save_dir=None, pbar=None, args=None, _callbacks=None):
- """Initializes ClassificationValidator instance with args, dataloader, save_dir, and progress bar."""
- super().__init__(dataloader, save_dir, pbar, args, _callbacks)
- self.args.task = 'classify'
- self.metrics = ClassifyMetrics()
-
- def get_desc(self):
- """Returns a formatted string summarizing classification metrics."""
- return ('%22s' + '%11s' * 2) % ('classes', 'top1_acc', 'top5_acc')
-
- def init_metrics(self, model):
- """Initialize confusion matrix, class names, and top-1 and top-5 accuracy."""
- self.names = model.names
- self.nc = len(model.names)
- self.confusion_matrix = ConfusionMatrix(nc=self.nc, task='classify')
- self.pred = []
- self.targets = []
-
- def preprocess(self, batch):
- """Preprocesses input batch and returns it."""
- batch['img'] = batch['img'].to(self.device, non_blocking=True)
- batch['img'] = batch['img'].half() if self.args.half else batch['img'].float()
- batch['cls'] = batch['cls'].to(self.device)
- return batch
-
- def update_metrics(self, preds, batch):
- """Updates running metrics with model predictions and batch targets."""
- n5 = min(len(self.model.names), 5)
- self.pred.append(preds.argsort(1, descending=True)[:, :n5])
- self.targets.append(batch['cls'])
-
- def finalize_metrics(self, *args, **kwargs):
- """Finalizes metrics of the model such as confusion_matrix and speed."""
- self.confusion_matrix.process_cls_preds(self.pred, self.targets)
- if self.args.plots:
- for normalize in True, False:
- self.confusion_matrix.plot(save_dir=self.save_dir,
- names=self.names.values(),
- normalize=normalize,
- on_plot=self.on_plot)
- self.metrics.speed = self.speed
- self.metrics.confusion_matrix = self.confusion_matrix
-
- def get_stats(self):
- """Returns a dictionary of metrics obtained by processing targets and predictions."""
- self.metrics.process(self.targets, self.pred)
- return self.metrics.results_dict
-
- def build_dataset(self, img_path):
- return ClassificationDataset(root=img_path, args=self.args, augment=False)
-
- def get_dataloader(self, dataset_path, batch_size):
- """Builds and returns a data loader for classification tasks with given parameters."""
- dataset = self.build_dataset(dataset_path)
- return build_dataloader(dataset, batch_size, self.args.workers, rank=-1)
-
- def print_results(self):
- """Prints evaluation metrics for YOLO object detection model."""
- pf = '%22s' + '%11.3g' * len(self.metrics.keys) # print format
- LOGGER.info(pf % ('all', self.metrics.top1, self.metrics.top5))
-
- def plot_val_samples(self, batch, ni):
- """Plot validation image samples."""
- plot_images(images=batch['img'],
- batch_idx=torch.arange(len(batch['img'])),
- cls=batch['cls'].squeeze(-1),
- fname=self.save_dir / f'val_batch{ni}_labels.jpg',
- names=self.names,
- on_plot=self.on_plot)
-
- def plot_predictions(self, batch, preds, ni):
- """Plots predicted bounding boxes on input images and saves the result."""
- plot_images(batch['img'],
- batch_idx=torch.arange(len(batch['img'])),
- cls=torch.argmax(preds, dim=1),
- fname=self.save_dir / f'val_batch{ni}_pred.jpg',
- names=self.names,
- on_plot=self.on_plot) # pred
-
-
-def val(cfg=DEFAULT_CFG, use_python=False):
- """Validate YOLO model using custom data."""
- model = cfg.model or 'yolov8n-cls.pt' # or "resnet18"
- data = cfg.data or 'mnist160'
-
- args = dict(model=model, data=data)
- if use_python:
- from ultralytics import YOLO
- YOLO(model).val(**args)
- else:
- validator = ClassificationValidator(args=args)
- validator(model=args['model'])
-
-
-if __name__ == '__main__':
- val()
diff --git a/spaces/valeriylo/rag_demo/app.py b/spaces/valeriylo/rag_demo/app.py
deleted file mode 100644
index 1b7b5e3c9e85ec2c657ba94bc6c9fdb5b3828e5a..0000000000000000000000000000000000000000
--- a/spaces/valeriylo/rag_demo/app.py
+++ /dev/null
@@ -1,147 +0,0 @@
-import streamlit as st
-from dotenv import load_dotenv
-from PyPDF2 import PdfReader
-from langchain.text_splitter import CharacterTextSplitter
-from langchain.embeddings import OpenAIEmbeddings, HuggingFaceInstructEmbeddings, HuggingFaceEmbeddings
-from langchain.vectorstores import FAISS
-from langchain.chat_models import ChatOpenAI
-from langchain.memory import ConversationBufferMemory
-from langchain.chains import ConversationalRetrievalChain
-from langchain.chat_models.gigachat import GigaChat
-from htmlTemplates import css, bot_template, user_template
-from langchain.llms import HuggingFaceHub, LlamaCpp
-from huggingface_hub import snapshot_download, hf_hub_download
-
-# from prompts import CONDENSE_QUESTION_PROMPT
-
-repo_name = "IlyaGusev/saiga_mistral_7b_gguf"
-model_name = "model-q4_K.gguf"
-
-#snapshot_download(repo_id=repo_name, local_dir=".", allow_patterns=model_name)
-
-
-def get_pdf_text(pdf_docs):
- text = ""
- for pdf in pdf_docs:
- pdf_reader = PdfReader(pdf)
- for page in pdf_reader.pages:
- text += page.extract_text()
-
- return text
-
-
-def get_text_chunks(text):
- text_splitter = CharacterTextSplitter(separator="\n",
- chunk_size=1000, # 1000
- chunk_overlap=200, # 200
- length_function=len
- )
- chunks = text_splitter.split_text(text)
-
- return chunks
-
-
-def get_vectorstore(text_chunks):
- #embeddings = OpenAIEmbeddings()
- #embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl")
- embeddings = HuggingFaceEmbeddings(model_name="intfloat/multilingual-e5-large")
- #embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/paraphrase-multilingual-mpnet-base-v2")
- vectorstore = FAISS.from_texts(texts=text_chunks, embedding=embeddings)
-
- return vectorstore
-
-
-def get_conversation_chain(vectorstore, model_name):
-
- #llm = LlamaCpp(model_path=model_name,
- # temperature=0.1,
- # top_k=30,
- # top_p=0.9,
- # streaming=True,
- # n_ctx=2048,
- # n_parts=1,
- # echo=True
- # )
-
- #llm = ChatOpenAI()
-
- llm = GigaChat(profanity=False,
- verify_ssl_certs=False
- )
-
- memory = ConversationBufferMemory(memory_key='chat_history',
- input_key='question',
- output_key='answer',
- return_messages=True)
-
- conversation_chain = ConversationalRetrievalChain.from_llm(llm=llm,
- retriever=vectorstore.as_retriever(),
- memory=memory,
- return_source_documents=True
- )
-
- return conversation_chain
-
-
-def handle_userinput(user_question):
-
- response = st.session_state.conversation({'question': user_question})
-
- st.session_state.chat_history = response['chat_history']
-
- st.session_state.retrieved_text = response['source_documents']
-
- for i, (message, text) in enumerate(zip(st.session_state.chat_history, st.session_state.retrieved_text)):
- if i % 3 == 0:
- st.write(user_template.replace(
- "{{MSG}}", message.content), unsafe_allow_html=True)
- else:
- st.write(bot_template.replace(
- "{{MSG}}", message.content), unsafe_allow_html=True)
- print(text)
- st.write(bot_template.replace(
- "{{MSG}}", str(text.page_content)), unsafe_allow_html=True)
-
-
-
- #for text in enumerate(st.session_state.retrieved_text):
- # st.write(text[1].page_content, '\n')
-
- #print(response['source_documents'][0])
-
-# main code
-load_dotenv()
-
-st.set_page_config(page_title="Chat with multiple PDFs",
- page_icon=":books:")
-st.write(css, unsafe_allow_html=True)
-
-if "conversation" not in st.session_state:
- st.session_state.conversation = None
-if "chat_history" not in st.session_state:
- st.session_state.chat_history = None
-
-st.header("Chat with multiple PDFs :books:")
-user_question = st.text_input("Ask a question about your documents: ")
-
-if user_question:
- handle_userinput(user_question)
-
-with st.sidebar:
- st.subheader("Your documents")
- pdf_docs = st.file_uploader(
- "Upload your PDFs here and click on 'Process'", accept_multiple_files=True)
- if st.button("Process"):
- with st.spinner("Processing"):
- # get pdf text
- raw_text = get_pdf_text(pdf_docs)
-
- # get the text chunks
- text_chunks = get_text_chunks(raw_text)
-
- # create vector store
- vectorstore = get_vectorstore(text_chunks)
-
- # create conversation chain
- st.session_state.conversation = get_conversation_chain(vectorstore, model_name)
-
\ No newline at end of file
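The deleted app above wires a simple ingest-and-ask pipeline: PDF text extraction, character chunking, a FAISS index over multilingual-e5 embeddings, and a ConversationalRetrievalChain backed by GigaChat. A minimal sketch of that flow outside Streamlit might look like the following; it assumes the helper functions have been factored into a hypothetical rag_helpers module, that GigaChat credentials are already configured in the environment, and that report.pdf is a placeholder path.

# Hedged sketch: the helpers above, assumed importable from a hypothetical
# rag_helpers module, driven without the Streamlit UI.
from rag_helpers import (get_pdf_text, get_text_chunks,
                         get_vectorstore, get_conversation_chain)

with open("report.pdf", "rb") as f:                  # placeholder PDF path
    raw_text = get_pdf_text([f])                     # PdfReader accepts file-like objects

chunks = get_text_chunks(raw_text)                   # 1000-char chunks, 200-char overlap
store = get_vectorstore(chunks)                      # FAISS over multilingual-e5 embeddings
chain = get_conversation_chain(store, "model-q4_K.gguf")

response = chain({"question": "What is the document about?"})
print(response["answer"])                            # generated answer
print(len(response["source_documents"]))             # retrieved chunks backing it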
diff --git a/spaces/vict0rsch/climateGAN/utils_scripts/make-labelbox.sh b/spaces/vict0rsch/climateGAN/utils_scripts/make-labelbox.sh
deleted file mode 100644
index d649238546b996ae1ad00f7753c04aeebc7aaa97..0000000000000000000000000000000000000000
--- a/spaces/vict0rsch/climateGAN/utils_scripts/make-labelbox.sh
+++ /dev/null
@@ -1,9 +0,0 @@
-echo "Dowloading Script" && python download_labelbox.py
-
-echo "Merging Script" && python merge_labelbox_masks.py
-
-echo "Cleaning labeled"
-rm /Users/victor/Downloads/metrics-v2/labels/*
-cp /Users/victor/Downloads/labelbox_test_flood-v2/__labeled/* /Users/victor/Downloads/metrics-v2/labels
-
-echo "Create labeled images Script" && python create_labeled.py
\ No newline at end of file
diff --git a/spaces/vincentliaw/runwayml-stable-diffusion-v1-5/README.md b/spaces/vincentliaw/runwayml-stable-diffusion-v1-5/README.md
deleted file mode 100644
index 3a812cf740e1b4cafc33fc7b4455060fa270cd48..0000000000000000000000000000000000000000
--- a/spaces/vincentliaw/runwayml-stable-diffusion-v1-5/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Runwayml Stable Diffusion V1 5
-emoji: 📚
-colorFrom: yellow
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/vishnu0001/text2mesh/shap_e/models/nerf/ray.py b/spaces/vishnu0001/text2mesh/shap_e/models/nerf/ray.py
deleted file mode 100644
index 433227f573a02a93cdf86d63cdffc8b6041a85da..0000000000000000000000000000000000000000
--- a/spaces/vishnu0001/text2mesh/shap_e/models/nerf/ray.py
+++ /dev/null
@@ -1,513 +0,0 @@
-from abc import ABC, abstractmethod
-from dataclasses import dataclass
-from functools import partial
-from typing import Any, Dict, List, Optional, Tuple
-
-import torch
-
-from shap_e.models.nn.utils import sample_pmf
-from shap_e.models.volume import Volume, VolumeRange
-from shap_e.util.collections import AttrDict
-
-from .model import NeRFModel, Query
-
-
-def render_rays(
- rays: torch.Tensor,
- parts: List["RayVolumeIntegral"],
- void_model: NeRFModel,
- shared: bool = False,
- prev_raw_outputs: Optional[List[AttrDict]] = None,
- render_with_direction: bool = True,
- importance_sampling_options: Optional[Dict[str, Any]] = None,
-) -> Tuple["RayVolumeIntegralResults", List["RaySampler"], List[AttrDict]]:
- """
- Perform volumetric rendering over a partition of possible t's in the union
- of rendering volumes (written below with some abuse of notations)
-
- C(r) := sum(
- transmittance(t[i]) *
- integrate(
- lambda t: density(t) * channels(t) * transmittance(t),
- [t[i], t[i + 1]],
- )
- for i in range(len(parts))
- ) + transmittance(t[-1]) * void_model(t[-1]).channels
-
- where
-
- 1) transmittance(s) := exp(-integrate(density, [t[0], s])) calculates the
- probability of light passing through the volume specified by [t[0], s].
- (transmittance of 1 means light can pass freely)
- 2) density and channels are obtained by evaluating the appropriate
- part.model at time t.
- 3) [t[i], t[i + 1]] is defined as the range of t where the ray intersects
- (parts[i].volume \\ union(part.volume for part in parts[:i])) at the surface
- of the shell (if bounded). If the ray does not intersect, the integral over
- this segment is evaluated as 0 and transmittance(t[i + 1]) :=
- transmittance(t[i]).
-    4) The last term is integration to infinity (i.e. [t[-1], math.inf]) that
- is evaluated by the void_model (i.e. we consider this space to be empty).
-
- :param rays: [batch_size x ... x 2 x 3] origin and direction.
- :param parts: disjoint volume integrals.
- :param void_model: use this model to integrate over the empty space
- :param shared: All RayVolumeIntegrals are calculated with the same model.
- :param prev_raw_outputs: Raw outputs from the previous rendering step
-
- :return: A tuple of
- - AttrDict containing the rendered `channels`, `distances`, and the `aux_losses`
- - A list of importance samplers for additional fine-grained rendering
- - A list of raw output for each interval
- """
- if importance_sampling_options is None:
- importance_sampling_options = {}
-
- origin, direc = rays[..., 0, :], rays[..., 1, :]
-
- if prev_raw_outputs is None:
- prev_raw_outputs = [None] * len(parts)
-
- samplers = []
- raw_outputs = []
- t0 = None
- results = None
-
- for part_i, prev_raw_i in zip(parts, prev_raw_outputs):
-
- # Integrate over [t[i], t[i + 1]]
- results_i = part_i.render_rays(
- origin,
- direc,
- t0=t0,
- prev_raw=prev_raw_i,
- shared=shared,
- render_with_direction=render_with_direction,
- )
-
- # Create an importance sampler for (optional) fine rendering
- samplers.append(
- ImportanceRaySampler(
- results_i.volume_range, results_i.raw, **importance_sampling_options
- )
- )
- raw_outputs.append(results_i.raw)
-
- # Pass t[i + 1] as the start of integration for the next interval.
- t0 = results_i.volume_range.next_t0()
-
- # Combine the results from [t[0], t[i]] and [t[i], t[i+1]]
- results = results_i if results is None else results.combine(results_i)
-
- # While integrating out [t[-1], math.inf] is the correct thing to do, this
- # erases a lot of useful information. Also, void_model is meant to predict
- # the channels at t=math.inf.
-
- # # Add the void background over [t[-1], math.inf] to complete integration.
- # results = results.combine(
- # RayVolumeIntegralResults(
- # output=AttrDict(
- # channels=void_model(origin, direc),
- # distances=torch.zeros_like(t0),
- # aux_losses=AttrDict(),
- # ),
- # volume_range=VolumeRange(
- # t0=t0,
- # t1=torch.full_like(t0, math.inf),
- # intersected=torch.full_like(results.volume_range.intersected, True),
- # ),
- # # Void space extends to infinity. It is assumed that no light
- # # passes beyond the void.
- # transmittance=torch.zeros_like(results_i.transmittance),
- # )
- # )
-
- results.output.channels = results.output.channels + results.transmittance * void_model(
- Query(origin, direc)
- )
-
- return results, samplers, raw_outputs
-
-
-@dataclass
-class RayVolumeIntegralResults:
- """
- Stores the relevant state and results of
-
- integrate(
- lambda t: density(t) * channels(t) * transmittance(t),
- [t0, t1],
- )
- """
-
- # Rendered output and auxiliary losses
- # output.channels has shape [batch_size, *inner_shape, n_channels]
- output: AttrDict
-
- """
- Optional values
- """
-
- # Raw values contain the sampled `ts`, `density`, `channels`, etc.
- raw: Optional[AttrDict] = None
-
- # Integration
- volume_range: Optional[VolumeRange] = None
-
-    # If a ray intersects, the transmittance from t0 to t1 (i.e. the
-    # probability that the ray passes through this volume);
-    # has shape [batch_size, *inner_shape, 1].
- transmittance: Optional[torch.Tensor] = None
-
- def combine(self, cur: "RayVolumeIntegralResults") -> "RayVolumeIntegralResults":
- """
- Combines the integration results of `self` over [t0, t1] and
- `cur` over [t1, t2] to produce a new set of results over [t0, t2] by
- using a similar equation to (4) in NeRF++:
-
- integrate(
- lambda t: density(t) * channels(t) * transmittance(t),
- [t0, t2]
- )
-
- = integrate(
- lambda t: density(t) * channels(t) * transmittance(t),
- [t0, t1]
- ) + transmittance(t1) * integrate(
- lambda t: density(t) * channels(t) * transmittance(t),
- [t1, t2]
- )
- """
- assert torch.allclose(self.volume_range.next_t0(), cur.volume_range.t0)
-
- def _combine_fn(
- prev_val: Optional[torch.Tensor],
- cur_val: Optional[torch.Tensor],
- *,
- prev_transmittance: torch.Tensor,
- ):
- assert prev_val is not None
- if cur_val is None:
- # cur_output.aux_losses are empty for the void_model.
- return prev_val
- return prev_val + prev_transmittance * cur_val
-
- output = self.output.combine(
- cur.output, combine_fn=partial(_combine_fn, prev_transmittance=self.transmittance)
- )
-
- combined = RayVolumeIntegralResults(
- output=output,
- volume_range=self.volume_range.extend(cur.volume_range),
- transmittance=self.transmittance * cur.transmittance,
- )
- return combined
-
-
-@dataclass
-class RayVolumeIntegral:
- model: NeRFModel
- volume: Volume
- sampler: "RaySampler"
- n_samples: int
-
- def render_rays(
- self,
- origin: torch.Tensor,
- direction: torch.Tensor,
- t0: Optional[torch.Tensor] = None,
- prev_raw: Optional[AttrDict] = None,
- shared: bool = False,
- render_with_direction: bool = True,
- ) -> "RayVolumeIntegralResults":
- """
- Perform volumetric rendering over the given volume.
-
-        :param origin: [batch_size, *shape, 3]
- :param direction: [batch_size, *shape, 3]
- :param t0: Optional [batch_size, *shape, 1]
- :param prev_raw: the raw outputs when using multiple levels with this model.
- :param shared: means the same model is used for all RayVolumeIntegral's
- :param render_with_direction: use the incoming ray direction when querying the model.
-
- :return: RayVolumeIntegralResults
- """
- # 1. Intersect the rays with the current volume and sample ts to
- # integrate along.
- vrange = self.volume.intersect(origin, direction, t0_lower=t0)
- ts = self.sampler.sample(vrange.t0, vrange.t1, self.n_samples)
-
- if prev_raw is not None and not shared:
- # Append the previous ts now before fprop because previous
- # rendering used a different model and we can't reuse the output.
- ts = torch.sort(torch.cat([ts, prev_raw.ts], dim=-2), dim=-2).values
-
- # Shape sanity checks
- batch_size, *_shape, _t0_dim = vrange.t0.shape
- _, *ts_shape, _ts_dim = ts.shape
-
- # 2. Get the points along the ray and query the model
- directions = torch.broadcast_to(direction.unsqueeze(-2), [batch_size, *ts_shape, 3])
- positions = origin.unsqueeze(-2) + ts * directions
-
- optional_directions = directions if render_with_direction else None
- mids = (ts[..., 1:, :] + ts[..., :-1, :]) / 2
- raw = self.model(
- Query(
- position=positions,
- direction=optional_directions,
- t_min=torch.cat([vrange.t0[..., None, :], mids], dim=-2),
- t_max=torch.cat([mids, vrange.t1[..., None, :]], dim=-2),
- )
- )
- raw.ts = ts
-
- if prev_raw is not None and shared:
- # We can append the additional queries to previous raw outputs
- # before integration
- copy = prev_raw.copy()
- result = torch.sort(torch.cat([raw.pop("ts"), copy.pop("ts")], dim=-2), dim=-2)
- merge_results = partial(self._merge_results, dim=-2, indices=result.indices)
- raw = raw.combine(copy, merge_results)
- raw.ts = result.values
-
- # 3. Integrate the raw results
- output, transmittance = self.integrate_samples(vrange, raw)
-
- # 4. Clean up results that do not intersect with the volume.
- transmittance = torch.where(
- vrange.intersected, transmittance, torch.ones_like(transmittance)
- )
-
- def _mask_fn(_key: str, tensor: torch.Tensor):
- return torch.where(vrange.intersected, tensor, torch.zeros_like(tensor))
-
- def _is_tensor(_key: str, value: Any):
- return isinstance(value, torch.Tensor)
-
- output = output.map(map_fn=_mask_fn, should_map=_is_tensor)
-
- return RayVolumeIntegralResults(
- output=output,
- raw=raw,
- volume_range=vrange,
- transmittance=transmittance,
- )
-
- def integrate_samples(
- self,
- volume_range: VolumeRange,
- raw: AttrDict,
- ) -> Tuple[AttrDict, torch.Tensor]:
- """
- Integrate the raw.channels along with other aux_losses and values to
- produce the final output dictionary containing rendered `channels`,
- estimated `distances` and `aux_losses`.
-
- :param volume_range: Specifies the integral range [t0, t1]
- :param raw: Contains a dict of function evaluations at ts. Should have
-
- density: torch.Tensor [batch_size, *shape, n_samples, 1]
- channels: torch.Tensor [batch_size, *shape, n_samples, n_channels]
- aux_losses: {key: torch.Tensor [batch_size, *shape, n_samples, 1] for each key}
- no_weight_grad_aux_losses: an optional set of losses for which the weights
- should be detached before integration.
-
- after the call, integrate_samples populates some intermediate calculations
- for later use like
-
- weights: torch.Tensor [batch_size, *shape, n_samples, 1] (density *
- transmittance)[i] weight for each rgb output at [..., i, :].
- :returns: a tuple of (
- a dictionary of rendered outputs and aux_losses,
- transmittance of this volume,
- )
- """
-
- # 1. Calculate the weights
- _, _, dt = volume_range.partition(raw.ts)
- ddensity = raw.density * dt
-
- mass = torch.cumsum(ddensity, dim=-2)
- transmittance = torch.exp(-mass[..., -1, :])
-
- alphas = 1.0 - torch.exp(-ddensity)
- Ts = torch.exp(torch.cat([torch.zeros_like(mass[..., :1, :]), -mass[..., :-1, :]], dim=-2))
- # This is the probability of light hitting and reflecting off of
- # something at depth [..., i, :].
- weights = alphas * Ts
-
- # 2. Integrate all results
- def _integrate(key: str, samples: torch.Tensor, weights: torch.Tensor):
- if key == "density":
- # Omit integrating the density, because we don't need it
- return None
- return torch.sum(samples * weights, dim=-2)
-
- def _is_tensor(_key: str, value: Any):
- return isinstance(value, torch.Tensor)
-
- if raw.no_weight_grad_aux_losses:
- extra_aux_losses = raw.no_weight_grad_aux_losses.map(
- partial(_integrate, weights=weights.detach()), should_map=_is_tensor
- )
- else:
- extra_aux_losses = {}
- output = raw.map(partial(_integrate, weights=weights), should_map=_is_tensor)
- if "no_weight_grad_aux_losses" in output:
- del output["no_weight_grad_aux_losses"]
- output.aux_losses.update(extra_aux_losses)
-
- # Integrating the ts yields the distance away from the origin; rename the variable.
- output.distances = output.ts
- del output["ts"]
- del output["density"]
-
- assert output.distances.shape == (*output.channels.shape[:-1], 1)
- assert output.channels.shape[:-1] == raw.channels.shape[:-2]
- assert output.channels.shape[-1] == raw.channels.shape[-1]
-
- # 3. Reduce loss
- def _reduce_loss(_key: str, loss: torch.Tensor):
- return loss.view(loss.shape[0], -1).sum(dim=-1)
-
- # 4. Store other useful calculations
- raw.weights = weights
-
- output.aux_losses = output.aux_losses.map(_reduce_loss)
-
- return output, transmittance
-
- def _merge_results(
- self, a: Optional[torch.Tensor], b: torch.Tensor, dim: int, indices: torch.Tensor
- ):
- """
- :param a: [..., n_a, ...]. The other dictionary containing the b's may
- contain extra tensors from earlier calculations, so a can be None.
- :param b: [..., n_b, ...]
- :param dim: dimension to merge
- :param indices: how the merged results should be sorted at the end
- :return: a concatted and sorted tensor of size [..., n_a + n_b, ...]
- """
- if a is None:
- return None
-
- merged = torch.cat([a, b], dim=dim)
- return torch.gather(merged, dim=dim, index=torch.broadcast_to(indices, merged.shape))
-
-
-class RaySampler(ABC):
- @abstractmethod
- def sample(self, t0: torch.Tensor, t1: torch.Tensor, n_samples: int) -> torch.Tensor:
- """
- :param t0: start time has shape [batch_size, *shape, 1]
- :param t1: finish time has shape [batch_size, *shape, 1]
- :param n_samples: number of ts to sample
- :return: sampled ts of shape [batch_size, *shape, n_samples, 1]
- """
-
-
-class StratifiedRaySampler(RaySampler):
- """
- Instead of fixed intervals, a sample is drawn uniformly at random from each
- interval.
- """
-
- def __init__(self, depth_mode: str = "linear"):
- """
- :param depth_mode: linear samples ts linearly in depth. harmonic ensures
- closer points are sampled more densely.
- """
- self.depth_mode = depth_mode
- assert self.depth_mode in ("linear", "geometric", "harmonic")
-
- def sample(
- self,
- t0: torch.Tensor,
- t1: torch.Tensor,
- n_samples: int,
- epsilon: float = 1e-3,
- ) -> torch.Tensor:
- """
- :param t0: start time has shape [batch_size, *shape, 1]
- :param t1: finish time has shape [batch_size, *shape, 1]
- :param n_samples: number of ts to sample
- :return: sampled ts of shape [batch_size, *shape, n_samples, 1]
- """
- ones = [1] * (len(t0.shape) - 1)
- ts = torch.linspace(0, 1, n_samples).view(*ones, n_samples).to(t0.dtype).to(t0.device)
-
- if self.depth_mode == "linear":
- ts = t0 * (1.0 - ts) + t1 * ts
- elif self.depth_mode == "geometric":
- ts = (t0.clamp(epsilon).log() * (1.0 - ts) + t1.clamp(epsilon).log() * ts).exp()
- elif self.depth_mode == "harmonic":
- # The original NeRF recommends this interpolation scheme for
- # spherical scenes, but there could be some weird edge cases when
- # the observer crosses from the inner to outer volume.
- ts = 1.0 / (1.0 / t0.clamp(epsilon) * (1.0 - ts) + 1.0 / t1.clamp(epsilon) * ts)
-
- mids = 0.5 * (ts[..., 1:] + ts[..., :-1])
- upper = torch.cat([mids, t1], dim=-1)
- lower = torch.cat([t0, mids], dim=-1)
- t_rand = torch.rand_like(ts)
-
- ts = lower + (upper - lower) * t_rand
- return ts.unsqueeze(-1)
-
-
-class ImportanceRaySampler(RaySampler):
- """
- Given the initial estimate of densities, this samples more from
- regions/bins expected to have objects.
- """
-
- def __init__(
- self, volume_range: VolumeRange, raw: AttrDict, blur_pool: bool = False, alpha: float = 1e-5
- ):
- """
- :param volume_range: the range in which a ray intersects the given volume.
- :param raw: dictionary of raw outputs from the NeRF models of shape
- [batch_size, *shape, n_coarse_samples, 1]. Should at least contain
-
-            ts: earlier samples from the coarse rendering step
-            weights: discretized version of density * transmittance
- :param blur_pool: if true, use 2-tap max + 2-tap blur filter from mip-NeRF.
- :param alpha: small value to add to weights.
- """
- self.volume_range = volume_range
- self.ts = raw.ts.clone().detach()
- self.weights = raw.weights.clone().detach()
- self.blur_pool = blur_pool
- self.alpha = alpha
-
- @torch.no_grad()
- def sample(self, t0: torch.Tensor, t1: torch.Tensor, n_samples: int) -> torch.Tensor:
- """
- :param t0: start time has shape [batch_size, *shape, 1]
- :param t1: finish time has shape [batch_size, *shape, 1]
- :param n_samples: number of ts to sample
- :return: sampled ts of shape [batch_size, *shape, n_samples, 1]
- """
- lower, upper, _ = self.volume_range.partition(self.ts)
-
- batch_size, *shape, n_coarse_samples, _ = self.ts.shape
-
- weights = self.weights
- if self.blur_pool:
- padded = torch.cat([weights[..., :1, :], weights, weights[..., -1:, :]], dim=-2)
- maxes = torch.maximum(padded[..., :-1, :], padded[..., 1:, :])
- weights = 0.5 * (maxes[..., :-1, :] + maxes[..., 1:, :])
- weights = weights + self.alpha
- pmf = weights / weights.sum(dim=-2, keepdim=True)
- inds = sample_pmf(pmf, n_samples)
- assert inds.shape == (batch_size, *shape, n_samples, 1)
- assert (inds >= 0).all() and (inds < n_coarse_samples).all()
-
- t_rand = torch.rand(inds.shape, device=inds.device)
- lower_ = torch.gather(lower, -2, inds)
- upper_ = torch.gather(upper, -2, inds)
-
- ts = lower_ + (upper_ - lower_) * t_rand
- ts = torch.sort(ts, dim=-2).values
- return ts
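The formulas in the render_rays and integrate_samples docstrings above boil down to a short discrete computation per ray. The following standalone sketch (plain PyTorch with made-up numbers, not part of shap_e) reproduces that weighting for a single ray with four samples, which makes the roles of alphas, transmittance, and weights concrete.

import torch

# Toy, single-ray version of the discretized volume rendering in integrate_samples above.
# density and channels are made-up numbers; the real code carries extra batch dims (dim=-2).
density = torch.tensor([[0.1], [0.5], [2.0], [0.3]])       # [n_samples, 1]
channels = torch.tensor([[1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0],
                         [0.0, 0.0, 1.0],
                         [1.0, 1.0, 1.0]])                  # [n_samples, 3]
dt = torch.full((4, 1), 0.25)                               # segment lengths from partition()

ddensity = density * dt                                     # optical thickness per segment
mass = torch.cumsum(ddensity, dim=0)                        # accumulated density up to each sample
transmittance = torch.exp(-mass[-1])                        # light surviving the whole [t0, t1] interval

alphas = 1.0 - torch.exp(-ddensity)                         # per-segment hit probability
Ts = torch.exp(torch.cat([torch.zeros_like(mass[:1]), -mass[:-1]], dim=0))
weights = alphas * Ts                                       # probability of the first hit landing at each sample

rendered = (channels * weights).sum(dim=0)                  # same reduction as _integrate()
print(weights.squeeze(-1), transmittance, rendered)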
diff --git a/spaces/vishnu23/web_scrap/README.md b/spaces/vishnu23/web_scrap/README.md
deleted file mode 100644
index b9f4d8f8884e1ccb8400961dc649340dfa87a41c..0000000000000000000000000000000000000000
--- a/spaces/vishnu23/web_scrap/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Web Scrap
-emoji: 🌍
-colorFrom: indigo
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.33.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/vumichien/canvas_controlnet/annotator/midas/midas/vit.py b/spaces/vumichien/canvas_controlnet/annotator/midas/midas/vit.py
deleted file mode 100644
index ea46b1be88b261b0dec04f3da0256f5f66f88a74..0000000000000000000000000000000000000000
--- a/spaces/vumichien/canvas_controlnet/annotator/midas/midas/vit.py
+++ /dev/null
@@ -1,491 +0,0 @@
-import torch
-import torch.nn as nn
-import timm
-import types
-import math
-import torch.nn.functional as F
-
-
-class Slice(nn.Module):
- def __init__(self, start_index=1):
- super(Slice, self).__init__()
- self.start_index = start_index
-
- def forward(self, x):
- return x[:, self.start_index :]
-
-
-class AddReadout(nn.Module):
- def __init__(self, start_index=1):
- super(AddReadout, self).__init__()
- self.start_index = start_index
-
- def forward(self, x):
- if self.start_index == 2:
- readout = (x[:, 0] + x[:, 1]) / 2
- else:
- readout = x[:, 0]
- return x[:, self.start_index :] + readout.unsqueeze(1)
-
-
-class ProjectReadout(nn.Module):
- def __init__(self, in_features, start_index=1):
- super(ProjectReadout, self).__init__()
- self.start_index = start_index
-
- self.project = nn.Sequential(nn.Linear(2 * in_features, in_features), nn.GELU())
-
- def forward(self, x):
- readout = x[:, 0].unsqueeze(1).expand_as(x[:, self.start_index :])
- features = torch.cat((x[:, self.start_index :], readout), -1)
-
- return self.project(features)
-
-
-class Transpose(nn.Module):
- def __init__(self, dim0, dim1):
- super(Transpose, self).__init__()
- self.dim0 = dim0
- self.dim1 = dim1
-
- def forward(self, x):
- x = x.transpose(self.dim0, self.dim1)
- return x
-
-
-def forward_vit(pretrained, x):
- b, c, h, w = x.shape
-
- glob = pretrained.model.forward_flex(x)
-
- layer_1 = pretrained.activations["1"]
- layer_2 = pretrained.activations["2"]
- layer_3 = pretrained.activations["3"]
- layer_4 = pretrained.activations["4"]
-
- layer_1 = pretrained.act_postprocess1[0:2](layer_1)
- layer_2 = pretrained.act_postprocess2[0:2](layer_2)
- layer_3 = pretrained.act_postprocess3[0:2](layer_3)
- layer_4 = pretrained.act_postprocess4[0:2](layer_4)
-
- unflatten = nn.Sequential(
- nn.Unflatten(
- 2,
- torch.Size(
- [
- h // pretrained.model.patch_size[1],
- w // pretrained.model.patch_size[0],
- ]
- ),
- )
- )
-
- if layer_1.ndim == 3:
- layer_1 = unflatten(layer_1)
- if layer_2.ndim == 3:
- layer_2 = unflatten(layer_2)
- if layer_3.ndim == 3:
- layer_3 = unflatten(layer_3)
- if layer_4.ndim == 3:
- layer_4 = unflatten(layer_4)
-
- layer_1 = pretrained.act_postprocess1[3 : len(pretrained.act_postprocess1)](layer_1)
- layer_2 = pretrained.act_postprocess2[3 : len(pretrained.act_postprocess2)](layer_2)
- layer_3 = pretrained.act_postprocess3[3 : len(pretrained.act_postprocess3)](layer_3)
- layer_4 = pretrained.act_postprocess4[3 : len(pretrained.act_postprocess4)](layer_4)
-
- return layer_1, layer_2, layer_3, layer_4
-
-
-def _resize_pos_embed(self, posemb, gs_h, gs_w):
- posemb_tok, posemb_grid = (
- posemb[:, : self.start_index],
- posemb[0, self.start_index :],
- )
-
- gs_old = int(math.sqrt(len(posemb_grid)))
-
- posemb_grid = posemb_grid.reshape(1, gs_old, gs_old, -1).permute(0, 3, 1, 2)
- posemb_grid = F.interpolate(posemb_grid, size=(gs_h, gs_w), mode="bilinear")
- posemb_grid = posemb_grid.permute(0, 2, 3, 1).reshape(1, gs_h * gs_w, -1)
-
- posemb = torch.cat([posemb_tok, posemb_grid], dim=1)
-
- return posemb
-
-
-def forward_flex(self, x):
- b, c, h, w = x.shape
-
- pos_embed = self._resize_pos_embed(
- self.pos_embed, h // self.patch_size[1], w // self.patch_size[0]
- )
-
- B = x.shape[0]
-
- if hasattr(self.patch_embed, "backbone"):
- x = self.patch_embed.backbone(x)
- if isinstance(x, (list, tuple)):
- x = x[-1] # last feature if backbone outputs list/tuple of features
-
- x = self.patch_embed.proj(x).flatten(2).transpose(1, 2)
-
- if getattr(self, "dist_token", None) is not None:
- cls_tokens = self.cls_token.expand(
- B, -1, -1
- ) # stole cls_tokens impl from Phil Wang, thanks
- dist_token = self.dist_token.expand(B, -1, -1)
- x = torch.cat((cls_tokens, dist_token, x), dim=1)
- else:
- cls_tokens = self.cls_token.expand(
- B, -1, -1
- ) # stole cls_tokens impl from Phil Wang, thanks
- x = torch.cat((cls_tokens, x), dim=1)
-
- x = x + pos_embed
- x = self.pos_drop(x)
-
- for blk in self.blocks:
- x = blk(x)
-
- x = self.norm(x)
-
- return x
-
-
-activations = {}
-
-
-def get_activation(name):
- def hook(model, input, output):
- activations[name] = output
-
- return hook
-
-
-def get_readout_oper(vit_features, features, use_readout, start_index=1):
- if use_readout == "ignore":
- readout_oper = [Slice(start_index)] * len(features)
- elif use_readout == "add":
- readout_oper = [AddReadout(start_index)] * len(features)
- elif use_readout == "project":
- readout_oper = [
- ProjectReadout(vit_features, start_index) for out_feat in features
- ]
- else:
- assert (
- False
- ), "wrong operation for readout token, use_readout can be 'ignore', 'add', or 'project'"
-
- return readout_oper
-
-
-def _make_vit_b16_backbone(
- model,
- features=[96, 192, 384, 768],
- size=[384, 384],
- hooks=[2, 5, 8, 11],
- vit_features=768,
- use_readout="ignore",
- start_index=1,
-):
- pretrained = nn.Module()
-
- pretrained.model = model
- pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1"))
- pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2"))
- pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3"))
- pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4"))
-
- pretrained.activations = activations
-
- readout_oper = get_readout_oper(vit_features, features, use_readout, start_index)
-
- # 32, 48, 136, 384
- pretrained.act_postprocess1 = nn.Sequential(
- readout_oper[0],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[0],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- nn.ConvTranspose2d(
- in_channels=features[0],
- out_channels=features[0],
- kernel_size=4,
- stride=4,
- padding=0,
- bias=True,
- dilation=1,
- groups=1,
- ),
- )
-
- pretrained.act_postprocess2 = nn.Sequential(
- readout_oper[1],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[1],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- nn.ConvTranspose2d(
- in_channels=features[1],
- out_channels=features[1],
- kernel_size=2,
- stride=2,
- padding=0,
- bias=True,
- dilation=1,
- groups=1,
- ),
- )
-
- pretrained.act_postprocess3 = nn.Sequential(
- readout_oper[2],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[2],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- )
-
- pretrained.act_postprocess4 = nn.Sequential(
- readout_oper[3],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[3],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- nn.Conv2d(
- in_channels=features[3],
- out_channels=features[3],
- kernel_size=3,
- stride=2,
- padding=1,
- ),
- )
-
- pretrained.model.start_index = start_index
- pretrained.model.patch_size = [16, 16]
-
- # We inject this function into the VisionTransformer instances so that
- # we can use it with interpolated position embeddings without modifying the library source.
- pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model)
- pretrained.model._resize_pos_embed = types.MethodType(
- _resize_pos_embed, pretrained.model
- )
-
- return pretrained
-
-
-def _make_pretrained_vitl16_384(pretrained, use_readout="ignore", hooks=None):
- model = timm.create_model("vit_large_patch16_384", pretrained=pretrained)
-
-    hooks = [5, 11, 17, 23] if hooks is None else hooks
- return _make_vit_b16_backbone(
- model,
- features=[256, 512, 1024, 1024],
- hooks=hooks,
- vit_features=1024,
- use_readout=use_readout,
- )
-
-
-def _make_pretrained_vitb16_384(pretrained, use_readout="ignore", hooks=None):
- model = timm.create_model("vit_base_patch16_384", pretrained=pretrained)
-
-    hooks = [2, 5, 8, 11] if hooks is None else hooks
- return _make_vit_b16_backbone(
- model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout
- )
-
-
-def _make_pretrained_deitb16_384(pretrained, use_readout="ignore", hooks=None):
- model = timm.create_model("vit_deit_base_patch16_384", pretrained=pretrained)
-
-    hooks = [2, 5, 8, 11] if hooks is None else hooks
- return _make_vit_b16_backbone(
- model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout
- )
-
-
-def _make_pretrained_deitb16_distil_384(pretrained, use_readout="ignore", hooks=None):
- model = timm.create_model(
- "vit_deit_base_distilled_patch16_384", pretrained=pretrained
- )
-
-    hooks = [2, 5, 8, 11] if hooks is None else hooks
- return _make_vit_b16_backbone(
- model,
- features=[96, 192, 384, 768],
- hooks=hooks,
- use_readout=use_readout,
- start_index=2,
- )
-
-
-def _make_vit_b_rn50_backbone(
- model,
- features=[256, 512, 768, 768],
- size=[384, 384],
- hooks=[0, 1, 8, 11],
- vit_features=768,
- use_vit_only=False,
- use_readout="ignore",
- start_index=1,
-):
- pretrained = nn.Module()
-
- pretrained.model = model
-
-    if use_vit_only:
- pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1"))
- pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2"))
- else:
- pretrained.model.patch_embed.backbone.stages[0].register_forward_hook(
- get_activation("1")
- )
- pretrained.model.patch_embed.backbone.stages[1].register_forward_hook(
- get_activation("2")
- )
-
- pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3"))
- pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4"))
-
- pretrained.activations = activations
-
- readout_oper = get_readout_oper(vit_features, features, use_readout, start_index)
-
-    if use_vit_only:
- pretrained.act_postprocess1 = nn.Sequential(
- readout_oper[0],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[0],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- nn.ConvTranspose2d(
- in_channels=features[0],
- out_channels=features[0],
- kernel_size=4,
- stride=4,
- padding=0,
- bias=True,
- dilation=1,
- groups=1,
- ),
- )
-
- pretrained.act_postprocess2 = nn.Sequential(
- readout_oper[1],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[1],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- nn.ConvTranspose2d(
- in_channels=features[1],
- out_channels=features[1],
- kernel_size=2,
- stride=2,
- padding=0,
- bias=True,
- dilation=1,
- groups=1,
- ),
- )
- else:
- pretrained.act_postprocess1 = nn.Sequential(
- nn.Identity(), nn.Identity(), nn.Identity()
- )
- pretrained.act_postprocess2 = nn.Sequential(
- nn.Identity(), nn.Identity(), nn.Identity()
- )
-
- pretrained.act_postprocess3 = nn.Sequential(
- readout_oper[2],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[2],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- )
-
- pretrained.act_postprocess4 = nn.Sequential(
- readout_oper[3],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[3],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- nn.Conv2d(
- in_channels=features[3],
- out_channels=features[3],
- kernel_size=3,
- stride=2,
- padding=1,
- ),
- )
-
- pretrained.model.start_index = start_index
- pretrained.model.patch_size = [16, 16]
-
- # We inject this function into the VisionTransformer instances so that
- # we can use it with interpolated position embeddings without modifying the library source.
- pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model)
-
- # We inject this function into the VisionTransformer instances so that
- # we can use it with interpolated position embeddings without modifying the library source.
- pretrained.model._resize_pos_embed = types.MethodType(
- _resize_pos_embed, pretrained.model
- )
-
- return pretrained
-
-
-def _make_pretrained_vitb_rn50_384(
- pretrained, use_readout="ignore", hooks=None, use_vit_only=False
-):
- model = timm.create_model("vit_base_resnet50_384", pretrained=pretrained)
-
-    hooks = [0, 1, 8, 11] if hooks is None else hooks
- return _make_vit_b_rn50_backbone(
- model,
- features=[256, 512, 768, 768],
- size=[384, 384],
- hooks=hooks,
- use_vit_only=use_vit_only,
- use_readout=use_readout,
- )
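The _resize_pos_embed method injected above is what lets these backbones run at resolutions other than the 384x384 they were trained at: it bilinearly resamples the grid part of the position embedding while keeping the class (and distillation) tokens untouched. A self-contained sketch of the same interpolation follows; the sizes are illustrative and it is not tied to any particular timm checkpoint.

import math
import torch
import torch.nn.functional as F

# Standalone version of the position-embedding resize performed by _resize_pos_embed above.
# posemb: [1, start_index + gs_old*gs_old, C] with one class token (start_index=1).
def resize_pos_embed(posemb: torch.Tensor, gs_h: int, gs_w: int, start_index: int = 1) -> torch.Tensor:
    posemb_tok, posemb_grid = posemb[:, :start_index], posemb[0, start_index:]
    gs_old = int(math.sqrt(len(posemb_grid)))                       # original grid is square
    grid = posemb_grid.reshape(1, gs_old, gs_old, -1).permute(0, 3, 1, 2)
    grid = F.interpolate(grid, size=(gs_h, gs_w), mode="bilinear")  # resample to the new grid
    grid = grid.permute(0, 2, 3, 1).reshape(1, gs_h * gs_w, -1)
    return torch.cat([posemb_tok, grid], dim=1)

posemb = torch.randn(1, 1 + 24 * 24, 768)        # e.g. 384/16 = 24 patches per side
resized = resize_pos_embed(posemb, 32, 20)       # e.g. a 512x320 input at patch size 16
print(resized.shape)                             # torch.Size([1, 641, 768])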
diff --git a/spaces/vumichien/canvas_controlnet/annotator/openpose/body.py b/spaces/vumichien/canvas_controlnet/annotator/openpose/body.py
deleted file mode 100644
index 7c3cf7a388b4ac81004524e64125e383bdd455bd..0000000000000000000000000000000000000000
--- a/spaces/vumichien/canvas_controlnet/annotator/openpose/body.py
+++ /dev/null
@@ -1,219 +0,0 @@
-import cv2
-import numpy as np
-import math
-import time
-from scipy.ndimage.filters import gaussian_filter
-import matplotlib.pyplot as plt
-import matplotlib
-import torch
-from torchvision import transforms
-
-from . import util
-from .model import bodypose_model
-
-class Body(object):
- def __init__(self, model_path):
- self.model = bodypose_model()
- if torch.cuda.is_available():
- self.model = self.model.cuda()
- print('cuda')
- model_dict = util.transfer(self.model, torch.load(model_path))
- self.model.load_state_dict(model_dict)
- self.model.eval()
-
- def __call__(self, oriImg):
- # scale_search = [0.5, 1.0, 1.5, 2.0]
- scale_search = [0.5]
- boxsize = 368
- stride = 8
- padValue = 128
- thre1 = 0.1
- thre2 = 0.05
- multiplier = [x * boxsize / oriImg.shape[0] for x in scale_search]
- heatmap_avg = np.zeros((oriImg.shape[0], oriImg.shape[1], 19))
- paf_avg = np.zeros((oriImg.shape[0], oriImg.shape[1], 38))
-
- for m in range(len(multiplier)):
- scale = multiplier[m]
- imageToTest = cv2.resize(oriImg, (0, 0), fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
- imageToTest_padded, pad = util.padRightDownCorner(imageToTest, stride, padValue)
- im = np.transpose(np.float32(imageToTest_padded[:, :, :, np.newaxis]), (3, 2, 0, 1)) / 256 - 0.5
- im = np.ascontiguousarray(im)
-
- data = torch.from_numpy(im).float()
- if torch.cuda.is_available():
- data = data.cuda()
- # data = data.permute([2, 0, 1]).unsqueeze(0).float()
- with torch.no_grad():
- Mconv7_stage6_L1, Mconv7_stage6_L2 = self.model(data)
- Mconv7_stage6_L1 = Mconv7_stage6_L1.cpu().numpy()
- Mconv7_stage6_L2 = Mconv7_stage6_L2.cpu().numpy()
-
- # extract outputs, resize, and remove padding
- # heatmap = np.transpose(np.squeeze(net.blobs[output_blobs.keys()[1]].data), (1, 2, 0)) # output 1 is heatmaps
- heatmap = np.transpose(np.squeeze(Mconv7_stage6_L2), (1, 2, 0)) # output 1 is heatmaps
- heatmap = cv2.resize(heatmap, (0, 0), fx=stride, fy=stride, interpolation=cv2.INTER_CUBIC)
- heatmap = heatmap[:imageToTest_padded.shape[0] - pad[2], :imageToTest_padded.shape[1] - pad[3], :]
- heatmap = cv2.resize(heatmap, (oriImg.shape[1], oriImg.shape[0]), interpolation=cv2.INTER_CUBIC)
-
- # paf = np.transpose(np.squeeze(net.blobs[output_blobs.keys()[0]].data), (1, 2, 0)) # output 0 is PAFs
- paf = np.transpose(np.squeeze(Mconv7_stage6_L1), (1, 2, 0)) # output 0 is PAFs
- paf = cv2.resize(paf, (0, 0), fx=stride, fy=stride, interpolation=cv2.INTER_CUBIC)
- paf = paf[:imageToTest_padded.shape[0] - pad[2], :imageToTest_padded.shape[1] - pad[3], :]
- paf = cv2.resize(paf, (oriImg.shape[1], oriImg.shape[0]), interpolation=cv2.INTER_CUBIC)
-
-            # accumulate the per-scale maps so the result is an average over scales
-            heatmap_avg += heatmap / len(multiplier)
-            paf_avg += paf / len(multiplier)
-
- all_peaks = []
- peak_counter = 0
-
- for part in range(18):
- map_ori = heatmap_avg[:, :, part]
- one_heatmap = gaussian_filter(map_ori, sigma=3)
-
- map_left = np.zeros(one_heatmap.shape)
- map_left[1:, :] = one_heatmap[:-1, :]
- map_right = np.zeros(one_heatmap.shape)
- map_right[:-1, :] = one_heatmap[1:, :]
- map_up = np.zeros(one_heatmap.shape)
- map_up[:, 1:] = one_heatmap[:, :-1]
- map_down = np.zeros(one_heatmap.shape)
- map_down[:, :-1] = one_heatmap[:, 1:]
-
- peaks_binary = np.logical_and.reduce(
- (one_heatmap >= map_left, one_heatmap >= map_right, one_heatmap >= map_up, one_heatmap >= map_down, one_heatmap > thre1))
- peaks = list(zip(np.nonzero(peaks_binary)[1], np.nonzero(peaks_binary)[0])) # note reverse
- peaks_with_score = [x + (map_ori[x[1], x[0]],) for x in peaks]
- peak_id = range(peak_counter, peak_counter + len(peaks))
- peaks_with_score_and_id = [peaks_with_score[i] + (peak_id[i],) for i in range(len(peak_id))]
-
- all_peaks.append(peaks_with_score_and_id)
- peak_counter += len(peaks)
-
- # find connection in the specified sequence, center 29 is in the position 15
- limbSeq = [[2, 3], [2, 6], [3, 4], [4, 5], [6, 7], [7, 8], [2, 9], [9, 10], \
- [10, 11], [2, 12], [12, 13], [13, 14], [2, 1], [1, 15], [15, 17], \
- [1, 16], [16, 18], [3, 17], [6, 18]]
-        # the middle joints heatmap correspondence
- mapIdx = [[31, 32], [39, 40], [33, 34], [35, 36], [41, 42], [43, 44], [19, 20], [21, 22], \
- [23, 24], [25, 26], [27, 28], [29, 30], [47, 48], [49, 50], [53, 54], [51, 52], \
- [55, 56], [37, 38], [45, 46]]
-
- connection_all = []
- special_k = []
- mid_num = 10
-
- for k in range(len(mapIdx)):
- score_mid = paf_avg[:, :, [x - 19 for x in mapIdx[k]]]
- candA = all_peaks[limbSeq[k][0] - 1]
- candB = all_peaks[limbSeq[k][1] - 1]
- nA = len(candA)
- nB = len(candB)
- indexA, indexB = limbSeq[k]
- if (nA != 0 and nB != 0):
- connection_candidate = []
- for i in range(nA):
- for j in range(nB):
- vec = np.subtract(candB[j][:2], candA[i][:2])
- norm = math.sqrt(vec[0] * vec[0] + vec[1] * vec[1])
- norm = max(0.001, norm)
- vec = np.divide(vec, norm)
-
- startend = list(zip(np.linspace(candA[i][0], candB[j][0], num=mid_num), \
- np.linspace(candA[i][1], candB[j][1], num=mid_num)))
-
- vec_x = np.array([score_mid[int(round(startend[I][1])), int(round(startend[I][0])), 0] \
- for I in range(len(startend))])
- vec_y = np.array([score_mid[int(round(startend[I][1])), int(round(startend[I][0])), 1] \
- for I in range(len(startend))])
-
- score_midpts = np.multiply(vec_x, vec[0]) + np.multiply(vec_y, vec[1])
- score_with_dist_prior = sum(score_midpts) / len(score_midpts) + min(
- 0.5 * oriImg.shape[0] / norm - 1, 0)
- criterion1 = len(np.nonzero(score_midpts > thre2)[0]) > 0.8 * len(score_midpts)
- criterion2 = score_with_dist_prior > 0
- if criterion1 and criterion2:
- connection_candidate.append(
- [i, j, score_with_dist_prior, score_with_dist_prior + candA[i][2] + candB[j][2]])
-
- connection_candidate = sorted(connection_candidate, key=lambda x: x[2], reverse=True)
- connection = np.zeros((0, 5))
- for c in range(len(connection_candidate)):
- i, j, s = connection_candidate[c][0:3]
- if (i not in connection[:, 3] and j not in connection[:, 4]):
- connection = np.vstack([connection, [candA[i][3], candB[j][3], s, i, j]])
- if (len(connection) >= min(nA, nB)):
- break
-
- connection_all.append(connection)
- else:
- special_k.append(k)
- connection_all.append([])
-
- # last number in each row is the total parts number of that person
- # the second last number in each row is the score of the overall configuration
- subset = -1 * np.ones((0, 20))
- candidate = np.array([item for sublist in all_peaks for item in sublist])
-
- for k in range(len(mapIdx)):
- if k not in special_k:
- partAs = connection_all[k][:, 0]
- partBs = connection_all[k][:, 1]
- indexA, indexB = np.array(limbSeq[k]) - 1
-
- for i in range(len(connection_all[k])): # = 1:size(temp,1)
- found = 0
- subset_idx = [-1, -1]
- for j in range(len(subset)): # 1:size(subset,1):
- if subset[j][indexA] == partAs[i] or subset[j][indexB] == partBs[i]:
- subset_idx[found] = j
- found += 1
-
- if found == 1:
- j = subset_idx[0]
- if subset[j][indexB] != partBs[i]:
- subset[j][indexB] = partBs[i]
- subset[j][-1] += 1
- subset[j][-2] += candidate[partBs[i].astype(int), 2] + connection_all[k][i][2]
- elif found == 2: # if found 2 and disjoint, merge them
- j1, j2 = subset_idx
- membership = ((subset[j1] >= 0).astype(int) + (subset[j2] >= 0).astype(int))[:-2]
- if len(np.nonzero(membership == 2)[0]) == 0: # merge
- subset[j1][:-2] += (subset[j2][:-2] + 1)
- subset[j1][-2:] += subset[j2][-2:]
- subset[j1][-2] += connection_all[k][i][2]
- subset = np.delete(subset, j2, 0)
- else: # as like found == 1
- subset[j1][indexB] = partBs[i]
- subset[j1][-1] += 1
- subset[j1][-2] += candidate[partBs[i].astype(int), 2] + connection_all[k][i][2]
-
- # if find no partA in the subset, create a new subset
- elif not found and k < 17:
- row = -1 * np.ones(20)
- row[indexA] = partAs[i]
- row[indexB] = partBs[i]
- row[-1] = 2
- row[-2] = sum(candidate[connection_all[k][i, :2].astype(int), 2]) + connection_all[k][i][2]
- subset = np.vstack([subset, row])
- # delete some rows of subset which has few parts occur
- deleteIdx = []
- for i in range(len(subset)):
- if subset[i][-1] < 4 or subset[i][-2] / subset[i][-1] < 0.4:
- deleteIdx.append(i)
- subset = np.delete(subset, deleteIdx, axis=0)
-
- # subset: n*20 array, 0-17 is the index in candidate, 18 is the total score, 19 is the total parts
- # candidate: x, y, score, id
- return candidate, subset
-
-if __name__ == "__main__":
- body_estimation = Body('../model/body_pose_model.pth')
-
- test_image = '../images/ski.jpg'
- oriImg = cv2.imread(test_image) # B,G,R order
- candidate, subset = body_estimation(oriImg)
- canvas = util.draw_bodypose(oriImg, candidate, subset)
- plt.imshow(canvas[:, :, [2, 1, 0]])
- plt.show()
diff --git a/spaces/warp-ai/Wuerstchen/app.py b/spaces/warp-ai/Wuerstchen/app.py
deleted file mode 100644
index 690eb8277f53f1f364848c96e37445c062a5a76f..0000000000000000000000000000000000000000
--- a/spaces/warp-ai/Wuerstchen/app.py
+++ /dev/null
@@ -1,294 +0,0 @@
-import os
-import random
-import gradio as gr
-import numpy as np
-import PIL.Image
-import torch
-from typing import List
-from diffusers.utils import numpy_to_pil
-from diffusers import WuerstchenDecoderPipeline, WuerstchenPriorPipeline
-from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS
-from previewer.modules import Previewer
-
-import user_history
-
-os.environ['TOKENIZERS_PARALLELISM'] = 'false'
-
-DESCRIPTION = "# Würstchen"
-DESCRIPTION += "\nWürstchen is a new fast and efficient high resolution text-to-image architecture and model
"
-if not torch.cuda.is_available():
- DESCRIPTION += "\nRunning on CPU 🥶
"
-
-MAX_SEED = np.iinfo(np.int32).max
-CACHE_EXAMPLES = torch.cuda.is_available() and os.getenv("CACHE_EXAMPLES") == "1"
-MAX_IMAGE_SIZE = int(os.getenv("MAX_IMAGE_SIZE", "1536"))
-USE_TORCH_COMPILE = False
-ENABLE_CPU_OFFLOAD = os.getenv("ENABLE_CPU_OFFLOAD") == "1"
-PREVIEW_IMAGES = True
-
-dtype = torch.float16
-device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
-if torch.cuda.is_available():
- prior_pipeline = WuerstchenPriorPipeline.from_pretrained("warp-ai/wuerstchen-prior", torch_dtype=dtype)
- decoder_pipeline = WuerstchenDecoderPipeline.from_pretrained("warp-ai/wuerstchen", torch_dtype=dtype)
- if ENABLE_CPU_OFFLOAD:
- prior_pipeline.enable_model_cpu_offload()
- decoder_pipeline.enable_model_cpu_offload()
- else:
- prior_pipeline.to(device)
- decoder_pipeline.to(device)
-
- if USE_TORCH_COMPILE:
- prior_pipeline.prior = torch.compile(prior_pipeline.prior, mode="reduce-overhead", fullgraph=True)
- decoder_pipeline.decoder = torch.compile(decoder_pipeline.decoder, mode="reduce-overhead", fullgraph=True)
-
- if PREVIEW_IMAGES:
- previewer = Previewer()
- previewer.load_state_dict(torch.load("previewer/text2img_wurstchen_b_v1_previewer_100k.pt")["state_dict"])
- previewer.eval().requires_grad_(False).to(device).to(dtype)
-
- def callback_prior(i, t, latents):
- output = previewer(latents)
- output = numpy_to_pil(output.clamp(0, 1).permute(0, 2, 3, 1).cpu().numpy())
- return output
-
- else:
- previewer = None
- callback_prior = None
-else:
- prior_pipeline = None
- decoder_pipeline = None
-
-
-def randomize_seed_fn(seed: int, randomize_seed: bool) -> int:
- if randomize_seed:
- seed = random.randint(0, MAX_SEED)
- return seed
-
-
-def generate(
- prompt: str,
- negative_prompt: str = "",
- seed: int = 0,
- width: int = 1024,
- height: int = 1024,
- prior_num_inference_steps: int = 60,
- # prior_timesteps: List[float] = None,
- prior_guidance_scale: float = 4.0,
- decoder_num_inference_steps: int = 12,
- # decoder_timesteps: List[float] = None,
- decoder_guidance_scale: float = 0.0,
- num_images_per_prompt: int = 2,
- profile: gr.OAuthProfile | None = None,
-) -> PIL.Image.Image:
- generator = torch.Generator().manual_seed(seed)
-
- prior_output = prior_pipeline(
- prompt=prompt,
- height=height,
- width=width,
- timesteps=DEFAULT_STAGE_C_TIMESTEPS,
- negative_prompt=negative_prompt,
- guidance_scale=prior_guidance_scale,
- num_images_per_prompt=num_images_per_prompt,
- generator=generator,
- callback=callback_prior,
- )
-
- if PREVIEW_IMAGES:
- for _ in range(len(DEFAULT_STAGE_C_TIMESTEPS)):
- r = next(prior_output)
- if isinstance(r, list):
- yield r
- prior_output = r
-
- decoder_output = decoder_pipeline(
- image_embeddings=prior_output.image_embeddings,
- prompt=prompt,
- num_inference_steps=decoder_num_inference_steps,
- # timesteps=decoder_timesteps,
- guidance_scale=decoder_guidance_scale,
- negative_prompt=negative_prompt,
- generator=generator,
- output_type="pil",
- ).images
-
- # Save images
- for image in decoder_output:
- user_history.save_image(
- profile=profile,
- image=image,
- label=prompt,
- metadata={
- "negative_prompt": negative_prompt,
- "seed": seed,
- "width": width,
- "height": height,
- "prior_guidance_scale": prior_guidance_scale,
- "decoder_num_inference_steps": decoder_num_inference_steps,
- "decoder_guidance_scale": decoder_guidance_scale,
- "num_images_per_prompt": num_images_per_prompt,
- },
- )
-
- yield decoder_output
-
-
-examples = [
- "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k",
- "An astronaut riding a green horse",
-]
-
-with gr.Blocks() as demo:
- gr.Markdown(DESCRIPTION)
- gr.DuplicateButton(
- value="Duplicate Space for private use",
- elem_id="duplicate-button",
- visible=os.getenv("SHOW_DUPLICATE_BUTTON") == "1",
- )
- with gr.Group():
- with gr.Row():
- prompt = gr.Text(
- label="Prompt",
- show_label=False,
- max_lines=1,
- placeholder="Enter your prompt",
- container=False,
- )
- run_button = gr.Button("Run", scale=0)
- result = gr.Gallery(label="Result", show_label=False)
- with gr.Accordion("Advanced options", open=False):
- negative_prompt = gr.Text(
- label="Negative prompt",
- max_lines=1,
- placeholder="Enter a Negative Prompt",
- )
-
- seed = gr.Slider(
- label="Seed",
- minimum=0,
- maximum=MAX_SEED,
- step=1,
- value=0,
- )
- randomize_seed = gr.Checkbox(label="Randomize seed", value=True)
- with gr.Row():
- width = gr.Slider(
- label="Width",
- minimum=1024,
- maximum=MAX_IMAGE_SIZE,
- step=512,
- value=1024,
- )
- height = gr.Slider(
- label="Height",
- minimum=1024,
- maximum=MAX_IMAGE_SIZE,
- step=512,
- value=1024,
- )
- num_images_per_prompt = gr.Slider(
- label="Number of Images",
- minimum=1,
- maximum=2,
- step=1,
- value=2,
- )
- with gr.Row():
- prior_guidance_scale = gr.Slider(
- label="Prior Guidance Scale",
- minimum=0,
- maximum=20,
- step=0.1,
- value=4.0,
- )
- prior_num_inference_steps = gr.Slider(
- label="Prior Inference Steps",
- minimum=30,
- maximum=30,
- step=1,
- value=30,
- )
-
- decoder_guidance_scale = gr.Slider(
- label="Decoder Guidance Scale",
- minimum=0,
- maximum=0,
- step=0.1,
- value=0.0,
- )
- decoder_num_inference_steps = gr.Slider(
- label="Decoder Inference Steps",
- minimum=4,
- maximum=12,
- step=1,
- value=12,
- )
-
- gr.Examples(
- examples=examples,
- inputs=prompt,
- outputs=result,
- fn=generate,
- cache_examples=CACHE_EXAMPLES,
- )
-
- inputs = [
- prompt,
- negative_prompt,
- seed,
- width,
- height,
- prior_num_inference_steps,
- # prior_timesteps,
- prior_guidance_scale,
- decoder_num_inference_steps,
- # decoder_timesteps,
- decoder_guidance_scale,
- num_images_per_prompt,
- ]
- prompt.submit(
- fn=randomize_seed_fn,
- inputs=[seed, randomize_seed],
- outputs=seed,
- queue=False,
- api_name=False,
- ).then(
- fn=generate,
- inputs=inputs,
- outputs=result,
- api_name="run",
- )
- negative_prompt.submit(
- fn=randomize_seed_fn,
- inputs=[seed, randomize_seed],
- outputs=seed,
- queue=False,
- api_name=False,
- ).then(
- fn=generate,
- inputs=inputs,
- outputs=result,
- api_name=False,
- )
- run_button.click(
- fn=randomize_seed_fn,
- inputs=[seed, randomize_seed],
- outputs=seed,
- queue=False,
- api_name=False,
- ).then(
- fn=generate,
- inputs=inputs,
- outputs=result,
- api_name=False,
- )
-
-with gr.Blocks(css="style.css") as demo_with_history:
- with gr.Tab("App"):
- demo.render()
- with gr.Tab("Past generations"):
- user_history.render()
-
-if __name__ == "__main__":
- demo_with_history.queue(max_size=20).launch()
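Stripped of the Gradio UI, previewer callback, and history logging, the two-stage call pattern in generate() above is short. The sketch below repeats it with the same pipeline classes and arguments used by the app; it assumes a CUDA device and sticks to the default values shown in the sliders.

import torch
from diffusers import WuerstchenDecoderPipeline, WuerstchenPriorPipeline
from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS

# Minimal two-stage run mirroring generate() above (assumes a CUDA device is available).
device, dtype = "cuda:0", torch.float16
prior = WuerstchenPriorPipeline.from_pretrained("warp-ai/wuerstchen-prior", torch_dtype=dtype).to(device)
decoder = WuerstchenDecoderPipeline.from_pretrained("warp-ai/wuerstchen", torch_dtype=dtype).to(device)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
generator = torch.Generator().manual_seed(0)

prior_output = prior(
    prompt=prompt,
    height=1024,
    width=1024,
    timesteps=DEFAULT_STAGE_C_TIMESTEPS,
    guidance_scale=4.0,            # prior guidance, default slider value
    num_images_per_prompt=1,
    generator=generator,
)
images = decoder(
    image_embeddings=prior_output.image_embeddings,
    prompt=prompt,
    num_inference_steps=12,        # decoder steps, default slider value
    guidance_scale=0.0,
    generator=generator,
    output_type="pil",
).images
images[0].save("wuerstchen.png")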
diff --git a/spaces/watanabe3tipapa/web-sge-agent/tools/search_ddg.py b/spaces/watanabe3tipapa/web-sge-agent/tools/search_ddg.py
deleted file mode 100644
index 342aaa3628faaeb8e7a66b1cd6a32503ff27777b..0000000000000000000000000000000000000000
--- a/spaces/watanabe3tipapa/web-sge-agent/tools/search_ddg.py
+++ /dev/null
@@ -1,55 +0,0 @@
-from itertools import islice
-from langchain.agents import Tool
-from duckduckgo_search import DDGS
-
-
-def search_ddg(query, max_result_num=5):
- """
- Tool for performing DuckDuckGo searches
- - Please enter the keyword you want to search for and use it.
- - The title, snippet (description), and URL of each page in the search results will be returned.
-
- Sample Response of DuckDuckGo python library
- --------------------------------------------
- [
- {
- 'title': '日程・結果|Fifa 女子ワールドカップ オーストラリア&ニュージーランド 2023|なでしこジャパン|日本代表|Jfa|日本サッカー協会',
- 'href': 'https://www.jfa.jp/nadeshikojapan/womensworldcup2023/schedule_result/',
- 'body': '日程・結果|FIFA 女子ワールドカップ オーストラリア&ニュージーランド 2023|なでしこジャパン|日本代表|JFA|日本サッカー協会. FIFA 女子ワールドカップ. オーストラリア&ニュージーランド 2023.'
- }, ...
- ]
-
- Returns
- -------
- List[Dict[str, str]]:
- - title
- - snippet
- - url
- """
- res = DDGS().text(query, region='wt-wt', safesearch='off', backend="lite")
- return [
- {
- "title": r.get('title', ""),
- "snippet": r.get('body', ""),
- "url": r.get('href', "")
- }
- for r in islice(res, max_result_num)
- ]
-
-
-def get_search_ddg_tool():
- search_tool_description = """
- Tool for performing DuckDuckGo searches.
- Please enter the keyword you want to search for and use it.
- The title, snippet (description) and URL of each page in the search results will be returned.
- The information available through this tool is QUITE CONDENSED and sometimes outdated.
-
- If you can't find the information you're looking for, please make sure to use the `WEB Page Fetcher` tool to read the content of each page.
- Feel free to use the most appropriate language for the context. (not necessary same as the user's language)
- For example, for programming-related questions, it's best to search in English.
- """
- return Tool(
- name='search_ddg',
- func=search_ddg,
- description=search_tool_description
- )
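As a quick check of the return shape documented above, the helper can also be called directly before wrapping it in a LangChain Tool. The snippet below assumes the duckduckgo_search package is installed, network access is available, and the module is importable as tools.search_ddg (matching the file path in this space).

# Direct use of the helper defined above; get_search_ddg_tool() wraps the same function.
from tools.search_ddg import search_ddg, get_search_ddg_tool

results = search_ddg("FIFA Women's World Cup 2023 schedule", max_result_num=3)
for r in results:
    print(r["title"])
    print(r["snippet"][:80], r["url"])

tool = get_search_ddg_tool()           # LangChain Tool named 'search_ddg'
print(tool.run("LangChain agents"))    # Tool.run forwards the query string to search_ddg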
diff --git a/spaces/wffcyrus/MetaGPT-v1/metagpt/tools/metagpt_text_to_image.py b/spaces/wffcyrus/MetaGPT-v1/metagpt/tools/metagpt_text_to_image.py
deleted file mode 100644
index c5a0b872ff3e42036c8b240270cda75252ce2cf4..0000000000000000000000000000000000000000
--- a/spaces/wffcyrus/MetaGPT-v1/metagpt/tools/metagpt_text_to_image.py
+++ /dev/null
@@ -1,117 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-"""
-@Time : 2023/8/18
-@Author : mashenquan
-@File : metagpt_text_to_image.py
-@Desc : MetaGPT Text-to-Image OAS3 api, which provides text-to-image functionality.
-"""
-import asyncio
-import base64
-import os
-import sys
-from pathlib import Path
-from typing import List, Dict
-
-import aiohttp
-import requests
-from pydantic import BaseModel
-
-from metagpt.config import CONFIG, Config
-
-sys.path.append(str(Path(__file__).resolve().parent.parent.parent)) # fix-bug: No module named 'metagpt'
-from metagpt.logs import logger
-
-
-class MetaGPTText2Image:
- def __init__(self, model_url):
- """
- :param model_url: Model reset api url
- """
- self.model_url = model_url if model_url else CONFIG.METAGPT_TEXT_TO_IMAGE_MODEL
-
- async def text_2_image(self, text, size_type="512x512"):
- """Text to image
-
- :param text: The text used for image conversion.
- :param size_type: One of ['512x512', '512x768']
- :return: The image data is returned in Base64 encoding.
- """
-
- headers = {
- "Content-Type": "application/json"
- }
- dims = size_type.split("x")
- data = {
- "prompt": text,
- "negative_prompt": "(easynegative:0.8),black, dark,Low resolution",
- "override_settings": {"sd_model_checkpoint": "galaxytimemachinesGTM_photoV20"},
- "seed": -1,
- "batch_size": 1,
- "n_iter": 1,
- "steps": 20,
- "cfg_scale": 11,
- "width": int(dims[0]),
- "height": int(dims[1]), # 768,
- "restore_faces": False,
- "tiling": False,
- "do_not_save_samples": False,
- "do_not_save_grid": False,
- "enable_hr": False,
- "hr_scale": 2,
- "hr_upscaler": "Latent",
- "hr_second_pass_steps": 0,
- "hr_resize_x": 0,
- "hr_resize_y": 0,
- "hr_upscale_to_x": 0,
- "hr_upscale_to_y": 0,
- "truncate_x": 0,
- "truncate_y": 0,
- "applied_old_hires_behavior_to": None,
- "eta": None,
- "sampler_index": "DPM++ SDE Karras",
- "alwayson_scripts": {},
- }
-
- class ImageResult(BaseModel):
- images: List
- parameters: Dict
-
- try:
- async with aiohttp.ClientSession() as session:
- async with session.post(self.model_url, headers=headers, json=data) as response:
- result = ImageResult(**await response.json())
- if len(result.images) == 0:
- return ""
- return result.images[0]
-        except (aiohttp.ClientError, requests.exceptions.RequestException) as e:  # the call is made with aiohttp, so catch its client errors too
- logger.error(f"An error occurred:{e}")
- return ""
-
-
-# Export
-async def oas3_metagpt_text_to_image(text, size_type: str = "512x512", model_url=""):
- """Text to image
-
- :param text: The text used for image conversion.
- :param model_url: Model reset api
- :param size_type: One of ['512x512', '512x768']
- :return: The image data is returned in Base64 encoding.
- """
- if not text:
- return ""
- if not model_url:
- model_url = CONFIG.METAGPT_TEXT_TO_IMAGE_MODEL_URL
- return await MetaGPTText2Image(model_url).text_2_image(text, size_type=size_type)
-
-
-if __name__ == "__main__":
- Config()
- loop = asyncio.new_event_loop()
- task = loop.create_task(oas3_metagpt_text_to_image("Panda emoji"))
- v = loop.run_until_complete(task)
- print(v)
- data = base64.b64decode(v)
- with open("tmp.png", mode="wb") as writer:
- writer.write(data)
- print(v)
diff --git a/spaces/wxiaofei/vits-uma-genshin-honkai/modules.py b/spaces/wxiaofei/vits-uma-genshin-honkai/modules.py
deleted file mode 100644
index 56ea4145eddf19dd330a3a41ab0183efc1686d83..0000000000000000000000000000000000000000
--- a/spaces/wxiaofei/vits-uma-genshin-honkai/modules.py
+++ /dev/null
@@ -1,388 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- assert n_layers > 1, "Number of layers should be larger than 1."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(
- nn.ReLU(),
- nn.Dropout(p_dropout))
- for _ in range(n_layers-1):
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
- Dilated and Depth-Separable Convolution
- """
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size ** i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
- groups=channels, dilation=dilation, padding=padding
- ))
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = dilation_rate ** i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(
- x_in,
- g_l,
- n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:,self.hidden_channels:,:]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2])))
- ])
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
- padding=get_padding(kernel_size, 1)))
- ])
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList([
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]))),
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1])))
- ])
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels,1))
- self.logs = nn.Parameter(torch.zeros(channels,1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1,2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1,2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
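-# A minimal invertibility sketch (not part of the original file): because the coupling
-# layer transforms only half of the channels, running it forward and then with
-# reverse=True should recover the input. Shapes and hyperparameters below are
-# illustrative assumptions.
-#
-#   layer = ResidualCouplingLayer(channels=4, hidden_channels=8, kernel_size=5,
-#                                 dilation_rate=1, n_layers=2)
-#   x = torch.randn(1, 4, 10)
-#   x_mask = torch.ones(1, 1, 10)
-#   y, logdet = layer(x, x_mask)
-#   x_rec = layer(y, x_mask, reverse=True)
-#   assert torch.allclose(x, x_rec, atol=1e-5)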
-
-class ConvFlow(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
-
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails='linear',
- tail_bound=self.tail_bound
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1,2])
- if not reverse:
- return x, logdet
- else:
- return x
diff --git a/spaces/xiaolongbaox/gpt2.0/chatgpt - macOS.command b/spaces/xiaolongbaox/gpt2.0/chatgpt - macOS.command
deleted file mode 100644
index fa015edca9e6916f24394813ce8ba77d2072e296..0000000000000000000000000000000000000000
--- a/spaces/xiaolongbaox/gpt2.0/chatgpt - macOS.command
+++ /dev/null
@@ -1,7 +0,0 @@
-#!/bin/bash
-echo Opening ChuanhuChatGPT...
-cd "$(dirname "${BASH_SOURCE[0]}")"
-nohup python3 ChuanhuChatbot.py >/dev/null 2>&1 &
-sleep 5
-open http://127.0.0.1:7860
-echo "Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). To stop ChuanhuChatbot, run \"pkill -f 'ChuanhuChatbot'\" in a terminal."
\ No newline at end of file
diff --git a/spaces/xiayi/anime-remove-background/app.py b/spaces/xiayi/anime-remove-background/app.py
deleted file mode 100644
index 230a0d5f8a3da6ab18ecb8db1cd90016a489b96a..0000000000000000000000000000000000000000
--- a/spaces/xiayi/anime-remove-background/app.py
+++ /dev/null
@@ -1,52 +0,0 @@
-import gradio as gr
-import huggingface_hub
-import onnxruntime as rt
-import numpy as np
-import cv2
-
-
-def get_mask(img, s=1024):
- img = (img / 255).astype(np.float32)
- h, w = h0, w0 = img.shape[:-1]
- h, w = (s, int(s * w / h)) if h > w else (int(s * h / w), s)
- ph, pw = s - h, s - w
- img_input = np.zeros([s, s, 3], dtype=np.float32)
- img_input[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] = cv2.resize(img, (w, h))
- img_input = np.transpose(img_input, (2, 0, 1))
- img_input = img_input[np.newaxis, :]
- mask = rmbg_model.run(None, {'img': img_input})[0][0]
- mask = np.transpose(mask, (1, 2, 0))
- mask = mask[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w]
- mask = cv2.resize(mask, (w0, h0))[:, :, np.newaxis]
- return mask
-
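-# Hedged usage sketch (not in the original file): get_mask letterboxes the input into a
-# 1024x1024 square, runs the ONNX model (it assumes the global rmbg_model created in
-# __main__ already exists), then crops and resizes the mask back to the original size.
-# `img` is assumed to be a uint8 RGB array as provided by Gradio.
-#
-#   mask = get_mask(img)                                   # float mask in [0, 1], shape (H, W, 1)
-#   cutout = (mask * img + 255 * (1 - mask)).astype(np.uint8)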
-
-def rmbg_fn(img):
- mask = get_mask(img)
- img = (mask * img + 255 * (1 - mask)).astype(np.uint8)
- mask = (mask * 255).astype(np.uint8)
- img = np.concatenate([img, mask], axis=2, dtype=np.uint8)
- mask = mask.repeat(3, axis=2)
- return mask, img
-
-
-if __name__ == "__main__":
- providers = ['CUDAExecutionProvider', 'CPUExecutionProvider']
- model_path = huggingface_hub.hf_hub_download("skytnt/anime-seg", "isnetis.onnx")
- rmbg_model = rt.InferenceSession(model_path, providers=providers)
- app = gr.Blocks()
- with app:
- gr.Markdown("# Anime Remove Background\n\n"
- "\n\n"
- "demo for [https://github.com/SkyTNT/anime-segmentation/](https://github.com/SkyTNT/anime-segmentation/)")
- with gr.Row():
- with gr.Column():
- input_img = gr.Image(label="input image")
- examples_data = [[f"examples/{x:02d}.jpg"] for x in range(1, 4)]
- examples = gr.Dataset(components=[input_img], samples=examples_data)
- run_btn = gr.Button(variant="primary")
- output_mask = gr.Image(label="mask")
- output_img = gr.Image(label="result", image_mode="RGBA")
- examples.click(lambda x: x[0], [examples], [input_img])
- run_btn.click(rmbg_fn, [input_img], [output_mask, output_img])
- app.launch()
diff --git a/spaces/xinyu1205/recognize-anything/GroundingDINO/groundingdino/models/GroundingDINO/backbone/swin_transformer.py b/spaces/xinyu1205/recognize-anything/GroundingDINO/groundingdino/models/GroundingDINO/backbone/swin_transformer.py
deleted file mode 100644
index 1c66194deb5dd370e797e57e2712f44303e568cc..0000000000000000000000000000000000000000
--- a/spaces/xinyu1205/recognize-anything/GroundingDINO/groundingdino/models/GroundingDINO/backbone/swin_transformer.py
+++ /dev/null
@@ -1,802 +0,0 @@
-# ------------------------------------------------------------------------
-# Grounding DINO
-# url: https://github.com/IDEA-Research/GroundingDINO
-# Copyright (c) 2023 IDEA. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# ------------------------------------------------------------------------
-# DINO
-# Copyright (c) 2022 IDEA. All Rights Reserved.
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-# --------------------------------------------------------
-# modified from https://github.com/SwinTransformer/Swin-Transformer-Object-Detection/blob/master/mmdet/models/backbones/swin_transformer.py
-# --------------------------------------------------------
-
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.utils.checkpoint as checkpoint
-from timm.models.layers import DropPath, to_2tuple, trunc_normal_
-
-from groundingdino.util.misc import NestedTensor
-
-
-class Mlp(nn.Module):
- """Multilayer perceptron."""
-
- def __init__(
- self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.0
- ):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Linear(in_features, hidden_features)
- self.act = act_layer()
- self.fc2 = nn.Linear(hidden_features, out_features)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-
-def window_partition(x, window_size):
- """
- Args:
- x: (B, H, W, C)
- window_size (int): window size
- Returns:
- windows: (num_windows*B, window_size, window_size, C)
- """
- B, H, W, C = x.shape
- x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
- windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
- return windows
-
-
-def window_reverse(windows, window_size, H, W):
- """
- Args:
- windows: (num_windows*B, window_size, window_size, C)
- window_size (int): Window size
- H (int): Height of image
- W (int): Width of image
- Returns:
- x: (B, H, W, C)
- """
- B = int(windows.shape[0] / (H * W / window_size / window_size))
- x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
- x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
- return x
-
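-# A small round-trip check (illustrative, not from the original file): partitioning a
-# feature map into windows and reversing should reproduce it exactly, provided H and W
-# are divisible by the window size.
-#
-#   x = torch.randn(2, 14, 14, 96)               # (B, H, W, C)
-#   windows = window_partition(x, 7)             # (num_windows*B, 7, 7, 96) == (8, 7, 7, 96)
-#   x_back = window_reverse(windows, 7, 14, 14)  # (2, 14, 14, 96)
-#   assert torch.equal(x, x_back)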
-
-class WindowAttention(nn.Module):
- """Window based multi-head self attention (W-MSA) module with relative position bias.
- It supports both of shifted and non-shifted window.
- Args:
- dim (int): Number of input channels.
- window_size (tuple[int]): The height and width of the window.
- num_heads (int): Number of attention heads.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
- attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
- proj_drop (float, optional): Dropout ratio of output. Default: 0.0
- """
-
- def __init__(
- self,
- dim,
- window_size,
- num_heads,
- qkv_bias=True,
- qk_scale=None,
- attn_drop=0.0,
- proj_drop=0.0,
- ):
-
- super().__init__()
- self.dim = dim
- self.window_size = window_size # Wh, Ww
- self.num_heads = num_heads
- head_dim = dim // num_heads
- self.scale = qk_scale or head_dim**-0.5
-
- # define a parameter table of relative position bias
- self.relative_position_bias_table = nn.Parameter(
- torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)
- ) # 2*Wh-1 * 2*Ww-1, nH
-
- # get pair-wise relative position index for each token inside the window
- coords_h = torch.arange(self.window_size[0])
- coords_w = torch.arange(self.window_size[1])
- coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
- coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
- relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
- relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
- relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0
- relative_coords[:, :, 1] += self.window_size[1] - 1
- relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
- relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
- self.register_buffer("relative_position_index", relative_position_index)
-
- self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(dim, dim)
- self.proj_drop = nn.Dropout(proj_drop)
-
- trunc_normal_(self.relative_position_bias_table, std=0.02)
- self.softmax = nn.Softmax(dim=-1)
-
- def forward(self, x, mask=None):
- """Forward function.
- Args:
- x: input features with shape of (num_windows*B, N, C)
- mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
- """
- B_, N, C = x.shape
- qkv = (
- self.qkv(x)
- .reshape(B_, N, 3, self.num_heads, C // self.num_heads)
- .permute(2, 0, 3, 1, 4)
- )
- q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
-
- q = q * self.scale
- attn = q @ k.transpose(-2, -1)
-
- relative_position_bias = self.relative_position_bias_table[
- self.relative_position_index.view(-1)
- ].view(
- self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1
- ) # Wh*Ww,Wh*Ww,nH
- relative_position_bias = relative_position_bias.permute(
- 2, 0, 1
- ).contiguous() # nH, Wh*Ww, Wh*Ww
- attn = attn + relative_position_bias.unsqueeze(0)
-
- if mask is not None:
- nW = mask.shape[0]
- attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
- attn = attn.view(-1, self.num_heads, N, N)
- attn = self.softmax(attn)
- else:
- attn = self.softmax(attn)
-
- attn = self.attn_drop(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
-
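-# Hedged shape sketch (not part of the original file): the module maps flattened windows
-# to an output of the same shape; the numbers below are illustrative.
-#
-#   attn = WindowAttention(dim=96, window_size=(7, 7), num_heads=3)
-#   win = torch.randn(8, 49, 96)   # (num_windows*B, Wh*Ww, C)
-#   out = attn(win)                # (8, 49, 96)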
-
-class SwinTransformerBlock(nn.Module):
- """Swin Transformer Block.
- Args:
- dim (int): Number of input channels.
- num_heads (int): Number of attention heads.
- window_size (int): Window size.
- shift_size (int): Shift size for SW-MSA.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float, optional): Stochastic depth rate. Default: 0.0
- act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- """
-
- def __init__(
- self,
- dim,
- num_heads,
- window_size=7,
- shift_size=0,
- mlp_ratio=4.0,
- qkv_bias=True,
- qk_scale=None,
- drop=0.0,
- attn_drop=0.0,
- drop_path=0.0,
- act_layer=nn.GELU,
- norm_layer=nn.LayerNorm,
- ):
- super().__init__()
- self.dim = dim
- self.num_heads = num_heads
- self.window_size = window_size
- self.shift_size = shift_size
- self.mlp_ratio = mlp_ratio
- assert 0 <= self.shift_size < self.window_size, "shift_size must be in [0, window_size)"
-
- self.norm1 = norm_layer(dim)
- self.attn = WindowAttention(
- dim,
- window_size=to_2tuple(self.window_size),
- num_heads=num_heads,
- qkv_bias=qkv_bias,
- qk_scale=qk_scale,
- attn_drop=attn_drop,
- proj_drop=drop,
- )
-
- self.drop_path = DropPath(drop_path) if drop_path > 0.0 else nn.Identity()
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(
- in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop
- )
-
- self.H = None
- self.W = None
-
- def forward(self, x, mask_matrix):
- """Forward function.
- Args:
- x: Input feature, tensor size (B, H*W, C).
- H, W: Spatial resolution of the input feature.
- mask_matrix: Attention mask for cyclic shift.
- """
- B, L, C = x.shape
- H, W = self.H, self.W
- assert L == H * W, "input feature has wrong size"
-
- shortcut = x
- x = self.norm1(x)
- x = x.view(B, H, W, C)
-
- # pad feature maps to multiples of window size
- pad_l = pad_t = 0
- pad_r = (self.window_size - W % self.window_size) % self.window_size
- pad_b = (self.window_size - H % self.window_size) % self.window_size
- x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b))
- _, Hp, Wp, _ = x.shape
-
- # cyclic shift
- if self.shift_size > 0:
- shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
- attn_mask = mask_matrix
- else:
- shifted_x = x
- attn_mask = None
-
- # partition windows
- x_windows = window_partition(
- shifted_x, self.window_size
- ) # nW*B, window_size, window_size, C
- x_windows = x_windows.view(
- -1, self.window_size * self.window_size, C
- ) # nW*B, window_size*window_size, C
-
- # W-MSA/SW-MSA
- attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C
-
- # merge windows
- attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
- shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C
-
- # reverse cyclic shift
- if self.shift_size > 0:
- x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
- else:
- x = shifted_x
-
- if pad_r > 0 or pad_b > 0:
- x = x[:, :H, :W, :].contiguous()
-
- x = x.view(B, H * W, C)
-
- # FFN
- x = shortcut + self.drop_path(x)
- x = x + self.drop_path(self.mlp(self.norm2(x)))
-
- return x
-
-
-class PatchMerging(nn.Module):
- """Patch Merging Layer
- Args:
- dim (int): Number of input channels.
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- """
-
- def __init__(self, dim, norm_layer=nn.LayerNorm):
- super().__init__()
- self.dim = dim
- self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
- self.norm = norm_layer(4 * dim)
-
- def forward(self, x, H, W):
- """Forward function.
- Args:
- x: Input feature, tensor size (B, H*W, C).
- H, W: Spatial resolution of the input feature.
- """
- B, L, C = x.shape
- assert L == H * W, "input feature has wrong size"
-
- x = x.view(B, H, W, C)
-
- # padding
- pad_input = (H % 2 == 1) or (W % 2 == 1)
- if pad_input:
- x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2))
-
- x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C
- x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C
- x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C
- x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C
- x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C
- x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C
-
- x = self.norm(x)
- x = self.reduction(x)
-
- return x
-
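-# Hedged shape sketch (not in the original file): patch merging halves the spatial
-# resolution and doubles the channel dimension.
-#
-#   pm = PatchMerging(dim=96)
-#   x = torch.randn(2, 56 * 56, 96)   # (B, H*W, C)
-#   y = pm(x, 56, 56)                 # (2, 28 * 28, 192)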
-
-class BasicLayer(nn.Module):
- """A basic Swin Transformer layer for one stage.
- Args:
- dim (int): Number of feature channels
- depth (int): Depths of this stage.
- num_heads (int): Number of attention head.
- window_size (int): Local window size. Default: 7.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
- """
-
- def __init__(
- self,
- dim,
- depth,
- num_heads,
- window_size=7,
- mlp_ratio=4.0,
- qkv_bias=True,
- qk_scale=None,
- drop=0.0,
- attn_drop=0.0,
- drop_path=0.0,
- norm_layer=nn.LayerNorm,
- downsample=None,
- use_checkpoint=False,
- ):
- super().__init__()
- self.window_size = window_size
- self.shift_size = window_size // 2
- self.depth = depth
- self.use_checkpoint = use_checkpoint
-
- # build blocks
- self.blocks = nn.ModuleList(
- [
- SwinTransformerBlock(
- dim=dim,
- num_heads=num_heads,
- window_size=window_size,
- shift_size=0 if (i % 2 == 0) else window_size // 2,
- mlp_ratio=mlp_ratio,
- qkv_bias=qkv_bias,
- qk_scale=qk_scale,
- drop=drop,
- attn_drop=attn_drop,
- drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
- norm_layer=norm_layer,
- )
- for i in range(depth)
- ]
- )
-
- # patch merging layer
- if downsample is not None:
- self.downsample = downsample(dim=dim, norm_layer=norm_layer)
- else:
- self.downsample = None
-
- def forward(self, x, H, W):
- """Forward function.
- Args:
- x: Input feature, tensor size (B, H*W, C).
- H, W: Spatial resolution of the input feature.
- """
-
- # calculate attention mask for SW-MSA
- Hp = int(np.ceil(H / self.window_size)) * self.window_size
- Wp = int(np.ceil(W / self.window_size)) * self.window_size
- img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1
- h_slices = (
- slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None),
- )
- w_slices = (
- slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None),
- )
- cnt = 0
- for h in h_slices:
- for w in w_slices:
- img_mask[:, h, w, :] = cnt
- cnt += 1
-
- mask_windows = window_partition(
- img_mask, self.window_size
- ) # nW, window_size, window_size, 1
- mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
- attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
- attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(
- attn_mask == 0, float(0.0)
- )
-
- for blk in self.blocks:
- blk.H, blk.W = H, W
- if self.use_checkpoint:
- x = checkpoint.checkpoint(blk, x, attn_mask)
- else:
- x = blk(x, attn_mask)
- if self.downsample is not None:
- x_down = self.downsample(x, H, W)
- Wh, Ww = (H + 1) // 2, (W + 1) // 2
- return x, H, W, x_down, Wh, Ww
- else:
- return x, H, W, x, H, W
-
-
-class PatchEmbed(nn.Module):
- """Image to Patch Embedding
- Args:
- patch_size (int): Patch token size. Default: 4.
- in_chans (int): Number of input image channels. Default: 3.
- embed_dim (int): Number of linear projection output channels. Default: 96.
- norm_layer (nn.Module, optional): Normalization layer. Default: None
- """
-
- def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
- super().__init__()
- patch_size = to_2tuple(patch_size)
- self.patch_size = patch_size
-
- self.in_chans = in_chans
- self.embed_dim = embed_dim
-
- self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
- if norm_layer is not None:
- self.norm = norm_layer(embed_dim)
- else:
- self.norm = None
-
- def forward(self, x):
- """Forward function."""
- # padding
- _, _, H, W = x.size()
- if W % self.patch_size[1] != 0:
- x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1]))
- if H % self.patch_size[0] != 0:
- x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0]))
-
- x = self.proj(x) # B C Wh Ww
- if self.norm is not None:
- Wh, Ww = x.size(2), x.size(3)
- x = x.flatten(2).transpose(1, 2)
- x = self.norm(x)
- x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww)
-
- return x
-
-
-class SwinTransformer(nn.Module):
- """Swin Transformer backbone.
- A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` -
- https://arxiv.org/pdf/2103.14030
- Args:
- pretrain_img_size (int): Input image size for training the pretrained model,
- used in absolute position embedding. Default 224.
- patch_size (int | tuple(int)): Patch size. Default: 4.
- in_chans (int): Number of input image channels. Default: 3.
- embed_dim (int): Number of linear projection output channels. Default: 96.
- depths (tuple[int]): Depths of each Swin Transformer stage.
- num_heads (tuple[int]): Number of attention head of each stage.
- window_size (int): Window size. Default: 7.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4.
- qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
- qk_scale (float): Override default qk scale of head_dim ** -0.5 if set.
- drop_rate (float): Dropout rate.
- attn_drop_rate (float): Attention dropout rate. Default: 0.
- drop_path_rate (float): Stochastic depth rate. Default: 0.2.
- norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
- ape (bool): If True, add absolute position embedding to the patch embedding. Default: False.
- patch_norm (bool): If True, add normalization after patch embedding. Default: True.
- out_indices (Sequence[int]): Output from which stages.
- frozen_stages (int): Stages to be frozen (stop grad and set eval mode).
- -1 means not freezing any parameters.
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
- dilation (bool): If True, the output is 16x downsampled; otherwise it is 32x downsampled.
- """
-
- def __init__(
- self,
- pretrain_img_size=224,
- patch_size=4,
- in_chans=3,
- embed_dim=96,
- depths=[2, 2, 6, 2],
- num_heads=[3, 6, 12, 24],
- window_size=7,
- mlp_ratio=4.0,
- qkv_bias=True,
- qk_scale=None,
- drop_rate=0.0,
- attn_drop_rate=0.0,
- drop_path_rate=0.2,
- norm_layer=nn.LayerNorm,
- ape=False,
- patch_norm=True,
- out_indices=(0, 1, 2, 3),
- frozen_stages=-1,
- dilation=False,
- use_checkpoint=False,
- ):
- super().__init__()
-
- self.pretrain_img_size = pretrain_img_size
- self.num_layers = len(depths)
- self.embed_dim = embed_dim
- self.ape = ape
- self.patch_norm = patch_norm
- self.out_indices = out_indices
- self.frozen_stages = frozen_stages
- self.dilation = dilation
-
- # if use_checkpoint:
- # print("use_checkpoint!!!!!!!!!!!!!!!!!!!!!!!!")
-
- # split image into non-overlapping patches
- self.patch_embed = PatchEmbed(
- patch_size=patch_size,
- in_chans=in_chans,
- embed_dim=embed_dim,
- norm_layer=norm_layer if self.patch_norm else None,
- )
-
- # absolute position embedding
- if self.ape:
- pretrain_img_size = to_2tuple(pretrain_img_size)
- patch_size = to_2tuple(patch_size)
- patches_resolution = [
- pretrain_img_size[0] // patch_size[0],
- pretrain_img_size[1] // patch_size[1],
- ]
-
- self.absolute_pos_embed = nn.Parameter(
- torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1])
- )
- trunc_normal_(self.absolute_pos_embed, std=0.02)
-
- self.pos_drop = nn.Dropout(p=drop_rate)
-
- # stochastic depth
- dpr = [
- x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))
- ] # stochastic depth decay rule
-
- # build layers
- self.layers = nn.ModuleList()
- # prepare downsample list
- downsamplelist = [PatchMerging for i in range(self.num_layers)]
- downsamplelist[-1] = None
- num_features = [int(embed_dim * 2**i) for i in range(self.num_layers)]
- if self.dilation:
- downsamplelist[-2] = None
- num_features[-1] = int(embed_dim * 2 ** (self.num_layers - 1)) // 2
- for i_layer in range(self.num_layers):
- layer = BasicLayer(
- # dim=int(embed_dim * 2 ** i_layer),
- dim=num_features[i_layer],
- depth=depths[i_layer],
- num_heads=num_heads[i_layer],
- window_size=window_size,
- mlp_ratio=mlp_ratio,
- qkv_bias=qkv_bias,
- qk_scale=qk_scale,
- drop=drop_rate,
- attn_drop=attn_drop_rate,
- drop_path=dpr[sum(depths[:i_layer]) : sum(depths[: i_layer + 1])],
- norm_layer=norm_layer,
- # downsample=PatchMerging if (i_layer < self.num_layers - 1) else None,
- downsample=downsamplelist[i_layer],
- use_checkpoint=use_checkpoint,
- )
- self.layers.append(layer)
-
- # num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)]
- self.num_features = num_features
-
- # add a norm layer for each output
- for i_layer in out_indices:
- layer = norm_layer(num_features[i_layer])
- layer_name = f"norm{i_layer}"
- self.add_module(layer_name, layer)
-
- self._freeze_stages()
-
- def _freeze_stages(self):
- if self.frozen_stages >= 0:
- self.patch_embed.eval()
- for param in self.patch_embed.parameters():
- param.requires_grad = False
-
- if self.frozen_stages >= 1 and self.ape:
- self.absolute_pos_embed.requires_grad = False
-
- if self.frozen_stages >= 2:
- self.pos_drop.eval()
- for i in range(0, self.frozen_stages - 1):
- m = self.layers[i]
- m.eval()
- for param in m.parameters():
- param.requires_grad = False
-
- # def init_weights(self, pretrained=None):
- # """Initialize the weights in backbone.
- # Args:
- # pretrained (str, optional): Path to pre-trained weights.
- # Defaults to None.
- # """
-
- # def _init_weights(m):
- # if isinstance(m, nn.Linear):
- # trunc_normal_(m.weight, std=.02)
- # if isinstance(m, nn.Linear) and m.bias is not None:
- # nn.init.constant_(m.bias, 0)
- # elif isinstance(m, nn.LayerNorm):
- # nn.init.constant_(m.bias, 0)
- # nn.init.constant_(m.weight, 1.0)
-
- # if isinstance(pretrained, str):
- # self.apply(_init_weights)
- # logger = get_root_logger()
- # load_checkpoint(self, pretrained, strict=False, logger=logger)
- # elif pretrained is None:
- # self.apply(_init_weights)
- # else:
- # raise TypeError('pretrained must be a str or None')
-
- def forward_raw(self, x):
- """Forward function."""
- x = self.patch_embed(x)
-
- Wh, Ww = x.size(2), x.size(3)
- if self.ape:
- # interpolate the position embedding to the corresponding size
- absolute_pos_embed = F.interpolate(
- self.absolute_pos_embed, size=(Wh, Ww), mode="bicubic"
- )
- x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C
- else:
- x = x.flatten(2).transpose(1, 2)
- x = self.pos_drop(x)
-
- outs = []
- for i in range(self.num_layers):
- layer = self.layers[i]
- x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww)
- # import ipdb; ipdb.set_trace()
-
- if i in self.out_indices:
- norm_layer = getattr(self, f"norm{i}")
- x_out = norm_layer(x_out)
-
- out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous()
- outs.append(out)
- # in:
- # torch.Size([2, 3, 1024, 1024])
- # outs:
- # [torch.Size([2, 192, 256, 256]), torch.Size([2, 384, 128, 128]), \
- # torch.Size([2, 768, 64, 64]), torch.Size([2, 1536, 32, 32])]
- return tuple(outs)
-
- def forward(self, tensor_list: NestedTensor):
- x = tensor_list.tensors
-
- """Forward function."""
- x = self.patch_embed(x)
-
- Wh, Ww = x.size(2), x.size(3)
- if self.ape:
- # interpolate the position embedding to the corresponding size
- absolute_pos_embed = F.interpolate(
- self.absolute_pos_embed, size=(Wh, Ww), mode="bicubic"
- )
- x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C
- else:
- x = x.flatten(2).transpose(1, 2)
- x = self.pos_drop(x)
-
- outs = []
- for i in range(self.num_layers):
- layer = self.layers[i]
- x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww)
-
- if i in self.out_indices:
- norm_layer = getattr(self, f"norm{i}")
- x_out = norm_layer(x_out)
-
- out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous()
- outs.append(out)
- # in:
- # torch.Size([2, 3, 1024, 1024])
- # out:
- # [torch.Size([2, 192, 256, 256]), torch.Size([2, 384, 128, 128]), \
- # torch.Size([2, 768, 64, 64]), torch.Size([2, 1536, 32, 32])]
-
- # collect for nesttensors
- outs_dict = {}
- for idx, out_i in enumerate(outs):
- m = tensor_list.mask
- assert m is not None
- mask = F.interpolate(m[None].float(), size=out_i.shape[-2:]).to(torch.bool)[0]
- outs_dict[idx] = NestedTensor(out_i, mask)
-
- return outs_dict
-
- def train(self, mode=True):
- """Convert the model into training mode while keep layers freezed."""
- super(SwinTransformer, self).train(mode)
- self._freeze_stages()
-
-
-def build_swin_transformer(modelname, pretrain_img_size, **kw):
- assert modelname in [
- "swin_T_224_1k",
- "swin_B_224_22k",
- "swin_B_384_22k",
- "swin_L_224_22k",
- "swin_L_384_22k",
- ]
-
- model_para_dict = {
- "swin_T_224_1k": dict(
- embed_dim=96, depths=[2, 2, 6, 2], num_heads=[3, 6, 12, 24], window_size=7
- ),
- "swin_B_224_22k": dict(
- embed_dim=128, depths=[2, 2, 18, 2], num_heads=[4, 8, 16, 32], window_size=7
- ),
- "swin_B_384_22k": dict(
- embed_dim=128, depths=[2, 2, 18, 2], num_heads=[4, 8, 16, 32], window_size=12
- ),
- "swin_L_224_22k": dict(
- embed_dim=192, depths=[2, 2, 18, 2], num_heads=[6, 12, 24, 48], window_size=7
- ),
- "swin_L_384_22k": dict(
- embed_dim=192, depths=[2, 2, 18, 2], num_heads=[6, 12, 24, 48], window_size=12
- ),
- }
- kw_cfg = model_para_dict[modelname]
- kw_cfg.update(kw)
- model = SwinTransformer(pretrain_img_size=pretrain_img_size, **kw_cfg)
- return model
-
-
-if __name__ == "__main__":
- model = build_swin_transformer("swin_L_384_22k", 384, dilation=True)
- x = torch.rand(2, 3, 1024, 1024)
- y = model.forward_raw(x)
- import ipdb
-
- ipdb.set_trace()
- x = torch.rand(2, 3, 384, 384)
- y = model.forward_raw(x)
diff --git a/spaces/xswu/HPSv2/src/training/scheduler.py b/spaces/xswu/HPSv2/src/training/scheduler.py
deleted file mode 100644
index fba76fcf1720b11d136a5ab6d3a58ab2fbe42f74..0000000000000000000000000000000000000000
--- a/spaces/xswu/HPSv2/src/training/scheduler.py
+++ /dev/null
@@ -1,53 +0,0 @@
-import numpy as np
-
-
-def assign_learning_rate(optimizer, new_lr):
- for param_group in optimizer.param_groups:
- param_group["lr"] = new_lr
-
-
-def _warmup_lr(base_lr, warmup_length, step):
- return base_lr * (step + 1) / warmup_length
-
-
-def const_lr(optimizer, base_lr, warmup_length, steps):
- def _lr_adjuster(step):
- if step < warmup_length:
- lr = _warmup_lr(base_lr, warmup_length, step)
- else:
- lr = base_lr
- assign_learning_rate(optimizer, lr)
- return lr
- return _lr_adjuster
-
-
-def const_lr_cooldown(optimizer, base_lr, warmup_length, steps, cooldown_steps, cooldown_power=1.0, cooldown_end_lr=0.):
- def _lr_adjuster(step):
- start_cooldown_step = steps - cooldown_steps
- if step < warmup_length:
- lr = _warmup_lr(base_lr, warmup_length, step)
- else:
- if step < start_cooldown_step:
- lr = base_lr
- else:
- e = step - start_cooldown_step
- es = steps - start_cooldown_step
- # linear decay if power == 1; polynomial decay otherwise;
- decay = (1 - (e/es)) ** cooldown_power
- lr = decay * (base_lr - cooldown_end_lr) + cooldown_end_lr
- assign_learning_rate(optimizer, lr)
- return lr
- return _lr_adjuster
-
-
-def cosine_lr(optimizer, base_lr, warmup_length, steps):
- def _lr_adjuster(step):
- if step < warmup_length:
- lr = _warmup_lr(base_lr, warmup_length, step)
- else:
- e = step - warmup_length
- es = steps - warmup_length
- lr = 0.5 * (1 + np.cos(np.pi * e / es)) * base_lr
- assign_learning_rate(optimizer, lr)
- return lr
- return _lr_adjuster
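-
-
-# Hedged usage sketch (not part of the original file; `model` and `total_steps` are
-# placeholders, and torch is assumed to be available in the caller):
-#
-#   optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
-#   scheduler = cosine_lr(optimizer, base_lr=1e-3, warmup_length=100, steps=total_steps)
-#   for step in range(total_steps):
-#       lr = scheduler(step)   # assigns the new lr to every param group and returns it
-#       ...                    # run the training step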
diff --git a/spaces/xuxw98/TAPA/scripts/prepare_redpajama.py b/spaces/xuxw98/TAPA/scripts/prepare_redpajama.py
deleted file mode 100644
index 8da1c1b4e5525e43268e2c2b0c59be99109894ed..0000000000000000000000000000000000000000
--- a/spaces/xuxw98/TAPA/scripts/prepare_redpajama.py
+++ /dev/null
@@ -1,181 +0,0 @@
-import json
-import glob
-import os
-from pathlib import Path
-import sys
-
-# support running without installing as a package
-wd = Path(__file__).parent.parent.resolve()
-sys.path.append(str(wd))
-
-import numpy as np
-from tqdm import tqdm
-
-from lit_llama import Tokenizer
-import lit_llama.packed_dataset as packed_dataset
-
-
-filenames_sample = [
- "arxiv_sample.jsonl",
- "book_sample.jsonl",
- "c4_sample.jsonl",
- "cc_2019-30_sample.jsonl",
- "cc_2020-05_sample.jsonl",
- "cc_2021-04_sample.jsonl",
- "cc_2022-05_sample.jsonl",
- "cc_2023-06_sample.jsonl",
- "github_sample.jsonl",
- "stackexchange_sample.jsonl",
- "wikipedia_sample.jsonl",
-]
-
-filename_sets = {
- "arxiv": "arxiv/arxiv*",
- "book": "book/book*",
- "c4": "c4/c4-train*",
- "common_crawl": "common_crawl/*",
- "github": "github/filtered*",
- "stackexchange": "stackexchange/stackexchange*",
- "wikipedia": "wikipedia/wiki*",
-}
-
-
-def prepare_sample(
- source_path: Path,
- tokenizer_path: Path,
- destination_path: Path,
- chunk_size: int,
- match = ""
-) -> None:
- """Prepare the "Red Pajama" dataset. We assume tokenizer has been trained (i.e. we reuse LLaMA's tokenizer model)."""
- destination_path.mkdir(parents=True, exist_ok=True)
-
- tokenizer = Tokenizer(tokenizer_path)
-
- for name in filenames_sample:
- if match and match not in name:
- continue
-
- filepath = source_path / name
-
- if not filepath.is_file():
- raise RuntimeError(
- f"Input file not found at {filepath}. \n"
- "Make sure you download the data, e.g. wget -i https://data.together.xyz/redpajama-data-1T/v1.0.0/urls.txt or through \n"
- "https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T \n"
- "https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T-Sample \n"
- )
-
- prefix, _ = os.path.splitext(name)
-
- builder = packed_dataset.PackedDatasetBuilder(
- outdir=destination_path,
- prefix=prefix,
- chunk_size=chunk_size,
- sep_token=tokenizer.bos_id,
- dtype="auto",
- vocab_size=tokenizer.vocab_size,
- )
-
- print(f"Processing {name}")
-
- with open(filepath, encoding="utf-8") as f:
- for row in tqdm(f):
- text = json.loads(row)["text"]
- text_ids = tokenizer.encode(text)
- builder.add_array(np.array(text_ids, dtype=builder.dtype))
-
- builder.write_reminder()
-
-
-def prepare_full(
- source_path: Path,
- tokenizer_path: Path,
- destination_path: Path,
- chunk_size: int,
- match: str = ""
-) -> None:
- """Prepare the "Red Pajama" dataset. We assume tokenizer has been trained (i.e. we reuse LLaMA's tokenizer model)."""
- import zstandard as zstd
-
- destination_path.mkdir(parents=True, exist_ok=True)
-
- tokenizer = Tokenizer(tokenizer_path)
-
- for set_name, pattern in filename_sets.items():
- if match and match not in set_name:
- continue
-
- is_cc = set_name == "common_crawl"
-
- filenames = glob.glob(os.path.join(source_path, pattern), recursive=True)
-
- if not filenames:
- raise RuntimeError(
- f"No files matching {pattern} found at {source_path}. \n"
- "Make sure you download the data, e.g. wget -i https://data.together.xyz/redpajama-data-1T/v1.0.0/urls.txt or through \n"
- "https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T \n"
- "https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T-Sample \n"
- )
-
- builder = packed_dataset.PackedDatasetBuilder(
- outdir=destination_path,
- prefix=set_name,
- chunk_size=chunk_size,
- sep_token=tokenizer.bos_id,
- dtype="auto",
- vocab_size=tokenizer.vocab_size,
- )
-
- for name in filenames:
- filepath = source_path / name
-
- print(f"Processing {name}")
-
- if is_cc:
- with zstd.open(open(filepath, "rb"), "rt", encoding="utf-8") as f:
- for row in tqdm(f):
- text = json.loads(row)["text"]
- text_ids = tokenizer.encode(text)
- builder.add_array(np.array(text_ids, dtype=builder.dtype))
- else:
- with open(filepath, encoding="utf-8") as f:
- for row in tqdm(f):
- text = json.loads(row)["text"]
- text_ids = tokenizer.encode(text)
- builder.add_array(np.array(text_ids, dtype=builder.dtype))
-
- builder.write_reminder()
-
-
-def prepare(
- source_path: Path = Path("data/RedPajama-Data-1T-Sample"),
- tokenizer_path: Path = Path("checkpoints/lit-llama/tokenizer.model"),
- destination_path: Path = Path("data/red_pajama_sample"),
- chunk_size: int = 2049 * 1024, # 2048 block size + 1 for causal (from LLaMA), 1024 blocks
- sample: bool = False,
- match: str = "",
-) -> None:
- """Prepare the "Red Pajama" dataset. We assume tokenizer has been trained (i.e. we reuse LLaMA's tokenizer model)."""
- if sample:
- prepare_sample(
- source_path=source_path,
- tokenizer_path=tokenizer_path,
- destination_path=destination_path,
- chunk_size=chunk_size,
- match=match,
- )
- else:
- prepare_full(
- source_path=source_path,
- tokenizer_path=tokenizer_path,
- destination_path=destination_path,
- chunk_size=chunk_size,
- match=match,
- )
-
-
-if __name__ == "__main__":
- from jsonargparse import CLI
-
- CLI(prepare)
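-
-# Hedged invocation sketch (not part of the original file; the paths are placeholders
-# that mirror the defaults of `prepare` above):
-#
-#   python scripts/prepare_redpajama.py \
-#       --source_path data/RedPajama-Data-1T-Sample \
-#       --tokenizer_path checkpoints/lit-llama/tokenizer.model \
-#       --destination_path data/red_pajama_sample \
-#       --sample true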
diff --git a/spaces/xxccc/gpt-academic/crazy_functions/test_project/cpp/cppipc/pool_alloc.cpp b/spaces/xxccc/gpt-academic/crazy_functions/test_project/cpp/cppipc/pool_alloc.cpp
deleted file mode 100644
index c94575903bdf2eef71ecbe66382375552446e510..0000000000000000000000000000000000000000
--- a/spaces/xxccc/gpt-academic/crazy_functions/test_project/cpp/cppipc/pool_alloc.cpp
+++ /dev/null
@@ -1,17 +0,0 @@
-#include "libipc/pool_alloc.h"
-
-#include "libipc/memory/resource.h"
-
-namespace ipc {
-namespace mem {
-
-void* pool_alloc::alloc(std::size_t size) {
- return async_pool_alloc::alloc(size);
-}
-
-void pool_alloc::free(void* p, std::size_t size) {
- async_pool_alloc::free(p, size);
-}
-
-} // namespace mem
-} // namespace ipc
diff --git a/spaces/yangheng/Waifu2X-Image-Scale/Waifu2x/Common.py b/spaces/yangheng/Waifu2X-Image-Scale/Waifu2x/Common.py
deleted file mode 100644
index c4d0e92bde751f2980ab1d3a21bd130e215d1983..0000000000000000000000000000000000000000
--- a/spaces/yangheng/Waifu2X-Image-Scale/Waifu2x/Common.py
+++ /dev/null
@@ -1,189 +0,0 @@
-from contextlib import contextmanager
-from math import sqrt, log
-
-import torch
-import torch.nn as nn
-
-
-# import warnings
-# warnings.simplefilter('ignore')
-
-
-class BaseModule(nn.Module):
- def __init__(self):
- self.act_fn = None
- super(BaseModule, self).__init__()
-
- def selu_init_params(self):
- for m in self.modules():
- if isinstance(m, nn.Conv2d) and m.weight.requires_grad:
- m.weight.data.normal_(0.0, 1.0 / sqrt(m.weight.numel()))
- if m.bias is not None:
- m.bias.data.fill_(0)
- elif isinstance(m, nn.BatchNorm2d) and m.weight.requires_grad:
- m.weight.data.fill_(1)
- m.bias.data.zero_()
-
- elif isinstance(m, nn.Linear) and m.weight.requires_grad:
- m.weight.data.normal_(0, 1.0 / sqrt(m.weight.numel()))
- m.bias.data.zero_()
-
- def initialize_weights_xavier_uniform(self):
- for m in self.modules():
- if isinstance(m, nn.Conv2d) and m.weight.requires_grad:
- # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='leaky_relu')
- nn.init.xavier_uniform_(m.weight)
- if m.bias is not None:
- m.bias.data.zero_()
- elif isinstance(m, nn.BatchNorm2d) and m.weight.requires_grad:
- m.weight.data.fill_(1)
- m.bias.data.zero_()
-
- def load_state_dict(self, state_dict, strict=True, self_state=False):
- own_state = self_state if self_state else self.state_dict()
- for name, param in state_dict.items():
- if name in own_state:
- try:
- own_state[name].copy_(param.data)
- except Exception as e:
- print("Parameter {} fails to load.".format(name))
- print("-----------------------------------------")
- print(e)
- else:
- print("Parameter {} is not in the model. ".format(name))
-
- @contextmanager
- def set_activation_inplace(self):
- if hasattr(self, 'act_fn') and hasattr(self.act_fn, 'inplace'):
- # save memory
- self.act_fn.inplace = True
- yield
- self.act_fn.inplace = False
- else:
- yield
-
- def total_parameters(self):
- total = sum([i.numel() for i in self.parameters()])
- trainable = sum([i.numel() for i in self.parameters() if i.requires_grad])
- print("Total parameters : {}. Trainable parameters : {}".format(total, trainable))
- return total
-
- def forward(self, *x):
- raise NotImplementedError
-
-
-class ResidualFixBlock(BaseModule):
- def __init__(self, in_channels, out_channels, kernel_size=3, padding=1, dilation=1,
- groups=1, activation=nn.SELU(), conv=nn.Conv2d):
- super(ResidualFixBlock, self).__init__()
- self.act_fn = activation
- self.m = nn.Sequential(
- conv(in_channels, out_channels, kernel_size, padding=padding, dilation=dilation, groups=groups),
- activation,
- # conv(out_channels, out_channels, kernel_size, padding=(kernel_size - 1) // 2, dilation=1, groups=groups),
- conv(in_channels, out_channels, kernel_size, padding=padding, dilation=dilation, groups=groups),
- )
-
- def forward(self, x):
- out = self.m(x)
- return self.act_fn(out + x)
-
-
-class ConvBlock(BaseModule):
- def __init__(self, in_channels, out_channels, kernel_size=3, padding=1, dilation=1, groups=1,
- activation=nn.SELU(), conv=nn.Conv2d):
- super(ConvBlock, self).__init__()
- self.m = nn.Sequential(conv(in_channels, out_channels, kernel_size, padding=padding,
- dilation=dilation, groups=groups),
- activation)
-
- def forward(self, x):
- return self.m(x)
-
-
-class UpSampleBlock(BaseModule):
- def __init__(self, channels, scale, activation, atrous_rate=1, conv=nn.Conv2d):
- assert scale in [2, 4, 8], "Currently UpSampleBlock supports 2, 4, 8 scaling"
- super(UpSampleBlock, self).__init__()
- m = nn.Sequential(
- conv(channels, 4 * channels, kernel_size=3, padding=atrous_rate, dilation=atrous_rate),
- activation,
- nn.PixelShuffle(2)
- )
- self.m = nn.Sequential(*[m for _ in range(int(log(scale, 2)))])
-
- def forward(self, x):
- return self.m(x)
-
-
-class SpatialChannelSqueezeExcitation(BaseModule):
- # https://arxiv.org/abs/1709.01507
- # https://arxiv.org/pdf/1803.02579v1.pdf
- def __init__(self, in_channel, reduction=16, activation=nn.ReLU()):
- super(SpatialChannelSqueezeExcitation, self).__init__()
- linear_nodes = max(in_channel // reduction, 4) # avoid only 1 node case
- self.avg_pool = nn.AdaptiveAvgPool2d(1)
- self.channel_excite = nn.Sequential(
- # check the paper for the number 16 in reduction. It is selected by experiment.
- nn.Linear(in_channel, linear_nodes),
- activation,
- nn.Linear(linear_nodes, in_channel),
- nn.Sigmoid()
- )
- self.spatial_excite = nn.Sequential(
- nn.Conv2d(in_channel, 1, kernel_size=1, stride=1, padding=0, bias=False),
- nn.Sigmoid()
- )
-
- def forward(self, x):
- b, c, h, w = x.size()
- #
- channel = self.avg_pool(x).view(b, c)
- # channel = F.avg_pool2d(x, kernel_size=(h,w)).view(b,c) # used for porting to other frameworks
- cSE = self.channel_excite(channel).view(b, c, 1, 1)
- x_cSE = torch.mul(x, cSE)
-
- # spatial
- sSE = self.spatial_excite(x)
- x_sSE = torch.mul(x, sSE)
- # return x_sSE
- return torch.add(x_cSE, x_sSE)
-
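-# Hedged example (not in the original file): the scSE block is shape-preserving; it only
-# recalibrates features along the channel and spatial axes.
-#
-#   scse = SpatialChannelSqueezeExcitation(in_channel=64)
-#   feat = torch.randn(2, 64, 32, 32)
-#   out = scse(feat)   # (2, 64, 32, 32)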
-
-class PartialConv(nn.Module):
- # reference:
- # Image Inpainting for Irregular Holes Using Partial Convolutions
- # http://masc.cs.gmu.edu/wiki/partialconv/show?time=2018-05-24+21%3A41%3A10
- # https://github.com/naoto0804/pytorch-inpainting-with-partial-conv/blob/master/net.py
- # https://github.com/SeitaroShinagawa/chainer-partial_convolution_image_inpainting/blob/master/common/net.py
- # partial based padding
- # https: // github.com / NVIDIA / partialconv / blob / master / models / pd_resnet.py
- def __init__(self, in_channels, out_channels, kernel_size, stride=1,
- padding=0, dilation=1, groups=1, bias=True):
-
- super(PartialConv, self).__init__()
- self.feature_conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride,
- padding, dilation, groups, bias)
-
- self.mask_conv = nn.Conv2d(1, 1, kernel_size, stride,
- padding, dilation, groups, bias=False)
- self.window_size = self.mask_conv.kernel_size[0] * self.mask_conv.kernel_size[1]
- torch.nn.init.constant_(self.mask_conv.weight, 1.0)
-
- for param in self.mask_conv.parameters():
- param.requires_grad = False
-
- def forward(self, x):
- output = self.feature_conv(x)
- if self.feature_conv.bias is not None:
- output_bias = self.feature_conv.bias.view(1, -1, 1, 1).expand_as(output)
- else:
- output_bias = torch.zeros_like(output, device=x.device)
-
- with torch.no_grad():
- ones = torch.ones(1, 1, x.size(2), x.size(3), device=x.device)
- output_mask = self.mask_conv(ones)
- output_mask = self.window_size / output_mask
- output = (output - output_bias) * output_mask + output_bias
-
- return output
diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/data2vec/modeling_data2vec_text.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/data2vec/modeling_data2vec_text.py
deleted file mode 100644
index 7cbaee692564b4d611028aa6cd12d73679853aaf..0000000000000000000000000000000000000000
--- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/data2vec/modeling_data2vec_text.py
+++ /dev/null
@@ -1,1560 +0,0 @@
-# coding=utf-8
-# Copyright 2022 The HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""PyTorch Data2VecText model."""
-
-import math
-from typing import List, Optional, Tuple, Union
-
-import torch
-import torch.utils.checkpoint
-from torch import nn
-from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
-
-from ...activations import ACT2FN, gelu
-from ...modeling_outputs import (
- BaseModelOutputWithPastAndCrossAttentions,
- BaseModelOutputWithPoolingAndCrossAttentions,
- CausalLMOutputWithCrossAttentions,
- MaskedLMOutput,
- MultipleChoiceModelOutput,
- QuestionAnsweringModelOutput,
- SequenceClassifierOutput,
- TokenClassifierOutput,
-)
-from ...modeling_utils import PreTrainedModel
-from ...pytorch_utils import apply_chunking_to_forward, find_pruneable_heads_and_indices, prune_linear_layer
-from ...utils import (
- add_code_sample_docstrings,
- add_start_docstrings,
- add_start_docstrings_to_model_forward,
- logging,
- replace_return_docstrings,
-)
-from .configuration_data2vec_text import Data2VecTextConfig
-
-
-logger = logging.get_logger(__name__)
-
-
-_HIDDEN_STATES_START_POSITION = 2
-
-# General docstring
-_CHECKPOINT_FOR_DOC = "facebook/data2vec-text-base"
-_CONFIG_FOR_DOC = "Data2VecTextConfig"
-
-
-DATA2VEC_TEXT_PRETRAINED_MODEL_ARCHIVE_LIST = [
- "facebook/data2vec-text-base",
- # See all data2vec models at https://huggingface.co/models?filter=data2vec-text
-]
-
-
-# Copied from transformers.models.roberta.modeling_roberta.RobertaEmbeddings with Roberta->Data2VecText
-class Data2VecTextForTextEmbeddings(nn.Module):
- """
- Same as BertEmbeddings with a tiny tweak for positional embeddings indexing.
- """
-
- # Copied from transformers.models.bert.modeling_bert.BertEmbeddings.__init__
- def __init__(self, config):
- super().__init__()
- self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id)
- self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size)
- self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size)
-
- # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load
- # any TensorFlow checkpoint file
- self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
- self.dropout = nn.Dropout(config.hidden_dropout_prob)
- # position_ids (1, len position emb) is contiguous in memory and exported when serialized
- self.position_embedding_type = getattr(config, "position_embedding_type", "absolute")
- self.register_buffer(
- "position_ids", torch.arange(config.max_position_embeddings).expand((1, -1)), persistent=False
- )
- self.register_buffer(
- "token_type_ids", torch.zeros(self.position_ids.size(), dtype=torch.long), persistent=False
- )
-
- # End copy
- self.padding_idx = config.pad_token_id
- self.position_embeddings = nn.Embedding(
- config.max_position_embeddings, config.hidden_size, padding_idx=self.padding_idx
- )
-
- def forward(
- self, input_ids=None, token_type_ids=None, position_ids=None, inputs_embeds=None, past_key_values_length=0
- ):
- if position_ids is None:
- if input_ids is not None:
- # Create the position ids from the input token ids. Any padded tokens remain padded.
- position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length)
- else:
- position_ids = self.create_position_ids_from_inputs_embeds(inputs_embeds)
-
- if input_ids is not None:
- input_shape = input_ids.size()
- else:
- input_shape = inputs_embeds.size()[:-1]
-
- seq_length = input_shape[1]
-
- # If token_type_ids is not provided, fall back to the all-zeros buffer registered in the constructor (the value it
- # usually takes when auto-generated). The registered buffer lets users trace the model without passing
- # token_type_ids and solves issue #5664.
- if token_type_ids is None:
- if hasattr(self, "token_type_ids"):
- buffered_token_type_ids = self.token_type_ids[:, :seq_length]
- buffered_token_type_ids_expanded = buffered_token_type_ids.expand(input_shape[0], seq_length)
- token_type_ids = buffered_token_type_ids_expanded
- else:
- token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=self.position_ids.device)
-
- if inputs_embeds is None:
- inputs_embeds = self.word_embeddings(input_ids)
- token_type_embeddings = self.token_type_embeddings(token_type_ids)
-
- embeddings = inputs_embeds + token_type_embeddings
- if self.position_embedding_type == "absolute":
- position_embeddings = self.position_embeddings(position_ids)
- embeddings += position_embeddings
- embeddings = self.LayerNorm(embeddings)
- embeddings = self.dropout(embeddings)
- return embeddings
-
- def create_position_ids_from_inputs_embeds(self, inputs_embeds):
- """
- We are provided embeddings directly. We cannot infer which are padded so just generate sequential position ids.
-
- Args:
- inputs_embeds: torch.Tensor
-
- Returns: torch.Tensor
- """
- input_shape = inputs_embeds.size()[:-1]
- sequence_length = input_shape[1]
-
- position_ids = torch.arange(
- self.padding_idx + 1, sequence_length + self.padding_idx + 1, dtype=torch.long, device=inputs_embeds.device
- )
- return position_ids.unsqueeze(0).expand(input_shape)
-
-
-# Copied from transformers.models.roberta.modeling_roberta.RobertaSelfAttention with Roberta->Data2VecText
-class Data2VecTextSelfAttention(nn.Module):
- def __init__(self, config, position_embedding_type=None):
- super().__init__()
- if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"):
- raise ValueError(
- f"The hidden size ({config.hidden_size}) is not a multiple of the number of attention "
- f"heads ({config.num_attention_heads})"
- )
-
- self.num_attention_heads = config.num_attention_heads
- self.attention_head_size = int(config.hidden_size / config.num_attention_heads)
- self.all_head_size = self.num_attention_heads * self.attention_head_size
-
- self.query = nn.Linear(config.hidden_size, self.all_head_size)
- self.key = nn.Linear(config.hidden_size, self.all_head_size)
- self.value = nn.Linear(config.hidden_size, self.all_head_size)
-
- self.dropout = nn.Dropout(config.attention_probs_dropout_prob)
- self.position_embedding_type = position_embedding_type or getattr(
- config, "position_embedding_type", "absolute"
- )
- if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query":
- self.max_position_embeddings = config.max_position_embeddings
- self.distance_embedding = nn.Embedding(2 * config.max_position_embeddings - 1, self.attention_head_size)
-
- self.is_decoder = config.is_decoder
-
- def transpose_for_scores(self, x: torch.Tensor) -> torch.Tensor:
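- # reshape (batch, seq_len, all_head_size) -> (batch, num_heads, seq_len, head_size) so attention is computed per head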
- new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size)
- x = x.view(new_x_shape)
- return x.permute(0, 2, 1, 3)
-
- def forward(
- self,
- hidden_states: torch.Tensor,
- attention_mask: Optional[torch.FloatTensor] = None,
- head_mask: Optional[torch.FloatTensor] = None,
- encoder_hidden_states: Optional[torch.FloatTensor] = None,
- encoder_attention_mask: Optional[torch.FloatTensor] = None,
- past_key_value: Optional[Tuple[Tuple[torch.FloatTensor]]] = None,
- output_attentions: Optional[bool] = False,
- ) -> Tuple[torch.Tensor]:
- mixed_query_layer = self.query(hidden_states)
-
- # If this is instantiated as a cross-attention module, the keys
- # and values come from an encoder; the attention mask needs to be
- # such that the encoder's padding tokens are not attended to.
- is_cross_attention = encoder_hidden_states is not None
-
- if is_cross_attention and past_key_value is not None:
- # reuse k,v, cross_attentions
- key_layer = past_key_value[0]
- value_layer = past_key_value[1]
- attention_mask = encoder_attention_mask
- elif is_cross_attention:
- key_layer = self.transpose_for_scores(self.key(encoder_hidden_states))
- value_layer = self.transpose_for_scores(self.value(encoder_hidden_states))
- attention_mask = encoder_attention_mask
- elif past_key_value is not None:
- key_layer = self.transpose_for_scores(self.key(hidden_states))
- value_layer = self.transpose_for_scores(self.value(hidden_states))
- key_layer = torch.cat([past_key_value[0], key_layer], dim=2)
- value_layer = torch.cat([past_key_value[1], value_layer], dim=2)
- else:
- key_layer = self.transpose_for_scores(self.key(hidden_states))
- value_layer = self.transpose_for_scores(self.value(hidden_states))
-
- query_layer = self.transpose_for_scores(mixed_query_layer)
-
- use_cache = past_key_value is not None
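- # a present cache means only the newest token is in the query, so the relative-position code below only needs the last key position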
- if self.is_decoder:
- # if cross_attention save Tuple(torch.Tensor, torch.Tensor) of all cross attention key/value_states.
- # Further calls to cross_attention layer can then reuse all cross-attention
- # key/value_states (first "if" case)
- # if uni-directional self-attention (decoder) save Tuple(torch.Tensor, torch.Tensor) of
- # all previous decoder key/value_states. Further calls to uni-directional self-attention
- # can concat previous decoder key/value_states to current projected key/value_states (third "elif" case)
- # if encoder bi-directional self-attention `past_key_value` is always `None`
- past_key_value = (key_layer, value_layer)
-
- # Take the dot product between "query" and "key" to get the raw attention scores.
- attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
-
- if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query":
- query_length, key_length = query_layer.shape[2], key_layer.shape[2]
- if use_cache:
- position_ids_l = torch.tensor(key_length - 1, dtype=torch.long, device=hidden_states.device).view(
- -1, 1
- )
- else:
- position_ids_l = torch.arange(query_length, dtype=torch.long, device=hidden_states.device).view(-1, 1)
- position_ids_r = torch.arange(key_length, dtype=torch.long, device=hidden_states.device).view(1, -1)
- distance = position_ids_l - position_ids_r
-
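- # distance[i, j] = i - j lies in [-(max_position_embeddings - 1), max_position_embeddings - 1]; shifting by
- # max_position_embeddings - 1 maps it onto valid indices of the distance_embedding table of size 2 * max_position_embeddings - 1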
- positional_embedding = self.distance_embedding(distance + self.max_position_embeddings - 1)
- positional_embedding = positional_embedding.to(dtype=query_layer.dtype) # fp16 compatibility
-
- if self.position_embedding_type == "relative_key":
- relative_position_scores = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding)
- attention_scores = attention_scores + relative_position_scores
- elif self.position_embedding_type == "relative_key_query":
- relative_position_scores_query = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding)
- relative_position_scores_key = torch.einsum("bhrd,lrd->bhlr", key_layer, positional_embedding)
- attention_scores = attention_scores + relative_position_scores_query + relative_position_scores_key
-
- attention_scores = attention_scores / math.sqrt(self.attention_head_size)
- if attention_mask is not None:
- # Apply the attention mask (precomputed for all layers in the Data2VecTextModel forward() function)
- attention_scores = attention_scores + attention_mask
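- # the mask is additive: 0 for positions to attend to and a large negative value for masked positions, so the
- # softmax pushes their probability toward zero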
-
- # Normalize the attention scores to probabilities.
- attention_probs = nn.functional.softmax(attention_scores, dim=-1)
-
- # This is actually dropping out entire tokens to attend to, which might
- # seem a bit unusual, but is taken from the original Transformer paper.
- attention_probs = self.dropout(attention_probs)
-
- # Mask heads if we want to
- if head_mask is not None:
- attention_probs = attention_probs * head_mask
-
- context_layer = torch.matmul(attention_probs, value_layer)
-
- context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
- new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,)
- context_layer = context_layer.view(new_context_layer_shape)
-
- outputs = (context_layer, attention_probs) if output_attentions else (context_layer,)
-
- if self.is_decoder:
- outputs = outputs + (past_key_value,)
- return outputs
-
-
-# Copied from transformers.models.bert.modeling_bert.BertSelfOutput
-class Data2VecTextSelfOutput(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.dense = nn.Linear(config.hidden_size, config.hidden_size)
- self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
- self.dropout = nn.Dropout(config.hidden_dropout_prob)
-
- def forward(self, hidden_states: torch.Tensor, input_tensor: torch.Tensor) -> torch.Tensor:
- hidden_states = self.dense(hidden_states)
- hidden_states = self.dropout(hidden_states)
- hidden_states = self.LayerNorm(hidden_states + input_tensor)
- return hidden_states
-
-
-# Copied from transformers.models.bert.modeling_bert.BertAttention with Bert->Data2VecText
-class Data2VecTextAttention(nn.Module):
- def __init__(self, config, position_embedding_type=None):
- super().__init__()
- self.self = Data2VecTextSelfAttention(config, position_embedding_type=position_embedding_type)
- self.output = Data2VecTextSelfOutput(config)
- self.pruned_heads = set()
-
- def prune_heads(self, heads):
- if len(heads) == 0:
- return
- heads, index = find_pruneable_heads_and_indices(
- heads, self.self.num_attention_heads, self.self.attention_head_size, self.pruned_heads
- )
-
- # Prune linear layers
- self.self.query = prune_linear_layer(self.self.query, index)
- self.self.key = prune_linear_layer(self.self.key, index)
- self.self.value = prune_linear_layer(self.self.value, index)
- self.output.dense = prune_linear_layer(self.output.dense, index, dim=1)
-
- # Update hyper params and store pruned heads
- self.self.num_attention_heads = self.self.num_attention_heads - len(heads)
- self.self.all_head_size = self.self.attention_head_size * self.self.num_attention_heads
- self.pruned_heads = self.pruned_heads.union(heads)
-
- def forward(
- self,
- hidden_states: torch.Tensor,
- attention_mask: Optional[torch.FloatTensor] = None,
- head_mask: Optional[torch.FloatTensor] = None,
- encoder_hidden_states: Optional[torch.FloatTensor] = None,
- encoder_attention_mask: Optional[torch.FloatTensor] = None,
- past_key_value: Optional[Tuple[Tuple[torch.FloatTensor]]] = None,
- output_attentions: Optional[bool] = False,
- ) -> Tuple[torch.Tensor]:
- self_outputs = self.self(
- hidden_states,
- attention_mask,
- head_mask,
- encoder_hidden_states,
- encoder_attention_mask,
- past_key_value,
- output_attentions,
- )
- attention_output = self.output(self_outputs[0], hidden_states)
- outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them
- return outputs
-
-
-# Copied from transformers.models.bert.modeling_bert.BertIntermediate
-class Data2VecTextIntermediate(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.dense = nn.Linear(config.hidden_size, config.intermediate_size)
- if isinstance(config.hidden_act, str):
- self.intermediate_act_fn = ACT2FN[config.hidden_act]
- else:
- self.intermediate_act_fn = config.hidden_act
-
- def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
- hidden_states = self.dense(hidden_states)
- hidden_states = self.intermediate_act_fn(hidden_states)
- return hidden_states
-
-
-# Copied from transformers.models.bert.modeling_bert.BertOutput
-class Data2VecTextOutput(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.dense = nn.Linear(config.intermediate_size, config.hidden_size)
- self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
- self.dropout = nn.Dropout(config.hidden_dropout_prob)
-
- def forward(self, hidden_states: torch.Tensor, input_tensor: torch.Tensor) -> torch.Tensor:
- hidden_states = self.dense(hidden_states)
- hidden_states = self.dropout(hidden_states)
- hidden_states = self.LayerNorm(hidden_states + input_tensor)
- return hidden_states
-
-
-# Copied from transformers.models.bert.modeling_bert.BertLayer with Bert->Data2VecText
-class Data2VecTextLayer(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.chunk_size_feed_forward = config.chunk_size_feed_forward
- self.seq_len_dim = 1
- self.attention = Data2VecTextAttention(config)
- self.is_decoder = config.is_decoder
- self.add_cross_attention = config.add_cross_attention
- if self.add_cross_attention:
- if not self.is_decoder:
- raise ValueError(f"{self} should be used as a decoder model if cross attention is added")
- self.crossattention = Data2VecTextAttention(config, position_embedding_type="absolute")
- self.intermediate = Data2VecTextIntermediate(config)
- self.output = Data2VecTextOutput(config)
-
- def forward(
- self,
- hidden_states: torch.Tensor,
- attention_mask: Optional[torch.FloatTensor] = None,
- head_mask: Optional[torch.FloatTensor] = None,
- encoder_hidden_states: Optional[torch.FloatTensor] = None,
- encoder_attention_mask: Optional[torch.FloatTensor] = None,
- past_key_value: Optional[Tuple[Tuple[torch.FloatTensor]]] = None,
- output_attentions: Optional[bool] = False,
- ) -> Tuple[torch.Tensor]:
- # decoder uni-directional self-attention cached key/values tuple is at positions 1,2
- self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None
- self_attention_outputs = self.attention(
- hidden_states,
- attention_mask,
- head_mask,
- output_attentions=output_attentions,
- past_key_value=self_attn_past_key_value,
- )
- attention_output = self_attention_outputs[0]
-
- # if decoder, the last output is tuple of self-attn cache
- if self.is_decoder:
- outputs = self_attention_outputs[1:-1]
- present_key_value = self_attention_outputs[-1]
- else:
- outputs = self_attention_outputs[1:] # add self attentions if we output attention weights
-
- cross_attn_present_key_value = None
- if self.is_decoder and encoder_hidden_states is not None:
- if not hasattr(self, "crossattention"):
- raise ValueError(
- f"If `encoder_hidden_states` are passed, {self} has to be instantiated with cross-attention layers"
- " by setting `config.add_cross_attention=True`"
- )
-
- # cross_attn cached key/values tuple is at positions 3,4 of past_key_value tuple
- cross_attn_past_key_value = past_key_value[-2:] if past_key_value is not None else None
- cross_attention_outputs = self.crossattention(
- attention_output,
- attention_mask,
- head_mask,
- encoder_hidden_states,
- encoder_attention_mask,
- cross_attn_past_key_value,
- output_attentions,
- )
- attention_output = cross_attention_outputs[0]
- outputs = outputs + cross_attention_outputs[1:-1] # add cross attentions if we output attention weights
-
- # add cross-attn cache to positions 3,4 of present_key_value tuple
- cross_attn_present_key_value = cross_attention_outputs[-1]
- present_key_value = present_key_value + cross_attn_present_key_value
-
- layer_output = apply_chunking_to_forward(
- self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output
- )
- outputs = (layer_output,) + outputs
-
- # if decoder, return the attn key/values as the last output
- if self.is_decoder:
- outputs = outputs + (present_key_value,)
-
- return outputs
-
- def feed_forward_chunk(self, attention_output):
- intermediate_output = self.intermediate(attention_output)
- layer_output = self.output(intermediate_output, attention_output)
- return layer_output
-
-
-# Copied from transformers.models.bert.modeling_bert.BertEncoder with Bert->Data2VecText
-class Data2VecTextEncoder(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.config = config
- self.layer = nn.ModuleList([Data2VecTextLayer(config) for _ in range(config.num_hidden_layers)])
- self.gradient_checkpointing = False
-
- def forward(
- self,
- hidden_states: torch.Tensor,
- attention_mask: Optional[torch.FloatTensor] = None,
- head_mask: Optional[torch.FloatTensor] = None,
- encoder_hidden_states: Optional[torch.FloatTensor] = None,
- encoder_attention_mask: Optional[torch.FloatTensor] = None,
- past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None,
- use_cache: Optional[bool] = None,
- output_attentions: Optional[bool] = False,
- output_hidden_states: Optional[bool] = False,
- return_dict: Optional[bool] = True,
- ) -> Union[Tuple[torch.Tensor], BaseModelOutputWithPastAndCrossAttentions]:
- all_hidden_states = () if output_hidden_states else None
- all_self_attentions = () if output_attentions else None
- all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None
-
- if self.gradient_checkpointing and self.training:
- if use_cache:
- logger.warning_once(
- "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
- )
- use_cache = False
-
- next_decoder_cache = () if use_cache else None
- for i, layer_module in enumerate(self.layer):
- if output_hidden_states:
- all_hidden_states = all_hidden_states + (hidden_states,)
-
- layer_head_mask = head_mask[i] if head_mask is not None else None
- past_key_value = past_key_values[i] if past_key_values is not None else None
-
- if self.gradient_checkpointing and self.training:
-
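- # torch.utils.checkpoint.checkpoint only replays the wrapped function with its tensor arguments, so
- # past_key_value and output_attentions are captured in a closure instead of being passed through the checkpoint call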
- def create_custom_forward(module):
- def custom_forward(*inputs):
- return module(*inputs, past_key_value, output_attentions)
-
- return custom_forward
-
- layer_outputs = torch.utils.checkpoint.checkpoint(
- create_custom_forward(layer_module),
- hidden_states,
- attention_mask,
- layer_head_mask,
- encoder_hidden_states,
- encoder_attention_mask,
- )
- else:
- layer_outputs = layer_module(
- hidden_states,
- attention_mask,
- layer_head_mask,
- encoder_hidden_states,
- encoder_attention_mask,
- past_key_value,
- output_attentions,
- )
-
- hidden_states = layer_outputs[0]
- if use_cache:
- next_decoder_cache += (layer_outputs[-1],)
- if output_attentions:
- all_self_attentions = all_self_attentions + (layer_outputs[1],)
- if self.config.add_cross_attention:
- all_cross_attentions = all_cross_attentions + (layer_outputs[2],)
-
- if output_hidden_states:
- all_hidden_states = all_hidden_states + (hidden_states,)
-
- if not return_dict:
- return tuple(
- v
- for v in [
- hidden_states,
- next_decoder_cache,
- all_hidden_states,
- all_self_attentions,
- all_cross_attentions,
- ]
- if v is not None
- )
- return BaseModelOutputWithPastAndCrossAttentions(
- last_hidden_state=hidden_states,
- past_key_values=next_decoder_cache,
- hidden_states=all_hidden_states,
- attentions=all_self_attentions,
- cross_attentions=all_cross_attentions,
- )
-
-
-# Copied from transformers.models.bert.modeling_bert.BertPooler
-class Data2VecTextPooler(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.dense = nn.Linear(config.hidden_size, config.hidden_size)
- self.activation = nn.Tanh()
-
- def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
- # We "pool" the model by simply taking the hidden state corresponding
- # to the first token.
- first_token_tensor = hidden_states[:, 0]
- pooled_output = self.dense(first_token_tensor)
- pooled_output = self.activation(pooled_output)
- return pooled_output
-
-
-class Data2VecTextPreTrainedModel(PreTrainedModel):
- """
- An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
- models.
- """
-
- config_class = Data2VecTextConfig
- base_model_prefix = "data2vec_text"
- supports_gradient_checkpointing = True
- _no_split_modules = ["Data2VecTextForTextEmbeddings", "Data2VecTextLayer"]
-
- def _init_weights(self, module):
- """Initialize the weights"""
- if isinstance(module, nn.Linear):
- # Slightly different from the TF version which uses truncated_normal for initialization
- # cf https://github.com/pytorch/pytorch/pull/5617
- module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
- if module.bias is not None:
- module.bias.data.zero_()
- elif isinstance(module, nn.Embedding):
- module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
- if module.padding_idx is not None:
- module.weight.data[module.padding_idx].zero_()
- elif isinstance(module, nn.LayerNorm):
- if hasattr(module, "bias") and module.bias is not None:
- module.bias.data.zero_()
- if hasattr(module, "weight") and module.weight is not None:
- module.weight.data.fill_(1.0)
-
- def _set_gradient_checkpointing(self, module, value=False):
- if isinstance(module, Data2VecTextEncoder):
- module.gradient_checkpointing = value
-
-
-DATA2VECTEXT_START_DOCSTRING = r"""
- Data2VecText was proposed in [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and
- Language](https://arxiv.org/pdf/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and
- Michael Auli.
-
- This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
- library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
- etc.).
-
- This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
- Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
- and behavior.
-
- Parameters:
- config ([`Data2VecTextConfig`]): Model configuration class with all the parameters of the
- model. Initializing with a config file does not load the weights associated with the model, only the
- configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
-"""
-
-DATA2VECTEXT_INPUTS_DOCSTRING = r"""
- Args:
- input_ids (`torch.LongTensor` of shape `({0})`):
- Indices of input sequence tokens in the vocabulary.
-
- Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
- [`PreTrainedTokenizer.__call__`] for details.
-
- [What are input IDs?](../glossary#input-ids)
- attention_mask (`torch.FloatTensor` of shape `({0})`, *optional*):
- Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
-
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
-
- [What are attention masks?](../glossary#attention-mask)
- token_type_ids (`torch.LongTensor` of shape `({0})`, *optional*):
- Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0,
- 1]`:
-
- - 0 corresponds to a *sentence A* token,
- - 1 corresponds to a *sentence B* token.
-
- [What are token type IDs?](../glossary#token-type-ids)
- position_ids (`torch.LongTensor` of shape `({0})`, *optional*):
- Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
- config.max_position_embeddings - 1]`.
-
- [What are position IDs?](../glossary#position-ids)
- head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*):
- Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
-
- - 1 indicates the head is **not masked**,
- - 0 indicates the head is **masked**.
-
- inputs_embeds (`torch.FloatTensor` of shape `({0}, hidden_size)`, *optional*):
- Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
- is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
- model's internal embedding lookup matrix.
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
- tensors for more detail.
- output_hidden_states (`bool`, *optional*):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
- more detail.
- return_dict (`bool`, *optional*):
- Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
-"""
-
-
-@add_start_docstrings(
- "The bare Data2VecText Model for text transformer outputting raw hidden-states without any specific head on top.",
- DATA2VECTEXT_START_DOCSTRING,
-)
-class Data2VecTextModel(Data2VecTextPreTrainedModel):
- """
-
- The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
- cross-attention is added between the self-attention layers, following the architecture described in *Attention is
- all you need*_ by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz
- Kaiser and Illia Polosukhin.
-
- To behave as a decoder the model needs to be initialized with the `is_decoder` argument of the configuration set
- to `True`. To be used in a Seq2Seq model, the model needs to be initialized with both the `is_decoder` argument and
- `add_cross_attention` set to `True`; an `encoder_hidden_states` is then expected as an input to the forward pass.
-
- .. _*Attention is all you need*: https://arxiv.org/abs/1706.03762
-
- """
-
- def __init__(self, config, add_pooling_layer=True):
- super().__init__(config)
- self.config = config
-
- self.embeddings = Data2VecTextForTextEmbeddings(config)
- self.encoder = Data2VecTextEncoder(config)
-
- self.pooler = Data2VecTextPooler(config) if add_pooling_layer else None
-
- # Initialize weights and apply final processing
- self.post_init()
-
- def get_input_embeddings(self):
- return self.embeddings.word_embeddings
-
- def set_input_embeddings(self, value):
- self.embeddings.word_embeddings = value
-
- def _prune_heads(self, heads_to_prune):
- """
- Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer}. See base
- class `PreTrainedModel`.
- """
- for layer, heads in heads_to_prune.items():
- self.encoder.layer[layer].attention.prune_heads(heads)
-
- @add_start_docstrings_to_model_forward(DATA2VECTEXT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
- @add_code_sample_docstrings(
- checkpoint=_CHECKPOINT_FOR_DOC,
- output_type=BaseModelOutputWithPoolingAndCrossAttentions,
- config_class=_CONFIG_FOR_DOC,
- )
- # Copied from transformers.models.bert.modeling_bert.BertModel.forward
- def forward(
- self,
- input_ids: Optional[torch.Tensor] = None,
- attention_mask: Optional[torch.Tensor] = None,
- token_type_ids: Optional[torch.Tensor] = None,
- position_ids: Optional[torch.Tensor] = None,
- head_mask: Optional[torch.Tensor] = None,
- inputs_embeds: Optional[torch.Tensor] = None,
- encoder_hidden_states: Optional[torch.Tensor] = None,
- encoder_attention_mask: Optional[torch.Tensor] = None,
- past_key_values: Optional[List[torch.FloatTensor]] = None,
- use_cache: Optional[bool] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple[torch.Tensor], BaseModelOutputWithPoolingAndCrossAttentions]:
- r"""
- encoder_hidden_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
- Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
- the model is configured as a decoder.
- encoder_attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*):
- Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
- the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`:
-
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
- past_key_values (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
- Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
-
- If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
- don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
- `decoder_input_ids` of shape `(batch_size, sequence_length)`.
- use_cache (`bool`, *optional*):
- If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
- `past_key_values`).
- """
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- if self.config.is_decoder:
- use_cache = use_cache if use_cache is not None else self.config.use_cache
- else:
- use_cache = False
-
- if input_ids is not None and inputs_embeds is not None:
- raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
- elif input_ids is not None:
- self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask)
- input_shape = input_ids.size()
- elif inputs_embeds is not None:
- input_shape = inputs_embeds.size()[:-1]
- else:
- raise ValueError("You have to specify either input_ids or inputs_embeds")
-
- batch_size, seq_length = input_shape
- device = input_ids.device if input_ids is not None else inputs_embeds.device
-
- # past_key_values_length
- past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0
-
- if attention_mask is None:
- attention_mask = torch.ones(((batch_size, seq_length + past_key_values_length)), device=device)
-
- if token_type_ids is None:
- if hasattr(self.embeddings, "token_type_ids"):
- buffered_token_type_ids = self.embeddings.token_type_ids[:, :seq_length]
- buffered_token_type_ids_expanded = buffered_token_type_ids.expand(batch_size, seq_length)
- token_type_ids = buffered_token_type_ids_expanded
- else:
- token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device)
-
- # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
- # ourselves in which case we just need to make it broadcastable to all heads.
- extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape)
-
- # If a 2D or 3D attention mask is provided for the cross-attention
- # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length]
- if self.config.is_decoder and encoder_hidden_states is not None:
- encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size()
- encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)
- if encoder_attention_mask is None:
- encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device)
- encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask)
- else:
- encoder_extended_attention_mask = None
-
- # Prepare head mask if needed
- # 1.0 in head_mask indicate we keep the head
- # attention_probs has shape bsz x n_heads x N x N
- # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
- # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
- head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
-
- embedding_output = self.embeddings(
- input_ids=input_ids,
- position_ids=position_ids,
- token_type_ids=token_type_ids,
- inputs_embeds=inputs_embeds,
- past_key_values_length=past_key_values_length,
- )
- encoder_outputs = self.encoder(
- embedding_output,
- attention_mask=extended_attention_mask,
- head_mask=head_mask,
- encoder_hidden_states=encoder_hidden_states,
- encoder_attention_mask=encoder_extended_attention_mask,
- past_key_values=past_key_values,
- use_cache=use_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
- sequence_output = encoder_outputs[0]
- pooled_output = self.pooler(sequence_output) if self.pooler is not None else None
-
- if not return_dict:
- return (sequence_output, pooled_output) + encoder_outputs[1:]
-
- return BaseModelOutputWithPoolingAndCrossAttentions(
- last_hidden_state=sequence_output,
- pooler_output=pooled_output,
- past_key_values=encoder_outputs.past_key_values,
- hidden_states=encoder_outputs.hidden_states,
- attentions=encoder_outputs.attentions,
- cross_attentions=encoder_outputs.cross_attentions,
- )
-
-
-@add_start_docstrings(
- """Data2VecText Model with a `language modeling` head on top for CLM fine-tuning.""", DATA2VECTEXT_START_DOCSTRING
-)
-class Data2VecTextForCausalLM(Data2VecTextPreTrainedModel):
- _tied_weights_keys = ["lm_head.decoder.weight", "lm_head.decoder.bias"]
-
- def __init__(self, config):
- super().__init__(config)
-
- if not config.is_decoder:
- logger.warning("If you want to use `Data2VecTextLMHeadModel` as a standalone, add `is_decoder=True.`")
-
- self.data2vec_text = Data2VecTextModel(config, add_pooling_layer=False)
- self.lm_head = Data2VecTextLMHead(config)
-
- # Initialize weights and apply final processing
- self.post_init()
-
- def get_output_embeddings(self):
- return self.lm_head.decoder
-
- def set_output_embeddings(self, new_embeddings):
- self.lm_head.decoder = new_embeddings
-
- @add_start_docstrings_to_model_forward(DATA2VECTEXT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
- @replace_return_docstrings(output_type=CausalLMOutputWithCrossAttentions, config_class=_CONFIG_FOR_DOC)
- def forward(
- self,
- input_ids: Optional[torch.LongTensor] = None,
- attention_mask: Optional[torch.FloatTensor] = None,
- token_type_ids: Optional[torch.LongTensor] = None,
- position_ids: Optional[torch.LongTensor] = None,
- head_mask: Optional[torch.FloatTensor] = None,
- inputs_embeds: Optional[torch.FloatTensor] = None,
- encoder_hidden_states: Optional[torch.FloatTensor] = None,
- encoder_attention_mask: Optional[torch.FloatTensor] = None,
- labels: Optional[torch.LongTensor] = None,
- past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None,
- use_cache: Optional[bool] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, CausalLMOutputWithCrossAttentions]:
- r"""
- encoder_hidden_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
- Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
- the model is configured as a decoder.
- encoder_attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*):
- Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
- the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`:
-
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
-
- labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
- Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
- `[-100, 0, ..., config.vocab_size]` (see `input_ids` docstring). Tokens with indices set to `-100` are
- ignored (masked); the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
- past_key_values (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
- Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
-
- If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
- don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
- `decoder_input_ids` of shape `(batch_size, sequence_length)`.
- use_cache (`bool`, *optional*):
- If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
- `past_key_values`).
-
- Returns:
-
- Example:
-
- ```python
- >>> from transformers import AutoTokenizer, Data2VecTextForCausalLM, Data2VecTextConfig
- >>> import torch
-
- >>> tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")
- >>> config = Data2VecTextConfig.from_pretrained("facebook/data2vec-text-base")
- >>> config.is_decoder = True
- >>> model = Data2VecTextForCausalLM.from_pretrained("facebook/data2vec-text-base", config=config)
-
- >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
- >>> outputs = model(**inputs)
-
- >>> prediction_logits = outputs.logits
- ```"""
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
- if labels is not None:
- use_cache = False
-
- outputs = self.data2vec_text(
- input_ids,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- position_ids=position_ids,
- head_mask=head_mask,
- inputs_embeds=inputs_embeds,
- encoder_hidden_states=encoder_hidden_states,
- encoder_attention_mask=encoder_attention_mask,
- past_key_values=past_key_values,
- use_cache=use_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- sequence_output = outputs[0]
- prediction_scores = self.lm_head(sequence_output)
-
- lm_loss = None
- if labels is not None:
- # we are doing next-token prediction; shift prediction scores and input ids by one
- shifted_prediction_scores = prediction_scores[:, :-1, :].contiguous()
- labels = labels[:, 1:].contiguous()
- loss_fct = CrossEntropyLoss()
-
- labels = labels.to(shifted_prediction_scores.device)
- lm_loss = loss_fct(shifted_prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
-
- if not return_dict:
- output = (prediction_scores,) + outputs[2:]
- return ((lm_loss,) + output) if lm_loss is not None else output
-
- return CausalLMOutputWithCrossAttentions(
- loss=lm_loss,
- logits=prediction_scores,
- past_key_values=outputs.past_key_values,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- cross_attentions=outputs.cross_attentions,
- )
-
- def prepare_inputs_for_generation(self, input_ids, past_key_values=None, attention_mask=None, **model_kwargs):
- input_shape = input_ids.shape
- # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly
- if attention_mask is None:
- attention_mask = input_ids.new_ones(input_shape)
-
- # cut decoder_input_ids if past is used
- if past_key_values is not None:
- input_ids = input_ids[:, -1:]
-
- return {"input_ids": input_ids, "attention_mask": attention_mask, "past_key_values": past_key_values}
-
- def _reorder_cache(self, past_key_values, beam_idx):
- reordered_past = ()
- for layer_past in past_key_values:
- reordered_past += (
- tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past),
- )
- return reordered_past
-
-
-@add_start_docstrings("""data2vec Model with a `language modeling` head on top.""", DATA2VECTEXT_START_DOCSTRING)
-class Data2VecTextForMaskedLM(Data2VecTextPreTrainedModel):
- _tied_weights_keys = ["lm_head.decoder.weight", "lm_head.decoder.bias"]
-
- def __init__(self, config):
- super().__init__(config)
-
- if config.is_decoder:
- logger.warning(
- "If you want to use `Data2VecTextForMaskedLM` make sure `config.is_decoder=False` for "
- "bi-directional self-attention."
- )
-
- self.data2vec_text = Data2VecTextModel(config, add_pooling_layer=False)
- self.lm_head = Data2VecTextLMHead(config)
-
- # Initialize weights and apply final processing
- self.post_init()
-
- def get_output_embeddings(self):
- return self.lm_head.decoder
-
- def set_output_embeddings(self, new_embeddings):
- self.lm_head.decoder = new_embeddings
-
- @add_start_docstrings_to_model_forward(DATA2VECTEXT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
- @add_code_sample_docstrings(
- checkpoint=_CHECKPOINT_FOR_DOC,
- output_type=MaskedLMOutput,
- config_class=_CONFIG_FOR_DOC,
- mask="",
- )
- def forward(
- self,
- input_ids: Optional[torch.LongTensor] = None,
- attention_mask: Optional[torch.FloatTensor] = None,
- token_type_ids: Optional[torch.LongTensor] = None,
- position_ids: Optional[torch.LongTensor] = None,
- head_mask: Optional[torch.FloatTensor] = None,
- inputs_embeds: Optional[torch.FloatTensor] = None,
- encoder_hidden_states: Optional[torch.FloatTensor] = None,
- encoder_attention_mask: Optional[torch.FloatTensor] = None,
- labels: Optional[torch.LongTensor] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, MaskedLMOutput]:
- r"""
- labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
- Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
- config.vocab_size]` (see `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked); the
- loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
- kwargs (`Dict[str, any]`, optional, defaults to *{}*):
- Used to hide legacy arguments that have been deprecated.
- """
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- outputs = self.data2vec_text(
- input_ids,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- position_ids=position_ids,
- head_mask=head_mask,
- inputs_embeds=inputs_embeds,
- encoder_hidden_states=encoder_hidden_states,
- encoder_attention_mask=encoder_attention_mask,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
- sequence_output = outputs[0]
- prediction_scores = self.lm_head(sequence_output)
-
- masked_lm_loss = None
- if labels is not None:
- loss_fct = CrossEntropyLoss()
-
- labels = labels.to(prediction_scores.device)
- masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
-
- if not return_dict:
- output = (prediction_scores,) + outputs[2:]
- return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output
-
- return MaskedLMOutput(
- loss=masked_lm_loss,
- logits=prediction_scores,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- )
-
-
-# Copied from transformers.models.roberta.modeling_roberta.RobertaLMHead with Roberta->Data2VecText
-class Data2VecTextLMHead(nn.Module):
- """Data2VecText Head for masked language modeling."""
-
- def __init__(self, config):
- super().__init__()
- self.dense = nn.Linear(config.hidden_size, config.hidden_size)
- self.layer_norm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
-
- self.decoder = nn.Linear(config.hidden_size, config.vocab_size)
- self.bias = nn.Parameter(torch.zeros(config.vocab_size))
- self.decoder.bias = self.bias
-
- def forward(self, features, **kwargs):
- x = self.dense(features)
- x = gelu(x)
- x = self.layer_norm(x)
-
- # project back to size of vocabulary with bias
- x = self.decoder(x)
-
- return x
-
- def _tie_weights(self):
- # To tie those two weights if they get disconnected (on TPU or when the bias is resized)
- # For accelerate compatibility and to not break backward compatibility
- if self.decoder.bias.device.type == "meta":
- self.decoder.bias = self.bias
- else:
- self.bias = self.decoder.bias
-
-
-@add_start_docstrings(
- """
- Data2VecText Model transformer with a sequence classification/regression head on top (a linear layer on top of the
- pooled output) e.g. for GLUE tasks.
- """,
- DATA2VECTEXT_START_DOCSTRING,
-)
-class Data2VecTextForSequenceClassification(Data2VecTextPreTrainedModel):
- def __init__(self, config):
- super().__init__(config)
- self.num_labels = config.num_labels
- self.config = config
-
- self.data2vec_text = Data2VecTextModel(config, add_pooling_layer=False)
- self.classifier = Data2VecTextClassificationHead(config)
-
- # Initialize weights and apply final processing
- self.post_init()
-
- @add_start_docstrings_to_model_forward(DATA2VECTEXT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
- @add_code_sample_docstrings(
- checkpoint=_CHECKPOINT_FOR_DOC,
- output_type=SequenceClassifierOutput,
- config_class=_CONFIG_FOR_DOC,
- )
- def forward(
- self,
- input_ids: Optional[torch.LongTensor] = None,
- attention_mask: Optional[torch.FloatTensor] = None,
- token_type_ids: Optional[torch.LongTensor] = None,
- position_ids: Optional[torch.LongTensor] = None,
- head_mask: Optional[torch.FloatTensor] = None,
- inputs_embeds: Optional[torch.FloatTensor] = None,
- labels: Optional[torch.LongTensor] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, SequenceClassifierOutput]:
- r"""
- labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
- Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
- config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss); if
- `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
- """
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- outputs = self.data2vec_text(
- input_ids,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- position_ids=position_ids,
- head_mask=head_mask,
- inputs_embeds=inputs_embeds,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
- sequence_output = outputs[0]
- logits = self.classifier(sequence_output)
-
- loss = None
- if labels is not None:
- labels = labels.to(logits.device)
-
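- # infer the problem type once from the config and labels: a single label means regression, integer labels mean
- # single-label classification, anything else is treated as multi-label classification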
- if self.config.problem_type is None:
- if self.num_labels == 1:
- self.config.problem_type = "regression"
- elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
- self.config.problem_type = "single_label_classification"
- else:
- self.config.problem_type = "multi_label_classification"
-
- if self.config.problem_type == "regression":
- loss_fct = MSELoss()
- if self.num_labels == 1:
- loss = loss_fct(logits.squeeze(), labels.squeeze())
- else:
- loss = loss_fct(logits, labels)
- elif self.config.problem_type == "single_label_classification":
- loss_fct = CrossEntropyLoss()
- loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
- elif self.config.problem_type == "multi_label_classification":
- loss_fct = BCEWithLogitsLoss()
- loss = loss_fct(logits, labels)
-
- if not return_dict:
- output = (logits,) + outputs[2:]
- return ((loss,) + output) if loss is not None else output
-
- return SequenceClassifierOutput(
- loss=loss,
- logits=logits,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- )
-
-
-@add_start_docstrings(
- """
- Data2VecText Model with a multiple choice classification head on top (a linear layer on top of the pooled output
- and a softmax) e.g. for RocStories/SWAG tasks.
- """,
- DATA2VECTEXT_START_DOCSTRING,
-)
-class Data2VecTextForMultipleChoice(Data2VecTextPreTrainedModel):
- def __init__(self, config):
- super().__init__(config)
-
- self.data2vec_text = Data2VecTextModel(config)
- self.dropout = nn.Dropout(config.hidden_dropout_prob)
- self.classifier = nn.Linear(config.hidden_size, 1)
-
- # Initialize weights and apply final processing
- self.post_init()
-
- @add_start_docstrings_to_model_forward(
- DATA2VECTEXT_INPUTS_DOCSTRING.format("batch_size, num_choices, sequence_length")
- )
- @add_code_sample_docstrings(
- checkpoint=_CHECKPOINT_FOR_DOC,
- output_type=MultipleChoiceModelOutput,
- config_class=_CONFIG_FOR_DOC,
- )
- def forward(
- self,
- input_ids: Optional[torch.LongTensor] = None,
- token_type_ids: Optional[torch.LongTensor] = None,
- attention_mask: Optional[torch.FloatTensor] = None,
- labels: Optional[torch.LongTensor] = None,
- position_ids: Optional[torch.LongTensor] = None,
- head_mask: Optional[torch.FloatTensor] = None,
- inputs_embeds: Optional[torch.FloatTensor] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, MultipleChoiceModelOutput]:
- r"""
- labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
- Labels for computing the multiple choice classification loss. Indices should be in `[0, ...,
- num_choices-1]` where `num_choices` is the size of the second dimension of the input tensors. (See
- `input_ids` above)
- """
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
- num_choices = input_ids.shape[1] if input_ids is not None else inputs_embeds.shape[1]
-
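- # flatten (batch_size, num_choices, seq_len) inputs into (batch_size * num_choices, seq_len) so every choice is
- # encoded independently; the logits are reshaped back to (batch_size, num_choices) further down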
- flat_input_ids = input_ids.view(-1, input_ids.size(-1)) if input_ids is not None else None
- flat_position_ids = position_ids.view(-1, position_ids.size(-1)) if position_ids is not None else None
- flat_token_type_ids = token_type_ids.view(-1, token_type_ids.size(-1)) if token_type_ids is not None else None
- flat_attention_mask = attention_mask.view(-1, attention_mask.size(-1)) if attention_mask is not None else None
- flat_inputs_embeds = (
- inputs_embeds.view(-1, inputs_embeds.size(-2), inputs_embeds.size(-1))
- if inputs_embeds is not None
- else None
- )
-
- outputs = self.data2vec_text(
- flat_input_ids,
- position_ids=flat_position_ids,
- token_type_ids=flat_token_type_ids,
- attention_mask=flat_attention_mask,
- head_mask=head_mask,
- inputs_embeds=flat_inputs_embeds,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
- pooled_output = outputs[1]
-
- pooled_output = self.dropout(pooled_output)
- logits = self.classifier(pooled_output)
- reshaped_logits = logits.view(-1, num_choices)
-
- loss = None
- if labels is not None:
- loss_fct = CrossEntropyLoss()
-
- labels = labels.to(reshaped_logits.device)
- loss = loss_fct(reshaped_logits, labels)
-
- if not return_dict:
- output = (reshaped_logits,) + outputs[2:]
- return ((loss,) + output) if loss is not None else output
-
- return MultipleChoiceModelOutput(
- loss=loss,
- logits=reshaped_logits,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- )
-
-
-@add_start_docstrings(
- """
- Data2VecText Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
- for Named-Entity-Recognition (NER) tasks.
- """,
- DATA2VECTEXT_START_DOCSTRING,
-)
-class Data2VecTextForTokenClassification(Data2VecTextPreTrainedModel):
- def __init__(self, config):
- super().__init__(config)
- self.num_labels = config.num_labels
-
- self.data2vec_text = Data2VecTextModel(config, add_pooling_layer=False)
- classifier_dropout = (
- config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob
- )
- self.dropout = nn.Dropout(classifier_dropout)
- self.classifier = nn.Linear(config.hidden_size, config.num_labels)
-
- # Initialize weights and apply final processing
- self.post_init()
-
- @add_start_docstrings_to_model_forward(DATA2VECTEXT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
- @add_code_sample_docstrings(
- checkpoint=_CHECKPOINT_FOR_DOC,
- output_type=TokenClassifierOutput,
- config_class=_CONFIG_FOR_DOC,
- )
- def forward(
- self,
- input_ids: Optional[torch.LongTensor] = None,
- attention_mask: Optional[torch.FloatTensor] = None,
- token_type_ids: Optional[torch.LongTensor] = None,
- position_ids: Optional[torch.LongTensor] = None,
- head_mask: Optional[torch.FloatTensor] = None,
- inputs_embeds: Optional[torch.FloatTensor] = None,
- labels: Optional[torch.LongTensor] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, TokenClassifierOutput]:
- r"""
- labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
- Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`.
- """
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- outputs = self.data2vec_text(
- input_ids,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- position_ids=position_ids,
- head_mask=head_mask,
- inputs_embeds=inputs_embeds,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- sequence_output = outputs[0]
-
- sequence_output = self.dropout(sequence_output)
- logits = self.classifier(sequence_output)
-
- loss = None
- if labels is not None:
- loss_fct = CrossEntropyLoss()
-
- labels = labels.to(logits.device)
- loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
-
- if not return_dict:
- output = (logits,) + outputs[2:]
- return ((loss,) + output) if loss is not None else output
-
- return TokenClassifierOutput(
- loss=loss,
- logits=logits,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- )
-
-
-# Copied from transformers.models.roberta.modeling_roberta.RobertaClassificationHead with Roberta->Data2VecText
-class Data2VecTextClassificationHead(nn.Module):
- """Head for sentence-level classification tasks."""
-
- def __init__(self, config):
- super().__init__()
- self.dense = nn.Linear(config.hidden_size, config.hidden_size)
- classifier_dropout = (
- config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob
- )
- self.dropout = nn.Dropout(classifier_dropout)
- self.out_proj = nn.Linear(config.hidden_size, config.num_labels)
-
- def forward(self, features, **kwargs):
- x = features[:, 0, :] # take the <s> token (equiv. to [CLS])
- x = self.dropout(x)
- x = self.dense(x)
- x = torch.tanh(x)
- x = self.dropout(x)
- x = self.out_proj(x)
- return x
-
-
-@add_start_docstrings(
- """
- Data2VecText Model with a span classification head on top for extractive question-answering tasks like SQuAD (a
- linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
- """,
- DATA2VECTEXT_START_DOCSTRING,
-)
-class Data2VecTextForQuestionAnswering(Data2VecTextPreTrainedModel):
- def __init__(self, config):
- super().__init__(config)
- self.num_labels = config.num_labels
-
- self.data2vec_text = Data2VecTextModel(config, add_pooling_layer=False)
- self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels)
-
- # Initialize weights and apply final processing
- self.post_init()
-
- @add_start_docstrings_to_model_forward(DATA2VECTEXT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
- @add_code_sample_docstrings(
- checkpoint=_CHECKPOINT_FOR_DOC,
- output_type=QuestionAnsweringModelOutput,
- config_class=_CONFIG_FOR_DOC,
- )
- def forward(
- self,
- input_ids: Optional[torch.LongTensor] = None,
- attention_mask: Optional[torch.FloatTensor] = None,
- token_type_ids: Optional[torch.LongTensor] = None,
- position_ids: Optional[torch.LongTensor] = None,
- head_mask: Optional[torch.FloatTensor] = None,
- inputs_embeds: Optional[torch.FloatTensor] = None,
- start_positions: Optional[torch.LongTensor] = None,
- end_positions: Optional[torch.LongTensor] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, QuestionAnsweringModelOutput]:
- r"""
- start_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
- Labels for position (index) of the start of the labelled span for computing the token classification loss.
- Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence
- are not taken into account for computing the loss.
- end_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
- Labels for position (index) of the end of the labelled span for computing the token classification loss.
- Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence
- are not taken into account for computing the loss.
- """
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- outputs = self.data2vec_text(
- input_ids,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- position_ids=position_ids,
- head_mask=head_mask,
- inputs_embeds=inputs_embeds,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- sequence_output = outputs[0]
-
- logits = self.qa_outputs(sequence_output)
- start_logits, end_logits = logits.split(1, dim=-1)
- start_logits = start_logits.squeeze(-1).contiguous()
- end_logits = end_logits.squeeze(-1).contiguous()
-
- total_loss = None
- if start_positions is not None and end_positions is not None:
-            # If we are on multi-GPU, split() may add an extra dimension; squeeze it away
- if len(start_positions.size()) > 1:
- start_positions = start_positions.squeeze(-1)
- if len(end_positions.size()) > 1:
- end_positions = end_positions.squeeze(-1)
-            # Sometimes the start/end positions fall outside the model inputs; we ignore these terms
- ignored_index = start_logits.size(1)
- start_positions = start_positions.clamp(0, ignored_index)
- end_positions = end_positions.clamp(0, ignored_index)
-
- loss_fct = CrossEntropyLoss(ignore_index=ignored_index)
- start_loss = loss_fct(start_logits, start_positions)
- end_loss = loss_fct(end_logits, end_positions)
- total_loss = (start_loss + end_loss) / 2
-
- if not return_dict:
- output = (start_logits, end_logits) + outputs[2:]
- return ((total_loss,) + output) if total_loss is not None else output
-
- return QuestionAnsweringModelOutput(
- loss=total_loss,
- start_logits=start_logits,
- end_logits=end_logits,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- )
-
-
-def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0):
- """
- Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols
- are ignored. This is modified from fairseq's `utils.make_positions`.
-
-    Args:
-        input_ids (`torch.Tensor`): Indices of input tokens; padding positions are equal to `padding_idx`.
-        padding_idx (`int`): The index of the padding token in the vocabulary.
-        past_key_values_length (`int`, *optional*, defaults to 0): Offset added to the computed position numbers.
-
-    Returns: torch.Tensor
- """
- # The series of casts and type-conversions here are carefully balanced to both work with ONNX export and XLA.
- mask = input_ids.ne(padding_idx).int()
- incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask
- return incremental_indices.long() + padding_idx
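For reference, a small worked example of how the helper above assigns positions (an illustrative sketch; the tensor values are made up):

```python
import torch

# Padding id 1 (RoBERTa-style); the last two tokens of the batch row are padding.
input_ids = torch.tensor([[5, 7, 9, 1, 1]])
position_ids = create_position_ids_from_input_ids(input_ids, padding_idx=1)
print(position_ids)  # tensor([[2, 3, 4, 1, 1]]) -- real tokens count up from padding_idx + 1
```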
diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/gpt_bigcode/modeling_gpt_bigcode.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/gpt_bigcode/modeling_gpt_bigcode.py
deleted file mode 100644
index d58e00af1dac13b72813cbb97b4e63fb8752f673..0000000000000000000000000000000000000000
--- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/gpt_bigcode/modeling_gpt_bigcode.py
+++ /dev/null
@@ -1,1066 +0,0 @@
-# coding=utf-8
-# Copyright 2023 The Bigcode team and HuggingFace Inc. team.
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""PyTorch GPTBigCode model."""
-import math
-from typing import List, Optional, Tuple, Union
-
-import torch
-import torch.utils.checkpoint
-from torch import nn
-from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
-
-from ...activations import ACT2FN
-from ...modeling_outputs import (
- BaseModelOutputWithPastAndCrossAttentions,
- CausalLMOutputWithCrossAttentions,
- SequenceClassifierOutputWithPast,
- TokenClassifierOutput,
-)
-from ...modeling_utils import PreTrainedModel
-from ...utils import (
- add_code_sample_docstrings,
- add_start_docstrings,
- add_start_docstrings_to_model_forward,
- logging,
-)
-from .configuration_gpt_bigcode import GPTBigCodeConfig
-
-
-logger = logging.get_logger(__name__)
-
-_CHECKPOINT_FOR_DOC = "bigcode/gpt_bigcode-santacoder"
-_CONFIG_FOR_DOC = "GPTBigCodeConfig"
-
-GPT_BIGCODE_PRETRAINED_MODEL_ARCHIVE_LIST = [
- "bigcode/gpt_bigcode-santacoder",
- # See all GPTBigCode models at https://huggingface.co/models?filter=gpt_bigcode
-]
-
-
-# Fused kernels
-# Use separate functions for each case because conditionals prevent kernel fusion.
-# TODO: Could have better fused kernels depending on scaling, dropout and head mask.
-# Is it doable without writing 32 functions?
-@torch.jit.script
-def upcast_masked_softmax(
- x: torch.Tensor, mask: torch.Tensor, mask_value: torch.Tensor, scale: float, softmax_dtype: torch.dtype
-):
- input_dtype = x.dtype
- x = x.to(softmax_dtype) * scale
- x = torch.where(mask, x, mask_value)
- x = torch.nn.functional.softmax(x, dim=-1).to(input_dtype)
- return x
-
-
-@torch.jit.script
-def upcast_softmax(x: torch.Tensor, scale: float, softmax_dtype: torch.dtype):
- input_dtype = x.dtype
- x = x.to(softmax_dtype) * scale
- x = torch.nn.functional.softmax(x, dim=-1).to(input_dtype)
- return x
-
-
-@torch.jit.script
-def masked_softmax(x: torch.Tensor, mask: torch.Tensor, mask_value: torch.Tensor):
- x = torch.where(mask, x, mask_value)
- x = torch.nn.functional.softmax(x, dim=-1)
- return x
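To illustrate the upcast path these kernels implement (attention scores kept in half precision, masked and softmaxed in fp32, then cast back), a minimal sketch with made-up shapes:

```python
import torch

scores = torch.randn(1, 4, 4, dtype=torch.float16)                  # (batch, query_len, key_len)
causal_mask = torch.tril(torch.ones(4, 4, dtype=torch.bool))[None]  # True = positions that may attend
mask_value = torch.full([], torch.finfo(torch.float32).min)
probs = upcast_masked_softmax(scores, causal_mask, mask_value, 1.0, torch.float32)
print(probs.dtype)        # torch.float16 -- output is cast back to the input dtype
print(probs.sum(dim=-1))  # each row sums to ~1 over the unmasked positions
```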
-
-
-class GPTBigCodeAttention(nn.Module):
- def __init__(self, config, is_cross_attention=False, layer_idx=None):
- super().__init__()
- self.mask_value = None
-
- self.multi_query = config.multi_query
- self.embed_dim = config.hidden_size
- self.num_heads = config.num_attention_heads
- self.head_dim = self.embed_dim // self.num_heads
- self.kv_heads = 1 if self.multi_query else self.num_heads
- self.kv_dim = self.kv_heads * self.head_dim
- self.split_size = self.embed_dim
- if self.head_dim * self.num_heads != self.embed_dim:
- raise ValueError(
- f"`embed_dim` must be divisible by num_heads (got `embed_dim`: {self.embed_dim} and `num_heads`:"
- f" {self.num_heads})."
- )
-
- self.scale_attn_weights = config.scale_attn_weights
- self.is_cross_attention = is_cross_attention
-
- self.layer_idx = layer_idx
- self.attention_softmax_in_fp32 = config.attention_softmax_in_fp32
- self.scale_attention_softmax_in_fp32 = (
- config.scale_attention_softmax_in_fp32 and config.attention_softmax_in_fp32
- )
-
- if self.is_cross_attention:
- if self.multi_query:
- raise NotImplementedError("Multi-Query Attention not supported for cross_attention")
-
- self.c_attn = nn.Linear(self.embed_dim, 2 * self.embed_dim)
- self.q_attn = nn.Linear(self.embed_dim, self.embed_dim)
- else:
- self.c_attn = nn.Linear(self.embed_dim, self.embed_dim + 2 * self.kv_dim)
-
- self.c_proj = nn.Linear(self.embed_dim, self.embed_dim)
-
- self.attn_dropout = nn.Dropout(config.attn_pdrop)
- self.resid_dropout = nn.Dropout(config.resid_pdrop)
-
- def _get_mask_value(self, device, dtype):
- # torch.where expects a tensor. We use a cache to avoid recreating it every time.
- if self.mask_value is None or self.mask_value.dtype != dtype or self.mask_value.device != device:
- self.mask_value = torch.full([], torch.finfo(dtype).min, dtype=dtype, device=device)
- return self.mask_value
-
- def _attn(self, query, key, value, attention_mask=None, head_mask=None):
- dtype = query.dtype
- softmax_dtype = torch.float32 if self.attention_softmax_in_fp32 else dtype
- upcast = dtype != softmax_dtype
-
- unscale = self.layer_idx + 1 if self.scale_attention_softmax_in_fp32 and upcast else 1
- scale_factor = unscale**-1
- if self.scale_attn_weights:
- scale_factor /= self.head_dim**0.5
-
- # MQA models: (batch_size, query_length, num_heads * head_dim)
- # MHA models: (batch_size, num_heads, query_length, head_dim)
- query_shape = query.shape
- batch_size = query_shape[0]
- key_length = key.size(-1)
- if self.multi_query:
- # (batch_size, query_length, num_heads, head_dim) x (batch_size, head_dim, key_length)
- # -> (batch_size, query_length, num_heads, key_length)
- query_length = query_shape[1]
- attn_shape = (batch_size, query_length, self.num_heads, key_length)
- attn_view = (batch_size, query_length * self.num_heads, key_length)
- # No copy needed for MQA 2, or when layer_past is provided.
- query = query.reshape(batch_size, query_length * self.num_heads, self.head_dim)
- else:
- # (batch_size, num_heads, query_length, head_dim) x (batch_size, num_heads, head_dim, key_length)
- # -> (batch_size, num_heads, query_length, key_length)
- query_length = query_shape[2]
- attn_shape = (batch_size, self.num_heads, query_length, key_length)
- attn_view = (batch_size * self.num_heads, query_length, key_length)
- # Always copies
- query = query.reshape(batch_size * self.num_heads, query_length, self.head_dim)
- # No copy when layer_past is provided.
- key = key.reshape(batch_size * self.num_heads, self.head_dim, key_length)
-
- attn_weights = torch.empty(attn_view, device=query.device, dtype=query.dtype)
- if query.device.type == "cpu":
- # This is needed because of a bug in pytorch https://github.com/pytorch/pytorch/issues/80588.
- # The bug was fixed in https://github.com/pytorch/pytorch/pull/96086,
- # but the fix has not been released as of pytorch version 2.0.0.
- attn_weights = torch.zeros_like(attn_weights)
- beta = 1
- else:
- beta = 0
- attn_weights = torch.baddbmm(attn_weights, query, key, beta=beta, alpha=scale_factor).view(attn_shape)
-
- if upcast:
- # Use a fused kernel to prevent a large overhead from casting and scaling.
- # Sub-optimal when the key length is not a multiple of 8.
- if attention_mask is None:
- attn_weights = upcast_softmax(attn_weights, unscale, softmax_dtype)
- else:
- mask_value = self._get_mask_value(attn_weights.device, softmax_dtype)
- attn_weights = upcast_masked_softmax(attn_weights, attention_mask, mask_value, unscale, softmax_dtype)
- else:
- if attention_mask is not None:
- mask_value = self._get_mask_value(attn_weights.device, softmax_dtype)
-
- # The fused kernel is very slow when the key length is not a multiple of 8, so we skip fusion.
- attn_weights = torch.where(attention_mask, attn_weights, mask_value)
-
- attn_weights = torch.nn.functional.softmax(attn_weights, dim=-1)
-
- attn_weights = self.attn_dropout(attn_weights)
-
- # Mask heads if we want to
- if head_mask is not None:
- if self.multi_query:
- head_mask = head_mask.transpose(1, 2)
- attn_weights = attn_weights * head_mask
-
- if self.multi_query:
- attn_output = torch.bmm(attn_weights.view(attn_view), value).view(query_shape)
- else:
- attn_output = torch.matmul(attn_weights, value)
-
- return attn_output, attn_weights
-
- def forward(
- self,
- hidden_states: torch.Tensor,
- layer_past: Optional[torch.Tensor] = None,
- attention_mask: Optional[torch.Tensor] = None,
- head_mask: Optional[torch.Tensor] = None,
- encoder_hidden_states: Optional[torch.Tensor] = None,
- encoder_attention_mask: Optional[torch.Tensor] = None,
- use_cache: Optional[bool] = False,
- output_attentions: Optional[bool] = False,
- ) -> Union[
- Tuple[torch.Tensor, Optional[torch.Tensor]],
- Tuple[torch.Tensor, Optional[torch.Tensor], Tuple[torch.Tensor, ...]],
- ]:
- if encoder_hidden_states is not None:
- if not hasattr(self, "q_attn") or not self.is_cross_attention:
- raise ValueError(
- "If class is used as cross attention, the weights `q_attn` have to be defined. "
- "Please make sure to instantiate class with `GPTBigCodeAttention(..., is_cross_attention=True)`."
- )
-
- query = self.q_attn(hidden_states)
- key_value = self.c_attn(encoder_hidden_states)
- attention_mask = encoder_attention_mask
- elif self.multi_query:
- query, key_value = self.c_attn(hidden_states).split((self.embed_dim, 2 * self.kv_dim), dim=2)
- else:
- # Note: We split as (self.num_heads, 3, self.head_dim) instead of (3, self.num_heads, self.head_dim),
- # i.e., the memory layout is not the same as GPT2.
- # This makes the concatenation with past_key_value more efficient.
- query, key_value = (
- self.c_attn(hidden_states)
- .view(*hidden_states.shape[:2], self.num_heads, 3 * self.head_dim)
- .transpose(1, 2)
- .split((self.head_dim, 2 * self.head_dim), dim=3)
- )
-
- if layer_past is not None:
- key_value = torch.cat((layer_past, key_value), dim=-2)
- present = key_value if use_cache else None
-
- key, value = key_value.split((self.head_dim, self.head_dim), dim=-1)
-
- attn_output, attn_weights = self._attn(query, key.transpose(-1, -2), value, attention_mask, head_mask)
-
- if not self.multi_query:
- attn_output = attn_output.transpose(1, 2).reshape(hidden_states.shape)
- attn_output = self.c_proj(attn_output)
- attn_output = self.resid_dropout(attn_output)
-
- outputs = (attn_output, present)
- if output_attentions:
- if self.multi_query:
- # Transpose to return weights in the usual format (batch_size, num_heads, query_length, key_length)
- attn_weights = attn_weights.transpose(1, 2)
- outputs += (attn_weights,)
-
- return outputs # a, present, (attentions)
-
-
-class GPTBigCodeMLP(nn.Module):
- def __init__(self, intermediate_size, config):
- super().__init__()
- embed_dim = config.hidden_size
- self.c_fc = nn.Linear(embed_dim, intermediate_size)
- self.c_proj = nn.Linear(intermediate_size, embed_dim)
- self.act = ACT2FN[config.activation_function]
- self.dropout = nn.Dropout(config.resid_pdrop)
-
- # Copied from transformers.models.gpt2.modeling_gpt2.GPT2MLP.forward
- def forward(self, hidden_states: Optional[Tuple[torch.FloatTensor]]) -> torch.FloatTensor:
- hidden_states = self.c_fc(hidden_states)
- hidden_states = self.act(hidden_states)
- hidden_states = self.c_proj(hidden_states)
- hidden_states = self.dropout(hidden_states)
- return hidden_states
-
-
-class GPTBigCodeBlock(nn.Module):
- def __init__(self, config, layer_idx=None):
- super().__init__()
- hidden_size = config.hidden_size
- self.inner_dim = config.n_inner if config.n_inner is not None else 4 * hidden_size
-
- self.ln_1 = nn.LayerNorm(hidden_size, eps=config.layer_norm_epsilon)
- self.attn = GPTBigCodeAttention(config, layer_idx=layer_idx)
- self.ln_2 = nn.LayerNorm(hidden_size, eps=config.layer_norm_epsilon)
-
- if config.add_cross_attention:
- if config.multi_query:
- raise NotImplementedError("Cross-attention not implemented for MQA")
- self.crossattention = GPTBigCodeAttention(config, is_cross_attention=True, layer_idx=layer_idx)
- self.ln_cross_attn = nn.LayerNorm(hidden_size, eps=config.layer_norm_epsilon)
-
- self.mlp = GPTBigCodeMLP(self.inner_dim, config)
-
- def forward(
- self,
- hidden_states: Optional[Tuple[torch.Tensor]],
- layer_past: Optional[torch.Tensor] = None,
- attention_mask: Optional[torch.Tensor] = None,
- head_mask: Optional[torch.Tensor] = None,
- encoder_hidden_states: Optional[torch.Tensor] = None,
- encoder_attention_mask: Optional[torch.Tensor] = None,
- use_cache: Optional[bool] = False,
- output_attentions: Optional[bool] = False,
- ) -> Union[
- Tuple[torch.Tensor], Tuple[torch.Tensor, torch.Tensor], Tuple[torch.Tensor, torch.Tensor, torch.Tensor]
- ]:
- residual = hidden_states
- hidden_states = self.ln_1(hidden_states)
- attn_outputs = self.attn(
- hidden_states,
- layer_past=layer_past,
- attention_mask=attention_mask,
- head_mask=head_mask,
- use_cache=use_cache,
- output_attentions=output_attentions,
- )
- attn_output = attn_outputs[0] # output_attn: a, present, (attentions)
- outputs = attn_outputs[1:]
- # residual connection
- hidden_states = attn_output + residual
-
- if encoder_hidden_states is not None:
- # add one self-attention block for cross-attention
- if not hasattr(self, "crossattention"):
- raise ValueError(
- f"If `encoder_hidden_states` are passed, {self} has to be instantiated with "
- "cross-attention layers by setting `config.add_cross_attention=True`"
- )
- residual = hidden_states
- hidden_states = self.ln_cross_attn(hidden_states)
- cross_attn_outputs = self.crossattention(
- hidden_states,
- attention_mask=attention_mask,
- head_mask=head_mask,
- encoder_hidden_states=encoder_hidden_states,
- encoder_attention_mask=encoder_attention_mask,
- output_attentions=output_attentions,
- )
- attn_output = cross_attn_outputs[0]
- # residual connection
- hidden_states = residual + attn_output
- outputs = outputs + cross_attn_outputs[2:] # add cross attentions if we output attention weights
-
- residual = hidden_states
- hidden_states = self.ln_2(hidden_states)
- feed_forward_hidden_states = self.mlp(hidden_states)
- # residual connection
- hidden_states = residual + feed_forward_hidden_states
-
- if use_cache:
- outputs = (hidden_states,) + outputs
- else:
- outputs = (hidden_states,) + outputs[1:]
-
- return outputs # hidden_states, present, (attentions, cross_attentions)
-
-
-class GPTBigCodePreTrainedModel(PreTrainedModel):
- """
- An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
- models.
- """
-
- config_class = GPTBigCodeConfig
- base_model_prefix = "transformer"
- supports_gradient_checkpointing = True
- _no_split_modules = ["GPTBigCodeBlock"]
- _skip_keys_device_placement = "past_key_values"
-
- def __init__(self, *inputs, **kwargs):
- super().__init__(*inputs, **kwargs)
-
- def _init_weights(self, module):
- """Initialize the weights."""
- if isinstance(module, (GPTBigCodeMLP, GPTBigCodeAttention)):
- # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme:
- # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale
- # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers.
- # > -- GPT-2 :: https://openai.com/blog/better-language-models/
- #
- # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py
- module.c_proj.weight.data.normal_(
- mean=0.0, std=(self.config.initializer_range / math.sqrt(2 * self.config.n_layer))
- )
- module.c_proj._is_hf_initialized = True
- elif isinstance(module, nn.Linear):
- # Slightly different from the TF version which uses truncated_normal for initialization
- # cf https://github.com/pytorch/pytorch/pull/5617
- module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
- if module.bias is not None:
- module.bias.data.zero_()
- elif isinstance(module, nn.Embedding):
- module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
- if module.padding_idx is not None:
- module.weight.data[module.padding_idx].zero_()
- elif isinstance(module, nn.LayerNorm):
- module.bias.data.zero_()
- module.weight.data.fill_(1.0)
-
- # Copied from transformers.models.gpt2.modeling_gpt2.GPT2PreTrainedModel._set_gradient_checkpointing with GPT2->GPTBigCode
- def _set_gradient_checkpointing(self, module, value=False):
- if isinstance(module, GPTBigCodeModel):
- module.gradient_checkpointing = value
-
-
-GPT_BIGCODE_START_DOCSTRING = r"""
-
- This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
-    library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
- etc.)
-
- This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
-    Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
- and behavior.
-
- Parameters:
- config ([`GPTBigCodeConfig`]): Model configuration class with all the parameters of the model.
- Initializing with a config file does not load the weights associated with the model, only the
- configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
-"""
-
-GPT_BIGCODE_INPUTS_DOCSTRING = r"""
- Args:
- input_ids (`torch.Tensor` of shape `(batch_size, input_ids_length)`):
- `input_ids_length` = `sequence_length` if `past_key_values` is `None` else
- `past_key_values[0][0].shape[-2]` (`sequence_length` of input past key value states). Indices of input
- sequence tokens in the vocabulary.
-
- If `past_key_values` is used, only `input_ids` that do not have their past calculated should be passed as
- `input_ids`.
-
- Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
- [`PreTrainedTokenizer.__call__`] for details.
-
- [What are input IDs?](../glossary#input-ids)
- past_key_values (`Tuple[torch.Tensor]` of length `config.n_layers`):
- Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see
- `past_key_values` output below). Can be used to speed up sequential decoding. The `input_ids` which have
- their past given to this model should not be passed as `input_ids` as they have already been computed.
- attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
- Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
-
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
-
- If `past_key_values` is used, `attention_mask` needs to contain the masking strategy that was used for
- `past_key_values`. In other words, the `attention_mask` always has to have the length:
- `len(past_key_values) + len(input_ids)`
-
- [What are attention masks?](../glossary#attention-mask)
- token_type_ids (`torch.Tensor` of shape `(batch_size, input_ids_length)`, *optional*):
- Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0,
- 1]`:
-
- - 0 corresponds to a *sentence A* token,
- - 1 corresponds to a *sentence B* token.
-
- [What are token type IDs?](../glossary#token-type-ids)
- position_ids (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
- Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
- config.max_position_embeddings - 1]`.
-
- [What are position IDs?](../glossary#position-ids)
- head_mask (`torch.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*):
- Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`:
-
- - 1 indicates the head is **not masked**,
- - 0 indicates the head is **masked**.
-
- inputs_embeds (`torch.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
- Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
- is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
- model's internal embedding lookup matrix.
-
- If `past_key_values` is used, optionally only the last `inputs_embeds` have to be input (see
- `past_key_values`).
- use_cache (`bool`, *optional*):
- If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
- `past_key_values`).
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
- tensors for more detail.
- output_hidden_states (`bool`, *optional*):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
- more detail.
- return_dict (`bool`, *optional*):
- Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
-"""
-
-
-@add_start_docstrings(
- "The bare GPT_BIGCODE Model transformer outputting raw hidden-states without any specific head on top.",
- GPT_BIGCODE_START_DOCSTRING,
-)
-class GPTBigCodeModel(GPTBigCodePreTrainedModel):
- def __init__(self, config):
- super().__init__(config)
- self.multi_query = config.multi_query
- self.embed_dim = config.hidden_size
-
- self.wte = nn.Embedding(config.vocab_size, self.embed_dim)
- self.wpe = nn.Embedding(config.max_position_embeddings, self.embed_dim)
-
- self.drop = nn.Dropout(config.embd_pdrop)
- self.h = nn.ModuleList([GPTBigCodeBlock(config, layer_idx=i) for i in range(config.num_hidden_layers)])
- self.ln_f = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_epsilon)
-
- max_positions = config.max_position_embeddings
- self.register_buffer(
- "bias", torch.tril(torch.ones((max_positions, max_positions), dtype=torch.bool)), persistent=False
- )
-
- self.gradient_checkpointing = False
-
- # Initialize weights and apply final processing
- self.post_init()
-
- def get_input_embeddings(self):
- return self.wte
-
- def set_input_embeddings(self, new_embeddings):
- self.wte = new_embeddings
-
- @add_start_docstrings_to_model_forward(GPT_BIGCODE_INPUTS_DOCSTRING)
- @add_code_sample_docstrings(
- checkpoint=_CHECKPOINT_FOR_DOC,
- output_type=BaseModelOutputWithPastAndCrossAttentions,
- config_class=_CONFIG_FOR_DOC,
- )
- def forward(
- self,
- input_ids: Optional[torch.Tensor] = None,
- past_key_values: Optional[List[torch.Tensor]] = None,
- attention_mask: Optional[torch.Tensor] = None,
- token_type_ids: Optional[torch.Tensor] = None,
- position_ids: Optional[torch.Tensor] = None,
- head_mask: Optional[torch.Tensor] = None,
- inputs_embeds: Optional[torch.Tensor] = None,
- encoder_hidden_states: Optional[torch.Tensor] = None,
- encoder_attention_mask: Optional[torch.Tensor] = None,
- use_cache: Optional[bool] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, BaseModelOutputWithPastAndCrossAttentions]:
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- use_cache = use_cache if use_cache is not None else self.config.use_cache
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- if input_ids is not None and inputs_embeds is not None:
- raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
- elif input_ids is not None:
- self.warn_if_padding_and_no_attention_mask(input_ids, attention_mask)
- input_shape = input_ids.size()
- input_ids = input_ids.view(-1, input_shape[-1])
- batch_size = input_ids.shape[0]
- elif inputs_embeds is not None:
- input_shape = inputs_embeds.size()[:-1]
- batch_size = inputs_embeds.shape[0]
- else:
- raise ValueError("You have to specify either input_ids or inputs_embeds")
-
- if batch_size <= 0:
- raise ValueError("batch_size has to be defined and > 0")
-
- device = input_ids.device if input_ids is not None else inputs_embeds.device
-
- if token_type_ids is not None:
- token_type_ids = token_type_ids.view(-1, input_shape[-1])
-
- if past_key_values is None:
- past_length = 0
- past_key_values = tuple([None] * len(self.h))
- else:
- past_length = past_key_values[0].size(-2)
-
- if attention_mask is not None and len(attention_mask.shape) == 2 and position_ids is None:
- # create position_ids on the fly for batch generation
- position_ids = attention_mask.long().cumsum(-1) - 1
- position_ids.masked_fill_(attention_mask == 0, 1)
- if past_length > 0:
-                position_ids = position_ids[:, past_length : input_shape[-1] + past_length]
- elif position_ids is None:
- position_ids = torch.arange(past_length, input_shape[-1] + past_length, dtype=torch.long, device=device)
- position_ids = position_ids.unsqueeze(0)
-
- # Self-attention mask.
- query_length = input_shape[-1]
- key_length = past_length + query_length
- self_attention_mask = self.bias[None, key_length - query_length : key_length, :key_length]
-
- if attention_mask is not None:
- self_attention_mask = self_attention_mask * attention_mask.view(batch_size, 1, -1).to(
- dtype=torch.bool, device=self_attention_mask.device
- )
-
- # MQA models: (batch_size, query_length, n_heads, key_length)
- # MHA models: (batch_size, n_heads, query_length, key_length)
- attention_mask = self_attention_mask.unsqueeze(2 if self.multi_query else 1)
-
- # If a 2D or 3D attention mask is provided for the cross-attention
- # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length]
- if (
- self.config.add_cross_attention
- and encoder_hidden_states is not None
- and encoder_attention_mask is not None
- ):
- if encoder_attention_mask.dim() == 2:
-                encoder_attention_mask = encoder_attention_mask.unsqueeze(1)
- assert encoder_attention_mask.dim() == 3
- encoder_attention_mask = encoder_attention_mask.bool().unsqueeze(2 if self.multi_query else 1)
- else:
- encoder_attention_mask = None
-
- # Prepare head mask if needed
-        # 1.0 in head_mask indicates we keep the head
- # attention_probs has shape bsz x n_heads x N x N
- # head_mask has shape n_layer x batch x n_heads x N x N
- head_mask = self.get_head_mask(head_mask, self.config.n_layer)
-
- if inputs_embeds is None:
- inputs_embeds = self.wte(input_ids)
- position_embeds = self.wpe(position_ids)
- hidden_states = inputs_embeds + position_embeds
-
- if token_type_ids is not None:
- token_type_embeds = self.wte(token_type_ids)
- hidden_states = hidden_states + token_type_embeds
-
- hidden_states = self.drop(hidden_states)
-
- output_shape = input_shape + (hidden_states.size(-1),)
-
- presents = [] if use_cache else None
- all_self_attentions = () if output_attentions else None
- all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None
- all_hidden_states = () if output_hidden_states else None
- for i, (block, layer_past) in enumerate(zip(self.h, past_key_values)):
- if output_hidden_states:
- all_hidden_states = all_hidden_states + (hidden_states,)
-
- if self.gradient_checkpointing and self.training:
-
- def create_custom_forward(module):
- def custom_forward(*inputs):
- # None for past_key_value
- return module(*inputs, use_cache, output_attentions)
-
- return custom_forward
-
- outputs = torch.utils.checkpoint.checkpoint(
- create_custom_forward(block),
- hidden_states,
- None,
- attention_mask,
- head_mask[i],
- encoder_hidden_states,
- encoder_attention_mask,
- )
- else:
- outputs = block(
- hidden_states,
- layer_past=layer_past,
- attention_mask=attention_mask,
- head_mask=head_mask[i],
- encoder_hidden_states=encoder_hidden_states,
- encoder_attention_mask=encoder_attention_mask,
- use_cache=use_cache,
- output_attentions=output_attentions,
- )
-
- hidden_states = outputs[0]
- if use_cache:
- presents.append(outputs[1])
-
- if output_attentions:
- all_self_attentions = all_self_attentions + (outputs[2 if use_cache else 1],)
- if self.config.add_cross_attention:
- all_cross_attentions = all_cross_attentions + (outputs[3 if use_cache else 2],)
-
- hidden_states = self.ln_f(hidden_states)
-
- hidden_states = hidden_states.view(output_shape)
- # Add last hidden state
- if output_hidden_states:
- all_hidden_states = all_hidden_states + (hidden_states,)
-
- if not return_dict:
- return tuple(
- v
- for v in [hidden_states, presents, all_hidden_states, all_self_attentions, all_cross_attentions]
- if v is not None
- )
-
- return BaseModelOutputWithPastAndCrossAttentions(
- last_hidden_state=hidden_states,
- past_key_values=presents,
- hidden_states=all_hidden_states,
- attentions=all_self_attentions,
- cross_attentions=all_cross_attentions,
- )
-
-
-@add_start_docstrings(
- """
- The GPT_BIGCODE Model transformer with a language modeling head on top (linear layer with weights tied to the input
- embeddings).
- """,
- GPT_BIGCODE_START_DOCSTRING,
-)
-class GPTBigCodeForCausalLM(GPTBigCodePreTrainedModel):
- _tied_weights_keys = ["lm_head.weight"]
-
- def __init__(self, config):
- super().__init__(config)
- self.transformer = GPTBigCodeModel(config)
- self.lm_head = nn.Linear(config.n_embd, config.vocab_size, bias=False)
-
- # Initialize weights and apply final processing
- self.post_init()
-
- def get_output_embeddings(self):
- return self.lm_head
-
- def set_output_embeddings(self, new_embeddings):
- self.lm_head = new_embeddings
-
- def prepare_inputs_for_generation(self, input_ids, past_key_values=None, inputs_embeds=None, **kwargs):
- token_type_ids = kwargs.get("token_type_ids", None)
-        # only last token for input_ids if past is defined in kwargs
- if past_key_values:
- input_ids = input_ids[:, -1].unsqueeze(-1)
- if token_type_ids is not None:
- token_type_ids = token_type_ids[:, -1].unsqueeze(-1)
-
- attention_mask = kwargs.get("attention_mask", None)
- position_ids = kwargs.get("position_ids", None)
-
- if attention_mask is not None and position_ids is None:
- # create position_ids on the fly for batch generation
- position_ids = attention_mask.long().cumsum(-1) - 1
- position_ids.masked_fill_(attention_mask == 0, 1)
- if past_key_values:
- position_ids = position_ids[:, -1].unsqueeze(-1)
- else:
- position_ids = None
-
- # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
- if inputs_embeds is not None and past_key_values is None:
- model_inputs = {"inputs_embeds": inputs_embeds}
- else:
- model_inputs = {"input_ids": input_ids}
-
- model_inputs.update(
- {
- "past_key_values": past_key_values,
- "use_cache": kwargs.get("use_cache"),
- "position_ids": position_ids,
- "attention_mask": attention_mask,
- "token_type_ids": token_type_ids,
- }
- )
- return model_inputs
-
- @add_start_docstrings_to_model_forward(GPT_BIGCODE_INPUTS_DOCSTRING)
- @add_code_sample_docstrings(
- checkpoint=_CHECKPOINT_FOR_DOC,
- output_type=CausalLMOutputWithCrossAttentions,
- config_class=_CONFIG_FOR_DOC,
- )
- def forward(
- self,
- input_ids: Optional[torch.Tensor] = None,
- past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
- attention_mask: Optional[torch.Tensor] = None,
- token_type_ids: Optional[torch.Tensor] = None,
- position_ids: Optional[torch.Tensor] = None,
- head_mask: Optional[torch.Tensor] = None,
- inputs_embeds: Optional[torch.Tensor] = None,
- encoder_hidden_states: Optional[torch.Tensor] = None,
- encoder_attention_mask: Optional[torch.Tensor] = None,
- labels: Optional[torch.Tensor] = None,
- use_cache: Optional[bool] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, CausalLMOutputWithCrossAttentions]:
- r"""
- labels (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
- Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set
-            `labels = input_ids`. Indices are selected in `[-100, 0, ..., config.vocab_size]`. All labels set to `-100`
-            are ignored (masked); the loss is only computed for labels in `[0, ..., config.vocab_size]`.
- """
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- transformer_outputs = self.transformer(
- input_ids,
- past_key_values=past_key_values,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- position_ids=position_ids,
- head_mask=head_mask,
- inputs_embeds=inputs_embeds,
- encoder_hidden_states=encoder_hidden_states,
- encoder_attention_mask=encoder_attention_mask,
- use_cache=use_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
- hidden_states = transformer_outputs[0]
-
- lm_logits = self.lm_head(hidden_states)
-
- loss = None
- if labels is not None:
- # Shift so that tokens < n predict n
- shift_logits = lm_logits[..., :-1, :].contiguous()
- shift_labels = labels[..., 1:].contiguous().to(shift_logits.device)
- # Flatten the tokens
- loss_fct = CrossEntropyLoss()
- loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
-
- if not return_dict:
- output = (lm_logits,) + transformer_outputs[1:]
- return ((loss,) + output) if loss is not None else output
-
- return CausalLMOutputWithCrossAttentions(
- loss=loss,
- logits=lm_logits,
- past_key_values=transformer_outputs.past_key_values,
- hidden_states=transformer_outputs.hidden_states,
- attentions=transformer_outputs.attentions,
- cross_attentions=transformer_outputs.cross_attentions,
- )
-
- @staticmethod
- def _reorder_cache(
- past_key_values: Tuple[Tuple[torch.Tensor]], beam_idx: torch.Tensor
- ) -> Tuple[Tuple[torch.Tensor]]:
- """
- This function is used to re-order the `past_key_values` cache if [`~PreTrainedModel.beam_search`] or
- [`~PreTrainedModel.beam_sample`] is called. This is required to match `past_key_values` with the correct
- beam_idx at every generation step.
- """
- return tuple(layer_past.index_select(0, beam_idx.to(layer_past.device)) for layer_past in past_key_values)
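A hedged usage sketch of the causal LM head above, using the santacoder checkpoint already referenced in `_CHECKPOINT_FOR_DOC` (downloading or having cached the weights is assumed):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigcode/gpt_bigcode-santacoder")
model = GPTBigCodeForCausalLM.from_pretrained("bigcode/gpt_bigcode-santacoder")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=32)  # uses prepare_inputs_for_generation above
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```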
-
-
-@add_start_docstrings(
- """
- The GPTBigCode Model transformer with a sequence classification head on top (linear layer).
-
- [`GPTBigCodeForSequenceClassification`] uses the last token in order to do the classification, as other causal
- models (e.g. GPT-1) do.
-
-    Since it does classification on the last token, it needs to know the position of the last token. If a
-    `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
-    no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
-    padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (takes the last value in
-    each row of the batch).
- """,
- GPT_BIGCODE_START_DOCSTRING,
-)
-class GPTBigCodeForSequenceClassification(GPTBigCodePreTrainedModel):
- def __init__(self, config):
- super().__init__(config)
- self.num_labels = config.num_labels
- self.transformer = GPTBigCodeModel(config)
- self.score = nn.Linear(config.n_embd, self.num_labels, bias=False)
-
- # Initialize weights and apply final processing
- self.post_init()
-
- @add_start_docstrings_to_model_forward(GPT_BIGCODE_INPUTS_DOCSTRING)
- def forward(
- self,
- input_ids: Optional[torch.Tensor] = None,
- past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
- attention_mask: Optional[torch.Tensor] = None,
- token_type_ids: Optional[torch.Tensor] = None,
- position_ids: Optional[torch.Tensor] = None,
- head_mask: Optional[torch.Tensor] = None,
- inputs_embeds: Optional[torch.Tensor] = None,
- labels: Optional[torch.Tensor] = None,
- use_cache: Optional[bool] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, SequenceClassifierOutputWithPast]:
- r"""
- labels (`torch.Tensor` of shape `(batch_size,)`, *optional*):
- Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
-            config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss); if
-            `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
- """
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- transformer_outputs = self.transformer(
- input_ids,
- past_key_values=past_key_values,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- position_ids=position_ids,
- head_mask=head_mask,
- inputs_embeds=inputs_embeds,
- use_cache=use_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
- hidden_states = transformer_outputs[0]
- logits = self.score(hidden_states)
-
- if input_ids is not None:
- batch_size, sequence_length = input_ids.shape[:2]
- else:
- batch_size, sequence_length = inputs_embeds.shape[:2]
-
- assert (
- self.config.pad_token_id is not None or batch_size == 1
- ), "Cannot handle batch sizes > 1 if no padding token is defined."
- if self.config.pad_token_id is None:
- sequence_lengths = -1
- else:
- if input_ids is not None:
- sequence_lengths = (torch.eq(input_ids, self.config.pad_token_id).long().argmax(-1) - 1).to(
- logits.device
- )
- else:
- sequence_lengths = -1
- logger.warning(
- f"{self.__class__.__name__} will not detect padding tokens in `inputs_embeds`. Results may be "
- "unexpected if using padding tokens in conjunction with `inputs_embeds.`"
- )
-
- pooled_logits = logits[torch.arange(batch_size, device=logits.device), sequence_lengths]
-
- loss = None
- if labels is not None:
- labels = labels.to(logits.device)
-
- if self.config.problem_type is None:
- if self.num_labels == 1:
- self.config.problem_type = "regression"
- elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
- self.config.problem_type = "single_label_classification"
- else:
- self.config.problem_type = "multi_label_classification"
-
- if self.config.problem_type == "regression":
- loss_fct = MSELoss()
- if self.num_labels == 1:
- loss = loss_fct(pooled_logits.squeeze(), labels.squeeze())
- else:
- loss = loss_fct(pooled_logits, labels)
- elif self.config.problem_type == "single_label_classification":
- loss_fct = CrossEntropyLoss()
- loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1))
- elif self.config.problem_type == "multi_label_classification":
- loss_fct = BCEWithLogitsLoss()
- loss = loss_fct(pooled_logits, labels)
- if not return_dict:
- output = (pooled_logits,) + transformer_outputs[1:]
- return ((loss,) + output) if loss is not None else output
-
- return SequenceClassifierOutputWithPast(
- loss=loss,
- logits=pooled_logits,
- past_key_values=transformer_outputs.past_key_values,
- hidden_states=transformer_outputs.hidden_states,
- attentions=transformer_outputs.attentions,
- )
-
-
-@add_start_docstrings(
- """
- GPT_BIGCODE Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
- for Named-Entity-Recognition (NER) tasks.
- """,
- GPT_BIGCODE_START_DOCSTRING,
-)
-class GPTBigCodeForTokenClassification(GPTBigCodePreTrainedModel):
- def __init__(self, config):
- super().__init__(config)
- self.num_labels = config.num_labels
-
- self.transformer = GPTBigCodeModel(config)
- if hasattr(config, "classifier_dropout") and config.classifier_dropout is not None:
- classifier_dropout = config.classifier_dropout
- elif hasattr(config, "hidden_dropout") and config.hidden_dropout is not None:
- classifier_dropout = config.hidden_dropout
- else:
- classifier_dropout = 0.1
- self.dropout = nn.Dropout(classifier_dropout)
- self.classifier = nn.Linear(config.hidden_size, config.num_labels)
-
- # Initialize weights and apply final processing
- self.post_init()
-
- @add_start_docstrings_to_model_forward(GPT_BIGCODE_INPUTS_DOCSTRING)
- def forward(
- self,
- input_ids: Optional[torch.Tensor] = None,
- past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
- attention_mask: Optional[torch.Tensor] = None,
- token_type_ids: Optional[torch.Tensor] = None,
- position_ids: Optional[torch.Tensor] = None,
- head_mask: Optional[torch.Tensor] = None,
- inputs_embeds: Optional[torch.Tensor] = None,
- labels: Optional[torch.Tensor] = None,
- use_cache: Optional[bool] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, TokenClassifierOutput]:
- r"""
- labels (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
-            Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`.
- """
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- transformer_outputs = self.transformer(
- input_ids,
- past_key_values=past_key_values,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- position_ids=position_ids,
- head_mask=head_mask,
- inputs_embeds=inputs_embeds,
- use_cache=use_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- hidden_states = transformer_outputs[0]
- hidden_states = self.dropout(hidden_states)
- logits = self.classifier(hidden_states)
-
- loss = None
- if labels is not None:
- loss_fct = CrossEntropyLoss()
- loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1).to(logits.device))
-
- if not return_dict:
- output = (logits,) + transformer_outputs[2:]
- return ((loss,) + output) if loss is not None else output
-
- return TokenClassifierOutput(
- loss=loss,
- logits=logits,
- hidden_states=transformer_outputs.hidden_states,
- attentions=transformer_outputs.attentions,
- )
diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/layoutlm/tokenization_layoutlm_fast.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/layoutlm/tokenization_layoutlm_fast.py
deleted file mode 100644
index afa92abaf87745aa901d495b8eca3d76f8fdc4b9..0000000000000000000000000000000000000000
--- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/layoutlm/tokenization_layoutlm_fast.py
+++ /dev/null
@@ -1,205 +0,0 @@
-# coding=utf-8
-# Copyright 2018 The Microsoft Research Asia LayoutLM Team Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-""" Tokenization class for model LayoutLM."""
-
-import json
-from typing import List, Optional, Tuple
-
-from tokenizers import normalizers
-
-from ...tokenization_utils_fast import PreTrainedTokenizerFast
-from ...utils import logging
-from .tokenization_layoutlm import LayoutLMTokenizer
-
-
-logger = logging.get_logger(__name__)
-
-VOCAB_FILES_NAMES = {"vocab_file": "vocab.txt", "tokenizer_file": "tokenizer.json"}
-
-PRETRAINED_VOCAB_FILES_MAP = {
- "vocab_file": {
- "microsoft/layoutlm-base-uncased": (
- "https://huggingface.co/microsoft/layoutlm-base-uncased/resolve/main/vocab.txt"
- ),
- "microsoft/layoutlm-large-uncased": (
- "https://huggingface.co/microsoft/layoutlm-large-uncased/resolve/main/vocab.txt"
- ),
- },
- "tokenizer_file": {
- "microsoft/layoutlm-base-uncased": (
- "https://huggingface.co/microsoft/layoutlm-base-uncased/resolve/main/tokenizer.json"
- ),
- "microsoft/layoutlm-large-uncased": (
- "https://huggingface.co/microsoft/layoutlm-large-uncased/resolve/main/tokenizer.json"
- ),
- },
-}
-
-PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = {
- "microsoft/layoutlm-base-uncased": 512,
- "microsoft/layoutlm-large-uncased": 512,
-}
-
-PRETRAINED_INIT_CONFIGURATION = {
- "microsoft/layoutlm-base-uncased": {"do_lower_case": True},
- "microsoft/layoutlm-large-uncased": {"do_lower_case": True},
-}
-
-
-# Copied from transformers.models.bert.tokenization_bert_fast.BertTokenizerFast with Bert->LayoutLM,BERT->LayoutLM
-class LayoutLMTokenizerFast(PreTrainedTokenizerFast):
- r"""
- Construct a "fast" LayoutLM tokenizer (backed by HuggingFace's *tokenizers* library). Based on WordPiece.
-
- This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
- refer to this superclass for more information regarding those methods.
-
- Args:
- vocab_file (`str`):
- File containing the vocabulary.
- do_lower_case (`bool`, *optional*, defaults to `True`):
- Whether or not to lowercase the input when tokenizing.
- unk_token (`str`, *optional*, defaults to `"[UNK]"`):
- The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
- token instead.
- sep_token (`str`, *optional*, defaults to `"[SEP]"`):
- The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
- sequence classification or for a text and a question for question answering. It is also used as the last
- token of a sequence built with special tokens.
- pad_token (`str`, *optional*, defaults to `"[PAD]"`):
- The token used for padding, for example when batching sequences of different lengths.
- cls_token (`str`, *optional*, defaults to `"[CLS]"`):
- The classifier token which is used when doing sequence classification (classification of the whole sequence
- instead of per-token classification). It is the first token of the sequence when built with special tokens.
- mask_token (`str`, *optional*, defaults to `"[MASK]"`):
- The token used for masking values. This is the token used when training this model with masked language
- modeling. This is the token which the model will try to predict.
- clean_text (`bool`, *optional*, defaults to `True`):
-            Whether or not to clean the text before tokenization by removing any control characters and replacing all
-            whitespace with the classic space character.
- tokenize_chinese_chars (`bool`, *optional*, defaults to `True`):
- Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see [this
- issue](https://github.com/huggingface/transformers/issues/328)).
- strip_accents (`bool`, *optional*):
- Whether or not to strip all accents. If this option is not specified, then it will be determined by the
- value for `lowercase` (as in the original LayoutLM).
- wordpieces_prefix (`str`, *optional*, defaults to `"##"`):
- The prefix for subwords.
- """
-
- vocab_files_names = VOCAB_FILES_NAMES
- pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
- pretrained_init_configuration = PRETRAINED_INIT_CONFIGURATION
- max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
- slow_tokenizer_class = LayoutLMTokenizer
-
- def __init__(
- self,
- vocab_file=None,
- tokenizer_file=None,
- do_lower_case=True,
- unk_token="[UNK]",
- sep_token="[SEP]",
- pad_token="[PAD]",
- cls_token="[CLS]",
- mask_token="[MASK]",
- tokenize_chinese_chars=True,
- strip_accents=None,
- **kwargs,
- ):
- super().__init__(
- vocab_file,
- tokenizer_file=tokenizer_file,
- do_lower_case=do_lower_case,
- unk_token=unk_token,
- sep_token=sep_token,
- pad_token=pad_token,
- cls_token=cls_token,
- mask_token=mask_token,
- tokenize_chinese_chars=tokenize_chinese_chars,
- strip_accents=strip_accents,
- **kwargs,
- )
-
- normalizer_state = json.loads(self.backend_tokenizer.normalizer.__getstate__())
- if (
- normalizer_state.get("lowercase", do_lower_case) != do_lower_case
- or normalizer_state.get("strip_accents", strip_accents) != strip_accents
- or normalizer_state.get("handle_chinese_chars", tokenize_chinese_chars) != tokenize_chinese_chars
- ):
- normalizer_class = getattr(normalizers, normalizer_state.pop("type"))
- normalizer_state["lowercase"] = do_lower_case
- normalizer_state["strip_accents"] = strip_accents
- normalizer_state["handle_chinese_chars"] = tokenize_chinese_chars
- self.backend_tokenizer.normalizer = normalizer_class(**normalizer_state)
-
- self.do_lower_case = do_lower_case
-
- def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
- """
-        Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and
- adding special tokens. A LayoutLM sequence has the following format:
-
- - single sequence: `[CLS] X [SEP]`
- - pair of sequences: `[CLS] A [SEP] B [SEP]`
-
- Args:
- token_ids_0 (`List[int]`):
- List of IDs to which the special tokens will be added.
- token_ids_1 (`List[int]`, *optional*):
- Optional second list of IDs for sequence pairs.
-
- Returns:
- `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens.
- """
- output = [self.cls_token_id] + token_ids_0 + [self.sep_token_id]
-
- if token_ids_1 is not None:
- output += token_ids_1 + [self.sep_token_id]
-
- return output
-
- def create_token_type_ids_from_sequences(
- self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
- ) -> List[int]:
- """
- Create a mask from the two sequences passed to be used in a sequence-pair classification task. A LayoutLM
- sequence pair mask has the following format:
-
- ```
- 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
- | first sequence | second sequence |
- ```
-
- If `token_ids_1` is `None`, this method only returns the first portion of the mask (0s).
-
- Args:
- token_ids_0 (`List[int]`):
- List of IDs.
- token_ids_1 (`List[int]`, *optional*):
- Optional second list of IDs for sequence pairs.
-
- Returns:
- `List[int]`: List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s).
- """
- sep = [self.sep_token_id]
- cls = [self.cls_token_id]
- if token_ids_1 is None:
- return len(cls + token_ids_0 + sep) * [0]
- return len(cls + token_ids_0 + sep) * [0] + len(token_ids_1 + sep) * [1]
-
- def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
- files = self._tokenizer.model.save(save_directory, name=filename_prefix)
- return tuple(files)
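A short sketch of the two helpers above, assuming the `microsoft/layoutlm-base-uncased` files listed in `PRETRAINED_VOCAB_FILES_MAP` can be downloaded:

```python
tokenizer = LayoutLMTokenizerFast.from_pretrained("microsoft/layoutlm-base-uncased")

ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("hello world"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("how are you"))

input_ids = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)           # [CLS] A [SEP] B [SEP]
token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)  # 0s for A, 1s for B
assert len(input_ids) == len(token_type_ids)
```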
diff --git a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/diffusion/vocoder.py b/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/diffusion/vocoder.py
deleted file mode 100644
index bbaa47f64fd5a3191a24dfaa054c423fa86e5bae..0000000000000000000000000000000000000000
--- a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/diffusion/vocoder.py
+++ /dev/null
@@ -1,94 +0,0 @@
-import torch
-from vdecoder.nsf_hifigan.nvSTFT import STFT
-from vdecoder.nsf_hifigan.models import load_model, load_config
-from torchaudio.transforms import Resample
-
-
-class Vocoder:
- def __init__(self, vocoder_type, vocoder_ckpt, device = None):
- if device is None:
- device = 'cuda' if torch.cuda.is_available() else 'cpu'
- self.device = device
-
- if vocoder_type == 'nsf-hifigan':
- self.vocoder = NsfHifiGAN(vocoder_ckpt, device = device)
- elif vocoder_type == 'nsf-hifigan-log10':
- self.vocoder = NsfHifiGANLog10(vocoder_ckpt, device = device)
- else:
- raise ValueError(f" [x] Unknown vocoder: {vocoder_type}")
-
- self.resample_kernel = {}
- self.vocoder_sample_rate = self.vocoder.sample_rate()
- self.vocoder_hop_size = self.vocoder.hop_size()
- self.dimension = self.vocoder.dimension()
-
- def extract(self, audio, sample_rate, keyshift=0):
-
- # resample
- if sample_rate == self.vocoder_sample_rate:
- audio_res = audio
- else:
- key_str = str(sample_rate)
- if key_str not in self.resample_kernel:
- self.resample_kernel[key_str] = Resample(sample_rate, self.vocoder_sample_rate, lowpass_filter_width = 128).to(self.device)
- audio_res = self.resample_kernel[key_str](audio)
-
- # extract
- mel = self.vocoder.extract(audio_res, keyshift=keyshift) # B, n_frames, bins
- return mel
-
- def infer(self, mel, f0):
- f0 = f0[:,:mel.size(1),0] # B, n_frames
- audio = self.vocoder(mel, f0)
- return audio
-
-
-class NsfHifiGAN(torch.nn.Module):
- def __init__(self, model_path, device=None):
- super().__init__()
- if device is None:
- device = 'cuda' if torch.cuda.is_available() else 'cpu'
- self.device = device
- self.model_path = model_path
- self.model = None
- self.h = load_config(model_path)
- self.stft = STFT(
- self.h.sampling_rate,
- self.h.num_mels,
- self.h.n_fft,
- self.h.win_size,
- self.h.hop_size,
- self.h.fmin,
- self.h.fmax)
-
- def sample_rate(self):
- return self.h.sampling_rate
-
- def hop_size(self):
- return self.h.hop_size
-
- def dimension(self):
- return self.h.num_mels
-
- def extract(self, audio, keyshift=0):
- mel = self.stft.get_mel(audio, keyshift=keyshift).transpose(1, 2) # B, n_frames, bins
- return mel
-
- def forward(self, mel, f0):
- if self.model is None:
- print('| Load HifiGAN: ', self.model_path)
- self.model, self.h = load_model(self.model_path, device=self.device)
- with torch.no_grad():
- c = mel.transpose(1, 2)
- audio = self.model(c, f0)
- return audio
-
-class NsfHifiGANLog10(NsfHifiGAN):
- def forward(self, mel, f0):
- if self.model is None:
- print('| Load HifiGAN: ', self.model_path)
- self.model, self.h = load_model(self.model_path, device=self.device)
- with torch.no_grad():
- c = 0.434294 * mel.transpose(1, 2)
- audio = self.model(c, f0)
- return audio
\ No newline at end of file
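A minimal usage sketch of the `Vocoder` wrapper above (the checkpoint path is hypothetical, and the 44.1 kHz sample rate is only an assumption about the pretrained NSF-HiFiGAN config). Note that `NsfHifiGANLog10` rescales the mel by 0.434294 ≈ log10(e), i.e. it converts natural-log mels to log10 before vocoding.

```python
import torch

vocoder = Vocoder('nsf-hifigan', 'pretrain/nsf_hifigan/model', device='cpu')  # hypothetical checkpoint path

audio = torch.randn(1, 44100)                    # one second of audio, shape (B, T)
mel = vocoder.extract(audio, sample_rate=44100)  # (B, n_frames, n_mels)
f0 = torch.full((1, mel.size(1), 1), 220.0)      # constant 220 Hz pitch contour, shape (B, n_frames, 1)
wav = vocoder.infer(mel, f0)                     # reconstructed waveform at vocoder.vocoder_sample_rate
```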
diff --git a/spaces/ynhe/AskAnything/models/grit_src/grit/data/datasets/vg.py b/spaces/ynhe/AskAnything/models/grit_src/grit/data/datasets/vg.py
deleted file mode 100644
index 4d47a80d9f88b89ca3064dbc4945b0246162e5d1..0000000000000000000000000000000000000000
--- a/spaces/ynhe/AskAnything/models/grit_src/grit/data/datasets/vg.py
+++ /dev/null
@@ -1,98 +0,0 @@
-import logging
-import os
-from fvcore.common.timer import Timer
-from detectron2.structures import BoxMode
-from fvcore.common.file_io import PathManager
-from detectron2.data import DatasetCatalog, MetadataCatalog
-from lvis import LVIS
-
-logger = logging.getLogger(__name__)
-
-__all__ = ["load_vg_json", "register_vg_instances"]
-
-
-def register_vg_instances(name, metadata, json_file, image_root):
-    """
-    Register a Visual Genome (VG) split with Detectron2's DatasetCatalog and MetadataCatalog.
-    """
- DatasetCatalog.register(name, lambda: load_vg_json(
- json_file, image_root, name))
- MetadataCatalog.get(name).set(
- json_file=json_file, image_root=image_root,
- evaluator_type="vg", **metadata
- )
-
-
-def get_vg_meta():
- categories = [{'supercategory': 'object', 'id': 1, 'name': 'object'}]
- vg_categories = sorted(categories, key=lambda x: x["id"])
- thing_classes = [k["name"] for k in vg_categories]
- meta = {"thing_classes": thing_classes}
- return meta
-
-
-def load_vg_json(json_file, image_root, dataset_name=None):
-
- json_file = PathManager.get_local_path(json_file)
-
- timer = Timer()
- lvis_api = LVIS(json_file)
- if timer.seconds() > 1:
- logger.info("Loading {} takes {:.2f} seconds.".format(
- json_file, timer.seconds()))
-
- img_ids = sorted(lvis_api.imgs.keys())
- imgs = lvis_api.load_imgs(img_ids)
- anns = [lvis_api.img_ann_map[img_id] for img_id in img_ids]
-
- ann_ids = [ann["id"] for anns_per_image in anns for ann in anns_per_image]
- assert len(set(ann_ids)) == len(ann_ids), \
- "Annotation ids in '{}' are not unique".format(json_file)
-
- imgs_anns = list(zip(imgs, anns))
- logger.info("Loaded {} images in the LVIS v1 format from {}".format(
- len(imgs_anns), json_file))
-
- dataset_dicts = []
-
- for (img_dict, anno_dict_list) in imgs_anns:
- record = {}
- if "file_name" in img_dict:
- file_name = img_dict["file_name"]
- record["file_name"] = os.path.join(image_root, file_name)
-
- record["height"] = int(img_dict["height"])
- record["width"] = int(img_dict["width"])
- image_id = record["image_id"] = img_dict["id"]
-
- objs = []
- for anno in anno_dict_list:
- assert anno["image_id"] == image_id
- if anno.get('iscrowd', 0) > 0:
- continue
- obj = {"bbox": anno["bbox"], "bbox_mode": BoxMode.XYWH_ABS}
- obj["category_id"] = 0
- obj["object_description"] = anno["caption"]
-
- objs.append(obj)
- record["annotations"] = objs
- if len(record["annotations"]) == 0:
- continue
- record["task"] = "DenseCap"
- dataset_dicts.append(record)
-
- return dataset_dicts
-
-
-_CUSTOM_SPLITS_LVIS = {
- "vg_train": ("vg/images", "vg/annotations/train.json"),
- "vg_test": ("vg/images", "vg/annotations/test.json"),
-}
-
-
-for key, (image_root, json_file) in _CUSTOM_SPLITS_LVIS.items():
- register_vg_instances(
- key,
- get_vg_meta(),
- os.path.join("datasets", json_file) if "://" not in json_file else json_file,
- os.path.join("datasets", image_root),
- )
\ No newline at end of file
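Downstream code typically consumes the splits registered above through detectron2's catalogs. A minimal sketch, assuming detectron2 and the lvis package are installed and a `datasets/vg` directory exists with the layout from `_CUSTOM_SPLITS_LVIS`:

```python
# Hypothetical usage of the registered Visual Genome splits.
from detectron2.data import DatasetCatalog, MetadataCatalog

dicts = DatasetCatalog.get("vg_train")   # lazily invokes load_vg_json
meta = MetadataCatalog.get("vg_train")

print(meta.thing_classes)                # ['object']: a single generic class
print(len(dicts), dicts[0]["task"])      # number of records, 'DenseCap'
```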
diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/configs/common/optim.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/configs/common/optim.py
deleted file mode 100644
index d39d3aaa546c17e831d21d1758b69e8c1609415e..0000000000000000000000000000000000000000
--- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/configs/common/optim.py
+++ /dev/null
@@ -1,15 +0,0 @@
-import torch
-
-from detectron2.config import LazyCall as L
-from detectron2.solver.build import get_default_optimizer_params
-
-SGD = L(torch.optim.SGD)(
- params=L(get_default_optimizer_params)(
- # params.model is meant to be set to the model object, before instantiating
- # the optimizer.
- weight_decay_norm=0.0
- ),
- lr=0.02,
- momentum=0.9,
- weight_decay=1e-4,
-)
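This LazyCall object is only a recipe; detectron2 materializes it with `instantiate` after `params.model` is filled in. A minimal sketch with a placeholder model (the import path below assumes this file's location in the repo):

```python
# Hypothetical instantiation of the lazy SGD config above.
import torch
from detectron2.config import instantiate
from configs.common.optim import SGD    # assumed import path for this config file

model = torch.nn.Linear(8, 2)           # placeholder model
SGD.params.model = model                # must be set before instantiating the optimizer
optimizer = instantiate(SGD)            # -> torch.optim.SGD over per-parameter groups
```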
diff --git a/spaces/ysharma/LLaVA_v1/scripts/sqa_eval_batch.sh b/spaces/ysharma/LLaVA_v1/scripts/sqa_eval_batch.sh
deleted file mode 100644
index adbf46ef7a6e86181b5927002597ef786add5bde..0000000000000000000000000000000000000000
--- a/spaces/ysharma/LLaVA_v1/scripts/sqa_eval_batch.sh
+++ /dev/null
@@ -1,13 +0,0 @@
-#!/bin/bash
-
-CHUNKS=8
-for IDX in {0..7}; do
- CUDA_VISIBLE_DEVICES=$IDX python -m llava.eval.model_vqa_science \
- --model-path liuhaotian/llava-lcs558k-scienceqa-vicuna-13b-v1.3 \
- --question-file ~/haotian/datasets/ScienceQA/data/scienceqa/llava_test_QCM-LEA.json \
- --image-folder ~/haotian/datasets/ScienceQA/data/scienceqa/images/test \
-        --answers-file ./test_llava-13b-chunk${CHUNKS}_$IDX.jsonl \
- --num-chunks $CHUNKS \
- --chunk-idx $IDX \
- --conv-mode llava_v1 &
-done
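Each GPU writes its own `--answers-file` shard; the shards are usually concatenated into a single JSONL before scoring. A minimal, hypothetical merge step that mirrors the file-name pattern above:

```python
# Hypothetical post-processing: merge the per-chunk JSONL shards into one file.
import glob

shards = sorted(glob.glob("./test_llava-13b-chunk8_*.jsonl"))
with open("./test_llava-13b-chunk8_merged.jsonl", "w") as merged:
    for shard in shards:
        with open(shard) as f:
            merged.write(f.read())
```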
diff --git a/spaces/yunfei0710/gpt-academic/Dockerfile b/spaces/yunfei0710/gpt-academic/Dockerfile
deleted file mode 100644
index 97ad13d964d051e4bfdd255a668c209120b1ada4..0000000000000000000000000000000000000000
--- a/spaces/yunfei0710/gpt-academic/Dockerfile
+++ /dev/null
@@ -1,28 +0,0 @@
-# This Dockerfile is for builds without local models; to use local models such as ChatGLM, see docs/Dockerfile+ChatGLM
-# How to build: edit `config.py` first, then run: docker build -t gpt-academic .
-# How to run: docker run --rm -it --net=host gpt-academic
-FROM python:3.11
-
-RUN echo '[global]' > /etc/pip.conf && \
- echo 'index-url = https://mirrors.aliyun.com/pypi/simple/' >> /etc/pip.conf && \
- echo 'trusted-host = mirrors.aliyun.com' >> /etc/pip.conf
-
-
-WORKDIR /gpt
-
-
-
-
-# Install dependencies
-COPY requirements.txt ./
-COPY ./docs/gradio-3.32.2-py3-none-any.whl ./docs/gradio-3.32.2-py3-none-any.whl
-RUN pip3 install -r requirements.txt
-# Copy the project files into the image
-COPY . .
-RUN pip3 install -r requirements.txt
-
-# Optional step: pre-warm modules
-RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
-
-# Launch the app
-CMD ["python3", "-u", "main.py"]
diff --git a/spaces/zetavg/LLaMA-LoRA-Tuner-UI-Demo/app.py b/spaces/zetavg/LLaMA-LoRA-Tuner-UI-Demo/app.py
deleted file mode 100644
index e7c0881ccd5434c0c68674ec824f8ee263b65386..0000000000000000000000000000000000000000
--- a/spaces/zetavg/LLaMA-LoRA-Tuner-UI-Demo/app.py
+++ /dev/null
@@ -1,154 +0,0 @@
-from typing import Union
-
-import gradio as gr
-import fire
-import os
-import yaml
-
-from llama_lora.config import Config, process_config
-from llama_lora.globals import initialize_global
-from llama_lora.utils.data import init_data_dir
-from llama_lora.models import prepare_base_model
-from llama_lora.ui.main_page import (
- main_page, get_page_title
-)
-from llama_lora.ui.css_styles import get_css_styles
-
-
-def main(
- base_model: Union[str, None] = None,
- data_dir: Union[str, None] = None,
- base_model_choices: Union[str, None] = None,
- trust_remote_code: Union[bool, None] = None,
- server_name: str = "127.0.0.1",
- share: bool = False,
- skip_loading_base_model: bool = False,
- auth: Union[str, None] = None,
- load_8bit: Union[bool, None] = None,
- ui_show_sys_info: Union[bool, None] = None,
- ui_dev_mode: Union[bool, None] = None,
- wandb_api_key: Union[str, None] = None,
- wandb_project: Union[str, None] = None,
- hf_access_token: Union[str, None] = None,
- timezone: Union[str, None] = None,
- config: Union[str, None] = None,
-):
- '''
- Start the LLaMA-LoRA Tuner UI.
-
- :param base_model: (required) The name of the default base model to use.
- :param data_dir: (required) The path to the directory to store data.
-
-    :param base_model_choices: Base model selections to display on the UI, separated by ",". For example: 'decapoda-research/llama-7b-hf,nomic-ai/gpt4all-j'.
-
-    :param server_name: Allows listening on all interfaces when set to '0.0.0.0'.
- :param share: Create a public Gradio URL.
-
- :param wandb_api_key: The API key for Weights & Biases. Setting either this or `wandb_project` will enable Weights & Biases.
- :param wandb_project: The default project name for Weights & Biases. Setting either this or `wandb_api_key` will enable Weights & Biases.
-
-    :param hf_access_token: Provide an access token to load private models from Hugging Face Hub. An access token can be created at https://huggingface.co/settings/tokens.
- '''
-
- config_from_file = read_yaml_config(config_path=config)
- if config_from_file:
- for key, value in config_from_file.items():
- if key == "server_name":
- server_name = value
- continue
- if not hasattr(Config, key):
- available_keys = [k for k in vars(
- Config) if not k.startswith('__')]
- raise ValueError(
- f"Invalid config key '{key}' in config.yaml. Available keys: {', '.join(available_keys)}")
- setattr(Config, key, value)
-
- if base_model is not None:
- Config.default_base_model_name = base_model
-
- if base_model_choices is not None:
- Config.base_model_choices = base_model_choices
-
- if trust_remote_code is not None:
- Config.trust_remote_code = trust_remote_code
-
- if data_dir is not None:
- Config.data_dir = data_dir
-
- if load_8bit is not None:
- Config.load_8bit = load_8bit
-
- if auth is not None:
- try:
- [Config.auth_username, Config.auth_password] = auth.split(':')
- except ValueError:
-            raise ValueError("--auth must be in the format <username>:<password>, e.g.: --auth='username:password'")
-
- if hf_access_token is not None:
- Config.hf_access_token = hf_access_token
-
- if wandb_api_key is not None:
- Config.wandb_api_key = wandb_api_key
-
- if wandb_project is not None:
- Config.default_wandb_project = wandb_project
-
- if timezone is not None:
- Config.timezone = timezone
-
- if ui_dev_mode is not None:
- Config.ui_dev_mode = ui_dev_mode
-
- if ui_show_sys_info is not None:
- Config.ui_show_sys_info = ui_show_sys_info
-
- process_config()
- initialize_global()
-
- assert (
- Config.default_base_model_name
- ), "Please specify a --base_model, e.g. --base_model='decapoda-research/llama-7b-hf'"
-
- assert (
- Config.data_dir
- ), "Please specify a --data_dir, e.g. --data_dir='./data'"
-
- init_data_dir()
-
- if (not skip_loading_base_model) and (not Config.ui_dev_mode):
- prepare_base_model(Config.default_base_model_name)
-
- with gr.Blocks(title=get_page_title(), css=get_css_styles()) as demo:
- main_page()
-
- demo.queue(concurrency_count=1).launch(
- server_name=server_name,
- share=share,
- auth=((Config.auth_username, Config.auth_password)
- if Config.auth_username and Config.auth_password else None)
- )
-
-
-def read_yaml_config(config_path: Union[str, None] = None):
- if not config_path:
- app_dir = os.path.dirname(os.path.abspath(__file__))
- config_path = os.path.join(app_dir, 'config-ui-demo.yaml')
-
- if not os.path.exists(config_path):
- return None
-
- print(f"Loading config from {config_path}...")
- with open(config_path, 'r') as yaml_file:
- config = yaml.safe_load(yaml_file)
- return config
-
-
-if __name__ == "__main__":
- fire.Fire(main)
-elif __name__ == "app": # running in gradio reload mode (`gradio`)
- try:
- main()
- except AssertionError as e:
- message = str(e)
- message += "\nNote that command line args are not supported while running in gradio reload mode, config.yaml must be used."
- raise AssertionError(message) from e
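The config-loading loop in `main()` rejects unknown YAML keys by checking them against `Config`'s class attributes. The behaviour can be illustrated with a self-contained stand-in class (the keys below are examples, not the full config surface):

```python
# Self-contained illustration of the key validation used in main() above.
class Config:                               # stand-in for llama_lora.config.Config
    data_dir = "./data"
    load_8bit = False

config_from_file = {"data_dir": "/tmp/llama-lora", "load_8bit": True}

for key, value in config_from_file.items():
    if not hasattr(Config, key):
        available_keys = [k for k in vars(Config) if not k.startswith("__")]
        raise ValueError(
            f"Invalid config key '{key}'. Available keys: {', '.join(available_keys)}")
    setattr(Config, key, value)

print(Config.data_dir, Config.load_8bit)    # /tmp/llama-lora True
```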
diff --git a/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/node_modules/semver/functions/minor.js b/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/node_modules/semver/functions/minor.js
deleted file mode 100644
index 57b3455f827bac6a3376df0e782d1259cef2e3c9..0000000000000000000000000000000000000000
--- a/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/node_modules/semver/functions/minor.js
+++ /dev/null
@@ -1,3 +0,0 @@
-const SemVer = require('../classes/semver')
-const minor = (a, loose) => new SemVer(a, loose).minor
-module.exports = minor
diff --git a/spaces/zhoucr/ai-koni/attentions.py b/spaces/zhoucr/ai-koni/attentions.py
deleted file mode 100644
index 4e0b0c1fd48c962e21e1fbe60b23fc574927435c..0000000000000000000000000000000000000000
--- a/spaces/zhoucr/ai-koni/attentions.py
+++ /dev/null
@@ -1,303 +0,0 @@
-import copy
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import modules
-from modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
-    # Concat extra elements so as to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
-    # pad along column
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
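A quick shape check for the relative-position `Encoder` above. It assumes the repo's `commons.py` and `modules.py` sit next to `attentions.py`; the sizes are arbitrary VITS-like values:

```python
# Hypothetical smoke test for the Encoder; requires this repo's commons/modules.
import torch
from attentions import Encoder

enc = Encoder(hidden_channels=192, filter_channels=768, n_heads=2,
              n_layers=4, kernel_size=3, p_dropout=0.1, window_size=4)

x = torch.randn(2, 192, 50)    # (B, hidden_channels, T)
x_mask = torch.ones(2, 1, 50)  # (B, 1, T): all positions valid
y = enc(x, x_mask)
print(y.shape)                 # torch.Size([2, 192, 50])
```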
diff --git a/spaces/zhoujiaxin/zhoujiaxinchatgpt/src/components/chat-history.tsx b/spaces/zhoujiaxin/zhoujiaxinchatgpt/src/components/chat-history.tsx
deleted file mode 100644
index feb81de66562edda8f40d3c0cc717202c92b6509..0000000000000000000000000000000000000000
--- a/spaces/zhoujiaxin/zhoujiaxinchatgpt/src/components/chat-history.tsx
+++ /dev/null
@@ -1,48 +0,0 @@
-import { IconEdit, IconTrash, IconMore, IconDownload } from "./ui/icons"
-
-export function ChatHistory() {
-  // NOTE: the original JSX markup of this component (layout containers, icon
-  // buttons, styling) was lost during extraction; the placeholder below keeps
-  // only the recoverable text labels and the imported icons.
-  return (
-    <div>
-      <span>历史记录</span>            {/* "History" */}
-      <div>
-        <span>无标题的聊天</span>      {/* "Untitled chat" */}
-        <span>上午1:42</span>          {/* "1:42 AM" */}
-        <IconEdit />
-        <IconTrash />
-        <IconDownload />
-        <IconMore />
-      </div>
-    </div>
-  )
-}
diff --git a/spaces/zideliu/styledrop/tools/fid_score.py b/spaces/zideliu/styledrop/tools/fid_score.py
deleted file mode 100644
index d07585349edfa52829ab471aa8c85848ad99ee72..0000000000000000000000000000000000000000
--- a/spaces/zideliu/styledrop/tools/fid_score.py
+++ /dev/null
@@ -1,260 +0,0 @@
-"""Calculates the Frechet Inception Distance (FID) to evaluate GANs
-
-The FID metric calculates the distance between two distributions of images.
-Typically, we have summary statistics (mean & covariance matrix) of one
-of these distributions, while the 2nd distribution is given by a GAN.
-
-When run as a stand-alone program, it compares the distribution of
-images that are stored as PNG/JPEG at a specified location with a
-distribution given by summary statistics (in pickle format).
-
-The FID is calculated by assuming that X_1 and X_2 are the activations of
-the pool_3 layer of the inception net for generated samples and real world
-samples respectively.
-
-See --help to see further details.
-
-Code adapted from https://github.com/bioinf-jku/TTUR to use PyTorch instead
-of Tensorflow
-
-Copyright 2018 Institute of Bioinformatics, JKU Linz
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-"""
-import os
-import pathlib
-
-import numpy as np
-import torch
-import torchvision.transforms as TF
-from PIL import Image
-from scipy import linalg
-from torch.nn.functional import adaptive_avg_pool2d
-
-try:
- from tqdm import tqdm
-except ImportError:
- # If tqdm is not available, provide a mock version of it
- def tqdm(x):
- return x
-
-from .inception import InceptionV3
-
-
-IMAGE_EXTENSIONS = {'bmp', 'jpg', 'jpeg', 'pgm', 'png', 'ppm',
- 'tif', 'tiff', 'webp'}
-
-
-class ImagePathDataset(torch.utils.data.Dataset):
- def __init__(self, files, transforms=None):
- self.files = files
- self.transforms = transforms
-
- def __len__(self):
- return len(self.files)
-
- def __getitem__(self, i):
- path = self.files[i]
- img = Image.open(path).convert('RGB')
- if self.transforms is not None:
- img = self.transforms(img)
- return img
-
-
-def get_activations(files, model, batch_size=50, dims=2048, device='cpu', num_workers=8):
- """Calculates the activations of the pool_3 layer for all images.
-
- Params:
- -- files : List of image files paths
- -- model : Instance of inception model
- -- batch_size : Batch size of images for the model to process at once.
- Make sure that the number of samples is a multiple of
- the batch size, otherwise some samples are ignored. This
- behavior is retained to match the original FID score
- implementation.
- -- dims : Dimensionality of features returned by Inception
- -- device : Device to run calculations
- -- num_workers : Number of parallel dataloader workers
-
- Returns:
- -- A numpy array of dimension (num images, dims) that contains the
- activations of the given tensor when feeding inception with the
- query tensor.
- """
- model.eval()
-
- if batch_size > len(files):
- print(('Warning: batch size is bigger than the data size. '
- 'Setting batch size to data size'))
- batch_size = len(files)
-
- dataset = ImagePathDataset(files, transforms=TF.ToTensor())
- dataloader = torch.utils.data.DataLoader(dataset,
- batch_size=batch_size,
- shuffle=False,
- drop_last=False,
- num_workers=num_workers)
-
- pred_arr = np.empty((len(files), dims))
-
- start_idx = 0
-
- for batch in tqdm(dataloader):
- batch = batch.to(device)
-
- with torch.no_grad():
- pred = model(batch)[0]
-
- # If model output is not scalar, apply global spatial average pooling.
- # This happens if you choose a dimensionality not equal 2048.
- if pred.size(2) != 1 or pred.size(3) != 1:
- pred = adaptive_avg_pool2d(pred, output_size=(1, 1))
-
- pred = pred.squeeze(3).squeeze(2).cpu().numpy()
-
- pred_arr[start_idx:start_idx + pred.shape[0]] = pred
-
- start_idx = start_idx + pred.shape[0]
-
- return pred_arr
-
-
-def calculate_frechet_distance(mu1, sigma1, mu2, sigma2, eps=1e-6):
- """Numpy implementation of the Frechet Distance.
- The Frechet distance between two multivariate Gaussians X_1 ~ N(mu_1, C_1)
- and X_2 ~ N(mu_2, C_2) is
- d^2 = ||mu_1 - mu_2||^2 + Tr(C_1 + C_2 - 2*sqrt(C_1*C_2)).
-
- Stable version by Dougal J. Sutherland.
-
- Params:
- -- mu1 : Numpy array containing the activations of a layer of the
- inception net (like returned by the function 'get_predictions')
- for generated samples.
-    -- mu2   : The sample mean over activations, precalculated on a
- representative data set.
- -- sigma1: The covariance matrix over activations for generated samples.
-    -- sigma2: The covariance matrix over activations, precalculated on a
- representative data set.
-
- Returns:
- -- : The Frechet Distance.
- """
-
- mu1 = np.atleast_1d(mu1)
- mu2 = np.atleast_1d(mu2)
-
- sigma1 = np.atleast_2d(sigma1)
- sigma2 = np.atleast_2d(sigma2)
-
- assert mu1.shape == mu2.shape, \
- 'Training and test mean vectors have different lengths'
- assert sigma1.shape == sigma2.shape, \
- 'Training and test covariances have different dimensions'
-
- diff = mu1 - mu2
-
- # Product might be almost singular
- covmean, _ = linalg.sqrtm(sigma1.dot(sigma2), disp=False)
- if not np.isfinite(covmean).all():
- msg = ('fid calculation produces singular product; '
- 'adding %s to diagonal of cov estimates') % eps
- print(msg)
- offset = np.eye(sigma1.shape[0]) * eps
- covmean = linalg.sqrtm((sigma1 + offset).dot(sigma2 + offset))
-
- # Numerical error might give slight imaginary component
- if np.iscomplexobj(covmean):
- if not np.allclose(np.diagonal(covmean).imag, 0, atol=1e-3):
- m = np.max(np.abs(covmean.imag))
- raise ValueError('Imaginary component {}'.format(m))
- covmean = covmean.real
-
- tr_covmean = np.trace(covmean)
-
- return (diff.dot(diff) + np.trace(sigma1)
- + np.trace(sigma2) - 2 * tr_covmean)
-
-
-def calculate_activation_statistics(files, model, batch_size=50, dims=2048,
- device='cpu', num_workers=8):
- """Calculation of the statistics used by the FID.
- Params:
- -- files : List of image files paths
- -- model : Instance of inception model
- -- batch_size : The images numpy array is split into batches with
- batch size batch_size. A reasonable batch size
- depends on the hardware.
- -- dims : Dimensionality of features returned by Inception
- -- device : Device to run calculations
- -- num_workers : Number of parallel dataloader workers
-
- Returns:
- -- mu : The mean over samples of the activations of the pool_3 layer of
- the inception model.
- -- sigma : The covariance matrix of the activations of the pool_3 layer of
- the inception model.
- """
- act = get_activations(files, model, batch_size, dims, device, num_workers)
- mu = np.mean(act, axis=0)
- sigma = np.cov(act, rowvar=False)
- return mu, sigma
-
-
-def compute_statistics_of_path(path, model, batch_size, dims, device, num_workers=8):
- if path.endswith('.npz'):
- with np.load(path) as f:
- m, s = f['mu'][:], f['sigma'][:]
- else:
- path = pathlib.Path(path)
- files = sorted([file for ext in IMAGE_EXTENSIONS
- for file in path.glob('*.{}'.format(ext))])
- m, s = calculate_activation_statistics(files, model, batch_size,
- dims, device, num_workers)
-
- return m, s
-
-
-def save_statistics_of_path(path, out_path, device=None, batch_size=50, dims=2048, num_workers=8):
- if device is None:
- device = torch.device('cuda' if (torch.cuda.is_available()) else 'cpu')
- else:
- device = torch.device(device)
- block_idx = InceptionV3.BLOCK_INDEX_BY_DIM[dims]
- model = InceptionV3([block_idx]).to(device)
- m1, s1 = compute_statistics_of_path(path, model, batch_size, dims, device, num_workers)
- np.savez(out_path, mu=m1, sigma=s1)
-
-
-def calculate_fid_given_paths(paths, device=None, batch_size=50, dims=2048, num_workers=8):
- """Calculates the FID of two paths"""
- if device is None:
- device = torch.device('cuda' if (torch.cuda.is_available()) else 'cpu')
- else:
- device = torch.device(device)
-
- for p in paths:
- if not os.path.exists(p):
- raise RuntimeError('Invalid path: %s' % p)
-
- block_idx = InceptionV3.BLOCK_INDEX_BY_DIM[dims]
-
- model = InceptionV3([block_idx]).to(device)
-
- m1, s1 = compute_statistics_of_path(paths[0], model, batch_size,
- dims, device, num_workers)
- m2, s2 = compute_statistics_of_path(paths[1], model, batch_size,
- dims, device, num_workers)
- fid_value = calculate_frechet_distance(m1, s1, m2, s2)
-
- return fid_value
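Typical invocation of this module, assuming it is imported as `tools.fid_score` (so the relative `.inception` import resolves) and that two directories of images exist; all paths below are placeholders:

```python
# Hypothetical usage of the FID helpers above.
from tools.fid_score import calculate_fid_given_paths, save_statistics_of_path

# Compare generated samples against a reference set.
fid = calculate_fid_given_paths(["./samples/generated", "./samples/reference"],
                                device="cuda", batch_size=50, dims=2048)
print(f"FID: {fid:.2f}")

# Optionally cache reference statistics once and pass the .npz path next time.
save_statistics_of_path("./samples/reference", "./reference_stats.npz")
```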
diff --git a/spaces/zijia88/Sewer_Endoscopy_Risk_Identification/app.py b/spaces/zijia88/Sewer_Endoscopy_Risk_Identification/app.py
deleted file mode 100644
index a7c3fd7b3b890b18212860c43869387ab560420d..0000000000000000000000000000000000000000
--- a/spaces/zijia88/Sewer_Endoscopy_Risk_Identification/app.py
+++ /dev/null
@@ -1,23 +0,0 @@
-# Import required libraries
-import gradio as gr
-from transformers import pipeline
-import os
-
-# Read the access token from the environment
-drainage_token = os.environ.get("drainage_token")
-
-# Load the pretrained image-classification model
-classifier = pipeline(task="image-classification", model="zijia88/autotrain-drainage-56552131498", use_auth_token=drainage_token)
-
-# Prediction function: takes an image path, returns label scores
-def predict(image):
-    predictions = classifier(image)
-    return {p["label"]: p["score"] for p in predictions}
-
-# Build the Gradio interface
-gr.Interface(
-    predict, # prediction function
-    inputs=gr.inputs.Image(label="上传管道内窥检测图片", type="filepath"), # image upload input ("Upload a pipe endoscopy inspection image")
-    outputs=gr.outputs.Label(num_top_classes=5), # label output
-    title="内窥检测隐患识别", # interface title ("Endoscopy-based risk identification")
-).launch() # launch the interface
\ No newline at end of file
diff --git a/spaces/zomehwh/bert_vits2/mel_processing.py b/spaces/zomehwh/bert_vits2/mel_processing.py
deleted file mode 100644
index aab5bd926a194610b7ce3da29c553bd877341aa4..0000000000000000000000000000000000000000
--- a/spaces/zomehwh/bert_vits2/mel_processing.py
+++ /dev/null
@@ -1,139 +0,0 @@
-import torch
-import torch.utils.data
-from librosa.filters import mel as librosa_mel_fn
-
-MAX_WAV_VALUE = 32768.0
-
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
- """
- PARAMS
- ------
- C: compression factor
- """
- return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression_torch(x, C=1):
- """
- PARAMS
- ------
- C: compression factor used to compress
- """
- return torch.exp(x) / C
-
-
-def spectral_normalize_torch(magnitudes):
- output = dynamic_range_compression_torch(magnitudes)
- return output
-
-
-def spectral_de_normalize_torch(magnitudes):
- output = dynamic_range_decompression_torch(magnitudes)
- return output
-
-
-mel_basis = {}
-hann_window = {}
-
-
-def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
- if torch.min(y) < -1.0:
- print("min value is ", torch.min(y))
- if torch.max(y) > 1.0:
- print("max value is ", torch.max(y))
-
- global hann_window
- dtype_device = str(y.dtype) + "_" + str(y.device)
- wnsize_dtype_device = str(win_size) + "_" + dtype_device
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(
- dtype=y.dtype, device=y.device
- )
-
- y = torch.nn.functional.pad(
- y.unsqueeze(1),
- (int((n_fft - hop_size) / 2), int((n_fft - hop_size) / 2)),
- mode="reflect",
- )
- y = y.squeeze(1)
-
- spec = torch.stft(
- y,
- n_fft,
- hop_length=hop_size,
- win_length=win_size,
- window=hann_window[wnsize_dtype_device],
- center=center,
- pad_mode="reflect",
- normalized=False,
- onesided=True,
- return_complex=False,
- )
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
- return spec
-
-
-def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
- global mel_basis
- dtype_device = str(spec.dtype) + "_" + str(spec.device)
- fmax_dtype_device = str(fmax) + "_" + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(
- dtype=spec.dtype, device=spec.device
- )
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
- return spec
-
-
-def mel_spectrogram_torch(
- y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False
-):
- if torch.min(y) < -1.0:
- print("min value is ", torch.min(y))
- if torch.max(y) > 1.0:
- print("max value is ", torch.max(y))
-
- global mel_basis, hann_window
- dtype_device = str(y.dtype) + "_" + str(y.device)
- fmax_dtype_device = str(fmax) + "_" + dtype_device
- wnsize_dtype_device = str(win_size) + "_" + dtype_device
- if fmax_dtype_device not in mel_basis:
- mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
- mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(
- dtype=y.dtype, device=y.device
- )
- if wnsize_dtype_device not in hann_window:
- hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(
- dtype=y.dtype, device=y.device
- )
-
- y = torch.nn.functional.pad(
- y.unsqueeze(1),
- (int((n_fft - hop_size) / 2), int((n_fft - hop_size) / 2)),
- mode="reflect",
- )
- y = y.squeeze(1)
-
- spec = torch.stft(
- y,
- n_fft,
- hop_length=hop_size,
- win_length=win_size,
- window=hann_window[wnsize_dtype_device],
- center=center,
- pad_mode="reflect",
- normalized=False,
- onesided=True,
- return_complex=False,
- )
-
- spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
-
- spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
- spec = spectral_normalize_torch(spec)
-
- return spec
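A minimal sketch of calling the mel front end above with commonly used 44.1 kHz HiFi-GAN-style parameters; the values are typical defaults, not necessarily this Space's config:

```python
# Hypothetical call to mel_spectrogram_torch with typical 44.1 kHz settings.
import torch

y = torch.randn(1, 44100).clamp(-1.0, 1.0)   # (B, T): dummy normalized waveform
mel = mel_spectrogram_torch(
    y,
    n_fft=2048, num_mels=128, sampling_rate=44100,
    hop_size=512, win_size=2048, fmin=0, fmax=None,
    center=False,
)
print(mel.shape)   # (B, num_mels, n_frames)
```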