diff --git a/spaces/0xqtpie/doodle2vid/README.md b/spaces/0xqtpie/doodle2vid/README.md
deleted file mode 100644
index 8ee44ba06dccfec088c6c5f5e8389a5dd56808ab..0000000000000000000000000000000000000000
--- a/spaces/0xqtpie/doodle2vid/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Doodle2vid
-emoji: 🐢
-colorFrom: blue
-colorTo: red
-sdk: gradio
-sdk_version: 3.44.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Cs6 Master Collection Keygen Xforce Rar Zip Learn How to Generate and Apply Xforce Keygen for Adobe Cs6.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Cs6 Master Collection Keygen Xforce Rar Zip Learn How to Generate and Apply Xforce Keygen for Adobe Cs6.md
deleted file mode 100644
index 4642871978ceda76403902ec0f1e85ee3eb405af..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Cs6 Master Collection Keygen Xforce Rar Zip Learn How to Generate and Apply Xforce Keygen for Adobe Cs6.md
+++ /dev/null
@@ -1,148 +0,0 @@
-
-
If you are looking for a way to get the most out of Adobe Creative Suite 6 Master Collection, you might be interested in using a keygen tool that can generate valid serial numbers and activation codes for you. In this article, we will explain what Adobe Cs6 Master Collection is, what Xforce Keygen is, and how to download and install Adobe Cs6 Master Collection with Xforce Keygen. We will also cover some of the benefits and risks of using this method, and answer some frequently asked questions.
-Adobe Cs6 Master Collection is a software bundle that includes all the Adobe creative tools you need to create stunning digital content for any platform. Whether you are a graphic designer, web developer, video editor, photographer, or animator, you can find the right tool for your project in Adobe Cs6 Master Collection. Some of the applications included in this bundle are:
-Download File » https://byltly.com/2uKv0B
Adobe Cs6 Master Collection also comes with Adobe Bridge CS6, a file management tool that lets you organize and preview your media files; Adobe Media Encoder CS6, a tool that lets you encode your videos to various formats; and Adobe Acrobat X Pro, a tool that lets you create, edit, and sign PDF documents.
-Some of the features that make Adobe Cs6 Master Collection stand out are:
-To run Adobe Cs6 Master Collection smoothly on your computer, you need to meet the following system requirements:
-Operating system | Windows | Mac OS |
---|---|---|
Processor | Intel® Pentium® 4 or AMD Athlon® 64 processor (Intel Core™2 Duo or AMD Phenom® II recommended); Intel Core i7 required for Adobe SpeedGrade™ | Multicore Intel processor with 64-bit support |
RAM | 4 GB of RAM (8 GB recommended) | 4 GB of RAM (8 GB recommended) |
Hard disk space | 15.5 GB of available hard-disk space for installation; additional free space required during installation (cannot install on removable flash storage devices) | 15.5 GB of available hard-disk space for installation; additional free space required during installation (cannot install on a volume that uses a case-sensitive file system or on removable flash storage devices) |
Display | 1280 x 900 display (1280 x 1024 recommended) with 16-bit color and 512 MB of VRAM; OpenGL 2.0–capable system | 1280 x 900 display (1680 x 1050 recommended) with 16-bit color and 512 MB of VRAM; OpenGL 2.0–capable system |
DVD-ROM drive | DVD-ROM drive compatible with dual-layer DVDs (DVD+-R burner for burning DVDs; Blu-ray burner for creating Blu-ray Disc media) | DVD-ROM drive compatible with dual-layer DVDs (SuperDrive for burning DVDs; external Blu-ray burner for creating Blu-ray Disc media) |
Other requirements | - Java™ Runtime Environment 1.6 (included) - Eclipse™ 3.7 (for plug-in installation of Adobe Flash® Builder®); the following distributions are supported: Eclipse IDE for Java EE and Java Developers, Eclipse Classic, Eclipse for PHP Developers - QuickTime 7.6.6 software required for QuickTime features - Optional: GPU card for GPU-accelerated performance in Adobe Premiere Pro - Optional: Tangent CP200 family or Tangent Wave control surface for SpeedGrade - Optional: For SDI output, NVIDIA Quadro SDI Output card required for SpeedGrade - Optional: 7200 RPM hard drive (multiple fast disk drives preferred) for video products - This software will not operate without activation. Broadband Internet connection and registration are required for software activation, | - Java Runtime Environment 1.6 - Eclipse 3.7 Cocoa version (for plug-in installation of Adobe Flash Builder); the following distributions are supported: Eclipse IDE for Java EE and Java Developers, Eclipse Classic, Eclipse for PHP Developers - QuickTime 7.6.6 software required for QuickTime features - Optional: GPU card for GPU-accelerated performance in Adobe Premiere Pro - Optional: Tangent CP200 family or Tangent Wave control surface for SpeedGrade - Optional: For SDI output, NVIDIA Quadro SDI Output card required for SpeedGrade - Optional: 7200 RPM hard drive (multiple fast disk drives preferred) for video products - This software will not operate without activation. Broadband Internet connection and registration are required for software activation, |
Xforce Keygen is a tool that can generate valid serial numbers and activation codes for various software products. It is also known as a crack or a patch because it bypasses the original authentication process of the software. Xforce Keygen is created by a group of hackers called X-Force who are known for cracking many popular software products such as Autodesk AutoCAD, CorelDRAW Graphics Suite, Microsoft Office, etc.
-Some of the benefits of using Xforce Keygen are:
-Some of the risks of using Xforce Keygen are:
If you want to download and install Adobe Cs6 Master Collection with Xforce Keygen, you need to follow these steps carefully:
-This is to prevent the software from connecting to the internet and verifying your serial number and activation code. You also need to make sure you don't have any of these entries in your hosts file:
-127.0.0.1 lmlicenses.wip4.adobe.com
-127.0.0.1 lm.licenses.adobe.com
The hosts file is located at C:\Windows\System32\drivers\etc\hosts on Windows and at /etc/hosts on Mac OS.
-You need to download Xforce Keygen from a reliable source and run it as administrator. Then, you need to select Adobe Cs6 Master Collection from the drop-down menu and click on Generate Serial. You will get a serial number that you need to copy and paste in the installation window of Adobe Cs6 Master Collection. Do not close the keygen yet. When the error "Please connect to the internet and retry" shows, click on Connect Later.
-You need to launch any Adobe application from the Master Collection, such as Photoshop, Illustrator, or InDesign. You will see a message that says "We are unable to start your subscription for Adobe Cs6 Master Collection". Click on Having Trouble Connecting To The Internet. Then, click on Offline Activation and then on Generate Request Code. You will get a request code that you need to copy and paste in the keygen window.
-In the keygen window, click on Activate and then on Generate Activation Code. You will get an activation code that you need to copy and paste in the Adobe application window. Then, click on Activate and then on Close Application.
-This is to block Adobe from accessing its servers and checking your activation status. You need to run disable_activation.cmd for Windows or disable_activation_osx for Mac OS as administrator or root. These files are usually included in the zip or rar file of Xforce Keygen. Alternatively, you can manually add these lines to your hosts file:
-# Adobe Blocker
-127.0.0.1 lmlicenses.wip4.adobe.com
-127.0.0.1 lm.licenses.adobe.com
- This is to restore your internet connection and enjoy the latest features and updates of Adobe Cs6 Master Collection. You can use Adobe Updater to check for updates and install them without losing the crack.
-In this article, we have explained what Adobe Cs6 Master Collection is, what Xforce Keygen is, and how to download and install Adobe Cs6 Master Collection with Xforce Keygen. We have also covered some of the benefits and risks of using this method, and answered some frequently asked questions. We hope you have found this article helpful and informative.
-Malwarebytes Premium is a security program that protects your computer against malware, ransomware, exploits, and other online threats. To activate the premium features of Malwarebytes, you need a valid license, which can be purchased from the official Malwarebytes website or from authorized resellers.
-In this article, we will show you how to activate Malwarebytes Premium on your computer using two methods: through your Malwarebytes account or with a license key.
-Download File ✑ https://byltly.com/2uKw0W
This method requires an active login for your Malwarebytes account. If you have not created your account yet, see how to do so at this link: Create and manage your Malwarebytes account.
-Follow the steps below to activate Malwarebytes Premium through your account:
-Once activated, Premium will be displayed in the top-left corner of the program's Dashboard.
This method requires your license key, which can be found in your email purchase confirmation or in your Malwarebytes account. If you do not know where to find your license key, see this link: Find my Malwarebytes license key.
-Follow the steps below to activate Malwarebytes Premium with a license key:
To verify that activation succeeded, check that Premium is displayed in the top-left corner of the program's Dashboard.
Adobe Photoshop CS6 is one of the most popular and powerful image editing applications in the world. However, it is also one of the most expensive, costing hundreds of dollars for a single license. If you want to use Adobe Photoshop CS6 without paying for it, you might be tempted to crack it using Amtlib.dll files. But what are these files and how do they work? In this article, we will explain everything you need to know about cracking Adobe Photoshop CS6 with Amtlib.dll files, including the benefits, risks, and alternatives.
-DOWNLOAD ✵✵✵ https://byltly.com/2uKyoj
Adobe Photoshop CS6 is the 13th major release of the Adobe Photoshop software, which was launched in May 2012. It is a creative image editing suite that offers a range of features and tools for professional and amateur photographers, graphic designers, web developers, and video editors. Some of the new features and enhancements in Adobe Photoshop CS6 include:
-Amtlib.dll is a dynamic link library file that is part of the Adobe Application Manager. It is responsible for activating and validating the licenses of various Adobe products, such as Photoshop, Illustrator, Dreamweaver, Premiere Pro, After Effects, etc. It is located in the installation folder of each Adobe product.
-When you crack Adobe Photoshop CS6 with Amtlib.dll files, you are essentially replacing the original Amtlib.dll file with a modified one that bypasses the license verification process. This way, you can use Adobe Photoshop CS6 without entering a serial number or signing in with an Adobe ID.
Before you can crack Adobe Photoshop CS6 with Amtlib.dll files, you need to download and install it on your computer. Here are the steps to do so:
-Now that you have installed Adobe Photoshop CS6 as a trial version, you can crack it using Amtlib.dll files. Here are the steps to do so:
-The first thing you need to do is download the cracked Amtlib.dll files for both 32-bit and 64-bit versions of Adobe Photoshop CS6. You can find them from various sources online, but make sure they are safe and reliable. One possible source is https://davi24.com/download-file-amtlib-dll/, where you can download them for free.
-The next thing you need to do is locate the installation folder of Adobe Photoshop CS6 on your computer. The default location depends on your operating system and whether you have installed the 32-bit or 64-bit version of Adobe Photoshop CS6. Here are some possible locations:
-The final thing you need to do is replace the original Amtlib.dll file with the cracked one. To do this:
-The last thing you need to do is run Adobe Photoshop CS6 and enjoy using it without any restrictions. To do this:
-Cracking Adobe Photoshop CS6 with Amtlib.dll files has some benefits, such as:
-However, cracking Adobe Photoshop CS6 with Amtlib.dll files also has some risks, such as:
-If you are not comfortable with cracking Adobe Photoshop CS6 with Amtlib.dll files, you may want to consider some alternatives, such as:
-In conclusion, cracking Adobe Photoshop CS6 with Amtlib.dll files is a way to use the software for free without any restrictions. However, it also comes with some drawbacks and dangers that you should be aware of. Therefore, you should weigh the pros and cons carefully before deciding to crack Adobe Photoshop CS6 with Amtlib.dll files. Alternatively, you can opt for some other options that may suit your needs and budget better. We hope this article has been helpful and informative for you. Thank you for reading!
-Here are some frequently asked questions and answers about cracking Adobe Photoshop CS6 with Amtlib.dll files:
-A: Yes, cracking Adobe Photoshop CS6 with Amtlib.dll files is illegal. It violates the copyright and license agreement of Adobe and may result in legal action against you. You should respect the intellectual property rights of the software developers and pay for their products if you want to use them.
-A: No, cracking Adobe Photoshop CS6 with Amtlib.dll files is not safe. It may expose your computer to malicious software that may harm your system or steal your data. It may also cause errors, bugs, or crashes that may affect your work or damage your files. It may also prevent you from getting updates, patches, or new features that may improve your experience or fix issues. You should protect your computer and data by using only trusted and secure sources of software.
-A: That depends on your personal preference and situation. Cracking Adobe Photoshop CS6 with Amtlib.dll files may save you some money and give you some freedom in using the software. However, it also comes with some risks and disadvantages that you should consider carefully. You may also miss out on some benefits and opportunities that come with using a legitimate version of the software. You should weigh the pros and cons carefully before deciding to crack Adobe Photoshop CS6 with Amtlib.dll files.
-A: The process of cracking other Adobe products with Amtlib.dll files is similar to cracking Adobe Photoshop CS6. You need to download and install the trial version of the product you want to crack, then download and replace the original Amtlib.dll file with the cracked one in the installation folder of the product. However, you should be careful about the compatibility and reliability of the cracked Amtlib.dll files for different products and versions. You should also be aware of the risks and consequences of cracking other Adobe products with Amtlib.dll files.
-A: You can find more information about cracking Adobe Photoshop CS6 with Amtlib.dll files from various sources online, such as blogs, forums, videos, etc. However, you should be careful about the accuracy and credibility of these sources. You should also be careful about the safety and security of these sources. You should not download or click on any links or files that may contain viruses, malware, or spyware. You should also not share any personal or sensitive information that may compromise your privacy or identity.
Minecraft is one of the most popular games in the world, with millions of players exploring, building, and fighting in its blocky world. If you want to join them, you might wonder how long it takes to download Minecraft on Windows 10. The answer depends on a few factors, such as your internet speed, the version of Minecraft you want to install, and the size of the game files.
-DOWNLOAD ✑ ✑ ✑ https://byltly.com/2uKvHZ
In this article, we will explain how to download Minecraft for Windows 10, and how long you can expect it to take. We will also compare the two versions of Minecraft available for PC: Java Edition and Bedrock Edition (also known as Windows 10 Edition).
-Before you can download Minecraft for Windows 10, you need to purchase the game from either the Microsoft Store or the Minecraft website. The game costs $29.99 / £24.99 / AUS$39.95, but you can get it for free or at a discounted price if you have an Xbox Game Pass subscription.
-Once you have bought the game, you will need to create a Microsoft account if you don't have one already. This is an Outlook email address that you can use to sign in to the Minecraft Launcher and access online features. You will also need to verify your email address and enter your birthdate and country/region.
-After creating your Microsoft account, you can download and open the Minecraft Launcher from either the Microsoft Store or the Minecraft website. This is where you can choose which version of Minecraft you want to install: Java Edition or Bedrock Edition.
-Minecraft Java Edition and Bedrock Edition are both compatible with Windows 10, but they have some differences in features, performance, and cross-play options. Here are some of the main differences between them:
-The good news is that you don't have to choose between them. If you buy Minecraft for Windows 10 from the Minecraft website, you will get both Java Edition and Bedrock Edition for free. You can install both versions on your PC and switch between them using the Minecraft Launcher.
-The download time for Minecraft on Windows 10 depends on your internet speed and the size of the game files. According to our tests, these are the approximate download times for each version of Minecraft:
-Version | File size | Download time |
---|---|---|
Java Edition | 500 MB | 5 minutes |
Bedrock Edition | 300 MB | 3 minutes |
Note that these are only estimates based on average internet speeds of 25 Mbps. Your actual download time may vary depending on your internet connection and other factors.
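These estimates can be sanity-checked with a little arithmetic. The snippet below is an illustrative estimator, not anything from the article; the ~50% effective-throughput default is our assumption, standing in for protocol overhead and server-side throttling:

```python
def download_minutes(size_mb: float, speed_mbps: float, efficiency: float = 0.5) -> float:
    """Estimate download time in minutes.

    size_mb: file size in megabytes; speed_mbps: nominal link speed in megabits per second.
    efficiency: assumed fraction of the nominal speed actually achieved (our assumption).
    """
    effective_mb_per_s = speed_mbps / 8 * efficiency  # megabits/s -> effective megabytes/s
    return size_mb / effective_mb_per_s / 60

print(round(download_minutes(500, 25), 1))  # Java Edition, ~500 MB -> 5.3
print(round(download_minutes(300, 25), 1))  # Bedrock Edition, ~300 MB -> 3.2
```

With efficiency set to 1.0 the same 500 MB file would take under three minutes, which is why practical estimates pad the nominal figure.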
-If you are having trouble downloading Minecraft on Windows 10, you can try some of these troubleshooting steps:
-Minecraft is a fun and creative game that you can enjoy on your Windows 10 PC. To download it, you need to buy it from either the Microsoft Store or the Minecraft website.
A CCcam is a device that is connected to a Dreambox, and it controls the picture output of the Dreambox. The CCcam allows you to control the Dreambox's picture quality and settings: you can adjust the contrast, brightness, and color of the picture, and preview the picture quality on the Dreambox.
-DOWNLOAD • https://imgfil.com/2uxYTr
-This can be a great option if you have a Dreambox that is out of warranty and you want to upgrade the firmware. The CCcam firmware upgrade can be installed once the Dreambox itself is installed and you have the right cables. (The Dreambox will not work without the right cables, and the CCcam firmware upgrade can only be installed after the Dreambox is installed.)
-There are a few different Dreambox models that support CCcam firmware upgrades. A Dreambox is listed on this website if it supports CCcam firmware upgrades. Make sure that you install the correct CCcam firmware for your Dreambox model.
-CCcam is a great feature for those who want to upgrade their Dreambox firmware: install the Dreambox first, connect the right cables, and then apply the CCcam firmware upgrade.
If you are a fan of strategy games, you have probably heard of Clash Royale, one of the most popular and addictive mobile games in the world. Clash Royale is a real-time multiplayer game where you can collect and upgrade dozens of cards featuring your favorite characters from Clash of Clans, as well as spells and defenses. You can also build your own battle deck and challenge other players online in fast-paced duels.
-But did you know that there is a new version of Clash Royale available for download? Yes, you heard that right. Clash Royale 3.3024.2 APK is the latest update of the game, and it comes with a lot of new features, improvements, and bug fixes that will make your gaming experience even better. In this article, we will tell you everything you need to know about Clash Royale 3.3024.2 APK, including what's new, how to download and install it, and why you should play it. Let's get started!
-Download File ❤❤❤ https://jinyurl.com/2uNQ0p
Before we dive into the details of the new version, let's have a quick recap of what Clash Royale is all about. Clash Royale is a strategy game developed by Supercell, the same company behind the hit game Clash of Clans. It was released in 2016 and has since become one of the most downloaded and played games on both Android and iOS devices.
-Clash Royale is a game where you can create your own army of troops, spells, and buildings, and use them to attack your opponent's towers and destroy their king tower. You can also defend your own towers from enemy attacks using various strategies and tactics. The game features different arenas, each with its own theme and difficulty level, where you can compete with other players from around the world.
-Clash Royale is not only a game of skill, but also a game of luck. You never know what cards you will get in your hand or what cards your opponent will play next. You have to think fast and act smart to win the battles. You can also join or create clans, where you can chat with other players, share cards, request donations, and participate in clan wars.
-Clash Royale is a game that is easy to learn but hard to master. It requires strategic thinking, quick reflexes, and constant adaptation to changing situations. It is also a game that is constantly updated with new content, such as cards, modes, events, rewards, and more.
-Now that you have a general idea of what Clash Royale is, let's see what's new in the latest version of the game: Clash Royale 3.3024.2 APK. This version was released on June 19th, 2023, and it brings some exciting changes and additions to the game.
-The most noticeable change in Clash Royale 3.3024.2 APK is the introduction of two new cards: the Firecracker and the Royal Delivery. The Firecracker is a common card that costs 3 elixir and shoots fireworks that deal splash damage to enemies. The Royal Delivery is a rare card that costs 4 elixir and drops a Royal Recruit on the battlefield after a short delay. Both cards are available in Arena 7 and above.
-Another change in Clash Royale 3.3024.2 APK is the balance update that affects several cards in the game. Some of the cards that have been buffed are the Electro Dragon, the Goblin Cage, the Zappies, and the Heal Spirit. Some of the cards that have been nerfed are the Magic Archer, the Battle Healer, the Elixir Golem, and the Skeleton Barrel. You can check the full list of balance changes on the official website of Clash Royale.
-Clash Royale 3.3024.2 APK also introduces some new game modes and events that will spice up your gameplay. One of them is the Firecracker Rush, where both players start with a Firecracker on each lane, and more Firecrackers spawn throughout the match. Another one is the Royal Delivery Challenge, where you can win the new card by reaching 12 wins. There are also some seasonal events, such as the Summer of 2v2, where you can play different 2v2 modes with your friends or random partners.
Additionally, Clash Royale 3.3024.2 APK brings back some of the classic game modes that have been missing for a while, such as Triple Elixir, Ramp Up, Sudden Death, and Draft. You can play these modes in friendly battles, tournaments, or special challenges.
-Finally, Clash Royale 3.3024.2 APK offers some new rewards and improvements that will make your gaming experience more enjoyable and rewarding. One of them is the Pass Royale Season 11, which gives you access to exclusive perks, such as unlimited entries to special challenges, a golden name, a unique tower skin, and more. You can also unlock new emotes, chests, gold, gems, and cards by completing quests and tiers.
-Another improvement in Clash Royale 3.3024.2 APK is the Clan Wars 2.0 update, which is coming soon to the game. This update will revamp the clan wars system and make it more fun and competitive for all clans. You can expect new features such as boat battles, river tasks, clan leagues, and more.
-Now that you know what's new in Clash Royale 3.3024.2 APK, you might be wondering how to download and install it on your Android device. Don't worry, we have got you covered. Here is a step-by-step guide for you:
-Before you download and install Clash Royale 3.3024.2 APK, you need to make sure that your device meets the following requirements:
-You also need to enable the installation of apps from unknown sources on your device settings. To do this, follow these steps:
-Once you have met the requirements and enabled the permissions, you can proceed to download and install Clash Royale 3.3024.2 APK by following these steps:
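As part of those installation steps, it is worth sanity-checking the downloaded file before tapping install: an APK is just a ZIP archive that contains an AndroidManifest.xml entry. A minimal Python sketch of such a check (the function name is illustrative, and this does not verify the package signature):

```python
import zipfile

def looks_like_apk(path: str) -> bool:
    """Rough structural check: a valid APK is a ZIP archive
    that contains an AndroidManifest.xml entry.
    This does NOT verify the package signature."""
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as zf:
        return "AndroidManifest.xml" in zf.namelist()
```

If the check fails, the download is likely truncated or mislabeled and should be re-fetched rather than installed.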
-If you encounter any problems or errors while downloading or installing Clash Royale 3.3024.2 APK, here are some troubleshooting tips and FAQs that might help you:
-Now that you know how to download and install Clash Royale 3.3024.2 APK, you might be wondering why you should play it. Well, there are many reasons why you should play the latest version of Clash Royale, and here are some of them:
-One of the main reasons why you should play Clash Royale 3.3024.2 APK is to enjoy the new features and content that it offers. You can try out the new cards, such as the Firecracker and the Royal Delivery, and see how they fit in your deck and strategy. You can also play the new game modes and events, such as the Firecracker Rush and the Royal Delivery Challenge, and have fun with different rules and objectives. You can also explore the new seasonal events, such as the Summer of 2v2, and team up with your friends or random partners for some epic battles.
-Another reason why you should play Clash Royale 3.3024.2 APK is to compete with other players online and test your skills and knowledge. You can join or create clans, where you can chat with other players, share cards, request donations, and participate in clan wars. You can also enter tournaments, where you can play against players from around the world and win prizes. You can also climb the ladder, where you can rank up and earn trophies and rewards.
-The last reason why you should play Clash Royale 3.3024.2 APK is to have fun and challenge yourself. Clash Royale is a game that is easy to learn but hard to master. It requires strategic thinking, quick reflexes, and constant adaptation to changing situations. It is also a game that is constantly updated with new content, such as cards, modes, events, rewards, and more. You will never get bored or run out of things to do in Clash Royale.
-In conclusion, Clash Royale 3.3024.2 APK is the latest version of the game that brings a lot of new features, improvements, and bug fixes that will make your gaming experience even better. You can download and install it on your Android device by following our guide above. You can also enjoy the new cards, game modes, events, rewards, and more that it offers. You can also compete with other players online, join or create clans, enter tournaments, climb the ladder, and have fun and challenge yourself.
-So what are you waiting for? Download Clash Royale 3.3024.2 APK now and join the millions of players who are already playing this amazing game!
-Here are some frequently asked questions about Clash Royale 3.3024.2 APK:
-If you are looking for a film that entertains you with a mix of comedy and horror, you should not miss Bhool Bhulaiyaa 2, a Hindi film released on 20 May 2022. It is a standalone sequel to Bhool Bhulaiyaa (2007), a box-office hit starring Akshay Kumar and Vidya Balan. Bhool Bhulaiyaa 2 stars Tabu, Kartik Aaryan, and Kiara Advani in the lead roles, alongside Rajpal Yadav, Sanjay Mishra, Ashwini Kalsekar, and others in supporting roles. The film is directed by Anees Bazmee, written by Aakash Kaushik and Farhad Samji, and produced by Bhushan Kumar, Murad Khetani, Krishan Kumar, and Anjum Khetani under the banners of T-Series Films and Cine1 Studios.
-The plot of Bhool Bhulaiyaa 2 follows Ruhaan Randhawa (Kartik Aaryan), a fake psychic who has to deal with the return of Manjulika (Tabu), a malevolent spirit bent on taking revenge on the Thakur family. Ruhaan meets Reet Rathore (Kiara Advani), a reluctant bride on her way to Rajasthan to marry her fiancé Sagar (Amar Upadhyay). Fate leads them to an abandoned mansion where Manjulika has been trapped for 18 years by a group of priests. As Ruhaan tries to help Reet escape her family's pressure and Manjulika's wrath, he uncovers dark secrets of the past and the truth about his own identity. Can Ruhaan save Reet and himself from Manjulika's curse, or will he become her next victim?
-Download Zip ->>->>->> https://bltlly.com/2v6LWr
The film's main villain is Manjulika, a vengeful spirit who was once a dancer in the court of Thakur Vikram Singh (Rajendra Gupta). She was in love with him, but he betrayed her and married another woman. She took her own life and swore to haunt his family forever. She possessed his daughter Radhika (Vidya Balan) in the first film and tried to kill her husband Siddharth (Shiney Ahuja). She was exorcised by Dr. Aditya Shrivastav (Akshay Kumar), a psychiatrist posing as a priest.
-In Bhool Bhulaiyaa 2, Manjulika returns after 18 years, when the priests guarding her tomb are killed by thugs. She escapes her prison and finds a new host in Reet, the granddaughter of Thakur Vikram Singh. She wants revenge on the Thakur family, and also on Ruhaan, the son of Dr. Aditya Shrivastav. She uses her supernatural powers to manipulate, torment, and kill anyone who stands in her way.
-Manjulika is played by Tabu, one of Bollywood's most versatile and talented actresses. She delivers a brilliant performance as the evil spirit, shifting from seductive to terrifying in a matter of seconds. She also shows off her dancing skills in the song "Ami Je Tomar", a remix of the original Bhool Bhulaiyaa track. Tabu has said she enjoyed playing Manjulika, as it was a challenging and fun role for her.
-The film's hero is Ruhaan Randhawa, a fake psychic who claims to have supernatural abilities but actually uses tricks and gadgets to fool people. He earns money performing séances, exorcisms, and readings for his clients. He is also a flirtatious, witty person who likes to have fun and enjoy life.
-
-Ruhaan is played by Kartik Aaryan, one of Bollywood's most popular and charming actors. He gives a hilarious, heroic performance as the fake psychic who must face his fears and fight Manjulika. He also shows his chemistry with Kiara Advani, who plays Reet, in the romantic scenes and songs. Kartik Aaryan has said he was excited to be part of Bhool Bhulaiyaa 2, since the original was one of his favourite films when he was young.
The film's heroine is Reet Rathore, a reluctant bride forced to marry Sagar, a rich, arrogant businessman and the son of Thakur Vikram Singh's friend. She does not love him and wants to pursue her career as a fashion designer. She is also a kind, brave person who looks after her family and friends.
-
-Reet meets Ruhaan on a train and finds him attractive and funny. She agrees to go with him to Rajasthan to escape her family and Sagar. She also becomes the target of Manjulika, who has possessed her and wants to use her body to kill the Thakur family. She struggles to resist Manjulika's influence and to express her feelings for Ruhaan.
-Reet is played by Kiara Advani, one of Bollywood's most beautiful and talented actresses. She delivers a sweet yet strong performance as the unwilling bride who must face many challenges and dangers. She also looks stunning in the traditional costumes and jewellery she wears in the film. Kiara Advani has said she was honoured to be part of Bhool Bhulaiyaa 2, as it was a dream come true for her.
-Bhool Bhulaiyaa 2 is not a direct sequel to Bhool Bhulaiyaa but a standalone film with its own story, characters, and style. It differs from the earlier film in several ways, such as:
-While Bhool Bhulaiyaa was a remake of the Malayalam film Manichitrathazhu (1993), which was also remade in several other languages, Bhool Bhulaiyaa 2 is not a remake of any other film but an original story with new characters. It does not follow the events of the previous film, though it contains some references and connections to it. It also features cameos by Akshay Kumar and Vidya Balan, reprising their roles from Bhool Bhulaiyaa.
-While Bhool Bhulaiyaa drew on a Malayalam film and an M.R. James novel called The Mystery of the Yellow Room, Bhool Bhulaiyaa 2 draws on several sources while adding its own twists and surprises. It is loosely based on another Malayalam film, Ezra (2017), also a horror comedy about a haunted mansion and a vengeful spirit, and is influenced by Hollywood films such as The Conjuring, The Exorcist, and The Shining. It also has original elements, such as the fake-psychic character, the Rajasthan setting, and the climax scene.
-Bhool Bhulaiyaa 2 has many highlights that make it a must-watch for every kind of viewer. Some of them are:
-The film features a star cast that includes some of Bollywood's best actors. Tabu, Kartik Aaryan, and Kiara Advani give excellent performances as Manjulika, Ruhaan, and Reet respectively. They bring their characters to life with their expressions, dialogue, and actions, share great chemistry with one another, and create memorable scenes together.
-
-The film has a catchy, melodious soundtrack of six songs composed by Pritam and Tanishk Bagchi. The songs are sung by popular singers such as Arijit Singh, Shreya Ghoshal, Jubin Nautiyal, Neha Kakkar, and others, with remixes by DJ Chetas, Lijo George, and others. They mix romantic, dance, and horror styles to suit the mood and theme of the film.
-The film's title track, "Bhool Bhulaiyaa 2", is a remix of the original Bhool Bhulaiyaa song, which was composed by Pritam and sung by Neeraj Shridhar. The new version is sung by Jubin Nautiyal and Tulsi Kumar, with new lyrics by Tanishk Bagchi. It is a vigorous, energetic number featuring Kartik Aaryan and Kiara Advani dancing on a grand set with many dancers.
-Another popular song from the film is "Ami Je Tomar", also a remix of the original Bhool Bhulaiyaa track composed by Pritam and sung by Shreya Ghoshal and K.K. The new version is sung by Arijit Singh and Shreya Ghoshal, with new lyrics by Tanishk Bagchi. It is a romantic, haunting number featuring Tabu performing a classical dance in traditional costume.
-The film has stunning cinematography that showcases the beauty and mystery of Rajasthan and other locations. It was shot in places such as Jaipur, Jaisalmer, Udaipur, Lucknow, Mumbai, and London, capturing the culture, architecture, and landscape of these places with vivid colours, angles, and lighting. It also uses special effects and sets to create a realistic, eerie atmosphere for the horror scenes.
-
-The film has a thrilling climax that will keep you on the edge of your seat: a final showdown between Ruhaan and Manjulika at the mansion. Ruhaan must use his wits, courage, and gadgets to fight Manjulika's supernatural powers and save Reet from her clutches. He must also face his father, Dr. Aditya Shrivastav, who arrives on the scene to help him.
-The climax is full of twists and turns that will surprise you and make you gasp. It reveals shocking secrets about Ruhaan's past and Manjulika's motive, and it blends emotional moments that will touch your heart, action sequences that will make you cheer, and comic moments that relieve the tension.
-If you like the songs of Bhool Bhulaiyaa 2 and want to set them as the ringtone on your phone, just follow these simple steps:
-
-Congratulations! You have successfully downloaded the Bhool Bhulaiyaa 2 ringtone to your phone. Now you can enjoy the film's catchy tunes whenever your phone rings.
-Bhool Bhulaiyaa 2 is a horror comedy that will make you laugh and scream with its hilarious and frightening scenes. It has a star cast, catchy songs, stunning locations, and a thrilling climax that will keep you entertained to the end. It also differs from Bhool Bhulaiyaa in many ways and has its own originality and surprises. It is a perfect choice for anyone who loves the comedy and horror genres and wants a fun, exciting time at the movies.
-If you are interested in watching Bhool Bhulaiyaa 2, you can book your tickets online or visit your nearest cinema. You can also download ringtones of the film's songs to your phone and enjoy them any time, and follow the film's official social media pages and website for more updates and news.
-We hope you enjoyed reading this article and learned more about Bhool Bhulaiyaa 2. If you have any questions or comments, feel free to leave them in the comments section below. Thank you for your time and attention.
-If you answered yes to any of these questions, you might want to try downloading Facebook APK Android 4. This is a modified version of the original Facebook app that is compatible with Android devices running version 4.0.3 or higher. In this article, we will explain what Facebook APK Android 4 is, how to download it, how to update it, and how to troubleshoot it. Let's get started!
-An APK (Android Package Kit) is a file format that contains all the components of an Android application, such as the code, resources, assets, and manifest. You can install an APK file on your device without using the Google Play Store, which is the official source for Android apps.
-Facebook APK Android 4 is an unofficial version of the Facebook app that has been modified to run on older Android devices. It has some advantages and disadvantages compared to the official app, which we will discuss below.
-If you want to try Facebook APK Android 4 on your device, you will need to follow these steps:
-By default, Android devices do not allow the installation of apps from sources other than the Google Play Store. This is a security measure to prevent the installation of malicious or harmful apps. However, if you want to install Facebook APK Android 4, you will need to enable the option to install apps from unknown sources on your device. Here is how to do it:
-You have now enabled unknown sources on your device and can proceed to the next step.
-
-The next step is to find a reliable source for the Facebook APK Android 4 file. Be careful when choosing a source, as some websites may offer fake or infected files that can damage your device or data. Here are some tips to help you find a reliable one:
-One source we recommend for downloading Facebook APK Android 4 is APKPure, a reputable website that offers safe, verified APK files. You can also use other sources you trust, but make sure to follow the tips above.
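Whichever source you pick, if the site publishes a checksum for the file, you can confirm your download matches it before installing. A minimal Python sketch, assuming a SHA-256 hex digest is published alongside the file (function names are illustrative):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 16) -> str:
    """Stream the file in chunks so large APKs need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_published_checksum(path: str, expected_hex: str) -> bool:
    """Compare against the digest the download site publishes."""
    return sha256_of(path) == expected_hex.strip().lower()
```

A mismatch means the file was corrupted in transit or is not the file the site claims to host; in either case, do not install it.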
-The final step is to download and install the Facebook APK Android 4 file on your device. Here is how to do it:
-Congratulations! You have successfully downloaded and installed Facebook APK Android 4 on your device. You can now enjoy using the app and its features.
-If you want to keep using Facebook APK Android 4, you will need to update it regularly to get the latest features and improvements. There are two ways to update it: using the built-in updater, or downloading the latest APK file from the official website.
-Some versions of Facebook APK Android 4 have a built-in updater that lets you check for updates and download them directly from the app. Here is how to use it:
-This option is convenient and easy, but it may not work for every version of Facebook APK Android 4. If you do not see the app updates option in your settings & privacy menu, you will need to use option 2 instead.
-Another way to update Facebook APK Android 4 is to download the latest APK file from the official Facebook website. Here is how to do it:
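Either way, the decision to update comes down to comparing the installed version string with the one on offer. A small sketch, assuming dotted numeric version strings such as 4.0.3 (the helper names are illustrative):

```python
def parse_version(v: str) -> tuple:
    """Turn '4.0.3' into (4, 0, 3) so versions compare numerically,
    not lexicographically ('4.0.10' is newer than '4.0.3')."""
    return tuple(int(part) for part in v.strip().split("."))

def update_available(installed: str, latest: str) -> bool:
    """True when the offered version is strictly newer than the installed one."""
    return parse_version(latest) > parse_version(installed)
```

Comparing tuples rather than raw strings avoids the classic mistake of treating "4.0.10" as older than "4.0.3".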
-This option is reliable and safe, since you are downloading the APK file from the official source. However, it may take longer and consume more data than option 1.
-Sometimes you may run into problems or errors when using Facebook APK Android 4, such as bugs, crashes, freezes, or poor performance. Don't worry: there are ways to troubleshoot these issues. Here are some common problems and solutions:
-In this article, we have explained what Facebook APK Android 4 is, how to download it, how to update it, and how to troubleshoot it. We have also shared some tips and tricks to optimize your experience with the app. We hope you have found this article helpful and informative.
-Facebook APK Android 4 is a good alternative to the official Facebook app, especially if you have an older Android device or want access to some extra features. However, you should also be aware of the risks and challenges that come with using an unofficial app. Always download the app from trusted sources, scan it for threats, and update it regularly. You should also follow the troubleshooting steps if you run into any problems with it.
-If you have any questions or comments about Facebook APK Android 4, feel free to leave a comment below. We would love to hear from you!
-Here are some frequently asked questions about Facebook APK Android 4:
-Facebook APK Android 4 is not illegal, but it is not authorized by Facebook either. It may violate Facebook's terms of service and privacy policy, which means you risk losing your account or exposing your personal information if you use it. Use it at your own discretion and responsibility.
-Facebook APK Android 4 may not be safe, as it can contain malware, viruses, or spyware that can damage your device or steal your data. Only download it from trusted sources and scan it with an antivirus before installing it. You should also avoid granting the app unnecessary permissions or access.
-If you want to uninstall Facebook APK Android 4 from your device, follow these steps:
-You can also delete the APK file from your device's file manager if you no longer need it.
-Facebook APK Android 4 is designed for Android devices running version 4.0.3 or higher. You may be able to use it on other platforms, such as iOS or Windows, but you would need an emulator or a converter to do so. This may not work well, or at all, and can cause compatibility or performance issues. We recommend using the official Facebook app or the web version on other devices.
-      : specialize_plan_impl_loop<P, SM, sm_list<_2, _3, _4, _5, _6, _7, _8, _9> >
-  {};
-
-  // until we find first lowest match
-  template <template <class> class P, class SM, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9>
-  struct specialize_plan_impl_loop<P, SM, sm_list<_1, _2, _3, _4, _5, _6, _7, _8, _9> >
-      : specialize_plan_impl_match<P, SM, sm_list<_1, _2, _3, _4, _5, _6, _7, _8, _9> > {};
-
-  template <template <class> class P, class SM, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9>
-  struct specialize_plan_impl_match<P, SM, sm_list<_1, _2, _3, _4, _5, _6, _7, _8, _9> >
-      : thrust::detail::conditional<
-          has_sm_tuning<P, _1>::value,
-          P<_1>,
-          specialize_plan_impl_loop<P, SM, sm_list<_2, _3, _4, _5, _6, _7, _8, _9> > >::type {};
-
-  template <template <class> class Plan, class SM = THRUST_TUNING_ARCH>
-  struct specialize_plan_msvc10_war
-  {
-    // if Plan has tuning type, this means it has SM-specific tuning
-    // so loop through sm_list to find match,
-    // otherwise just specialize on provided SM
-    typedef thrust::detail::conditional<
Download File ⭐ https://urlgoal.com/2uCKBi
Download File ✅ https://urlgoal.com/2uCJMV
Have you ever lost or forgotten your passwords to various web resources and mailboxes? If so, you may need a tool that can help you recover them. One tool that can do this job is Elcomsoft Internet Password Breaker. This is a software program that can instantly reveal Internet passwords, retrieve the login and password information protecting a variety of web resources and mailboxes in various email clients, extract passwords, stored forms, and AutoComplete information from popular web browsers, and capture mailbox and identity passwords from popular email clients.
Download ➡ https://urlgoal.com/2uCN7N
But how can you get Elcomsoft Internet Password Breaker cracked? And is it safe and legal to use? In this article, we will answer these questions and provide a guide on how to download and use Elcomsoft Internet Password Breaker cracked. We will also explain what Elcomsoft Internet Password Breaker cracked is, how it works, and what the benefits and risks of using it are.
Elcomsoft Internet Password Breaker cracked is a version of Elcomsoft Internet Password Breaker that has been modified or hacked to bypass the license verification or activation process. This means you can use it without paying for it or registering it. It is usually distributed through torrent files or other file-sharing platforms. Torrent files contain information about the software and the sources you can download it from; you need a torrent client, such as uTorrent or BitTorrent, to open the torrent file and start the download. Elcomsoft Internet Password Breaker cracked may also come with a crack file or a patch file.
A crack file replaces the original executable of the software with a modified one that bypasses the license verification or activation process; a patch file modifies the original executable to achieve the same thing.

To download and use Elcomsoft Internet Password Breaker cracked, you need to follow these steps:

Congratulations! You have successfully downloaded and used Elcomsoft Internet Password Breaker cracked.

Using Elcomsoft Internet Password Breaker cracked has some benefits and risks that you should be aware of before using it. Some of the benefits are: Some of the risks are:

Therefore, we advise you to use Elcomsoft Internet Password Breaker cracked at your own risk and discretion. We are not responsible for any damages or losses that may result from using it, and we do not endorse or support any illegal or unethical activities that may be carried out with it. We provide this article for informational and educational purposes only.

In this article, we have explained what Elcomsoft Internet Password Breaker cracked is and how it works, provided a guide on how to download and use it, and discussed some of the benefits and risks of using it. We hope this article has helped you learn more about Elcomsoft Internet Password Breaker cracked and how it can help you recover your web and email passwords with ease.

Once you have downloaded and installed Elcomsoft Internet Password Breaker cracked, you can use it to recover your web and email passwords with the following steps:

That's it! You have successfully used Elcomsoft Internet Password Breaker cracked to recover your web and email passwords.
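If any of the recovered passwords turn out to be weak or reused, a sensible follow-up is to replace them with randomly generated ones. A minimal sketch using Python's secrets module (the alphabet and minimum length are illustrative policy choices, not part of the tool described above):

```python
import secrets
import string

# Illustrative character set; adjust to match each site's password rules.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def generate_password(length: int = 16) -> str:
    """Generate a password using a cryptographically secure RNG."""
    if length < 12:
        raise ValueError("use at least 12 characters")
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

Unlike the random module, secrets draws from the operating system's CSPRNG, which is the appropriate source for credentials.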
-While Elcomsoft Internet Password Breaker cracked can help you recover your web and email passwords, it can also pose a threat to your privacy and security if someone else uses it to access your web resources and mailboxes without your permission. Therefore, we recommend that you strengthen your web and email password security with the following tips:

By following these tips, you can strengthen your web and email password security and prevent unauthorized access to your web resources and mailboxes.

In conclusion, Elcomsoft Internet Password Breaker cracked is a useful tool that can recover your web and email passwords quickly and easily, but it also has risks and drawbacks that you should be aware of before using it. You should also take measures to secure your web and email passwords and prevent unauthorized access to your web resources and mailboxes. We hope this article has helped you learn more about Elcomsoft Internet Password Breaker cracked and how it works.

Deep.Freeze.Standard.v7.21.020.3447 LATEST WITH SERIAL Crime City Stories Torrent.YTL (Instantly). Deep.Freeze.Standard.v7.21.020.3447 LATEST WITH SERIAL.rar yEnc.
Download Zip ->>->>->> https://gohhs.com/2uEzid
DOWNLOAD >>>>> https://gohhs.com/2uEyIV
Blockman Go is a popular sandbox game that offers a variety of mini-games, modes, and social features. However, some players may want to hack the game to get an unfair advantage or access premium content.
In this article, we will explain what Blockman Go is, why some players want to hack it, how to download a Blockman Go hack safely, and what the risks of using a hack are.

DOWNLOAD ———>>> https://ssurll.com/2uNR1q

Blockman Go is a free app that lets you play, craft, and share fun experiences with your friends. You can play various block-style mini-games, such as Bed Wars, Sky Block, Anime Fighting Simulator, Egg War, and more. You can also create your own world with blocks and invite others to join you.

Blockman Go is a sandbox game that gives you the freedom to explore different worlds and scenarios. You can choose from hundreds of mini-games that suit your preferences and skills. You can also join different modes, such as action, adventure, role-playing, strategy, simulation, and more. You can compete with other players or cooperate with them to achieve common goals.

Blockman Go is also a social platform that lets you interact with other players from around the world. You can customize your virtual character with thousands of clothing items, accessories, hats, and other items. You can also chat with your friends, join groups, manage clans, and earn titles, and share your creations and feedback with the community.

Some players may want to hack Blockman Go for various reasons. Some of the most common ones are:

Gold and diamonds are the main currencies in Blockman Go. You can use them to buy outfits, items, skins, pets, and more, and to unlock premium features and games. However, gold and diamonds are not easy to earn: you have to play games, watch ads, complete tasks, or spend real money to get them. Some players may want to hack the game to get unlimited gold and diamonds without spending time or money.

Some mini-games in Blockman Go are competitive and require skill and strategy.
For example, in Bed Wars, you have to protect your bed from being destroyed by other teams while trying to destroy theirs. In Sky Wars, you have to survive on floating islands while fighting other players. Some players may want to hack the game to cheat in these games by using mods or scripts that give them advantages such as speed hack, fly hack, hitbox expand, fast break blocks, etc. Some features and items in Blockman Go are only available for VIP members or GCube users. GCubes are another currency in the game that can be bought with real money or earned by inviting friends or completing surveys. VIP members get benefits such as extra gold and diamonds, exclusive outfits and items, free entry to premium games, etc. Some players may want to hack the game to access these features and items without paying or earning GCubes. If you still want to download a Blockman Go hack safely, you need to be careful and follow some tips. Here are some of them: Some websites or apps may claim to offer free or working Blockman Go hacks, but they may be scams or malware. They may ask you to click on links or ads that lead to malicious sites or downloads. They may also ask you to fill out surveys, enter personal information, or give permissions that compromise your security. You should avoid these links and ads and only download from trusted sources. One way to find a safe and reliable Blockman Go hack is to use curated software lists and reviews. These are websites or apps that provide information, ratings, feedback, and download links for various software, including hacks. You can use these lists and reviews to compare different options, check their features, compatibility, performance, and user experience. You can also use them to verify the authenticity and safety of the hacks before downloading them. Another way to protect your device and data from harmful Blockman Go hacks is to use Google Play Protect and antivirus software. 
Google Play Protect is a built-in service that scans your device for potentially harmful apps and warns you before installing them. You can enable it by going to Settings > Google > Security > Google Play Protect. Antivirus software is a third-party app that detects and removes viruses, malware, spyware, and other threats from your device. You can download a reputable antivirus app from the Google Play Store and run regular scans.

Finally, you should check the app permissions and privacy policy of the Blockman Go hack before downloading it. App permissions are the access rights that the app requests from your device, such as camera, microphone, location, contacts, and so on. You should review these permissions and only grant them if they are necessary and reasonable for the app's functionality. You can manage them by going to Settings > Apps > Blockman Go Hack > Permissions. The privacy policy is the document that explains how the app collects, uses, shares, and protects your personal data. You should read this policy carefully and understand what data the app collects, why it collects it, how it uses it, who it shares it with, and how it safeguards it. You can find this policy on the app's website or in the app itself.

Even if you download a Blockman Go hack safely, you may still face some risks by using it. Some of these risks are:

Some Blockman Go hacks may contain malware that can infect your device and steal your data. Malware is malicious software that can harm your device's performance, functionality, security, or privacy. It can also access your personal data, such as photos, videos, messages, contacts, passwords, bank accounts, etc., and send it to hackers or third parties without your consent. This can result in identity theft, fraud, blackmail, or other crimes.

Some Blockman Go hacks may violate the game's terms of service and rules of conduct. These are the agreements and guidelines that you accept when you download and play the game.
They state what you can and cannot do in the game, such as cheating, hacking, exploiting glitches, abusing others, etc. If you use a hack that breaks these rules, you may face consequences such as account suspension or a ban. This means that you will lose access to your account, progress, items, friends, etc., permanently or temporarily.

Some Blockman Go hacks may give you an unfair advantage over other players in competitive games. This may ruin the fun and balance of the game for yourself and others. It may also damage your reputation among other players, who may report you or avoid playing with you. You may also miss out on the challenge and satisfaction of playing the game legitimately.

Blockman Go is a fun and creative sandbox game that offers many mini-games, modes, and social features. However, some players may want to hack the game to get unlimited gold and diamonds, cheat in competitive games, or access premium features and items. To download a Blockman Go hack safely, you need to avoid unsolicited links and ads, use curated software lists and reviews, use Google Play Protect and antivirus software, and check the app permissions and privacy policy. However, using a Blockman Go hack may also expose you to risks such as malware infection and data theft, account suspension and bans, unfair gameplay, and a bad reputation. Therefore, you should think twice before using a Blockman Go hack and weigh the pros and cons. You may also enjoy the game more by playing it fairly and honestly.

Here are some frequently asked questions about Blockman Go and hacks:

If you love racing games and want to enjoy them on your Android device, you might be wondering how to find and download the best car games as APK files. APK files are Android application packages that can be installed on your device without using the Google Play Store. This way, you can access games that are not available in your region, or get the latest updates before they are officially released.
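Whenever you install an APK obtained outside the Play Store, a quick integrity check helps confirm the file was not tampered with on its way to your device. The sketch below shows the idea in Python; it only works when the download site publishes an official SHA-256 digest, and the file name and digest used here are hypothetical placeholders:

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks to keep memory low."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_apk(path: str, expected_sha256: str) -> bool:
    """Compare a downloaded APK's digest against the one the site publishes."""
    return sha256_of(path) == expected_sha256.strip().lower()


if __name__ == "__main__":
    # Hypothetical example: digest copied from the download page.
    published = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
    if verify_apk("car_game.apk", published):
        print("Checksum matches - safe to proceed to a malware scan.")
    else:
        print("Checksum mismatch - discard the file.")
```

If the digest does not match the one shown on the download page, delete the file rather than installing it; a mismatch means the download was corrupted or altered.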
In this article, we will explain what car games are, why you might download them as APK files, and how to do it safely and easily. We will also review the top 5 car games that you can download as APK files in 2021, based on their features, graphics, gameplay, and user ratings. Let's get started!

Car games are a genre of video games that involve driving or racing cars on various tracks, roads, or environments. Car games can be realistic or arcade-style, depending on the level of simulation and physics. Some car games focus on speed and adrenaline, while others emphasize customization and tuning. Car games can also have different modes, such as career, multiplayer, or free roam.

Downloading car games as APK files has several advantages over using the Google Play Store; however, it also has some risks and drawbacks.

To download and install car games APK files, you need to follow these steps:

Now that you know how to download and install car games APK files, you might be wondering which games are worth trying. Here are our top 5 picks for the best car games to download as APK files in 2021, based on their features, graphics, gameplay, and user ratings.

Racing in Car 2021 is a realistic and immersive car game that lets you drive your car in the cockpit view. You can choose from different cars and environments, and enjoy the realistic driving physics and sound effects. You can also customize your car with paint, wheels, and stickers.

Extreme Car Driving Simulator is an open-world car game that lets you drive, drift, and explore a huge city. You can drive freely with no rules or traffic, or complete various missions and challenges. You can also customize your car with different colors, rims, and vinyls.

Ultimate Car Driving Simulator is a fun and addictive car game that combines realism and fun.
You can drive your car in a big open world with realistic physics, graphics, and sound effects. You can also customize your car with various parts, paint, and vinyls.

Race Master 3D - Car Racing is a thrilling and fast-paced car game that tests your driving skills and reflexes. You can race against other players or the AI on different tracks and environments. You can also upgrade your car with better engines, tires, brakes, and more.

Need for Speed™ No Limits is a legendary car game that lets you race and customize your dream cars in an underground street racing scene. You can compete with other players or the cops in various modes and events. You can also collect and upgrade over 100 real cars from top brands.

Car games are a popular and exciting genre of video games that let you drive and race cars on your Android device. Downloading car games as APK files can give you access to games that are not available on the Play Store, or get the latest updates before they are officially released. However, you need to be careful about the source and compatibility of the APK files, and the risks involved in installing them. In this article, we have reviewed the top 5 car games that you can download as APK files in 2021, based on their features, graphics, gameplay, and user ratings.

We hope you enjoyed this article and found it helpful. If you have any questions or feedback, please let us know in the comments section below. Happy racing!

Some of the benefits are:

Some of the risks are:

To download and install car games APK files safely and easily, you need to follow these steps:

The best car games to download as APK files in 2021 are:

Different car games have different options for customizing your car.
Some of the common options are:

Some tips to improve your driving skills in car games are:

If you are a fan of anime action RPG games, you might want to try Dragon Ball Legends, a popular mobile game based on the iconic Dragon Ball series. In this game, you can collect and train hundreds of characters, compete in real-time battles against players from around the world, and enjoy an original story with voice acting from the original anime cast. In this article, we will show you how to download and install Dragon Ball Legends on your iPad, as well as how to update it and troubleshoot any issues.

Dragon Ball Legends is a free-to-play anime action RPG game developed by Bandai Namco Entertainment. It was released in May 2018 for iOS and Android devices. The game features:

To play Dragon Ball Legends on your iPad, you need to have:

The game is compatible with most iPad models, including iPad Air, iPad Air 2, iPad Pro, iPad mini 2, iPad mini 3, iPad mini 4, iPad (5th generation), iPad (6th generation), iPad (7th generation), iPad (8th generation), and iPad mini (5th generation). However, some older models may experience performance issues or crashes. You can check the compatibility of your device on the App Store page.

To download and install Dragon Ball Legends from the App Store, follow these steps:

To open and play Dragon Ball Legends on your iPad, follow these steps:

If you have more than one iOS device, you can use iCloud to sync your apps across them. This way, you can access your apps from any device without having to download them again. You can also save your game progress and data in iCloud, so you can resume playing where you left off. To use iCloud, you need to have:

To access and download Dragon Ball Legends from iCloud, follow these steps:

To enjoy the best gaming experience, you should always update Dragon Ball Legends to the latest version.
Updating the game will give you access to new features, content, events, characters, items, and more. It will also fix any bugs or errors that may affect the game's performance or functionality. You can update the game from the App Store or iCloud, depending on how you downloaded it. To check and update Dragon Ball Legends from the App Store or iCloud, follow these steps:

In this article, we have shown you how to download and install Dragon Ball Legends on your iPad, as well as how to update it and troubleshoot any issues. Dragon Ball Legends is a fun and exciting anime action RPG game that lets you collect and train hundreds of characters, compete in real-time battles against players from around the world, and enjoy an original story with voice acting from the original anime cast. If you are a fan of Dragon Ball or anime games in general, you should definitely give it a try!

To download Dragon Ball Legends on your iPad, you can either use the App Store or iCloud, depending on your preference and convenience. To update Dragon Ball Legends on your iPad, you can also use either method. To play Dragon Ball Legends on your iPad, you need to have an iOS device running iOS 10 or later, at least 2 GB of free space on your device, a stable internet connection (Wi-Fi or cellular), and an Apple ID account. The game is compatible with most iPad models, but some older models may experience performance issues or crashes.

We hope this article has been helpful and informative for you. If you have any questions or feedback about Dragon Ball Legends or this article, please feel free to leave a comment below. We would love to hear from you!

A1: The initial download size of Dragon Ball Legends is about 200 MB on your device, but it may increase as you play the game and download more data. You can check the current size of the game on your device by going to Settings > General > iPad Storage and finding Dragon Ball Legends in the list of apps.
You can also delete some data from the game by going to Menu > Other > Options > Delete Cache Data in the game settings.

A2: You can play Dragon Ball Legends with your friends online in several ways. You can join co-op battles with your friends by going to Menu > Co-Op and selecting a battle mode, and invite your friends to join your lobby by tapping on the Invite button and sharing your lobby code or link. You can also join PvP battles with your friends by going to Menu > PvP and selecting a battle mode, and invite your friends to join your room by tapping on the Invite button and sharing your room code or link. To play with your friends online, you and your friends need to have a stable internet connection (Wi-Fi or cellular) and be in the same region.

A3: You can get more characters and items in Dragon Ball Legends in various ways. You can summon new characters and items by using crystals, tickets, or coins. You can earn these currencies by completing missions, events, stories, and challenges, or by logging in daily. You can also buy them with real money through in-app purchases or subscriptions. You can also get more characters and items by playing co-op or PvP battles, exchanging medals or souls, or participating in tournaments or campaigns.

A4: If you have any issues with the game, such as bugs, errors, crashes, or account problems, you can contact the support team by going to Menu > Other > Support and tapping on the Contact Us button. You can also access the FAQ section, the Terms of Service, the Privacy Policy, or the User Agreement from there. You can also visit the support page at https://bnfaq.channel.or.jp/contact/faq_list/1925 or the official website at https://dble.bn-ent.net/en/.

A5: You can find more information and tips about Dragon Ball Legends by visiting the official social media accounts of the game.
You can follow them on Facebook at https://www.facebook.com/DBLegends.Official/, on Twitter at https://twitter.com/db_legends, on Instagram at https://www.instagram.com/dragonballlegends/, or on YouTube at https://www.youtube.com/channel/UCDmFkuRZSbxyvqdK-cjMSog. You can also join the official Discord server at https://discord.gg/dbl or the official Reddit community at https://www.reddit.com/r/DragonballLegends/. You can also check out some fan-made websites or blogs that offer guides, reviews, news, or discussions about the game.

If you are a fan of soccer games and business communication solutions, you might have heard of 3CX 11 and PES 2022. These are two popular products that offer amazing features and gameplay for their users. However, they also come with a hefty price tag that might deter some people from enjoying them. That's why some people resort to using a crack keygen to bypass the license verification and activation process. But what is a crack keygen and how does it work? Is it legal and safe to use? And what are the pros and cons of using one? In this article, we will answer these questions and more. Read on to find out everything you need to know about 3CX 11 Crack Keygen PES.

Before we dive into the details of the crack keygen, let's first introduce the two products that it is designed for: 3CX 11 and PES 2022.

3CX 11 is a cloud-based or on-premise PBX business communications solution that was redesigned for the remote workplace. It offers flexible and cost-effective unified communications features, such as VoIP phone calls, video conferencing, live website chat, Facebook Messenger and SMS texting integrations, mobile applications, and compatibility with third-party tools like Microsoft Teams. Users can manage inbound and outbound calls directly from their web browsers or via the 3CX App for Windows, Linux, iOS, and Android. They can also use IP phones from providers like Yealink and Grandstream.
Some of the key features of 3CX 11 include:

The benefits of using 3CX 11 include:

PES 2022, also known as eFootball PES 2022, is the latest installment of the popular soccer simulation game series developed by Konami. It was released on September 30, 2022 for Windows, PlayStation 4, PlayStation 5, Xbox One, Xbox Series X/S, Android, and iOS. It features improved graphics, gameplay, modes, licenses, and online features compared to its previous versions. Some of the key features of PES 2022 include:

So, as you can see, 3CX 11 and PES 2022 are not cheap products. They can cost you hundreds of dollars per year or even more if you want to access all their features and content. That's why some people might want to use a crack keygen for these games.

A crack keygen is a software tool that can generate a valid license key for a product without paying for it. A license key is a code that verifies that you have purchased the product and allows you to activate and use it. A crack keygen can bypass the license verification and activation process by creating a fake or modified license key that tricks the product into thinking that you are a legitimate user.

A crack keygen usually works by exploiting a vulnerability or flaw in the product's security system. For example, it might reverse-engineer the algorithm that generates the license keys, or it might modify the product's files or registry entries to disable the license check. A crack keygen can also come with a patch or a loader that modifies the product's executable file or memory to bypass the license verification and activation process.

A crack keygen can be downloaded from various websites or forums that offer pirated software. However, not all crack keygens are reliable or safe. Some of them might not work properly, or they might contain malware, viruses, spyware, or other harmful programs that can damage your computer or steal your personal information. Therefore, using a crack keygen is risky and not recommended.
If you still want to use a crack keygen for 3CX 11 and PES 2022, despite the risks and drawbacks, here are the steps that you need to follow: The first step is to find a reliable source for the crack keygen. This is not easy, as there are many fake or malicious websites that claim to offer crack keygens but actually deliver malware or scams. You need to be careful and do some research before downloading anything from the internet. Here are some tips that can help you find a reliable source for the crack keygen: One example of a reliable source for the crack keygen is , which provides all the information, comments, feedback, virus scan results, and download links for 3CX 11 Crack Keygen PES. However, this is not an endorsement of this website or its content, and we are not responsible for any consequences of using it. The second step is to run the crack keygen and generate a license key. This step may vary depending on the specific crack keygen that you use, but here are some general instructions that can apply to most cases: The third step is to activate 3CX 11 and PES 2022 with the license key that you generated in the previous step. This step may also vary depending on the specific product and version that you use, but here are some general instructions that can apply to most cases: The fourth step is to use 3CX 11 and PES 2022 with the crack keygen. This step is similar to using the products normally, except that you have access to all their features and content without paying for them. Here are some tips on how to use 3CX 11 and PES 2022 with the crack keygen: To access the video conferencing, live chat, and SMS features of 3CX 11, you need to do the following: To customize your team, play online matches, and enjoy the realistic graphics of PES 2022, you need to do the following: The fifth step is to weigh the pros and cons of using 3CX 11 Crack Keygen PES. As with any decision, there are advantages and disadvantages of using a crack keygen for these products. 
Here are some of them: The main advantages of using the crack keygen are: The main disadvantages of using the crack keygen are: In conclusion, 3CX 11 Crack Keygen PES is a software tool that can generate a valid license key for 3CX 11 and PES 2022, two popular products that offer amazing features and gameplay for their users. However, using a crack keygen is risky and not recommended, as it has many disadvantages and drawbacks that outweigh its advantages and benefits. Therefore, we suggest that you look for other alternatives that are legal and safe, such as buying or subscribing to these products from their official websites or sources, or looking for other products that suit your needs and budget. Here are some frequently asked questions about 3CX 11 Crack Keygen PES: A crack keygen is a software tool that can generate a valid license key for a product without paying for it. A license key is a code that verifies that you have purchased the product and allows you to activate and use it. A crack keygen can bypass the license verification and activation process by creating a fake or modified license key that tricks the product into thinking that you are a legitimate user. A crack keygen usually works by exploiting a vulnerability or flaw in the product's security system. No, it is not legal and safe to use a crack keygen for 3CX 11 and PES 2022. By using a crack keygen, you are violating the terms and conditions of these products and infringing their intellectual property rights. This is illegal and unethical, and you could face legal consequences if you are caught or reported. You are also exposing your computer and personal information to potential malware, viruses, spyware, or other harmful programs that might be hidden in the crack keygen file or download link. These programs could damage your computer, steal your data, or compromise your privacy. 
You are also exposing your system and communications to hackers or cybercriminals who might exploit the vulnerability or flaw in the product's security system.

The system requirements for running 3CX 11 and PES 2022 with the crack keygen are the same as the system requirements for running these products normally. However, you might need more disk space and memory to accommodate the crack keygen file and its modifications. Here are the minimum and recommended system requirements for running 3CX 11 and PES 2022 on Windows:

To update 3CX 11 and PES 2022 with the latest patches and data, you need to do the following:

To find more information and support for 3CX 11 and PES 2022, you can visit their official websites or contact their customer service teams. Here are some links that might be useful:

I hope this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and have a great day!

If you are an automotive professional, you know how important it is to have accurate and up-to-date information on various vehicles, systems, components, and procedures. Whether you are a mechanic, a technician, a workshop owner, or a student, you need a reliable source of technical data and diagrams, service and repair instructions, troubleshooting tips, and more. That's where Autodata 3.40 Full Version comes in handy. Autodata 3.40 Full Version is a software package that provides comprehensive and detailed information on over 17,000 models of cars, vans, trucks, motorcycles, and more. It covers vehicles from over 80 manufacturers worldwide, from 1959 to 2011.

In this article, we will show you what Autodata 3.40 Full Version is, what features and benefits it offers, how to use it, how to change its language, and where to download it.
By the end of this article, you will have a clear idea of how Autodata 3.40 Full Version can help you in your automotive work. Autodata 3.40 Full Version is a software that provides technical data and diagrams, service and repair procedures, troubleshooting guides, fault codes, key programming information, wiring diagrams, timing belt and chain diagrams, component testing information, and more for various vehicles. Some of the features and benefits of Autodata 3.40 Full Version are: The system requirements for Autodata 3.40 Full Version are: The installation of Autodata 3.40 Full Version is simple and straightforward. Here are the steps to follow: Autodata 3.40 Full Version is easy to use and navigate. You can access any information you need by using the menu bar, the toolbar, or the search function. Here are some of the main functions and features of Autodata 3.40 Full Version: To access technical data and diagrams, you can use one of the following methods: After you select a vehicle, you will see a list of categories and subcategories of technical data and diagrams available for that vehicle. You can click on any category or subcategory to view the information. For example, you can click on Engine Management -> Fuel Injection System -> Components to view the components of the fuel injection system for that vehicle. You can also use the tabs at the bottom of the screen to switch between different types of technical data and diagrams, such as Specifications, Service Intervals, Wiring Diagrams, Timing Belt/Chain Diagrams, etc. To perform service and repair procedures, you can use one of the following methods: After you select a vehicle, you will see a list of categories and subcategories of service and repair procedures available for that vehicle. You can click on any category or subcategory to view the instructions. For example, you can click on Engine -> Oil Change -> Procedure to view the procedure for changing the oil for that vehicle. 
You can also use the tabs at the bottom of the screen to switch between different types of service and repair procedures, such as Maintenance Schedules, Service Illustrations, Service Procedures, etc. To troubleshoot common problems and faults, you can use one of the following methods: After you select a vehicle, you will see a list of categories and subcategories of problems and faults available for that vehicle. You can click on any category or subcategory to view the possible causes and solutions. For example, you can click on Engine Management -> Engine Misfire -> Causes and Solutions to view the causes and solutions for engine misfire for that vehicle. You can also use the tabs at the bottom of the screen to switch between different types of problems and faults, such as Fault Codes, Key Programming, Component Testing, etc. Autodata 3.40 Full Version has a multilingual option that allows you to choose between English or Spanish languages. However, if you want to change the language to another one, such as French, German, Italian, Portuguese, etc., you will need to download and install a language pack for that language. If you have installed Autodata 3.40 Full Version in Spanish and you want to change it to English, you will need to download and install the English language pack. Here are the steps to follow: If you have installed more than one language pack for Autodata 3.40 Full Version, you can easily switch between them by following these steps: Autodata 3.40 Full Version is not available on the official website of Autodata anymore, as it has been replaced by newer versions such as Autodata Online or Autodata CDA-3. However, you can still download Autodata 3.40 Full Version from some alternative sources on the Internet. Here are some of them: The Internet Archive is a non-profit digital library that preserves and provides access to millions of free books, movies, music, software, and more. 
You can download Autodata 3.40 Full Version from the Internet Archive by following these steps: MOTORCARSOFT.COM is a website that provides free downloads of various automotive software, such as diagnostic tools, repair manuals, tuning software, etc. You can download Autodata 3.40 Full Version from MOTORCARSOFT.COM by following these steps: Lionn Auto Softwares is a website that provides free downloads of various automotive software, such as diagnostic tools, repair manuals, tuning software, etc. You can download Autodata 3.40 Full Version from Lionn Auto Softwares by following these steps: Autodata 3.40 Full Version is a software that provides comprehensive and detailed information on over 17,000 models of cars, vans, trucks, motorcycles, and more. It covers vehicles from over 80 manufacturers worldwide, from 1959 to 2011. It has an easy-to-use interface that allows you to access technical data and diagrams, service and repair procedures, troubleshooting guides, fault codes, key programming information, wiring diagrams, timing belt and chain diagrams, component testing information, and more. It has a multilingual option that allows you to choose between English or Spanish languages. It has a print function that allows you to print any information you need. It has a zoom function that allows you to enlarge any diagram or image for better viewing. It has a bookmark function that allows you to save any information you want for later reference. It has an update function that allows you to check for any updates online. If you are an automotive professional, you will find Autodata 3.40 Full Version very useful and helpful in your work. You can download Autodata 3.40 Full Version from one of the sources we mentioned in this article, such as the Internet Archive, MOTORCARSOFT.COM, or Lionn Auto Softwares. You can then install and activate Autodata 3.40 Full Version following the steps we explained in this article. 
You can then use Autodata 3.40 Full Version to access any information you need on various vehicles, systems, components, and procedures. We hope this article has given you a comprehensive guide on Autodata 3.40 Full Version and how to use it. If you have any questions or comments, feel free to leave them below. Here are some of the frequently asked questions about Autodata 3.40 Full Version: A: Autodata 3.40 Full Version is a software that was originally developed and distributed by Autodata Limited, a company based in the UK that provides automotive data and software solutions. However, Autodata Limited has discontinued Autodata 3.40 Full Version and replaced it with newer versions such as Autodata Online or Autodata CDA-3. Therefore, Autodata 3.40 Full Version is no longer available on the official website of Autodata Limited or any authorized dealers or distributors. The sources we mentioned in this article are alternative sources that provide free downloads of Autodata 3.40 Full Version for personal use only. We do not endorse or support any of these sources and we do not take any responsibility for any legal issues that may arise from using them. If you want to use Autodata 3.40 Full Version legally and professionally, you should contact Autodata Limited and purchase one of their newer versions such as Autodata Online or Autodata CDA-3. A: The sources we mentioned in this article are alternative sources that provide free downloads of Autodata 3.40 Full Version for personal use only. We do not endorse or support any of these sources and we do not take any responsibility for any safety issues that may arise from using them. You should always scan any file you download from the Internet with a reliable antivirus software before opening it or installing it on your PC. You should also backup your PC before installing any software on it. 
If you want to use Autodata 3.40 Full Version safely and professionally, you should contact Autodata Limited and purchase one of their newer versions such as Autodata Online or Autodata CDA-3. A: Autodata 3.40 Full Version has an update function that allows you to check for any updates online. However, since Autodata Limited has discontinued Autodata 3.40 Full Version and replaced it with newer versions such as Autodata Online or Autodata CDA-3, you may not find any updates for Autodata 3.40 Full Version anymore. If you want to use the latest and most updated version of Autodata, you should contact Autodata Limited and purchase one of their newer versions such as Autodata Online or Autodata CDA-3. A: To uninstall Autodata 3.40 Full Version, you can use one of the following methods: After you uninstall Autodata 3.40 Full Version, you should also delete any leftover files or folders in your PC that are related to Autodata 3.40 Full Version. A: If you have any questions, comments, feedback, or issues regarding Autodata 3.40 Full Version or any of their newer versions such as Autodata Online or Autodata CDA-3, you can contact Autodata Limited by using one of the following methods: A: If you are looking for more automotive software for free, you can check out some of the websites we mentioned in this article, such as the Internet Archive, MOTORCARSOFT.COM, or Lionn Auto Softwares. You can also search online for other websites that provide free downloads of various automotive software, such as diagnostic tools, repair manuals, tuning software, etc. However, you should always be careful when downloading any software from the Internet, as some of them may contain viruses, malware, spyware, or other harmful elements that may damage your PC or compromise your privacy. You should always scan any file you download from the Internet with a reliable antivirus software before opening it or installing it on your PC. 
You should also back up your PC before installing any software on it. If you want to use automotive software legally and safely, contact the original developers or distributors of the software and purchase a licensed version from them.

Plants Vs Zombies 2 is a popular casual game that challenges you to defend your house from hordes of zombies using various plants and weapons. The game was originally released for mobile devices, but you can also play it on a PC running Windows 7. In this article, we will show you how to download and install Plants Vs Zombies 2 on your PC using an emulator called BlueStacks.

BlueStacks is a program that runs Android apps and games on your PC or Mac. It creates a virtual environment that mimics an Android device, so you can access the Google Play Store and download your favorite apps and games. BlueStacks is free to download and use, and it offers many features and enhancements that improve your gaming experience.

To get started, download the BlueStacks installer from the official BlueStacks website and run it. Then open the Google Play Store inside BlueStacks, search for Plants Vs Zombies 2, and install it.

Playing Plants Vs Zombies 2 on a PC has several advantages over playing on a mobile device, such as a bigger screen and more comfortable controls. It is a fun and addictive game that will keep you entertained for hours, and BlueStacks makes downloading and installing it easy while enhancing your gaming experience with its features and optimizations. Download BlueStacks today and start playing Plants Vs Zombies 2 on your PC!

Plants Vs Zombies 2 is the sequel to the original Plants Vs Zombies game, released in 2009. The game is developed by PopCap Games and published by Electronic Arts.
The game follows the adventures of Crazy Dave, a quirky inventor who travels back and forth in time with his talking car Penny to fight zombies in different historical periods. The game features 11 worlds, each with its own theme, plants, zombies, and challenges, as well as various modes such as Adventure, Arena, Penny's Pursuit, and Epic Quests.

The gameplay of Plants Vs Zombies 2 is similar to the first game: you plant various types of plants on your lawn to stop the zombies from reaching your house and eating your brains. Each plant has its own abilities and costs a certain number of suns to plant; suns are generated by sunflowers or fall from the sky periodically. You can also use plant food to boost your plants' power, or power-ups to help you in difficult situations. The game has a day-and-night cycle and different weather conditions that affect the gameplay, and you can collect coins, gems, piñatas, and seed packets to unlock new plants, upgrade your existing ones, or buy items from the shop. Mastering these mechanics will help you beat all the levels and zombies.

Microsoft Security Essentials (MSE) is a free antivirus program that protects your computer from malware and other threats. However, MSE can also cause your computer to run slower than usual, especially if it is outdated or has too many files to scan. In this article, we will show you how to fix MSE and speed up your slow-running computer by removing "Microsoft Secur".

Microsoft Secur is a fake antivirus program that pretends to be MSE and tries to trick you into buying its full version. It can infect your computer through malicious downloads or websites, and it can display fake alerts and scan results. Microsoft Secur can also slow down your computer by using up your system resources and blocking legitimate programs.
To fix MSE and speed up your slow-running computer, you need to remove Microsoft Secur, then keep MSE updated and scan your computer regularly to prevent any future infections. Routine maintenance can further improve your computer's security and performance; if your computer is still running slow or experiencing problems after that, you may need to contact a professional technician for help.

Jai Hanuman is a blockbuster Indian television series based on the life of Lord Hanuman, the Hindu monkey god and devotee of Lord Rama. The series was written and composed by the late Shri Ravindra Jain and aired from 1997 to 2000. The title song of the series, "Mangal Ko Janme Mangal Hi Karte Mangal Mai Hanuman", is a powerful and catchy tribute to the auspiciousness and heroism of Hanuman.

The song begins with a chorus of "Jai Hanuman" (Hail Hanuman), followed by a narration of his birth story. The lyrics describe how Hanuman was born on a Tuesday (Mangalwar), which is considered a day of good luck and prosperity in Hindu culture. The song also praises Hanuman's strength, intelligence, courage, devotion, humility, and service, and it ends with a repetition of "Jai Hanuman" and a request to Hanuman to bless the listeners with his grace.

The song is sung by Mohammed Aziz, a renowned playback singer who has lent his voice to many Bollywood films; his deep, resonant voice conveys reverence and admiration for Hanuman. The music is composed by Ravindra Jain, who was known for his devotional songs and music direction for many mythological shows.
The music is upbeat and energetic, blending traditional instruments like tabla, harmonium, flute, and sitar with modern sounds like drums, guitar, and keyboard. The music creates a mood of celebration and joy, reflecting Hanuman's playful and adventurous nature. The title song of Jai Hanuman is one of the most popular and memorable songs of Indian television history. It has been viewed millions of times on YouTube and other platforms. It has also been covered by many singers and musicians, such as Vikrant Marwa, Amit Joshi, Panchgavya Dr.Sujeet Kumar Sharma, and others. The song has inspired many devotees of Hanuman to chant his name and seek his blessings. The song is a testament to the enduring appeal and relevance of Hanuman in Indian culture and spirituality. The title song of Jai Hanuman is not only a musical masterpiece, but also a visual treat. The song is accompanied by stunning visuals of Hanuman's various feats and adventures, such as lifting the Dronagiri mountain, crossing the ocean to Lanka, burning Lanka with his tail, fighting Ravana and his army, and rescuing Sita. The song also shows Hanuman's devotion to Rama and his role in the Ramayana. The song captures the essence of Hanuman's character and his significance in Hindu mythology. The title song of Jai Hanuman is also a tribute to the talented cast and crew of the series. The series was directed by Sanjay Khan, who also wrote the script and played the role of Vali. The series featured many famous actors, such as Raj Premi as Hanuman, Siraj Mustafa Khan as Rama, Shilpa Mukherji as Sita, Irrfan Khan as Valmiki, Mukesh Khanna as Bhagiratha, Jaya Bhattacharya as Lakshmi, Ravi Kishan as Krishna, and many more. The series also had a huge production value, with elaborate sets, costumes, and special effects. The series was a huge success and won many awards and accolades. The title song of Jai Hanuman is a song that has touched the hearts of millions of Indians. 
It is a song that celebrates the glory and grace of Hanuman, the supreme devotee of Rama; a song that inspires faith, courage, and service in its listeners; a song that transcends time and space and connects generations of devotees. It is truly mangal (auspicious) in every sense.

Jira 6 is the latest version of Jira Software, the #1 software development tool used by agile teams. Jira 6 comes with many new features and improvements that make it easier, faster, and more enjoyable to plan, track, release, and report on your projects. Here are some of the reasons to upgrade to Jira 6 today.

Jira 6 has a new look and feel that is modern, clean, and consistent across all Atlassian products. The new design makes it easier to navigate, find what you need, and focus on your work, and you can customize your dashboard with gadgets that show the most relevant information for your projects.

Jira 6 is also faster than ever before. It has been optimized for speed and scalability, so you can work with large and complex projects without lag or downtime, and it supports the latest browsers and devices, so you can access your projects from anywhere and on any screen.

Jira 6 is built for agile teams. It supports Scrum, Kanban, and hybrid methodologies, and lets you create and manage agile boards that suit your workflow. You can also integrate Jira 6 with other Atlassian products like Confluence, Bitbucket, and Trello, as well as thousands of apps from the Atlassian Marketplace, streamlining your agile processes and helping you collaborate better with your team.

Jira 6 is adaptable to any type of project and team. You can start with ready-made templates or create your own from scratch, and you can customize every aspect of Jira 6 to fit your needs, such as workflows, fields, screens, permissions, notifications, and more.
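Much of this customization can also be driven programmatically through Jira's REST API. As a minimal sketch (the server URL and project key below are hypothetical placeholders; the endpoint path and field names follow the public Jira REST API v2), this is roughly the JSON body that creating an issue via `POST /rest/api/2/issue` expects:

```python
import json

# Hypothetical server URL -- replace with your own Jira instance.
JIRA_URL = "https://jira.example.com/rest/api/2/issue"

def build_issue_payload(project_key, summary, description, issue_type="Task"):
    """Build the JSON body expected by POST /rest/api/2/issue."""
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": issue_type},
        }
    }

# "DEMO" is a made-up project key for illustration.
payload = build_issue_payload(
    "DEMO", "Upgrade staging to Jira 6", "Track the upgrade tasks."
)

# The actual request would POST json.dumps(payload) to JIRA_URL with a
# "Content-Type: application/json" header, authenticated as a Jira user.
print(json.dumps(payload, indent=2))
```

This is only an illustration of the request shape, not a complete client; authentication and error handling are omitted.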
Jira 6 also lets you create custom reports and dashboards to track and visualize your project progress and performance.

If you are already using Jira Software Server or Data Center, you can upgrade to Jira 6 by following the upgrade instructions in the Atlassian documentation. If you are using Jira Software Cloud, you don't need to do anything: your instance will be updated to Jira 6 automatically. If you are new to Jira Software, you can try it free for 30 days by signing up on the Atlassian website. Don't miss this opportunity to take your project management to the next level with the most powerful and versatile software development tool on the market.

Jira 6 is an intuitive and agile project management platform that enables teams to dynamically plan, track, and manage their workflows and projects. It offers features like sprints for Scrum management and other agile boards like Kanban, allowing teams to organize projects effectively and efficiently; define, assign, and prioritize tasks; collaborate with team members and stakeholders; monitor progress and performance; and deliver quality results on time.

Jira 6 is suitable for any type of team and project, whether software development, IT service management, marketing, HR, legal, operations, or any other domain, and it can be used by teams of any size, from startups to enterprises, scaling with your business needs. It also integrates with other Atlassian products like Confluence, Bitbucket, and Trello, as well as thousands of apps from the Atlassian Marketplace, so you can connect your tools and data across your entire organization and streamline your workflows.

Getting started with Jira 6 is easy and fast. You can choose between Jira Software Cloud and Jira Software Server or Data Center, depending on your preferences and requirements. Jira Software Cloud is hosted by Atlassian and offers automatic updates, security, and reliability.
Jira Software Server or Data Center is hosted by you and offers more customization and control. You can try Jira Software Cloud free for up to 10 users, or Jira Software Server or Data Center free for 30 days. Once you sign up, you can create your first project using a ready-made template or a custom one, invite your team members, and start working on your tasks right away.

ReFX Nexus 2.3.2 Beta is a powerful and versatile VST plugin that offers a huge library of sounds and effects for music production. Whether you are making EDM, hip hop, pop, or any other genre, ReFX Nexus 2.3.2 Beta can help you create tracks with ease.

However, ReFX Nexus 2.3.2 Beta is not a free plugin, and it requires an eLicenser to activate it. If you want to use ReFX Nexus 2.3.2 Beta without paying for it, you would need ReFX Nexus 2.3.2 Beta Installation Crack.zip, a file that contains a cracked version of the plugin and an eLicenser emulator. In this article, we will describe how this file is downloaded and installed.

The first step is to download ReFX Nexus 2.3.2 Beta Installation Crack.zip from a reliable source. Many websites claim to offer this file, but some of them contain viruses or malware that can harm your computer. One commonly cited source is Google Drive, a cloud storage service that lets you upload and download files securely; you can access Google Drive from any browser or device, and you can also share files with others easily.
To download ReFX Nexus 2.3.2 Beta Installation Crack.zip from Google Drive, follow the share link provided by the uploader. The link takes you to a page showing the file name and size, a preview of the file contents, and a "Download" button at the top right corner. Click "Download" and wait for the file to be downloaded to your computer; at about 1 GB, it may take some time depending on your internet speed.

The second step is to extract ReFX Nexus 2.3.2 Beta Installation Crack.zip to a folder on your computer. This gives you access to the files inside the zip archive, which include the installer for ReFX Nexus 2.3.2 Beta and the crack for the eLicenser emulator. To extract the archive, you need a program that can handle zip files, such as WinRAR or 7-Zip, both of which are free to download and install. Once you have installed one of them, locate the zip file on your computer and right-click it. A menu appears with options such as "Extract Here", "Extract to", or "Open with"; choose the one that suits you and select a destination folder for the extracted files. Wait for the extraction to finish, then open the destination folder.

The third step is to install ReFX Nexus 2.3.2 Beta using the installer you extracted. In the destination folder, look for a file named "reFX.Nexus.vst.x86.Beta.Crack-iND.exe". This is the installer for ReFX Nexus 2.3.2 Beta.
Double-click on this file and follow the instructions on the screen to install ReFX Nexus 2.3.2 Beta on your computer. You will need to choose a location for the plugin files, as well as a VST folder where your DAW (digital audio workstation) can find them, and you will need to agree to the terms and conditions of use for ReFX Nexus 2.3.2 Beta.

The fourth step is to run the crack for the eLicenser emulator, a program that mimics the original eLicenser dongle required to activate ReFX Nexus 2.3.2 Beta. In the folder where you extracted the archive, look for a file named "eLicenser Emulator.exe" and double-click it. After it loads, a window shows a list of licenses for various ReFX products, including ReFX Nexus 2.3.2 Beta. Select ReFX Nexus 2.3.2 Beta from the list and click "Apply". This generates a license file for the plugin, saves it to your computer, and shows the message "License applied successfully"; click "OK" to close the window.

The final step is to use ReFX Nexus 2.3.2 Beta in your DAW. Launch your DAW and load ReFX Nexus 2.3.2 Beta as a VST plugin. A window asks you to select a license file; navigate to the folder where the license file was saved in the previous step and select it. A window saying "Thank you for using NEXUS!" then appears; click "OK" to close it. You can now access all the features and sounds of ReFX Nexus 2.3.2 Beta without any limitations or restrictions.

In this article, we have shown you how ReFX Nexus 2.3.2 Beta Installation Crack.zip is downloaded and installed, so you can enjoy using ReFX Nexus 2.3.2 Beta on your computer.
ReFX Nexus 2.3.2 Beta is a powerful and versatile VST plugin that offers a huge library of sounds and effects for music production, and ReFX Nexus 2.3.2 Beta Installation Crack.zip lets you use it without paying for it or needing an eLicenser dongle. We hope you found this article helpful and informative, and we wish you happy music making with ReFX Nexus 2.3.2 Beta!

WordPress is a powerful platform that allows you to create and manage your own website. However, if you want to take your site to the next level, you need to install some plugins. Plugins are extensions that add new features and functionality to your WordPress site; they can help you improve your site's performance, security, design, SEO, marketing, and more.

But with thousands of plugins available, how do you choose the best ones for your site? That's where Plugins Pack comes in. Plugins Pack is a curated collection of the best and most popular plugins for WordPress, with plugins for every aspect of your site. In this article, we will show you how to use Plugins Pack to install the best plugins for your WordPress site, explain the benefits of each plugin, and show how they can help you achieve your goals.
Using Plugins Pack takes only a few clicks: install the pack, then choose the plugins you need from the collection. It saves you the time of researching plugins one by one and also offers a bundle of premium plugins at a discounted price. Whether you want to improve your site's performance, security, design, SEO, marketing, or e-commerce, Plugins Pack has something for you. Try it today and see the difference!

TRAKTOR LE is a DJ program that lets you mix music across two decks with basic features like looping, cueing, and effects. It is a light version of TRAKTOR PRO 3, which is used by professional DJs in clubs around the world. If you want to try out TRAKTOR LE for free, here are some ways to do it.

If you have a Numark product that includes TRAKTOR LE in the box, such as the TRAKTOR KONTROL Z1, TRAKTOR AUDIO 2, or KOMPLETE AUDIO 6, you can download the software from the Numark website. You will need the authorization code printed on the CD sleeve that came with your product; you can also contact Numark customer support if you have lost or misplaced your code.

If you don't have a Numark product, you can still download TRAKTOR DJ 2, a free DJ program for desktop and iPad that lets you play tracks from SoundCloud Go+ and your own music library. You can use TRAKTOR DJ 2 to learn the basics of DJing and then upgrade to TRAKTOR LE 3 for more features and functionality. To upgrade, purchase a license from the Native Instruments website; you will also need a compatible audio interface or controller to use TRAKTOR LE 3.

If you want to experience the full power of TRAKTOR, you can try out TRAKTOR PRO 3, the flagship DJ software from Native Instruments.
It lets you mix across four channels, with a full effects suite, extended cueing options, sampling, remix decks, stem decks, and more. You can download a free demo version of TRAKTOR PRO 3 that runs for 30 minutes at a time; if you like it, you can buy a license and unlock it permanently.

TRAKTOR LE is a great way to start your DJ journey and mix music with ease. You can get it free with a qualifying Numark product, upgrade from TRAKTOR DJ 2, or try TRAKTOR PRO 3 for more advanced features. Whichever option you choose, you will enjoy the quality and performance of TRAKTOR, one of the most popular DJ programs in the world.

Once you have downloaded TRAKTOR LE from the Numark website or upgraded from TRAKTOR DJ 2, install it on your computer, connect your audio interface or controller, and start mixing your favorite tracks. To use TRAKTOR LE effectively, you will also want to learn some basic DJ skills and techniques that will help you improve your mixes and impress your listeners; you can find tips and tutorials on the Native Instruments website or on YouTube, as well as online courses and books that teach DJing with TRAKTOR LE. TRAKTOR LE is a fun and easy way to start DJing and express your musical creativity.

If you are looking for a fun and exciting game to play with your friends or strangers online, you might have heard of Among Us, a multiplayer social deduction game that has taken the gaming world by storm in recent months. In this game, you can join a crew of up to 15 players who are trying to complete tasks on a spaceship, a base, or a planet.
However, among you there are one or more impostors who are secretly trying to kill everyone else. As a crewmate, you work with your teammates to find out who the impostors are and vote them out before they kill you all; as an impostor, you deceive, sabotage, and eliminate your enemies without getting caught. Among Us tests your skills of communication, deduction, deception, and teamwork, and it offers endless replay value and fun moments with friends or strangers.

Among Us is constantly being updated by its developers, Innersloth, to provide new features, fixes, and improvements. If you are already a fan, you might be wondering what is new in the latest version, Among Us 4.2.2 APK, released on February 23, 2023. It includes new features and improvements that make the game more enjoyable and challenging, offering more content and diversity. If you want to try it, you will need to download and install it on your Android device.

Downloading and installing Among Us 4.2.2 APK is not very difficult, but it does require some steps and precautions. Before you install any APK file, be aware of the potential risks: you should always download and install APK files from trusted sources and at your own risk, and you should back up your data and scan your device for any threats before and after installing APK files.
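One concrete precaution along these lines is comparing a downloaded file's SHA-256 digest against a checksum published by its distributor before installing it. A minimal sketch using Python's standard-library `hashlib` (the file name and the assumption that a checksum has been published are illustrative, not specific to any real download):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Return the hex SHA-256 digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage -- "app-release.apk" and the expected value are
# placeholders for a real download and its published checksum:
# expected = "..."  # value published by the distributor
# if sha256_of("app-release.apk") != expected:
#     raise SystemExit("Checksum mismatch - do not install this file")
```

A matching digest only proves the file arrived intact and unmodified relative to what the publisher hashed; it is a complement to, not a substitute for, an antivirus scan.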
You should also respect the rights and rules of the game and the app store and avoid any unfair or illegal activities. Now that you have downloaded and installed Among Us 4.2.2 APK on your device, you might be wondering how to play it online or over local WiFi. Playing Among Us 4.2.2 APK is very easy and fun, but it does require some steps and settings. Here is a brief explanation of the game modes and settings that you can choose from: Once you have chosen your mode and lobby, you can start playing Among Us 4.2.2 APK with your friends or strangers online or over local WiFi. However, before you do that, you should also customize your game settings to suit your preferences and skills. Here are some of the game settings that you can adjust: You can change these settings according to your liking and challenge level. You can also use some custom rules or modes that you and your friends agree on, such as no talking, no killing, or no venting. Now that you have set up your game, you are ready to play Among Us 4.2.2 APK with your friends or strangers online or over local WiFi. However, playing the game is not as easy as it sounds. You will need some skills and strategies to win as a crewmate or an impostor. Here are some tips and tricks that you can use: In conclusion, Among Us 4.2.2 APK is the latest version of the popular social deduction game that offers a lot of new features and improvements that make the game more fun and challenging. You can download and install it on your Android device by following the steps and precautions mentioned above. You can also play it online or over local WiFi with your friends or strangers by choosing your mode, settings, role, map, and strategy. Among Us 4.2.2 APK is a game that will test your skills of communication, deduction, deception, and teamwork. It is also a game that will provide you with endless replay value and fun moments with your friends or strangers. So what are you waiting for? 
Download Among Us 4.2.2 APK now and join the fun! Here are some frequently asked questions about Among Us 4.2.2 APK: Yes, Among Us 4.2.2 APK is safe to download if you use a reliable source like MEGA.NZ. However, you should always scan your device for any malware or viruses before and after installing APK files. You should also backup your data and respect the terms of service of the game and the app store. Among Us 4.2.2 APK is compatible with most Android devices that have Android 4.4 or higher. However, some devices may experience some issues or errors due to different specifications or settings. You should check the compatibility of your device before downloading and installing Among Us 4.2.2 APK. Yes, Among Us 4.2.2 APK supports cross-platform play, which means you can play with your friends who are using iOS, Windows, or Nintendo Switch devices. You just need to join the same lobby or create a private lobby with a code that you can share with your friends. You can customize your character in Among Us 4.2.2 APK by tapping on the laptop icon on the bottom right corner of the screen. You can change your name, color, hat, skin, and pet according to your preference. You can also unlock more items by purchasing them with real money or watching ads. You can report or ban hackers or cheaters in Among Us 4.2.2 APK by using the new account system that lets you create a profile and report or ban players who are hacking, cheating, or being toxic in the game. You can also use the kick or ban option in the lobby menu if you are the host of the game. I hope this article has answered your questions and helped you understand more about Among Us 4.2.2 APK. If you have any more questions or feedback, feel free to leave a comment below. Are you a fan of sandbox games where you can unleash your creativity and imagination? Do you love watching epic battles between different animals with realistic physics and ragdoll effects? 
If you answered yes to these questions, then you might want to check out Animal Revolt Battle Simulator, a game that lets you create your own scenarios with hundreds of animals and weapons. And if you want to enhance your gaming experience even more, there is a mod apk of Animal Revolt Battle Simulator that provides unlimited money, an options menu, and more. In this article, we will review the game and its mod apk, and show you how to download and install it on your device.

Animal Revolt Battle Simulator is developed by Beast Battle Games, a studio that specializes in physics-based sandbox games. The game was released in June 2020 for Android devices and has since gained a lot of popularity among gamers who enjoy creating their own scenarios with different animals and weapons.

The game has a simple premise: choose from hundreds of animals and weapons, place them on a map, and watch them fight each other. You can customize the terrain, weather, and time of day to suit your preferences, and you can share your creations with other players online or download their scenarios. Animals come from categories such as mammals, reptiles, birds, fish, insects, dinosaurs, and mythical creatures, while weapons include swords, axes, spears, guns, rockets, grenades, bombs, lasers, flamethrowers, and more. You can mix and match any animal or weapon to create your own scenarios: pit a lion against a shark, a dragon against a tank, or a gorilla against a chainsaw.

The game uses realistic physics and ragdoll effects to make the battles more fun and chaotic. You can see how the animals react to different forces such as gravity, friction, collision, and explosion.
You can also see how the animals get injured, bleed, lose limbs, or die. A slow-motion mode lets you watch the action in more detail, and you can zoom in or out, rotate, or move the camera to get different angles of the battle.

The game also allows you to customize the terrain, weather, and time of day. You can choose from maps such as grassland, desert, forest, island, city, and moon; change weather conditions such as rain, snow, fog, and wind; and set the time of day to dawn, noon, dusk, or night. You can use these options to create different atmospheres and effects for your scenarios.

The game also has an online mode where you can share your creations with other players or download their scenarios. You can browse categories such as popular, recent, featured, or random; rate, comment on, or report the scenarios you see; search for specific scenarios using keywords or filters; and create your own profile and follow other players to see their creations.

If you want to enhance your gaming experience even more, there is a mod apk of Animal Revolt Battle Simulator: a modified version of the game that provides unlimited money, an options menu, and more. With this mod apk, you can access all the animals and weapons in the game without spending any real money, use the menu to enable or disable options such as god mode, slow motion, and gravity, and enjoy the game without any ads or in-app purchases. You can also get the latest updates and bug fixes for the game.
You can buy any animal or weapon you want from the shop without any restrictions. You can also upgrade your animals and weapons to make them stronger and more effective.

Another benefit of using the mod apk is that you can access the menu to enable or disable various options such as god mode, slow motion, and gravity. You can use these options to make your scenarios more fun and interesting. For example, you can use god mode to make your animals invincible and immune to damage. You can use slow motion to see the action in more detail and create cinematic effects. You can use gravity to change the force that affects the animals and weapons.

A third benefit of using the mod apk is that you can enjoy the game without any ads or in-app purchases. You don't have to deal with annoying ads that interrupt your gameplay or tempt you to spend real money. You can play the game without any distractions or limitations. You can also save your data and battery by playing the game offline.

A fourth benefit of using the mod apk is that you can get the latest updates and bug fixes for the game. You don't have to wait for the official updates from the developer or worry about compatibility issues. You can get the newest features and improvements for the game as soon as they are available. You can also avoid any glitches or errors that might affect your gameplay.

If you are interested in trying the mod apk of Animal Revolt Battle Simulator, you might be wondering how to download and install it on your device. Don't worry, it's not a complicated process. Just follow these simple steps and you'll be ready to play in no time.

The first step is to download the mod apk file from a trusted source such as [GetModsApk], a website that provides free and safe mod apks for various games and apps. You can find the link to the mod apk of Animal Revolt Battle Simulator on their website, or you can click [here] to go directly to the download page. You will see a button that says "Download Apk". Tap on it and wait for the download to start.

The second step is to enable unknown sources on your device settings to allow installation of third-party apps. This is necessary because the mod apk is not from the official Google Play Store, and your device might block it by default. To enable unknown sources, go to your device settings, then security, then unknown sources. Toggle the switch to turn it on. You might see a warning message that says installing apps from unknown sources might harm your device. Ignore it and tap on OK.

The third step is to locate and tap on the downloaded file to start the installation process. You can find the file in your downloads folder, or you can use a file manager app to search for it. Once you find it, tap on it and you will see a screen that says "Do you want to install this application?". Tap on Install and wait for the installation to complete.

The fourth step is to follow the instructions on the screen and wait for the installation to complete. You might see some permissions requests that ask you to allow access to certain features or data on your device. Tap on Allow or Accept to grant them. You might also see some pop-ups that ask you to verify your identity or agree to some terms and conditions. Tap on Continue or Agree to proceed. Once the installation is done, you will see a screen that says "App installed". Tap on Open to launch the game.

The final step is to launch the game and enjoy the mod features. You will see a menu icon on the top left corner of the screen. Tap on it and you will see various options such as money, menu, settings, and more. Tap on money to get unlimited money to buy any animal or weapon you want. Tap on menu to enable or disable various options such as god mode, slow motion, gravity, and more. Tap on settings to adjust the graphics, sound, language, and more. You can also access these options during gameplay by tapping on the menu icon again.
You can also enjoy the game without any ads or in-app purchases. You can also get the latest updates and bug fixes for the game.

Animal Revolt Battle Simulator is a physics-based sandbox game where you can create epic battles with different animals and weapons. You can also customize the terrain, weather, and time of day to suit your preferences. You can also share your creations with other players online or download their scenarios. If you want to enhance your gaming experience even more, you can try the latest version mod apk of Animal Revolt Battle Simulator, which gives you unlimited money, menu, and more. You can download and install the mod apk easily by following the simple steps we have provided in this article. We hope you enjoy playing Animal Revolt Battle Simulator and its mod apk.

Here are some frequently asked questions about Animal Revolt Battle Simulator and its mod apk.

Reels music is one of the most popular features of Instagram, where users can create short videos with catchy songs and sounds. But what if you want to download the reels music mp3 and listen to it offline or use it for other purposes? In this article, we will show you how to download reels music mp3 from Instagram and other sources in easy steps.

Download ✅ https://bltlly.com/2uOiXo

Reels music is the audio that accompanies the reels videos on Instagram. Reels are short, fun, and creative videos that users can make and share with their followers or the public. Reels music can be any song or sound that the user chooses from the Instagram library or uploads from their own device. Some examples of reels music are:

Downloading reels music mp3 can have many benefits and use cases, such as:

The easiest way to download reels music mp3 from Instagram is to save the reel audio in the app itself. This way, you can access the audio anytime within the app and use it for creating your own reels.
Here's how to do it:

If you want to download the reel audio as an mp3 file and store it on your phone's storage, you can use some third-party websites that let you extract and download the reel sound without any video portion. Here's how to do it:

If you want to download reels music that is not available on Instagram, you can look for some websites that offer free music downloads for personal or commercial use. These websites have a large collection of songs and sounds that you can use for your reels or other projects without worrying about copyright issues. Here are some examples of such websites:

download reels music mp3 free online

To download reels music from these websites, you just need to follow these steps:

If you have a reels video that you want to convert to mp3, you can use some online tools that can extract the audio from the video and save it as an mp3 file. Here are some examples of such tools:

To convert reels videos to mp3 using these tools, you just need to follow these steps:

Downloading reels music mp3 is not a difficult task if you know the right methods and tools. In this article, we have shown you how to download reels music mp3 from Instagram and other sources in easy steps. You can use these methods to enjoy your favorite reels music offline or use it for your own creative purposes. We hope you found this article helpful and informative. If you have any questions or feedback, please let us know in the comments below.

A1: To use saved reel audio for creating reels, you need to follow these steps:

A2: To download reels music mp3 on PC, you can use the same methods and tools as mentioned above for phone. You just need to open the websites on your PC browser and follow the same steps. Alternatively, you can also use some desktop software or browser extensions that can download Instagram reel audio or convert reels videos to mp3. Some examples are [5](https://www.4kdownload.com/products/product-videodownloader) 4K Video Downloader, [4](https://www.dvdvideosoft.com/products/dvd/Free-YouTube-to-MP3-Converter.htm) Free YouTube to MP3 Converter, and [3](https://chrome.google.com/webstore/detail/instagram-downloader/cpgaheeihidjmolbakklolchdplenjai) Instagram Downloader.

A3: To download reels music mp3 without watermark, you need to use a tool that can remove the watermark from the reels video before converting it to mp3. Some examples are [2](https://www.apowersoft.com/watermark-remover) Apowersoft Watermark Remover, [1](https://www.inpaintonline.com/remove-watermark-from-video/) Inpaint Online, and [0](https://www.remove-watermark.com) Remove Watermark. You just need to upload or paste the reels video, select the watermark area, and click on Remove or Erase. Then, you can download the video without watermark and convert it to mp3 using any of the methods mentioned above.

A4: To download reels music mp3 in high quality, you need to use a tool that can preserve the original quality of the audio or enhance it with some features. Some examples are [9](https://mp3fy.com/en1/) MP3FY, [8](https://ytmp3.eu/en/) YTMP3.eu, and [7](https://www.mp3juices.cc) MP3Juices. These tools can download reels music mp3 in 320kbps or higher bitrate, which means better sound quality. You just need to paste the reels video URL and click on Download or Convert. Then, you can choose the quality option and download the mp3 file in high quality.

A5: To download reels music mp3 with lyrics, you need to use a tool that can add lyrics to the audio file or display them on the screen. Some examples are [6](https://www.musixmatch.com) Musixmatch, [5](https://www.lyrics.com) Lyrics.com, and [4](https://www.lyricfinder.org) Lyric Finder. These tools can sync lyrics with the audio or show them as text or images. You just need to search for the reels music title or artist name and click on Play or Download.
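As a local, offline alternative to the online converter tools described above, the same audio extraction can be scripted. This is a minimal sketch, not one of the tools named in the article: it assumes an ffmpeg binary is installed and available on your PATH, and the file names are placeholders.

```python
import subprocess

def build_extract_cmd(video_path: str, mp3_path: str, bitrate: str = "320k") -> list:
    """Build the ffmpeg command that drops the video stream and
    re-encodes the audio track as an MP3 at the given bitrate."""
    return [
        "ffmpeg",
        "-i", video_path,          # input reel video
        "-vn",                     # -vn discards the video stream
        "-acodec", "libmp3lame",   # encode audio with the LAME MP3 encoder
        "-b:a", bitrate,           # 320k matches the "high quality" option above
        mp3_path,
    ]

def extract_audio(video_path: str, mp3_path: str, bitrate: str = "320k") -> None:
    # Requires ffmpeg on PATH; raises CalledProcessError if conversion fails
    subprocess.run(build_extract_cmd(video_path, mp3_path, bitrate), check=True)
```

For example, `extract_audio("reel.mp4", "reel.mp3")` would produce a 320 kbps MP3 next to the source video.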
Then, you can enjoy the reels music mp3 with lyrics on your device.

Have you ever lost access to your bitcoin wallet because you forgot or misplaced your private key? If so, you are not alone. Many bitcoin users have experienced this frustrating situation at some point. Fortunately, there are some tools that can help you find your lost or forgotten bitcoin private keys. These tools are called bitcoin key finder apk apps.

Download File ……… https://bltlly.com/2uOgBd

A bitcoin key finder apk is an application that can generate and scan possible private keys that match your bitcoin address. A private key is a secret number that allows you to spend or transfer your bitcoins. Without it, you cannot access your funds. Therefore, it is very important to keep your private key safe and secure. However, sometimes accidents happen and you may lose your private key due to device failure, theft, malware, human error, or other reasons. In such cases, a bitcoin key finder apk can come in handy. It can help you recover your private key by brute-forcing different combinations of numbers until it finds a match.

However, using a bitcoin key finder apk is not without risks. First of all, it may take a very long time to find your private key depending on how complex it is. There are more than 10^77 possible private keys in the bitcoin network, which means it may take years or even centuries to find yours. Second, there is no guarantee that the app will find your private key at all. It may be that your private key is too rare or too random to be found by any algorithm. Third, there is a risk that the app may steal or leak your private key if it is not trustworthy or secure. Therefore, you should always be careful when choosing and using a bitcoin key finder apk.

bitcoin key finder app

There are many bitcoin key finder apk apps available online, but not all of them are reliable or effective.
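The 10^77 figure above can be checked directly: valid Bitcoin private keys are integers between 1 and the order of the secp256k1 curve. The sketch below is a back-of-the-envelope estimate; the guess rate is an arbitrary, deliberately generous assumption.

```python
# Order of the secp256k1 group: the number of valid Bitcoin private keys
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

GUESSES_PER_SECOND = 10**12          # assumed: a trillion keys per second
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

# A blind search is expected to cover half the key space before a hit
years = N // 2 // GUESSES_PER_SECOND // SECONDS_PER_YEAR

print(f"key space: {N:.2e}")                     # on the order of 1.16e77
print(f"average search time: {years:.2e} years")
```

Even at that assumed rate, the average search time comes out astronomically large, which is why the article's warning about "years or even centuries" is, if anything, an understatement.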
Here are some of the most popular ones that you may want to try:

This app claims to be able to find any bitcoin private key by searching through a list of bitcoin addresses with positive balance. It allows users to specify the range of bits to search, provides full or mini info output, and includes a bitcoin puzzle feature. However, it requires users to have a list of bitcoin addresses with positive balance, does not guarantee finding the private key in a reasonable time, and does not offer any security or encryption for the found keys.

This app claims to be able to find the private key just by entering the bitcoin address. It guarantees 100% success rate, easy to use interface, and cloud backup. However, it may take a long time to find the key depending on the address, does not offer any security or encryption for the found keys, and may not be compatible with all devices or operating systems.

This app is the same as Bitcoin Private Key Finder by NandoryStudioApps, but for Windows PC users. It has the same features, advantages, and disadvantages as the previous app.

To help you choose the best app for your needs, here is a table that summarizes the main features, advantages, and disadvantages of each app:

In conclusion, a bitcoin key finder apk is a tool that can help you find your lost or forgotten bitcoin private keys. However, it is not a magic solution that can instantly recover your keys. It may take a lot of time and effort to find your keys, and there is no guarantee that you will succeed. Moreover, there are some risks involved in using these apps, such as losing your keys to hackers or scammers. Therefore, before you decide to use a bitcoin key finder apk, you should weigh the pros and cons of each app carefully.

You should also consider your own needs and preferences. For example, if you have a list of bitcoin addresses with positive balance and you want more control over the search parameters, you may want to try KeyFinder by bitcrafting. If you only have one bitcoin address and you want a simple and easy way to find your private key, you may want to try Bitcoin Private Key Finder by NandoryStudioApps. If you are a Windows PC user and you want the same features as Bitcoin Private Key Finder by NandoryStudioApps, you may want to try Bitcoin Private Key Finder on Windows PC by com.gmail.nandorystudioapps.bitcoinprivatekeyfinder.

However, no matter which app you choose, you should always be careful and cautious when using it. You should never share your private key with anyone or store it in an insecure place. You should also backup or restore your bitcoin wallet using your private key in case you lose your device or app. You should also encrypt your private key for extra security. You can learn more about these topics in the FAQs section below.

We hope this article has helped you understand what a bitcoin key finder apk is and how to use it. If you have any questions or comments, please feel free to leave them below. We would love to hear from you. And if you found this article useful, please share it with your friends and family who may also need it. Thank you for reading and happy bitcoin hunting!

A bitcoin private key is a secret number that allows you to spend or transfer your bitcoins. It is like a password that unlocks your bitcoin wallet. It is important because without it, you cannot access your funds. Therefore, you should never lose or forget your private key, or share it with anyone else.

If you have your private key, you can backup or restore your bitcoin wallet using different methods. One of the most common methods is to use a seed phrase, which is a set of words that represent your private key. You can write down your seed phrase on a piece of paper or store it in a secure place. Then, you can use it to restore your wallet on any device or app that supports the same protocol. Another method is to use a QR code, which is a graphical representation of your private key. You can scan the QR code with your device or app to backup or restore your wallet. However, you should be careful not to lose or damage the QR code, or expose it to anyone else.

If you want to add an extra layer of security to your bitcoin private key, you can encrypt it with a passphrase, which is a word or phrase that you choose. You can use any encryption software or tool that supports the same protocol as your wallet or app. Then, you can enter your passphrase whenever you want to access your private key. However, you should remember your passphrase and not share it with anyone else.

If none of the bitcoin key finder apk apps work for you, you may want to try some other ways to find or recover your lost or forgotten bitcoin private key. Some of these ways are:

To prevent losing or forgetting your bitcoin private key, or having it stolen by hackers or scammers, you should follow some best practices such as:

If you are looking for free 3D models to use in your projects, you might have come across the .max file format. This format is widely used by 3D artists and designers who work with 3ds Max, one of the most popular 3D modeling, animation, and rendering software. In this article, we will explain what .max format is, why you might want to use it, where you can find free 3D models in .max format, and how you can download and use them.

.max is the native file format of [text](^1^), a powerful and versatile software that allows you to create stunning 3D scenes and animations. .max files can store all the information about a 3D scene, such as geometry, materials, textures, lights, cameras, animation keys, modifiers, plugins, and more. This means that you can save your work in progress and resume it later without losing any data or quality.

Download Zip ✅ https://bltlly.com/2uOgDo

Another advantage of using .max format is that it is compatible with many other software and platforms.
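One concrete way to sanity-check a transcribed or recovered private key is to verify its Base58Check checksum when it is written in Wallet Import Format (WIF), the text encoding most wallets use for raw keys. The sketch below is a stdlib-only illustration of that checksum scheme, not a recovery tool; the helper names are my own.

```python
import hashlib

_B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58encode(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = _B58[r] + out
    # each leading zero byte is encoded as the character '1'
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + out

def b58decode(s: str) -> bytes:
    n = 0
    for ch in s:
        n = n * 58 + _B58.index(ch)
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    pad = len(s) - len(s.lstrip("1"))
    return b"\x00" * pad + raw

def to_wif(secret: bytes) -> str:
    # 0x80 marks a mainnet private key; the checksum is the first
    # 4 bytes of SHA-256(SHA-256(payload))
    payload = b"\x80" + secret
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    return b58encode(payload + checksum)

def wif_checksum_ok(wif: str) -> bool:
    data = b58decode(wif)
    payload, checksum = data[:-4], data[-4:]
    digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
    return digest[:4] == checksum
```

A single mistyped character changes the checksum, so `wif_checksum_ok` catches transcription errors before any funds are touched.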
You can import or export .max files to other Autodesk products, such as Maya, AutoCAD, Revit, Inventor, etc. You can also use third-party plugins or converters to open or save .max files in other formats, such as OBJ, FBX, STL, C4D, BLEND, MA, MB, etc. This way, you can easily share your work with other collaborators or clients who use different software.

There are many online sources where you can find free 3D models in various formats, including .max. However, not all of them are reliable, high-quality, or legal to use. Therefore, you need to be careful and selective when choosing where to download free 3D models in .max format. Here are some of the best and most trusted websites that offer free 3D models in .max format:

[text] is the largest online marketplace for 3D models, where you can find over 900,000 3D models in various formats and categories. Among them, there are thousands of free 3D models in .max format available for download. You can use the quality, poly count, license, and other filters to narrow down your search and find the best 3D model for your project. You can also preview the 3D model in 3D view, check the details and specifications, and read the reviews and ratings before downloading.

[text] is a community-driven platform for 3D artists and designers, where you can buy, sell, or download 3D models. It has over 100,000 free 3D models in various formats, including .max. You can browse the free 3D models by best match, lowpoly, PBR, rigged, animated, and other categories. You can also see the number of downloads, likes, views, and comments for each 3D model. You can download the 3D model directly from the website or contact the seller for more information.

[text] is the developer of 3ds Max and other popular 3D software, such as Maya, AutoCAD, Revit, etc. It also provides a collection of free 3D models created by Autodesk and its partners. These 3D models are high-quality, realistic, and optimized for various industries and applications, such as architecture, engineering, entertainment, education, etc. You can download the 3D models in .max format or other formats supported by Autodesk software.

Once you have found the free 3D model that you want to use in your project, you need to download it and open it in your software. Here are the steps to do that:

The first step is to download the .max file from the website where you found it. Before downloading, make sure you check the license and terms of use of the 3D model. Some 3D models are free for personal use only, while others are free for commercial use as well. Some 3D models may require attribution or credit to the original creator or source. You need to respect the rights and wishes of the 3D model owner and follow their guidelines.

free download 3d max models for architecture

Also, make sure you have enough storage space and a stable internet connection to download the .max file. Some .max files can be very large in size, especially if they contain high-resolution textures or complex animations. You don't want to interrupt or corrupt your download halfway through. Save the file to a location that you can easily access later.

The next step is to open the .max file in your software of choice. The best software to open .max files is obviously [text], since it is the native software that created them. However, if you don't have access to 3ds Max or prefer another software, you can also use other software that can read .max files. Some examples are Blender (with [text]), Cinema 4D (with [text]), Maya (with [text]), SketchUp (with [text]), etc. To open the .max file in your software, simply launch the software and go to File > Open. Then browse to the location where you saved the .max file and select it. Click Open to load it into your software.

The final step is to edit, animate, render, or export the 3D model as you wish. You can use the tools and features of your software to modify the 3D model according to your needs. You can add animation, lighting, materials, textures, and other effects to enhance the 3D model. You can also render the 3D model to create a realistic image or video of it. Alternatively, you can export the 3D model to another format if you want to use it in another software or platform. For example, if you want to use the 3D model in a game engine, such as Unity or Unreal Engine, you can export it to FBX format, which is widely supported by most game engines. To do that, go to File > Export and choose FBX as the output format. Then select the options and settings that suit your needs and click Export.

In this article, we have shown you how to download free 3D models in .max format and use them in your projects. We have explained what .max format is, why you might want to use it, where you can find free 3D models in .max format, and how you can download and use them. We hope you have found this article helpful and informative. If you have any questions or comments, please feel free to leave them below.

.max and .3ds are both file formats used by 3ds Max software. However, .max is the native file format that can store all the information about a 3D scene, while .3ds is an older and simpler file format that can only store basic geometry and materials. Therefore, .max files are more versatile and flexible, but also larger and harder to share. .3ds files are more compatible and portable, but also more limited and prone to errors.

Yes, you can open .max files without 3ds Max if you have another software that can read .max files. Some examples are Blender (with [text]), Cinema 4D (with [text]), Maya (with [text]), SketchUp (with [text]), etc. However, you may not be able to access all the features and functions of the .max file, such as animation, modifiers, plugins, etc.
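Inside 3ds Max, the File > Export step described above can also be scripted. This is a hedged sketch: it assumes the `pymxs` module (only available inside a running 3ds Max session) and its `exportFile` call, which mirrors the MAXScript function of the same name; `export_scene` is my own wrapper name. The runtime object is passed in as a parameter so the wrapper logic can be exercised outside 3ds Max.

```python
def export_scene(rt, out_path: str) -> bool:
    """Export the currently open scene; the exporter is picked from
    the file extension of out_path (e.g. .fbx or .obj).

    rt is expected to be pymxs.runtime when run inside 3ds Max.
    """
    # #noPrompt suppresses the exporter's options dialog so the call
    # can run unattended, e.g. in a batch conversion script
    return bool(rt.exportFile(out_path, rt.Name("noPrompt")))

# Inside 3ds Max you would call it roughly like this:
#   from pymxs import runtime as rt
#   export_scene(rt, r"C:\models\model.fbx")
```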
You can convert .max files to other formats by using 3ds Max or another software that can export .max files. For example, if you want to convert a .max file to OBJ format, you can open the .max file in 3ds Max and go to File > Export > Export Selected. Then choose OBJ as the output format and click Export. You can also use third-party converters or online tools to convert .max files to other formats.

It depends on the license and terms of use of the free 3D models in .max format. Some 3D models are free for personal use only, while others are free for commercial use as well. Some 3D models may require attribution or credit to the original creator or source. You need to respect the rights and wishes of the 3D model owner and follow their guidelines. You also need to check the laws and regulations of your country or region regarding the use of free 3D models.

Here are some tips for using free 3D models in .max format:

If you need to install or reinstall Windows 8, you can use the tools on this page to create your own installation media using either a USB flash drive or a DVD. Before you begin, make sure you have:

Download • https://urlcod.com/2uHwlZ

Follow these steps to download Windows 8 Microsoft ISO file:

Congratulations! You have successfully downloaded Windows 8 Microsoft ISO file. You can now install or reinstall Windows 8 on your PC.

Before you install Windows 8, you should back up your important files and settings to an external drive or cloud storage. This will help you restore them in case something goes wrong during the installation process.

After you have created the installation media, you can boot your PC from it and follow the on-screen instructions to install Windows 8. You will need to enter your product key during the installation. You can also customize some settings like your language, keyboard layout, time zone and network preferences.

Once the installation is complete, you can enjoy the new features and improvements of Windows 8. Some of them include:

We hope this article has helped you download Windows 8 Microsoft ISO file and install it on your PC. If you have any questions or feedback, please leave a comment below.

If you have a Gigabyte motherboard with an Intel 4 series chipset, you may need to download and install the Gigabyte Intel 4 Series Utility DVD Ver.2.1 to update your drivers and utilities. This DVD contains the latest versions of the Intel INF installation, Intel Rapid Storage Technology, Realtek audio driver, Realtek LAN driver, Intel graphics driver, and more. In this article, we will show you how to download and install the Gigabyte Intel 4 Series Utility DVD Ver.2.1 in a few simple steps.

DOWNLOAD ->>> https://urlcod.com/2uHvJz

The first step is to download the Gigabyte Intel 4 Series Utility DVD Ver.2.1 from the official Gigabyte website. You can find the download link here: https://www.gigabyte.com/Motherboard/GA-G41M-Combo-rev-13/support#support-dl-utility Make sure you select the correct operating system and version for your computer. The file size is about 3.5 GB, so it may take some time to download depending on your internet speed.

After downloading the Gigabyte Intel 4 Series Utility DVD Ver.2.1, you need to burn it to a blank DVD disc using a DVD burner software. You can use any software that supports ISO image burning, such as Nero, ImgBurn, or Windows Disc Image Burner. Insert a blank DVD disc into your DVD drive and launch your DVD burner software. Select the ISO image file that you downloaded and choose the burn option. Follow the instructions on the screen to complete the burning process.

Once you have burned the Gigabyte Intel 4 Series Utility DVD Ver.2.1 to a disc, you can install it on your computer. Make sure your computer is connected to a power source and restart it. When your computer boots up, press the F12 key or the key that corresponds to your boot menu option. Select your DVD drive as the boot device and press Enter.
The Gigabyte Intel 4 Series Utility DVD Ver.2.1 will load and display a menu with different options. You can choose to install all the drivers and utilities at once or select specific ones that you need. Follow the instructions on the screen to complete the installation process. You may need to restart your computer several times during the installation.

The Gigabyte Intel 4 Series Utility DVD Ver.2.1 is a useful tool that can help you update your drivers and utilities for your Gigabyte motherboard with an Intel 4 series chipset. By following these steps, you can easily download and install the Gigabyte Intel 4 Series Utility DVD Ver.2.1 on your computer.

Updating your drivers and utilities can improve the performance and stability of your computer. Drivers are software components that enable your hardware devices to communicate with your operating system and applications. Utilities are software tools that help you manage and optimize your system settings and features. By updating your drivers and utilities, you can ensure that your hardware devices are working properly and efficiently. You can also fix any compatibility issues or bugs that may cause errors or crashes. Updating your drivers and utilities can also enhance the security of your system by protecting it from malicious attacks or viruses.

If you want to check the current version of your drivers and utilities, you can use the Gigabyte @BIOS utility that is included in the Gigabyte Intel 4 Series Utility DVD Ver.2.1. The Gigabyte @BIOS utility allows you to update your BIOS (Basic Input/Output System) firmware, which is the software that controls the basic functions of your motherboard. To use the Gigabyte @BIOS utility, insert the Gigabyte Intel 4 Series Utility DVD Ver.2.1 into your DVD drive and run the @BIOS.exe file from the DVD. The Gigabyte @BIOS utility will launch and display your current BIOS version and date. You can also check the version and date of your other drivers and utilities by clicking on the corresponding icons on the left side of the utility window. If you find that your drivers and utilities are outdated, you can use the Gigabyte Intel 4 Series Utility DVD Ver.2.1 to update them to the latest versions.

Adobe Audition CC 2019 12.1.1.42 Free Download Latest Version for Windows. The program and all files are checked and installed manually before uploading; the program is working perfectly fine without any problem. It is a full offline installer standalone setup of Adobe Audition CC 2019 12.1.1.42 Free Download for supported versions of Windows.

Below are some amazing features you can experience after installation of Adobe Audition CC 2019 12.1.1.42 Free Download. Please keep in mind features may vary and totally depend on whether your system supports them.

Download >>>>> https://urlcod.com/2uyV5c

Click on the below button to start Adobe Audition CC 2019 12.1.1.42 Free Download. This is a complete offline installer and standalone setup of Adobe Audition CC 2019 12.1.1.42 for Windows. This would be working perfectly fine with a compatible version of Windows.

Cuphead is a classic run and gun action game heavily focused on boss battles. Inspired by cartoons of the 1930s, the visuals and audio are painstakingly created with the same techniques of the era, i.e. traditional hand drawn cel animation, watercolor backgrounds, and original jazz recordings.

Download Zip ✦✦✦ https://urlcod.com/2uyWM6

Play as Cuphead or Mugman (in single player or local co-op) as you traverse strange worlds, acquire new weapons, learn powerful super moves, and discover hidden secrets while you try to pay your debt back to the devil! Cuphead Update v1 1 4-CODEX is the latest patch for the game that adds new features, fixes bugs, and improves performance. In this article, we will show you how to download and install Cuphead Update v1 1 4-CODEX using a key generator.
Cuphead Update v1 1 4-CODEX Key Generator is a tool that can generate a unique and valid key for Cuphead Update v1 1 4-CODEX. This key can be used to activate the game on Steam and download the update for free. Cuphead Update v1 1 4-CODEX Key Generator is safe and easy to use. It does not contain any viruses or malware. It works on Windows and Mac OS computers.

To use Cuphead Update v1 1 4-CODEX Key Generator, you need to follow these steps:

Cuphead Update v1 1 4-CODEX Key Generator Download Link: https://fancli.com/1ra8dp

Cuphead Update v1 1 4-CODEX brings some new features and improvements to the game. Here are some of them:

Cuphead Update v1 1 4-CODEX is a great update for a great game. It adds new features, fixes bugs, and improves performance. If you want to get Cuphead Update v1 1 4-CODEX for free, you can use Cuphead Update v1 1 4-CODEX Key Generator. This tool can generate a unique and valid key for Cuphead Update v1 1 4-CODEX that you can use to activate the game on Steam and download the update. Cuphead Update v1 1 4-CODEX Key Generator is safe and easy to use. It works on Windows and Mac OS computers. Download it now and enjoy the game!

Cuphead Update v1 1 4-CODEX is a challenging game that requires skill and strategy. Here are some tips on how to play Cuphead Update v1 1 4-CODEX:

Cuphead Update v1 1 4-CODEX is a game that offers a lot of fun and satisfaction. Here are some reasons why you should play Cuphead Update v1 1 4-CODEX:

Cuphead Update v1 1 4-CODEX is a great update for a great game. It adds new features, fixes bugs, and improves performance. If you want to get Cuphead Update v1 1 4-CODEX for free, you can use Cuphead Update v1 1 4-CODEX Key Generator. This tool can generate a unique and valid key for Cuphead Update v1 1 4-CODEX that you can use to activate the game on Steam and download the update. Cuphead Update v1 1 4-CODEX Key Generator is safe and easy to use. It works on Windows and Mac OS computers. Download it now and enjoy the game!

Calculator CofAI
-
-Length Converter
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/spaces/CofAI/chat.b4/g4f/Provider/Providers/Xiaor.py b/spaces/CofAI/chat.b4/g4f/Provider/Providers/Xiaor.py
deleted file mode 100644
index 5757f9971157116cbbfabbe5420e3b7e88fed4e7..0000000000000000000000000000000000000000
--- a/spaces/CofAI/chat.b4/g4f/Provider/Providers/Xiaor.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import requests
-import os
-import json
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://xiaor.eu.org'
-model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k',
- 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0613']
-supports_stream = True
-needs_auth = False
-
-
-def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs):
- headers = {
- 'Content-Type': 'application/json',
- }
- data = {
- 'model': model,
- 'temperature': temperature,
- 'presence_penalty': 0,
- 'messages': messages,
- }
- response = requests.post(url + '/p1/v1/chat/completions',
- json=data, headers=headers, stream=stream)
-
- if stream:
- for chunk in response.iter_content(chunk_size=None):
- chunk = chunk.decode('utf-8')
- if chunk.strip():
- message = json.loads(chunk)['choices'][0]['message']['content']
- yield message
- else:
- message = response.json()['choices'][0]['message']['content']
- yield message
-
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join(
- [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
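The streaming branch of the deleted Xiaor provider parses each chunk as a standalone JSON object and yields the message content. That step is pure and can be exercised without the network; a minimal sketch (the function name and sample chunks are mine, not part of the original file):

```python
import json

def extract_messages(chunks):
    """Decode raw byte chunks and yield each choice's message content,
    mirroring the streaming loop in the deleted provider."""
    for chunk in chunks:
        text = chunk.decode('utf-8')
        if text.strip():  # skip blank / keep-alive chunks
            yield json.loads(text)['choices'][0]['message']['content']

# Fabricated chunks shaped like the endpoint's responses.
sample = [
    b'{"choices": [{"message": {"content": "Hello"}}]}',
    b'   ',
    b'{"choices": [{"message": {"content": "world"}}]}',
]
print(list(extract_messages(sample)))  # ['Hello', 'world']
```

Note that this assumes each chunk is one complete JSON document, as the original `json.loads(chunk)` call does; a chunk split mid-object would raise.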
diff --git a/spaces/CofAI/chat/g4f/Provider/Providers/Wewordle.py b/spaces/CofAI/chat/g4f/Provider/Providers/Wewordle.py
deleted file mode 100644
index 090d0bf3ab2e1f3851880393d43662edfbe9d984..0000000000000000000000000000000000000000
--- a/spaces/CofAI/chat/g4f/Provider/Providers/Wewordle.py
+++ /dev/null
@@ -1,75 +0,0 @@
-import os
-import requests
-import json
-import random
-import time
-import string
-from ...typing import sha256, Dict, get_type_hints
-
-url = "https://wewordle.org/gptapi/v1/android/turbo"
-model = ['gpt-3.5-turbo']
-supports_stream = False
-needs_auth = False
-
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
- base = ''
- for message in messages:
- base += '%s: %s\n' % (message['role'], message['content'])
- base += 'assistant:'
- # randomize user id and app id
- _user_id = ''.join(random.choices(
- f'{string.ascii_lowercase}{string.digits}', k=16))
- _app_id = ''.join(random.choices(
- f'{string.ascii_lowercase}{string.digits}', k=31))
- # make current date with format utc
- _request_date = time.strftime("%Y-%m-%dT%H:%M:%S.000Z", time.gmtime())
- headers = {
- 'accept': '*/*',
- 'pragma': 'no-cache',
- 'Content-Type': 'application/json',
- 'Connection': 'keep-alive'
- }
- data = {
- "user": _user_id,
- "messages": [
- {"role": "user", "content": base}
- ],
- "subscriber": {
- "originalPurchaseDate": None,
- "originalApplicationVersion": None,
- "allPurchaseDatesMillis": {},
- "entitlements": {
- "active": {},
- "all": {}
- },
- "allPurchaseDates": {},
- "allExpirationDatesMillis": {},
- "allExpirationDates": {},
- "originalAppUserId": f"$RCAnonymousID:{_app_id}",
- "latestExpirationDate": None,
- "requestDate": _request_date,
- "latestExpirationDateMillis": None,
- "nonSubscriptionTransactions": [],
- "originalPurchaseDateMillis": None,
- "managementURL": None,
- "allPurchasedProductIdentifiers": [],
- "firstSeen": _request_date,
- "activeSubscriptions": []
- }
- }
- response = requests.post(url, headers=headers, data=json.dumps(data))
- if response.status_code == 200:
- _json = response.json()
- if 'message' in _json:
- message_content = _json['message']['content']
- message_content = message_content.replace('**assistant:** ', '')
- yield message_content
- else:
- print(f"Error Occurred::{response.status_code}")
- return None
-
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join(
- [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
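The deleted Wewordle provider flattens the whole chat history into a single "role: content" transcript ending with an "assistant:" cue, then sends it as one user message. A standalone sketch of that step (the helper name is mine):

```python
def flatten_messages(messages):
    """Join chat messages into the 'role: content' transcript the
    deleted provider sends, ending with an 'assistant:' cue."""
    base = ''
    for message in messages:
        base += '%s: %s\n' % (message['role'], message['content'])
    base += 'assistant:'
    return base

print(flatten_messages([{'role': 'user', 'content': 'hi'}]))
# user: hi
# assistant:
```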
diff --git a/spaces/CorvaeOboro/gen_ability_icon/dnnlib/__init__.py b/spaces/CorvaeOboro/gen_ability_icon/dnnlib/__init__.py
deleted file mode 100644
index 2f08cf36f11f9b0fd94c1b7caeadf69b98375b04..0000000000000000000000000000000000000000
--- a/spaces/CorvaeOboro/gen_ability_icon/dnnlib/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-from .util import EasyDict, make_cache_dir_path
diff --git a/spaces/Cran-May/ygVI/app.py b/spaces/Cran-May/ygVI/app.py
deleted file mode 100644
index d5be40236716654cca52996de779141be135d5f4..0000000000000000000000000000000000000000
--- a/spaces/Cran-May/ygVI/app.py
+++ /dev/null
@@ -1,250 +0,0 @@
-from typing import Iterator
-
-import gradio as gr
-
-
-from model import run
-
-DEFAULT_SYSTEM_PROMPT = ""
-MAX_MAX_NEW_TOKENS = 2048
-DEFAULT_MAX_NEW_TOKENS = 1024
-MAX_INPUT_TOKEN_LENGTH = 4000
-
-DESCRIPTION = """
-# 玉刚六号改/yugangVI-Chat
-"""
-LICENSE="Based on Baichuan-13B-Chat and https://github.com/ouwei2013/baichuan13b.cpp"
-
-
-
-def clear_and_save_textbox(message: str) -> tuple[str, str]:
- return '', message
-
-
-def display_input(message: str,
- history: list[tuple[str, str]]) -> list[tuple[str, str]]:
- history.append((message, ''))
- return history
-
-
-def delete_prev_fn(
- history: list[tuple[str, str]]) -> tuple[list[tuple[str, str]], str]:
- try:
- message, _ = history.pop()
- except IndexError:
- message = ''
- return history, message or ''
-
-
-def generate(
- message: str,
- history_with_input: list[tuple[str, str]],
- system_prompt: str,
- max_new_tokens: int,
- temperature: float,
- top_p: float,
- top_k: int,
-) -> Iterator[list[tuple[str, str]]]:
-
- history = history_with_input[:-1]
- generator = run(message, history, system_prompt, max_new_tokens, temperature, top_p, top_k)
- for response in generator:
- yield history + [(message, response)]
-
-
-def process_example(message: str) -> tuple[str, list[tuple[str, str]]]:
- generator = generate(message, [], DEFAULT_SYSTEM_PROMPT, 8192, 1, 0.95, 50)
- for x in generator:
- pass
- return '', x
-
-
-def check_input_token_length(message: str, chat_history: list[tuple[str, str]], system_prompt: str) -> None:
- pass  # no-op: input token length check is disabled in this Space
-
-
-with gr.Blocks(css='style.css') as demo:
- gr.Markdown(DESCRIPTION)
- gr.DuplicateButton(value='Duplicate Space for private use',
- elem_id='duplicate-button')
-
- with gr.Group():
- chatbot = gr.Chatbot(label='Chatbot')
- with gr.Row():
- textbox = gr.Textbox(
- container=False,
- show_label=False,
- placeholder='请输入/Type a message...',
- scale=10,
- )
- submit_button = gr.Button('提交/Submit',
- variant='primary',
- scale=1,
- min_width=0)
- with gr.Row():
- retry_button = gr.Button('🔄 重来/Retry', variant='secondary')
- undo_button = gr.Button('↩️ 撤销/Undo', variant='secondary')
- clear_button = gr.Button('🗑️ 清除/Clear', variant='secondary')
-
- saved_input = gr.State()
-
- with gr.Accordion(label='进阶设置/Advanced options', open=False):
- system_prompt = gr.Textbox(label='预设引导词/System prompt',
- value=DEFAULT_SYSTEM_PROMPT,
- lines=6)
- max_new_tokens = gr.Slider(
- label='Max new tokens',
- minimum=1,
- maximum=MAX_MAX_NEW_TOKENS,
- step=1,
- value=DEFAULT_MAX_NEW_TOKENS,
- )
- temperature = gr.Slider(
- label='情感温度/Temperature',
- minimum=0.1,
- maximum=4.0,
- step=0.1,
- value=0.3,
- )
- top_p = gr.Slider(
- label='Top-p (nucleus sampling)',
- minimum=0.05,
- maximum=1.0,
- step=0.05,
- value=0.85,
- )
- top_k = gr.Slider(
- label='Top-k',
- minimum=1,
- maximum=1000,
- step=1,
- value=5,
- )
-
- gr.Examples(
- examples=[
- '中华人民共和国的首都是?',
-
- ],
- inputs=textbox,
- outputs=[textbox, chatbot],
- fn=process_example,
- cache_examples=True,
- )
-
- gr.Markdown(LICENSE)
-
- textbox.submit(
- fn=clear_and_save_textbox,
- inputs=textbox,
- outputs=[textbox, saved_input],
- api_name=False,
- queue=False,
- ).then(
- fn=display_input,
- inputs=[saved_input, chatbot],
- outputs=chatbot,
- api_name=False,
- queue=False,
- ).then(
- fn=check_input_token_length,
- inputs=[saved_input, chatbot, system_prompt],
- api_name=False,
- queue=False,
- ).success(
- fn=generate,
- inputs=[
- saved_input,
- chatbot,
- system_prompt,
- max_new_tokens,
- temperature,
- top_p,
- top_k,
- ],
- outputs=chatbot,
- api_name=False,
- )
-
- button_event_preprocess = submit_button.click(
- fn=clear_and_save_textbox,
- inputs=textbox,
- outputs=[textbox, saved_input],
- api_name=False,
- queue=False,
- ).then(
- fn=display_input,
- inputs=[saved_input, chatbot],
- outputs=chatbot,
- api_name=False,
- queue=False,
- ).then(
- fn=check_input_token_length,
- inputs=[saved_input, chatbot, system_prompt],
- api_name=False,
- queue=False,
- ).success(
- fn=generate,
- inputs=[
- saved_input,
- chatbot,
- system_prompt,
- max_new_tokens,
- temperature,
- top_p,
- top_k,
- ],
- outputs=chatbot,
- api_name=False,
- )
-
- retry_button.click(
- fn=delete_prev_fn,
- inputs=chatbot,
- outputs=[chatbot, saved_input],
- api_name=False,
- queue=False,
- ).then(
- fn=display_input,
- inputs=[saved_input, chatbot],
- outputs=chatbot,
- api_name=False,
- queue=False,
- ).then(
- fn=generate,
- inputs=[
- saved_input,
- chatbot,
- system_prompt,
- max_new_tokens,
- temperature,
- top_p,
- top_k,
- ],
- outputs=chatbot,
- api_name=False,
- )
-
- undo_button.click(
-
- fn=delete_prev_fn,
- inputs=chatbot,
- outputs=[chatbot, saved_input],
- api_name=False,
- queue=False,
- ).then(
- fn=lambda x: x,
- inputs=[saved_input],
- outputs=textbox,
- api_name=False,
- queue=False,
- )
-
- clear_button.click(
- fn=lambda: ([], ''),
- outputs=[chatbot, saved_input],
- queue=False,
- api_name=False,
- )
-
-demo.queue(max_size=20).launch()
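The retry/undo wiring above hinges on `delete_prev_fn` popping the last (message, response) pair and handing the message back for re-submission. That logic is framework-independent and can be sketched on its own (the sample history is invented):

```python
def delete_prev(history):
    """Pop the most recent (message, response) pair; return the shortened
    history plus the popped message ('' when history is empty)."""
    try:
        message, _ = history.pop()
    except IndexError:
        message = ''
    return history, message or ''

hist = [('hi', 'hello'), ('bye', 'goodbye')]
hist, last = delete_prev(hist)
print(hist, last)  # [('hi', 'hello')] bye
```

Retry then re-runs generation with `last`, while undo simply places `last` back in the textbox.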
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/dtype.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/dtype.py
deleted file mode 100644
index baedb192be4bbddd52bc0105a344a0484c890fe1..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/dtype.py
+++ /dev/null
@@ -1,39 +0,0 @@
-#coding=utf-8
-'''
-Created on 2016-09-27
-@author: dengdan
-'''
-import numpy as np
-
-float32 = 'float32'
-floatX = float32
-int32 = 'int32'
-uint8 = 'uint8'
-string = 'str'
-
-def cast(obj, dtype):
- if isinstance(obj, list):
- return np.asarray(obj, dtype = dtype)
- return np.cast[dtype](obj)
-
-def int(obj):
- return cast(obj, 'int')
-
-def double(obj):
- return cast(obj, 'double')
-
-def is_number(obj):
- try:
- obj + 1
- except TypeError:
- return False
- return True
-
-def is_str(s):
- return type(s) == str
-
-def is_list(s):
- return type(s) == list
-
-def is_tuple(s):
- return type(s) == tuple
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/DdsImagePlugin.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/DdsImagePlugin.py
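The `is_number` helper in the deleted util relies on duck typing: anything that supports `obj + 1` counts as numeric. A standalone sketch that catches only `TypeError`, so unrelated errors stay visible:

```python
def is_number(obj):
    """Duck-typed numeric check: anything supporting `obj + 1` counts.
    Catching TypeError (rather than a bare except) keeps real errors
    such as KeyboardInterrupt from being swallowed."""
    try:
        obj + 1
    except TypeError:
        return False
    return True

print(is_number(3.5), is_number('x'))  # True False
```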
deleted file mode 100644
index a946daeaa6b9a5946fc5492443dfddbb10881c99..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/DdsImagePlugin.py
+++ /dev/null
@@ -1,291 +0,0 @@
-"""
-A Pillow loader for .dds files (S3TC-compressed aka DXTC)
-Jerome Leclanche
-
-
- **{result['name']}**
-
- _Asking Price:_ {metadata_pid.get('asking_price', 'N/A')}
-
- _Category:_ {metadata_pid.get('category', 'N/A')}
-
- _Location:_ {metadata_pid.get('location', 'N/A')}
- """, unsafe_allow_html=True)
-
- st.markdown(f"""**_Description:_** {result['description'][:MAX_LENGTH_DESC]}...[more]({result['url']})
- """)
-
- with col_compare:
- st.checkbox('compare', key=f"cb_compare__{pid}", on_change=callback_count_checked)
-
-# display summary tab
-if st.session_state['count_checked'] > 0:
- with summary_container.container():
- st.divider()
- st.header('Summary')
- if st.button('Compare Products'):
-
- # populate pids that are checked
- relevant_pids = [key.split('__')[-1] for key in st.session_state['checked_boxes']]
- relevant_pids = list(set(relevant_pids))
-
- # get metadata from pinecone
- metadata_filter = {
- 'product_id': {"$in": relevant_pids}
- }
- results = query_pinecone(
- vector=ZERO_EMBEDDING_VECTOR,
- top_k=100,
- include_metadata=True,
- metadata_filter=metadata_filter
- )
-
- # organize document by product_id
- documents = {}
- for res in results['matches']:
- pid, chunk_id = res['id'].split('-')
- if pid not in documents:
- documents[pid] = {}
- if "chunk" not in documents[pid]:
- documents[pid]['chunk'] = {}
- documents[pid]['chunk'][chunk_id] = res['metadata']['document']
-
- # concatenate documents
- products = []
- for pid, doc in documents.items():
- products.append(
- doc['chunk']['1'] + '\n\n' + doc['chunk']['2']
- )
-
- # summarize
- with st.spinner('Summarizing...'):
- summary = summarize_products(products)
- st.markdown(summary.get("content"), unsafe_allow_html=True)
-else:
- try:
- summary_container.empty()
- except NameError:
- pass
-
-# ### Uncomment if you want to debug states
-# with st.expander("developer tool"):
-# st.json(st.session_state)
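The summary tab above rebuilds each product's text from Pinecone matches whose ids have the form "pid-chunk_id", grouping by product and joining chunks 1 and 2. A self-contained sketch of that grouping (the helper name and sample matches are mine):

```python
def reassemble(matches):
    """Group match documents by product id (match ids look like
    '<pid>-<chunk_id>') and join chunks 1 and 2 per product,
    as the deleted app's summary tab does."""
    documents = {}
    for res in matches:
        pid, chunk_id = res['id'].split('-')
        chunks = documents.setdefault(pid, {}).setdefault('chunk', {})
        chunks[chunk_id] = res['metadata']['document']
    return [doc['chunk']['1'] + '\n\n' + doc['chunk']['2']
            for doc in documents.values()]

# Fabricated matches; arrival order of chunks does not matter.
matches = [
    {'id': 'p1-2', 'metadata': {'document': 'tail'}},
    {'id': 'p1-1', 'metadata': {'document': 'head'}},
]
print(reassemble(matches))  # ['head\n\ntail']
```

As in the original, a product missing chunk '1' or '2' would raise a KeyError, so the sketch keeps that assumption.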
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/ARTInSOFT VBUC.lic Crack [TOP].md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/ARTInSOFT VBUC.lic Crack [TOP].md
deleted file mode 100644
index 901e6f8f0958673cb4ec004c65fcd1fc2576a579..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/ARTInSOFT VBUC.lic Crack [TOP].md
+++ /dev/null
@@ -1,6 +0,0 @@
-ARTInSOFT VBUC.lic crack
-
- 8a78ff9644
-
-
-
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Bhoot Damar Tantra Pdf Download.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Bhoot Damar Tantra Pdf Download.md
deleted file mode 100644
index c0303659d656ec4987e4535ebae624a5d5b3e515..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Bhoot Damar Tantra Pdf Download.md
+++ /dev/null
@@ -1,6 +0,0 @@
-bhoot damar tantra pdf download
-
-Vishva buy bhoot damar tantra book kalyaan mahaan indrajaal book contains thousands of tantra ... Bhoot damar maha tantra hindi - free download as pdf file (. 4d29de3e1b
-
-
-
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Elcomsoft Internet Password Breaker Cracked HOT!.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Elcomsoft Internet Password Breaker Cracked HOT!.md
deleted file mode 100644
index 84e1e70bf284c5f5592ccc1a8a6a433679f24e17..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Elcomsoft Internet Password Breaker Cracked HOT!.md
+++ /dev/null
@@ -1,94 +0,0 @@
-
-Elcomsoft Internet Password Breaker Cracked: A Smart and Easy Way to Recover Web and Email Passwords
-
-Elcomsoft Internet Password Breaker Cracked
-
-What is Elcomsoft Internet Password Breaker cracked?
-
-How to download and use Elcomsoft Internet Password Breaker cracked?
-
-
-
-
-What are the benefits and risks of using Elcomsoft Internet Password Breaker cracked?
-
-
-
-
-
-
-
-Conclusion
-
-How to use Elcomsoft Internet Password Breaker cracked to recover web and email passwords?
-
-
-
-
-How to optimize your web and email passwords security?
-
-
-
-
-Conclusion
-
-
-
-
\ No newline at end of file
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Fabrication CAMduct 2019 Keygen X-force V1.0.5 275 !!EXCLUSIVE!!.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Fabrication CAMduct 2019 Keygen X-force V1.0.5 275 !!EXCLUSIVE!!.md
deleted file mode 100644
index 1fb61e4f128a9dbb6275485c060761e71aa87bc6..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Fabrication CAMduct 2019 Keygen X-force V1.0.5 275 !!EXCLUSIVE!!.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Fabrication CAMduct 2019 keygen x-force v1.0.5 275
-
-3ds Max 2016-2017-2018-2019-2020 4.20.00 + crack (FULL),Autodesk Maya . ... Fabrication CAMduct 2008 Keygen X-force V1.0.5 275. 4d29de3e1b
-
-
-
diff --git a/spaces/rkp74/MCQ-Generation/app.py b/spaces/rkp74/MCQ-Generation/app.py
deleted file mode 100644
index 6c5696b28826c0e77b68c5fb05f1eaf5725dfe8c..0000000000000000000000000000000000000000
--- a/spaces/rkp74/MCQ-Generation/app.py
+++ /dev/null
@@ -1,70 +0,0 @@
-from fastT5 import get_onnx_model,get_onnx_runtime_sessions,OnnxT5
-from transformers import AutoTokenizer
-from pathlib import Path
-import os
-from fastapi import FastAPI
-from pydantic import BaseModel
-
-app = FastAPI()
-
-
-class QuestionRequest(BaseModel):
- context: str
- answer: str
-
-class QuestionResponse(BaseModel):
- question: str
-
-trained_model_path = './t5_squad_v1/'
-
-pretrained_model_name = Path(trained_model_path).stem
-
-
-encoder_path = os.path.join(trained_model_path,f"{pretrained_model_name}-encoder-quantized.onnx")
-decoder_path = os.path.join(trained_model_path,f"{pretrained_model_name}-decoder-quantized.onnx")
-init_decoder_path = os.path.join(trained_model_path,f"{pretrained_model_name}-init-decoder-quantized.onnx")
-
-model_paths = encoder_path, decoder_path, init_decoder_path
-model_sessions = get_onnx_runtime_sessions(model_paths)
-model = OnnxT5(trained_model_path, model_sessions)
-
-tokenizer = AutoTokenizer.from_pretrained(trained_model_path)
-
-
-def get_question(sentence,answer,mdl,tknizer):
- text = "context: {} answer: {}".format(sentence,answer)
- print(text)
- max_len = 256
- encoding = tknizer.encode_plus(text,max_length=max_len, pad_to_max_length=False,truncation=True, return_tensors="pt")
-
- input_ids, attention_mask = encoding["input_ids"], encoding["attention_mask"]
-
- outs = mdl.generate(input_ids=input_ids,
- attention_mask=attention_mask,
- early_stopping=True,
- num_beams=5,
- num_return_sequences=1,
- no_repeat_ngram_size=2,
- max_length=128)
-
-
- dec = [tknizer.decode(ids,skip_special_tokens=True) for ids in outs]
-
-
- Question = dec[0].replace("question:","")
- Question= Question.strip()
- return Question
-
-
-
-@app.get('/')
-def index():
- return {'message':'hello world'}
-
-@app.post("/getquestion", response_model=QuestionResponse)
-def getquestion(request: QuestionRequest):
- context = request.context
- answer = request.answer
- ques = get_question(context,answer,model,tokenizer)
- return QuestionResponse(question=ques)
-
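`get_question` above performs two small string transformations around the model call: formatting the "context: ... answer: ..." prompt and stripping the "question:" tag from the decoded output. Both can be sketched without loading the model (the helper names are mine):

```python
def build_prompt(context, answer):
    """Format the T5 input exactly as get_question does."""
    return "context: {} answer: {}".format(context, answer)

def clean_question(decoded):
    """Drop the 'question:' tag and surrounding whitespace from
    the decoded model output."""
    return decoded.replace("question:", "").strip()

print(build_prompt("Paris is in France.", "Paris"))
print(clean_question("question: Where is Paris?"))  # Where is Paris?
```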
diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/datasets/lvis.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/datasets/lvis.py
deleted file mode 100644
index 5f6196eee59393b3a7733c76694d21dbb1279e68..0000000000000000000000000000000000000000
--- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/datasets/lvis.py
+++ /dev/null
@@ -1,742 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import itertools
-import logging
-import os.path as osp
-import tempfile
-import warnings
-from collections import OrderedDict
-
-import numpy as np
-from mmcv.utils import print_log
-from terminaltables import AsciiTable
-
-from .builder import DATASETS
-from .coco import CocoDataset
-
-
-@DATASETS.register_module()
-class LVISV05Dataset(CocoDataset):
-
- CLASSES = (
- 'acorn', 'aerosol_can', 'air_conditioner', 'airplane', 'alarm_clock',
- 'alcohol', 'alligator', 'almond', 'ambulance', 'amplifier', 'anklet',
- 'antenna', 'apple', 'apple_juice', 'applesauce', 'apricot', 'apron',
- 'aquarium', 'armband', 'armchair', 'armoire', 'armor', 'artichoke',
- 'trash_can', 'ashtray', 'asparagus', 'atomizer', 'avocado', 'award',
- 'awning', 'ax', 'baby_buggy', 'basketball_backboard', 'backpack',
- 'handbag', 'suitcase', 'bagel', 'bagpipe', 'baguet', 'bait', 'ball',
- 'ballet_skirt', 'balloon', 'bamboo', 'banana', 'Band_Aid', 'bandage',
- 'bandanna', 'banjo', 'banner', 'barbell', 'barge', 'barrel',
- 'barrette', 'barrow', 'baseball_base', 'baseball', 'baseball_bat',
- 'baseball_cap', 'baseball_glove', 'basket', 'basketball_hoop',
- 'basketball', 'bass_horn', 'bat_(animal)', 'bath_mat', 'bath_towel',
- 'bathrobe', 'bathtub', 'batter_(food)', 'battery', 'beachball', 'bead',
- 'beaker', 'bean_curd', 'beanbag', 'beanie', 'bear', 'bed',
- 'bedspread', 'cow', 'beef_(food)', 'beeper', 'beer_bottle', 'beer_can',
- 'beetle', 'bell', 'bell_pepper', 'belt', 'belt_buckle', 'bench',
- 'beret', 'bib', 'Bible', 'bicycle', 'visor', 'binder', 'binoculars',
- 'bird', 'birdfeeder', 'birdbath', 'birdcage', 'birdhouse',
- 'birthday_cake', 'birthday_card', 'biscuit_(bread)', 'pirate_flag',
- 'black_sheep', 'blackboard', 'blanket', 'blazer', 'blender', 'blimp',
- 'blinker', 'blueberry', 'boar', 'gameboard', 'boat', 'bobbin',
- 'bobby_pin', 'boiled_egg', 'bolo_tie', 'deadbolt', 'bolt', 'bonnet',
- 'book', 'book_bag', 'bookcase', 'booklet', 'bookmark',
- 'boom_microphone', 'boot', 'bottle', 'bottle_opener', 'bouquet',
- 'bow_(weapon)', 'bow_(decorative_ribbons)', 'bow-tie', 'bowl',
- 'pipe_bowl', 'bowler_hat', 'bowling_ball', 'bowling_pin',
- 'boxing_glove', 'suspenders', 'bracelet', 'brass_plaque', 'brassiere',
- 'bread-bin', 'breechcloth', 'bridal_gown', 'briefcase',
- 'bristle_brush', 'broccoli', 'broach', 'broom', 'brownie',
- 'brussels_sprouts', 'bubble_gum', 'bucket', 'horse_buggy', 'bull',
- 'bulldog', 'bulldozer', 'bullet_train', 'bulletin_board',
- 'bulletproof_vest', 'bullhorn', 'corned_beef', 'bun', 'bunk_bed',
- 'buoy', 'burrito', 'bus_(vehicle)', 'business_card', 'butcher_knife',
- 'butter', 'butterfly', 'button', 'cab_(taxi)', 'cabana', 'cabin_car',
- 'cabinet', 'locker', 'cake', 'calculator', 'calendar', 'calf',
- 'camcorder', 'camel', 'camera', 'camera_lens', 'camper_(vehicle)',
- 'can', 'can_opener', 'candelabrum', 'candle', 'candle_holder',
- 'candy_bar', 'candy_cane', 'walking_cane', 'canister', 'cannon',
- 'canoe', 'cantaloup', 'canteen', 'cap_(headwear)', 'bottle_cap',
- 'cape', 'cappuccino', 'car_(automobile)', 'railcar_(part_of_a_train)',
- 'elevator_car', 'car_battery', 'identity_card', 'card', 'cardigan',
- 'cargo_ship', 'carnation', 'horse_carriage', 'carrot', 'tote_bag',
- 'cart', 'carton', 'cash_register', 'casserole', 'cassette', 'cast',
- 'cat', 'cauliflower', 'caviar', 'cayenne_(spice)', 'CD_player',
- 'celery', 'cellular_telephone', 'chain_mail', 'chair', 'chaise_longue',
- 'champagne', 'chandelier', 'chap', 'checkbook', 'checkerboard',
- 'cherry', 'chessboard', 'chest_of_drawers_(furniture)',
- 'chicken_(animal)', 'chicken_wire', 'chickpea', 'Chihuahua',
- 'chili_(vegetable)', 'chime', 'chinaware', 'crisp_(potato_chip)',
- 'poker_chip', 'chocolate_bar', 'chocolate_cake', 'chocolate_milk',
- 'chocolate_mousse', 'choker', 'chopping_board', 'chopstick',
- 'Christmas_tree', 'slide', 'cider', 'cigar_box', 'cigarette',
- 'cigarette_case', 'cistern', 'clarinet', 'clasp', 'cleansing_agent',
- 'clementine', 'clip', 'clipboard', 'clock', 'clock_tower',
- 'clothes_hamper', 'clothespin', 'clutch_bag', 'coaster', 'coat',
- 'coat_hanger', 'coatrack', 'cock', 'coconut', 'coffee_filter',
- 'coffee_maker', 'coffee_table', 'coffeepot', 'coil', 'coin',
- 'colander', 'coleslaw', 'coloring_material', 'combination_lock',
- 'pacifier', 'comic_book', 'computer_keyboard', 'concrete_mixer',
- 'cone', 'control', 'convertible_(automobile)', 'sofa_bed', 'cookie',
- 'cookie_jar', 'cooking_utensil', 'cooler_(for_food)',
- 'cork_(bottle_plug)', 'corkboard', 'corkscrew', 'edible_corn',
- 'cornbread', 'cornet', 'cornice', 'cornmeal', 'corset',
- 'romaine_lettuce', 'costume', 'cougar', 'coverall', 'cowbell',
- 'cowboy_hat', 'crab_(animal)', 'cracker', 'crape', 'crate', 'crayon',
- 'cream_pitcher', 'credit_card', 'crescent_roll', 'crib', 'crock_pot',
- 'crossbar', 'crouton', 'crow', 'crown', 'crucifix', 'cruise_ship',
- 'police_cruiser', 'crumb', 'crutch', 'cub_(animal)', 'cube',
- 'cucumber', 'cufflink', 'cup', 'trophy_cup', 'cupcake', 'hair_curler',
- 'curling_iron', 'curtain', 'cushion', 'custard', 'cutting_tool',
- 'cylinder', 'cymbal', 'dachshund', 'dagger', 'dartboard',
- 'date_(fruit)', 'deck_chair', 'deer', 'dental_floss', 'desk',
- 'detergent', 'diaper', 'diary', 'die', 'dinghy', 'dining_table', 'tux',
- 'dish', 'dish_antenna', 'dishrag', 'dishtowel', 'dishwasher',
- 'dishwasher_detergent', 'diskette', 'dispenser', 'Dixie_cup', 'dog',
- 'dog_collar', 'doll', 'dollar', 'dolphin', 'domestic_ass', 'eye_mask',
- 'doorbell', 'doorknob', 'doormat', 'doughnut', 'dove', 'dragonfly',
- 'drawer', 'underdrawers', 'dress', 'dress_hat', 'dress_suit',
- 'dresser', 'drill', 'drinking_fountain', 'drone', 'dropper',
- 'drum_(musical_instrument)', 'drumstick', 'duck', 'duckling',
- 'duct_tape', 'duffel_bag', 'dumbbell', 'dumpster', 'dustpan',
- 'Dutch_oven', 'eagle', 'earphone', 'earplug', 'earring', 'easel',
- 'eclair', 'eel', 'egg', 'egg_roll', 'egg_yolk', 'eggbeater',
- 'eggplant', 'electric_chair', 'refrigerator', 'elephant', 'elk',
- 'envelope', 'eraser', 'escargot', 'eyepatch', 'falcon', 'fan',
- 'faucet', 'fedora', 'ferret', 'Ferris_wheel', 'ferry', 'fig_(fruit)',
- 'fighter_jet', 'figurine', 'file_cabinet', 'file_(tool)', 'fire_alarm',
- 'fire_engine', 'fire_extinguisher', 'fire_hose', 'fireplace',
- 'fireplug', 'fish', 'fish_(food)', 'fishbowl', 'fishing_boat',
- 'fishing_rod', 'flag', 'flagpole', 'flamingo', 'flannel', 'flash',
- 'flashlight', 'fleece', 'flip-flop_(sandal)', 'flipper_(footwear)',
- 'flower_arrangement', 'flute_glass', 'foal', 'folding_chair',
- 'food_processor', 'football_(American)', 'football_helmet',
- 'footstool', 'fork', 'forklift', 'freight_car', 'French_toast',
- 'freshener', 'frisbee', 'frog', 'fruit_juice', 'fruit_salad',
- 'frying_pan', 'fudge', 'funnel', 'futon', 'gag', 'garbage',
- 'garbage_truck', 'garden_hose', 'gargle', 'gargoyle', 'garlic',
- 'gasmask', 'gazelle', 'gelatin', 'gemstone', 'giant_panda',
- 'gift_wrap', 'ginger', 'giraffe', 'cincture',
- 'glass_(drink_container)', 'globe', 'glove', 'goat', 'goggles',
- 'goldfish', 'golf_club', 'golfcart', 'gondola_(boat)', 'goose',
- 'gorilla', 'gourd', 'surgical_gown', 'grape', 'grasshopper', 'grater',
- 'gravestone', 'gravy_boat', 'green_bean', 'green_onion', 'griddle',
- 'grillroom', 'grinder_(tool)', 'grits', 'grizzly', 'grocery_bag',
- 'guacamole', 'guitar', 'gull', 'gun', 'hair_spray', 'hairbrush',
- 'hairnet', 'hairpin', 'ham', 'hamburger', 'hammer', 'hammock',
- 'hamper', 'hamster', 'hair_dryer', 'hand_glass', 'hand_towel',
- 'handcart', 'handcuff', 'handkerchief', 'handle', 'handsaw',
- 'hardback_book', 'harmonium', 'hat', 'hatbox', 'hatch', 'veil',
- 'headband', 'headboard', 'headlight', 'headscarf', 'headset',
- 'headstall_(for_horses)', 'hearing_aid', 'heart', 'heater',
- 'helicopter', 'helmet', 'heron', 'highchair', 'hinge', 'hippopotamus',
- 'hockey_stick', 'hog', 'home_plate_(baseball)', 'honey', 'fume_hood',
- 'hook', 'horse', 'hose', 'hot-air_balloon', 'hotplate', 'hot_sauce',
- 'hourglass', 'houseboat', 'hummingbird', 'hummus', 'polar_bear',
- 'icecream', 'popsicle', 'ice_maker', 'ice_pack', 'ice_skate',
- 'ice_tea', 'igniter', 'incense', 'inhaler', 'iPod',
- 'iron_(for_clothing)', 'ironing_board', 'jacket', 'jam', 'jean',
- 'jeep', 'jelly_bean', 'jersey', 'jet_plane', 'jewelry', 'joystick',
- 'jumpsuit', 'kayak', 'keg', 'kennel', 'kettle', 'key', 'keycard',
- 'kilt', 'kimono', 'kitchen_sink', 'kitchen_table', 'kite', 'kitten',
- 'kiwi_fruit', 'knee_pad', 'knife', 'knight_(chess_piece)',
- 'knitting_needle', 'knob', 'knocker_(on_a_door)', 'koala', 'lab_coat',
- 'ladder', 'ladle', 'ladybug', 'lamb_(animal)', 'lamb-chop', 'lamp',
- 'lamppost', 'lampshade', 'lantern', 'lanyard', 'laptop_computer',
- 'lasagna', 'latch', 'lawn_mower', 'leather', 'legging_(clothing)',
- 'Lego', 'lemon', 'lemonade', 'lettuce', 'license_plate', 'life_buoy',
- 'life_jacket', 'lightbulb', 'lightning_rod', 'lime', 'limousine',
- 'linen_paper', 'lion', 'lip_balm', 'lipstick', 'liquor', 'lizard',
- 'Loafer_(type_of_shoe)', 'log', 'lollipop', 'lotion',
- 'speaker_(stereo_equipment)', 'loveseat', 'machine_gun', 'magazine',
- 'magnet', 'mail_slot', 'mailbox_(at_home)', 'mallet', 'mammoth',
- 'mandarin_orange', 'manger', 'manhole', 'map', 'marker', 'martini',
- 'mascot', 'mashed_potato', 'masher', 'mask', 'mast',
- 'mat_(gym_equipment)', 'matchbox', 'mattress', 'measuring_cup',
- 'measuring_stick', 'meatball', 'medicine', 'melon', 'microphone',
- 'microscope', 'microwave_oven', 'milestone', 'milk', 'minivan',
- 'mint_candy', 'mirror', 'mitten', 'mixer_(kitchen_tool)', 'money',
- 'monitor_(computer_equipment) computer_monitor', 'monkey', 'motor',
- 'motor_scooter', 'motor_vehicle', 'motorboat', 'motorcycle',
- 'mound_(baseball)', 'mouse_(animal_rodent)',
- 'mouse_(computer_equipment)', 'mousepad', 'muffin', 'mug', 'mushroom',
- 'music_stool', 'musical_instrument', 'nailfile', 'nameplate', 'napkin',
- 'neckerchief', 'necklace', 'necktie', 'needle', 'nest', 'newsstand',
- 'nightshirt', 'nosebag_(for_animals)', 'noseband_(for_animals)',
- 'notebook', 'notepad', 'nut', 'nutcracker', 'oar', 'octopus_(food)',
- 'octopus_(animal)', 'oil_lamp', 'olive_oil', 'omelet', 'onion',
- 'orange_(fruit)', 'orange_juice', 'oregano', 'ostrich', 'ottoman',
- 'overalls_(clothing)', 'owl', 'packet', 'inkpad', 'pad', 'paddle',
- 'padlock', 'paintbox', 'paintbrush', 'painting', 'pajamas', 'palette',
- 'pan_(for_cooking)', 'pan_(metal_container)', 'pancake', 'pantyhose',
- 'papaya', 'paperclip', 'paper_plate', 'paper_towel', 'paperback_book',
- 'paperweight', 'parachute', 'parakeet', 'parasail_(sports)',
- 'parchment', 'parka', 'parking_meter', 'parrot',
- 'passenger_car_(part_of_a_train)', 'passenger_ship', 'passport',
- 'pastry', 'patty_(food)', 'pea_(food)', 'peach', 'peanut_butter',
- 'pear', 'peeler_(tool_for_fruit_and_vegetables)', 'pegboard',
- 'pelican', 'pen', 'pencil', 'pencil_box', 'pencil_sharpener',
- 'pendulum', 'penguin', 'pennant', 'penny_(coin)', 'pepper',
- 'pepper_mill', 'perfume', 'persimmon', 'baby', 'pet', 'petfood',
- 'pew_(church_bench)', 'phonebook', 'phonograph_record', 'piano',
- 'pickle', 'pickup_truck', 'pie', 'pigeon', 'piggy_bank', 'pillow',
- 'pin_(non_jewelry)', 'pineapple', 'pinecone', 'ping-pong_ball',
- 'pinwheel', 'tobacco_pipe', 'pipe', 'pistol', 'pita_(bread)',
- 'pitcher_(vessel_for_liquid)', 'pitchfork', 'pizza', 'place_mat',
- 'plate', 'platter', 'playing_card', 'playpen', 'pliers',
- 'plow_(farm_equipment)', 'pocket_watch', 'pocketknife',
- 'poker_(fire_stirring_tool)', 'pole', 'police_van', 'polo_shirt',
- 'poncho', 'pony', 'pool_table', 'pop_(soda)', 'portrait',
- 'postbox_(public)', 'postcard', 'poster', 'pot', 'flowerpot', 'potato',
- 'potholder', 'pottery', 'pouch', 'power_shovel', 'prawn', 'printer',
- 'projectile_(weapon)', 'projector', 'propeller', 'prune', 'pudding',
- 'puffer_(fish)', 'puffin', 'pug-dog', 'pumpkin', 'puncher', 'puppet',
- 'puppy', 'quesadilla', 'quiche', 'quilt', 'rabbit', 'race_car',
- 'racket', 'radar', 'radiator', 'radio_receiver', 'radish', 'raft',
- 'rag_doll', 'raincoat', 'ram_(animal)', 'raspberry', 'rat',
- 'razorblade', 'reamer_(juicer)', 'rearview_mirror', 'receipt',
- 'recliner', 'record_player', 'red_cabbage', 'reflector',
- 'remote_control', 'rhinoceros', 'rib_(food)', 'rifle', 'ring',
- 'river_boat', 'road_map', 'robe', 'rocking_chair', 'roller_skate',
- 'Rollerblade', 'rolling_pin', 'root_beer',
- 'router_(computer_equipment)', 'rubber_band', 'runner_(carpet)',
- 'plastic_bag', 'saddle_(on_an_animal)', 'saddle_blanket', 'saddlebag',
- 'safety_pin', 'sail', 'salad', 'salad_plate', 'salami',
- 'salmon_(fish)', 'salmon_(food)', 'salsa', 'saltshaker',
- 'sandal_(type_of_shoe)', 'sandwich', 'satchel', 'saucepan', 'saucer',
- 'sausage', 'sawhorse', 'saxophone', 'scale_(measuring_instrument)',
- 'scarecrow', 'scarf', 'school_bus', 'scissors', 'scoreboard',
- 'scrambled_eggs', 'scraper', 'scratcher', 'screwdriver',
- 'scrubbing_brush', 'sculpture', 'seabird', 'seahorse', 'seaplane',
- 'seashell', 'seedling', 'serving_dish', 'sewing_machine', 'shaker',
- 'shampoo', 'shark', 'sharpener', 'Sharpie', 'shaver_(electric)',
- 'shaving_cream', 'shawl', 'shears', 'sheep', 'shepherd_dog',
- 'sherbert', 'shield', 'shirt', 'shoe', 'shopping_bag', 'shopping_cart',
- 'short_pants', 'shot_glass', 'shoulder_bag', 'shovel', 'shower_head',
- 'shower_curtain', 'shredder_(for_paper)', 'sieve', 'signboard', 'silo',
- 'sink', 'skateboard', 'skewer', 'ski', 'ski_boot', 'ski_parka',
- 'ski_pole', 'skirt', 'sled', 'sleeping_bag', 'sling_(bandage)',
- 'slipper_(footwear)', 'smoothie', 'snake', 'snowboard', 'snowman',
- 'snowmobile', 'soap', 'soccer_ball', 'sock', 'soda_fountain',
- 'carbonated_water', 'sofa', 'softball', 'solar_array', 'sombrero',
- 'soup', 'soup_bowl', 'soupspoon', 'sour_cream', 'soya_milk',
- 'space_shuttle', 'sparkler_(fireworks)', 'spatula', 'spear',
- 'spectacles', 'spice_rack', 'spider', 'sponge', 'spoon', 'sportswear',
- 'spotlight', 'squirrel', 'stapler_(stapling_machine)', 'starfish',
- 'statue_(sculpture)', 'steak_(food)', 'steak_knife',
- 'steamer_(kitchen_appliance)', 'steering_wheel', 'stencil',
- 'stepladder', 'step_stool', 'stereo_(sound_system)', 'stew', 'stirrer',
- 'stirrup', 'stockings_(leg_wear)', 'stool', 'stop_sign', 'brake_light',
- 'stove', 'strainer', 'strap', 'straw_(for_drinking)', 'strawberry',
- 'street_sign', 'streetlight', 'string_cheese', 'stylus', 'subwoofer',
- 'sugar_bowl', 'sugarcane_(plant)', 'suit_(clothing)', 'sunflower',
- 'sunglasses', 'sunhat', 'sunscreen', 'surfboard', 'sushi', 'mop',
- 'sweat_pants', 'sweatband', 'sweater', 'sweatshirt', 'sweet_potato',
- 'swimsuit', 'sword', 'syringe', 'Tabasco_sauce', 'table-tennis_table',
- 'table', 'table_lamp', 'tablecloth', 'tachometer', 'taco', 'tag',
- 'taillight', 'tambourine', 'army_tank', 'tank_(storage_vessel)',
- 'tank_top_(clothing)', 'tape_(sticky_cloth_or_paper)', 'tape_measure',
- 'tapestry', 'tarp', 'tartan', 'tassel', 'tea_bag', 'teacup',
- 'teakettle', 'teapot', 'teddy_bear', 'telephone', 'telephone_booth',
- 'telephone_pole', 'telephoto_lens', 'television_camera',
- 'television_set', 'tennis_ball', 'tennis_racket', 'tequila',
- 'thermometer', 'thermos_bottle', 'thermostat', 'thimble', 'thread',
- 'thumbtack', 'tiara', 'tiger', 'tights_(clothing)', 'timer', 'tinfoil',
- 'tinsel', 'tissue_paper', 'toast_(food)', 'toaster', 'toaster_oven',
- 'toilet', 'toilet_tissue', 'tomato', 'tongs', 'toolbox', 'toothbrush',
- 'toothpaste', 'toothpick', 'cover', 'tortilla', 'tow_truck', 'towel',
- 'towel_rack', 'toy', 'tractor_(farm_equipment)', 'traffic_light',
- 'dirt_bike', 'trailer_truck', 'train_(railroad_vehicle)', 'trampoline',
- 'tray', 'tree_house', 'trench_coat', 'triangle_(musical_instrument)',
- 'tricycle', 'tripod', 'trousers', 'truck', 'truffle_(chocolate)',
- 'trunk', 'vat', 'turban', 'turkey_(bird)', 'turkey_(food)', 'turnip',
- 'turtle', 'turtleneck_(clothing)', 'typewriter', 'umbrella',
- 'underwear', 'unicycle', 'urinal', 'urn', 'vacuum_cleaner', 'valve',
- 'vase', 'vending_machine', 'vent', 'videotape', 'vinegar', 'violin',
- 'vodka', 'volleyball', 'vulture', 'waffle', 'waffle_iron', 'wagon',
- 'wagon_wheel', 'walking_stick', 'wall_clock', 'wall_socket', 'wallet',
- 'walrus', 'wardrobe', 'wasabi', 'automatic_washer', 'watch',
- 'water_bottle', 'water_cooler', 'water_faucet', 'water_filter',
- 'water_heater', 'water_jug', 'water_gun', 'water_scooter', 'water_ski',
- 'water_tower', 'watering_can', 'watermelon', 'weathervane', 'webcam',
- 'wedding_cake', 'wedding_ring', 'wet_suit', 'wheel', 'wheelchair',
- 'whipped_cream', 'whiskey', 'whistle', 'wick', 'wig', 'wind_chime',
- 'windmill', 'window_box_(for_plants)', 'windshield_wiper', 'windsock',
- 'wine_bottle', 'wine_bucket', 'wineglass', 'wing_chair',
- 'blinder_(for_horses)', 'wok', 'wolf', 'wooden_spoon', 'wreath',
- 'wrench', 'wristband', 'wristlet', 'yacht', 'yak', 'yogurt',
- 'yoke_(animal_equipment)', 'zebra', 'zucchini')
-
- PALETTE = None
-
- def load_annotations(self, ann_file):
- """Load annotation from lvis style annotation file.
-
- Args:
- ann_file (str): Path of annotation file.
-
- Returns:
- list[dict]: Annotation info from LVIS api.
- """
-
- try:
- import lvis
- if getattr(lvis, '__version__', '0') >= '10.5.3':
- warnings.warn(
- 'mmlvis is deprecated, please install official lvis-api by "pip install git+https://github.com/lvis-dataset/lvis-api.git"', # noqa: E501
- UserWarning)
- from lvis import LVIS
- except ImportError:
- raise ImportError(
- 'Package lvis is not installed. Please run "pip install git+https://github.com/lvis-dataset/lvis-api.git".' # noqa: E501
- )
- self.coco = LVIS(ann_file)
- self.cat_ids = self.coco.get_cat_ids()
- self.cat2label = {cat_id: i for i, cat_id in enumerate(self.cat_ids)}
- self.img_ids = self.coco.get_img_ids()
- data_infos = []
- for i in self.img_ids:
- info = self.coco.load_imgs([i])[0]
- if info['file_name'].startswith('COCO'):
-                # Convert from the COCO 2014 file naming convention of
- # COCO_[train/val/test]2014_000000000000.jpg to the 2017
- # naming convention of 000000000000.jpg
- # (LVIS v1 will fix this naming issue)
- info['filename'] = info['file_name'][-16:]
- else:
- info['filename'] = info['file_name']
- data_infos.append(info)
- return data_infos
-
- def evaluate(self,
- results,
- metric='bbox',
- logger=None,
- jsonfile_prefix=None,
- classwise=False,
- proposal_nums=(100, 300, 1000),
- iou_thrs=np.arange(0.5, 0.96, 0.05)):
- """Evaluation in LVIS protocol.
-
- Args:
- results (list[list | tuple]): Testing results of the dataset.
- metric (str | list[str]): Metrics to be evaluated. Options are
- 'bbox', 'segm', 'proposal', 'proposal_fast'.
- logger (logging.Logger | str | None): Logger used for printing
- related information during evaluation. Default: None.
-            jsonfile_prefix (str | None): The prefix of output json files.
-                If not specified, a temp file will be created and removed
-                after evaluation. Default: None.
-            classwise (bool): Whether to evaluate the AP for each class.
- proposal_nums (Sequence[int]): Proposal number used for evaluating
- recalls, such as recall@100, recall@1000.
- Default: (100, 300, 1000).
-            iou_thrs (Sequence[float]): IoU thresholds used for evaluating
-                recalls. If set to a list, the average recall of all IoUs will
-                also be computed. Default: np.arange(0.5, 0.96, 0.05).
-
- Returns:
- dict[str, float]: LVIS style metrics.
- """
-
- try:
- import lvis
- if getattr(lvis, '__version__', '0') >= '10.5.3':
- warnings.warn(
- 'mmlvis is deprecated, please install official lvis-api by "pip install git+https://github.com/lvis-dataset/lvis-api.git"', # noqa: E501
- UserWarning)
- from lvis import LVISEval, LVISResults
- except ImportError:
- raise ImportError(
- 'Package lvis is not installed. Please run "pip install git+https://github.com/lvis-dataset/lvis-api.git".' # noqa: E501
- )
- assert isinstance(results, list), 'results must be a list'
- assert len(results) == len(self), (
- 'The length of results is not equal to the dataset len: {} != {}'.
- format(len(results), len(self)))
-
- metrics = metric if isinstance(metric, list) else [metric]
- allowed_metrics = ['bbox', 'segm', 'proposal', 'proposal_fast']
- for metric in metrics:
- if metric not in allowed_metrics:
- raise KeyError('metric {} is not supported'.format(metric))
-
- if jsonfile_prefix is None:
- tmp_dir = tempfile.TemporaryDirectory()
- jsonfile_prefix = osp.join(tmp_dir.name, 'results')
- else:
- tmp_dir = None
- result_files = self.results2json(results, jsonfile_prefix)
-
- eval_results = OrderedDict()
- # get original api
- lvis_gt = self.coco
- for metric in metrics:
- msg = 'Evaluating {}...'.format(metric)
- if logger is None:
- msg = '\n' + msg
- print_log(msg, logger=logger)
-
- if metric == 'proposal_fast':
- ar = self.fast_eval_recall(
- results, proposal_nums, iou_thrs, logger='silent')
- log_msg = []
- for i, num in enumerate(proposal_nums):
- eval_results['AR@{}'.format(num)] = ar[i]
- log_msg.append('\nAR@{}\t{:.4f}'.format(num, ar[i]))
- log_msg = ''.join(log_msg)
- print_log(log_msg, logger=logger)
- continue
-
- if metric not in result_files:
- raise KeyError('{} is not in results'.format(metric))
- try:
- lvis_dt = LVISResults(lvis_gt, result_files[metric])
- except IndexError:
- print_log(
-                    'The testing results of the whole dataset are empty.',
- logger=logger,
- level=logging.ERROR)
- break
-
- iou_type = 'bbox' if metric == 'proposal' else metric
- lvis_eval = LVISEval(lvis_gt, lvis_dt, iou_type)
- lvis_eval.params.imgIds = self.img_ids
- if metric == 'proposal':
- lvis_eval.params.useCats = 0
- lvis_eval.params.maxDets = list(proposal_nums)
- lvis_eval.evaluate()
- lvis_eval.accumulate()
- lvis_eval.summarize()
- for k, v in lvis_eval.get_results().items():
- if k.startswith('AR'):
- val = float('{:.4f}'.format(float(v)))
- eval_results[k] = val
- else:
- lvis_eval.evaluate()
- lvis_eval.accumulate()
- lvis_eval.summarize()
- lvis_results = lvis_eval.get_results()
- if classwise: # Compute per-category AP
- # Compute per-category AP
- # from https://github.com/facebookresearch/detectron2/
- precisions = lvis_eval.eval['precision']
-                    # precision: (iou, recall, cls, area range)
- assert len(self.cat_ids) == precisions.shape[2]
-
- results_per_category = []
- for idx, catId in enumerate(self.cat_ids):
-                        # area range index 0: all area ranges
-                        # the dimensions of precisions are
-                        # [num_thrs, num_recalls, num_cats, num_area_rngs]
- nm = self.coco.load_cats([catId])[0]
- precision = precisions[:, :, idx, 0]
- precision = precision[precision > -1]
- if precision.size:
- ap = np.mean(precision)
- else:
- ap = float('nan')
- results_per_category.append(
- (f'{nm["name"]}', f'{float(ap):0.3f}'))
-
- num_columns = min(6, len(results_per_category) * 2)
- results_flatten = list(
- itertools.chain(*results_per_category))
- headers = ['category', 'AP'] * (num_columns // 2)
- results_2d = itertools.zip_longest(*[
- results_flatten[i::num_columns]
- for i in range(num_columns)
- ])
- table_data = [headers]
- table_data += [result for result in results_2d]
- table = AsciiTable(table_data)
- print_log('\n' + table.table, logger=logger)
-
- for k, v in lvis_results.items():
- if k.startswith('AP'):
- key = '{}_{}'.format(metric, k)
- val = float('{:.4f}'.format(float(v)))
- eval_results[key] = val
- ap_summary = ' '.join([
- '{}:{:.4f}'.format(k, float(v))
- for k, v in lvis_results.items() if k.startswith('AP')
- ])
- eval_results['{}_mAP_copypaste'.format(metric)] = ap_summary
- lvis_eval.print_results()
- if tmp_dir is not None:
- tmp_dir.cleanup()
- return eval_results
-
-
-LVISDataset = LVISV05Dataset
-DATASETS.register_module(name='LVISDataset', module=LVISDataset)
-
-
-@DATASETS.register_module()
-class LVISV1Dataset(LVISDataset):
-
- CLASSES = (
- 'aerosol_can', 'air_conditioner', 'airplane', 'alarm_clock', 'alcohol',
- 'alligator', 'almond', 'ambulance', 'amplifier', 'anklet', 'antenna',
- 'apple', 'applesauce', 'apricot', 'apron', 'aquarium',
- 'arctic_(type_of_shoe)', 'armband', 'armchair', 'armoire', 'armor',
- 'artichoke', 'trash_can', 'ashtray', 'asparagus', 'atomizer',
- 'avocado', 'award', 'awning', 'ax', 'baboon', 'baby_buggy',
- 'basketball_backboard', 'backpack', 'handbag', 'suitcase', 'bagel',
- 'bagpipe', 'baguet', 'bait', 'ball', 'ballet_skirt', 'balloon',
- 'bamboo', 'banana', 'Band_Aid', 'bandage', 'bandanna', 'banjo',
- 'banner', 'barbell', 'barge', 'barrel', 'barrette', 'barrow',
- 'baseball_base', 'baseball', 'baseball_bat', 'baseball_cap',
- 'baseball_glove', 'basket', 'basketball', 'bass_horn', 'bat_(animal)',
- 'bath_mat', 'bath_towel', 'bathrobe', 'bathtub', 'batter_(food)',
- 'battery', 'beachball', 'bead', 'bean_curd', 'beanbag', 'beanie',
- 'bear', 'bed', 'bedpan', 'bedspread', 'cow', 'beef_(food)', 'beeper',
- 'beer_bottle', 'beer_can', 'beetle', 'bell', 'bell_pepper', 'belt',
- 'belt_buckle', 'bench', 'beret', 'bib', 'Bible', 'bicycle', 'visor',
- 'billboard', 'binder', 'binoculars', 'bird', 'birdfeeder', 'birdbath',
- 'birdcage', 'birdhouse', 'birthday_cake', 'birthday_card',
- 'pirate_flag', 'black_sheep', 'blackberry', 'blackboard', 'blanket',
- 'blazer', 'blender', 'blimp', 'blinker', 'blouse', 'blueberry',
- 'gameboard', 'boat', 'bob', 'bobbin', 'bobby_pin', 'boiled_egg',
- 'bolo_tie', 'deadbolt', 'bolt', 'bonnet', 'book', 'bookcase',
- 'booklet', 'bookmark', 'boom_microphone', 'boot', 'bottle',
- 'bottle_opener', 'bouquet', 'bow_(weapon)', 'bow_(decorative_ribbons)',
- 'bow-tie', 'bowl', 'pipe_bowl', 'bowler_hat', 'bowling_ball', 'box',
- 'boxing_glove', 'suspenders', 'bracelet', 'brass_plaque', 'brassiere',
- 'bread-bin', 'bread', 'breechcloth', 'bridal_gown', 'briefcase',
- 'broccoli', 'broach', 'broom', 'brownie', 'brussels_sprouts',
- 'bubble_gum', 'bucket', 'horse_buggy', 'bull', 'bulldog', 'bulldozer',
- 'bullet_train', 'bulletin_board', 'bulletproof_vest', 'bullhorn',
- 'bun', 'bunk_bed', 'buoy', 'burrito', 'bus_(vehicle)', 'business_card',
- 'butter', 'butterfly', 'button', 'cab_(taxi)', 'cabana', 'cabin_car',
- 'cabinet', 'locker', 'cake', 'calculator', 'calendar', 'calf',
- 'camcorder', 'camel', 'camera', 'camera_lens', 'camper_(vehicle)',
- 'can', 'can_opener', 'candle', 'candle_holder', 'candy_bar',
- 'candy_cane', 'walking_cane', 'canister', 'canoe', 'cantaloup',
- 'canteen', 'cap_(headwear)', 'bottle_cap', 'cape', 'cappuccino',
- 'car_(automobile)', 'railcar_(part_of_a_train)', 'elevator_car',
- 'car_battery', 'identity_card', 'card', 'cardigan', 'cargo_ship',
- 'carnation', 'horse_carriage', 'carrot', 'tote_bag', 'cart', 'carton',
- 'cash_register', 'casserole', 'cassette', 'cast', 'cat', 'cauliflower',
- 'cayenne_(spice)', 'CD_player', 'celery', 'cellular_telephone',
- 'chain_mail', 'chair', 'chaise_longue', 'chalice', 'chandelier',
- 'chap', 'checkbook', 'checkerboard', 'cherry', 'chessboard',
- 'chicken_(animal)', 'chickpea', 'chili_(vegetable)', 'chime',
- 'chinaware', 'crisp_(potato_chip)', 'poker_chip', 'chocolate_bar',
- 'chocolate_cake', 'chocolate_milk', 'chocolate_mousse', 'choker',
- 'chopping_board', 'chopstick', 'Christmas_tree', 'slide', 'cider',
- 'cigar_box', 'cigarette', 'cigarette_case', 'cistern', 'clarinet',
- 'clasp', 'cleansing_agent', 'cleat_(for_securing_rope)', 'clementine',
- 'clip', 'clipboard', 'clippers_(for_plants)', 'cloak', 'clock',
- 'clock_tower', 'clothes_hamper', 'clothespin', 'clutch_bag', 'coaster',
- 'coat', 'coat_hanger', 'coatrack', 'cock', 'cockroach',
- 'cocoa_(beverage)', 'coconut', 'coffee_maker', 'coffee_table',
- 'coffeepot', 'coil', 'coin', 'colander', 'coleslaw',
- 'coloring_material', 'combination_lock', 'pacifier', 'comic_book',
- 'compass', 'computer_keyboard', 'condiment', 'cone', 'control',
- 'convertible_(automobile)', 'sofa_bed', 'cooker', 'cookie',
- 'cooking_utensil', 'cooler_(for_food)', 'cork_(bottle_plug)',
- 'corkboard', 'corkscrew', 'edible_corn', 'cornbread', 'cornet',
- 'cornice', 'cornmeal', 'corset', 'costume', 'cougar', 'coverall',
- 'cowbell', 'cowboy_hat', 'crab_(animal)', 'crabmeat', 'cracker',
- 'crape', 'crate', 'crayon', 'cream_pitcher', 'crescent_roll', 'crib',
- 'crock_pot', 'crossbar', 'crouton', 'crow', 'crowbar', 'crown',
- 'crucifix', 'cruise_ship', 'police_cruiser', 'crumb', 'crutch',
- 'cub_(animal)', 'cube', 'cucumber', 'cufflink', 'cup', 'trophy_cup',
- 'cupboard', 'cupcake', 'hair_curler', 'curling_iron', 'curtain',
- 'cushion', 'cylinder', 'cymbal', 'dagger', 'dalmatian', 'dartboard',
- 'date_(fruit)', 'deck_chair', 'deer', 'dental_floss', 'desk',
- 'detergent', 'diaper', 'diary', 'die', 'dinghy', 'dining_table', 'tux',
- 'dish', 'dish_antenna', 'dishrag', 'dishtowel', 'dishwasher',
- 'dishwasher_detergent', 'dispenser', 'diving_board', 'Dixie_cup',
- 'dog', 'dog_collar', 'doll', 'dollar', 'dollhouse', 'dolphin',
- 'domestic_ass', 'doorknob', 'doormat', 'doughnut', 'dove', 'dragonfly',
- 'drawer', 'underdrawers', 'dress', 'dress_hat', 'dress_suit',
- 'dresser', 'drill', 'drone', 'dropper', 'drum_(musical_instrument)',
- 'drumstick', 'duck', 'duckling', 'duct_tape', 'duffel_bag', 'dumbbell',
- 'dumpster', 'dustpan', 'eagle', 'earphone', 'earplug', 'earring',
- 'easel', 'eclair', 'eel', 'egg', 'egg_roll', 'egg_yolk', 'eggbeater',
- 'eggplant', 'electric_chair', 'refrigerator', 'elephant', 'elk',
- 'envelope', 'eraser', 'escargot', 'eyepatch', 'falcon', 'fan',
- 'faucet', 'fedora', 'ferret', 'Ferris_wheel', 'ferry', 'fig_(fruit)',
- 'fighter_jet', 'figurine', 'file_cabinet', 'file_(tool)', 'fire_alarm',
- 'fire_engine', 'fire_extinguisher', 'fire_hose', 'fireplace',
- 'fireplug', 'first-aid_kit', 'fish', 'fish_(food)', 'fishbowl',
- 'fishing_rod', 'flag', 'flagpole', 'flamingo', 'flannel', 'flap',
- 'flash', 'flashlight', 'fleece', 'flip-flop_(sandal)',
- 'flipper_(footwear)', 'flower_arrangement', 'flute_glass', 'foal',
- 'folding_chair', 'food_processor', 'football_(American)',
- 'football_helmet', 'footstool', 'fork', 'forklift', 'freight_car',
- 'French_toast', 'freshener', 'frisbee', 'frog', 'fruit_juice',
- 'frying_pan', 'fudge', 'funnel', 'futon', 'gag', 'garbage',
- 'garbage_truck', 'garden_hose', 'gargle', 'gargoyle', 'garlic',
- 'gasmask', 'gazelle', 'gelatin', 'gemstone', 'generator',
- 'giant_panda', 'gift_wrap', 'ginger', 'giraffe', 'cincture',
- 'glass_(drink_container)', 'globe', 'glove', 'goat', 'goggles',
- 'goldfish', 'golf_club', 'golfcart', 'gondola_(boat)', 'goose',
- 'gorilla', 'gourd', 'grape', 'grater', 'gravestone', 'gravy_boat',
- 'green_bean', 'green_onion', 'griddle', 'grill', 'grits', 'grizzly',
- 'grocery_bag', 'guitar', 'gull', 'gun', 'hairbrush', 'hairnet',
- 'hairpin', 'halter_top', 'ham', 'hamburger', 'hammer', 'hammock',
- 'hamper', 'hamster', 'hair_dryer', 'hand_glass', 'hand_towel',
- 'handcart', 'handcuff', 'handkerchief', 'handle', 'handsaw',
- 'hardback_book', 'harmonium', 'hat', 'hatbox', 'veil', 'headband',
- 'headboard', 'headlight', 'headscarf', 'headset',
- 'headstall_(for_horses)', 'heart', 'heater', 'helicopter', 'helmet',
- 'heron', 'highchair', 'hinge', 'hippopotamus', 'hockey_stick', 'hog',
- 'home_plate_(baseball)', 'honey', 'fume_hood', 'hook', 'hookah',
- 'hornet', 'horse', 'hose', 'hot-air_balloon', 'hotplate', 'hot_sauce',
- 'hourglass', 'houseboat', 'hummingbird', 'hummus', 'polar_bear',
- 'icecream', 'popsicle', 'ice_maker', 'ice_pack', 'ice_skate',
- 'igniter', 'inhaler', 'iPod', 'iron_(for_clothing)', 'ironing_board',
- 'jacket', 'jam', 'jar', 'jean', 'jeep', 'jelly_bean', 'jersey',
- 'jet_plane', 'jewel', 'jewelry', 'joystick', 'jumpsuit', 'kayak',
- 'keg', 'kennel', 'kettle', 'key', 'keycard', 'kilt', 'kimono',
- 'kitchen_sink', 'kitchen_table', 'kite', 'kitten', 'kiwi_fruit',
- 'knee_pad', 'knife', 'knitting_needle', 'knob', 'knocker_(on_a_door)',
- 'koala', 'lab_coat', 'ladder', 'ladle', 'ladybug', 'lamb_(animal)',
- 'lamb-chop', 'lamp', 'lamppost', 'lampshade', 'lantern', 'lanyard',
- 'laptop_computer', 'lasagna', 'latch', 'lawn_mower', 'leather',
- 'legging_(clothing)', 'Lego', 'legume', 'lemon', 'lemonade', 'lettuce',
- 'license_plate', 'life_buoy', 'life_jacket', 'lightbulb',
- 'lightning_rod', 'lime', 'limousine', 'lion', 'lip_balm', 'liquor',
- 'lizard', 'log', 'lollipop', 'speaker_(stereo_equipment)', 'loveseat',
- 'machine_gun', 'magazine', 'magnet', 'mail_slot', 'mailbox_(at_home)',
- 'mallard', 'mallet', 'mammoth', 'manatee', 'mandarin_orange', 'manger',
- 'manhole', 'map', 'marker', 'martini', 'mascot', 'mashed_potato',
- 'masher', 'mask', 'mast', 'mat_(gym_equipment)', 'matchbox',
- 'mattress', 'measuring_cup', 'measuring_stick', 'meatball', 'medicine',
- 'melon', 'microphone', 'microscope', 'microwave_oven', 'milestone',
- 'milk', 'milk_can', 'milkshake', 'minivan', 'mint_candy', 'mirror',
- 'mitten', 'mixer_(kitchen_tool)', 'money',
- 'monitor_(computer_equipment) computer_monitor', 'monkey', 'motor',
- 'motor_scooter', 'motor_vehicle', 'motorcycle', 'mound_(baseball)',
- 'mouse_(computer_equipment)', 'mousepad', 'muffin', 'mug', 'mushroom',
- 'music_stool', 'musical_instrument', 'nailfile', 'napkin',
- 'neckerchief', 'necklace', 'necktie', 'needle', 'nest', 'newspaper',
- 'newsstand', 'nightshirt', 'nosebag_(for_animals)',
- 'noseband_(for_animals)', 'notebook', 'notepad', 'nut', 'nutcracker',
- 'oar', 'octopus_(food)', 'octopus_(animal)', 'oil_lamp', 'olive_oil',
- 'omelet', 'onion', 'orange_(fruit)', 'orange_juice', 'ostrich',
- 'ottoman', 'oven', 'overalls_(clothing)', 'owl', 'packet', 'inkpad',
- 'pad', 'paddle', 'padlock', 'paintbrush', 'painting', 'pajamas',
- 'palette', 'pan_(for_cooking)', 'pan_(metal_container)', 'pancake',
- 'pantyhose', 'papaya', 'paper_plate', 'paper_towel', 'paperback_book',
- 'paperweight', 'parachute', 'parakeet', 'parasail_(sports)', 'parasol',
- 'parchment', 'parka', 'parking_meter', 'parrot',
- 'passenger_car_(part_of_a_train)', 'passenger_ship', 'passport',
- 'pastry', 'patty_(food)', 'pea_(food)', 'peach', 'peanut_butter',
- 'pear', 'peeler_(tool_for_fruit_and_vegetables)', 'wooden_leg',
- 'pegboard', 'pelican', 'pen', 'pencil', 'pencil_box',
- 'pencil_sharpener', 'pendulum', 'penguin', 'pennant', 'penny_(coin)',
- 'pepper', 'pepper_mill', 'perfume', 'persimmon', 'person', 'pet',
- 'pew_(church_bench)', 'phonebook', 'phonograph_record', 'piano',
- 'pickle', 'pickup_truck', 'pie', 'pigeon', 'piggy_bank', 'pillow',
- 'pin_(non_jewelry)', 'pineapple', 'pinecone', 'ping-pong_ball',
- 'pinwheel', 'tobacco_pipe', 'pipe', 'pistol', 'pita_(bread)',
- 'pitcher_(vessel_for_liquid)', 'pitchfork', 'pizza', 'place_mat',
- 'plate', 'platter', 'playpen', 'pliers', 'plow_(farm_equipment)',
- 'plume', 'pocket_watch', 'pocketknife', 'poker_(fire_stirring_tool)',
- 'pole', 'polo_shirt', 'poncho', 'pony', 'pool_table', 'pop_(soda)',
- 'postbox_(public)', 'postcard', 'poster', 'pot', 'flowerpot', 'potato',
- 'potholder', 'pottery', 'pouch', 'power_shovel', 'prawn', 'pretzel',
- 'printer', 'projectile_(weapon)', 'projector', 'propeller', 'prune',
- 'pudding', 'puffer_(fish)', 'puffin', 'pug-dog', 'pumpkin', 'puncher',
- 'puppet', 'puppy', 'quesadilla', 'quiche', 'quilt', 'rabbit',
- 'race_car', 'racket', 'radar', 'radiator', 'radio_receiver', 'radish',
- 'raft', 'rag_doll', 'raincoat', 'ram_(animal)', 'raspberry', 'rat',
- 'razorblade', 'reamer_(juicer)', 'rearview_mirror', 'receipt',
- 'recliner', 'record_player', 'reflector', 'remote_control',
- 'rhinoceros', 'rib_(food)', 'rifle', 'ring', 'river_boat', 'road_map',
- 'robe', 'rocking_chair', 'rodent', 'roller_skate', 'Rollerblade',
- 'rolling_pin', 'root_beer', 'router_(computer_equipment)',
- 'rubber_band', 'runner_(carpet)', 'plastic_bag',
- 'saddle_(on_an_animal)', 'saddle_blanket', 'saddlebag', 'safety_pin',
- 'sail', 'salad', 'salad_plate', 'salami', 'salmon_(fish)',
- 'salmon_(food)', 'salsa', 'saltshaker', 'sandal_(type_of_shoe)',
- 'sandwich', 'satchel', 'saucepan', 'saucer', 'sausage', 'sawhorse',
- 'saxophone', 'scale_(measuring_instrument)', 'scarecrow', 'scarf',
- 'school_bus', 'scissors', 'scoreboard', 'scraper', 'screwdriver',
- 'scrubbing_brush', 'sculpture', 'seabird', 'seahorse', 'seaplane',
- 'seashell', 'sewing_machine', 'shaker', 'shampoo', 'shark',
- 'sharpener', 'Sharpie', 'shaver_(electric)', 'shaving_cream', 'shawl',
- 'shears', 'sheep', 'shepherd_dog', 'sherbert', 'shield', 'shirt',
- 'shoe', 'shopping_bag', 'shopping_cart', 'short_pants', 'shot_glass',
- 'shoulder_bag', 'shovel', 'shower_head', 'shower_cap',
- 'shower_curtain', 'shredder_(for_paper)', 'signboard', 'silo', 'sink',
- 'skateboard', 'skewer', 'ski', 'ski_boot', 'ski_parka', 'ski_pole',
- 'skirt', 'skullcap', 'sled', 'sleeping_bag', 'sling_(bandage)',
- 'slipper_(footwear)', 'smoothie', 'snake', 'snowboard', 'snowman',
- 'snowmobile', 'soap', 'soccer_ball', 'sock', 'sofa', 'softball',
- 'solar_array', 'sombrero', 'soup', 'soup_bowl', 'soupspoon',
- 'sour_cream', 'soya_milk', 'space_shuttle', 'sparkler_(fireworks)',
- 'spatula', 'spear', 'spectacles', 'spice_rack', 'spider', 'crawfish',
- 'sponge', 'spoon', 'sportswear', 'spotlight', 'squid_(food)',
- 'squirrel', 'stagecoach', 'stapler_(stapling_machine)', 'starfish',
- 'statue_(sculpture)', 'steak_(food)', 'steak_knife', 'steering_wheel',
- 'stepladder', 'step_stool', 'stereo_(sound_system)', 'stew', 'stirrer',
- 'stirrup', 'stool', 'stop_sign', 'brake_light', 'stove', 'strainer',
- 'strap', 'straw_(for_drinking)', 'strawberry', 'street_sign',
- 'streetlight', 'string_cheese', 'stylus', 'subwoofer', 'sugar_bowl',
- 'sugarcane_(plant)', 'suit_(clothing)', 'sunflower', 'sunglasses',
- 'sunhat', 'surfboard', 'sushi', 'mop', 'sweat_pants', 'sweatband',
- 'sweater', 'sweatshirt', 'sweet_potato', 'swimsuit', 'sword',
- 'syringe', 'Tabasco_sauce', 'table-tennis_table', 'table',
- 'table_lamp', 'tablecloth', 'tachometer', 'taco', 'tag', 'taillight',
- 'tambourine', 'army_tank', 'tank_(storage_vessel)',
- 'tank_top_(clothing)', 'tape_(sticky_cloth_or_paper)', 'tape_measure',
- 'tapestry', 'tarp', 'tartan', 'tassel', 'tea_bag', 'teacup',
- 'teakettle', 'teapot', 'teddy_bear', 'telephone', 'telephone_booth',
- 'telephone_pole', 'telephoto_lens', 'television_camera',
- 'television_set', 'tennis_ball', 'tennis_racket', 'tequila',
- 'thermometer', 'thermos_bottle', 'thermostat', 'thimble', 'thread',
- 'thumbtack', 'tiara', 'tiger', 'tights_(clothing)', 'timer', 'tinfoil',
- 'tinsel', 'tissue_paper', 'toast_(food)', 'toaster', 'toaster_oven',
- 'toilet', 'toilet_tissue', 'tomato', 'tongs', 'toolbox', 'toothbrush',
- 'toothpaste', 'toothpick', 'cover', 'tortilla', 'tow_truck', 'towel',
- 'towel_rack', 'toy', 'tractor_(farm_equipment)', 'traffic_light',
- 'dirt_bike', 'trailer_truck', 'train_(railroad_vehicle)', 'trampoline',
- 'tray', 'trench_coat', 'triangle_(musical_instrument)', 'tricycle',
- 'tripod', 'trousers', 'truck', 'truffle_(chocolate)', 'trunk', 'vat',
- 'turban', 'turkey_(food)', 'turnip', 'turtle', 'turtleneck_(clothing)',
- 'typewriter', 'umbrella', 'underwear', 'unicycle', 'urinal', 'urn',
- 'vacuum_cleaner', 'vase', 'vending_machine', 'vent', 'vest',
- 'videotape', 'vinegar', 'violin', 'vodka', 'volleyball', 'vulture',
- 'waffle', 'waffle_iron', 'wagon', 'wagon_wheel', 'walking_stick',
- 'wall_clock', 'wall_socket', 'wallet', 'walrus', 'wardrobe',
- 'washbasin', 'automatic_washer', 'watch', 'water_bottle',
- 'water_cooler', 'water_faucet', 'water_heater', 'water_jug',
- 'water_gun', 'water_scooter', 'water_ski', 'water_tower',
- 'watering_can', 'watermelon', 'weathervane', 'webcam', 'wedding_cake',
- 'wedding_ring', 'wet_suit', 'wheel', 'wheelchair', 'whipped_cream',
- 'whistle', 'wig', 'wind_chime', 'windmill', 'window_box_(for_plants)',
- 'windshield_wiper', 'windsock', 'wine_bottle', 'wine_bucket',
- 'wineglass', 'blinder_(for_horses)', 'wok', 'wolf', 'wooden_spoon',
- 'wreath', 'wrench', 'wristband', 'wristlet', 'yacht', 'yogurt',
- 'yoke_(animal_equipment)', 'zebra', 'zucchini')
-
- def load_annotations(self, ann_file):
- try:
- import lvis
- if getattr(lvis, '__version__', '0') >= '10.5.3':
- warnings.warn(
- 'mmlvis is deprecated, please install official lvis-api by "pip install git+https://github.com/lvis-dataset/lvis-api.git"', # noqa: E501
- UserWarning)
- from lvis import LVIS
- except ImportError:
- raise ImportError(
- 'Package lvis is not installed. Please run "pip install git+https://github.com/lvis-dataset/lvis-api.git".' # noqa: E501
- )
- self.coco = LVIS(ann_file)
- self.cat_ids = self.coco.get_cat_ids()
- self.cat2label = {cat_id: i for i, cat_id in enumerate(self.cat_ids)}
- self.img_ids = self.coco.get_img_ids()
- data_infos = []
- for i in self.img_ids:
- info = self.coco.load_imgs([i])[0]
- # coco_url is used in LVISv1 instead of file_name
- # e.g. http://images.cocodataset.org/train2017/000000391895.jpg
-            # the train/val split is specified in the url
- info['filename'] = info['coco_url'].replace(
- 'http://images.cocodataset.org/', '')
- data_infos.append(info)
- return data_infos
diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/roi_heads/mask_heads/grid_head.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/roi_heads/mask_heads/grid_head.py
deleted file mode 100644
index 0c0702d2a3f8bb7f2292307b907260bdecf1a164..0000000000000000000000000000000000000000
--- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/roi_heads/mask_heads/grid_head.py
+++ /dev/null
@@ -1,363 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from mmcv.cnn import ConvModule
-from mmcv.runner import BaseModule
-
-from mmdet.models.builder import HEADS, build_loss
-
-
-@HEADS.register_module()
-class GridHead(BaseModule):
-
- def __init__(self,
- grid_points=9,
- num_convs=8,
- roi_feat_size=14,
- in_channels=256,
- conv_kernel_size=3,
- point_feat_channels=64,
- deconv_kernel_size=4,
- class_agnostic=False,
- loss_grid=dict(
- type='CrossEntropyLoss', use_sigmoid=True,
- loss_weight=15),
- conv_cfg=None,
- norm_cfg=dict(type='GN', num_groups=36),
- init_cfg=[
- dict(type='Kaiming', layer=['Conv2d', 'Linear']),
- dict(
- type='Normal',
- layer='ConvTranspose2d',
- std=0.001,
- override=dict(
- type='Normal',
- name='deconv2',
- std=0.001,
- bias=-np.log(0.99 / 0.01)))
- ]):
- super(GridHead, self).__init__(init_cfg)
- self.grid_points = grid_points
- self.num_convs = num_convs
- self.roi_feat_size = roi_feat_size
- self.in_channels = in_channels
- self.conv_kernel_size = conv_kernel_size
- self.point_feat_channels = point_feat_channels
- self.conv_out_channels = self.point_feat_channels * self.grid_points
- self.class_agnostic = class_agnostic
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
- if isinstance(norm_cfg, dict) and norm_cfg['type'] == 'GN':
- assert self.conv_out_channels % norm_cfg['num_groups'] == 0
-
- assert self.grid_points >= 4
- self.grid_size = int(np.sqrt(self.grid_points))
- if self.grid_size * self.grid_size != self.grid_points:
- raise ValueError('grid_points must be a square number')
-
- # the predicted heatmap is half of whole_map_size
- if not isinstance(self.roi_feat_size, int):
-            raise ValueError('Only square RoIs are supported in Grid R-CNN')
- self.whole_map_size = self.roi_feat_size * 4
-
- # compute point-wise sub-regions
- self.sub_regions = self.calc_sub_regions()
-
- self.convs = []
- for i in range(self.num_convs):
- in_channels = (
- self.in_channels if i == 0 else self.conv_out_channels)
- stride = 2 if i == 0 else 1
- padding = (self.conv_kernel_size - 1) // 2
- self.convs.append(
- ConvModule(
- in_channels,
- self.conv_out_channels,
- self.conv_kernel_size,
- stride=stride,
- padding=padding,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- bias=True))
- self.convs = nn.Sequential(*self.convs)
-
- self.deconv1 = nn.ConvTranspose2d(
- self.conv_out_channels,
- self.conv_out_channels,
- kernel_size=deconv_kernel_size,
- stride=2,
- padding=(deconv_kernel_size - 2) // 2,
- groups=grid_points)
- self.norm1 = nn.GroupNorm(grid_points, self.conv_out_channels)
- self.deconv2 = nn.ConvTranspose2d(
- self.conv_out_channels,
- grid_points,
- kernel_size=deconv_kernel_size,
- stride=2,
- padding=(deconv_kernel_size - 2) // 2,
- groups=grid_points)
-
- # find the 4-neighbor of each grid point
- self.neighbor_points = []
- grid_size = self.grid_size
- for i in range(grid_size): # i-th column
- for j in range(grid_size): # j-th row
- neighbors = []
- if i > 0: # left: (i - 1, j)
- neighbors.append((i - 1) * grid_size + j)
- if j > 0: # up: (i, j - 1)
- neighbors.append(i * grid_size + j - 1)
- if j < grid_size - 1: # down: (i, j + 1)
- neighbors.append(i * grid_size + j + 1)
- if i < grid_size - 1: # right: (i + 1, j)
- neighbors.append((i + 1) * grid_size + j)
- self.neighbor_points.append(tuple(neighbors))
- # total edges in the grid
- self.num_edges = sum([len(p) for p in self.neighbor_points])
-
- self.forder_trans = nn.ModuleList() # first-order feature transition
- self.sorder_trans = nn.ModuleList() # second-order feature transition
- for neighbors in self.neighbor_points:
- fo_trans = nn.ModuleList()
- so_trans = nn.ModuleList()
- for _ in range(len(neighbors)):
- # each transition module consists of a 5x5 depth-wise conv and
- # 1x1 conv.
- fo_trans.append(
- nn.Sequential(
- nn.Conv2d(
- self.point_feat_channels,
- self.point_feat_channels,
- 5,
- stride=1,
- padding=2,
- groups=self.point_feat_channels),
- nn.Conv2d(self.point_feat_channels,
- self.point_feat_channels, 1)))
- so_trans.append(
- nn.Sequential(
- nn.Conv2d(
- self.point_feat_channels,
- self.point_feat_channels,
- 5,
- 1,
- 2,
- groups=self.point_feat_channels),
- nn.Conv2d(self.point_feat_channels,
- self.point_feat_channels, 1)))
- self.forder_trans.append(fo_trans)
- self.sorder_trans.append(so_trans)
-
- self.loss_grid = build_loss(loss_grid)
-
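The 4-neighbor bookkeeping in `__init__` above can be checked in isolation. A minimal plain-Python sketch mirroring the `i * grid_size + j` indexing (the helper name is invented for illustration):

```python
def grid_neighbors(grid_size):
    """For each grid point (indexed i * grid_size + j, as in GridHead),
    return the indices of its 4-connected neighbors."""
    neighbors = []
    for i in range(grid_size):          # i-th column
        for j in range(grid_size):      # j-th row
            nbrs = []
            if i > 0:                   # left: (i - 1, j)
                nbrs.append((i - 1) * grid_size + j)
            if j > 0:                   # up: (i, j - 1)
                nbrs.append(i * grid_size + j - 1)
            if j < grid_size - 1:       # down: (i, j + 1)
                nbrs.append(i * grid_size + j + 1)
            if i < grid_size - 1:       # right: (i + 1, j)
                nbrs.append((i + 1) * grid_size + j)
            neighbors.append(tuple(nbrs))
    return neighbors

points = grid_neighbors(3)                  # the default 9-point grid
num_edges = sum(len(p) for p in points)     # 24 directed edges
```

For the default 9-point grid the center point (index 4) keeps all four neighbors and corners keep two, giving the 24 edges counted by `self.num_edges` above.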
- def forward(self, x):
- assert x.shape[-1] == x.shape[-2] == self.roi_feat_size
- # RoI feature transformation, downsample 2x
- x = self.convs(x)
-
- c = self.point_feat_channels
- # first-order fusion
- x_fo = [None for _ in range(self.grid_points)]
- for i, points in enumerate(self.neighbor_points):
- x_fo[i] = x[:, i * c:(i + 1) * c]
- for j, point_idx in enumerate(points):
- x_fo[i] = x_fo[i] + self.forder_trans[i][j](
- x[:, point_idx * c:(point_idx + 1) * c])
-
- # second-order fusion
- x_so = [None for _ in range(self.grid_points)]
- for i, points in enumerate(self.neighbor_points):
- x_so[i] = x[:, i * c:(i + 1) * c]
- for j, point_idx in enumerate(points):
- x_so[i] = x_so[i] + self.sorder_trans[i][j](x_fo[point_idx])
-
- # predicted heatmap with fused features
- x2 = torch.cat(x_so, dim=1)
- x2 = self.deconv1(x2)
- x2 = F.relu(self.norm1(x2), inplace=True)
- heatmap = self.deconv2(x2)
-
- # predicted heatmap with original features (applicable during training)
- if self.training:
- x1 = x
- x1 = self.deconv1(x1)
- x1 = F.relu(self.norm1(x1), inplace=True)
- heatmap_unfused = self.deconv2(x1)
- else:
- heatmap_unfused = heatmap
-
- return dict(fused=heatmap, unfused=heatmap_unfused)
-
- def calc_sub_regions(self):
- """Compute point specific representation regions.
-
- See Grid R-CNN Plus (https://arxiv.org/abs/1906.05688) for details.
- """
- # to make it consistent with the original implementation, half_size
- # is computed as 2 * quarter_size, which is smaller
- half_size = self.whole_map_size // 4 * 2
- sub_regions = []
- for i in range(self.grid_points):
- x_idx = i // self.grid_size
- y_idx = i % self.grid_size
- if x_idx == 0:
- sub_x1 = 0
- elif x_idx == self.grid_size - 1:
- sub_x1 = half_size
- else:
- ratio = x_idx / (self.grid_size - 1) - 0.25
- sub_x1 = max(int(ratio * self.whole_map_size), 0)
-
- if y_idx == 0:
- sub_y1 = 0
- elif y_idx == self.grid_size - 1:
- sub_y1 = half_size
- else:
- ratio = y_idx / (self.grid_size - 1) - 0.25
- sub_y1 = max(int(ratio * self.whole_map_size), 0)
- sub_regions.append(
- (sub_x1, sub_y1, sub_x1 + half_size, sub_y1 + half_size))
- return sub_regions
-
- def get_targets(self, sampling_results, rcnn_train_cfg):
- # mix all samples (across images) together.
- pos_bboxes = torch.cat([res.pos_bboxes for res in sampling_results],
- dim=0).cpu()
- pos_gt_bboxes = torch.cat(
- [res.pos_gt_bboxes for res in sampling_results], dim=0).cpu()
- assert pos_bboxes.shape == pos_gt_bboxes.shape
-
- # expand pos_bboxes to 2x of original size
- x1 = pos_bboxes[:, 0] - (pos_bboxes[:, 2] - pos_bboxes[:, 0]) / 2
- y1 = pos_bboxes[:, 1] - (pos_bboxes[:, 3] - pos_bboxes[:, 1]) / 2
- x2 = pos_bboxes[:, 2] + (pos_bboxes[:, 2] - pos_bboxes[:, 0]) / 2
- y2 = pos_bboxes[:, 3] + (pos_bboxes[:, 3] - pos_bboxes[:, 1]) / 2
- pos_bboxes = torch.stack([x1, y1, x2, y2], dim=-1)
- pos_bbox_ws = (pos_bboxes[:, 2] - pos_bboxes[:, 0]).unsqueeze(-1)
- pos_bbox_hs = (pos_bboxes[:, 3] - pos_bboxes[:, 1]).unsqueeze(-1)
-
- num_rois = pos_bboxes.shape[0]
- map_size = self.whole_map_size
- # this is not the final target shape
- targets = torch.zeros((num_rois, self.grid_points, map_size, map_size),
- dtype=torch.float)
-
- # pre-compute interpolation factors for all grid points.
- # the first item is the factor of x-dim, and the second is y-dim.
- # for a 9-point grid, factors are like (1, 0), (0.5, 0.5), (0, 1)
- factors = []
- for j in range(self.grid_points):
- x_idx = j // self.grid_size
- y_idx = j % self.grid_size
- factors.append((1 - x_idx / (self.grid_size - 1),
- 1 - y_idx / (self.grid_size - 1)))
-
- radius = rcnn_train_cfg.pos_radius
- radius2 = radius**2
- for i in range(num_rois):
- # ignore small bboxes
- if (pos_bbox_ws[i] <= self.grid_size
- or pos_bbox_hs[i] <= self.grid_size):
- continue
- # for each grid point, mark a small circle as positive
- for j in range(self.grid_points):
- factor_x, factor_y = factors[j]
- gridpoint_x = factor_x * pos_gt_bboxes[i, 0] + (
- 1 - factor_x) * pos_gt_bboxes[i, 2]
- gridpoint_y = factor_y * pos_gt_bboxes[i, 1] + (
- 1 - factor_y) * pos_gt_bboxes[i, 3]
-
- cx = int((gridpoint_x - pos_bboxes[i, 0]) / pos_bbox_ws[i] *
- map_size)
- cy = int((gridpoint_y - pos_bboxes[i, 1]) / pos_bbox_hs[i] *
- map_size)
-
- for x in range(cx - radius, cx + radius + 1):
- for y in range(cy - radius, cy + radius + 1):
- if x >= 0 and x < map_size and y >= 0 and y < map_size:
- if (x - cx)**2 + (y - cy)**2 <= radius2:
- targets[i, j, y, x] = 1
- # reduce the target heatmap size by a half
- # proposed in Grid R-CNN Plus (https://arxiv.org/abs/1906.05688).
- sub_targets = []
- for i in range(self.grid_points):
- sub_x1, sub_y1, sub_x2, sub_y2 = self.sub_regions[i]
- sub_targets.append(targets[:, [i], sub_y1:sub_y2, sub_x1:sub_x2])
- sub_targets = torch.cat(sub_targets, dim=1)
- sub_targets = sub_targets.to(sampling_results[0].pos_bboxes.device)
- return sub_targets
-
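Each grid point in `get_targets` is supervised with a small disc of positive pixels around its projected location. A standalone sketch of how many pixels that marks (the helper name is illustrative; map-boundary clipping is ignored):

```python
def disc_pixel_count(radius):
    """Number of integer pixels with x^2 + y^2 <= radius^2, i.e. the
    positives painted around one grid point in get_targets."""
    r2 = radius ** 2
    return sum(
        1
        for x in range(-radius, radius + 1)
        for y in range(-radius, radius + 1)
        if x * x + y * y <= r2
    )
```

With `pos_radius=1` each grid point marks a 5-pixel cross; with `pos_radius=2`, a 13-pixel disc.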
- def loss(self, grid_pred, grid_targets):
- loss_fused = self.loss_grid(grid_pred['fused'], grid_targets)
- loss_unfused = self.loss_grid(grid_pred['unfused'], grid_targets)
- loss_grid = loss_fused + loss_unfused
- return dict(loss_grid=loss_grid)
-
- def get_bboxes(self, det_bboxes, grid_pred, img_metas):
- # TODO: refactoring
- assert det_bboxes.shape[0] == grid_pred.shape[0]
- det_bboxes = det_bboxes.cpu()
- cls_scores = det_bboxes[:, [4]]
- det_bboxes = det_bboxes[:, :4]
- grid_pred = grid_pred.sigmoid().cpu()
-
- R, c, h, w = grid_pred.shape
- half_size = self.whole_map_size // 4 * 2
- assert h == w == half_size
- assert c == self.grid_points
-
- # find the point with max scores in the half-sized heatmap
- grid_pred = grid_pred.view(R * c, h * w)
- pred_scores, pred_position = grid_pred.max(dim=1)
- xs = pred_position % w
- ys = pred_position // w
-
- # get the position in the whole heatmap instead of half-sized heatmap
- for i in range(self.grid_points):
- xs[i::self.grid_points] += self.sub_regions[i][0]
- ys[i::self.grid_points] += self.sub_regions[i][1]
-
- # reshape to (num_rois, grid_points)
- pred_scores, xs, ys = tuple(
- map(lambda x: x.view(R, c), [pred_scores, xs, ys]))
-
- # get expanded pos_bboxes
- widths = (det_bboxes[:, 2] - det_bboxes[:, 0]).unsqueeze(-1)
- heights = (det_bboxes[:, 3] - det_bboxes[:, 1]).unsqueeze(-1)
- x1 = (det_bboxes[:, 0, None] - widths / 2)
- y1 = (det_bboxes[:, 1, None] - heights / 2)
- # map the grid point to the absolute coordinates
- abs_xs = (xs.float() + 0.5) / w * widths + x1
- abs_ys = (ys.float() + 0.5) / h * heights + y1
-
- # get the grid points indices that fall on the bbox boundaries
- x1_inds = [i for i in range(self.grid_size)]
- y1_inds = [i * self.grid_size for i in range(self.grid_size)]
- x2_inds = [
- self.grid_points - self.grid_size + i
- for i in range(self.grid_size)
- ]
- y2_inds = [(i + 1) * self.grid_size - 1 for i in range(self.grid_size)]
-
- # voting of all grid points on some boundary
- bboxes_x1 = (abs_xs[:, x1_inds] * pred_scores[:, x1_inds]).sum(
- dim=1, keepdim=True) / (
- pred_scores[:, x1_inds].sum(dim=1, keepdim=True))
- bboxes_y1 = (abs_ys[:, y1_inds] * pred_scores[:, y1_inds]).sum(
- dim=1, keepdim=True) / (
- pred_scores[:, y1_inds].sum(dim=1, keepdim=True))
- bboxes_x2 = (abs_xs[:, x2_inds] * pred_scores[:, x2_inds]).sum(
- dim=1, keepdim=True) / (
- pred_scores[:, x2_inds].sum(dim=1, keepdim=True))
- bboxes_y2 = (abs_ys[:, y2_inds] * pred_scores[:, y2_inds]).sum(
- dim=1, keepdim=True) / (
- pred_scores[:, y2_inds].sum(dim=1, keepdim=True))
-
- bbox_res = torch.cat(
- [bboxes_x1, bboxes_y1, bboxes_x2, bboxes_y2, cls_scores], dim=1)
- bbox_res[:, [0, 2]].clamp_(min=0, max=img_metas[0]['img_shape'][1])
- bbox_res[:, [1, 3]].clamp_(min=0, max=img_metas[0]['img_shape'][0])
-
- return bbox_res
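For reference, the window arithmetic in `calc_sub_regions` can be re-derived standalone: with the default `roi_feat_size=14` the full heatmap is 56x56 and each grid point predicts a 28x28 sub-window. A pure-Python sketch mirroring the method above:

```python
def calc_sub_regions(grid_points=9, roi_feat_size=14):
    """Pure-Python re-derivation of GridHead.calc_sub_regions
    (Grid R-CNN Plus, https://arxiv.org/abs/1906.05688)."""
    grid_size = int(grid_points ** 0.5)
    whole_map_size = roi_feat_size * 4
    half_size = whole_map_size // 4 * 2   # 2 * quarter_size, as in the code above

    def start(idx):
        # first row/column starts at 0, last at half_size,
        # interior points are placed proportionally
        if idx == 0:
            return 0
        if idx == grid_size - 1:
            return half_size
        ratio = idx / (grid_size - 1) - 0.25
        return max(int(ratio * whole_map_size), 0)

    regions = []
    for i in range(grid_points):
        x1, y1 = start(i // grid_size), start(i % grid_size)
        regions.append((x1, y1, x1 + half_size, y1 + half_size))
    return regions

regions = calc_sub_regions()
```

Corner points cover (0, 0) to (28, 28) and the center point covers (14, 14) to (42, 42), so adjacent windows overlap by half their width.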
diff --git a/spaces/rorallitri/biomedical-language-models/logs/HD Online Player (kamasutra 3d Hindi Movie Worldfree4u) HOT!.md b/spaces/rorallitri/biomedical-language-models/logs/HD Online Player (kamasutra 3d Hindi Movie Worldfree4u) HOT!.md
deleted file mode 100644
index 961c5beae46aeb8c7928322e4fa4d9a501169a27..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/HD Online Player (kamasutra 3d Hindi Movie Worldfree4u) HOT!.md
+++ /dev/null
@@ -1,6 +0,0 @@
-HD Online Player (kamasutra 3d hindi movie worldfree4u)
-
-Kamasutra 3D Hindi (2015) Full Movie Download Free HD, DVDRip, 720P, ... Find Full Free Movies online HD print & watch on any android mobile, tab,PC,etc ... Download full hd Best formatted film movie in by torrent Shamanthakamani 2017 Movie 1080P Telugu HD Movie. ... 2004, the role that money and oil played in the. 4d29de3e1b
-
-
-
diff --git a/spaces/russel0719/deepfake_detector/training/transforms/__init__.py b/spaces/russel0719/deepfake_detector/training/transforms/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/sahshd/ChuanhuChatGPT/chatgpt - macOS.command b/spaces/sahshd/ChuanhuChatGPT/chatgpt - macOS.command
deleted file mode 100644
index fa015edca9e6916f24394813ce8ba77d2072e296..0000000000000000000000000000000000000000
--- a/spaces/sahshd/ChuanhuChatGPT/chatgpt - macOS.command
+++ /dev/null
@@ -1,7 +0,0 @@
-#!/bin/bash
-echo Opening ChuanhuChatGPT...
-cd "$(dirname "${BASH_SOURCE[0]}")"
-nohup python3 ChuanhuChatbot.py >/dev/null 2>&1 &
-sleep 5
-open http://127.0.0.1:7860
-echo "Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). To stop ChuanhuChatbot, run: pkill -f 'ChuanhuChatbot'"
\ No newline at end of file
diff --git a/spaces/samcaicn/bingai/src/lib/storage.ts b/spaces/samcaicn/bingai/src/lib/storage.ts
deleted file mode 100644
index a5b7825c4f76a28c704da512ae39e8bb45addd09..0000000000000000000000000000000000000000
--- a/spaces/samcaicn/bingai/src/lib/storage.ts
+++ /dev/null
@@ -1,27 +0,0 @@
-import { getMany, set, del, clear } from 'idb-keyval';
-
-export const Storage = {
- async get(key: string | string[] | null): Promise
-Deep.Freeze.Standard.v7.21.020.3447 LATEST WITH SERIAL
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/scedlatioru/img-to-music/example/Power Iso 5.2 Serials - [EC] Serial Key FREE Keygen.md b/spaces/scedlatioru/img-to-music/example/Power Iso 5.2 Serials - [EC] Serial Key FREE Keygen.md
deleted file mode 100644
index 3478ccb403be2fd89f13b2882b7fa19c4f2d84fb..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/example/Power Iso 5.2 Serials - [EC] Serial Key FREE Keygen.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Power Iso 5.2 Serials - [EC] Serial Key Keygen
-
- d5da3c52bf
-
-
-
diff --git a/spaces/schoemann/vanGogh_in_Kaiserswerth/README.md b/spaces/schoemann/vanGogh_in_Kaiserswerth/README.md
deleted file mode 100644
index 27521606827b744459a765d4daf98e72c8316d9d..0000000000000000000000000000000000000000
--- a/spaces/schoemann/vanGogh_in_Kaiserswerth/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: VanGogh In Kaiserswerth
-emoji: 💻
-colorFrom: purple
-colorTo: red
-sdk: gradio
-sdk_version: 3.15.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/sczhou/ProPainter/web-demos/hugging_face/tracker/inference/utils/burst_utils.py b/spaces/sczhou/ProPainter/web-demos/hugging_face/tracker/inference/utils/burst_utils.py
deleted file mode 100644
index 570442848c83378f8562485aa7cca3502910440c..0000000000000000000000000000000000000000
--- a/spaces/sczhou/ProPainter/web-demos/hugging_face/tracker/inference/utils/burst_utils.py
+++ /dev/null
@@ -1,19 +0,0 @@
-from os import path
-import copy
-import json
-
-
-class BURSTResultHandler:
- def __init__(self, dataset_json):
- self.dataset_json = copy.deepcopy(dataset_json)
-
- # get rid of the segmentations while keeping the metadata
- self.dataset_json['sequences'] = []
-
- def add_sequence(self, sequence_json):
- self.dataset_json['sequences'].append(sequence_json)
-
- def dump(self, root):
- json_path = path.join(root, 'predictions.json')
- with open(json_path, 'w') as f:
- json.dump(self.dataset_json, f)
\ No newline at end of file
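A quick usage sketch for the handler above (the sample dataset dict is invented for illustration; the class is restated so the snippet runs standalone):

```python
import copy
import json
import os
import tempfile


class BURSTResultHandler:
    """Restatement of the class above, so this sketch is self-contained."""

    def __init__(self, dataset_json):
        self.dataset_json = copy.deepcopy(dataset_json)
        # keep the metadata, drop the segmentations
        self.dataset_json['sequences'] = []

    def add_sequence(self, sequence_json):
        self.dataset_json['sequences'].append(sequence_json)

    def dump(self, root):
        with open(os.path.join(root, 'predictions.json'), 'w') as f:
            json.dump(self.dataset_json, f)


# invented sample data, for illustration only
dataset = {'split': 'val', 'sequences': [{'id': 0, 'segmentations': ['...']}]}
handler = BURSTResultHandler(dataset)
handler.add_sequence({'id': 0, 'segmentations': []})
with tempfile.TemporaryDirectory() as root:
    handler.dump(root)
    with open(os.path.join(root, 'predictions.json')) as f:
        result = json.load(f)
```

Because of the `deepcopy`, the caller's dataset dict is left untouched while the handler accumulates its own prediction sequences.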
diff --git a/spaces/segments-tobias/conex/espnet/st/__init__.py b/spaces/segments-tobias/conex/espnet/st/__init__.py
deleted file mode 100644
index b7f177368e62a5578b8706300e101f831a3972ac..0000000000000000000000000000000000000000
--- a/spaces/segments-tobias/conex/espnet/st/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-"""Initialize sub package."""
diff --git a/spaces/seok07/1JK50/infer_pack/models.py b/spaces/seok07/1JK50/infer_pack/models.py
deleted file mode 100644
index 1b4b06e5c7c8e84f0ef8b4f0174a5e0ec6800344..0000000000000000000000000000000000000000
--- a/spaces/seok07/1JK50/infer_pack/models.py
+++ /dev/null
@@ -1,1116 +0,0 @@
-import math, pdb, os
-from time import time as ttime
-import torch
-from torch import nn
-from torch.nn import functional as F
-from infer_pack import modules
-from infer_pack import attentions
-from infer_pack import commons
-from infer_pack.commons import init_weights, get_padding
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from infer_pack.commons import init_weights
-import numpy as np
-from infer_pack import commons
-
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
-        if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
-        if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
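`commons.sequence_mask(lengths, T)` used above produces a `[b, T]` boolean mask of valid frames. A numpy equivalent, inferred from how it is used here (an assumption about the helper, not the actual `commons` implementation):

```python
import numpy as np


def sequence_mask(lengths, max_length=None):
    """mask[b, t] = (t < lengths[b]); True marks valid (non-padded) frames."""
    lengths = np.asarray(lengths)
    if max_length is None:
        max_length = int(lengths.max())
    return np.arange(max_length)[None, :] < lengths[:, None]


mask = sequence_mask([3, 1], 4)   # batch of two sequences, padded to length 4
```

Multiplying features by this mask (after the `unsqueeze` to `[b, 1, T]`) zeroes out padded frames before attention and projection.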
-class TextEncoder768(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(768, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
-        if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
-        if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
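The sampling line `z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask` above is the standard reparameterization trick: a unit-Gaussian draw scaled by `exp(logs)` and shifted by `m`. A torch-free numpy sketch (names mirror the code):

```python
import numpy as np


def reparameterize(m, logs, noise):
    """z = m + eps * exp(logs): a Gaussian sample with mean m and
    log-standard-deviation logs, built from unit noise eps."""
    return m + noise * np.exp(logs)


rng = np.random.default_rng(0)
noise = rng.standard_normal(4)
m = np.zeros(4)
z_unit = reparameterize(m, np.zeros(4), noise)              # std = 1
z_wide = reparameterize(m, np.full(4, np.log(2.0)), noise)  # std = 2, same noise
```

Scaling external noise rather than sampling directly keeps the draw differentiable with respect to `m` and `logs`, which is why the encoder emits both halves of `stats`.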
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
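Each upsampling stage above halves the channel width (`upsample_initial_channel // 2 ** (i + 1)`). A small sketch of that bookkeeping; the 512-channel, 4-stage values are typical HiFi-GAN-style settings used only for illustration:

```python
def stage_channels(upsample_initial_channel, num_stages):
    """Output channels after each ConvTranspose1d stage: halved per stage."""
    return [upsample_initial_channel // (2 ** (i + 1)) for i in range(num_stages)]


channels = stage_channels(512, 4)   # illustrative config
```

The last entry is the `ch` that `conv_post` consumes to emit the single-channel waveform.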
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
-    sine_amp: amplitude of sine-waveform (default 0.1)
-    noise_std: std of Gaussian noise (default 0.003)
-    voiced_threshold: F0 threshold for U/V classification (default 0)
-    flag_for_pulse: this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
-            rad_values = (f0_buf / self.sampling_rate) % 1  # the % 1 means the n_har products can no longer be optimized in post-processing
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
-            tmp_over_one = torch.cumsum(rad_values, 1)  # applying % 1 here would keep the later cumsum from being optimized
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
-            ).transpose(2, 1)
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
-
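At its core, `SineGen.forward` integrates instantaneous frequency and takes a sine: `phase = cumsum(f0 / sr)` and `wave = sin(2 * pi * phase)`. A minimal numpy sketch of that core (fundamental only; harmonics, noise, and the random initial phase are omitted):

```python
import numpy as np


def sine_from_f0(f0_hz, sample_rate, amp=0.1):
    """sin(2*pi*cumsum(f0/sr)): the phase advances f0/sr cycles per sample,
    matching the fundamental path of SineGen above."""
    rad_values = np.asarray(f0_hz) / sample_rate   # cycles per sample
    phase = np.cumsum(rad_values)                  # accumulated phase, in cycles
    return amp * np.sin(2 * np.pi * phase)


wave = sine_from_f0(np.full(16000, 440.0), 16000)  # one second of A4
```

Accumulating frequency instead of evaluating `sin(2*pi*f*t)` directly lets `f0` vary per sample without phase discontinuities, which is exactly what a pitch-conditioned vocoder needs.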
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
-    voiced_threshold: threshold to set U/V given F0 (default: 0)
-    Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
-    F0_sampled (batchsize, length, 1)
-    Sine_source (batchsize, length, 1)
-    noise_source (batchsize, length, 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshod=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
-
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
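The `stride_f0 = np.prod(upsample_rates[i + 1:])` trick above downsamples the full-sample-rate harmonic source so it matches each intermediate feature rate. A sketch of that arithmetic (the rates `[10, 8, 2, 2]` are an illustrative 40k-style config, not taken from this file):

```python
import numpy as np


def noise_conv_strides(upsample_rates):
    """Stride of each noise_convs entry: the product of the remaining
    upsample rates, so the excitation lands at that stage's frame rate
    (the final stage already runs at full rate, hence stride 1)."""
    strides = []
    for i in range(len(upsample_rates)):
        if i + 1 < len(upsample_rates):
            strides.append(int(np.prod(upsample_rates[i + 1:])))
        else:
            strides.append(1)
    return strides


strides = noise_conv_strides([10, 8, 2, 2])
```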
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMs256NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
-        if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
- ): # ds is the speaker id, shape [bs, 1]
- # print(1,pitch.shape)#[bs,t]
- g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]; the trailing 1 is the time axis t, broadcast
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- # print(-2,pitchf.shape,z_slice.shape)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-class SynthesizerTrnMs768NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
- if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder768(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
- ): # ds is the speaker id, shape [bs, 1]
- # print(1,pitch.shape)#[bs,t]
- g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]; the trailing 1 is the time axis t, broadcast
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- # print(-2,pitchf.shape,z_slice.shape)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs256NSFsid_nono(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr=None,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=False,
- )
- self.dec = Generator(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(self, phone, phone_lengths, y, y_lengths, ds): # ds is the speaker id, shape [bs, 1]
- g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]; the trailing 1 is the time axis t, broadcast
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- o = self.dec(z_slice, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-class SynthesizerTrnMs768NSFsid_nono(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr=None,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder768(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=False,
- )
- self.dec = Generator(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(self, phone, phone_lengths, y, y_lengths, ds): # ds is the speaker id, shape [bs, 1]
- g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]; the trailing 1 is the time axis t, broadcast
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- o = self.dec(z_slice, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
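The `infer()` methods above all draw the prior latent as `z_p = (m_p + exp(logs_p) * noise * 0.66666) * mask`: a Gaussian sample whose standard deviation is scaled by a fixed temperature of roughly 0.667 before being pushed through the inverse flow. A minimal NumPy sketch of that sampling step (shapes and variable names here are illustrative, not taken from the file):

```python
import numpy as np

# Temperature-scaled Gaussian sampling, as in the infer() methods above.
rng = np.random.default_rng(0)
m_p = np.zeros((1, 4, 8))      # prior mean, [batch, channels, time]
logs_p = np.zeros((1, 4, 8))   # prior log-std (std = 1 here)
mask = np.ones((1, 1, 8))      # valid-frame mask, broadcast over channels

z_p = (m_p + np.exp(logs_p) * rng.standard_normal(m_p.shape) * 0.66666) * mask

# With std = 1, samples are simply noise scaled by the temperature.
assert z_p.shape == (1, 4, 8)
assert np.all(np.abs(z_p) <= 0.66666 * np.abs(rng.standard_normal(m_p.shape)).max() + 5)
```

Lowering the temperature below 1.0 trades sample diversity for stability, which is why inference uses 0.66666 while training samples from the full posterior.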
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-class MultiPeriodDiscriminatorV2(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminatorV2, self).__init__()
- # periods = [2, 3, 5, 7, 11, 17]
- periods = [2, 3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
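The "1d to 2d" step in `DiscriminatorP.forward` pads the waveform's time axis up to a multiple of `period` (reflect padding), then folds it into a `(t // period, period)` grid so 2-D convolutions can look at periodic structure. A NumPy-only sketch of that reshaping (the helper name `fold_period` is mine, not from the file):

```python
import numpy as np

def fold_period(x, period):
    # Pad the time axis to a multiple of `period`, then fold it into 2-D,
    # mirroring the "1d to 2d" block in DiscriminatorP.forward.
    b, c, t = x.shape
    if t % period != 0:
        n_pad = period - (t % period)
        x = np.pad(x, ((0, 0), (0, 0), (0, n_pad)), mode="reflect")
        t = t + n_pad
    return x.reshape(b, c, t // period, period)

x = np.arange(10, dtype=float).reshape(1, 1, 10)  # [batch, channel, time]
y = fold_period(x, 3)                              # 10 -> padded to 12 -> (4, 3)
assert y.shape == (1, 1, 4, 3)
```

Each row of the folded grid holds one full period, so the strided 2-D kernels in `self.convs` compare samples exactly one period apart.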
diff --git a/spaces/shriarul5273/Yolov7/utils/general.py b/spaces/shriarul5273/Yolov7/utils/general.py
deleted file mode 100644
index 6b7edb3e013683b2ee38af9ce9e103616aaaa3ff..0000000000000000000000000000000000000000
--- a/spaces/shriarul5273/Yolov7/utils/general.py
+++ /dev/null
@@ -1,892 +0,0 @@
-# YOLOR general utils
-
-import glob
-import logging
-import math
-import os
-import platform
-import random
-import re
-import subprocess
-import time
-from pathlib import Path
-
-import cv2
-import numpy as np
-import pandas as pd
-import torch
-import torchvision
-import yaml
-
-from utils.google_utils import gsutil_getsize
-from utils.metrics import fitness
-from utils.torch_utils import init_torch_seeds
-
-# Settings
-torch.set_printoptions(linewidth=320, precision=5, profile='long')
-np.set_printoptions(linewidth=320, formatter={'float_kind': '{:11.5g}'.format}) # format short g, %precision=5
-pd.options.display.max_columns = 10
-cv2.setNumThreads(0) # prevent OpenCV from multithreading (incompatible with PyTorch DataLoader)
-os.environ['NUMEXPR_MAX_THREADS'] = str(min(os.cpu_count(), 8)) # NumExpr max threads
-
-
-def set_logging(rank=-1):
- logging.basicConfig(
- format="%(message)s",
- level=logging.INFO if rank in [-1, 0] else logging.WARN)
-
-
-def init_seeds(seed=0):
- # Initialize random number generator (RNG) seeds
- random.seed(seed)
- np.random.seed(seed)
- init_torch_seeds(seed)
-
-
-def get_latest_run(search_dir='.'):
- # Return path to most recent 'last.pt' in /runs (i.e. to --resume from)
- last_list = glob.glob(f'{search_dir}/**/last*.pt', recursive=True)
- return max(last_list, key=os.path.getctime) if last_list else ''
-
-
-def isdocker():
- # Is environment a Docker container
- return Path('/workspace').exists() # or Path('/.dockerenv').exists()
-
-
-def emojis(s=''):
- # Return platform-dependent emoji-safe version of string (avoid shadowing built-in str)
- return s.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else s
-
-
-def check_online():
- # Check internet connectivity
- import socket
- try:
- socket.create_connection(("1.1.1.1", 443), 5) # check host accessibility
- return True
- except OSError:
- return False
-
-
-def check_git_status():
- # Recommend 'git pull' if code is out of date
- print(colorstr('github: '), end='')
- try:
- assert Path('.git').exists(), 'skipping check (not a git repository)'
- assert not isdocker(), 'skipping check (Docker image)'
- assert check_online(), 'skipping check (offline)'
-
- cmd = 'git fetch && git config --get remote.origin.url'
- url = subprocess.check_output(cmd, shell=True).decode().strip().rstrip('.git') # github repo url
- branch = subprocess.check_output('git rev-parse --abbrev-ref HEAD', shell=True).decode().strip() # checked out
- n = int(subprocess.check_output(f'git rev-list {branch}..origin/master --count', shell=True)) # commits behind
- if n > 0:
- s = f"⚠️ WARNING: code is out of date by {n} commit{'s' * (n > 1)}. " \
- f"Use 'git pull' to update or 'git clone {url}' to download latest."
- else:
- s = f'up to date with {url} ✅'
- print(emojis(s)) # emoji-safe
- except Exception as e:
- print(e)
-
-
-def check_requirements(requirements='requirements.txt', exclude=()):
- # Check installed dependencies meet requirements (pass *.txt file or list of packages)
- import pkg_resources as pkg
- prefix = colorstr('red', 'bold', 'requirements:')
- if isinstance(requirements, (str, Path)): # requirements.txt file
- file = Path(requirements)
- if not file.exists():
- print(f"{prefix} {file.resolve()} not found, check failed.")
- return
- requirements = [f'{x.name}{x.specifier}' for x in pkg.parse_requirements(file.open()) if x.name not in exclude]
- else: # list or tuple of packages
- requirements = [x for x in requirements if x not in exclude]
-
- n = 0 # number of packages updates
- for r in requirements:
- try:
- pkg.require(r)
- except Exception as e: # DistributionNotFound or VersionConflict if requirements not met
- n += 1
- print(f"{prefix} {e.req} not found and is required by YOLOR, attempting auto-update...")
- print(subprocess.check_output(f"pip install '{e.req}'", shell=True).decode())
-
- if n: # if packages updated
- source = file.resolve() if 'file' in locals() else requirements
- s = f"{prefix} {n} package{'s' * (n > 1)} updated per {source}\n" \
- f"{prefix} ⚠️ {colorstr('bold', 'Restart runtime or rerun command for updates to take effect')}\n"
- print(emojis(s)) # emoji-safe
-
-
-def check_img_size(img_size, s=32):
- # Verify img_size is a multiple of stride s
- new_size = make_divisible(img_size, int(s)) # ceil gs-multiple
- if new_size != img_size:
- print('WARNING: --img-size %g must be multiple of max stride %g, updating to %g' % (img_size, s, new_size))
- return new_size
-
-
-def check_imshow():
- # Check if environment supports image displays
- try:
- assert not isdocker(), 'cv2.imshow() is disabled in Docker environments'
- cv2.imshow('test', np.zeros((1, 1, 3)))
- cv2.waitKey(1)
- cv2.destroyAllWindows()
- cv2.waitKey(1)
- return True
- except Exception as e:
- print(f'WARNING: Environment does not support cv2.imshow() or PIL Image.show() image displays\n{e}')
- return False
-
-
-def check_file(file):
- # Search for file if not found
- if Path(file).is_file() or file == '':
- return file
- else:
- files = glob.glob('./**/' + file, recursive=True) # find file
- assert len(files), f'File Not Found: {file}' # assert file was found
- assert len(files) == 1, f"Multiple files match '{file}', specify exact path: {files}" # assert unique
- return files[0] # return file
-
-
-def check_dataset(data):
- # Download dataset if not found locally (avoid shadowing built-in dict)
- val, s = data.get('val'), data.get('download')
- if val and len(val):
- val = [Path(x).resolve() for x in (val if isinstance(val, list) else [val])] # val path
- if not all(x.exists() for x in val):
- print('\nWARNING: Dataset not found, nonexistent paths: %s' % [str(x) for x in val if not x.exists()])
- if s and len(s): # download script
- print('Downloading %s ...' % s)
- if s.startswith('http') and s.endswith('.zip'): # URL
- f = Path(s).name # filename
- torch.hub.download_url_to_file(s, f)
- r = os.system('unzip -q %s -d ../ && rm %s' % (f, f)) # unzip
- else: # bash script
- r = os.system(s)
- print('Dataset autodownload %s\n' % ('success' if r == 0 else 'failure')) # analyze return value
- else:
- raise Exception('Dataset not found.')
-
-
-def make_divisible(x, divisor):
- # Returns x evenly divisible by divisor
- return math.ceil(x / divisor) * divisor
-
-
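`make_divisible` rounds a size up to the next multiple of a divisor, which is how image sizes are snapped to the model's maximum stride in `check_img_size`. A quick self-contained check of that rounding:

```python
import math

def make_divisible(x, divisor):
    # Round x UP to the nearest multiple of divisor (never down).
    return math.ceil(x / divisor) * divisor

assert make_divisible(100, 32) == 128  # 100 is not a multiple of 32 -> bump to 128
assert make_divisible(64, 32) == 64    # exact multiples are unchanged
```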
-def clean_str(s):
- # Cleans a string by replacing special characters with underscore _
- return re.sub(pattern="[|@#!¡·$€%&()=?¿^*;:,¨´><+]", repl="_", string=s)
-
-
-def one_cycle(y1=0.0, y2=1.0, steps=100):
- # lambda function for sinusoidal ramp from y1 to y2
- return lambda x: ((1 - math.cos(x * math.pi / steps)) / 2) * (y2 - y1) + y1
-
-
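`one_cycle` returns a sinusoidal ramp used for learning-rate scheduling: it starts at `y1`, ends at `y2` after `steps`, and passes through the average of the two at the midpoint. A small sketch verifying those endpoints:

```python
import math

def one_cycle(y1=0.0, y2=1.0, steps=100):
    # Half-cosine ramp: y1 at x=0, y2 at x=steps, smooth in between.
    return lambda x: ((1 - math.cos(x * math.pi / steps)) / 2) * (y2 - y1) + y1

f = one_cycle(0.1, 1.0, steps=10)
assert abs(f(0) - 0.1) < 1e-9    # starts at y1
assert abs(f(10) - 1.0) < 1e-9   # ends at y2
assert abs(f(5) - 0.55) < 1e-9   # midpoint is the average of y1 and y2
```

The cosine shape gives a gentle start and finish with the steepest change in the middle, which tends to be friendlier to SGD than a linear ramp.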
-def colorstr(*input):
- # Colors a string https://en.wikipedia.org/wiki/ANSI_escape_code, i.e. colorstr('blue', 'hello world')
- *args, string = input if len(input) > 1 else ('blue', 'bold', input[0]) # color arguments, string
- colors = {'black': '\033[30m', # basic colors
- 'red': '\033[31m',
- 'green': '\033[32m',
- 'yellow': '\033[33m',
- 'blue': '\033[34m',
- 'magenta': '\033[35m',
- 'cyan': '\033[36m',
- 'white': '\033[37m',
- 'bright_black': '\033[90m', # bright colors
- 'bright_red': '\033[91m',
- 'bright_green': '\033[92m',
- 'bright_yellow': '\033[93m',
- 'bright_blue': '\033[94m',
- 'bright_magenta': '\033[95m',
- 'bright_cyan': '\033[96m',
- 'bright_white': '\033[97m',
- 'end': '\033[0m', # misc
- 'bold': '\033[1m',
- 'underline': '\033[4m'}
- return ''.join(colors[x] for x in args) + f'{string}' + colors['end']
-
-
-def labels_to_class_weights(labels, nc=80):
- # Get class weights (inverse frequency) from training labels
- if labels[0] is None: # no labels loaded
- return torch.Tensor()
-
- labels = np.concatenate(labels, 0) # labels.shape = (866643, 5) for COCO
- classes = labels[:, 0].astype(int) # labels = [class xywh] (np.int removed in NumPy >= 1.24)
- weights = np.bincount(classes, minlength=nc) # occurrences per class
-
- # Prepend gridpoint count (for uCE training)
- # gpi = ((320 / 32 * np.array([1, 2, 4])) ** 2 * 3).sum() # gridpoints per image
- # weights = np.hstack([gpi * len(labels) - weights.sum() * 9, weights * 9]) ** 0.5 # prepend gridpoints to start
-
- weights[weights == 0] = 1 # replace empty bins with 1
- weights = 1 / weights # number of targets per class
- weights /= weights.sum() # normalize
- return torch.from_numpy(weights)
-
-
-def labels_to_image_weights(labels, nc=80, class_weights=np.ones(80)):
- # Produces image weights based on class_weights and image contents
- class_counts = np.array([np.bincount(x[:, 0].astype(int), minlength=nc) for x in labels])
- image_weights = (class_weights.reshape(1, nc) * class_counts).sum(1)
- # index = random.choices(range(n), weights=image_weights, k=1) # weight image sample
- return image_weights
-
-
-def coco80_to_coco91_class(): # converts 80-index (val2014) to 91-index (paper)
- # https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/
- # a = np.loadtxt('data/coco.names', dtype='str', delimiter='\n')
- # b = np.loadtxt('data/coco_paper.names', dtype='str', delimiter='\n')
- # x1 = [list(a[i] == b).index(True) + 1 for i in range(80)] # darknet to coco
- # x2 = [list(b[i] == a).index(True) if any(b[i] == a) else None for i in range(91)] # coco to darknet
- x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 28, 31, 32, 33, 34,
- 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63,
- 64, 65, 67, 70, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90]
- return x
-
-
-def xyxy2xywh(x):
- # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] where xy1=top-left, xy2=bottom-right
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = (x[:, 0] + x[:, 2]) / 2 # x center
- y[:, 1] = (x[:, 1] + x[:, 3]) / 2 # y center
- y[:, 2] = x[:, 2] - x[:, 0] # width
- y[:, 3] = x[:, 3] - x[:, 1] # height
- return y
-
-
-def xywh2xyxy(x):
- # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = x[:, 0] - x[:, 2] / 2 # top left x
- y[:, 1] = x[:, 1] - x[:, 3] / 2 # top left y
- y[:, 2] = x[:, 0] + x[:, 2] / 2 # bottom right x
- y[:, 3] = x[:, 1] + x[:, 3] / 2 # bottom right y
- return y
-
-
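The two conversions above are exact inverses of each other: `xyxy2xywh` maps corner boxes to center/size form and `xywh2xyxy` maps them back. A NumPy-only round-trip sketch (these are minimal re-implementations for illustration, without the torch branch):

```python
import numpy as np

def xyxy2xywh(x):
    # [x1, y1, x2, y2] -> [x_center, y_center, width, height]
    y = np.copy(x).astype(float)
    y[:, 0] = (x[:, 0] + x[:, 2]) / 2
    y[:, 1] = (x[:, 1] + x[:, 3]) / 2
    y[:, 2] = x[:, 2] - x[:, 0]
    y[:, 3] = x[:, 3] - x[:, 1]
    return y

def xywh2xyxy(x):
    # [x_center, y_center, width, height] -> [x1, y1, x2, y2]
    y = np.copy(x).astype(float)
    y[:, 0] = x[:, 0] - x[:, 2] / 2
    y[:, 1] = x[:, 1] - x[:, 3] / 2
    y[:, 2] = x[:, 0] + x[:, 2] / 2
    y[:, 3] = x[:, 1] + x[:, 3] / 2
    return y

boxes = np.array([[10.0, 20.0, 50.0, 80.0]])  # x1, y1, x2, y2
assert np.allclose(xyxy2xywh(boxes), [[30.0, 50.0, 40.0, 60.0]])
assert np.allclose(xywh2xyxy(xyxy2xywh(boxes)), boxes)  # exact round trip
```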
-def xywhn2xyxy(x, w=640, h=640, padw=0, padh=0):
- # Convert nx4 boxes from [x, y, w, h] normalized to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = w * (x[:, 0] - x[:, 2] / 2) + padw # top left x
- y[:, 1] = h * (x[:, 1] - x[:, 3] / 2) + padh # top left y
- y[:, 2] = w * (x[:, 0] + x[:, 2] / 2) + padw # bottom right x
- y[:, 3] = h * (x[:, 1] + x[:, 3] / 2) + padh # bottom right y
- return y
-
-
-def xyn2xy(x, w=640, h=640, padw=0, padh=0):
- # Convert normalized segments into pixel segments, shape (n,2)
- y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x)
- y[:, 0] = w * x[:, 0] + padw # top left x
- y[:, 1] = h * x[:, 1] + padh # top left y
- return y
-
-
-def segment2box(segment, width=640, height=640):
- # Convert 1 segment label to 1 box label, applying inside-image constraint, i.e. (xy1, xy2, ...) to (xyxy)
- x, y = segment.T # segment xy
- inside = (x >= 0) & (y >= 0) & (x <= width) & (y <= height)
- x, y = x[inside], y[inside]
- return np.array([x.min(), y.min(), x.max(), y.max()]) if any(x) else np.zeros((1, 4)) # xyxy
-
-
-def segments2boxes(segments):
- # Convert segment labels to box labels, i.e. (cls, xy1, xy2, ...) to (cls, xywh)
- boxes = []
- for s in segments:
- x, y = s.T # segment xy
- boxes.append([x.min(), y.min(), x.max(), y.max()]) # cls, xyxy
- return xyxy2xywh(np.array(boxes)) # cls, xywh
-
-
-def resample_segments(segments, n=1000):
- # Up-sample an (n,2) segment
- for i, s in enumerate(segments):
- s = np.concatenate((s, s[0:1, :]), axis=0)
- x = np.linspace(0, len(s) - 1, n)
- xp = np.arange(len(s))
- segments[i] = np.concatenate([np.interp(x, xp, s[:, i]) for i in range(2)]).reshape(2, -1).T # segment xy
- return segments
-
-
-def scale_coords(img1_shape, coords, img0_shape, ratio_pad=None):
- # Rescale coords (xyxy) from img1_shape to img0_shape
- if ratio_pad is None: # calculate from img0_shape
- gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1]) # gain = old / new
- pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2 # wh padding
- else:
- gain = ratio_pad[0][0]
- pad = ratio_pad[1]
-
- coords[:, [0, 2]] -= pad[0] # x padding
- coords[:, [1, 3]] -= pad[1] # y padding
- coords[:, :4] /= gain
- clip_coords(coords, img0_shape)
- return coords
-
-
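`scale_coords` undoes the letterbox preprocessing: subtract the padding that centered the resized image, then divide out the resize gain to land back in the original image's pixel space. A NumPy sketch of the `ratio_pad=None` branch (shapes are `(height, width)`, as in the function above; the clipping step is omitted here):

```python
import numpy as np

def scale_coords_np(img1_shape, coords, img0_shape):
    # Map xyxy boxes from the letterboxed image (img1) back to the original (img0).
    gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1])
    pad_w = (img1_shape[1] - img0_shape[1] * gain) / 2
    pad_h = (img1_shape[0] - img0_shape[0] * gain) / 2
    coords = coords.astype(float)
    coords[:, [0, 2]] -= pad_w   # remove horizontal padding
    coords[:, [1, 3]] -= pad_h   # remove vertical padding
    coords[:, :4] /= gain        # undo the resize
    return coords

# A 240x320 original resized 2x to 480x640 needs no padding: boxes just halve.
out = scale_coords_np((480, 640), np.array([[100.0, 100.0, 200.0, 200.0]]), (240, 320))
assert np.allclose(out, [[50.0, 50.0, 100.0, 100.0]])
```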
-def clip_coords(boxes, img_shape):
- # Clip bounding xyxy bounding boxes to image shape (height, width)
- boxes[:, 0].clamp_(0, img_shape[1]) # x1
- boxes[:, 1].clamp_(0, img_shape[0]) # y1
- boxes[:, 2].clamp_(0, img_shape[1]) # x2
- boxes[:, 3].clamp_(0, img_shape[0]) # y2
-
-
-def bbox_iou(box1, box2, x1y1x2y2=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7):
- # Returns the IoU of box1 to box2. box1 is 4, box2 is nx4
- box2 = box2.T
-
- # Get the coordinates of bounding boxes
- if x1y1x2y2: # x1, y1, x2, y2 = box1
- b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]
- b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]
- else: # transform from xywh to xyxy
- b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2
- b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2
- b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2
- b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2
-
- # Intersection area
- inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \
- (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0)
-
- # Union Area
- w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps
- w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps
- union = w1 * h1 + w2 * h2 - inter + eps
-
- iou = inter / union
-
- if GIoU or DIoU or CIoU:
- cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width
- ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height
- if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1
- c2 = cw ** 2 + ch ** 2 + eps # convex diagonal squared
- rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 +
- (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4 # center distance squared
- if DIoU:
- return iou - rho2 / c2 # DIoU
- elif CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47
- v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / (h2 + eps)) - torch.atan(w1 / (h1 + eps)), 2)
- with torch.no_grad():
- alpha = v / (v - iou + (1 + eps))
- return iou - (rho2 / c2 + v * alpha) # CIoU
- else: # GIoU https://arxiv.org/pdf/1902.09630.pdf
- c_area = cw * ch + eps # convex area
- return iou - (c_area - union) / c_area # GIoU
- else:
- return iou # IoU
-
-
-
-
-def bbox_alpha_iou(box1, box2, x1y1x2y2=False, GIoU=False, DIoU=False, CIoU=False, alpha=2, eps=1e-9):
- # Returns the alpha-IoU of box1 to box2. box1 is 4, box2 is nx4
- box2 = box2.T
-
- # Get the coordinates of bounding boxes
- if x1y1x2y2: # x1, y1, x2, y2 = box1
- b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]
- b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]
- else: # transform from xywh to xyxy
- b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2
- b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2
- b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2
- b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2
-
- # Intersection area
- inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \
- (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0)
-
- # Union Area
- w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps
- w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps
- union = w1 * h1 + w2 * h2 - inter + eps
-
- # change iou into pow(iou+eps)
- # iou = inter / union
- iou = torch.pow(inter/union + eps, alpha)
- # beta = 2 * alpha
- if GIoU or DIoU or CIoU:
- cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width
- ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height
- if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1
- c2 = (cw ** 2 + ch ** 2) ** alpha + eps # convex diagonal
- rho_x = torch.abs(b2_x1 + b2_x2 - b1_x1 - b1_x2)
- rho_y = torch.abs(b2_y1 + b2_y2 - b1_y1 - b1_y2)
- rho2 = ((rho_x ** 2 + rho_y ** 2) / 4) ** alpha # center distance
- if DIoU:
- return iou - rho2 / c2 # DIoU
- elif CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47
- v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2)
- with torch.no_grad():
- alpha_ciou = v / ((1 + eps) - inter / union + v)
- # return iou - (rho2 / c2 + v * alpha_ciou) # CIoU
- return iou - (rho2 / c2 + torch.pow(v * alpha_ciou + eps, alpha)) # CIoU
- else: # GIoU https://arxiv.org/pdf/1902.09630.pdf
- # c_area = cw * ch + eps # convex area
- # return iou - (c_area - union) / c_area # GIoU
- c_area = torch.max(cw * ch + eps, union) # convex area
- return iou - torch.pow((c_area - union) / c_area + eps, alpha) # GIoU
- else:
- return iou # torch.log(iou+eps) or iou
-
-
-def box_iou(box1, box2):
- # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py
- """
- Return intersection-over-union (Jaccard index) of boxes.
- Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
- Arguments:
- box1 (Tensor[N, 4])
- box2 (Tensor[M, 4])
- Returns:
- iou (Tensor[N, M]): the NxM matrix containing the pairwise
- IoU values for every element in boxes1 and boxes2
- """
-
- def box_area(box):
- # box = 4xn
- return (box[2] - box[0]) * (box[3] - box[1])
-
- area1 = box_area(box1.T)
- area2 = box_area(box2.T)
-
- # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2)
- inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
- return inter / (area1[:, None] + area2 - inter) # iou = inter / (area1 + area2 - inter)
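For intuition, the same pairwise IoU can be evaluated with a naive nested loop (illustrative only; the tensor version above broadcasts `box1[:, None]` against `box2` to produce the whole N x M matrix in one vectorized step):

```python
def pairwise_iou(boxes1, boxes2):
    # boxes in (x1, y1, x2, y2); returns the N x M IoU matrix as nested lists
    def iou(a, b):
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter)
    return [[iou(a, b) for b in boxes2] for a in boxes1]

m = pairwise_iou([(0, 0, 10, 10)],
                 [(0, 0, 10, 10), (5, 5, 15, 15), (20, 20, 30, 30)])
print([round(v, 4) for v in m[0]])  # [1.0, 0.1429, 0.0]
```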
-
-
-def wh_iou(wh1, wh2):
- # Returns the nxm IoU matrix. wh1 is nx2, wh2 is mx2
- wh1 = wh1[:, None] # [N,1,2]
- wh2 = wh2[None] # [1,M,2]
- inter = torch.min(wh1, wh2).prod(2) # [N,M]
- return inter / (wh1.prod(2) + wh2.prod(2) - inter) # iou = inter / (area1 + area2 - inter)
-
-
-def box_giou(box1, box2):
- """
- Return generalized intersection-over-union (Jaccard index) between two sets of boxes.
- Both sets of boxes are expected to be in ``(x1, y1, x2, y2)`` format with
- ``0 <= x1 < x2`` and ``0 <= y1 < y2``.
- Args:
- boxes1 (Tensor[N, 4]): first set of boxes
- boxes2 (Tensor[M, 4]): second set of boxes
- Returns:
- Tensor[N, M]: the NxM matrix containing the pairwise generalized IoU values
- for every element in boxes1 and boxes2
- """
-
- def box_area(box):
- # box = 4xn
- return (box[2] - box[0]) * (box[3] - box[1])
-
- area1 = box_area(box1.T)
- area2 = box_area(box2.T)
-
- inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
- union = (area1[:, None] + area2 - inter)
-
- iou = inter / union
-
- lti = torch.min(box1[:, None, :2], box2[:, :2])
- rbi = torch.max(box1[:, None, 2:], box2[:, 2:])
-
- whi = (rbi - lti).clamp(min=0) # [N,M,2]
- areai = whi[:, :, 0] * whi[:, :, 1]
-
- return iou - (areai - union) / areai
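A scalar sketch of the GIoU formula `iou - (areai - union) / areai` (hypothetical helper, no torch). Unlike plain IoU, GIoU goes negative for poorly aligned or disjoint boxes, which is what gives a GIoU loss a useful gradient even when there is no intersection at all:

```python
def giou(a, b):
    # scalar GIoU for two (x1, y1, x2, y2) boxes: IoU - (C - U) / C,
    # where C is the area of the smallest enclosing box and U the union
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    cw = max(a[2], b[2]) - min(a[0], b[0])
    ch = max(a[3], b[3]) - min(a[1], b[1])
    c_area = cw * ch
    return inter / union - (c_area - union) / c_area

print(round(giou((0, 0, 10, 10), (5, 5, 15, 15)), 4))   # -0.0794
print(round(giou((0, 0, 10, 10), (20, 0, 30, 10)), 4))  # -0.3333 (disjoint)
```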
-
-
-def box_ciou(box1, box2, eps: float = 1e-7):
- """
- Return complete intersection-over-union (Jaccard index) between two sets of boxes.
- Both sets of boxes are expected to be in ``(x1, y1, x2, y2)`` format with
- ``0 <= x1 < x2`` and ``0 <= y1 < y2``.
- Args:
- boxes1 (Tensor[N, 4]): first set of boxes
- boxes2 (Tensor[M, 4]): second set of boxes
- eps (float, optional): small number to prevent division by zero. Default: 1e-7
- Returns:
- Tensor[N, M]: the NxM matrix containing the pairwise complete IoU values
- for every element in boxes1 and boxes2
- """
-
- def box_area(box):
- # box = 4xn
- return (box[2] - box[0]) * (box[3] - box[1])
-
- area1 = box_area(box1.T)
- area2 = box_area(box2.T)
-
- inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
- union = (area1[:, None] + area2 - inter)
-
- iou = inter / union
-
- lti = torch.min(box1[:, None, :2], box2[:, :2])
- rbi = torch.max(box1[:, None, 2:], box2[:, 2:])
-
- whi = (rbi - lti).clamp(min=0) # [N,M,2]
- diagonal_distance_squared = (whi[:, :, 0] ** 2) + (whi[:, :, 1] ** 2) + eps
-
- # centers of boxes
- x_p = (box1[:, None, 0] + box1[:, None, 2]) / 2
- y_p = (box1[:, None, 1] + box1[:, None, 3]) / 2
- x_g = (box2[:, 0] + box2[:, 2]) / 2
- y_g = (box2[:, 1] + box2[:, 3]) / 2
- # The distance between boxes' centers squared.
- centers_distance_squared = (x_p - x_g) ** 2 + (y_p - y_g) ** 2
-
- w_pred = box1[:, None, 2] - box1[:, None, 0]
- h_pred = box1[:, None, 3] - box1[:, None, 1]
-
- w_gt = box2[:, 2] - box2[:, 0]
- h_gt = box2[:, 3] - box2[:, 1]
-
- v = (4 / (torch.pi ** 2)) * torch.pow((torch.atan(w_gt / h_gt) - torch.atan(w_pred / h_pred)), 2)
- with torch.no_grad():
- alpha = v / (1 - iou + v + eps)
- return iou - (centers_distance_squared / diagonal_distance_squared) - alpha * v
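CIoU extends DIoU with an aspect-ratio consistency term `alpha * v`; when the two boxes share the same width/height ratio, `v = 0` and CIoU reduces to DIoU. A scalar sketch (hypothetical helper, no torch) demonstrating that with two squares:

```python
import math

def ciou(a, b, eps=1e-7):
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    iou = inter / union
    cw = max(a[2], b[2]) - min(a[0], b[0])
    ch = max(a[3], b[3]) - min(a[1], b[1])
    c2 = cw ** 2 + ch ** 2 + eps  # enclosing-box diagonal squared
    rho2 = (((a[0] + a[2]) - (b[0] + b[2])) / 2) ** 2 \
         + (((a[1] + a[3]) - (b[1] + b[3])) / 2) ** 2  # center distance squared
    v = (4 / math.pi ** 2) * (math.atan((b[2] - b[0]) / (b[3] - b[1]))
                              - math.atan((a[2] - a[0]) / (a[3] - a[1]))) ** 2
    alpha = v / (1 - iou + v + eps)
    return iou - rho2 / c2 - alpha * v

# both boxes are squares, so v = 0 and CIoU equals DIoU here: 1/7 - 50/450
print(round(ciou((0, 0, 10, 10), (5, 5, 15, 15)), 4))  # 0.0317
```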
-
-
-def box_diou(box1, box2, eps: float = 1e-7):
- """
- Return distance intersection-over-union (Jaccard index) between two sets of boxes.
- Both sets of boxes are expected to be in ``(x1, y1, x2, y2)`` format with
- ``0 <= x1 < x2`` and ``0 <= y1 < y2``.
- Args:
- boxes1 (Tensor[N, 4]): first set of boxes
- boxes2 (Tensor[M, 4]): second set of boxes
- eps (float, optional): small number to prevent division by zero. Default: 1e-7
- Returns:
- Tensor[N, M]: the NxM matrix containing the pairwise distance IoU values
- for every element in boxes1 and boxes2
- """
-
- def box_area(box):
- # box = 4xn
- return (box[2] - box[0]) * (box[3] - box[1])
-
- area1 = box_area(box1.T)
- area2 = box_area(box2.T)
-
- inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
- union = (area1[:, None] + area2 - inter)
-
- iou = inter / union
-
- lti = torch.min(box1[:, None, :2], box2[:, :2])
- rbi = torch.max(box1[:, None, 2:], box2[:, 2:])
-
- whi = (rbi - lti).clamp(min=0) # [N,M,2]
- diagonal_distance_squared = (whi[:, :, 0] ** 2) + (whi[:, :, 1] ** 2) + eps
-
- # centers of boxes
- x_p = (box1[:, None, 0] + box1[:, None, 2]) / 2
- y_p = (box1[:, None, 1] + box1[:, None, 3]) / 2
- x_g = (box2[:, 0] + box2[:, 2]) / 2
- y_g = (box2[:, 1] + box2[:, 3]) / 2
- # The distance between boxes' centers squared.
- centers_distance_squared = (x_p - x_g) ** 2 + (y_p - y_g) ** 2
-
- # The distance IoU is the IoU penalized by a normalized
- # distance between boxes' centers squared.
- return iou - (centers_distance_squared / diagonal_distance_squared)
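The value of the distance penalty is that it separates cases plain IoU cannot: two disjoint pairs both have IoU = 0, but DIoU ranks the pair with closer centers higher. A scalar sketch (hypothetical helper, no torch):

```python
def diou(a, b, eps=1e-7):
    # scalar DIoU: IoU minus (center distance^2) / (enclosing-box diagonal^2)
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    iou = inter / union
    cw = max(a[2], b[2]) - min(a[0], b[0])
    ch = max(a[3], b[3]) - min(a[1], b[1])
    diag2 = cw ** 2 + ch ** 2 + eps
    cd2 = ((a[0] + a[2] - b[0] - b[2]) ** 2
           + (a[1] + a[3] - b[1] - b[3]) ** 2) / 4
    return iou - cd2 / diag2

# both pairs are disjoint (IoU = 0), but the nearer box scores higher
print(round(diou((0, 0, 10, 10), (12, 0, 22, 10)), 4))  # -0.2466
print(round(diou((0, 0, 10, 10), (30, 0, 40, 10)), 4))  # -0.5294
```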
-
-
-def non_max_suppression(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, multi_label=False,
- labels=()):
- """Runs Non-Maximum Suppression (NMS) on inference results
-
- Returns:
- list of detections, on (n,6) tensor per image [xyxy, conf, cls]
- """
-
- nc = prediction.shape[2] - 5 # number of classes
- xc = prediction[..., 4] > conf_thres # candidates
-
- # Settings
- min_wh, max_wh = 2, 4096 # (pixels) minimum and maximum box width and height
- max_det = 300 # maximum number of detections per image
- max_nms = 30000 # maximum number of boxes into torchvision.ops.nms()
- time_limit = 10.0 # seconds to quit after
- redundant = True # require redundant detections
- multi_label &= nc > 1 # multiple labels per box (adds 0.5ms/img)
- merge = False # use merge-NMS
-
- t = time.time()
- output = [torch.zeros((0, 6), device=prediction.device)] * prediction.shape[0]
- for xi, x in enumerate(prediction): # image index, image inference
- # Apply constraints
- # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height
- x = x[xc[xi]] # confidence
-
- # Cat apriori labels if autolabelling
- if labels and len(labels[xi]):
- l = labels[xi]
- v = torch.zeros((len(l), nc + 5), device=x.device)
- v[:, :4] = l[:, 1:5] # box
- v[:, 4] = 1.0 # conf
- v[range(len(l)), l[:, 0].long() + 5] = 1.0 # cls
- x = torch.cat((x, v), 0)
-
- # If none remain process next image
- if not x.shape[0]:
- continue
-
- # Compute conf
- if nc == 1:
- x[:, 5:] = x[:, 4:5] # for models with one class, cls_loss is 0 and cls_conf is always 0.5,
- # so there is no need to multiplicate.
- else:
- x[:, 5:] *= x[:, 4:5] # conf = obj_conf * cls_conf
-
- # Box (center x, center y, width, height) to (x1, y1, x2, y2)
- box = xywh2xyxy(x[:, :4])
-
- # Detections matrix nx6 (xyxy, conf, cls)
- if multi_label:
- i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T
- x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1)
- else: # best class only
- conf, j = x[:, 5:].max(1, keepdim=True)
- x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres]
-
- # Filter by class
- if classes is not None:
- x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]
-
- # Apply finite constraint
- # if not torch.isfinite(x).all():
- # x = x[torch.isfinite(x).all(1)]
-
- # Check shape
- n = x.shape[0] # number of boxes
- if not n: # no boxes
- continue
- elif n > max_nms: # excess boxes
- x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence
-
- # Batched NMS
- c = x[:, 5:6] * (0 if agnostic else max_wh) # classes
- boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores
- i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS
- if i.shape[0] > max_det: # limit detections
- i = i[:max_det]
- if merge and (1 < n < 3E3): # Merge NMS (boxes merged using weighted mean)
- # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4)
- iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix
- weights = iou * scores[None] # box weights
- x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes
- if redundant:
- i = i[iou.sum(1) > 1] # require redundancy
-
- output[xi] = x[i]
- if (time.time() - t) > time_limit:
- print(f'WARNING: NMS time limit {time_limit}s exceeded')
- break # time limit exceeded
-
- return output
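`torchvision.ops.nms` does the heavy lifting above; the greedy procedure it implements can be sketched in plain Python (illustrative only). Note also why the class-offset trick works: adding `class_index * max_wh` to each box shifts different classes into disjoint coordinate ranges, so one `nms` call never suppresses across classes:

```python
def iou(a, b):
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def greedy_nms(boxes, scores, iou_thres=0.45):
    # keep the highest-scoring box, drop everything overlapping it, repeat
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    keep = []
    while order:
        i, order = order[0], order[1:]
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= iou_thres]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(greedy_nms(boxes, scores))  # [0, 2]: box 1 is suppressed by box 0
```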
-
-
-def non_max_suppression_kpt(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, multi_label=False,
- labels=(), kpt_label=False, nc=None, nkpt=None):
- """Runs Non-Maximum Suppression (NMS) on inference results
-
- Returns:
- list of detections, on (n,6) tensor per image [xyxy, conf, cls]
- """
- if nc is None:
- nc = prediction.shape[2] - 5 if not kpt_label else prediction.shape[2] - 56 # number of classes
- xc = prediction[..., 4] > conf_thres # candidates
-
- # Settings
- min_wh, max_wh = 2, 4096 # (pixels) minimum and maximum box width and height
- max_det = 300 # maximum number of detections per image
- max_nms = 30000 # maximum number of boxes into torchvision.ops.nms()
- time_limit = 10.0 # seconds to quit after
- redundant = True # require redundant detections
- multi_label &= nc > 1 # multiple labels per box (adds 0.5ms/img)
- merge = False # use merge-NMS
-
- t = time.time()
- output = [torch.zeros((0,6), device=prediction.device)] * prediction.shape[0]
- for xi, x in enumerate(prediction): # image index, image inference
- # Apply constraints
- # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height
- x = x[xc[xi]] # confidence
-
- # Cat apriori labels if autolabelling
- if labels and len(labels[xi]):
- l = labels[xi]
- v = torch.zeros((len(l), nc + 5), device=x.device)
- v[:, :4] = l[:, 1:5] # box
- v[:, 4] = 1.0 # conf
- v[range(len(l)), l[:, 0].long() + 5] = 1.0 # cls
- x = torch.cat((x, v), 0)
-
- # If none remain process next image
- if not x.shape[0]:
- continue
-
- # Compute conf
- x[:, 5:5+nc] *= x[:, 4:5] # conf = obj_conf * cls_conf
-
- # Box (center x, center y, width, height) to (x1, y1, x2, y2)
- box = xywh2xyxy(x[:, :4])
-
- # Detections matrix nx6 (xyxy, conf, cls)
- if multi_label:
- i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T
- x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1)
- else: # best class only
- if not kpt_label:
- conf, j = x[:, 5:].max(1, keepdim=True)
- x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres]
- else:
- kpts = x[:, 6:]
- conf, j = x[:, 5:6].max(1, keepdim=True)
- x = torch.cat((box, conf, j.float(), kpts), 1)[conf.view(-1) > conf_thres]
-
-
- # Filter by class
- if classes is not None:
- x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]
-
- # Apply finite constraint
- # if not torch.isfinite(x).all():
- # x = x[torch.isfinite(x).all(1)]
-
- # Check shape
- n = x.shape[0] # number of boxes
- if not n: # no boxes
- continue
- elif n > max_nms: # excess boxes
- x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence
-
- # Batched NMS
- c = x[:, 5:6] * (0 if agnostic else max_wh) # classes
- boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores
- i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS
- if i.shape[0] > max_det: # limit detections
- i = i[:max_det]
- if merge and (1 < n < 3E3): # Merge NMS (boxes merged using weighted mean)
- # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4)
- iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix
- weights = iou * scores[None] # box weights
- x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes
- if redundant:
- i = i[iou.sum(1) > 1] # require redundancy
-
- output[xi] = x[i]
- if (time.time() - t) > time_limit:
- print(f'WARNING: NMS time limit {time_limit}s exceeded')
- break # time limit exceeded
-
- return output
-
-
-def strip_optimizer(f='best.pt', s=''): # from utils.general import *; strip_optimizer()
- # Strip optimizer from 'f' to finalize training, optionally save as 's'
- x = torch.load(f, map_location=torch.device('cpu'))
- if x.get('ema'):
- x['model'] = x['ema'] # replace model with ema
- for k in 'optimizer', 'training_results', 'wandb_id', 'ema', 'updates': # keys
- x[k] = None
- x['epoch'] = -1
- x['model'].half() # to FP16
- for p in x['model'].parameters():
- p.requires_grad = False
- torch.save(x, s or f)
- mb = os.path.getsize(s or f) / 1E6 # filesize
- print(f"Optimizer stripped from {f},{(' saved as %s,' % s) if s else ''} {mb:.1f}MB")
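Stripping is ordinary dict surgery on the checkpoint: promote the EMA weights to `model`, then null out training-only state so the saved file shrinks to inference essentials. The same logic on a plain dict (a sketch without torch; the real function additionally halves the weights and freezes gradients):

```python
def strip_checkpoint(x):
    # mirror strip_optimizer on a plain dict: keep EMA weights, drop training state
    if x.get('ema'):
        x['model'] = x['ema']  # EMA weights generalize better than the raw ones
    for k in ('optimizer', 'training_results', 'wandb_id', 'ema', 'updates'):
        x[k] = None
    x['epoch'] = -1
    return x

ckpt = {'model': 'raw', 'ema': 'averaged', 'optimizer': {'state': 1},
        'training_results': '...', 'wandb_id': 'abc', 'updates': 10, 'epoch': 99}
strip_checkpoint(ckpt)
print(ckpt['model'], ckpt['optimizer'], ckpt['epoch'])  # averaged None -1
```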
-
-
-def print_mutation(hyp, results, yaml_file='hyp_evolved.yaml', bucket=''):
- # Print mutation results to evolve.txt (for use with train.py --evolve)
- a = '%10s' * len(hyp) % tuple(hyp.keys()) # hyperparam keys
- b = '%10.3g' * len(hyp) % tuple(hyp.values()) # hyperparam values
- c = '%10.4g' * len(results) % results # results (P, R, mAP@0.5, mAP@0.5:0.95, val_losses x 3)
- print('\n%s\n%s\nEvolved fitness: %s\n' % (a, b, c))
-
- if bucket:
- url = 'gs://%s/evolve.txt' % bucket
- if gsutil_getsize(url) > (os.path.getsize('evolve.txt') if os.path.exists('evolve.txt') else 0):
- os.system('gsutil cp %s .' % url) # download evolve.txt if larger than local
-
- with open('evolve.txt', 'a') as f: # append result
- f.write(c + b + '\n')
- x = np.unique(np.loadtxt('evolve.txt', ndmin=2), axis=0) # load unique rows
- x = x[np.argsort(-fitness(x))] # sort
- np.savetxt('evolve.txt', x, '%10.3g') # save sort by fitness
-
- # Save yaml
- for i, k in enumerate(hyp.keys()):
- hyp[k] = float(x[0, i + 7])
- with open(yaml_file, 'w') as f:
- results = tuple(x[0, :7])
- c = '%10.4g' * len(results) % results # results (P, R, mAP@0.5, mAP@0.5:0.95, val_losses x 3)
- f.write('# Hyperparameter Evolution Results\n# Generations: %g\n# Metrics: ' % len(x) + c + '\n\n')
- yaml.dump(hyp, f, sort_keys=False)
-
- if bucket:
- os.system('gsutil cp evolve.txt %s gs://%s' % (yaml_file, bucket)) # upload
-
-
-def apply_classifier(x, model, img, im0):
- # applies a second stage classifier to yolo outputs
- im0 = [im0] if isinstance(im0, np.ndarray) else im0
- for i, d in enumerate(x): # per image
- if d is not None and len(d):
- d = d.clone()
-
- # Reshape and pad cutouts
- b = xyxy2xywh(d[:, :4]) # boxes
- b[:, 2:] = b[:, 2:].max(1)[0].unsqueeze(1) # rectangle to square
- b[:, 2:] = b[:, 2:] * 1.3 + 30 # pad
- d[:, :4] = xywh2xyxy(b).long()
-
- # Rescale boxes from img_size to im0 size
- scale_coords(img.shape[2:], d[:, :4], im0[i].shape)
-
- # Classes
- pred_cls1 = d[:, 5].long()
- ims = []
- for j, a in enumerate(d): # per item
- cutout = im0[i][int(a[1]):int(a[3]), int(a[0]):int(a[2])]
- im = cv2.resize(cutout, (224, 224)) # BGR
- # cv2.imwrite('test%i.jpg' % j, cutout)
-
- im = im[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416
- im = np.ascontiguousarray(im, dtype=np.float32) # uint8 to float32
- im /= 255.0 # 0 - 255 to 0.0 - 1.0
- ims.append(im)
-
- pred_cls2 = model(torch.Tensor(ims).to(d.device)).argmax(1) # classifier prediction
- x[i] = x[i][pred_cls1 == pred_cls2] # retain matching class detections
-
- return x
-
-
-def increment_path(path, exist_ok=True, sep=''):
- # Increment path, i.e. runs/exp --> runs/exp{sep}0, runs/exp{sep}1 etc.
- path = Path(path) # os-agnostic
- if (path.exists() and exist_ok) or (not path.exists()):
- return str(path)
- else:
- dirs = glob.glob(f"{path}{sep}*") # similar paths
- matches = [re.search(rf"%s{sep}(\d+)" % path.stem, d) for d in dirs]
- i = [int(m.groups()[0]) for m in matches if m] # indices
- n = max(i) + 1 if i else 2 # increment number
- return f"{path}{sep}{n}" # update path
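A runnable sketch of the same incrementing logic, exercised in a temporary directory. One deviation from the code above, flagged as an assumption: the stem is passed through `re.escape` so path names containing regex metacharacters cannot break the pattern (the original interpolates `path.stem` raw):

```python
import glob
import os
import re
import tempfile
from pathlib import Path

def increment_path(path, exist_ok=False, sep=''):
    # runs/exp -> runs/exp, then runs/exp2, runs/exp3, ... once taken
    path = Path(path)
    if (path.exists() and exist_ok) or (not path.exists()):
        return str(path)
    dirs = glob.glob(f"{path}{sep}*")  # similar paths
    matches = [re.search(rf"{re.escape(path.stem)}{sep}(\d+)", d) for d in dirs]
    i = [int(m.groups()[0]) for m in matches if m]  # existing indices
    n = max(i) + 1 if i else 2
    return f"{path}{sep}{n}"

root = tempfile.mkdtemp()
exp = os.path.join(root, 'exp')
print(Path(increment_path(exp)).name)  # 'exp': nothing exists yet
os.makedirs(exp)
os.makedirs(os.path.join(root, 'exp2'))
print(Path(increment_path(exp)).name)  # 'exp3': exp and exp2 are taken
```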
diff --git a/spaces/shripadbhat/whisper-demo/README.md b/spaces/shripadbhat/whisper-demo/README.md
deleted file mode 100644
index 665302068703304733ef62a601590cc4eade0cd5..0000000000000000000000000000000000000000
--- a/spaces/shripadbhat/whisper-demo/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: Whisper Hindi Transcriber
-emoji: 🇮🇳
-colorFrom: indigo
-colorTo: red
-sdk: gradio
-sdk_version: 3.9.1
-app_file: app.py
-pinned: false
-tags:
-- whisper-event
-duplicated_from: whisper-event/whisper-demo
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/shubh2014shiv/Japanese_NLP/GD_download.py b/spaces/shubh2014shiv/Japanese_NLP/GD_download.py
deleted file mode 100644
index 2b4800c788edd1668ef2c10a620230777e55b56c..0000000000000000000000000000000000000000
--- a/spaces/shubh2014shiv/Japanese_NLP/GD_download.py
+++ /dev/null
@@ -1,30 +0,0 @@
-import requests
-
-def download_file_from_google_drive(id, destination):
- URL = "https://docs.google.com/uc?export=download"
-
- session = requests.Session()
-
- response = session.get(URL, params = { 'id' : id }, stream = True)
- token = get_confirm_token(response)
-
- if token:
- params = { 'id' : id, 'confirm' : token }
- response = session.get(URL, params = params, stream = True)
-
- save_response_content(response, destination)
-
-def get_confirm_token(response):
- for key, value in response.cookies.items():
- if key.startswith('download_warning'):
- return value
-
- return None
-
-def save_response_content(response, destination):
- CHUNK_SIZE = 32768
-
- with open(destination, "wb") as f:
- for chunk in response.iter_content(CHUNK_SIZE):
- if chunk: # filter out keep-alive new chunks
- f.write(chunk)
\ No newline at end of file
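The helper above streams the response in 32 KB chunks rather than buffering the whole file in memory. The copying pattern, isolated from `requests` so it is self-contained (one difference: `iter_content` can yield empty keep-alive chunks mid-stream, whereas for a plain file-like object an empty read means end-of-stream):

```python
import io

CHUNK_SIZE = 32768

def save_stream(stream, dest):
    # copy in fixed-size chunks so memory use stays bounded
    while True:
        chunk = stream.read(CHUNK_SIZE)
        if not chunk:  # empty read signals end-of-stream
            break
        dest.write(chunk)

src = io.BytesIO(b'a' * 100_000)
dst = io.BytesIO()
save_stream(src, dst)
print(len(dst.getvalue()))  # 100000
```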
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Blockman Go Hack The Best Way to Get GCubes Coins Without Spending Money.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Blockman Go Hack The Best Way to Get GCubes Coins Without Spending Money.md
deleted file mode 100644
index 3d5b9cf2e9d78d3833c79fc46dc83c436f40b8dd..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Blockman Go Hack The Best Way to Get GCubes Coins Without Spending Money.md
+++ /dev/null
@@ -1,108 +0,0 @@
-
-How to Download a Blockman Go Hack
- What is Blockman Go?
-A sandbox game with various mini-games and modes
-A social platform with customization and chat features
-
-Why do some players want to hack Blockman Go?
-To get unlimited gold and diamonds
-To cheat in competitive games
-To access premium features and items
-How to download a Blockman Go hack safely?
-Avoid unsolicited links and ads
-Use curated software lists and reviews
-Use Google Play Protect and antivirus software
-Check the app permissions and privacy policy
-What are the risks of using a Blockman Go hack?
-Malware infection and data theft
-Account suspension and ban
-Unfair gameplay and bad reputation
-Conclusion
-FAQs
-
-
-
-Question
-Answer
-
-
-How can I get gold and diamonds in Blockman Go without hacking?
-You can get gold and diamonds in Blockman Go by playing games, watching ads, completing tasks, inviting friends, joining events, or buying them with real money or GCubes.
-
-
-How can I report a hacker in Blockman Go?
-You can report a hacker in Blockman Go by tapping on their name in the game and selecting the report option. You can also contact the customer service by email at service@blockmango.net or by filling out the feedback form in the app.
-
-
-How can I become a VIP member or get GCubes in Blockman Go?
-You can become a VIP member or get GCubes in Blockman Go by buying them with real money in the app. You can also earn GCubes by inviting friends or completing surveys.
-
-
-How can I update or uninstall a Blockman Go hack?
-You can update or uninstall a Blockman Go hack by following the instructions provided by the hack's developer or source. You can also go to Settings > Apps > Blockman Go Hack > Update or Uninstall.
-
-
-How can I restore my data or account if I lose them due to a Blockman Go hack?
-You can restore your data or account if you lose them due to a Blockman Go hack by contacting the customer service by email at service@blockmango.net or by filling out the feedback form in the app. You may need to provide proof of your identity and ownership of your account.
-
-
-
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Car Game APK 2021 The Most Realistic and Fun Racing Games for Your Phone.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Car Game APK 2021 The Most Realistic and Fun Racing Games for Your Phone.md
deleted file mode 100644
index d80eab25debd9e872f70c4063720ef6b7acae12d..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Car Game APK 2021 The Most Realistic and Fun Racing Games for Your Phone.md
+++ /dev/null
@@ -1,229 +0,0 @@
-
-Car Game Download APK 2021: The Best Racing Games for Android
-car game download apk 2021
-Introduction
-What are car games?
-Why download car games as APK files?
-
-
-
-
-How to download and install car games APK files?
-
-
-Top 5 Car Games to Download as APK Files in 2021
-
-Racing in Car 2021
-Features
-
-
-Pros and cons
-
-
-
-Pros Cons
-Immersive and realistic gameplay Limited number of cars and tracks
-Customization options for your car No multiplayer or online mode
-Offline mode available No career or story mode
-Easy and intuitive controls No music or sound options Extreme Car Driving Simulator
-Features
-
-
-Pros and cons
-
-
-Pros Cons
-Huge open world city to explore Repetitive and boring missions
-Different modes to choose from Poor graphics and sound quality
-Customization options for your car Lack of variety in cars and environments
-Realistic damage system and driving physics Annoying ads and pop-ups
-Leaderboards and achievements No multiplayer or online mode Ultimate Car Driving Simulator
-Features
-
-
-Pros and cons
-
-
-
-Pros Cons
-Big open world with realistic physics and graphics Some bugs and glitches
-Different cars to unlock and drive Some cars are too expensive or hard to get
-Customization options for your car Some customization parts are not compatible with some cars
-Various game modes: free roam, traffic, offroad, etc. No multiplayer or online mode
-Mini games and challenges No story or career mode Race Master 3D - Car Racing
-Features
-
-
-Pros and cons
-
-
-Pros Cons
-Different tracks and environments to race on No customization options for your car
-Different cars to choose from and upgrade No free roam or sandbox mode
-Realistic car physics and sound effects No damage system or crash effects
-Multiplayer mode with online leaderboards and chat No offline mode available Daily rewards and bonuses Lots of ads and in-app purchases Need for Speed™ No Limits
-Features
-
-
-Pros and cons
-
-
-
-Pros Cons
-Over 100 real cars from top brands to collect and customize Requires a lot of storage space and data
-Different modes and events: campaign, car series, rivals, etc. Limited fuel and gold resources
-Stunning graphics and sound effects May not run smoothly on low-end devices
-Multiplayer mode with online leaderboards and chat No offline mode available
-Daily rewards and challenges Lots of ads and in-app purchases Conclusion
-
-
-FAQs
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/DRAGON BALL LEGENDS The Ultimate DB Anime Action-RPG Experience for iPad.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/DRAGON BALL LEGENDS The Ultimate DB Anime Action-RPG Experience for iPad.md
deleted file mode 100644
index f811809c90bd36e9b4f1ca3c9639d05e3697a80d..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/DRAGON BALL LEGENDS The Ultimate DB Anime Action-RPG Experience for iPad.md
+++ /dev/null
@@ -1,154 +0,0 @@
-
-How to Download Dragon Ball Legends on iPad
-What is Dragon Ball Legends?
-A brief introduction to the game and its features
-dragon ball legends download ipad
-
-
- The requirements and compatibility of the game for iPad
-
-
-How to Download and Install Dragon Ball Legends from the App Store
-The steps to find and download the game from the App Store
-
-
- How to open and play the game on your iPad
-
-
- How to Download and Install Dragon Ball Legends from iCloud
-The benefits of using iCloud to sync your apps across devices
-
-
- The steps to access and download the game from iCloud
-
-
- How to Update Dragon Ball Legends on your iPad
-The importance of updating the game for new features and bug fixes
-The steps to check and update the game from the App Store or iCloud
-
-
- Conclusion
-A summary of the main points and a call to action for the readers
-FAQs
-Q1: How much space does Dragon Ball Legends take on my iPad?
-Q2: How can I play Dragon Ball Legends with my friends online?
-Q3: How can I get more characters and items in Dragon Ball Legends?
-Q4: How can I contact the support team if I have any issues with the game?
-Q5: Where can I find more information and tips about Dragon Ball Legends?
-
-
-
\ No newline at end of file
diff --git a/spaces/siviltoplumtech/metadata/README.md b/spaces/siviltoplumtech/metadata/README.md
deleted file mode 100644
index 1fc57833ba8cdf7a0adffdd4496bdf8c4c454499..0000000000000000000000000000000000000000
--- a/spaces/siviltoplumtech/metadata/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Metadata
-emoji: 💻
-colorFrom: yellow
-colorTo: gray
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/sklkd93/CodeFormer/CodeFormer/scripts/download_pretrained_models_from_gdrive.py b/spaces/sklkd93/CodeFormer/CodeFormer/scripts/download_pretrained_models_from_gdrive.py
deleted file mode 100644
index 7df5be6fc260394ee9bbd0a7ae377e2ca657fe83..0000000000000000000000000000000000000000
--- a/spaces/sklkd93/CodeFormer/CodeFormer/scripts/download_pretrained_models_from_gdrive.py
+++ /dev/null
@@ -1,60 +0,0 @@
-import argparse
-import os
-from os import path as osp
-
-# from basicsr.utils.download_util import download_file_from_google_drive
-import gdown
-
-
-def download_pretrained_models(method, file_ids):
- save_path_root = f'./weights/{method}'
- os.makedirs(save_path_root, exist_ok=True)
-
- for file_name, file_id in file_ids.items():
- file_url = 'https://drive.google.com/uc?id='+file_id
- save_path = osp.abspath(osp.join(save_path_root, file_name))
- if osp.exists(save_path):
-            user_response = input(f'{file_name} already exists. Do you want to overwrite it? Y/N\n')
-            if user_response.lower() == 'y':
-                print(f'Overwriting {file_name} at {save_path}')
- gdown.download(file_url, save_path, quiet=False)
- # download_file_from_google_drive(file_id, save_path)
- elif user_response.lower() == 'n':
- print(f'Skipping {file_name}')
- else:
- raise ValueError('Wrong input. Only accepts Y/N.')
- else:
- print(f'Downloading {file_name} to {save_path}')
- gdown.download(file_url, save_path, quiet=False)
- # download_file_from_google_drive(file_id, save_path)
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
-
- parser.add_argument(
- 'method',
- type=str,
- help=("Options: 'CodeFormer' 'facelib'. Set to 'all' to download all the models."))
- args = parser.parse_args()
-
- # file name: file id
- # 'dlib': {
- # 'mmod_human_face_detector-4cb19393.dat': '1qD-OqY8M6j4PWUP_FtqfwUPFPRMu6ubX',
- # 'shape_predictor_5_face_landmarks-c4b1e980.dat': '1vF3WBUApw4662v9Pw6wke3uk1qxnmLdg',
- # 'shape_predictor_68_face_landmarks-fbdc2cb8.dat': '1tJyIVdCHaU6IDMDx86BZCxLGZfsWB8yq'
- # }
- file_ids = {
- 'CodeFormer': {
- 'codeformer.pth': '1v_E_vZvP-dQPF55Kc5SRCjaKTQXDz-JB'
- },
- 'facelib': {
- 'yolov5l-face.pth': '131578zMA6B2x8VQHyHfa6GEPtulMCNzV',
- 'parsing_parsenet.pth': '16pkohyZZ8ViHGBk3QtVqxLZKzdo466bK'
- }
- }
-
- if args.method == 'all':
- for method in file_ids.keys():
- download_pretrained_models(method, file_ids[method])
- else:
- download_pretrained_models(args.method, file_ids[args.method])
\ No newline at end of file
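The deleted download script above interleaves the Y/N prompt with the gdown calls, which makes the branch logic awkward to test. A minimal sketch of the same decision logic as a pure helper — the function name and return labels are assumptions for illustration, not part of the original repo:

```python
from typing import Optional


def resolve_action(already_exists: bool, response: Optional[str]) -> str:
    """Decide what to do with a pending download.

    Mirrors the prompt flow in the script: fetch fresh files, overwrite
    on 'y', skip on 'n', and reject any other reply.
    """
    if not already_exists:
        return "download"
    if response is None:
        raise ValueError("A Y/N response is required for existing files.")
    answer = response.strip().lower()
    if answer == "y":
        return "overwrite"
    if answer == "n":
        return "skip"
    raise ValueError("Wrong input. Only accepts Y/N.")
```

Separating the decision from the `input()`/`gdown.download()` side effects lets the overwrite policy be unit-tested without touching the network.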
diff --git a/spaces/smjain/smjainvoice/train.py b/spaces/smjain/smjainvoice/train.py
deleted file mode 100644
index 202b46344646676b0ae0fe69d1d615913c7a6b94..0000000000000000000000000000000000000000
--- a/spaces/smjain/smjainvoice/train.py
+++ /dev/null
@@ -1,149 +0,0 @@
-#!/usr/bin/env python3
-#coding:utf-8
-
-import os
-import os.path as osp
-import re
-import sys
-import yaml
-import shutil
-import numpy as np
-import paddle
-import click
-import warnings
-warnings.simplefilter('ignore')
-
-from functools import reduce
-from munch import Munch
-
-from starganv2vc_paddle.meldataset import build_dataloader
-from starganv2vc_paddle.optimizers import build_optimizer
-from starganv2vc_paddle.models import build_model
-from starganv2vc_paddle.trainer import Trainer
-from visualdl import LogWriter
-
-from starganv2vc_paddle.Utils.ASR.models import ASRCNN
-from starganv2vc_paddle.Utils.JDC.model import JDCNet
-
-import logging
-from logging import StreamHandler
-logger = logging.getLogger(__name__)
-logger.setLevel(logging.DEBUG)
-handler = StreamHandler()
-handler.setLevel(logging.DEBUG)
-logger.addHandler(handler)
-
-
-@click.command()
-@click.option('-p', '--config_path', default='Configs/config.yml', type=str)
-
-def main(config_path):
- config = yaml.safe_load(open(config_path))
-
- log_dir = config['log_dir']
- if not osp.exists(log_dir): os.makedirs(log_dir, exist_ok=True)
- shutil.copy(config_path, osp.join(log_dir, osp.basename(config_path)))
- writer = LogWriter(log_dir + "/visualdl")
-
- # write logs
- file_handler = logging.FileHandler(osp.join(log_dir, 'train.log'))
- file_handler.setLevel(logging.DEBUG)
- file_handler.setFormatter(logging.Formatter('%(levelname)s:%(asctime)s: %(message)s'))
- logger.addHandler(file_handler)
-
- batch_size = config.get('batch_size', 10)
- epochs = config.get('epochs', 1000)
- save_freq = config.get('save_freq', 20)
- train_path = config.get('train_data', None)
- val_path = config.get('val_data', None)
- stage = config.get('stage', 'star')
- fp16_run = config.get('fp16_run', False)
-
- # load data
- train_list, val_list = get_data_path_list(train_path, val_path)
- train_dataloader = build_dataloader(train_list,
- batch_size=batch_size,
- num_workers=4)
- val_dataloader = build_dataloader(val_list,
- batch_size=batch_size,
- validation=True,
- num_workers=2)
-
- # load pretrained ASR model
- ASR_config = config.get('ASR_config', False)
- ASR_path = config.get('ASR_path', False)
- with open(ASR_config) as f:
- ASR_config = yaml.safe_load(f)
- ASR_model_config = ASR_config['model_params']
- ASR_model = ASRCNN(**ASR_model_config)
- params = paddle.load(ASR_path)['model']
- ASR_model.set_state_dict(params)
- _ = ASR_model.eval()
-
- # load pretrained F0 model
- F0_path = config.get('F0_path', False)
- F0_model = JDCNet(num_class=1, seq_len=192)
- params = paddle.load(F0_path)['net']
- F0_model.set_state_dict(params)
-
- # build model
- model, model_ema = build_model(Munch(config['model_params']), F0_model, ASR_model)
-
- scheduler_params = {
- "max_lr": float(config['optimizer_params'].get('lr', 2e-4)),
- "pct_start": float(config['optimizer_params'].get('pct_start', 0.0)),
- "epochs": epochs,
- "steps_per_epoch": len(train_dataloader),
- }
-
- scheduler_params_dict = {key: scheduler_params.copy() for key in model}
- scheduler_params_dict['mapping_network']['max_lr'] = 2e-6
- optimizer = build_optimizer({key: model[key].parameters() for key in model},
- scheduler_params_dict=scheduler_params_dict)
-
- trainer = Trainer(args=Munch(config['loss_params']), model=model,
- model_ema=model_ema,
- optimizer=optimizer,
- train_dataloader=train_dataloader,
- val_dataloader=val_dataloader,
- logger=logger,
- fp16_run=fp16_run)
-
- if config.get('pretrained_model', '') != '':
- trainer.load_checkpoint(config['pretrained_model'],
- load_only_params=config.get('load_only_params', True))
-
- for _ in range(1, epochs+1):
- epoch = trainer.epochs
- train_results = trainer._train_epoch()
- eval_results = trainer._eval_epoch()
- results = train_results.copy()
- results.update(eval_results)
- logger.info('--- epoch %d ---' % epoch)
- for key, value in results.items():
- if isinstance(value, float):
- logger.info('%-15s: %.4f' % (key, value))
- writer.add_scalar(key, value, epoch)
- else:
- for v in value:
- writer.add_histogram('eval_spec', v, epoch)
- if (epoch % save_freq) == 0:
- trainer.save_checkpoint(osp.join(log_dir, 'epoch_%05d.pd' % epoch))
-
- return 0
-
-def get_data_path_list(train_path=None, val_path=None):
- if train_path is None:
- train_path = "Data/train_list.txt"
- if val_path is None:
- val_path = "Data/val_list.txt"
-
- with open(train_path, 'r') as f:
- train_list = f.readlines()
- with open(val_path, 'r') as f:
- val_list = f.readlines()
-
- return train_list, val_list
-
-if __name__=="__main__":
- main()
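The deleted train.py builds one scheduler config per sub-model by copying a shared dict and then overriding a single key for the mapping network. The per-key `.copy()` is load-bearing: with a shared reference, the override would leak into every entry. A small sketch of that pattern, with placeholder model keys standing in for the real sub-models:

```python
# Shared OneCycle-style settings, as in train.py's scheduler_params.
base_params = {"max_lr": 2e-4, "pct_start": 0.0, "epochs": 10}

# Placeholder sub-model names (the real keys come from build_model).
model_keys = ["generator", "mapping_network", "style_encoder"]

# One independent copy per key, then a single targeted override.
scheduler_params_dict = {key: base_params.copy() for key in model_keys}
scheduler_params_dict["mapping_network"]["max_lr"] = 2e-6

# The override stays local to the mapping network.
assert scheduler_params_dict["generator"]["max_lr"] == 2e-4
assert scheduler_params_dict["mapping_network"]["max_lr"] == 2e-6
```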
diff --git a/spaces/songdaooi/ketsueki/face_parsing/__init__.py b/spaces/songdaooi/ketsueki/face_parsing/__init__.py
deleted file mode 100644
index e98735aec33d8a4f5525f7ca03f1285d18782285..0000000000000000000000000000000000000000
--- a/spaces/songdaooi/ketsueki/face_parsing/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .swap import init_parser, swap_regions, mask_regions, mask_regions_to_list, SoftErosion
\ No newline at end of file
diff --git a/spaces/sophiamyang/Panel_apps/trivia.py b/spaces/sophiamyang/Panel_apps/trivia.py
deleted file mode 100644
index 012495961d5b2148c81f213c91e865811ecad2cf..0000000000000000000000000000000000000000
--- a/spaces/sophiamyang/Panel_apps/trivia.py
+++ /dev/null
@@ -1,168 +0,0 @@
-#!/usr/bin/env python
-# coding: utf-8
-
-# In[ ]:
-
-
-import panel as pn
-import requests
-import pandas as pd
-pn.extension()
-
-
-# In[ ]:
-
-
-def get_data(num_questions, difficulty, category):
- url = f"https://opentdb.com/api.php?amount={num_questions}&category={category_match[category]}&difficulty={difficulty}&type=boolean"
- df = pd.DataFrame(
- requests.get(url).json()['results']
- )
- return df
-
-
-# In[ ]:
-
-
-category = pn.widgets.Select(
- name='Category',
- options=[
- 'General Knowledge',
- 'Film',
- 'Music',
- 'Video Games',
- 'Science & Nature',
- 'Computers',
- 'Geography',
- 'History',
- 'Politics',
- 'Animals',
- 'Japanese Anime & Manga'
- ],
- value='General Knowledge'
-)
-category
-
-
-# In[ ]:
-
-
-category_match = {
- 'General Knowledge': 9,
- 'Books': 10,
- 'Film': 11,
- 'Music': 12,
- 'Musicals & Theatres': 13,
- 'Television': 14,
- 'Video Games': 15,
- 'Board Games': 16,
- 'Science & Nature': 17,
- 'Computers': 18,
- 'Mathematics': 19,
- 'Mythology': 20,
- 'Sports': 21,
- 'Geography': 22,
- 'History': 23,
- 'Politics': 24,
- 'Art': 25,
- 'Celebrities': 26,
- 'Animals': 27,
- 'Vehicles': 28,
- 'Comics': 29,
- 'Gadgets': 30,
- 'Japanese Anime & Manga': 31,
- 'Cartoon & Animations': 32
-}
-
-
-# In[ ]:
-
-
-difficulty = pn.widgets.Select(
- name='Difficulty',
- options=['easy', 'medium', 'hard'],
- value='easy'
-)
-difficulty
-
-
-# In[ ]:
-
-
-num_questions = pn.widgets.DiscreteSlider(
- name='Number of Questions',
- options=[5, 10, 15, 20], value=5
-)
-num_questions
-
-
-# In[ ]:
-
-
-def question_list(i, df):
-
- button_true = pn.widgets.Button(name='True')
- button_false = pn.widgets.Button(name='False')
-
- text = pn.widgets.StaticText(value='')
-
- def processing_button_true(event):
- if df.correct_answer[i] == 'True':
- text.value = 'Correct!'
- else:
- text.value = 'Incorrect!'
-
- def processing_button_false(event):
- if df.correct_answer[i] == 'False':
- text.value = 'Correct!'
- else:
- text.value = 'Incorrect!'
-
- button_true.on_click(processing_button_true)
- button_false.on_click(processing_button_false)
- return pn.Column(
- pn.pane.Markdown(f"""
-
-    # Question {i+1}:
- ### {df.question[i]}
- """),
-
- pn.Row(button_true,button_false),
- text)
-
-
-# In[ ]:
-
-
-def get_data_and_questions(num_questions, difficulty, category):
- df = get_data(num_questions, difficulty, category)
- question_pane = [question_list(i, df) for i in range(len(df))]
- trivia_pane = pn.Column(*question_pane)
- return trivia_pane
-
-
-# In[ ]:
-
-
-interactive = pn.bind(get_data_and_questions, num_questions, difficulty, category)
-
-
-# In[ ]:
-
-
-# Layout using Template
-template = pn.template.FastListTemplate(
- title='Trivia Game',
- sidebar=[num_questions, difficulty, category],
- main=[interactive],
- accent_base_color="#88d8b0",
- header_background="#88d8b0",
-)
-template.servable()
-
-
-# In[ ]:
-
-
-
-
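`get_data` in the deleted trivia app assembles the Open Trivia DB request URL from the three widget values via the `category_match` table. A sketch of that construction with a trimmed-down mapping (only two entries reproduced here):

```python
# Subset of the app's category_match table (name -> opentdb category id).
category_match = {"General Knowledge": 9, "Computers": 18}


def build_url(num_questions: int, difficulty: str, category: str) -> str:
    """Build the boolean-question API URL exactly as get_data does."""
    return (
        "https://opentdb.com/api.php"
        f"?amount={num_questions}"
        f"&category={category_match[category]}"
        f"&difficulty={difficulty}"
        "&type=boolean"
    )
```

Keeping the URL construction in a plain function makes it checkable without issuing a live `requests.get`.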
diff --git a/spaces/spacerini/miracl-chinese/README.md b/spaces/spacerini/miracl-chinese/README.md
deleted file mode 100644
index 0d41d688870c3402f773d25fcef1caa60a5c0875..0000000000000000000000000000000000000000
--- a/spaces/spacerini/miracl-chinese/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Miracl Search - Chinese
-emoji: 🌍🙌🌏
-colorFrom: red
-colorTo: pink
-sdk: streamlit
-sdk_version: 1.25.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/sparanoid/milky-green-svc/data_utils.py b/spaces/sparanoid/milky-green-svc/data_utils.py
deleted file mode 100644
index f6fbe86c95d5eaa14cbade0336d1843dfe543b75..0000000000000000000000000000000000000000
--- a/spaces/sparanoid/milky-green-svc/data_utils.py
+++ /dev/null
@@ -1,411 +0,0 @@
-import os
-import random
-
-import numpy as np
-import torch
-import torch.utils.data
-from mel_processing import spectrogram_torch
-
-from utils import load_wav_to_torch, load_filepaths_and_text
-
-
-def dropout1d(myarray, ratio=0.5):
- indices = np.random.choice(np.arange(myarray.size), replace=False,
- size=int(myarray.size * ratio))
- myarray[indices] = 0
- return myarray
-
-
-class TextAudioLoader(torch.utils.data.Dataset):
- """
- 1) loads audio, text pairs
- 2) normalizes text and converts them to sequences of integers
- 3) computes spectrograms from audio files.
- """
-
- def __init__(self, audiopaths_and_text, hparams):
- self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text)
- self.text_cleaners = hparams.text_cleaners
- self.max_wav_value = hparams.max_wav_value
- self.sampling_rate = hparams.sampling_rate
- self.filter_length = hparams.filter_length
- self.hop_length = hparams.hop_length
- self.win_length = hparams.win_length
- self.sampling_rate = hparams.sampling_rate
-
- self.cleaned_text = getattr(hparams, "cleaned_text", False)
-
- self.add_blank = hparams.add_blank
- self.min_text_len = getattr(hparams, "min_text_len", 1)
- self.max_text_len = getattr(hparams, "max_text_len", 190)
-
- random.seed(1234)
- random.shuffle(self.audiopaths_and_text)
- self._filter()
-
- def _filter(self):
- """
- Filter text & store spec lengths
- """
- # Store spectrogram lengths for Bucketing
- # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2)
- # spec_length = wav_length // hop_length
- lengths = []
- for audiopath, text, pitch in self.audiopaths_and_text:
- lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length))
- self.lengths = lengths
-
- def get_audio_text_pair(self, audiopath_and_text):
- # separate filename and text
- audiopath, text, pitch = audiopath_and_text[0], audiopath_and_text[1], audiopath_and_text[2]
- text = self.get_text(text)
- spec, wav = self.get_audio(audiopath)
- pitch = self.get_pitch(pitch)
- return text, spec, wav, pitch
-
- def get_pitch(self, pitch):
-
- return torch.LongTensor(np.load(pitch))
-
- def get_audio(self, filename):
- audio, sampling_rate = load_wav_to_torch(filename)
- if sampling_rate != self.sampling_rate:
-            raise ValueError("{} SR doesn't match target {} SR".format(
-                sampling_rate, self.sampling_rate))
- audio_norm = audio / self.max_wav_value
- audio_norm = audio_norm.unsqueeze(0)
- spec_filename = filename.replace(".wav", ".spec.pt")
- if os.path.exists(spec_filename):
- spec = torch.load(spec_filename)
- else:
- spec = spectrogram_torch(audio_norm, self.filter_length,
- self.sampling_rate, self.hop_length, self.win_length,
- center=False)
- spec = torch.squeeze(spec, 0)
- torch.save(spec, spec_filename)
- return spec, audio_norm
-
- def get_text(self, text):
- soft = np.load(text)
- text_norm = torch.FloatTensor(soft)
- return text_norm
-
- def __getitem__(self, index):
- return self.get_audio_text_pair(self.audiopaths_and_text[index])
-
- def __len__(self):
- return len(self.audiopaths_and_text)
-
-
-class TextAudioCollate:
- """ Zero-pads model inputs and targets
- """
-
- def __init__(self, return_ids=False):
- self.return_ids = return_ids
-
- def __call__(self, batch):
-        """Collates a training batch from normalized text and audio
- PARAMS
- ------
- batch: [text_normalized, spec_normalized, wav_normalized]
- """
- # Right zero-pad all one-hot text sequences to max input length
- _, ids_sorted_decreasing = torch.sort(
- torch.LongTensor([x[1].size(1) for x in batch]),
- dim=0, descending=True)
-
- max_text_len = max([len(x[0]) for x in batch])
- max_spec_len = max([x[1].size(1) for x in batch])
- max_wav_len = max([x[2].size(1) for x in batch])
- max_pitch_len = max([x[3].shape[0] for x in batch])
- # print(batch)
-
- text_lengths = torch.LongTensor(len(batch))
- spec_lengths = torch.LongTensor(len(batch))
- wav_lengths = torch.LongTensor(len(batch))
-
- text_padded = torch.FloatTensor(len(batch), max_text_len, 256)
- spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len)
- wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len)
- pitch_padded = torch.LongTensor(len(batch), max_pitch_len)
-
- text_padded.zero_()
- spec_padded.zero_()
- wav_padded.zero_()
- pitch_padded.zero_()
- for i in range(len(ids_sorted_decreasing)):
- row = batch[ids_sorted_decreasing[i]]
-
- text = row[0]
- text_padded[i, :text.size(0), :] = text
- text_lengths[i] = text.size(0)
-
- spec = row[1]
- spec_padded[i, :, :spec.size(1)] = spec
- spec_lengths[i] = spec.size(1)
-
- wav = row[2]
- wav_padded[i, :, :wav.size(1)] = wav
- wav_lengths[i] = wav.size(1)
-
- pitch = row[3]
- pitch_padded[i, :pitch.size(0)] = pitch
-
- if self.return_ids:
- return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, ids_sorted_decreasing, pitch_padded
- return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, pitch_padded
-
-
-"""Multi speaker version"""
-
-
-class TextAudioSpeakerLoader(torch.utils.data.Dataset):
- """
- 1) loads audio, speaker_id, text pairs
- 2) normalizes text and converts them to sequences of integers
- 3) computes spectrograms from audio files.
- """
-
- def __init__(self, audiopaths_sid_text, hparams):
- self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text)
- self.text_cleaners = hparams.text_cleaners
- self.max_wav_value = hparams.max_wav_value
- self.sampling_rate = hparams.sampling_rate
- self.filter_length = hparams.filter_length
- self.hop_length = hparams.hop_length
- self.win_length = hparams.win_length
- self.sampling_rate = hparams.sampling_rate
-
- self.cleaned_text = getattr(hparams, "cleaned_text", False)
-
- self.add_blank = hparams.add_blank
- self.min_text_len = getattr(hparams, "min_text_len", 1)
- self.max_text_len = getattr(hparams, "max_text_len", 190)
-
- random.seed(1234)
- random.shuffle(self.audiopaths_sid_text)
- self._filter()
-
- def _filter(self):
- """
- Filter text & store spec lengths
- """
- # Store spectrogram lengths for Bucketing
- # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2)
- # spec_length = wav_length // hop_length
-
- lengths = []
- for audiopath, sid, text, pitch in self.audiopaths_sid_text:
- lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length))
- self.lengths = lengths
-
- def get_audio_text_speaker_pair(self, audiopath_sid_text):
- # separate filename, speaker_id and text
- audiopath, sid, text, pitch = audiopath_sid_text[0], audiopath_sid_text[1], audiopath_sid_text[2], \
- audiopath_sid_text[3]
- text = self.get_text(text)
- spec, wav = self.get_audio(audiopath)
- sid = self.get_sid(sid)
- pitch = self.get_pitch(pitch)
-
- return text, spec, wav, pitch, sid
-
- def get_audio(self, filename):
- audio, sampling_rate = load_wav_to_torch(filename)
- if sampling_rate != self.sampling_rate:
-            raise ValueError("{} SR doesn't match target {} SR".format(
-                sampling_rate, self.sampling_rate))
- audio_norm = audio / self.max_wav_value
- audio_norm = audio_norm.unsqueeze(0)
- spec_filename = filename.replace(".wav", ".spec.pt")
- if os.path.exists(spec_filename):
- spec = torch.load(spec_filename)
- else:
- spec = spectrogram_torch(audio_norm, self.filter_length,
- self.sampling_rate, self.hop_length, self.win_length,
- center=False)
- spec = torch.squeeze(spec, 0)
- torch.save(spec, spec_filename)
- return spec, audio_norm
-
- def get_text(self, text):
- soft = np.load(text)
- text_norm = torch.FloatTensor(soft)
- return text_norm
-
- def get_pitch(self, pitch):
- return torch.LongTensor(np.load(pitch))
-
- def get_sid(self, sid):
- sid = torch.LongTensor([int(sid)])
- return sid
-
- def __getitem__(self, index):
- return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index])
-
- def __len__(self):
- return len(self.audiopaths_sid_text)
-
-
-class TextAudioSpeakerCollate:
- """ Zero-pads model inputs and targets
- """
-
- def __init__(self, return_ids=False):
- self.return_ids = return_ids
-
- def __call__(self, batch):
-        """Collates a training batch from normalized text, audio and speaker identities
- PARAMS
- ------
- batch: [text_normalized, spec_normalized, wav_normalized, sid]
- """
- # Right zero-pad all one-hot text sequences to max input length
- _, ids_sorted_decreasing = torch.sort(
- torch.LongTensor([x[1].size(1) for x in batch]),
- dim=0, descending=True)
-
- max_text_len = max([len(x[0]) for x in batch])
- max_spec_len = max([x[1].size(1) for x in batch])
- max_wav_len = max([x[2].size(1) for x in batch])
- max_pitch_len = max([x[3].shape[0] for x in batch])
-
- text_lengths = torch.LongTensor(len(batch))
- spec_lengths = torch.LongTensor(len(batch))
- wav_lengths = torch.LongTensor(len(batch))
- sid = torch.LongTensor(len(batch))
-
- text_padded = torch.FloatTensor(len(batch), max_text_len, 256)
- spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len)
- wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len)
- pitch_padded = torch.LongTensor(len(batch), max_pitch_len)
-
- text_padded.zero_()
- spec_padded.zero_()
- wav_padded.zero_()
- pitch_padded.zero_()
-
- for i in range(len(ids_sorted_decreasing)):
- row = batch[ids_sorted_decreasing[i]]
-
- text = row[0]
- text_padded[i, :text.size(0)] = text
- text_lengths[i] = text.size(0)
-
- spec = row[1]
- spec_padded[i, :, :spec.size(1)] = spec
- spec_lengths[i] = spec.size(1)
-
- wav = row[2]
- wav_padded[i, :, :wav.size(1)] = wav
- wav_lengths[i] = wav.size(1)
-
- pitch = row[3]
- pitch_padded[i, :pitch.size(0)] = pitch
-
- sid[i] = row[4]
-
- if self.return_ids:
- return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, pitch_padded, sid, ids_sorted_decreasing
- return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, pitch_padded, sid
-
-
-class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler):
- """
- Maintain similar input lengths in a batch.
- Length groups are specified by boundaries.
- Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}.
-
- It removes samples which are not included in the boundaries.
- Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded.
- """
-
- def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True):
- super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)
- self.lengths = dataset.lengths
- self.batch_size = batch_size
- self.boundaries = boundaries
-
- self.buckets, self.num_samples_per_bucket = self._create_buckets()
- self.total_size = sum(self.num_samples_per_bucket)
- self.num_samples = self.total_size // self.num_replicas
-
- def _create_buckets(self):
- buckets = [[] for _ in range(len(self.boundaries) - 1)]
- for i in range(len(self.lengths)):
- length = self.lengths[i]
- idx_bucket = self._bisect(length)
- if idx_bucket != -1:
- buckets[idx_bucket].append(i)
-
- for i in range(len(buckets) - 1, 0, -1):
- if len(buckets[i]) == 0:
- buckets.pop(i)
- self.boundaries.pop(i + 1)
-
- num_samples_per_bucket = []
- for i in range(len(buckets)):
- len_bucket = len(buckets[i])
- total_batch_size = self.num_replicas * self.batch_size
- rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size
- num_samples_per_bucket.append(len_bucket + rem)
- return buckets, num_samples_per_bucket
-
- def __iter__(self):
- # deterministically shuffle based on epoch
- g = torch.Generator()
- g.manual_seed(self.epoch)
-
- indices = []
- if self.shuffle:
- for bucket in self.buckets:
- indices.append(torch.randperm(len(bucket), generator=g).tolist())
- else:
- for bucket in self.buckets:
- indices.append(list(range(len(bucket))))
-
- batches = []
- for i in range(len(self.buckets)):
- bucket = self.buckets[i]
- len_bucket = len(bucket)
- ids_bucket = indices[i]
- num_samples_bucket = self.num_samples_per_bucket[i]
-
- # add extra samples to make it evenly divisible
- rem = num_samples_bucket - len_bucket
- ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)]
-
- # subsample
- ids_bucket = ids_bucket[self.rank::self.num_replicas]
-
- # batching
- for j in range(len(ids_bucket) // self.batch_size):
- batch = [bucket[idx] for idx in ids_bucket[j * self.batch_size:(j + 1) * self.batch_size]]
- batches.append(batch)
-
- if self.shuffle:
- batch_ids = torch.randperm(len(batches), generator=g).tolist()
- batches = [batches[i] for i in batch_ids]
- self.batches = batches
-
- assert len(self.batches) * self.batch_size == self.num_samples
- return iter(self.batches)
-
- def _bisect(self, x, lo=0, hi=None):
- if hi is None:
- hi = len(self.boundaries) - 1
-
- if hi > lo:
- mid = (hi + lo) // 2
- if self.boundaries[mid] < x <= self.boundaries[mid + 1]:
- return mid
- elif x <= self.boundaries[mid]:
- return self._bisect(x, lo, mid)
- else:
- return self._bisect(x, mid + 1, hi)
- else:
- return -1
-
- def __len__(self):
- return self.num_samples // self.batch_size
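`DistributedBucketSampler` above assigns each utterance to a length bucket with a recursive binary search over the boundary list; samples outside every bucket get index -1 and are discarded. An iterative sketch of the same lookup, kept free of the sampler state so the boundary semantics (`boundaries[i] < length <= boundaries[i+1]`) can be exercised directly:

```python
def assign_bucket(length: int, boundaries: list) -> int:
    """Return i such that boundaries[i] < length <= boundaries[i + 1],
    or -1 if the length falls outside every bucket.

    Iterative counterpart of DistributedBucketSampler._bisect.
    """
    lo, hi = 0, len(boundaries) - 1
    while hi > lo:
        mid = (hi + lo) // 2
        if boundaries[mid] < length <= boundaries[mid + 1]:
            return mid
        elif length <= boundaries[mid]:
            hi = mid
        else:
            lo = mid + 1
    return -1
```

With `boundaries = [32, 300, 500]`, lengths 100 and 400 land in buckets 0 and 1, while 10 and 900 fall outside the range and would be dropped by the sampler.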
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/tests/test_multi_corpus_dataset.py b/spaces/sriramelango/Social_Classification_Public/fairseq/tests/test_multi_corpus_dataset.py
deleted file mode 100644
index 5a79f4b680e5bc2c7374ec6dd8ea525c47b40985..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/tests/test_multi_corpus_dataset.py
+++ /dev/null
@@ -1,79 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import unittest
-from collections import OrderedDict
-
-import torch
-from fairseq.data import LanguagePairDataset, TokenBlockDataset
-from fairseq.data.multi_corpus_dataset import MultiCorpusDataset
-from tests.test_train import mock_dict
-
-
-class TestMultiCorpusDataset(unittest.TestCase):
- def setUp(self):
- d = mock_dict()
- tokens_1 = torch.LongTensor([i for i in range(1, 5000, 2)]).view(1, -1)
- tokens_ds1 = TokenBlockDataset(
- tokens_1,
- sizes=[tokens_1.size(-1)],
- block_size=1,
- pad=0,
- eos=1,
- include_targets=False,
- )
- self.dataset_1 = LanguagePairDataset(
- tokens_ds1, tokens_ds1.sizes, d, shuffle=False
- )
- tokens_2 = torch.LongTensor([i for i in range(0, 5000, 2)]).view(1, -1)
- tokens_ds2 = TokenBlockDataset(
- tokens_2,
- sizes=[tokens_2.size(-1)],
- block_size=1,
- pad=0,
- eos=1,
- include_targets=False,
- )
- self.dataset_2 = LanguagePairDataset(
- tokens_ds2, tokens_ds2.sizes, d, shuffle=False
- )
-
- def _test_sample_helper(
- self,
- distribution,
- ):
- m = MultiCorpusDataset(
- OrderedDict({0: self.dataset_1, 1: self.dataset_2}),
- distribution=distribution,
- seed=0,
- sort_indices=True,
- )
- m.set_epoch(1)
- indices = m.ordered_indices()
- count_sample_from_first_dataset = 0
- items = set()
- for i in indices:
- item = m[i]["source"].item()
- if item % 2 == 1:
- count_sample_from_first_dataset += 1
-
- items.add(item)
- sample_from_first_ds_percentage = (
- 1.0 * count_sample_from_first_dataset / len(indices)
- )
- self.assertLess(
- abs(sample_from_first_ds_percentage - distribution[0]),
- 0.01,
- )
- self.assertEqual(
- len(items),
- int(min(len(self.dataset_1), len(indices) * distribution[0])
-                + min(len(self.dataset_2), len(indices) * distribution[1]))
- )
- print(distribution)
-
- def test_multi_corpus_dataset(self):
- for distribution in [[0.5, 0.5], [0.1, 0.9], [0.9, 0.1]]:
- self._test_sample_helper(distribution=distribution)
diff --git a/spaces/stomexserde/gpt4-ui/Examples/3Cx 11 Crack Keygen Pes.md b/spaces/stomexserde/gpt4-ui/Examples/3Cx 11 Crack Keygen Pes.md
deleted file mode 100644
index 502d1ab29e2ce71311db34784695f71d333b7148..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/3Cx 11 Crack Keygen Pes.md
+++ /dev/null
@@ -1,154 +0,0 @@
-
-3CX 11 Crack Keygen PES: What You Need to Know
-Introduction
-3Cx 11 Crack Keygen Pes
- What is 3CX 11 and what are its features and benefits?
-
-
-
-
- What is PES 2022 and what are its features and gameplay?
-
-
What is a crack keygen and how does it work?
-How to Download and Install 3CX 11 Crack Keygen PES
-Where to find a reliable source for the crack keygen
-
-
-How to run the crack keygen and generate a license key
-
-
- How to activate 3CX 11 and PES 2022 with the license key
-
-
- How to Use 3CX 11 and PES 2022 with the Crack Keygen
-How to access the video conferencing, live chat, and SMS features of 3CX 11
-
-
- How to customize your team, play online matches, and enjoy the realistic graphics of PES 2022
-
-
- Pros and Cons of Using 3CX 11 Crack Keygen PES
-The advantages of using the crack keygen
-
-
- The disadvantages of using the crack keygen
-
-
- Conclusion
-FAQs
-What is a crack keygen and how does it work?
-Is it legal and safe to use a crack keygen for 3CX 11 and PES 2022?
-What are the system requirements for running 3CX 11 and PES 2022 with the crack keygen?
-
-
-
-Product Minimum Requirements Recommended Requirements
-3CX 11
-
-
-PES 2022
-
-
How can I update 3CX 11 and PES 2022 with the latest patches and data?
-
-
- Where can I find more information and support for 3CX 11 and PES 2022?
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Autodata 3.40 Full Version !!TOP!!.md b/spaces/stomexserde/gpt4-ui/Examples/Autodata 3.40 Full Version !!TOP!!.md
deleted file mode 100644
index 8ebc66ff938d1301c74c6f27e6027843616f60d1..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Autodata 3.40 Full Version !!TOP!!.md
+++ /dev/null
@@ -1,161 +0,0 @@
-
-Autodata 3.40 Full Version: A Comprehensive Guide for Automotive Professionals
- Autodata 3.40 Full Version
- What is Autodata 3.40 Full Version?
- Features and benefits of Autodata 3.40 Full Version
-
-
- System requirements and installation of Autodata 3.40 Full Version
-
-
-
-
- How to use Autodata 3.40 Full Version?
- How to access technical data and diagrams
-
-
- How to perform service and repair procedures
-
-
- How to troubleshoot common problems and faults
-
-
- How to change Autodata 3.40 Full Version language?
- How to download and install the English language pack
-
-
- How to switch between languages in Autodata 3.40 Full Version
-
-
- Where to download Autodata 3.40 Full Version?
- How to download Autodata 3.40 Full Version from the Internet Archive
-
-
- How to download Autodata 3.40 Full Version from MOTORCARSOFT.COM
-
-
- How to download Autodata 3.40 Full Version from Lionn Auto Softwares
-
-
- Conclusion
- FAQs
- Q: Is Autodata 3.40 Full Version legal?
- Q: Is Autodata 3.40 Full Version safe?
- Q: How to update Autodata 3.40 Full Version?
- Q: How to uninstall Autodata 3.40 Full Version?
-
-
- Q: How to contact Autodata Limited?
-
-
- Q: How to get more automotive software for free?
-
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Download Game Plants Vs Zombies 2 Cho Pc Win 7.md b/spaces/stomexserde/gpt4-ui/Examples/Download Game Plants Vs Zombies 2 Cho Pc Win 7.md
deleted file mode 100644
index 4c92ec1570d6cc9e74af584b761e7825050a8fdd..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Download Game Plants Vs Zombies 2 Cho Pc Win 7.md
+++ /dev/null
@@ -1,52 +0,0 @@
-
-How to Download Game Plants Vs Zombies 2 Cho Pc Win 7
-What is BlueStacks?
-Download Game Plants Vs Zombies 2 Cho Pc Win 7
-How to Download and Install BlueStacks?
-
-
-How to Download and Install Plants Vs Zombies 2?
-
-
-What are the Benefits of Playing Plants Vs Zombies 2 on PC?
-
-
-Conclusion
-What is Plants Vs Zombies 2?
-How to Play Plants Vs Zombies 2?
-What are the Tips and Tricks for Plants Vs Zombies 2?
-
-
- e93f5a0c3f
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Fix Microsoft Security Essential For Free Speed Up Slow Running Computer By Removing Microsoft Secur PATCHED.md b/spaces/stomexserde/gpt4-ui/Examples/Fix Microsoft Security Essential For Free Speed Up Slow Running Computer By Removing Microsoft Secur PATCHED.md
deleted file mode 100644
index 4951854ece1517bd403fbbb521a67be2e2afd52b..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Fix Microsoft Security Essential For Free Speed Up Slow Running Computer By Removing Microsoft Secur PATCHED.md
+++ /dev/null
@@ -1,24 +0,0 @@
-
-How to Fix Microsoft Security Essential and Speed Up Your Slow Running Computer
-Fix Microsoft Security Essential For Free Speed Up Slow Running Computer By Removing Microsoft Secur
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Jai Hanuman Tv Serial Song Mangal Ko Janme Mangal Hi Karte Mangal Mai Hanuman Mp3 Songs Reviews Rapi HOT.md b/spaces/stomexserde/gpt4-ui/Examples/Jai Hanuman Tv Serial Song Mangal Ko Janme Mangal Hi Karte Mangal Mai Hanuman Mp3 Songs Reviews Rapi HOT.md
deleted file mode 100644
index c263a180c1b7eb50f1c428d59033524cca72347a..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Jai Hanuman Tv Serial Song Mangal Ko Janme Mangal Hi Karte Mangal Mai Hanuman Mp3 Songs Reviews Rapi HOT.md
+++ /dev/null
@@ -1,15 +0,0 @@
-
-Here is a possible title and article with html formatting for the keyword "jai hanuman tv serial song mangal ko janme mangal hi karte mangal mai hanuman mp3 songs reviews rapi":
-
-Jai Hanuman: A Review of the Title Song "Mangal Ko Janme Mangal Hi Karte Mangal Mai Hanuman"
-jai hanuman tv serial song mangal ko janme mangal hi karte mangal mai hanuman mp3 songs reviews rapi
-
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Jira 6 Keygen.md b/spaces/stomexserde/gpt4-ui/Examples/Jira 6 Keygen.md
deleted file mode 100644
index 2c0f280dba75292ff7927b6f9ffb0c3ea9cdc001..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Jira 6 Keygen.md
+++ /dev/null
@@ -1,25 +0,0 @@
-
-Why You Should Upgrade to Jira 6
-Jira 6 Keygen
-New Look and Feel
-Improved Performance
-Enhanced Agile Experience
-More Flexibility and Customization
-How to Upgrade to Jira 6
-What is Jira 6?
-Who can use Jira 6?
-How to get started with Jira 6?
-
-
-
\ No newline at end of file
diff --git a/spaces/strickvl/fastai_redaction_classifier/README.md b/spaces/strickvl/fastai_redaction_classifier/README.md
deleted file mode 100644
index c75270f90b09b447977d989025e9a02857d02d88..0000000000000000000000000000000000000000
--- a/spaces/strickvl/fastai_redaction_classifier/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Redacted Image Classifier
-emoji: 🩻
-colorFrom: yellow
-colorTo: pink
-sdk: gradio
-sdk_version: 2.9.4
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at
-https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/sub314xxl/MetaGPT/tests/metagpt/actions/test_write_code.py b/spaces/sub314xxl/MetaGPT/tests/metagpt/actions/test_write_code.py
deleted file mode 100644
index d53e3724344ffdd3a8f91b8a9f427ed8c83ffcc4..0000000000000000000000000000000000000000
--- a/spaces/sub314xxl/MetaGPT/tests/metagpt/actions/test_write_code.py
+++ /dev/null
@@ -1,40 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-"""
-@Time : 2023/5/11 17:45
-@Author : alexanderwu
-@File : test_write_code.py
-@Modified By: mashenquan, 2023-8-1, fix-bug: `filename` of `write_code.run()` is missing.
-@Modified By: mashenquan, 2023/8/20. Remove global configuration `CONFIG`, enable configuration support for business isolation.
-"""
-import pytest
-
-from metagpt.config import Config
-from metagpt.provider.openai_api import OpenAIGPTAPI as LLM, CostManager
-from metagpt.actions.write_code import WriteCode
-from metagpt.logs import logger
-from tests.metagpt.actions.mock import TASKS_2, WRITE_CODE_PROMPT_SAMPLE
-
-
-@pytest.mark.asyncio
-async def test_write_code():
- api_design = "Design a function named 'add' that takes two integers as input and returns their sum."
- conf = Config()
- cost_manager = CostManager(**conf.runtime_options)
- llm = LLM(options=conf.runtime_options, cost_manager=cost_manager)
- write_code = WriteCode(options=conf.runtime_options, name="write_code", llm=llm)
- code = await write_code.run(context=api_design, filename="test")
- logger.info(code)
-
- # We cannot predict the generated code exactly, but we can check for certain keywords
- assert 'def add' in code
- assert 'return' in code
-
-
-@pytest.mark.asyncio
-async def test_write_code_directly():
- prompt = WRITE_CODE_PROMPT_SAMPLE + '\n' + TASKS_2[0]
- options = Config().runtime_options
- llm = LLM(options=options, cost_manager=CostManager(**options))
- rsp = await llm.aask(prompt)
- logger.info(rsp)
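The keyword assertions in `test_write_code` above are a loose smoke check. As a standalone sketch (independent of metagpt; the helper name is ours), generated code can additionally be verified to parse before checking for the expected function:

```python
import ast

def looks_like_add_function(code: str) -> bool:
    """Return True if `code` parses and defines a function named 'add'."""
    try:
        tree = ast.parse(code)
    except SyntaxError:
        return False
    # Walk the AST looking for a top-level or nested function definition named 'add'.
    return any(isinstance(node, ast.FunctionDef) and node.name == "add"
               for node in ast.walk(tree))
```

Such a check fails fast on truncated or syntactically broken LLM output, whereas a plain substring test would not.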
diff --git a/spaces/sub314xxl/MusicGen-Continuation/audiocraft/modules/codebooks_patterns.py b/spaces/sub314xxl/MusicGen-Continuation/audiocraft/modules/codebooks_patterns.py
deleted file mode 100644
index c5b35cbea8cff84aa56116dbdd860fc72a913a13..0000000000000000000000000000000000000000
--- a/spaces/sub314xxl/MusicGen-Continuation/audiocraft/modules/codebooks_patterns.py
+++ /dev/null
@@ -1,539 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from collections import namedtuple
-from dataclasses import dataclass
-from functools import lru_cache
-import logging
-import typing as tp
-
-from abc import ABC, abstractmethod
-import torch
-
-LayoutCoord = namedtuple('LayoutCoord', ['t', 'q']) # (timestep, codebook index)
-PatternLayout = tp.List[tp.List[LayoutCoord]] # Sequence of coordinates
-logger = logging.getLogger(__name__)
-
-
-@dataclass
-class Pattern:
- """Base implementation of a pattern over a sequence with multiple codebooks.
-
- The codebook pattern consists of a layout, defining for each sequence step
- the list of coordinates of each codebook timestep in the resulting interleaved sequence.
- The first item of the pattern is always an empty list in order to properly insert a special token
- to start with. For convenience, we also keep track of ``n_q`` the number of codebooks used for the pattern
- and ``timesteps`` the number of timesteps corresponding to the original sequence.
-
- The pattern provides convenient methods to build and revert interleaved sequences from it:
- ``build_pattern_sequence`` maps a dense input tensor of a multi-codebook sequence from [B, K, T]
- to the interleaved sequence of shape [B, K, S] by applying the pattern, with B being the batch size,
- K the number of codebooks, T the number of original timesteps, and S the number of sequence steps
- for the output sequence. The unfilled positions are replaced with a special token and the built sequence
- is returned along with a mask indicating valid tokens.
- ``revert_pattern_sequence`` maps back an interleaved sequence of shape [B, K, S] to the original alignment
- of codebooks across timesteps to an output tensor of shape [B, K, T], using again a special token and a mask
- to fill and specify invalid positions if needed.
- See the dedicated methods for more details.
- """
- # Pattern layout, for each sequence step, we have a list of coordinates
- # corresponding to the original codebook timestep and position.
- # The first list is always an empty list in order to properly insert
- # a special token to start with.
- layout: PatternLayout
- timesteps: int
- n_q: int
-
- def __post_init__(self):
- assert len(self.layout) > 0
- assert self.layout[0] == []
- self._validate_layout()
- self._build_reverted_sequence_scatter_indexes = lru_cache(100)(self._build_reverted_sequence_scatter_indexes)
- self._build_pattern_sequence_scatter_indexes = lru_cache(100)(self._build_pattern_sequence_scatter_indexes)
- logger.info("New pattern, time steps: %d, sequence steps: %d", self.timesteps, len(self.layout))
-
- def _validate_layout(self):
- """Runs checks on the layout to ensure a valid pattern is defined.
- A pattern is considered invalid if:
- - Multiple timesteps for the same codebook are defined in the same sequence step
- - The timesteps for a given codebook are not in ascending order as we advance in the sequence
- (this would mean that we have future timesteps before past timesteps).
- """
- q_timesteps = {q: 0 for q in range(self.n_q)}
- for s, seq_coords in enumerate(self.layout):
- if len(seq_coords) > 0:
- qs = set()
- for coord in seq_coords:
- qs.add(coord.q)
- last_q_timestep = q_timesteps[coord.q]
- assert coord.t >= last_q_timestep, \
- f"Past timesteps are found in the sequence for codebook = {coord.q} at step {s}"
- q_timesteps[coord.q] = coord.t
- # each sequence step contains at max 1 coordinate per codebook
- assert len(qs) == len(seq_coords), \
- f"Multiple entries for the same codebook are found at step {s}"
-
- @property
- def num_sequence_steps(self):
- return len(self.layout) - 1
-
- @property
- def max_delay(self):
- max_t_in_seq_coords = 0
- for seq_coords in self.layout[1:]:
- for coords in seq_coords:
- max_t_in_seq_coords = max(max_t_in_seq_coords, coords.t + 1)
- return max_t_in_seq_coords - self.timesteps
-
- @property
- def valid_layout(self):
- valid_step = len(self.layout) - self.max_delay
- return self.layout[:valid_step]
-
- def get_sequence_coords_with_timestep(self, t: int, q: tp.Optional[int] = None):
- """Get codebook coordinates in the layout that corresponds to the specified timestep t
- and optionally to the codebook q. Coordinates are returned as a tuple with the sequence step
- and the actual codebook coordinates.
- """
- assert t <= self.timesteps, "provided timestep is greater than the pattern's number of timesteps"
- if q is not None:
- assert q <= self.n_q, "provided number of codebooks is greater than the pattern's number of codebooks"
- coords = []
- for s, seq_codes in enumerate(self.layout):
- for code in seq_codes:
- if code.t == t and (q is None or code.q == q):
- coords.append((s, code))
- return coords
-
- def get_steps_with_timestep(self, t: int, q: tp.Optional[int] = None) -> tp.List[int]:
- return [step for step, coords in self.get_sequence_coords_with_timestep(t, q)]
-
- def get_first_step_with_timesteps(self, t: int, q: tp.Optional[int] = None) -> tp.Optional[int]:
- steps_with_timesteps = self.get_steps_with_timestep(t, q)
- return steps_with_timesteps[0] if len(steps_with_timesteps) > 0 else None
-
- def _build_pattern_sequence_scatter_indexes(self, timesteps: int, n_q: int, keep_only_valid_steps: bool,
- device: tp.Union[torch.device, str] = 'cpu'):
- """Build scatter indexes corresponding to the pattern, up to the provided sequence_steps.
-
- Args:
- timesteps (int): Maximum number of timesteps to consider.
- n_q (int): Number of codebooks; must match the pattern's number of codebooks.
- keep_only_valid_steps (bool): Restrict the pattern layout to match only valid steps.
- device (Union[torch.device, str]): Device for created tensors.
- Returns:
- indexes (torch.Tensor): Indexes corresponding to the sequence, of shape [K, S].
- mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes, of shape [K, S].
- """
- assert n_q == self.n_q, f"invalid number of codebooks for the sequence and the pattern: {n_q} != {self.n_q}"
- assert timesteps <= self.timesteps, "invalid number of timesteps used to build the sequence from the pattern"
- # use the proper layout based on whether we limit ourselves to valid steps only or not,
- # note that using the valid_layout will result in a truncated sequence up to the valid steps
- ref_layout = self.valid_layout if keep_only_valid_steps else self.layout
- # single item indexing being super slow with pytorch vs. numpy, so we use numpy here
- indexes = torch.zeros(n_q, len(ref_layout), dtype=torch.long).numpy()
- mask = torch.zeros(n_q, len(ref_layout), dtype=torch.bool).numpy()
- # fill indexes with last sequence step value that will correspond to our special token
- # the last value is n_q * timesteps as we have flattened z and append special token as the last token
- # which will correspond to the index: n_q * timesteps
- indexes[:] = n_q * timesteps
- # iterate over the pattern and fill scattered indexes and mask
- for s, sequence_coords in enumerate(ref_layout):
- for coords in sequence_coords:
- if coords.t < timesteps:
- indexes[coords.q, s] = coords.t + coords.q * timesteps
- mask[coords.q, s] = 1
- indexes = torch.from_numpy(indexes).to(device)
- mask = torch.from_numpy(mask).to(device)
- return indexes, mask
-
- def build_pattern_sequence(self, z: torch.Tensor, special_token: int, keep_only_valid_steps: bool = False):
- """Build sequence corresponding to the pattern from the input tensor z.
- The sequence is built using up to sequence_steps if specified, and non-pattern
- coordinates are filled with the special token.
-
- Args:
- z (torch.Tensor): Input tensor of multi-codebooks sequence, of shape [B, K, T].
- special_token (int): Special token used to fill non-pattern coordinates in the new sequence.
- keep_only_valid_steps (bool): Build a sequence from the pattern up to valid (= fully defined) steps.
- Steps that are beyond valid steps will be replaced by the special_token in that case.
- Returns:
- values (torch.Tensor): Interleaved sequence matching the pattern, of shape [B, K, S] with S
- corresponding either to the sequence_steps if provided, otherwise to the length of the pattern.
- indexes (torch.Tensor): Indexes corresponding to the interleaved sequence, of shape [K, S].
- mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, S].
- """
- B, K, T = z.shape
- indexes, mask = self._build_pattern_sequence_scatter_indexes(
- T, K, keep_only_valid_steps=keep_only_valid_steps, device=str(z.device)
- )
- z = z.view(B, -1)
- # we append the special token as the last index of our flattened z tensor
- z = torch.cat([z, torch.zeros_like(z[:, :1]) + special_token], dim=1)
- values = z[:, indexes.view(-1)]
- values = values.view(B, K, indexes.shape[-1])
- return values, indexes, mask
-
- def _build_reverted_sequence_scatter_indexes(self, sequence_steps: int, n_q: int,
- keep_only_valid_steps: bool = False,
- is_model_output: bool = False,
- device: tp.Union[torch.device, str] = 'cpu'):
- """Builds scatter indexes required to retrieve the original multi-codebook sequence
- from interleaving pattern.
-
- Args:
- sequence_steps (int): Sequence steps.
- n_q (int): Number of codebooks.
- keep_only_valid_steps (bool): Build a sequence from the pattern up to valid (= fully defined) steps.
- Steps that are beyond valid steps will be replaced by the special_token in that case.
- is_model_output (bool): Whether to keep the sequence item corresponding to initial special token or not.
- device (Union[torch.device, str]): Device for created tensors.
- Returns:
- torch.Tensor: Indexes for reconstructing the output, of shape [K, T].
- mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, T].
- """
- ref_layout = self.valid_layout if keep_only_valid_steps else self.layout
- # TODO(jade): Do we want to further truncate to only valid timesteps here as well?
- timesteps = self.timesteps
- assert n_q == self.n_q, f"invalid number of codebooks for the sequence and the pattern: {n_q} != {self.n_q}"
- assert sequence_steps <= len(ref_layout), \
- f"sequence to revert is longer than the defined pattern: {sequence_steps} > {len(ref_layout)}"
-
- # ensure we take the appropriate indexes to keep the model output from the first special token as well
- if is_model_output:
- ref_layout = ref_layout[1:]
-
- # single item indexing being super slow with pytorch vs. numpy, so we use numpy here
- indexes = torch.zeros(n_q, timesteps, dtype=torch.long).numpy()
- mask = torch.zeros(n_q, timesteps, dtype=torch.bool).numpy()
- # fill indexes with last sequence step value that will correspond to our special token
- indexes[:] = n_q * sequence_steps
- for s, sequence_codes in enumerate(ref_layout):
- if s < sequence_steps:
- for code in sequence_codes:
- if code.t < timesteps:
- indexes[code.q, code.t] = s + code.q * sequence_steps
- mask[code.q, code.t] = 1
- indexes = torch.from_numpy(indexes).to(device)
- mask = torch.from_numpy(mask).to(device)
- return indexes, mask
-
- def revert_pattern_sequence(self, s: torch.Tensor, special_token: int, keep_only_valid_steps: bool = False):
- """Revert a sequence built from the pattern back to the original multi-codebook sequence without interleaving.
- The sequence is reverted using up to timesteps if specified, and non-pattern coordinates
- are filled with the special token.
-
- Args:
- s (torch.Tensor): Interleaved sequence tensor obtained from the pattern, of shape [B, K, S].
- special_token (int or float): Special token used to fill non-pattern coordinates in the new sequence.
- Returns:
- values (torch.Tensor): Interleaved sequence matching the pattern, of shape [B, K, T] with T
- corresponding either to the timesteps if provided, or the total timesteps in pattern otherwise.
- indexes (torch.Tensor): Indexes corresponding to the interleaved sequence, of shape [K, T].
- mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, T].
- """
- B, K, S = s.shape
- indexes, mask = self._build_reverted_sequence_scatter_indexes(
- S, K, keep_only_valid_steps, is_model_output=False, device=str(s.device)
- )
- s = s.view(B, -1)
- # we append the special token as the last index of our flattened z tensor
- s = torch.cat([s, torch.zeros_like(s[:, :1]) + special_token], dim=1)
- values = s[:, indexes.view(-1)]
- values = values.view(B, K, indexes.shape[-1])
- return values, indexes, mask
-
- def revert_pattern_logits(self, logits: torch.Tensor, special_token: float, keep_only_valid_steps: bool = False):
- """Revert model logits obtained on a sequence built from the pattern
- back to a tensor matching the original sequence.
-
- This method is similar to ``revert_pattern_sequence`` with the following specificities:
- 1. It is designed to work with the extra cardinality dimension
- 2. We return the logits for the first sequence item that matches the special_token and
- which matching target in the original sequence is the first item of the sequence,
- while we skip the last logits as there is no matching target
- """
- B, card, K, S = logits.shape
- indexes, mask = self._build_reverted_sequence_scatter_indexes(
- S, K, keep_only_valid_steps, is_model_output=True, device=logits.device
- )
- logits = logits.reshape(B, card, -1)
- # we append the special token as the last index of our flattened z tensor
- logits = torch.cat([logits, torch.zeros_like(logits[:, :, :1]) + special_token], dim=-1) # [B, card, K x S]
- values = logits[:, :, indexes.view(-1)]
- values = values.view(B, card, K, indexes.shape[-1])
- return values, indexes, mask
-
-
-class CodebooksPatternProvider(ABC):
- """Abstraction around providing pattern for interleaving codebooks.
-
- The CodebooksPatternProvider abstraction makes it possible to implement various strategies
- for defining the interleaving pattern of sequences composed of multiple codebooks. For a given
- number of codebooks `n_q`, the pattern provider can generate a specified pattern
- corresponding to a sequence of `T` timesteps with `n_q` parallel codebooks. This pattern
- can be used to construct a new sequence from the original codes respecting the specified
- pattern. The pattern is defined as a list of lists of code coordinates, each code coordinate
- being a tuple of the original timestep and the codebook used to build the new sequence.
- Note that all patterns must start with an empty list, which is used to insert a first
- sequence step of special tokens in the newly generated sequence.
-
- Args:
- n_q (int): number of codebooks.
- cached (bool): if True, patterns for a given length are cached. In general
- that should be true for efficiency reasons, to avoid synchronization points.
- """
- def __init__(self, n_q: int, cached: bool = True):
- assert n_q > 0
- self.n_q = n_q
- if cached:
-     self.get_pattern = lru_cache(100)(self.get_pattern)  # type: ignore
-
- @abstractmethod
- def get_pattern(self, timesteps: int) -> Pattern:
- """Builds pattern with specific interleaving between codebooks.
-
- Args:
- timesteps (int): Total number of timesteps.
- """
- raise NotImplementedError()
-
-
-class DelayedPatternProvider(CodebooksPatternProvider):
- """Provider for a delayed pattern across codebooks.
- Codebooks are delayed in the sequence and sequence steps will contain codebooks
- from different timesteps.
-
- Example:
- Taking timesteps=4 and n_q=3, delays=None, the multi-codebook sequence:
- [[1, 2, 3, 4],
- [1, 2, 3, 4],
- [1, 2, 3, 4]]
- The resulting sequence obtained from the returned pattern is:
- [[S, 1, 2, 3, 4],
- [S, S, 1, 2, 3],
- [S, S, S, 1, 2]]
- (with S being a special token)
-
- Args:
- n_q (int): Number of codebooks.
- delays (Optional[List[int]]): Delay for each of the codebooks.
- If delays not defined, each codebook is delayed by 1 compared to the previous one.
- flatten_first (int): Flatten the first N timesteps.
- empty_initial (int): Prepend N empty lists of coordinates.
- """
- def __init__(self, n_q: int, delays: tp.Optional[tp.List[int]] = None,
- flatten_first: int = 0, empty_initial: int = 0):
- super().__init__(n_q)
- if delays is None:
- delays = list(range(n_q))
- self.delays = delays
- self.flatten_first = flatten_first
- self.empty_initial = empty_initial
- assert len(self.delays) == self.n_q
- assert sorted(self.delays) == self.delays
-
- def get_pattern(self, timesteps: int) -> Pattern:
- out: PatternLayout = [[]]
- max_delay = max(self.delays)
- if self.empty_initial:
- out += [[] for _ in range(self.empty_initial)]
- if self.flatten_first:
- for t in range(min(timesteps, self.flatten_first)):
- for q in range(self.n_q):
- out.append([LayoutCoord(t, q)])
- for t in range(self.flatten_first, timesteps + max_delay):
- v = []
- for q, delay in enumerate(self.delays):
- t_for_q = t - delay
- if t_for_q >= self.flatten_first:
- v.append(LayoutCoord(t_for_q, q))
- out.append(v)
- return Pattern(out, n_q=self.n_q, timesteps=timesteps)
-
-
-class ParallelPatternProvider(DelayedPatternProvider):
- """Provider for parallel pattern across codebooks.
- This pattern provider is a special case of the delayed pattern with no delay at all,
- hence delays = [0] * n_q.
-
- Args:
- n_q (int): Number of codebooks.
- """
- def __init__(self, n_q: int):
- super().__init__(n_q, [0] * n_q)
-
-
-class UnrolledPatternProvider(CodebooksPatternProvider):
- """Provider for unrolling codebooks pattern.
- This pattern provider enables to represent the codebook flattened completely or only to some extend
- while also specifying a given delay between the flattened codebooks representation, allowing to
- unroll the codebooks in the sequence.
-
- Example:
- 1. Flattening of the codebooks.
- By default, the pattern provider will fully flatten the codebooks such as flattening=range(n_q),
- taking n_q = 3 and timesteps = 4:
- [[1, 2, 3, 4],
- [1, 2, 3, 4],
- [1, 2, 3, 4]]
- will result into:
- [[S, S, 1, S, S, 2, S, S, 3, S, S, 4],
- [S, 1, S, S, 2, S, S, 3, S, S, 4, S],
- [1, S, S, 2, S, S, 3, S, S, 4, S, S]]
- 2. Partial flattening of the codebooks. The ``flattening`` parameter specifies the inner step
- for each codebook, defining which codebooks to flatten (or keep in parallel), for example
- taking n_q = 3, timesteps = 4 and flattening = [0, 1, 1]:
- [[1, 2, 3, 4],
- [1, 2, 3, 4],
- [1, 2, 3, 4]]
- will result into:
- [[S, 1, S, S, 2, S, S, 3, S, S, 4, S],
- [S, 1, S, S, 2, S, S, 3, S, S, 4, S],
- [1, S, S, 2, S, S, 3, S, S, 4, S, S]]
- 3. Flattening with delay. The ``delay`` parameter makes it possible to further unroll the sequence
- of codebooks by specifying a delay per codebook. Note that the delay between codebooks flattened to the
- same inner timestep should be coherent. For example, taking n_q = 3, timesteps = 4, flattening = [0, 1, 1]
- and delays = [0, 3, 3]:
- [[1, 2, 3, 4],
- [1, 2, 3, 4],
- [1, 2, 3, 4]]
- will result into:
- [[S, S, S, 1, S, 2, S, 3, S, 4],
- [S, S, S, 1, S, 2, S, 3, S, 4],
- [1, 2, 3, S, 4, S, 5, S, 6, S]]
-
- Args:
- n_q (int): Number of codebooks.
- flattening (Optional[List[int]]): Flattening schema over the codebooks. If not defined,
- the codebooks will be flattened to 1 codebook per step, meaning that the sequence will
- have n_q extra steps for each timestep.
- delays (Optional[List[int]]): Delay for each of the codebooks. If not defined,
- no delay is added and therefore will default to [0] * ``n_q``.
- Note that two codebooks that will be flattened to the same inner step
- should have the same delay, otherwise the pattern is considered as invalid.
- """
- FlattenedCodebook = namedtuple('FlattenedCodebook', ['codebooks', 'delay'])
-
- def __init__(self, n_q: int, flattening: tp.Optional[tp.List[int]] = None,
- delays: tp.Optional[tp.List[int]] = None):
- super().__init__(n_q)
- if flattening is None:
- flattening = list(range(n_q))
- if delays is None:
- delays = [0] * n_q
- assert len(flattening) == n_q
- assert len(delays) == n_q
- assert sorted(flattening) == flattening
- assert sorted(delays) == delays
- self._flattened_codebooks = self._build_flattened_codebooks(delays, flattening)
- self.max_delay = max(delays)
-
- def _build_flattened_codebooks(self, delays: tp.List[int], flattening: tp.List[int]):
- """Build a flattened codebooks representation as a dictionary of inner step
- and the actual codebook indices corresponding to the flattened codebook. For convenience, we
- also store the delay associated to the flattened codebook to avoid maintaining an extra mapping.
- """
- flattened_codebooks: dict = {}
- for q, (inner_step, delay) in enumerate(zip(flattening, delays)):
- if inner_step not in flattened_codebooks:
- flat_codebook = UnrolledPatternProvider.FlattenedCodebook(codebooks=[q], delay=delay)
- else:
- flat_codebook = flattened_codebooks[inner_step]
- assert flat_codebook.delay == delay, (
- "Delay and flattening between codebooks is inconsistent: ",
- "two codebooks flattened to the same position should have the same delay."
- )
- flat_codebook.codebooks.append(q)
- flattened_codebooks[inner_step] = flat_codebook
- return flattened_codebooks
-
- @property
- def _num_inner_steps(self):
- """Number of inner steps to unroll between timesteps in order to flatten the codebooks.
- """
- return max([inner_step for inner_step in self._flattened_codebooks.keys()]) + 1
-
- def num_virtual_steps(self, timesteps: int) -> int:
- return timesteps * self._num_inner_steps + 1
-
- def get_pattern(self, timesteps: int) -> Pattern:
- """Builds pattern for delay across codebooks.
-
- Args:
- timesteps (int): Total number of timesteps.
- """
- # the PatternLayout is built as a tuple of sequence position and list of coordinates
- # so that it can be reordered properly given the required delay between codebooks of given timesteps
- indexed_out: list = [(-1, [])]
- max_timesteps = timesteps + self.max_delay
- for t in range(max_timesteps):
- # for each timestep, we unroll the flattened codebooks,
- # emitting the sequence step with the corresponding delay
- for step in range(self._num_inner_steps):
- if step in self._flattened_codebooks:
- # we have codebooks at this virtual step to emit
- step_codebooks = self._flattened_codebooks[step]
- t_for_q = t + step_codebooks.delay
- coords = [LayoutCoord(t, q) for q in step_codebooks.codebooks]
- if t_for_q < max_timesteps and t < max_timesteps:
- indexed_out.append((t_for_q, coords))
- else:
- # there is no codebook in this virtual step so we emit an empty list
- indexed_out.append((t, []))
- out = [coords for _, coords in sorted(indexed_out)]
- return Pattern(out, n_q=self.n_q, timesteps=timesteps)
-
-
-class VALLEPattern(CodebooksPatternProvider):
- """Almost VALL-E style pattern. We further allow some delays for the
- codebooks other than the first one.
-
- Args:
- n_q (int): Number of codebooks.
- delays (Optional[List[int]]): Delay for each of the codebooks.
- If not defined, no extra delay is applied (delays default to all zeros).
- """
- def __init__(self, n_q: int, delays: tp.Optional[tp.List[int]] = None):
- super().__init__(n_q)
- if delays is None:
- delays = [0] * (n_q - 1)
- self.delays = delays
- assert len(self.delays) == self.n_q - 1
- assert sorted(self.delays) == self.delays
-
- def get_pattern(self, timesteps: int) -> Pattern:
- out: PatternLayout = [[]]
- for t in range(timesteps):
- out.append([LayoutCoord(t, 0)])
- max_delay = max(self.delays)
- for t in range(timesteps + max_delay):
- v = []
- for q, delay in enumerate(self.delays):
- t_for_q = t - delay
- if t_for_q >= 0:
- v.append(LayoutCoord(t_for_q, q + 1))
- out.append(v)
- return Pattern(out, n_q=self.n_q, timesteps=timesteps)
-
-
-class MusicLMPattern(CodebooksPatternProvider):
- """Almost MusicLM style pattern. This is equivalent to full flattening
- but in a different order.
-
- Args:
- n_q (int): Number of codebooks.
- group_by (int): Number of codebooks to group together.
- """
- def __init__(self, n_q: int, group_by: int = 2):
- super().__init__(n_q)
- self.group_by = group_by
-
- def get_pattern(self, timesteps: int) -> Pattern:
- out: PatternLayout = [[]]
- for offset in range(0, self.n_q, self.group_by):
- for t in range(timesteps):
- for q in range(offset, offset + self.group_by):
- out.append([LayoutCoord(t, q)])
- return Pattern(out, n_q=self.n_q, timesteps=timesteps)
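The delayed interleaving that `DelayedPatternProvider` implements can be illustrated with a minimal pure-Python sketch (ours, independent of the audiocraft classes; `"S"` stands for the special token):

```python
from collections import namedtuple

LayoutCoord = namedtuple("LayoutCoord", ["t", "q"])  # (timestep, codebook index)

def delayed_layout(timesteps, n_q, delays=None):
    """Delayed pattern layout: codebook q is shifted right by delays[q].
    The first step is kept empty so a special token can be inserted there."""
    if delays is None:
        delays = list(range(n_q))
    layout = [[]]
    for t in range(timesteps + max(delays)):
        # At sequence step t+1, codebook q carries its timestep t - delays[q], if valid.
        layout.append([LayoutCoord(t - d, q)
                       for q, d in enumerate(delays)
                       if 0 <= t - d < timesteps])
    return layout

def render(layout, n_q, special="S"):
    """Render the interleaved sequence for codes[q][t] = t + 1, special elsewhere."""
    rows = [[special] * len(layout) for _ in range(n_q)]
    for s, coords in enumerate(layout):
        for c in coords:
            rows[c.q][s] = c.t + 1
    return rows
```

For `timesteps=4, n_q=3` this reproduces the staircase shown in the `DelayedPatternProvider` docstring, padded with special tokens out to `1 + timesteps + max_delay` sequence steps.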
diff --git a/spaces/subhajitmaji/MusicGen/audiocraft/models/lm.py b/spaces/subhajitmaji/MusicGen/audiocraft/models/lm.py
deleted file mode 100644
index c8aad8f06797eef3293605056e1de14d07c56c2a..0000000000000000000000000000000000000000
--- a/spaces/subhajitmaji/MusicGen/audiocraft/models/lm.py
+++ /dev/null
@@ -1,527 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from dataclasses import dataclass
-from functools import partial
-import logging
-import math
-import typing as tp
-
-import torch
-from torch import nn
-
-from ..utils import utils
-from ..modules.streaming import StreamingModule, State
-from ..modules.transformer import StreamingTransformer, create_norm_fn
-from ..modules.conditioners import (
- ConditionFuser,
- ClassifierFreeGuidanceDropout,
- AttributeDropout,
- ConditioningProvider,
- ConditioningAttributes,
- ConditionType,
-)
-from ..modules.codebooks_patterns import CodebooksPatternProvider
-from ..modules.activations import get_activation_fn
-
-
-logger = logging.getLogger(__name__)
-ConditionTensors = tp.Dict[str, ConditionType]
-CFGConditions = tp.Union[ConditionTensors, tp.Tuple[ConditionTensors, ConditionTensors]]
-
-
-def get_init_fn(method: str, input_dim: int, init_depth: tp.Optional[int] = None):
- """LM layer initialization.
- Inspired by xlformers: https://github.com/fairinternal/xlformers
-
- Args:
- method (str): Method name for init function. Valid options are:
- 'gaussian', 'uniform'.
- input_dim (int): Input dimension of the initialized module.
- init_depth (Optional[int]): Optional init depth value used to rescale
- the standard deviation if defined.
- """
- # Compute std
- std = 1 / math.sqrt(input_dim)
- # Rescale with depth
- if init_depth is not None:
- std = std / math.sqrt(2 * init_depth)
-
- if method == 'gaussian':
- return partial(
- torch.nn.init.trunc_normal_, mean=0.0, std=std, a=-3 * std, b=3 * std
- )
- elif method == 'uniform':
- bound = math.sqrt(3) * std # ensure the standard deviation is `std`
- return partial(torch.nn.init.uniform_, a=-bound, b=bound)
- else:
- raise ValueError("Unsupported layer initialization method")
-
-
-def init_layer(m: nn.Module,
- method: str,
- init_depth: tp.Optional[int] = None,
- zero_bias_init: bool = False):
- """Wrapper around ``get_init_fn`` for proper initialization of LM modules.
-
- Args:
- m (nn.Module): Module to initialize.
- method (str): Method name for the init function.
- init_depth (Optional[int]): Optional init depth value used to rescale
- the standard deviation if defined.
- zero_bias_init (bool): Whether to initialize the bias to 0 or not.
- """
- if isinstance(m, nn.Linear):
- init_fn = get_init_fn(method, m.in_features, init_depth=init_depth)
- if m.weight.device.type == 'cpu' and m.weight.dtype == torch.float16:
- weight = m.weight.float()
- init_fn(weight)
- m.weight.data[:] = weight.half()
- else:
- init_fn(m.weight)
- if zero_bias_init and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.Embedding):
- init_fn = get_init_fn(method, m.embedding_dim, init_depth=None)
- if m.weight.device.type == 'cpu' and m.weight.dtype == torch.float16:
- weight = m.weight.float()
- init_fn(weight)
- m.weight.data[:] = weight.half()
- else:
- init_fn(m.weight)
-
-
-class ScaledEmbedding(nn.Embedding):
- """Boost learning rate for embeddings (with `scale`).
- """
- def __init__(self, *args, lr=None, **kwargs):
- super().__init__(*args, **kwargs)
- self.lr = lr
-
- def make_optim_group(self):
- group = {"params": list(self.parameters())}
- if self.lr is not None:
- group["lr"] = self.lr
- return group
-
-
-@dataclass
-class LMOutput:
- # The logits are already re-aligned with the input codes
- # hence no extra shift is required, e.g. when computing CE
- logits: torch.Tensor # [B, K, T, card]
- mask: torch.Tensor # [B, K, T]
-
-
-class LMModel(StreamingModule):
- """Transformer-based language model on multiple streams of codes.
-
- Args:
- pattern_provider (CodebooksPatternProvider): Pattern provider for codebook interleaving.
- condition_provider (MusicConditioningProvider): Conditioning provider from metadata.
- fuser (ConditionFuser): Fuser handling the fusing of conditions with language model input.
- n_q (int): Number of parallel streams to model.
- card (int): Cardinality, vocabulary size.
- dim (int): Dimension of the transformer encoder.
- num_heads (int): Number of heads for the transformer encoder.
- hidden_scale (int): Scale for hidden feed forward dimension of the transformer encoder.
- norm (str): Normalization method.
- norm_first (bool): Use pre-norm instead of post-norm.
- emb_lr (Optional[float]): Embedding-specific learning rate.
- bias_proj (bool): Use bias for output projections.
- weight_init (Optional[str]): Method for weight initialization.
- depthwise_init (Optional[str]): Method for depthwise weight initialization.
- zero_bias_init (bool): If true and bias in Linears, initialize bias to zeros.
- cfg_dropout (float): Classifier-free guidance dropout.
- cfg_coef (float): Classifier-free guidance coefficient.
- attribute_dropout (dict): Attribute dropout probabilities.
- two_step_cfg (bool): Whether to run classifier free-guidance with 2 distinct steps.
- **kwargs: Additional parameters for the transformer encoder.
- """
- def __init__(self, pattern_provider: CodebooksPatternProvider, condition_provider: ConditioningProvider,
- fuser: ConditionFuser, n_q: int = 8, card: int = 1024, dim: int = 128, num_heads: int = 8,
- hidden_scale: int = 4, norm: str = 'layer_norm', norm_first: bool = False,
- emb_lr: tp.Optional[float] = None, bias_proj: bool = True,
- weight_init: tp.Optional[str] = None, depthwise_init: tp.Optional[str] = None,
- zero_bias_init: bool = False, cfg_dropout: float = 0, cfg_coef: float = 1.0,
- attribute_dropout: tp.Dict[str, tp.Dict[str, float]] = {}, two_step_cfg: bool = False,
- **kwargs):
- super().__init__()
- self.cfg_coef = cfg_coef
- self.cfg_dropout = ClassifierFreeGuidanceDropout(p=cfg_dropout)
- self.att_dropout = AttributeDropout(p=attribute_dropout)
- self.condition_provider = condition_provider
- self.fuser = fuser
- self.card = card
- embed_dim = self.card + 1
- self.n_q = n_q
- self.dim = dim
- self.pattern_provider = pattern_provider
- self.two_step_cfg = two_step_cfg
- self.emb = nn.ModuleList([ScaledEmbedding(embed_dim, dim, lr=emb_lr) for _ in range(n_q)])
- if 'activation' in kwargs:
- kwargs['activation'] = get_activation_fn(kwargs['activation'])
- self.transformer = StreamingTransformer(
- d_model=dim, num_heads=num_heads, dim_feedforward=int(hidden_scale * dim),
- norm=norm, norm_first=norm_first, **kwargs)
- self.out_norm: tp.Optional[nn.Module] = None
- if norm_first:
- self.out_norm = create_norm_fn(norm, dim)
- self.linears = nn.ModuleList([nn.Linear(dim, self.card, bias=bias_proj) for _ in range(n_q)])
- self._init_weights(weight_init, depthwise_init, zero_bias_init)
- self._fsdp: tp.Optional[nn.Module]
- self.__dict__['_fsdp'] = None
-
- def _init_weights(self, weight_init: tp.Optional[str], depthwise_init: tp.Optional[str], zero_bias_init: bool):
- """Initialization of the transformer module weights.
-
- Args:
- weight_init (Optional[str]): Weight initialization strategy. See ``get_init_fn`` for valid options.
- depthwise_init (Optional[str]): Depthwise initialization strategy. The following options are valid:
- 'current' where the depth corresponds to the current layer index or 'global' where the total number
- of layers is used as depth. If not set, no depthwise initialization strategy is used.
- zero_bias_init (bool): Whether to initialize the bias to zero or not.
- """
- assert depthwise_init is None or depthwise_init in ['current', 'global']
- assert depthwise_init is None or weight_init is not None, \
- "If 'depthwise_init' is defined, a 'weight_init' method should be provided."
- assert not zero_bias_init or weight_init is not None, \
- "If 'zero_bias_init', a 'weight_init' method should be provided"
-
- if weight_init is None:
- return
-
- for emb_layer in self.emb:
- init_layer(emb_layer, method=weight_init, init_depth=None, zero_bias_init=zero_bias_init)
-
- for layer_idx, tr_layer in enumerate(self.transformer.layers):
- depth = None
- if depthwise_init == 'current':
- depth = layer_idx + 1
- elif depthwise_init == 'global':
- depth = len(self.transformer.layers)
- init_fn = partial(init_layer, method=weight_init, init_depth=depth, zero_bias_init=zero_bias_init)
- tr_layer.apply(init_fn)
-
- for linear in self.linears:
- init_layer(linear, method=weight_init, init_depth=None, zero_bias_init=zero_bias_init)
-
- @property
- def special_token_id(self) -> int:
- return self.card
-
- @property
- def num_codebooks(self) -> int:
- return self.n_q
-
- def forward(self, sequence: torch.Tensor,
- conditions: tp.List[ConditioningAttributes],
- condition_tensors: tp.Optional[ConditionTensors] = None) -> torch.Tensor:
- """Apply language model on sequence and conditions.
- Given a sequence tensor of shape [B, K, S], with K the number of codebooks and
- S the sequence steps, return the logits of shape [B, K, S, card].
-
- Args:
- sequence (torch.Tensor): Codes to model, of shape [B, K, S].
- conditions (list[ConditioningAttributes]): conditionings to use when modeling
- the given codes. Note that when evaluating multiple times with the same conditioning
- you should pre-compute the conditionings and pass them as `condition_tensors`.
- condition_tensors (dict[str, ConditionType] or None): pre-computed conditioning
- tensors, see `conditions`.
- Returns:
- torch.Tensor: Logits.
- """
- B, K, S = sequence.shape
- assert K == self.num_codebooks, 'Sequence shape must match the specified number of codebooks'
- input_ = sum([self.emb[k](sequence[:, k]) for k in range(K)])
- if condition_tensors is None:
- assert not self._is_streaming, "Conditions tensors should be precomputed when streaming."
- # apply dropout modules
- conditions = self.cfg_dropout(conditions)
- conditions = self.att_dropout(conditions)
- tokenized = self.condition_provider.tokenize(conditions)
- # encode conditions and fuse, both have a streaming cache to not recompute when generating.
- condition_tensors = self.condition_provider(tokenized)
- else:
- assert not conditions, "Shouldn't pass both conditions and condition_tensors."
-
- input_, cross_attention_input = self.fuser(input_, condition_tensors)
-
- out = self.transformer(input_, cross_attention_src=cross_attention_input)
- if self.out_norm:
- out = self.out_norm(out)
- logits = torch.stack([self.linears[k](out) for k in range(K)], dim=1) # [B, K, S, card]
-
- # remove the prefix from the model outputs
- if len(self.fuser.fuse2cond['prepend']) > 0:
- logits = logits[:, :, -S:]
-
- return logits # [B, K, S, card]
-
- def compute_predictions(
- self, codes: torch.Tensor,
- conditions: tp.List[ConditioningAttributes],
- condition_tensors: tp.Optional[ConditionTensors] = None) -> LMOutput:
- """Given an input tensor of codes [B, K, T] and list of conditions, runs the model
- forward using the specified codes interleaving pattern.
-
- Args:
- codes (torch.Tensor): Input codes of shape [B, K, T] with B the batch size,
- K the number of codebooks and T the number of timesteps.
- conditions (list[ConditioningAttributes]): conditionings to use when modeling
- the given codes. Note that when evaluating multiple times with the same conditioning
- you should pre-compute the conditionings and pass them as `condition_tensors`.
- condition_tensors (dict[str, ConditionType] or None): pre-computed conditioning
- tensors, see `conditions`.
- Returns:
- LMOutput: Language model outputs
- logits (torch.Tensor) of shape [B, K, T, card] corresponding to the provided codes,
- i.e. the first item corresponds to logits to predict the first code, meaning that
- no additional shifting of codes and logits is required.
- mask (torch.Tensor) of shape [B, K, T], mask over valid and invalid positions.
- Given the specified interleaving strategies, parts of the logits and codes should
- not be considered as valid predictions because of invalid context.
- """
- B, K, T = codes.shape
- codes = codes.contiguous()
- # map codes [B, K, T] into pattern sequence [B, K, S] using special_token_id for masked tokens
- pattern = self.pattern_provider.get_pattern(T)
- sequence_codes, sequence_indexes, sequence_mask = pattern.build_pattern_sequence(
- codes, self.special_token_id, keep_only_valid_steps=True
- )
- # apply model on pattern sequence
- model = self if self._fsdp is None else self._fsdp
- logits = model(sequence_codes, conditions, condition_tensors) # [B, K, S, card]
- # map back the logits on pattern sequence to logits on original codes: [B, K, S, card] -> [B, K, T, card]
- # and provide the corresponding mask over invalid positions of tokens
- logits = logits.permute(0, 3, 1, 2) # [B, card, K, S]
- # note: we use nans as special token to make it obvious if we feed unexpected logits
- logits, logits_indexes, logits_mask = pattern.revert_pattern_logits(
- logits, float('nan'), keep_only_valid_steps=True
- )
- logits = logits.permute(0, 2, 3, 1) # [B, K, T, card]
- logits_mask = logits_mask[None, :, :].expand(B, -1, -1) # [K, T] -> [B, K, T]
- return LMOutput(logits, logits_mask)
-
- def _sample_next_token(self,
- sequence: torch.Tensor,
- cfg_conditions: CFGConditions,
- unconditional_state: State,
- use_sampling: bool = False,
- temp: float = 1.0,
- top_k: int = 0,
- top_p: float = 0.0,
- cfg_coef: tp.Optional[float] = None) -> torch.Tensor:
- """Sample next token from the model given a sequence and a set of conditions. The model supports
- multiple sampling strategies (greedy sampling, softmax, top-k, top-p...).
-
- Args:
- sequence (torch.Tensor): Current sequence of shape [B, K, S]
- with K corresponding to the number of codebooks and S the number of sequence steps.
- S = 1 in streaming mode, except for the first step that contains a bigger prompt.
- cfg_conditions (CFGConditions): Set of conditions. If CFG is used,
- should be twice the batch size, being the concatenation of the conditions + null conditions.
- use_sampling (bool): Whether to use a sampling strategy or not.
- temp (float): Sampling temperature.
- top_k (int): K for "top-k" sampling.
- top_p (float): P for "top-p" sampling.
- cfg_coef (float): classifier free guidance coefficient
- Returns:
- next_token (torch.Tensor): Next token tensor of shape [B, K, 1].
- """
- B = sequence.shape[0]
- cfg_coef = self.cfg_coef if cfg_coef is None else cfg_coef
- model = self if self._fsdp is None else self._fsdp
- if self.two_step_cfg and cfg_conditions != {}:
- assert isinstance(cfg_conditions, tuple)
- condition_tensors, null_condition_tensors = cfg_conditions
- cond_logits = model(sequence, conditions=[], condition_tensors=condition_tensors)
- state = self.get_streaming_state()
- self.set_streaming_state(unconditional_state)
- uncond_logits = model(sequence, conditions=[], condition_tensors=null_condition_tensors)
- unconditional_state.update(self.get_streaming_state())
- self.set_streaming_state(state)
- logits = uncond_logits + (cond_logits - uncond_logits) * cfg_coef
- else:
- assert isinstance(cfg_conditions, dict)
- condition_tensors = cfg_conditions
- if condition_tensors:
- # Preparing for CFG, predicting both conditional and unconditional logits.
- sequence = torch.cat([sequence, sequence], dim=0)
- all_logits = model(
- sequence,
- conditions=[], condition_tensors=condition_tensors)
- if condition_tensors:
- cond_logits, uncond_logits = all_logits.split(B, dim=0) # [B, K, T, card]
- logits = uncond_logits + (cond_logits - uncond_logits) * cfg_coef
- else:
- logits = all_logits
-
- logits = logits.permute(0, 1, 3, 2) # [B, K, card, T]
- logits = logits[..., -1] # [B, K, card]
-
- # Apply softmax for sampling if temp > 0. Else, do greedy sampling to avoid zero division error.
- if use_sampling and temp > 0.0:
- probs = torch.softmax(logits / temp, dim=-1)
- if top_p > 0.0:
- next_token = utils.sample_top_p(probs, p=top_p)
- elif top_k > 0:
- next_token = utils.sample_top_k(probs, k=top_k)
- else:
- next_token = utils.multinomial(probs, num_samples=1)
- else:
- next_token = torch.argmax(logits, dim=-1, keepdim=True)
-
- return next_token
-
- @torch.no_grad()
- def generate(self,
- prompt: tp.Optional[torch.Tensor] = None,
- conditions: tp.List[ConditioningAttributes] = [],
- num_samples: tp.Optional[int] = None,
- max_gen_len: int = 256,
- use_sampling: bool = True,
- temp: float = 1.0,
- top_k: int = 250,
- top_p: float = 0.0,
- cfg_coef: tp.Optional[float] = None,
- two_step_cfg: tp.Optional[bool] = None,
- remove_prompts: bool = False,
- check: bool = False,
- callback: tp.Optional[tp.Callable[[int, int], None]] = None) -> torch.Tensor:
- """Generate tokens sampling from the model given a prompt or unconditionally. Generation can
- be perform in a greedy fashion or using sampling with top K and top P strategies.
-
- Args:
- prompt (Optional[torch.Tensor]): Prompt tokens of shape [B, K, T].
- conditions (list[ConditioningAttributes]): List of conditions, possibly empty.
- num_samples (int or None): Number of samples to generate when no prompt and no conditions are given.
- max_gen_len (int): Maximum generation length.
- use_sampling (bool): Whether to use a sampling strategy or not.
- temp (float): Sampling temperature.
- top_k (int): K for "top-k" sampling.
- top_p (float): P for "top-p" sampling.
- remove_prompts (bool): Whether to remove prompts from generation or not.
- Returns:
- torch.Tensor: Generated tokens.
- """
- assert not self.training, "generation shouldn't be used in training mode."
- first_param = next(iter(self.parameters()))
- device = first_param.device
-
- # Checking all input shapes are consistent.
- possible_num_samples = []
- if num_samples is not None:
- possible_num_samples.append(num_samples)
- elif prompt is not None:
- possible_num_samples.append(prompt.shape[0])
- elif conditions:
- possible_num_samples.append(len(conditions))
- else:
- possible_num_samples.append(1)
- assert all(x == possible_num_samples[0] for x in possible_num_samples), "Inconsistent input shapes"
- num_samples = possible_num_samples[0]
-
- # below we create set of conditions: one conditional and one unconditional
- # to do that we merge the regular condition together with the null condition
- # we then do 1 forward pass instead of 2.
- # the reason for that is two-fold:
- # 1. it is about x2 faster than doing 2 forward passes
- # 2. avoid the streaming API treating the 2 passes as part of different time steps
- # We also support doing two different passes, in particular to ensure that
- # the padding structure is exactly the same between train and test.
- # With a batch size of 1, this can be slower though.
- cfg_conditions: CFGConditions
- two_step_cfg = self.two_step_cfg if two_step_cfg is None else two_step_cfg
- if conditions:
- null_conditions = ClassifierFreeGuidanceDropout(p=1.0)(conditions)
- if two_step_cfg:
- cfg_conditions = (
- self.condition_provider(self.condition_provider.tokenize(conditions)),
- self.condition_provider(self.condition_provider.tokenize(null_conditions)),
- )
- else:
- conditions = conditions + null_conditions
- tokenized = self.condition_provider.tokenize(conditions)
- cfg_conditions = self.condition_provider(tokenized)
- else:
- cfg_conditions = {}
-
- if prompt is None:
- assert num_samples > 0
- prompt = torch.zeros((num_samples, self.num_codebooks, 0), dtype=torch.long, device=device)
-
- B, K, T = prompt.shape
- start_offset = T
- assert start_offset < max_gen_len
-
- pattern = self.pattern_provider.get_pattern(max_gen_len)
- # this token is used as default value for codes that are not generated yet
- unknown_token = -1
-
- # we generate codes up to the max_gen_len that will be mapped to the pattern sequence
- gen_codes = torch.full((B, K, max_gen_len), unknown_token, dtype=torch.long, device=device)
- # filling the gen_codes with the prompt if needed
- gen_codes[..., :start_offset] = prompt
- # create the gen_sequence with proper interleaving from the pattern: [B, K, S]
- gen_sequence, indexes, mask = pattern.build_pattern_sequence(gen_codes, self.special_token_id)
- # retrieve the start_offset in the sequence:
- # it is the first sequence step that contains the `start_offset` timestep
- start_offset_sequence = pattern.get_first_step_with_timesteps(start_offset)
- assert start_offset_sequence is not None
-
- with self.streaming():
- unconditional_state = self.get_streaming_state()
- prev_offset = 0
- gen_sequence_len = gen_sequence.shape[-1] # gen_sequence shape is [B, K, S]
- for offset in range(start_offset_sequence, gen_sequence_len):
- # get current sequence (note that the streaming API is providing the caching over previous offsets)
- curr_sequence = gen_sequence[..., prev_offset:offset]
- curr_mask = mask[None, ..., prev_offset:offset].expand(B, -1, -1)
- if check:
- # check coherence between mask and sequence
- assert (curr_sequence == torch.where(curr_mask, curr_sequence, self.special_token_id)).all()
- # should never happen as gen_sequence is filled progressively
- assert not (curr_sequence == unknown_token).any()
- # sample next token from the model, next token shape is [B, K, 1]
- next_token = self._sample_next_token(
- curr_sequence, cfg_conditions, unconditional_state, use_sampling, temp, top_k, top_p,
- cfg_coef=cfg_coef)
- # ensure the tokens that should be masked are properly set to special_token_id
- # as the model never output special_token_id
- valid_mask = mask[..., offset:offset+1].expand(B, -1, -1)
- next_token[~valid_mask] = self.special_token_id
- # ensure we don't overwrite prompt tokens, we only write over unknown tokens
- # (then mask tokens should be left as is as well, which is correct)
- gen_sequence[..., offset:offset+1] = torch.where(
- gen_sequence[..., offset:offset+1] == unknown_token,
- next_token, gen_sequence[..., offset:offset+1]
- )
- prev_offset = offset
- if callback is not None:
- callback(1 + offset - start_offset_sequence, gen_sequence_len - start_offset_sequence)
- unconditional_state.clear()
-
- # ensure sequence has been entirely filled
- assert not (gen_sequence == unknown_token).any()
- # ensure gen_sequence pattern and mask are matching
- # which means the gen_sequence is valid according to the pattern
- assert (
- gen_sequence == torch.where(mask[None, ...].expand(B, -1, -1), gen_sequence, self.special_token_id)
- ).all()
- # get back the codes, trimming the prompt if needed and cutting potentially incomplete timesteps
- out_codes, out_indexes, out_mask = pattern.revert_pattern_sequence(gen_sequence, special_token=unknown_token)
-
- # sanity checks over the returned codes and corresponding masks
- assert (out_codes[..., :max_gen_len] != unknown_token).all()
- assert (out_mask[..., :max_gen_len] == 1).all()
-
- out_start_offset = start_offset if remove_prompts else 0
- out_codes = out_codes[..., out_start_offset:max_gen_len]
-
- # ensure the returned codes are all valid
- assert (out_codes >= 0).all() and (out_codes <= self.card).all()
- return out_codes
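The CFG mixing in `_sample_next_token` above follows the standard formula `uncond + (cond - uncond) * cfg_coef`. A minimal plain-Python sketch of just that mixing step (the deleted code applies it to batched `[B, K, T, card]` tensors; the flat lists and function name here are illustrative):

```python
# Sketch of the classifier-free guidance logit mixing used above,
# on plain lists instead of [B, K, T, card] tensors.
def cfg_mix(cond_logits, uncond_logits, cfg_coef):
    # cfg_coef = 1.0 recovers the conditional logits exactly;
    # larger values push further in the conditional direction,
    # cfg_coef = 0.0 falls back to the unconditional logits.
    return [u + (c - u) * cfg_coef
            for c, u in zip(cond_logits, uncond_logits)]
```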
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/64 Bit Photograv 2.11 !FULL! Free 210.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/64 Bit Photograv 2.11 !FULL! Free 210.md
deleted file mode 100644
index 5b5c6236f2971fa4fe551bcf4a0d1d46d06f8ce8..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/64 Bit Photograv 2.11 !FULL! Free 210.md
+++ /dev/null
@@ -1,6 +0,0 @@
-64 bit photograv 2.11 free 210
-
-... Britain section concludes the sale including an 1840 Official VR 1d. black, an unused 1862-64 3d. rose, and mint £5 orange of note. 4d29de3e1b
-
-
-
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Download Pro Facebook Hack V 2.0 By Anonymouse 590 __EXCLUSIVE__.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Download Pro Facebook Hack V 2.0 By Anonymouse 590 __EXCLUSIVE__.md
deleted file mode 100644
index a1a9064a65bb526439daf851ca1becc70dab5d79..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Download Pro Facebook Hack V 2.0 By Anonymouse 590 __EXCLUSIVE__.md
+++ /dev/null
@@ -1,6 +0,0 @@
-download pro facebook hack v 2.0 by anonymouse 590
-
-TMS WEB Core is based on compiling Delphi user interface code into JavaScript and thus creating so-called single page applications. CODE: VERSION 2.0 3 (Basic Books 2006) ("The claim for cyberspace was not only that governments would not regulate ... the Internet, but that they would no longer govern us") Charles W. Wright " Hacker Attacks (NSA attack on the Internet) The encoder (the code I write) will be written primarily in JavaScript and HTML and then converted into an HTML document, which will then be pasted into Microsoft Word. This application will use the TMS Web Core server to call remote databases. The server will run on a single local computer that we will use as a remote server. 8a78ff9644
-
-
-
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Garmin Mobile XT 5.00.50 S60.9 - V 5.00.50 Free.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Garmin Mobile XT 5.00.50 S60.9 - V 5.00.50 Free.md
deleted file mode 100644
index 6f5cd72e93572505ba80725b7b3917f314befb79..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Garmin Mobile XT 5.00.50 S60.9 - V 5.00.50 Free.md
+++ /dev/null
@@ -1,14 +0,0 @@
-Garmin Mobile XT 5.00.50 S60.9 - V 5.00.50
-
-Case scenario 1 #Sec21
-
-If you have installed the "Extended Phone SDK for Nokia N770" on your "D3 Mobile Phone" successfully, it is time to see how to use it.
-
-P.S: Start your web browser and open the "www.google.com". You will be greeted with a welcome page on the screen (Fig. [3](#Fig3)ref-type="fig")Fig. 3Welcome page of Google
-
-Choose the "English" language version if you are not sure about the language you are using. Click on the "Search" button. The search engine is ready to be used.
-
-Enter "mobile nokia extended" in the search box, click on "Search". In the resulting list, click on the "List" button. You will see a list of results on the screen (Fig. [4](#Fig4)ref-type="fig"). Click on the "Giant triangle" icon on the right hand side of the first result in the list. You will be redirected to a new page with a list 4fefd39f24
-
-
-
diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Assassins Creed Origins [FitGirl Repack].md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Assassins Creed Origins [FitGirl Repack].md
deleted file mode 100644
index 8c26281d3afda6e67576dbf16d7d66c304bd5035..0000000000000000000000000000000000000000
--- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Assassins Creed Origins [FitGirl Repack].md
+++ /dev/null
@@ -1,6 +0,0 @@
-Assassin's Creed Origins [FitGirl Repack]
-
-Instead, every repacks game like fitgirl repack includes harmful scripts and files which basically targeted your C drive, where your OS has been installed. R9 380 2Â ... 1fdad05405
-
-
-
diff --git a/spaces/svjack/ControlNet-Pose-Chinese/app.py b/spaces/svjack/ControlNet-Pose-Chinese/app.py
deleted file mode 100644
index 0fa11072a4218e02e65c8a7e3c1eade14a1cae40..0000000000000000000000000000000000000000
--- a/spaces/svjack/ControlNet-Pose-Chinese/app.py
+++ /dev/null
@@ -1,130 +0,0 @@
-from diffusers import utils
-from diffusers.utils import deprecation_utils
-from diffusers.models import cross_attention
-utils.deprecate = lambda *arg, **kwargs: None
-deprecation_utils.deprecate = lambda *arg, **kwargs: None
-cross_attention.deprecate = lambda *arg, **kwargs: None
-
-'''
-import os
-import sys
-MAIN_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
-sys.path.insert(0, MAIN_DIR)
-os.chdir(MAIN_DIR)
-'''
-
-import cv2
-import gradio as gr
-import numpy as np
-import torch
-import random
-
-from annotator.util import resize_image, HWC3
-from annotator.openpose import OpenposeDetector
-from diffusers.models.unet_2d_condition import UNet2DConditionModel
-from diffusers.pipelines import DiffusionPipeline
-from diffusers.schedulers import DPMSolverMultistepScheduler
-from models import ControlLoRA, ControlLoRACrossAttnProcessor
-
-
-apply_openpose = OpenposeDetector()
-
-device = 'cuda' if torch.cuda.is_available() else 'cpu'
-
-pipeline = DiffusionPipeline.from_pretrained(
- 'IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1', safety_checker=None
-)
-pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
-pipeline = pipeline.to(device)
-unet: UNet2DConditionModel = pipeline.unet
-
-ckpt_path = "svjack/pose-control-lora-zh"
-control_lora = ControlLoRA.from_pretrained(ckpt_path)
-control_lora = control_lora.to(device)
-
-# load control lora attention processors
-lora_attn_procs = {}
-lora_layers_list = list([list(layer_list) for layer_list in control_lora.lora_layers])
-n_ch = len(unet.config.block_out_channels)
-control_ids = [i for i in range(n_ch)]
-for name in pipeline.unet.attn_processors.keys():
- cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim
- if name.startswith("mid_block"):
- control_id = control_ids[-1]
- elif name.startswith("up_blocks"):
- block_id = int(name[len("up_blocks.")])
- control_id = list(reversed(control_ids))[block_id]
- elif name.startswith("down_blocks"):
- block_id = int(name[len("down_blocks.")])
- control_id = control_ids[block_id]
-
- lora_layers = lora_layers_list[control_id]
- if len(lora_layers) != 0:
- lora_layer: ControlLoRACrossAttnProcessor = lora_layers.pop(0)
- lora_attn_procs[name] = lora_layer
-
-unet.set_attn_processor(lora_attn_procs)
-
-
-def process(input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, detect_resolution, sample_steps, scale, seed, eta):
- with torch.no_grad():
- input_image = HWC3(input_image)
- detected_map, _ = apply_openpose(resize_image(input_image, detect_resolution))
- detected_map = HWC3(detected_map)
- img = resize_image(input_image, image_resolution)
- H, W, C = img.shape
-
- detected_map = cv2.resize(detected_map, (W, H), interpolation=cv2.INTER_NEAREST)
-
- control = torch.from_numpy(detected_map[...,::-1].copy().transpose([2,0,1])).float().to(device)[None] / 127.5 - 1
- _ = control_lora(control).control_states
-
- if seed == -1:
- seed = random.randint(0, 65535)
-
- # run inference
- generator = torch.Generator(device=device).manual_seed(seed)
- images = []
- for i in range(num_samples):
- _ = control_lora(control).control_states
- image = pipeline(
- prompt + ', ' + a_prompt, negative_prompt=n_prompt,
- num_inference_steps=sample_steps, guidance_scale=scale, eta=eta,
- generator=generator, height=H, width=W).images[0]
- images.append(np.asarray(image))
-
- results = images
- return [detected_map] + results
-
-
-block = gr.Blocks().queue()
-with block:
- with gr.Row():
- gr.Markdown("## Control Stable Diffusion with Human Pose\n")
- gr.Markdown("This _example_ was **drive** from [https://github.com/svjack/ControlLoRA-Chinese](https://github.com/svjack/ControlLoRA-Chinese)
\n")
- with gr.Row():
- with gr.Column():
- input_image = gr.Image(source='upload', type="numpy", value = "war_v1.jpg")
- prompt = gr.Textbox(label="Prompt", value = "麦田守望者")
- run_button = gr.Button(label="Run")
- with gr.Accordion("Advanced options", open=False):
- num_samples = gr.Slider(label="Images", minimum=1, maximum=12, value=1, step=1)
- image_resolution = gr.Slider(label="Image Resolution", minimum=256, maximum=768, value=512, step=256)
- detect_resolution = gr.Slider(label="OpenPose Resolution", minimum=128, maximum=1024, value=512, step=1)
- sample_steps = gr.Slider(label="Steps", minimum=1, maximum=100, value=30, step=1)
- scale = gr.Slider(label="Guidance Scale", minimum=0.1, maximum=30.0, value=9.0, step=0.1)
- seed = gr.Slider(label="Seed", minimum=-1, maximum=2147483647, step=1, randomize=True)
- eta = gr.Number(label="eta", value=0.0)
- a_prompt = gr.Textbox(label="Added Prompt",
- value='详细的模拟混合媒体拼贴画,帆布质地的当代艺术风格,朋克艺术,逼真主义,感性的身体,表现主义,极简主义。杰作,完美的组成,逼真的美丽的脸')
- n_prompt = gr.Textbox(label="Negative Prompt",
- value='低质量,模糊,混乱')
- with gr.Column():
- result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery").style(grid=2, height='auto')
- ips = [input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, detect_resolution, sample_steps, scale, seed, eta]
- run_button.click(fn=process, inputs=ips, outputs=[result_gallery])
-
-
-block.launch(server_name='0.0.0.0')
-
-#### block.launch(server_name='172.16.202.228', share=True)
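The `lora_attn_procs` loop in the deleted `app.py` routes each attention-processor name to a control index: down blocks map in order, up blocks in reverse, and the mid block takes the last index. A standalone sketch of just that routing rule (the function name and the `n_ch=4` default are assumptions for illustration; standard SD UNets have four block channel sizes):

```python
# Sketch of the processor-name -> control index mapping from app.py above.
def control_id_for(name, n_ch=4):
    control_ids = list(range(n_ch))
    if name.startswith("mid_block"):
        # mid block sits at the deepest level: last control index
        return control_ids[-1]
    if name.startswith("up_blocks"):
        # up blocks mirror the down path, so the order is reversed
        block_id = int(name[len("up_blocks.")])
        return list(reversed(control_ids))[block_id]
    if name.startswith("down_blocks"):
        block_id = int(name[len("down_blocks.")])
        return control_ids[block_id]
    raise ValueError(f"unexpected processor name: {name}")
```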
diff --git a/spaces/tabeina/bingo1/src/components/tailwind-indicator.tsx b/spaces/tabeina/bingo1/src/components/tailwind-indicator.tsx
deleted file mode 100644
index f2a1291213dd67055fcebe67fab574c8441338df..0000000000000000000000000000000000000000
--- a/spaces/tabeina/bingo1/src/components/tailwind-indicator.tsx
+++ /dev/null
@@ -1,14 +0,0 @@
-export function TailwindIndicator() {
- if (process.env.NODE_ENV === 'production') return null
-
- return (
- control4 composer pro 2.8 cracked
-
-October 21, 2015 - The same holds in HCE 2.7.x and 2.8.x: using Win10 (Pro and regular editions, 64-bit and 32-bit) with ComposerPro is no problem - if it were, I would have noticed.
-Everything related to C# and Composer works easily and without problems if you are using Windows 8.1/Windows 10 with Microsoft Visual Studio 2015 (latest) and have ComposerPro installed.
-But unfortunately, many people used Windows 7/8/8.1 in the past and still use this option. 8a78ff9644
-
-
-
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download ReFX Nexus 2.3.2 Beta Installation Crack and Unlock the Full Potential of Your Music Production.md b/spaces/tialenAdioni/chat-gpt-api/logs/Download ReFX Nexus 2.3.2 Beta Installation Crack and Unlock the Full Potential of Your Music Production.md
deleted file mode 100644
index 873a3fa004aa207e6c022ea0f7aff875f362b651..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Download ReFX Nexus 2.3.2 Beta Installation Crack and Unlock the Full Potential of Your Music Production.md
+++ /dev/null
@@ -1,84 +0,0 @@
-
-How to Download and Install ReFX Nexus 2.3.2 Beta with Crack
-ReFX Nexus 2.3.2 Beta Installation Crack.zip
-Step 1: Download ReFX Nexus 2.3.2 Beta Installation Crack.zip
-
-ReFX Nexus 2.3.2 Beta cracked version download
-ReFX Nexus 2.3.2 Beta installation guide and crack.zip
-Download ReFX Nexus 2.3.2 Beta full version with crack
-ReFX Nexus 2.3.2 Beta crack.zip free download
-ReFX Nexus 2.3.2 Beta patch and crack.zip
-ReFX Nexus 2.3.2 Beta serial key and crack.zip
-ReFX Nexus 2.3.2 Beta license key and crack.zip
-ReFX Nexus 2.3.2 Beta activation code and crack.zip
-ReFX Nexus 2.3.2 Beta registration code and crack.zip
-ReFX Nexus 2.3.2 Beta keygen and crack.zip
-ReFX Nexus 2.3.2 Beta torrent download with crack
-ReFX Nexus 2.3.2 Beta direct download link with crack
-ReFX Nexus 2.3.2 Beta mega download with crack
-ReFX Nexus 2.3.2 Beta mediafire download with crack
-ReFX Nexus 2.3.2 Beta google drive download with crack
-ReFX Nexus 2.3.2 Beta dropbox download with crack
-ReFX Nexus 2.3.2 Beta rapidshare download with crack
-ReFX Nexus 2.3.2 Beta zippyshare download with crack
-ReFX Nexus 2.3.2 Beta uploaded download with crack
-ReFX Nexus 2.3.2 Beta filefactory download with crack
-ReFX Nexus 2.3.2 Beta turbobit download with crack
-ReFX Nexus 2.3.2 Beta nitroflare download with crack
-ReFX Nexus 2.3.2 Beta openload download with crack
-ReFX Nexus 2.3.2 Beta streamango download with crack
-ReFX Nexus 2.3.2 Beta review and crack.zip
-ReFX Nexus 2.3.2 Beta tutorial and crack.zip
-ReFX Nexus 2.3.2 Beta video and crack.zip
-ReFX Nexus 2.3.2 Beta demo and crack.zip
-ReFX Nexus 2.3.2 Beta features and crack.zip
-ReFX Nexus 2.3.2 Beta system requirements and crack.zip
-ReFX Nexus 2.3.2 Beta compatibility and crack.zip
-ReFX Nexus 2.3.
-Step 2: Extract ReFX Nexus 2.3.2 Beta Installation Crack.zip
-Step 3: Install ReFX Nexus 2.3.2 Beta
-Step 4: Run the Crack for the eLicenser Emulator
-Step 5: Enjoy Using ReFX Nexus 2.3.2 Beta with Crack
-Conclusion
-
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/How to Boost Your WordPress Site with Plugins Pack.md b/spaces/tialenAdioni/chat-gpt-api/logs/How to Boost Your WordPress Site with Plugins Pack.md
deleted file mode 100644
index a2ded1842a343d389e49473e7256989b3278fb9f..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/How to Boost Your WordPress Site with Plugins Pack.md
+++ /dev/null
@@ -1,40 +0,0 @@
-
-Plugins Pack: How to Enhance Your WordPress Site with the Best Plugins
-plugins pack
-
-
-How to Use Plugins Pack?
-
-
-What are the Benefits of Plugins Pack?
-
-
-
-Conclusion
-
-
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/How to Download TRAKTOR LE for Free and Unlock Your DJ Potential.md b/spaces/tialenAdioni/chat-gpt-api/logs/How to Download TRAKTOR LE for Free and Unlock Your DJ Potential.md
deleted file mode 100644
index 7ebda6989c7bdf749b91d94304f78b6c13aaa46c..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/How to Download TRAKTOR LE for Free and Unlock Your DJ Potential.md
+++ /dev/null
@@ -1,32 +0,0 @@
-
-How to Download TRAKTOR LE for Free
-Option 1: Get TRAKTOR LE with a qualifying Numark product
-traktor le free download full version
-Option 2: Upgrade from TRAKTOR DJ 2
-Option 3: Try out TRAKTOR PRO 3 for free
-Conclusion
-How to Install and Use TRAKTOR LE
-
-
-
-
-Conclusion
-
-
-
\ No newline at end of file
diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Among Us 4.2.2 APK Download and Install the Latest Version of the Impostor Game.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Among Us 4.2.2 APK Download and Install the Latest Version of the Impostor Game.md
deleted file mode 100644
index 70c23bfd4b28f87f51ee04d9711daebb84da31fc..0000000000000000000000000000000000000000
--- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Among Us 4.2.2 APK Download and Install the Latest Version of the Impostor Game.md
+++ /dev/null
@@ -1,200 +0,0 @@
-
-Among Us 4.2.2 APK: How to Download and Play the Latest Version of the Popular Social Deduction Game
-among us 4.2.2 apk
-
-among us 4.2.2 apk mod menu
-among us 4.2.2 apk unlocked skins
-among us 4.2.2 apk latest version
-among us 4.2.2 apk no ads
-among us 4.2.2 apk free download
-among us 4.2.2 apk hack
-among us 4.2.2 apk always impostor
-among us 4.2.2 apk with voice chat
-among us 4.2.2 apk update
-among us 4.2.2 apk online multiplayer
-among us 4.2.2 apk unlimited money
-among us 4.2.2 apk premium
-among us 4.2.2 apk full version
-among us 4.2.2 apk cracked
-among us 4.2.2 apk for pc
-among us 4.2.2 apk for ios
-among us 4.2.2 apk for firestick
-among us 4.2.2 apk for smart tv
-among us 4.2.2 apk for chromebook
-among us 4.2.2 apk mirror
-among us 4.2.2 apk rexdl
-among us 4.2.2 apk revdl
-among us 4.2.2 apk happymod
-among us 4.2.2 apk apkpure
-among us 4.2.2 apk uptodown
-among us 4.2.2 apk mob.org
-among us 4.2.2 apk android republic
-among us 4.2.2 apk an1.com
-among us 4.2.2 apk dlandroid.com
-among us 4.2.2 apk platinmods.com
-among us 4.2.2 apk ihackedit.com
-among us 4.2.2 apk blackmod.net
-among us 4.2.2 apk andropalace.org
-among us 4.2.2 apk moddroid.com
-among us 4.2.2 apk apkmody.io
-among us 4.2.2 apk apkmirror.com
-among us 4.2.2 apk apksfree.com
-among us 4.2.2 apk apknite.com
- What is new in Among Us 4.2.2 APK?
-
-
- How to download and install Among Us 4.2.2 APK on Android devices?
-
-
-
-
-
-
-
-
-
- How to play Among Us 4.2.2 APK online or over local WiFi?
-
-
-
-
-
-
-
-
-
-Setting
-Description
-
-
-Impostors
-The number of impostors in the game (1 to 3)
-
-
-Confirm Ejects
-Whether to reveal the role of the ejected player or not
-
-
-Emergency Meetings
-The number of emergency meetings that each player can call
-
-
-Emergency Cooldown
-The time between emergency meetings
-
-
-Discussion Time
-The time for discussion before voting
-
-
-Voting Time
-The time for voting after discussion
-
-
-Player Speed
-The speed of the players in the game
-
-
-Crewmate Vision
-The vision range of the crewmates in the game
-
-
-Impostor Vision
-The vision range of the impostors in the game
-
-
-Kill Cooldown
-The time between kills for the impostors
-
-
-Kill Distance
-The distance from which the impostors can kill
-
-
-Visual Tasks
-Whether to show visual indicators for some tasks or not
-
-
-Common Tasks
-The number of common tasks that all crewmates share
-
-
-Long Tasks
-The number of long tasks that take more time and steps to complete
-
-
-Short Tasks
-The number of short tasks that take less time and steps to complete
-Tips and tricks for crewmates
-
-
- Tips and tricks for impostors
-
-
- Conclusion
- FAQs
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Animal Revolt Battle Simulator Mod Apk Download and Enjoy Unlimited Money Gems and Weapons.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Animal Revolt Battle Simulator Mod Apk Download and Enjoy Unlimited Money Gems and Weapons.md
deleted file mode 100644
index 9355af90eb7872f4c62ecee434acdd8fe630e85a..0000000000000000000000000000000000000000
--- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Animal Revolt Battle Simulator Mod Apk Download and Enjoy Unlimited Money Gems and Weapons.md
+++ /dev/null
@@ -1,93 +0,0 @@
-
-Animal Revolt Battle Simulator Latest Version Mod Apk: A Review
-animal revolt battle simulator latest version mod apk
-What is Animal Revolt Battle Simulator?
-A physics-based sandbox game where you can create epic battles with different animals
-Features of the game
-- Choose from hundreds of animals and weapons to create your own scenarios
-- Watch the realistic physics and ragdoll effects as the animals fight each other
-
-animal revolt battle simulator mod apk download for android
-animal revolt battle simulator mod apk latest version 2021
-animal revolt battle simulator mod apk free shopping
-animal revolt battle simulator mod apk unlocked all animals
-animal revolt battle simulator mod apk hack menu
-animal revolt battle simulator mod apk no ads
-animal revolt battle simulator mod apk rexdl
-animal revolt battle simulator mod apk revdl
-animal revolt battle simulator mod apk happymod
-animal revolt battle simulator mod apk android 1
-animal revolt battle simulator mod apk offline
-animal revolt battle simulator mod apk online
-animal revolt battle simulator mod apk obb
-animal revolt battle simulator mod apk data
-animal revolt battle simulator mod apk unlimited gems
-animal revolt battle simulator mod apk unlimited diamonds
-animal revolt battle simulator mod apk unlimited coins
-animal revolt battle simulator mod apk unlimited weapons
-animal revolt battle simulator mod apk unlimited levels
-animal revolt battle simulator mod apk god mode
-animal revolt battle simulator mod apk mega mod
-animal revolt battle simulator mod apk premium
-animal revolt battle simulator mod apk pro
-animal revolt battle simulator mod apk vip
-animal revolt battle simulator latest version download free
-animal revolt battle simulator latest version update 2021
-animal revolt battle simulator latest version features
-animal revolt battle simulator latest version gameplay
-animal revolt battle simulator latest version review
-animal revolt battle simulator latest version cheats
-animal revolt battle simulator latest version tips and tricks
-animal revolt battle simulator latest version guide and walkthrough
-animal revolt battle simulator latest version new animals and maps
-animal revolt battle simulator latest version bug fixes and improvements
-download animal revolt battle simulator for pc windows 10/8/7
-download animal revolt battle simulator for mac os x
-download animal revolt battle simulator for ios iphone/ipad/ipod touch
-download animal revolt battle simulator for linux ubuntu/debian/fedora/mint etc.
-download animal revolt battle simulator for chromebook chrome os
-- Customize the terrain, weather, and time of day to suit your preferences
-- Share your creations with other players online or download their scenarios
-What is the latest version mod apk of Animal Revolt Battle Simulator?
-A modified version of the game that gives you unlimited money, menu, and more
-Benefits of using the mod apk
-- Get unlimited money to buy any animal or weapon you want
-- Access the menu to enable or disable various options such as god mode, slow motion, and gravity
-- Enjoy the game without any ads or in-app purchases
-- Get the latest updates and bug fixes for the game
-How to download and install the mod apk?
-A simple guide to get the mod apk on your device
-- Download the mod apk file from a trusted source such as [GetModsApk]
-- Enable unknown sources on your device settings to allow installation of third-party apps
-- Locate and tap on the downloaded file to start the installation process
-- Follow the instructions on the screen and wait for the installation to complete
-- Launch the game and enjoy the mod features
-Conclusion
-FAQs
-
-
401be4b1e0
-Q: Is Animal Revolt Battle Simulator free to play? A: Yes, Animal Revolt Battle Simulator is free to play. However, it contains ads and in-app purchases that might affect your gameplay. If you want to play without ads or in-app purchases, you can try the mod apk.
-Q: Is Animal Revolt Battle Simulator safe to play? A: Yes, Animal Revolt Battle Simulator is safe to play. It does not contain any viruses or malware that might harm your device. However, you should always download the game and its mod apk from trusted sources such as [GetModsApk] to avoid any risks.
-Q: Is Animal Revolt Battle Simulator compatible with my device? A: Animal Revolt Battle Simulator is compatible with most Android devices that have Android 4.4 or higher. However, some devices might experience lag or crashes due to the high graphics and physics of the game. You can adjust the graphics settings in the game to improve the performance.
-Q: How can I contact the developer of Animal Revolt Battle Simulator? A: You can contact the developer of Animal Revolt Battle Simulator by sending an email to beastbattle.simulator@gmail.com or by visiting their Facebook page at https://www.facebook.com/BeastBattleSimulator/.
-Q: How can I support the developer of Animal Revolt Battle Simulator? A: You can support the developer of Animal Revolt Battle Simulator by rating and reviewing the game on Google Play Store, by sharing the game with your friends and family, or by making in-app purchases in the game.
-
-
\ No newline at end of file
diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Best Reels Music MP3 Download Sites and Apps in 2023.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Best Reels Music MP3 Download Sites and Apps in 2023.md
deleted file mode 100644
index 4e42844ab94f56a63af1cc747cfe574936dbfff3..0000000000000000000000000000000000000000
--- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Best Reels Music MP3 Download Sites and Apps in 2023.md
+++ /dev/null
@@ -1,138 +0,0 @@
-
-How to Download Reels Music MP3
-download reels music mp3
- What is Reels Music?
-
-
- Why Download Reels Music MP3?
-
-
- How to Download Reels Music MP3 from Instagram
-Method 1: Save Reel Audio in Instagram App
-
-
- Method 2: Download Instagram Reel Audio Using Websites
-
-
- How to Download Reels Music MP3 from Other Sources
-Method 3: Download Reels Music from No Copyright Songs Websites
-
-download reels music mp3 songs from instagram
-download reels music mp3 converter app
-download reels music mp3 high quality
-download reels music mp3 without watermark
-download reels music mp3 latest trending
-download reels music mp3 for offline listening
-download reels music mp3 with lyrics
-download reels music mp3 best sites
-download reels music mp3 fast and easy
-download reels music mp3 remixes and mashups
-download reels music mp3 from youtube videos
-download reels music mp3 playlist generator
-download reels music mp3 by genre and mood
-download reels music mp3 unlimited downloads
-download reels music mp3 no registration required
-download reels music mp3 legal and safe
-download reels music mp3 original soundtracks
-download reels music mp3 popular and viral
-download reels music mp3 new releases and updates
-download reels music mp3 ringtone maker
-download reels music mp3 editor and cutter
-download reels music mp3 downloader for pc
-download reels music mp3 downloader for android
-download reels music mp3 downloader for ios
-download reels music mp3 downloader for mac
-download reels music mp3 downloader for windows
-download reels music mp3 downloader for linux
-download reels music mp3 downloader for chrome
-download reels music mp3 downloader for firefox
-download reels music mp3 downloader for safari
-download reels music mp3 downloader for opera
-download reels music mp3 downloader for edge
-download reels music mp3 downloader for brave
-download reels music mp3 downloader for tor
-download reels music mp3 downloader extension
-download reels music mp3 downloader plugin
-download reels music mp3 downloader add-on
-download reels music mp3 downloader software
-download reels music mp3 downloader tool
-
-
-
- Method 4: Convert Reels Videos to MP3 Using Online Converter
-
-
-
-
- Conclusion
-FAQs
-Q1: How to use saved reel audio for creating reels?
-
-
- Q2: How to download reels music mp3 on PC?
-Q3: How to download reels music mp3 without watermark?
-Q4: How to download reels music mp3 in high quality?
-Q5: How to download reels music mp3 with lyrics?
-
-
-
\ No newline at end of file
diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Bitcoin Key Finder How to Recover Your Lost Private Key with This App.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Bitcoin Key Finder How to Recover Your Lost Private Key with This App.md
deleted file mode 100644
index 7b7f91db8a750c934179cd9a7f7ac3043250a110..0000000000000000000000000000000000000000
--- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Bitcoin Key Finder How to Recover Your Lost Private Key with This App.md
+++ /dev/null
@@ -1,122 +0,0 @@
-
-Bitcoin Key Finder APK: A Guide to Finding Your Lost or Forgotten Bitcoin Private Keys
- bitcoin key finder apk
-
-bitcoin key finder software
-bitcoin key finder github
-bitcoin key finder download
-bitcoin key finder tool
-bitcoin key finder android
-bitcoin key finder windows
-bitcoin key finder online
-bitcoin key finder free
-bitcoin key finder crack
-bitcoin private key finder apk
-bitcoin public key finder apk
-bitcoin address key finder apk
-bitcoin wallet key finder apk
-bitcoin recovery key finder apk
-bitcoin private key finder app
-bitcoin public key finder app
-bitcoin address key finder app
-bitcoin wallet key finder app
-bitcoin recovery key finder app
-bitcoin private key finder software
-bitcoin public key finder software
-bitcoin address key finder software
-bitcoin wallet key finder software
-bitcoin recovery key finder software
-bitcoin private key finder github
-bitcoin public key finder github
-bitcoin address key finder github
-bitcoin wallet key finder github
-bitcoin recovery key finder github
-bitcoin private key finder download
-bitcoin public key finder download
-bitcoin address key finder download
-bitcoin wallet key finder download
-bitcoin recovery key finder download
-bitcoin private key finder tool
-bitcoin public key finder tool
-bitcoin address key finder tool
-bitcoin wallet key finder tool
-bitcoin recovery key finder tool
-bitcoin private key finder android
-bitcoin public key finder android
-bitcoin address key finder android
-bitcoin wallet key finder android
-bitcoin recovery key finder android
- A Brief Overview of Some Popular Bitcoin Key Finder APK Apps
- KeyFinder by bitcrafting
- Bitcoin Private Key Finder by NandoryStudioApps
- Bitcoin Private Key Finder on Windows PC by com.gmail.nandorystudioapps.bitcoinprivatekeyfinder
- A Comparison of the Features, Advantages, and Disadvantages of Each App
-
-
-
-
-App Name
-Features
-Advantages
-Disadvantages
-
-
-KeyFinder by bitcrafting
-- Searches through a list of bitcoin addresses with positive balance
-
- Allows users to specify the range of bits to search
- Provides full or mini info output
- Includes a bitcoin puzzle feature
-- Supports a wide range of bitcoin addresses
-
- Gives users more control over the search parameters
- Shows more information about the generated keys
- Offers an extra challenge for bitcoin enthusiasts
-- Requires users to have a list of bitcoin addresses with positive balance
-
- Does not guarantee finding the private key in a reasonable time
- Does not offer any security or encryption for the found keys
-
-Bitcoin Private Key Finder by NandoryStudioApps
-- Finds the private key just by entering the bitcoin address
-
- Guarantees 100% success rate
- Easy to use interface
- Supports cloud backup
-- Simplifies the process of finding the private key
-
- Ensures that the user will find the private key eventually
- Does not require any technical skills or knowledge
- Allows users to save their keys online for future access
-- May take a long time to find the key depending on the address
-
- Does not offer any security or encryption for the found keys
- May not be compatible with all devices or operating systems
-
-Bitcoin Private Key Finder on Windows PC by com.gmail.nandorystudioapps.bitcoinprivatekeyfinder
-Same as Bitcoin Private Key Finder by NandoryStudioApps, but for Windows PC users
-Same as Bitcoin Private Key Finder by NandoryStudioApps, but for Windows PC users
-Same as Bitcoin Private Key Finder by NandoryStudioApps, but for Windows PC users
-Conclusion
- FAQs
- What is a bitcoin private key and why is it important?
- How can I backup or restore my bitcoin wallet using my private key?
- How can I encrypt my bitcoin private key for extra security?
- What are some other ways to find or recover my lost or forgotten bitcoin private key?
-
-
- What are some best practices to protect my bitcoin private key from theft or loss?
-
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Free 3D Models in MAX Format for Any Project.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Free 3D Models in MAX Format for Any Project.md
deleted file mode 100644
index c4fe1549de63ffd7b5ea3ada65f0bbdb30cd1a89..0000000000000000000000000000000000000000
--- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Free 3D Models in MAX Format for Any Project.md
+++ /dev/null
@@ -1,97 +0,0 @@
-
-How to Download Free 3D Models in .max Format
-What is .max Format and Why Use It?
-free download 3d model.max
- Where to Find Free 3D Models in .max Format?
-TurboSquid
-CGTrader
-Autodesk
-How to Download and Use Free 3D Models in .max Format?
-Download the .max file from the source website
-
-free download 3d max models of cars
-free download 3d max models of furniture
-free download 3d max models of animals
-free download 3d max models of characters
-free download 3d max models of plants
-free download 3d max models of weapons
-free download 3d max models of buildings
-free download 3d max models of robots
-free download 3d max models of airplanes
-free download 3d max models of ships
-free download 3d max models of motorcycles
-free download 3d max models of dinosaurs
-free download 3d max models of humans
-free download 3d max models of trees
-free download 3d max models of rocks
-free download 3d max models of landscapes
-free download 3d max models of spaceships
-free download 3d max models of helicopters
-free download 3d max models of trains
-free download 3d max models of tanks
-free download 3d max models of dragons
-free download 3d max models of skulls
-free download 3d max models of guitars
-free download 3d max models of phones
-free download 3d max models of laptops
-free download 3d max models of watches
-free download 3d max models of cameras
-free download 3d max models of clocks
-free download 3d max models of lamps
-free download 3d max models of chairs
-free download 3d max models of tables
-free download 3d max models of sofas
-free download 3d max models of beds
-free download 3d max models of cabinets
-free download 3d max models of shelves
-free download 3d max models of desks
-free download 3d max models of doors
-free download 3d max models of windows
-free download 3d max models of curtains
-free download 3d max models of rugs
-free download 3d max models of pillows
-free download 3d max models of flowers
-free download 3d max models of fruits
-free download 3d max models of vegetables
-free download 3d max models of food
-free download 3d max models of drinks
-free download 3d max models of bottles
-free download 3d max models of glasses
-Open the .max file in 3ds Max or another compatible software
-Edit, animate, render, or export the 3D model as you wish
-Conclusion
-FAQs
-What is the difference between .max and .3ds formats?
-Can I open .max files without 3ds Max?
-How can I convert .max files to other formats?
-Are free 3D models in .max format legal to use?
-What are some tips for using free 3D models in .max format?
-
-
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/Win7-Sp1-32-64-EN-FaXcooL-Iso.md b/spaces/tioseFevbu/cartoon-converter/Win7-Sp1-32-64-EN-FaXcooL-Iso.md
deleted file mode 100644
index ea96ffc64b8c540ff7d6731eb9c32dfd4e674777..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/Win7-Sp1-32-64-EN-FaXcooL-Iso.md
+++ /dev/null
@@ -1,108 +0,0 @@
-## Win7 Sp1 32 64 EN FaXcooL Iso
-
-
-
-
-
-
-
-
-
-**CLICK HERE >>> [https://urluso.com/2tyQuD](https://urluso.com/2tyQuD)**
-
-
-
-
-
-
-
-
-
-
-
-
-
-# How to Download and Install Win7 sp1 32 64 EN faXcooL iso
-
-
-
-If you are looking for a complete and reliable version of Windows 7 that includes all the available editions (except "N" and "E") and is fully activated, you might want to try Win7 sp1 32 64 EN faXcooL iso. This is an ISO image created by faXcooL, a well-known uploader of Windows operating systems. In this article, we will show you how to download and install Win7 sp1 32 64 EN faXcooL iso on your computer.
-
-
-
-## What is Win7 sp1 32 64 EN faXcooL iso?
-
-
-
-Win7 sp1 32 64 EN faXcooL iso is a bootable ISO image that contains the following versions of Windows 7 with Service Pack 1 (SP1) integrated:
-
-
-
-- Windows 7 Starter 32-bit
-
-- Windows 7 Home Basic 32/64-bit
-
-- Windows 7 Home Premium 32/64-bit
-
-- Windows 7 Professional 32/64-bit
-
-- Windows 7 Enterprise 32/64-bit
-
-- Windows 7 Ultimate 32/64-bit
-
-
-
-The ISO image is in English by default and can be burned to a DVD5 disc or a USB flash drive. The installation is activated (no serial needed) and genuine (Windows Update is available). The ISO image also includes some improvements and fixes for Windows 7, such as:
-
-
-
-- Additional support for communication with third-party federation services
-
-- Improved HDMI audio device performance
-
-- Corrected behavior when printing mixed-orientation XPS documents
-
-- Restored previous folders in Windows Explorer after restarting
-
-
-
-## How to Download Win7 sp1 32 64 EN faXcooL iso?
-
-
-
-You can download Win7 sp1 32 64 EN faXcooL iso from the Internet Archive, a non-profit digital library that offers free access to millions of books, movies, software, music, and more. Here are the steps to download Win7 sp1 32 64 EN faXcooL iso from the Internet Archive:
-
-
-
-1. Go to [https://archive.org/details/win-7-sp-1-32-64-en-fa-xcoo-l](https://archive.org/details/win-7-sp-1-32-64-en-fa-xcoo-l)
-
-2. Click on the "DOWNLOAD OPTIONS" section on the right side of the page.
-
-3. Select the file format that suits your needs. The ISO image is available in ZIP, Torrent, or Original formats. The ZIP format is compressed and requires extraction before burning. The Torrent format requires a torrent client to download. The Original format is uncompressed and ready to burn.
-
-4. Click on the file name to start the download. The file size is about 4 GB, so it may take some time depending on your internet speed.
-
-
-
-## How to Install Win7 sp1 32 64 EN faXcooL iso?
-
-
-
-After downloading Win7 sp1 32 64 EN faXcooL iso, you need to burn it to a DVD5 disc or a USB flash drive. You can use any software that can burn ISO images, such as ImgBurn, Rufus, or Windows USB/DVD Download Tool. Here are the steps to install Win7 sp1 32 64 EN faXcooL iso on your computer:
-
-
-
-1. Insert the DVD5 disc or the USB flash drive that contains Win7 sp1 32 64 EN faXcooL iso into your computer.
-
-2. Restart your computer and boot from the DVD5 disc or the USB flash drive. You may need to change the boot order in your BIOS settings to do this.
-
-3. Follow the instructions on the screen to select your language, time and currency format, keyboard or input method, and the edition of Windows 7 that you want to install. 145887f19f
-
-
-
-
-
-
-
-
-
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Download Window 8 Microsoft Iso.md b/spaces/tioseFevbu/cartoon-converter/scripts/Download Window 8 Microsoft Iso.md
deleted file mode 100644
index 73d3daa718f57c519ada14e14a25e51edadab024..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Download Window 8 Microsoft Iso.md
+++ /dev/null
@@ -1,39 +0,0 @@
-
-How to Download Windows 8 Microsoft ISO File
-download window 8 microsoft iso
-
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Gigabyte Intel 4 Series Utility Dvd Ver.2.1 Download.md b/spaces/tioseFevbu/cartoon-converter/scripts/Gigabyte Intel 4 Series Utility Dvd Ver.2.1 Download.md
deleted file mode 100644
index cdc746e0541990bffe36d390beb577a181e7e152..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Gigabyte Intel 4 Series Utility Dvd Ver.2.1 Download.md
+++ /dev/null
@@ -1,30 +0,0 @@
-
-How to Download and Install Gigabyte Intel 4 Series Utility DVD Ver.2.1
-gigabyte intel 4 series utility dvd ver.2.1 download
-Step 1: Download the Gigabyte Intel 4 Series Utility DVD Ver.2.1
-Step 2: Burn the Gigabyte Intel 4 Series Utility DVD Ver.2.1 to a Disc
-Step 3: Install the Gigabyte Intel 4 Series Utility DVD Ver.2.1
-Conclusion
-Benefits of Updating Your Drivers and Utilities
-How to Check Your Drivers and Utilities Version
-
-
-
\ No newline at end of file
diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/distlib/manifest.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/distlib/manifest.py
deleted file mode 100644
index ca0fe442d9ca499466df9438df16eca405c5f102..0000000000000000000000000000000000000000
--- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/distlib/manifest.py
+++ /dev/null
@@ -1,393 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# Copyright (C) 2012-2013 Python Software Foundation.
-# See LICENSE.txt and CONTRIBUTORS.txt.
-#
-"""
-Class representing the list of files in a distribution.
-
-Equivalent to distutils.filelist, but fixes some problems.
-"""
-import fnmatch
-import logging
-import os
-import re
-import sys
-
-from . import DistlibException
-from .compat import fsdecode
-from .util import convert_path
-
-
-__all__ = ['Manifest']
-
-logger = logging.getLogger(__name__)
-
-# a \ followed by some spaces + EOL
-_COLLAPSE_PATTERN = re.compile('\\\\w*\n', re.M)
-_COMMENTED_LINE = re.compile('#.*?(?=\n)|\n(?=$)', re.M | re.S)
-
-#
-# Due to the different results returned by fnmatch.translate, we need
-# to do slightly different processing for Python 2.7 and 3.2 ... this needed
-# to be brought in for Python 3.6 onwards.
-#
-_PYTHON_VERSION = sys.version_info[:2]
-
-class Manifest(object):
- """A list of files built by on exploring the filesystem and filtered by
- applying various patterns to what we find there.
- """
-
- def __init__(self, base=None):
- """
- Initialise an instance.
-
- :param base: The base directory to explore under.
- """
- self.base = os.path.abspath(os.path.normpath(base or os.getcwd()))
- self.prefix = self.base + os.sep
- self.allfiles = None
- self.files = set()
-
- #
- # Public API
- #
-
- def findall(self):
- """Find all files under the base and set ``allfiles`` to the absolute
- pathnames of files found.
- """
- from stat import S_ISREG, S_ISDIR, S_ISLNK
-
- self.allfiles = allfiles = []
- root = self.base
- stack = [root]
- pop = stack.pop
- push = stack.append
-
- while stack:
- root = pop()
- names = os.listdir(root)
-
- for name in names:
- fullname = os.path.join(root, name)
-
- # Avoid excess stat calls -- just one will do, thank you!
- stat = os.stat(fullname)
- mode = stat.st_mode
- if S_ISREG(mode):
- allfiles.append(fsdecode(fullname))
- elif S_ISDIR(mode) and not S_ISLNK(mode):
- push(fullname)
-
- def add(self, item):
- """
- Add a file to the manifest.
-
- :param item: The pathname to add. This can be relative to the base.
- """
- if not item.startswith(self.prefix):
- item = os.path.join(self.base, item)
- self.files.add(os.path.normpath(item))
-
- def add_many(self, items):
- """
- Add a list of files to the manifest.
-
- :param items: The pathnames to add. These can be relative to the base.
- """
- for item in items:
- self.add(item)
-
- def sorted(self, wantdirs=False):
- """
- Return sorted files in directory order
- """
-
- def add_dir(dirs, d):
- dirs.add(d)
- logger.debug('add_dir added %s', d)
- if d != self.base:
- parent, _ = os.path.split(d)
- assert parent not in ('', '/')
- add_dir(dirs, parent)
-
- result = set(self.files) # make a copy!
- if wantdirs:
- dirs = set()
- for f in result:
- add_dir(dirs, os.path.dirname(f))
- result |= dirs
- return [os.path.join(*path_tuple) for path_tuple in
- sorted(os.path.split(path) for path in result)]
-
- def clear(self):
- """Clear all collected files."""
- self.files = set()
- self.allfiles = []
-
- def process_directive(self, directive):
- """
- Process a directive which either adds some files from ``allfiles`` to
- ``files``, or removes some files from ``files``.
-
- :param directive: The directive to process. This should be in a format
- compatible with distutils ``MANIFEST.in`` files:
-
- http://docs.python.org/distutils/sourcedist.html#commands
- """
- # Parse the line: split it up, make sure the right number of words
- # is there, and return the relevant words. 'action' is always
- # defined: it's the first word of the line. Which of the other
- # three are defined depends on the action; it'll be either
- # patterns, (dir and patterns), or (dirpattern).
- action, patterns, thedir, dirpattern = self._parse_directive(directive)
-
- # OK, now we know that the action is valid and we have the
- # right number of words on the line for that action -- so we
- # can proceed with minimal error-checking.
- if action == 'include':
- for pattern in patterns:
- if not self._include_pattern(pattern, anchor=True):
- logger.warning('no files found matching %r', pattern)
-
- elif action == 'exclude':
- for pattern in patterns:
- found = self._exclude_pattern(pattern, anchor=True)
- #if not found:
- # logger.warning('no previously-included files '
- # 'found matching %r', pattern)
-
- elif action == 'global-include':
- for pattern in patterns:
- if not self._include_pattern(pattern, anchor=False):
- logger.warning('no files found matching %r '
- 'anywhere in distribution', pattern)
-
- elif action == 'global-exclude':
- for pattern in patterns:
- found = self._exclude_pattern(pattern, anchor=False)
- #if not found:
- # logger.warning('no previously-included files '
- # 'matching %r found anywhere in '
- # 'distribution', pattern)
-
- elif action == 'recursive-include':
- for pattern in patterns:
- if not self._include_pattern(pattern, prefix=thedir):
- logger.warning('no files found matching %r '
- 'under directory %r', pattern, thedir)
-
- elif action == 'recursive-exclude':
- for pattern in patterns:
- found = self._exclude_pattern(pattern, prefix=thedir)
- #if not found:
- # logger.warning('no previously-included files '
- # 'matching %r found under directory %r',
- # pattern, thedir)
-
- elif action == 'graft':
- if not self._include_pattern(None, prefix=dirpattern):
- logger.warning('no directories found matching %r',
- dirpattern)
-
- elif action == 'prune':
- if not self._exclude_pattern(None, prefix=dirpattern):
- logger.warning('no previously-included directories found '
- 'matching %r', dirpattern)
- else: # pragma: no cover
- # This should never happen, as it should be caught in
- # _parse_template_line
- raise DistlibException(
- 'invalid action %r' % action)
-
- #
- # Private API
- #
-
- def _parse_directive(self, directive):
- """
- Validate a directive.
- :param directive: The directive to validate.
- :return: A tuple of action, patterns, thedir, dir_patterns
- """
- words = directive.split()
- if len(words) == 1 and words[0] not in ('include', 'exclude',
- 'global-include',
- 'global-exclude',
- 'recursive-include',
- 'recursive-exclude',
- 'graft', 'prune'):
- # no action given, let's use the default 'include'
- words.insert(0, 'include')
-
- action = words[0]
- patterns = thedir = dir_pattern = None
-
- if action in ('include', 'exclude',
- 'global-include', 'global-exclude'):
- if len(words) < 2:
- raise DistlibException(
- '%r expects <pattern1> <pattern2> ...' % action)
-
-
-
\ No newline at end of file
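For context on the deleted `process_directive` logic above: a minimal standalone sketch (not distlib's actual implementation) of how `include`/`exclude`-style directives filter a file list with `fnmatch` patterns. The helper name `apply_directive` is illustrative only.

```python
import fnmatch


def apply_directive(files, selected, directive):
    """Apply a simplified MANIFEST.in-style directive.

    files    -- all candidate paths (like Manifest.allfiles)
    selected -- set of currently selected paths (like Manifest.files)
    """
    action, *patterns = directive.split()
    for pattern in patterns:
        if action == "include":            # anchored at the distribution root
            selected |= {f for f in files if fnmatch.fnmatch(f, pattern)}
        elif action == "exclude":
            selected -= {f for f in selected if fnmatch.fnmatch(f, pattern)}
        elif action == "global-include":   # match by basename, anywhere in the tree
            selected |= {f for f in files
                         if fnmatch.fnmatch(f.rsplit("/", 1)[-1], pattern)}
        elif action == "global-exclude":
            selected -= {f for f in selected
                         if fnmatch.fnmatch(f.rsplit("/", 1)[-1], pattern)}
    return selected


files = ["README.md", "pkg/core.py", "pkg/tests/test_core.py", "notes.txt"]
selected = apply_directive(files, set(), "global-include *.py")
selected = apply_directive(files, selected, "exclude pkg/core.py")
print(sorted(selected))  # ['pkg/tests/test_core.py']
```

The real `Manifest` additionally anchors patterns via `fnmatch.translate`-derived regexes and supports `recursive-include`, `graft`, and `prune`; this sketch only shows the selection-set idea.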
diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Cuphead Update V1 1 4-CODEX Key Generator.md b/spaces/usbethFlerru/sovits-modelsV2/example/Cuphead Update V1 1 4-CODEX Key Generator.md
deleted file mode 100644
index 3324019f33e96838c6c691d88d3f016261c826ca..0000000000000000000000000000000000000000
--- a/spaces/usbethFlerru/sovits-modelsV2/example/Cuphead Update V1 1 4-CODEX Key Generator.md
+++ /dev/null
@@ -1,77 +0,0 @@
-
-Cuphead Update v1 1 4-CODEX Key Generator: How to Get the Latest Version of the Game
-
-Cuphead Update v1 1 4-CODEX Key Generator
-
-What is Cuphead Update v1 1 4-CODEX Key Generator?
-
-How to Use Cuphead Update v1 1 4-CODEX Key Generator?
-
-
-
-
-What's New in Cuphead Update v1 1 4-CODEX?
-
-
-
-
-Conclusion
-
-How to Play Cuphead Update v1 1 4-CODEX
-
-
-
-
-Why You Should Play Cuphead Update v1 1 4-CODEX
-
-
-
-
-Conclusion
-
-
-
-
\ No newline at end of file
diff --git a/spaces/user238921933/stable-diffusion-webui/modules/img2img.py b/spaces/user238921933/stable-diffusion-webui/modules/img2img.py
deleted file mode 100644
index 8ddf224fa2b13a32cb51603a55482e0f0783ec72..0000000000000000000000000000000000000000
--- a/spaces/user238921933/stable-diffusion-webui/modules/img2img.py
+++ /dev/null
@@ -1,184 +0,0 @@
-import math
-import os
-import sys
-import traceback
-
-import numpy as np
-from PIL import Image, ImageOps, ImageFilter, ImageEnhance, ImageChops
-
-from modules import devices, sd_samplers
-from modules.generation_parameters_copypaste import create_override_settings_dict
-from modules.processing import Processed, StableDiffusionProcessingImg2Img, process_images
-from modules.shared import opts, state
-import modules.shared as shared
-import modules.processing as processing
-from modules.ui import plaintext_to_html
-import modules.images as images
-import modules.scripts
-
-
-def process_batch(p, input_dir, output_dir, inpaint_mask_dir, args):
- processing.fix_seed(p)
-
- images = shared.listfiles(input_dir)
-
- is_inpaint_batch = False
- if inpaint_mask_dir:
- inpaint_masks = shared.listfiles(inpaint_mask_dir)
- is_inpaint_batch = len(inpaint_masks) > 0
- if is_inpaint_batch:
- print(f"\nInpaint batch is enabled. {len(inpaint_masks)} masks found.")
-
- print(f"Will process {len(images)} images, creating {p.n_iter * p.batch_size} new images for each.")
-
- save_normally = output_dir == ''
-
- p.do_not_save_grid = True
- p.do_not_save_samples = not save_normally
-
- state.job_count = len(images) * p.n_iter
-
- for i, image in enumerate(images):
- state.job = f"{i+1} out of {len(images)}"
- if state.skipped:
- state.skipped = False
-
- if state.interrupted:
- break
-
- img = Image.open(image)
- # Use the EXIF orientation of photos taken by smartphones.
- img = ImageOps.exif_transpose(img)
- p.init_images = [img] * p.batch_size
-
- if is_inpaint_batch:
- # try to find corresponding mask for an image using simple filename matching
- mask_image_path = os.path.join(inpaint_mask_dir, os.path.basename(image))
- # if not found use first one ("same mask for all images" use-case)
- if not mask_image_path in inpaint_masks:
- mask_image_path = inpaint_masks[0]
- mask_image = Image.open(mask_image_path)
- p.image_mask = mask_image
-
- proc = modules.scripts.scripts_img2img.run(p, *args)
- if proc is None:
- proc = process_images(p)
-
- for n, processed_image in enumerate(proc.images):
- filename = os.path.basename(image)
-
- if n > 0:
- left, right = os.path.splitext(filename)
- filename = f"{left}-{n}{right}"
-
- if not save_normally:
- os.makedirs(output_dir, exist_ok=True)
- if processed_image.mode == 'RGBA':
- processed_image = processed_image.convert("RGB")
- processed_image.save(os.path.join(output_dir, filename))
-
-
-def img2img(id_task: str, mode: int, prompt: str, negative_prompt: str, prompt_styles, init_img, sketch, init_img_with_mask, inpaint_color_sketch, inpaint_color_sketch_orig, init_img_inpaint, init_mask_inpaint, steps: int, sampler_index: int, mask_blur: int, mask_alpha: float, inpainting_fill: int, restore_faces: bool, tiling: bool, n_iter: int, batch_size: int, cfg_scale: float, image_cfg_scale: float, denoising_strength: float, seed: int, subseed: int, subseed_strength: float, seed_resize_from_h: int, seed_resize_from_w: int, seed_enable_extras: bool, height: int, width: int, resize_mode: int, inpaint_full_res: bool, inpaint_full_res_padding: int, inpainting_mask_invert: int, img2img_batch_input_dir: str, img2img_batch_output_dir: str, img2img_batch_inpaint_mask_dir: str, override_settings_texts, *args):
- override_settings = create_override_settings_dict(override_settings_texts)
-
- is_batch = mode == 5
-
- if mode == 0: # img2img
- image = init_img.convert("RGB")
- mask = None
- elif mode == 1: # img2img sketch
- image = sketch.convert("RGB")
- mask = None
- elif mode == 2: # inpaint
- image, mask = init_img_with_mask["image"], init_img_with_mask["mask"]
- alpha_mask = ImageOps.invert(image.split()[-1]).convert('L').point(lambda x: 255 if x > 0 else 0, mode='1')
- mask = ImageChops.lighter(alpha_mask, mask.convert('L')).convert('L')
- image = image.convert("RGB")
- elif mode == 3: # inpaint sketch
- image = inpaint_color_sketch
- orig = inpaint_color_sketch_orig or inpaint_color_sketch
- pred = np.any(np.array(image) != np.array(orig), axis=-1)
- mask = Image.fromarray(pred.astype(np.uint8) * 255, "L")
- mask = ImageEnhance.Brightness(mask).enhance(1 - mask_alpha / 100)
- blur = ImageFilter.GaussianBlur(mask_blur)
- image = Image.composite(image.filter(blur), orig, mask.filter(blur))
- image = image.convert("RGB")
- elif mode == 4: # inpaint upload mask
- image = init_img_inpaint
- mask = init_mask_inpaint
- else:
- image = None
- mask = None
-
- # Use the EXIF orientation of photos taken by smartphones.
- if image is not None:
- image = ImageOps.exif_transpose(image)
-
- assert 0. <= denoising_strength <= 1., 'can only work with strength in [0.0, 1.0]'
-
- p = StableDiffusionProcessingImg2Img(
- sd_model=shared.sd_model,
- outpath_samples=opts.outdir_samples or opts.outdir_img2img_samples,
- outpath_grids=opts.outdir_grids or opts.outdir_img2img_grids,
- prompt=prompt,
- negative_prompt=negative_prompt,
- styles=prompt_styles,
- seed=seed,
- subseed=subseed,
- subseed_strength=subseed_strength,
- seed_resize_from_h=seed_resize_from_h,
- seed_resize_from_w=seed_resize_from_w,
- seed_enable_extras=seed_enable_extras,
- sampler_name=sd_samplers.samplers_for_img2img[sampler_index].name,
- batch_size=batch_size,
- n_iter=n_iter,
- steps=steps,
- cfg_scale=cfg_scale,
- width=width,
- height=height,
- restore_faces=restore_faces,
- tiling=tiling,
- init_images=[image],
- mask=mask,
- mask_blur=mask_blur,
- inpainting_fill=inpainting_fill,
- resize_mode=resize_mode,
- denoising_strength=denoising_strength,
- image_cfg_scale=image_cfg_scale,
- inpaint_full_res=inpaint_full_res,
- inpaint_full_res_padding=inpaint_full_res_padding,
- inpainting_mask_invert=inpainting_mask_invert,
- override_settings=override_settings,
- )
-
- p.scripts = modules.scripts.scripts_img2img
- p.script_args = args
-
- if shared.cmd_opts.enable_console_prompts:
- print(f"\nimg2img: {prompt}", file=shared.progress_print_out)
-
- p.extra_generation_params["Mask blur"] = mask_blur
-
- if is_batch:
- assert not shared.cmd_opts.hide_ui_dir_config, "Launched with --hide-ui-dir-config, batch img2img disabled"
-
- process_batch(p, img2img_batch_input_dir, img2img_batch_output_dir, img2img_batch_inpaint_mask_dir, args)
-
- processed = Processed(p, [], p.seed, "")
- else:
- processed = modules.scripts.scripts_img2img.run(p, *args)
- if processed is None:
- processed = process_images(p)
-
- p.close()
-
- shared.total_tqdm.clear()
-
- generation_info_js = processed.js()
- if opts.samples_log_stdout:
- print(generation_info_js)
-
- if opts.do_not_show_images:
- processed.images = []
-
- return processed.images, generation_info_js, plaintext_to_html(processed.info), plaintext_to_html(processed.comments)
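The batch-inpainting code in `process_batch` above pairs each input image with a mask by bare filename, falling back to the first mask when no match exists. That pairing rule in isolation (the helper name `pick_mask` is illustrative, not a webui API):

```python
import os


def pick_mask(image_path, mask_dir, masks):
    """Pair an image with a mask by bare filename; fall back to the first
    mask (the "same mask for all images" case), mirroring the batch logic."""
    candidate = os.path.join(mask_dir, os.path.basename(image_path))
    return candidate if candidate in masks else masks[0]


masks = ["masks/a.png", "masks/shared.png"]
print(pick_mask("inputs/a.png", "masks", masks))  # masks/a.png
print(pick_mask("inputs/z.png", "masks", masks))  # masks/a.png (fallback)
```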
diff --git a/spaces/vaibhavarduino/anime-plus/e4e/models/stylegan2/__init__.py b/spaces/vaibhavarduino/anime-plus/e4e/models/stylegan2/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/README.md b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/README.md
deleted file mode 100644
index 64788bf38b1c1dddeee2e8aef24e9db9eae1004f..0000000000000000000000000000000000000000
--- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Simultaneous Segmented Depth Prediction
-emoji: 🚀
-colorFrom: gray
-colorTo: pink
-sdk: gradio
-sdk_version: 3.36.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/vedalken/text2Pokemon/app.py b/spaces/vedalken/text2Pokemon/app.py
deleted file mode 100644
index a27da376219e38273064023a24c9f7f7c74171ee..0000000000000000000000000000000000000000
--- a/spaces/vedalken/text2Pokemon/app.py
+++ /dev/null
@@ -1,27 +0,0 @@
-import gradio as gr
-
-from service import Service
-
-prompt = ""
-scale = 10
-n_samples = 4
-rows = 2
-cols = 2
-
-service = Service()
-
-
-def generate_pkm(prompt):
- return service.generate(prompt, n_samples=n_samples, rows=rows, cols=cols)
-
-
-with gr.Blocks() as demo:
- gr.Markdown("Write a prompt to generate a Pokemon!")
- prompt = gr.Textbox(label="Prompt")
- image_output = gr.Image()
- generate = gr.Button("Generate")
- generate.click(generate_pkm, inputs=prompt, outputs=image_output)
- gr.Markdown("Models finetuned by: lambdalabs")
-
-if __name__ == "__main__":
- demo.launch()
diff --git a/spaces/vict0rsch/climateGAN/utils_scripts/create_labeled.py b/spaces/vict0rsch/climateGAN/utils_scripts/create_labeled.py
deleted file mode 100644
index 3bf0d02b74a67dd1cace6e0a4ffe778b59ac7f66..0000000000000000000000000000000000000000
--- a/spaces/vict0rsch/climateGAN/utils_scripts/create_labeled.py
+++ /dev/null
@@ -1,25 +0,0 @@
-from pathlib import Path
-from skimage.io import imread, imsave
-import numpy as np
-
-if __name__ == "__main__":
- impath = Path("/Users/victor/Downloads/metrics-v2/imgs")
- labpath = Path("/Users/victor/Downloads/metrics-v2/labels")
- outpath = Path("/Users/victor/Downloads/metrics-v2/labeled")
- outpath.mkdir(exist_ok=True, parents=True)
- ims = sorted(
- [d for d in impath.iterdir() if d.is_file() and not d.name.startswith(".")],
- key=lambda x: x.stem,
- )
- labs = sorted(
- [d for d in labpath.iterdir() if d.is_file() and not d.name.startswith(".")],
- key=lambda x: x.stem.replace("_labeled", ""),
- )
-
- for k, (i, l) in enumerate(zip(ims, labs)):
- print(f"{k + 1} / {len(ims)}", end="\r", flush=True)
- assert i.stem == l.stem.replace("_labeled", "")
- im = imread(i)[:, :, :3]
- la = imread(l)
- ld = (0.7 * im + 0.3 * la).astype(np.uint8)
- imsave(outpath / i.name, ld)
diff --git a/spaces/victorisgeek/SwapFace2Pon/upscaler/RealESRGAN/__init__.py b/spaces/victorisgeek/SwapFace2Pon/upscaler/RealESRGAN/__init__.py
deleted file mode 100644
index 98952fda6e38f8a00b1a6054f9814c0dcf0a4c14..0000000000000000000000000000000000000000
--- a/spaces/victorisgeek/SwapFace2Pon/upscaler/RealESRGAN/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .model import RealESRGAN
\ No newline at end of file
diff --git a/spaces/vinay123/panoptic-segment-anything/GroundingDINO/groundingdino/config/GroundingDINO_SwinB.cfg.py b/spaces/vinay123/panoptic-segment-anything/GroundingDINO/groundingdino/config/GroundingDINO_SwinB.cfg.py
deleted file mode 100644
index f490c4bbd598a35de43d36ceafcbd769e7ff21bf..0000000000000000000000000000000000000000
--- a/spaces/vinay123/panoptic-segment-anything/GroundingDINO/groundingdino/config/GroundingDINO_SwinB.cfg.py
+++ /dev/null
@@ -1,43 +0,0 @@
-batch_size = 1
-modelname = "groundingdino"
-backbone = "swin_B_384_22k"
-position_embedding = "sine"
-pe_temperatureH = 20
-pe_temperatureW = 20
-return_interm_indices = [1, 2, 3]
-backbone_freeze_keywords = None
-enc_layers = 6
-dec_layers = 6
-pre_norm = False
-dim_feedforward = 2048
-hidden_dim = 256
-dropout = 0.0
-nheads = 8
-num_queries = 900
-query_dim = 4
-num_patterns = 0
-num_feature_levels = 4
-enc_n_points = 4
-dec_n_points = 4
-two_stage_type = "standard"
-two_stage_bbox_embed_share = False
-two_stage_class_embed_share = False
-transformer_activation = "relu"
-dec_pred_bbox_embed_share = True
-dn_box_noise_scale = 1.0
-dn_label_noise_ratio = 0.5
-dn_label_coef = 1.0
-dn_bbox_coef = 1.0
-embed_init_tgt = True
-dn_labelbook_size = 2000
-max_text_len = 256
-text_encoder_type = "bert-base-uncased"
-use_text_enhancer = True
-use_fusion_layer = True
-use_checkpoint = True
-use_transformer_ckpt = True
-use_text_cross_attention = True
-text_dropout = 0.0
-fusion_dropout = 0.0
-fusion_droppath = 0.1
-sub_sentence_present = True
diff --git a/spaces/vincentliaw/runwayml-stable-diffusion-v1-5/app.py b/spaces/vincentliaw/runwayml-stable-diffusion-v1-5/app.py
deleted file mode 100644
index a82df332731f067826d3e1ef79fabceffb74d07e..0000000000000000000000000000000000000000
--- a/spaces/vincentliaw/runwayml-stable-diffusion-v1-5/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/runwayml/stable-diffusion-v1-5").launch()
\ No newline at end of file
diff --git a/spaces/vinthony/SadTalker/src/face3d/data/flist_dataset.py b/spaces/vinthony/SadTalker/src/face3d/data/flist_dataset.py
deleted file mode 100644
index c0b6945c80aa756074a5d3c02b9443b15ddcfc57..0000000000000000000000000000000000000000
--- a/spaces/vinthony/SadTalker/src/face3d/data/flist_dataset.py
+++ /dev/null
@@ -1,125 +0,0 @@
-"""This script defines the custom dataset for Deep3DFaceRecon_pytorch
-"""
-
-import os.path
-from data.base_dataset import BaseDataset, get_transform, get_affine_mat, apply_img_affine, apply_lm_affine
-from data.image_folder import make_dataset
-from PIL import Image
-import random
-import util.util as util
-import numpy as np
-import json
-import torch
-from scipy.io import loadmat, savemat
-import pickle
-from util.preprocess import align_img, estimate_norm
-from util.load_mats import load_lm3d
-
-
-def default_flist_reader(flist):
- """
- flist format: impath label\nimpath label\n ... (same as caffe's filelist)
- """
- imlist = []
- with open(flist, 'r') as rf:
- for line in rf.readlines():
- impath = line.strip()
- imlist.append(impath)
-
- return imlist
-
-def jason_flist_reader(flist):
- with open(flist, 'r') as fp:
- info = json.load(fp)
- return info
-
-def parse_label(label):
- return torch.tensor(np.array(label).astype(np.float32))
-
-
-class FlistDataset(BaseDataset):
- """
- It requires one directory to host training images, e.g. '/path/to/data/train'.
- You can train the model with the dataset flag '--dataroot /path/to/data'.
- """
-
- def __init__(self, opt):
- """Initialize this dataset class.
-
- Parameters:
- opt (Option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions
- """
- BaseDataset.__init__(self, opt)
-
- self.lm3d_std = load_lm3d(opt.bfm_folder)
-
- msk_names = default_flist_reader(opt.flist)
- self.msk_paths = [os.path.join(opt.data_root, i) for i in msk_names]
-
- self.size = len(self.msk_paths)
- self.opt = opt
-
- self.name = 'train' if opt.isTrain else 'val'
- if '_' in opt.flist:
- self.name += '_' + opt.flist.split(os.sep)[-1].split('_')[0]
-
-
- def __getitem__(self, index):
- """Return a data point and its metadata information.
-
- Parameters:
- index (int) -- a random integer for data indexing
-
- Returns a dictionary that contains A, B, A_paths and B_paths
- img (tensor) -- an image in the input domain
- msk (tensor) -- its corresponding attention mask
- lm (tensor) -- its corresponding 3d landmarks
- im_paths (str) -- image paths
- aug_flag (bool) -- a flag used to tell whether its raw or augmented
- """
- msk_path = self.msk_paths[index % self.size]  # make sure index is within the range
- img_path = msk_path.replace('mask/', '')
- lm_path = '.'.join(msk_path.replace('mask', 'landmarks').split('.')[:-1]) + '.txt'
-
- raw_img = Image.open(img_path).convert('RGB')
- raw_msk = Image.open(msk_path).convert('RGB')
- raw_lm = np.loadtxt(lm_path).astype(np.float32)
-
- _, img, lm, msk = align_img(raw_img, raw_lm, self.lm3d_std, raw_msk)
-
- aug_flag = self.opt.use_aug and self.opt.isTrain
- if aug_flag:
- img, lm, msk = self._augmentation(img, lm, self.opt, msk)
-
- _, H = img.size
- M = estimate_norm(lm, H)
- transform = get_transform()
- img_tensor = transform(img)
- msk_tensor = transform(msk)[:1, ...]
- lm_tensor = parse_label(lm)
- M_tensor = parse_label(M)
-
-
- return {'imgs': img_tensor,
- 'lms': lm_tensor,
- 'msks': msk_tensor,
- 'M': M_tensor,
- 'im_paths': img_path,
- 'aug_flag': aug_flag,
- 'dataset': self.name}
-
- def _augmentation(self, img, lm, opt, msk=None):
- affine, affine_inv, flip = get_affine_mat(opt, img.size)
- img = apply_img_affine(img, affine_inv)
- lm = apply_lm_affine(lm, affine, flip, img.size)
- if msk is not None:
- msk = apply_img_affine(msk, affine_inv, method=Image.BILINEAR)
- return img, lm, msk
-
-
-
-
- def __len__(self):
- """Return the total number of images in the dataset.
- """
- return self.size
diff --git a/spaces/vumichien/Generate_human_motion/pyrender/pyrender/material.py b/spaces/vumichien/Generate_human_motion/pyrender/pyrender/material.py
deleted file mode 100644
index 3ce9c2d184ed213c84b015e36bea558cd1efc6b7..0000000000000000000000000000000000000000
--- a/spaces/vumichien/Generate_human_motion/pyrender/pyrender/material.py
+++ /dev/null
@@ -1,707 +0,0 @@
-"""Material properties, conforming to the glTF 2.0 standards as specified in
-https://github.com/KhronosGroup/glTF/tree/master/specification/2.0#reference-material
-and
-https://github.com/KhronosGroup/glTF/tree/master/extensions/2.0/Khronos/KHR_materials_pbrSpecularGlossiness
-
-Author: Matthew Matl
-"""
-import abc
-import numpy as np
-import six
-
-from .constants import TexFlags
-from .utils import format_color_vector, format_texture_source
-from .texture import Texture
-
-
-@six.add_metaclass(abc.ABCMeta)
-class Material(object):
- """Base for standard glTF 2.0 materials.
-
- Parameters
- ----------
- name : str, optional
- The user-defined name of this object.
- normalTexture : (n,n,3) float or :class:`Texture`, optional
- A tangent space normal map. The texture contains RGB components in
- linear space. Each texel represents the XYZ components of a normal
- vector in tangent space. Red [0 to 255] maps to X [-1 to 1]. Green
- [0 to 255] maps to Y [-1 to 1]. Blue [128 to 255] maps to Z
- [1/255 to 1]. The normal vectors use OpenGL conventions where +X is
- right and +Y is up. +Z points toward the viewer.
- occlusionTexture : (n,n,1) float or :class:`Texture`, optional
- The occlusion map texture. The occlusion values are sampled from the R
- channel. Higher values indicate areas that should receive full indirect
- lighting and lower values indicate no indirect lighting. These values
- are linear. If other channels are present (GBA), they are ignored for
- occlusion calculations.
- emissiveTexture : (n,n,3) float or :class:`Texture`, optional
- The emissive map controls the color and intensity of the light being
- emitted by the material. This texture contains RGB components in sRGB
- color space. If a fourth component (A) is present, it is ignored.
- emissiveFactor : (3,) float, optional
- The RGB components of the emissive color of the material. These values
- are linear. If an emissiveTexture is specified, this value is
- multiplied with the texel values.
- alphaMode : str, optional
- The material's alpha rendering mode enumeration specifying the
- interpretation of the alpha value of the main factor and texture.
- Allowed Values:
-
- - `"OPAQUE"` The alpha value is ignored and the rendered output is
- fully opaque.
- - `"MASK"` The rendered output is either fully opaque or fully
- transparent depending on the alpha value and the specified alpha
- cutoff value.
- - `"BLEND"` The alpha value is used to composite the source and
- destination areas. The rendered output is combined with the
- background using the normal painting operation (i.e. the Porter
- and Duff over operator).
-
- alphaCutoff : float, optional
- Specifies the cutoff threshold when in MASK mode. If the alpha value is
- greater than or equal to this value then it is rendered as fully
- opaque, otherwise, it is rendered as fully transparent.
- A value greater than 1.0 will render the entire material as fully
- transparent. This value is ignored for other modes.
- doubleSided : bool, optional
- Specifies whether the material is double sided. When this value is
- false, back-face culling is enabled. When this value is true,
- back-face culling is disabled and double sided lighting is enabled.
- smooth : bool, optional
- If True, the material is rendered smoothly by using only one normal
- per vertex and face indexing.
- wireframe : bool, optional
- If True, the material is rendered in wireframe mode.
- """
-
- def __init__(self,
- name=None,
- normalTexture=None,
- occlusionTexture=None,
- emissiveTexture=None,
- emissiveFactor=None,
- alphaMode=None,
- alphaCutoff=None,
- doubleSided=False,
- smooth=True,
- wireframe=False):
-
- # Set defaults
- if alphaMode is None:
- alphaMode = 'OPAQUE'
-
- if alphaCutoff is None:
- alphaCutoff = 0.5
-
- if emissiveFactor is None:
- emissiveFactor = np.zeros(3).astype(np.float32)
-
- self.name = name
- self.normalTexture = normalTexture
- self.occlusionTexture = occlusionTexture
- self.emissiveTexture = emissiveTexture
- self.emissiveFactor = emissiveFactor
- self.alphaMode = alphaMode
- self.alphaCutoff = alphaCutoff
- self.doubleSided = doubleSided
- self.smooth = smooth
- self.wireframe = wireframe
-
- self._tex_flags = None
-
- @property
- def name(self):
- """str : The user-defined name of this object.
- """
- return self._name
-
- @name.setter
- def name(self, value):
- if value is not None:
- value = str(value)
- self._name = value
-
- @property
- def normalTexture(self):
- """(n,n,3) float or :class:`Texture` : The tangent-space normal map.
- """
- return self._normalTexture
-
- @normalTexture.setter
- def normalTexture(self, value):
- # TODO TMP
- self._normalTexture = self._format_texture(value, 'RGB')
- self._tex_flags = None
-
- @property
- def occlusionTexture(self):
- """(n,n,1) float or :class:`Texture` : The ambient occlusion map.
- """
- return self._occlusionTexture
-
- @occlusionTexture.setter
- def occlusionTexture(self, value):
- self._occlusionTexture = self._format_texture(value, 'R')
- self._tex_flags = None
-
- @property
- def emissiveTexture(self):
- """(n,n,3) float or :class:`Texture` : The emission map.
- """
- return self._emissiveTexture
-
- @emissiveTexture.setter
- def emissiveTexture(self, value):
- self._emissiveTexture = self._format_texture(value, 'RGB')
- self._tex_flags = None
-
- @property
- def emissiveFactor(self):
- """(3,) float : Base multiplier for emission colors.
- """
- return self._emissiveFactor
-
- @emissiveFactor.setter
- def emissiveFactor(self, value):
- if value is None:
- value = np.zeros(3)
- self._emissiveFactor = format_color_vector(value, 3)
-
- @property
- def alphaMode(self):
- """str : The mode for blending.
- """
- return self._alphaMode
-
- @alphaMode.setter
- def alphaMode(self, value):
- if value not in set(['OPAQUE', 'MASK', 'BLEND']):
- raise ValueError('Invalid alpha mode {}'.format(value))
- self._alphaMode = value
-
- @property
- def alphaCutoff(self):
- """float : The cutoff threshold in MASK mode.
- """
- return self._alphaCutoff
-
- @alphaCutoff.setter
- def alphaCutoff(self, value):
- if value < 0 or value > 1:
- raise ValueError('Alpha cutoff must be in range [0,1]')
- self._alphaCutoff = float(value)
-
- @property
- def doubleSided(self):
- """bool : Whether the material is double-sided.
- """
- return self._doubleSided
-
- @doubleSided.setter
- def doubleSided(self, value):
- if not isinstance(value, bool):
- raise TypeError('Double sided must be a boolean value')
- self._doubleSided = value
-
- @property
- def smooth(self):
- """bool : Whether to render the mesh smoothly by
- interpolating vertex normals.
- """
- return self._smooth
-
- @smooth.setter
- def smooth(self, value):
- if not isinstance(value, bool):
- raise TypeError('Smooth must be a boolean value')
- self._smooth = value
-
- @property
- def wireframe(self):
- """bool : Whether to render the mesh in wireframe mode.
- """
- return self._wireframe
-
- @wireframe.setter
- def wireframe(self, value):
- if not isinstance(value, bool):
- raise TypeError('Wireframe must be a boolean value')
- self._wireframe = value
-
- @property
- def is_transparent(self):
- """bool : If True, the object is partially transparent.
- """
- return self._compute_transparency()
-
- @property
- def tex_flags(self):
- """int : Texture availability flags.
- """
- if self._tex_flags is None:
- self._tex_flags = self._compute_tex_flags()
- return self._tex_flags
-
- @property
- def textures(self):
- """list of :class:`Texture` : The textures associated with this
- material.
- """
- return self._compute_textures()
-
- def _compute_transparency(self):
- return False
-
- def _compute_tex_flags(self):
- tex_flags = TexFlags.NONE
- if self.normalTexture is not None:
- tex_flags |= TexFlags.NORMAL
- if self.occlusionTexture is not None:
- tex_flags |= TexFlags.OCCLUSION
- if self.emissiveTexture is not None:
- tex_flags |= TexFlags.EMISSIVE
- return tex_flags
-
- def _compute_textures(self):
- all_textures = [
- self.normalTexture, self.occlusionTexture, self.emissiveTexture
- ]
- textures = set([t for t in all_textures if t is not None])
- return textures
-
- def _format_texture(self, texture, target_channels='RGB'):
- """Format a texture as a float32 np array.
- """
- if isinstance(texture, Texture) or texture is None:
- return texture
- else:
- source = format_texture_source(texture, target_channels)
- return Texture(source=source, source_channels=target_channels)
-
-
-class MetallicRoughnessMaterial(Material):
- """A material based on the metallic-roughness material model from
- Physically-Based Rendering (PBR) methodology.
-
- Parameters
- ----------
- name : str, optional
- The user-defined name of this object.
- normalTexture : (n,n,3) float or :class:`Texture`, optional
- A tangent space normal map. The texture contains RGB components in
- linear space. Each texel represents the XYZ components of a normal
- vector in tangent space. Red [0 to 255] maps to X [-1 to 1]. Green
- [0 to 255] maps to Y [-1 to 1]. Blue [128 to 255] maps to Z
- [1/255 to 1]. The normal vectors use OpenGL conventions where +X is
- right and +Y is up. +Z points toward the viewer.
- occlusionTexture : (n,n,1) float or :class:`Texture`, optional
- The occlusion map texture. The occlusion values are sampled from the R
- channel. Higher values indicate areas that should receive full indirect
- lighting and lower values indicate no indirect lighting. These values
- are linear. If other channels are present (GBA), they are ignored for
- occlusion calculations.
- emissiveTexture : (n,n,3) float or :class:`Texture`, optional
- The emissive map controls the color and intensity of the light being
- emitted by the material. This texture contains RGB components in sRGB
- color space. If a fourth component (A) is present, it is ignored.
- emissiveFactor : (3,) float, optional
- The RGB components of the emissive color of the material. These values
- are linear. If an emissiveTexture is specified, this value is
- multiplied with the texel values.
- alphaMode : str, optional
- The material's alpha rendering mode enumeration specifying the
- interpretation of the alpha value of the main factor and texture.
- Allowed Values:
-
- - `"OPAQUE"` The alpha value is ignored and the rendered output is
- fully opaque.
- - `"MASK"` The rendered output is either fully opaque or fully
- transparent depending on the alpha value and the specified alpha
- cutoff value.
- - `"BLEND"` The alpha value is used to composite the source and
- destination areas. The rendered output is combined with the
- background using the normal painting operation (i.e. the Porter
- and Duff over operator).
-
- alphaCutoff : float, optional
- Specifies the cutoff threshold when in MASK mode. If the alpha value is
- greater than or equal to this value then it is rendered as fully
- opaque, otherwise, it is rendered as fully transparent.
- A value greater than 1.0 will render the entire material as fully
- transparent. This value is ignored for other modes.
- doubleSided : bool, optional
- Specifies whether the material is double sided. When this value is
- false, back-face culling is enabled. When this value is true,
- back-face culling is disabled and double sided lighting is enabled.
- smooth : bool, optional
- If True, the material is rendered smoothly by using only one normal
- per vertex and face indexing.
- wireframe : bool, optional
- If True, the material is rendered in wireframe mode.
- baseColorFactor : (4,) float, optional
- The RGBA components of the base color of the material. The fourth
- component (A) is the alpha coverage of the material. The alphaMode
- property specifies how alpha is interpreted. These values are linear.
- If a baseColorTexture is specified, this value is multiplied with the
- texel values.
- baseColorTexture : (n,n,4) float or :class:`Texture`, optional
- The base color texture. This texture contains RGB(A) components in sRGB
- color space. The first three components (RGB) specify the base color of
- the material. If the fourth component (A) is present, it represents the
- alpha coverage of the material. Otherwise, an alpha of 1.0 is assumed.
- The alphaMode property specifies how alpha is interpreted.
- The stored texels must not be premultiplied.
- metallicFactor : float
- The metalness of the material. A value of 1.0 means the material is a
- metal. A value of 0.0 means the material is a dielectric. Values in
- between are for blending between metals and dielectrics such as dirty
- metallic surfaces. This value is linear. If a metallicRoughnessTexture
- is specified, this value is multiplied with the metallic texel values.
- roughnessFactor : float
- The roughness of the material. A value of 1.0 means the material is
- completely rough. A value of 0.0 means the material is completely
- smooth. This value is linear. If a metallicRoughnessTexture is
- specified, this value is multiplied with the roughness texel values.
- metallicRoughnessTexture : (n,n,2) float or :class:`Texture`, optional
- The metallic-roughness texture. The metalness values are sampled from
- the B channel. The roughness values are sampled from the G channel.
- These values are linear. If other channels are present (R or A), they
- are ignored for metallic-roughness calculations.
- """
-
- def __init__(self,
- name=None,
- normalTexture=None,
- occlusionTexture=None,
- emissiveTexture=None,
- emissiveFactor=None,
- alphaMode=None,
- alphaCutoff=None,
- doubleSided=False,
- smooth=True,
- wireframe=False,
- baseColorFactor=None,
- baseColorTexture=None,
- metallicFactor=1.0,
- roughnessFactor=1.0,
- metallicRoughnessTexture=None):
- super(MetallicRoughnessMaterial, self).__init__(
- name=name,
- normalTexture=normalTexture,
- occlusionTexture=occlusionTexture,
- emissiveTexture=emissiveTexture,
- emissiveFactor=emissiveFactor,
- alphaMode=alphaMode,
- alphaCutoff=alphaCutoff,
- doubleSided=doubleSided,
- smooth=smooth,
- wireframe=wireframe
- )
-
- # Set defaults
- if baseColorFactor is None:
- baseColorFactor = np.ones(4).astype(np.float32)
-
- self.baseColorFactor = baseColorFactor
- self.baseColorTexture = baseColorTexture
- self.metallicFactor = metallicFactor
- self.roughnessFactor = roughnessFactor
- self.metallicRoughnessTexture = metallicRoughnessTexture
-
- @property
- def baseColorFactor(self):
-        """(4,) float : The RGBA base color multiplier.
- """
- return self._baseColorFactor
-
- @baseColorFactor.setter
- def baseColorFactor(self, value):
- if value is None:
- value = np.ones(4)
- self._baseColorFactor = format_color_vector(value, 4)
-
- @property
- def baseColorTexture(self):
-        """(n,n,4) float or :class:`Texture` : The base color texture.
- """
- return self._baseColorTexture
-
- @baseColorTexture.setter
- def baseColorTexture(self, value):
- self._baseColorTexture = self._format_texture(value, 'RGBA')
- self._tex_flags = None
-
- @property
- def metallicFactor(self):
- """float : The metalness of the material.
- """
- return self._metallicFactor
-
- @metallicFactor.setter
- def metallicFactor(self, value):
- if value is None:
- value = 1.0
- if value < 0 or value > 1:
- raise ValueError('Metallic factor must be in range [0,1]')
- self._metallicFactor = float(value)
-
- @property
- def roughnessFactor(self):
- """float : The roughness of the material.
- """
-        return self._roughnessFactor
-
- @roughnessFactor.setter
- def roughnessFactor(self, value):
- if value is None:
- value = 1.0
- if value < 0 or value > 1:
- raise ValueError('Roughness factor must be in range [0,1]')
-        self._roughnessFactor = float(value)
-
- @property
- def metallicRoughnessTexture(self):
- """(n,n,2) float or :class:`Texture` : The metallic-roughness texture.
- """
- return self._metallicRoughnessTexture
-
- @metallicRoughnessTexture.setter
- def metallicRoughnessTexture(self, value):
- self._metallicRoughnessTexture = self._format_texture(value, 'GB')
- self._tex_flags = None
-
- def _compute_tex_flags(self):
- tex_flags = super(MetallicRoughnessMaterial, self)._compute_tex_flags()
- if self.baseColorTexture is not None:
- tex_flags |= TexFlags.BASE_COLOR
- if self.metallicRoughnessTexture is not None:
- tex_flags |= TexFlags.METALLIC_ROUGHNESS
- return tex_flags
-
- def _compute_transparency(self):
- if self.alphaMode == 'OPAQUE':
- return False
- cutoff = self.alphaCutoff
- if self.alphaMode == 'BLEND':
- cutoff = 1.0
- if self.baseColorFactor[3] < cutoff:
- return True
- if (self.baseColorTexture is not None and
- self.baseColorTexture.is_transparent(cutoff)):
- return True
- return False
-
- def _compute_textures(self):
- textures = super(MetallicRoughnessMaterial, self)._compute_textures()
- all_textures = [self.baseColorTexture, self.metallicRoughnessTexture]
- all_textures = {t for t in all_textures if t is not None}
- textures |= all_textures
- return textures
-
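The `_compute_transparency` logic above reduces to a small decision rule over the alpha mode, the cutoff, and the base-color alpha. A minimal standalone sketch of that rule (mirroring the deleted code, not a public pyrender API):

```python
def is_transparent(alpha_mode, alpha_cutoff, base_color_alpha):
    """Sketch of MetallicRoughnessMaterial._compute_transparency."""
    if alpha_mode == 'OPAQUE':
        # Alpha is ignored entirely in OPAQUE mode.
        return False
    cutoff = alpha_cutoff
    if alpha_mode == 'BLEND':
        # In BLEND mode any alpha below 1.0 makes the material transparent.
        cutoff = 1.0
    return base_color_alpha < cutoff

print(is_transparent('OPAQUE', 0.5, 0.2))  # False
print(is_transparent('MASK', 0.5, 0.2))    # True
print(is_transparent('BLEND', 0.5, 0.9))   # True
```

The real method additionally consults `baseColorTexture.is_transparent(cutoff)` when a texture is present; the factor-only case above captures the control flow.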
-
-class SpecularGlossinessMaterial(Material):
- """A material based on the specular-glossiness material model from
- Physically-Based Rendering (PBR) methodology.
-
- Parameters
- ----------
- name : str, optional
- The user-defined name of this object.
- normalTexture : (n,n,3) float or :class:`Texture`, optional
- A tangent space normal map. The texture contains RGB components in
- linear space. Each texel represents the XYZ components of a normal
- vector in tangent space. Red [0 to 255] maps to X [-1 to 1]. Green
- [0 to 255] maps to Y [-1 to 1]. Blue [128 to 255] maps to Z
- [1/255 to 1]. The normal vectors use OpenGL conventions where +X is
- right and +Y is up. +Z points toward the viewer.
- occlusionTexture : (n,n,1) float or :class:`Texture`, optional
- The occlusion map texture. The occlusion values are sampled from the R
- channel. Higher values indicate areas that should receive full indirect
- lighting and lower values indicate no indirect lighting. These values
- are linear. If other channels are present (GBA), they are ignored for
- occlusion calculations.
- emissiveTexture : (n,n,3) float or :class:`Texture`, optional
- The emissive map controls the color and intensity of the light being
- emitted by the material. This texture contains RGB components in sRGB
- color space. If a fourth component (A) is present, it is ignored.
- emissiveFactor : (3,) float, optional
- The RGB components of the emissive color of the material. These values
- are linear. If an emissiveTexture is specified, this value is
- multiplied with the texel values.
- alphaMode : str, optional
- The material's alpha rendering mode enumeration specifying the
- interpretation of the alpha value of the main factor and texture.
- Allowed Values:
-
- - `"OPAQUE"` The alpha value is ignored and the rendered output is
- fully opaque.
- - `"MASK"` The rendered output is either fully opaque or fully
- transparent depending on the alpha value and the specified alpha
- cutoff value.
- - `"BLEND"` The alpha value is used to composite the source and
- destination areas. The rendered output is combined with the
- background using the normal painting operation (i.e. the Porter
- and Duff over operator).
-
- alphaCutoff : float, optional
- Specifies the cutoff threshold when in MASK mode. If the alpha value is
- greater than or equal to this value then it is rendered as fully
- opaque, otherwise, it is rendered as fully transparent.
- A value greater than 1.0 will render the entire material as fully
- transparent. This value is ignored for other modes.
- doubleSided : bool, optional
- Specifies whether the material is double sided. When this value is
- false, back-face culling is enabled. When this value is true,
- back-face culling is disabled and double sided lighting is enabled.
- smooth : bool, optional
- If True, the material is rendered smoothly by using only one normal
- per vertex and face indexing.
- wireframe : bool, optional
- If True, the material is rendered in wireframe mode.
- diffuseFactor : (4,) float
- The RGBA components of the reflected diffuse color of the material.
- Metals have a diffuse value of [0.0, 0.0, 0.0]. The fourth component
- (A) is the opacity of the material. The values are linear.
- diffuseTexture : (n,n,4) float or :class:`Texture`, optional
- The diffuse texture. This texture contains RGB(A) components of the
- reflected diffuse color of the material in sRGB color space. If the
- fourth component (A) is present, it represents the alpha coverage of
- the material. Otherwise, an alpha of 1.0 is assumed.
- The alphaMode property specifies how alpha is interpreted.
- The stored texels must not be premultiplied.
- specularFactor : (3,) float
- The specular RGB color of the material. This value is linear.
- glossinessFactor : float
- The glossiness or smoothness of the material. A value of 1.0 means the
- material has full glossiness or is perfectly smooth. A value of 0.0
- means the material has no glossiness or is perfectly rough. This value
- is linear.
- specularGlossinessTexture : (n,n,4) or :class:`Texture`, optional
- The specular-glossiness texture is a RGBA texture, containing the
- specular color (RGB) in sRGB space and the glossiness value (A) in
- linear space.
- """
-
- def __init__(self,
- name=None,
- normalTexture=None,
- occlusionTexture=None,
- emissiveTexture=None,
- emissiveFactor=None,
- alphaMode=None,
- alphaCutoff=None,
- doubleSided=False,
- smooth=True,
- wireframe=False,
- diffuseFactor=None,
- diffuseTexture=None,
- specularFactor=None,
- glossinessFactor=1.0,
- specularGlossinessTexture=None):
- super(SpecularGlossinessMaterial, self).__init__(
- name=name,
- normalTexture=normalTexture,
- occlusionTexture=occlusionTexture,
- emissiveTexture=emissiveTexture,
- emissiveFactor=emissiveFactor,
- alphaMode=alphaMode,
- alphaCutoff=alphaCutoff,
- doubleSided=doubleSided,
- smooth=smooth,
- wireframe=wireframe
- )
-
- # Set defaults
- if diffuseFactor is None:
- diffuseFactor = np.ones(4).astype(np.float32)
- if specularFactor is None:
- specularFactor = np.ones(3).astype(np.float32)
-
- self.diffuseFactor = diffuseFactor
- self.diffuseTexture = diffuseTexture
- self.specularFactor = specularFactor
- self.glossinessFactor = glossinessFactor
- self.specularGlossinessTexture = specularGlossinessTexture
-
- @property
- def diffuseFactor(self):
- """(4,) float : The diffuse base color.
- """
- return self._diffuseFactor
-
- @diffuseFactor.setter
- def diffuseFactor(self, value):
- self._diffuseFactor = format_color_vector(value, 4)
-
- @property
- def diffuseTexture(self):
- """(n,n,4) float or :class:`Texture` : The diffuse map.
- """
- return self._diffuseTexture
-
- @diffuseTexture.setter
- def diffuseTexture(self, value):
- self._diffuseTexture = self._format_texture(value, 'RGBA')
- self._tex_flags = None
-
- @property
- def specularFactor(self):
- """(3,) float : The specular color of the material.
- """
- return self._specularFactor
-
- @specularFactor.setter
- def specularFactor(self, value):
- self._specularFactor = format_color_vector(value, 3)
-
- @property
- def glossinessFactor(self):
- """float : The glossiness of the material.
- """
-        return self._glossinessFactor
-
- @glossinessFactor.setter
- def glossinessFactor(self, value):
- if value < 0 or value > 1:
- raise ValueError('glossiness factor must be in range [0,1]')
- self._glossinessFactor = float(value)
-
- @property
- def specularGlossinessTexture(self):
- """(n,n,4) or :class:`Texture` : The specular-glossiness texture.
- """
- return self._specularGlossinessTexture
-
- @specularGlossinessTexture.setter
- def specularGlossinessTexture(self, value):
-        # The docstring specifies an (n,n,4) RGBA texture (specular RGB,
-        # glossiness A), so format with all four channels.
-        self._specularGlossinessTexture = self._format_texture(value, 'RGBA')
- self._tex_flags = None
-
- def _compute_tex_flags(self):
- flags = super(SpecularGlossinessMaterial, self)._compute_tex_flags()
- if self.diffuseTexture is not None:
- flags |= TexFlags.DIFFUSE
- if self.specularGlossinessTexture is not None:
- flags |= TexFlags.SPECULAR_GLOSSINESS
- return flags
-
- def _compute_transparency(self):
- if self.alphaMode == 'OPAQUE':
- return False
- cutoff = self.alphaCutoff
- if self.alphaMode == 'BLEND':
- cutoff = 1.0
- if self.diffuseFactor[3] < cutoff:
- return True
- if (self.diffuseTexture is not None and
- self.diffuseTexture.is_transparent(cutoff)):
- return True
- return False
-
- def _compute_textures(self):
- textures = super(SpecularGlossinessMaterial, self)._compute_textures()
- all_textures = [self.diffuseTexture, self.specularGlossinessTexture]
- all_textures = {t for t in all_textures if t is not None}
- textures |= all_textures
- return textures
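Both material classes accumulate their texture state into a bitmask via `_compute_tex_flags`, OR-ing one flag per bound texture. A self-contained sketch of that pattern using `enum.IntFlag` (the flag values here are hypothetical stand-ins; the real `TexFlags` constants are defined elsewhere in the module):

```python
from enum import IntFlag, auto

class TexFlags(IntFlag):
    # Hypothetical bit values; the real constants live in the renderer module.
    NONE = 0
    NORMAL = auto()
    OCCLUSION = auto()
    EMISSIVE = auto()
    BASE_COLOR = auto()
    METALLIC_ROUGHNESS = auto()

def compute_tex_flags(normal=None, occlusion=None, base_color=None):
    """OR together one flag per texture that is actually bound."""
    flags = TexFlags.NONE
    if normal is not None:
        flags |= TexFlags.NORMAL
    if occlusion is not None:
        flags |= TexFlags.OCCLUSION
    if base_color is not None:
        flags |= TexFlags.BASE_COLOR
    return flags

flags = compute_tex_flags(normal='tex', base_color='tex')
```

A shader compiler can then branch on individual bits (`flags & TexFlags.NORMAL`) to pick the right program variant, which is why the setters reset `self._tex_flags = None` whenever a texture changes.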
diff --git a/spaces/wangboyi/bingAI/Dockerfile b/spaces/wangboyi/bingAI/Dockerfile
deleted file mode 100644
index 3698c7cb7938e025afc53b18a571ae2961fbdffe..0000000000000000000000000000000000000000
--- a/spaces/wangboyi/bingAI/Dockerfile
+++ /dev/null
@@ -1,34 +0,0 @@
-# Build Stage
-# Use golang:alpine as the base image for the build stage
-FROM golang:alpine AS builder
-
-# Install git so the project can be cloned from GitHub
-RUN apk --no-cache add git
-
-# Clone the go-proxy-bingai project from GitHub into /workspace/app
-RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app
-
-# Set the working directory to the cloned project directory
-WORKDIR /workspace/app
-
-# Build the Go project. -ldflags="-s -w" strips symbols to reduce binary size
-RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go
-
-# Runtime Stage
-# Use the lightweight alpine image as the runtime base image
-FROM alpine
-
-# Set the working directory
-WORKDIR /workspace/app
-
-# Copy the compiled binary from the build stage into the runtime image
-COPY --from=builder /workspace/app/go-proxy-bingai .
-
-# Set the environment variable; the value here is a random placeholder
-ENV Go_Proxy_BingAI_USER_TOKEN_1="kJs8hD92ncMzLaoQWYtX5rG6bE3fZ4iO"
-
-# Expose port 8080
-EXPOSE 8080
-
-# Command to run when the container starts
-CMD ["/workspace/app/go-proxy-bingai"]
\ No newline at end of file
diff --git a/spaces/wisnuarys15/rvc-wisnu5/vc_infer_pipeline.py b/spaces/wisnuarys15/rvc-wisnu5/vc_infer_pipeline.py
deleted file mode 100644
index c26d45068f9b6bf2b194b13c3c89f8a06347c124..0000000000000000000000000000000000000000
--- a/spaces/wisnuarys15/rvc-wisnu5/vc_infer_pipeline.py
+++ /dev/null
@@ -1,306 +0,0 @@
-import numpy as np, parselmouth, torch
-from time import time as ttime
-import torch.nn.functional as F
-from config import x_pad, x_query, x_center, x_max
-import pyworld, os, traceback, faiss
-from scipy import signal
-
-bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000)
-
-
-class VC(object):
- def __init__(self, tgt_sr, device, is_half):
-        self.sr = 16000  # Hubert input sample rate
-        self.window = 160  # samples per frame
-        self.t_pad = self.sr * x_pad  # padding before/after each segment
-        self.t_pad_tgt = tgt_sr * x_pad
-        self.t_pad2 = self.t_pad * 2
-        self.t_query = self.sr * x_query  # search radius around each cut point
-        self.t_center = self.sr * x_center  # spacing between candidate cut points
-        self.t_max = self.sr * x_max  # duration threshold beyond which the audio is split
- self.device = device
- self.is_half = is_half
-
- def get_f0(self, x, p_len, f0_up_key, f0_method, inp_f0=None):
- time_step = self.window / self.sr * 1000
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
- if f0_method == "pm":
- f0 = (
- parselmouth.Sound(x, self.sr)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=f0_min,
- pitch_ceiling=f0_max,
- )
- .selected_array["frequency"]
- )
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(
- f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant"
- )
- elif f0_method == "harvest":
- f0, t = pyworld.harvest(
- x.astype(np.double),
- fs=self.sr,
- f0_ceil=f0_max,
- f0_floor=f0_min,
- frame_period=10,
- )
- f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr)
- f0 = signal.medfilt(f0, 3)
- f0 *= pow(2, f0_up_key / 12)
- # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
-        tf0 = self.sr // self.window  # f0 frames per second
- if inp_f0 is not None:
- delta_t = np.round(
- (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1
- ).astype("int16")
- replace_f0 = np.interp(
- list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1]
- )
- shape = f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)].shape[0]
- f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)] = replace_f0[:shape]
- # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
- f0bak = f0.copy()
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
- f0_mel_max - f0_mel_min
- ) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
-        f0_coarse = np.rint(f0_mel).astype(int)
- return f0_coarse, f0bak # 1-0
-
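The tail of `get_f0` above quantizes pitch in Hz to coarse 1–255 mel-scale bins (with 0 Hz, i.e. unvoiced frames, pinned to bin 1). A minimal numpy sketch of that mapping, with the same constants as the deleted code:

```python
import numpy as np

def quantize_f0(f0, f0_min=50.0, f0_max=1100.0):
    """Map f0 in Hz to coarse mel-scale bins in [1, 255]; 0 Hz stays bin 1."""
    f0_mel_min = 1127 * np.log(1 + f0_min / 700)
    f0_mel_max = 1127 * np.log(1 + f0_max / 700)
    f0_mel = 1127 * np.log(1 + f0 / 700)
    # Rescale voiced frames onto [1, 255], then clamp the boundaries.
    f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
        f0_mel_max - f0_mel_min
    ) + 1
    f0_mel[f0_mel <= 1] = 1
    f0_mel[f0_mel > 255] = 255
    return np.rint(f0_mel).astype(int)

coarse = quantize_f0(np.array([0.0, 50.0, 440.0, 1100.0]))
print(coarse)  # unvoiced and f0_min map to 1; f0_max maps to 255
```

The coarse bins feed an embedding table in the synthesis network, while the unquantized copy (`f0bak`) is kept for the actual pitch contour.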
- def vc(
- self,
- model,
- net_g,
- sid,
- audio0,
- pitch,
- pitchf,
- times,
- index,
- big_npy,
- index_rate,
- ): # ,file_index,file_big_npy
- feats = torch.from_numpy(audio0)
- if self.is_half:
- feats = feats.half()
- else:
- feats = feats.float()
- if feats.dim() == 2: # double channels
- feats = feats.mean(-1)
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False)
-
- inputs = {
- "source": feats.to(self.device),
- "padding_mask": padding_mask,
- "output_layer": 9, # layer 9
- }
- t0 = ttime()
- with torch.no_grad():
- logits = model.extract_features(**inputs)
- feats = model.final_proj(logits[0])
-
-        if (
-            index is not None
-            and big_npy is not None
-            and index_rate != 0
-        ):
- npy = feats[0].cpu().numpy()
- if self.is_half:
- npy = npy.astype("float32")
- _, I = index.search(npy, 1)
- npy = big_npy[I.squeeze()]
- if self.is_half:
- npy = npy.astype("float16")
- feats = (
- torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate
- + (1 - index_rate) * feats
- )
-
- feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
- t1 = ttime()
- p_len = audio0.shape[0] // self.window
- if feats.shape[1] < p_len:
- p_len = feats.shape[1]
-        if pitch is not None and pitchf is not None:
- pitch = pitch[:, :p_len]
- pitchf = pitchf[:, :p_len]
- p_len = torch.tensor([p_len], device=self.device).long()
- with torch.no_grad():
-            if pitch is not None and pitchf is not None:
- audio1 = (
- (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0] * 32768)
- .data.cpu()
- .float()
- .numpy()
- .astype(np.int16)
- )
- else:
- audio1 = (
- (net_g.infer(feats, p_len, sid)[0][0, 0] * 32768)
- .data.cpu()
- .float()
- .numpy()
- .astype(np.int16)
- )
- del feats, p_len, padding_mask
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- t2 = ttime()
- times[0] += t1 - t0
- times[2] += t2 - t1
- return audio1
-
- def pipeline(
- self,
- model,
- net_g,
- sid,
- audio,
- times,
- f0_up_key,
- f0_method,
- file_index,
- file_big_npy,
- index_rate,
- if_f0,
- f0_file=None,
- ):
-        if (
-            file_big_npy != ""
-            and file_index != ""
-            and os.path.exists(file_big_npy)
-            and os.path.exists(file_index)
-            and index_rate != 0
-        ):
- try:
- index = faiss.read_index(file_index)
- big_npy = np.load(file_big_npy)
-            except Exception:
- traceback.print_exc()
- index = big_npy = None
- else:
- index = big_npy = None
- print("Feature retrieval library doesn't exist or ratio is 0")
- audio = signal.filtfilt(bh, ah, audio)
- audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect")
- opt_ts = []
- if audio_pad.shape[0] > self.t_max:
- audio_sum = np.zeros_like(audio)
- for i in range(self.window):
- audio_sum += audio_pad[i : i - self.window]
- for t in range(self.t_center, audio.shape[0], self.t_center):
- opt_ts.append(
- t
- - self.t_query
- + np.where(
- np.abs(audio_sum[t - self.t_query : t + self.t_query])
- == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min()
- )[0][0]
- )
- s = 0
- audio_opt = []
- t = None
- t1 = ttime()
- audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect")
- p_len = audio_pad.shape[0] // self.window
- inp_f0 = None
-        if hasattr(f0_file, "name"):
- try:
- with open(f0_file.name, "r") as f:
- lines = f.read().strip("\n").split("\n")
- inp_f0 = []
- for line in lines:
- inp_f0.append([float(i) for i in line.split(",")])
- inp_f0 = np.array(inp_f0, dtype="float32")
-            except Exception:
- traceback.print_exc()
- sid = torch.tensor(sid, device=self.device).unsqueeze(0).long()
- pitch, pitchf = None, None
- if if_f0 == 1:
- pitch, pitchf = self.get_f0(audio_pad, p_len, f0_up_key, f0_method, inp_f0)
- pitch = pitch[:p_len]
- pitchf = pitchf[:p_len]
- pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long()
- pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float()
- t2 = ttime()
- times[1] += t2 - t1
- for t in opt_ts:
- t = t // self.window * self.window
- if if_f0 == 1:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[s : t + self.t_pad2 + self.window],
- pitch[:, s // self.window : (t + self.t_pad2) // self.window],
- pitchf[:, s // self.window : (t + self.t_pad2) // self.window],
- times,
- index,
- big_npy,
- index_rate,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- else:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[s : t + self.t_pad2 + self.window],
- None,
- None,
- times,
- index,
- big_npy,
- index_rate,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- s = t
- if if_f0 == 1:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[t:],
- pitch[:, t // self.window :] if t is not None else pitch,
- pitchf[:, t // self.window :] if t is not None else pitchf,
- times,
- index,
- big_npy,
- index_rate,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- else:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[t:],
- None,
- None,
- times,
- index,
- big_npy,
- index_rate,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- audio_opt = np.concatenate(audio_opt)
- del pitch, pitchf, sid
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- return audio_opt
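When the input is longer than `t_max`, `pipeline` above splits it at quiet points: around each candidate center it searches ±`t_query` samples of the sliding-window energy and cuts at the minimum. A numpy sketch of that selection, assuming a precomputed windowed sum like `audio_sum`:

```python
import numpy as np

def pick_cut_point(audio_sum, t, t_query):
    """Return the index near t where |audio_sum| is minimal (mirrors opt_ts)."""
    window = np.abs(audio_sum[t - t_query : t + t_query])
    # np.argmin returns the first minimum, matching np.where(... == .min())[0][0]
    return t - t_query + int(np.argmin(window))

audio_sum = np.array([5.0, 4.0, 3.0, 0.5, 2.0, 6.0, 7.0, 8.0])
cut = pick_cut_point(audio_sum, t=4, t_query=2)
print(cut)  # 3: the quietest sample in the window [2, 6)
```

Cutting at low-energy samples keeps the per-chunk conversions in `vc` from introducing audible seams when the chunks are concatenated at the end.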
diff --git a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/models/mudeep.py b/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/models/mudeep.py
deleted file mode 100644
index ddbca675b69fcf38523d8687d8c7b279ededd8d1..0000000000000000000000000000000000000000
--- a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/models/mudeep.py
+++ /dev/null
@@ -1,206 +0,0 @@
-from __future__ import division, absolute_import
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-__all__ = ['MuDeep']
-
-
-class ConvBlock(nn.Module):
- """Basic convolutional block.
-
- convolution + batch normalization + relu.
-
- Args:
- in_c (int): number of input channels.
- out_c (int): number of output channels.
- k (int or tuple): kernel size.
- s (int or tuple): stride.
- p (int or tuple): padding.
- """
-
- def __init__(self, in_c, out_c, k, s, p):
- super(ConvBlock, self).__init__()
- self.conv = nn.Conv2d(in_c, out_c, k, stride=s, padding=p)
- self.bn = nn.BatchNorm2d(out_c)
-
- def forward(self, x):
- return F.relu(self.bn(self.conv(x)))
-
-
-class ConvLayers(nn.Module):
- """Preprocessing layers."""
-
- def __init__(self):
- super(ConvLayers, self).__init__()
- self.conv1 = ConvBlock(3, 48, k=3, s=1, p=1)
- self.conv2 = ConvBlock(48, 96, k=3, s=1, p=1)
- self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
-
- def forward(self, x):
- x = self.conv1(x)
- x = self.conv2(x)
- x = self.maxpool(x)
- return x
-
-
-class MultiScaleA(nn.Module):
- """Multi-scale stream layer A (Sec.3.1)"""
-
- def __init__(self):
- super(MultiScaleA, self).__init__()
- self.stream1 = nn.Sequential(
- ConvBlock(96, 96, k=1, s=1, p=0),
- ConvBlock(96, 24, k=3, s=1, p=1),
- )
- self.stream2 = nn.Sequential(
- nn.AvgPool2d(kernel_size=3, stride=1, padding=1),
- ConvBlock(96, 24, k=1, s=1, p=0),
- )
- self.stream3 = ConvBlock(96, 24, k=1, s=1, p=0)
- self.stream4 = nn.Sequential(
- ConvBlock(96, 16, k=1, s=1, p=0),
- ConvBlock(16, 24, k=3, s=1, p=1),
- ConvBlock(24, 24, k=3, s=1, p=1),
- )
-
- def forward(self, x):
- s1 = self.stream1(x)
- s2 = self.stream2(x)
- s3 = self.stream3(x)
- s4 = self.stream4(x)
- y = torch.cat([s1, s2, s3, s4], dim=1)
- return y
-
-
-class Reduction(nn.Module):
- """Reduction layer (Sec.3.1)"""
-
- def __init__(self):
- super(Reduction, self).__init__()
- self.stream1 = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
- self.stream2 = ConvBlock(96, 96, k=3, s=2, p=1)
- self.stream3 = nn.Sequential(
- ConvBlock(96, 48, k=1, s=1, p=0),
- ConvBlock(48, 56, k=3, s=1, p=1),
- ConvBlock(56, 64, k=3, s=2, p=1),
- )
-
- def forward(self, x):
- s1 = self.stream1(x)
- s2 = self.stream2(x)
- s3 = self.stream3(x)
- y = torch.cat([s1, s2, s3], dim=1)
- return y
-
-
-class MultiScaleB(nn.Module):
- """Multi-scale stream layer B (Sec.3.1)"""
-
- def __init__(self):
- super(MultiScaleB, self).__init__()
- self.stream1 = nn.Sequential(
- nn.AvgPool2d(kernel_size=3, stride=1, padding=1),
- ConvBlock(256, 256, k=1, s=1, p=0),
- )
- self.stream2 = nn.Sequential(
- ConvBlock(256, 64, k=1, s=1, p=0),
- ConvBlock(64, 128, k=(1, 3), s=1, p=(0, 1)),
- ConvBlock(128, 256, k=(3, 1), s=1, p=(1, 0)),
- )
- self.stream3 = ConvBlock(256, 256, k=1, s=1, p=0)
- self.stream4 = nn.Sequential(
- ConvBlock(256, 64, k=1, s=1, p=0),
- ConvBlock(64, 64, k=(1, 3), s=1, p=(0, 1)),
- ConvBlock(64, 128, k=(3, 1), s=1, p=(1, 0)),
- ConvBlock(128, 128, k=(1, 3), s=1, p=(0, 1)),
- ConvBlock(128, 256, k=(3, 1), s=1, p=(1, 0)),
- )
-
- def forward(self, x):
- s1 = self.stream1(x)
- s2 = self.stream2(x)
- s3 = self.stream3(x)
- s4 = self.stream4(x)
- return s1, s2, s3, s4
-
-
-class Fusion(nn.Module):
- """Saliency-based learning fusion layer (Sec.3.2)"""
-
- def __init__(self):
- super(Fusion, self).__init__()
- self.a1 = nn.Parameter(torch.rand(1, 256, 1, 1))
- self.a2 = nn.Parameter(torch.rand(1, 256, 1, 1))
- self.a3 = nn.Parameter(torch.rand(1, 256, 1, 1))
- self.a4 = nn.Parameter(torch.rand(1, 256, 1, 1))
-
- # We add an average pooling layer to reduce the spatial dimension
- # of feature maps, which differs from the original paper.
- self.avgpool = nn.AvgPool2d(kernel_size=4, stride=4, padding=0)
-
- def forward(self, x1, x2, x3, x4):
- s1 = self.a1.expand_as(x1) * x1
- s2 = self.a2.expand_as(x2) * x2
- s3 = self.a3.expand_as(x3) * x3
- s4 = self.a4.expand_as(x4) * x4
- y = self.avgpool(s1 + s2 + s3 + s4)
- return y
-
-
-class MuDeep(nn.Module):
- """Multiscale deep neural network.
-
- Reference:
- Qian et al. Multi-scale Deep Learning Architectures
- for Person Re-identification. ICCV 2017.
-
- Public keys:
- - ``mudeep``: Multiscale deep neural network.
- """
-
- def __init__(self, num_classes, loss='softmax', **kwargs):
- super(MuDeep, self).__init__()
- self.loss = loss
-
- self.block1 = ConvLayers()
- self.block2 = MultiScaleA()
- self.block3 = Reduction()
- self.block4 = MultiScaleB()
- self.block5 = Fusion()
-
- # Due to this fully connected layer, input image has to be fixed
- # in shape, i.e. (3, 256, 128), such that the last convolutional feature
- # maps are of shape (256, 16, 8). If input shape is changed,
- # the input dimension of this layer has to be changed accordingly.
- self.fc = nn.Sequential(
- nn.Linear(256 * 16 * 8, 4096),
- nn.BatchNorm1d(4096),
- nn.ReLU(),
- )
- self.classifier = nn.Linear(4096, num_classes)
- self.feat_dim = 4096
-
- def featuremaps(self, x):
- x = self.block1(x)
- x = self.block2(x)
- x = self.block3(x)
- x = self.block4(x)
- x = self.block5(*x)
- return x
-
- def forward(self, x):
- x = self.featuremaps(x)
- x = x.view(x.size(0), -1)
- x = self.fc(x)
- y = self.classifier(x)
-
- if not self.training:
- return x
-
- if self.loss == 'softmax':
- return y
- elif self.loss == 'triplet':
- return y, x
- else:
- raise KeyError('Unsupported loss: {}'.format(self.loss))
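The `Fusion` layer above combines the four streams by scaling each with a learned per-channel saliency weight and summing. A numpy sketch of the same weighted sum, with fixed weights standing in for the learned `nn.Parameter` tensors:

```python
import numpy as np

def fuse(streams, weights):
    """Saliency-weighted fusion: sum_i w_i * x_i over the streams (sketch)."""
    out = np.zeros_like(streams[0])
    for w, x in zip(weights, streams):
        # w has shape (1, C, 1, 1) and broadcasts over the spatial dims,
        # like a1.expand_as(x1) * x1 in the torch implementation.
        out = out + w * x
    return out

# Four (1, 256, 16, 8) feature maps, equally weighted at 0.25 each.
x = [np.ones((1, 256, 16, 8)) for _ in range(4)]
w = [np.full((1, 256, 1, 1), 0.25) for _ in range(4)]
y = fuse(x, w)
print(y.shape)  # (1, 256, 16, 8)
```

In the real module the fused map is then average-pooled and flattened into the `256 * 16 * 8` input expected by `self.fc`, which is why the input image shape must stay fixed at (3, 256, 128).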
diff --git a/spaces/xiaobaiyuan/theme_land/theme_dropdown.py b/spaces/xiaobaiyuan/theme_land/theme_dropdown.py
deleted file mode 100644
index 6235388fd00549553df44028f3ccf03e946994ea..0000000000000000000000000000000000000000
--- a/spaces/xiaobaiyuan/theme_land/theme_dropdown.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import os
-import pathlib
-
-from gradio.themes.utils import ThemeAsset
-
-
-def create_theme_dropdown():
- import gradio as gr
-
- asset_path = pathlib.Path(__file__).parent / "themes"
- themes = []
- for theme_asset in os.listdir(str(asset_path)):
- themes.append(
- (ThemeAsset(theme_asset), gr.Theme.load(str(asset_path / theme_asset)))
- )
-
- def make_else_if(theme_asset):
- return f"""
- else if (theme == '{str(theme_asset[0].version)}') {{
- var theme_css = `{theme_asset[1]._get_theme_css()}`
- }}"""
-
- head, tail = themes[0], themes[1:]
- if_statement = f"""
- if (theme == "{str(head[0].version)}") {{
- var theme_css = `{head[1]._get_theme_css()}`
- }} {" ".join(make_else_if(t) for t in tail)}
- """
-
- latest_to_oldest = sorted([t[0] for t in themes], key=lambda asset: asset.version)[
- ::-1
- ]
- latest_to_oldest = [str(t.version) for t in latest_to_oldest]
-
- component = gr.Dropdown(
- choices=latest_to_oldest,
- value=latest_to_oldest[0],
- render=False,
- label="Select Version",
- ).style(container=False)
-
- return (
- component,
- f"""
- (theme) => {{
- if (!document.querySelector('.theme-css')) {{
- var theme_elem = document.createElement('style');
- theme_elem.classList.add('theme-css');
- document.head.appendChild(theme_elem);
- }} else {{
- var theme_elem = document.querySelector('.theme-css');
- }}
- {if_statement}
- theme_elem.innerHTML = theme_css;
- }}
- """,
- )
diff --git a/spaces/xiaoyeAI/clewd/lib/clewd-utils.js b/spaces/xiaoyeAI/clewd/lib/clewd-utils.js
deleted file mode 100644
index 94183a5aa8a259f1740fbd7573052260b1aa8edb..0000000000000000000000000000000000000000
--- a/spaces/xiaoyeAI/clewd/lib/clewd-utils.js
+++ /dev/null
@@ -1,111 +0,0 @@
-/*
-* https://gitgud.io/ahsk/clewd
-* https://github.com/h-a-s-k/clewd
-*/
-'use strict';
-
-const {randomInt, randomBytes} = require('node:crypto'), {version: Version} = require('../package.json'), Encoder = (new TextDecoder,
-new TextEncoder), Main = 'clewd v' + Version + ' (modified by tera)', Replacements = {
- user: 'Human',
- assistant: 'Assistant',
- system: '',
- example_user: 'H',
- example_assistant: 'A'
-}, DangerChars = [ ...new Set([ ...Object.values(Replacements).join(''), ...'\n', ...':', ...'\\n' ]) ].filter((char => ' ' !== char)).sort(), AI = {
- end: () => Buffer.from([ 104, 116, 116, 112, 115, 58, 47, 47, 99, 108, 97, 117, 100, 101, 46, 97, 105 ]).toString(),
- mdl: () => Buffer.from([ 99, 108, 97, 117, 100, 101, 45, 50 ]).toString(),
- zone: () => Buffer.from([ 65, 109, 101, 114, 105, 99, 97, 47, 78, 101, 119, 95, 89, 111, 114, 107 ]).toString(),
- agent: () => Buffer.from([ 77, 111, 122, 105, 108, 108, 97, 47, 53, 46, 48, 32, 40, 77, 97, 99, 105, 110, 116, 111, 115, 104, 59, 32, 73, 110, 116, 101, 108, 32, 77, 97, 99, 32, 79, 83, 32, 88, 32, 49, 48, 95, 49, 53, 95, 55, 41, 32, 65, 112, 112, 108, 101, 87, 101, 98, 75, 105, 116, 47, 53, 51, 55, 46, 51, 54, 32, 40, 75, 72, 84, 77, 76, 44, 32, 108, 105, 107, 101, 32, 71, 101, 99, 107, 111, 41, 32, 67, 104, 114, 111, 109, 101, 47, 49, 49, 52, 46, 48, 46, 48, 46, 48, 32, 83, 97, 102, 97, 114, 105, 47, 53, 51, 55, 46, 51, 54, 32, 69, 100, 103, 47, 49, 49, 52, 46, 48, 46, 49, 56, 50, 51, 46, 55, 57 ]).toString(),
- cp: () => Buffer.from([ 55, 55, 49, 44, 52, 56, 54, 53, 45, 52, 56, 54, 54, 45, 52, 56, 54, 55, 45, 52, 57, 49, 57, 53, 45, 52, 57, 49, 57, 57, 45, 52, 57, 49, 57, 54, 45, 52, 57, 50, 48, 48, 45, 53, 50, 51, 57, 51, 45, 53, 50, 51, 57, 50, 45, 52, 57, 49, 55, 49, 45, 52, 57, 49, 55, 50, 45, 49, 53, 54, 45, 49, 53, 55, 45, 52, 55, 45, 53, 51, 44, 48, 45, 50, 51, 45, 54, 53, 50, 56, 49, 45, 49, 48, 45, 49, 49, 45, 51, 53, 45, 49, 54, 45, 53, 45, 49, 51, 45, 49, 56, 45, 53, 49, 45, 52, 53, 45, 52, 51, 45, 50, 55, 45, 49, 55, 53, 49, 51, 45, 50, 49, 44, 50, 57, 45, 50, 51, 45, 50, 52, 44, 48 ]).toString(),
- hdr: refPath => ({
- 'Content-Type': 'application/json',
- Referer: `${AI.end()}/${refPath ? 'chat/' + refPath : ''}`,
- Origin: '' + AI.end()
- })
-}, indexOfH = (text, last = false) => {
- let location = -1;
- const matchesH = text.match(/(?:(?:\\n)|\n){2}((?:Human|H): ?)/gm);
- matchesH?.length > 0 && (location = last ? text.lastIndexOf(matchesH[matchesH.length - 1]) : text.indexOf(matchesH[0]));
- return location;
-}, indexOfA = (text, last = false) => {
- let location = -1;
- const matchesA = text.match(/(?:(?:\\n)|\n){2}((?:Assistant|A): ?)/gm);
- matchesA?.length > 0 && (location = last ? text.lastIndexOf(matchesA[matchesA.length - 1]) : text.indexOf(matchesA[0]));
- return location;
-};
-
-module.exports.encodeDataJSON = completion => Encoder.encode(`data: ${JSON.stringify(completion)}\n\n`);
-
-module.exports.genericFixes = text => text.replace(/(\r\n|\r|\\n)/gm, '\n');
-
-module.exports.Replacements = Replacements;
-
-module.exports.DangerChars = DangerChars;
-
-module.exports.checkResErr = async (res, throwIt = true) => {
- let err, json, errAPI;
- if ('string' == typeof res) {
- json = JSON.parse(res);
- errAPI = json.error;
- err = Error(errAPI.message);
- } else if (res.status < 200 || res.status >= 300) {
- json = await res.json();
- errAPI = json.error;
- err = Error('Unexpected response code: ' + (res.status || json.status));
- }
- if (errAPI) {
- err.status = res.status || json.status;
- err.planned = true;
- errAPI.message && (err.message = errAPI.message);
- errAPI.type && (err.type = errAPI.type);
- if ((429 === res.status || 429 === json.status) && errAPI.resets_at) {
- const hours = ((new Date(1e3 * errAPI.resets_at).getTime() - Date.now()) / 1e3 / 60 / 60).toFixed(1);
- err.message += `, expires in ${hours} hours`;
- }
- if (throwIt) {
- throw err;
- }
- }
- return err;
-};
-
-module.exports.bytesToSize = (bytes = 0) => {
- const b = [ 'B', 'KB', 'MB', 'GB', 'TB' ];
- if (0 === bytes) {
- return '0 B';
- }
- const c = Math.min(Math.floor(Math.log(bytes) / Math.log(1024)), 4);
- return 0 === c ? `${bytes} ${b[c]}` : `${(bytes / 1024 ** c).toFixed(1)} ${b[c]}`;
-};
-
-module.exports.indexOfAny = (text, last = false) => {
- let location = -1;
- const fakes = [ indexOfH(text, last), indexOfA(text, last) ].filter((idx => idx > -1)).sort(((a, b) => a - b));
- location = last ? fakes.reverse()[0] : fakes[0];
- return isNaN(location) ? -1 : location;
-};
-
-module.exports.cleanJSON = json => json.replace(/^data: {/gim, '{').replace(/\s+$/gim, '');
-
-module.exports.fileName = () => {
- const len = randomInt(5, 15);
- let name = randomBytes(len).toString('hex');
- for (let i = 0; i < name.length; i++) {
- const char = name.charAt(i);
- isNaN(char) && randomInt(1, 5) % 2 == 0 && ' ' !== name.charAt(i - 1) && (name = name.slice(0, i) + ' ' + name.slice(i));
- }
- return name + '.txt';
-};
-
-module.exports.indexOfA = indexOfA;
-
-module.exports.indexOfH = indexOfH;
-
-module.exports.setTitle = title => {
- title = `${Main} - ${title}`;
- process.title !== title && (process.title = title);
-};
-
-module.exports.Main = Main;
-
-module.exports.AI = AI;
\ No newline at end of file
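`clewd-utils.js` above hides its endpoint and model name as raw byte arrays and decodes them at call time with `Buffer.from([...]).toString()`. The same decode step in Python, using the byte values copied from the `AI.end` and `AI.mdl` entries above:

```python
def decode(byte_values):
    # Equivalent of Buffer.from([...]).toString() in clewd-utils.js.
    return bytes(byte_values).decode("utf-8")

# Byte values copied verbatim from the AI.end and AI.mdl entries above.
END = decode([104, 116, 116, 112, 115, 58, 47, 47, 99, 108, 97, 117, 100, 101, 46, 97, 105])
MDL = decode([99, 108, 97, 117, 100, 101, 45, 50])

print(END)  # https://claude.ai
print(MDL)  # claude-2
```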
diff --git a/spaces/xingzhehe/AutoLink/models/model.py b/spaces/xingzhehe/AutoLink/models/model.py
deleted file mode 100644
index cfda57c4aead470c8801fdfb8a82b9cfd1640a5c..0000000000000000000000000000000000000000
--- a/spaces/xingzhehe/AutoLink/models/model.py
+++ /dev/null
@@ -1,58 +0,0 @@
-import importlib
-import PIL
-import pytorch_lightning as pl
-import torch.utils.data
-import wandb
-from typing import Union
-from torchvision import transforms
-from utils_.loss import VGGPerceptualLoss
-from utils_.visualization import *
-import torch.nn.functional as F
-import matplotlib.pyplot as plt
-
-
-class Model(pl.LightningModule):
- def __init__(self, **kwargs):
- super().__init__()
- self.save_hyperparameters()
- self.encoder = importlib.import_module('models.' + self.hparams.encoder).Encoder(self.hparams)
- self.decoder = importlib.import_module('models.' + self.hparams.decoder).Decoder(self.hparams)
- self.batch_size = self.hparams.batch_size
-
- self.vgg_loss = VGGPerceptualLoss()
-
- self.transform = transforms.Compose([
- transforms.ToTensor(),
- transforms.Normalize(0.5, 0.5)
- ])
-
- def forward(self, x: PIL.Image.Image) -> np.ndarray:
- """
- :param x: a PIL image
- :return: an RGB numpy array of the same size as x, showing the detected
- keypoints and the edge map (normalized by max) overlaid on the input
- """
- w, h = x.size
- x = self.transform(x).unsqueeze(0)
- x = x.to(self.device)
- kp = self.encoder({'img': x})['keypoints']
- edge_map = self.decoder.rasterize(kp, output_size=64)
- bs = edge_map.shape[0]
- edge_map = edge_map / (1e-8 + edge_map.reshape(bs, 1, -1).max(dim=2, keepdim=True)[0].reshape(bs, 1, 1, 1))
- edge_map = torch.cat([edge_map] * 3, dim=1)
- edge_map = F.interpolate(edge_map, size=(h, w), mode='bilinear', align_corners=False)
- x = torch.clamp(edge_map + (x * 0.5 + 0.5)*0.5, min=0, max=1)
- x = transforms.ToPILImage()(x[0].detach().cpu())
-
- fig = plt.figure(figsize=(1, h/w), dpi=w)
- fig.tight_layout(pad=0)
- plt.axis('off')
- plt.imshow(x)
- kp = kp[0].detach().cpu() * 0.5 + 0.5
- kp[:, 1] *= w
- kp[:, 0] *= h
- plt.scatter(kp[:, 1], kp[:, 0], s=min(w/h, min(1, h/w)), marker='o')
- ncols, nrows = fig.canvas.get_width_height()
- fig.canvas.draw()
- plot = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8).reshape(nrows, ncols, 3)
- plt.close(fig)
- return plot
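`Model.__init__` above resolves its encoder and decoder classes by name with `importlib.import_module('models.' + self.hparams.encoder).Encoder(...)`. A small sketch of that dynamic-lookup pattern, demonstrated against a stdlib module so it runs standalone:

```python
import importlib

def load_component(module_name: str, class_name: str):
    # Same lookup Model.__init__ performs:
    # importlib.import_module('models.' + hparams.encoder).Encoder(hparams)
    module = importlib.import_module(module_name)
    return getattr(module, class_name)

# Demonstrated against the stdlib rather than the repo's models/ package.
OrderedDict = load_component("collections", "OrderedDict")
d = OrderedDict(a=1)
print(list(d.items()))  # [('a', 1)]
```

The pattern keeps the config (`hparams.encoder`) decoupled from the import graph: swapping architectures is a string change, at the cost of errors surfacing at runtime instead of import time.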
diff --git a/spaces/xinyu1205/recognize-anything/GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py b/spaces/xinyu1205/recognize-anything/GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py
deleted file mode 100644
index 9158d5f6260ec74bded95377d382387430d7cd70..0000000000000000000000000000000000000000
--- a/spaces/xinyu1205/recognize-anything/GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py
+++ /dev/null
@@ -1,43 +0,0 @@
-batch_size = 1
-modelname = "groundingdino"
-backbone = "swin_T_224_1k"
-position_embedding = "sine"
-pe_temperatureH = 20
-pe_temperatureW = 20
-return_interm_indices = [1, 2, 3]
-backbone_freeze_keywords = None
-enc_layers = 6
-dec_layers = 6
-pre_norm = False
-dim_feedforward = 2048
-hidden_dim = 256
-dropout = 0.0
-nheads = 8
-num_queries = 900
-query_dim = 4
-num_patterns = 0
-num_feature_levels = 4
-enc_n_points = 4
-dec_n_points = 4
-two_stage_type = "standard"
-two_stage_bbox_embed_share = False
-two_stage_class_embed_share = False
-transformer_activation = "relu"
-dec_pred_bbox_embed_share = True
-dn_box_noise_scale = 1.0
-dn_label_noise_ratio = 0.5
-dn_label_coef = 1.0
-dn_bbox_coef = 1.0
-embed_init_tgt = True
-dn_labelbook_size = 2000
-max_text_len = 256
-text_encoder_type = "bert-base-uncased"
-use_text_enhancer = True
-use_fusion_layer = True
-use_checkpoint = True
-use_transformer_ckpt = True
-use_text_cross_attention = True
-text_dropout = 0.0
-fusion_dropout = 0.0
-fusion_droppath = 0.1
-sub_sentence_present = True
diff --git a/spaces/xuyingliKepler/autogenchat/app.py b/spaces/xuyingliKepler/autogenchat/app.py
deleted file mode 100644
index 1bd30857ad7df49e08649e37a15db35c33d413a8..0000000000000000000000000000000000000000
--- a/spaces/xuyingliKepler/autogenchat/app.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import streamlit as st
-import asyncio
-from autogen import AssistantAgent, UserProxyAgent
-
-
-st.write("""# AutoGen Chat Agents""")
-
-class TrackableAssistantAgent(AssistantAgent):
- def _process_received_message(self, message, sender, silent):
- with st.chat_message(sender.name):
- st.markdown(message)
- return super()._process_received_message(message, sender, silent)
-
-
-class TrackableUserProxyAgent(UserProxyAgent):
- def _process_received_message(self, message, sender, silent):
- with st.chat_message(sender.name):
- st.markdown(message)
- return super()._process_received_message(message, sender, silent)
-
-
-selected_model = None
-selected_key = None
-with st.sidebar:
- st.header("OpenAI Configuration")
- selected_model = st.selectbox("Model", ['gpt-3.5-turbo', 'gpt-4'], index=1)
- selected_key = st.text_input("API Key", type="password")
-
-with st.container():
- # for message in st.session_state["messages"]:
- # st.markdown(message)
-
- user_input = st.chat_input("Type something...")
- if user_input:
- if not selected_key or not selected_model:
- st.warning(
- 'You must provide valid OpenAI API key and choose preferred model', icon="⚠️")
- st.stop()
-
- llm_config = {
- "request_timeout": 600,
- "config_list": [
- {
- "model": selected_model,
- "api_key": selected_key
- }
- ]
- }
- # create an AssistantAgent instance named "assistant"
- assistant = TrackableAssistantAgent(
- name="assistant", llm_config=llm_config)
-
- # create a UserProxyAgent instance named "user"
- user_proxy = TrackableUserProxyAgent(
- name="user", human_input_mode="NEVER", llm_config=llm_config)
-
- # Create an event loop
- loop = asyncio.new_event_loop()
- asyncio.set_event_loop(loop)
-
- # Define an asynchronous function
- async def initiate_chat():
- await user_proxy.a_initiate_chat(
- assistant,
- message=user_input,
- )
-
- # Run the asynchronous function within the event loop
- loop.run_until_complete(initiate_chat())
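The deleted `app.py` runs the async `a_initiate_chat` call from inside a synchronous Streamlit callback by creating a fresh event loop and blocking on it. A minimal sketch of that pattern, with a placeholder coroutine standing in for the AutoGen call:

```python
import asyncio

async def initiate_chat(message: str) -> str:
    # Placeholder coroutine standing in for user_proxy.a_initiate_chat(...).
    await asyncio.sleep(0)
    return f"echo: {message}"

def run_in_fresh_loop(message: str) -> str:
    # Streamlit callbacks are synchronous, so the app above creates a new
    # event loop per request and blocks until the coroutine completes.
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        return loop.run_until_complete(initiate_chat(message))
    finally:
        loop.close()

print(run_in_fresh_loop("hello"))  # echo: hello
```

Creating a loop per request is simple but not free; closing the loop in a `finally` (which the original omits) avoids leaking resources across reruns.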
diff --git "a/spaces/xwsm/gpt/crazy_functions/\344\273\243\347\240\201\351\207\215\345\206\231\344\270\272\345\205\250\350\213\261\346\226\207_\345\244\232\347\272\277\347\250\213.py" "b/spaces/xwsm/gpt/crazy_functions/\344\273\243\347\240\201\351\207\215\345\206\231\344\270\272\345\205\250\350\213\261\346\226\207_\345\244\232\347\272\277\347\250\213.py"
deleted file mode 100644
index e57f80f1d45bd3ec23837253848f7b32a5ccd751..0000000000000000000000000000000000000000
--- "a/spaces/xwsm/gpt/crazy_functions/\344\273\243\347\240\201\351\207\215\345\206\231\344\270\272\345\205\250\350\213\261\346\226\207_\345\244\232\347\272\277\347\250\213.py"
+++ /dev/null
@@ -1,138 +0,0 @@
-import threading
-from request_llm.bridge_all import predict_no_ui_long_connection
-from toolbox import update_ui
-from toolbox import CatchException, write_results_to_file, report_execption
-from .crazy_utils import breakdown_txt_to_satisfy_token_limit
-
-def extract_code_block_carefully(txt):
- splitted = txt.split('```')
- n_code_block_seg = len(splitted) - 1
- if n_code_block_seg <= 1: return txt
- # In all remaining cases, strip the leading ``` and one trailing ```
- txt_out = '```'.join(splitted[1:-1])
- return txt_out
-
-
-
-def break_txt_into_half_at_some_linebreak(txt):
- lines = txt.split('\n')
- n_lines = len(lines)
- pre = lines[:(n_lines//2)]
- post = lines[(n_lines//2):]
- return "\n".join(pre), "\n".join(post)
-
-
-@CatchException
-def 全项目切换英文(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys_prompt, web_port):
- # Step 1: clear the history to avoid input overflow
- history = []
-
- # Step 2: try importing dependencies; if any are missing, suggest how to install them
- try:
- import tiktoken
- except:
- report_execption(chatbot, history,
- a = f"解析项目: {txt}",
- b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-
- # Step 3: collect the files
- import time, glob, os, shutil, re
- os.makedirs('gpt_log/generated_english_version', exist_ok=True)
- os.makedirs('gpt_log/generated_english_version/crazy_functions', exist_ok=True)
- file_manifest = [f for f in glob.glob('./*.py') if ('test_project' not in f) and ('gpt_log' not in f)] + \
- [f for f in glob.glob('./crazy_functions/*.py') if ('test_project' not in f) and ('gpt_log' not in f)]
- # file_manifest = ['./toolbox.py']
- i_say_show_user_buffer = []
-
- # Step 4: display something so the UI does not feel stuck
- for index, fp in enumerate(file_manifest):
- # if 'test_project' in fp: continue
- with open(fp, 'r', encoding='utf-8', errors='replace') as f:
- file_content = f.read()
- i_say_show_user =f'[{index}/{len(file_manifest)}] 接下来请将以下代码中包含的所有中文转化为英文,只输出转化后的英文代码,请用代码块输出代码: {os.path.abspath(fp)}'
- i_say_show_user_buffer.append(i_say_show_user)
- chatbot.append((i_say_show_user, "[Local Message] 等待多线程操作,中间过程不予显示."))
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
-
- # Step 5: truncation and processing under the token limit
- MAX_TOKEN = 3000
- from request_llm.bridge_all import model_info
- enc = model_info["gpt-3.5-turbo"]['tokenizer']
- def get_token_fn(txt): return len(enc.encode(txt, disallowed_special=()))
-
-
- # Step 6: the worker function
- mutable_return = [None for _ in file_manifest]
- observe_window = [[""] for _ in file_manifest]
- def thread_worker(fp,index):
- if index > 10:
- time.sleep(60)
- print('OpenAI limits free users to 20 requests per minute; lowering the request frequency.')
- with open(fp, 'r', encoding='utf-8', errors='replace') as f:
- file_content = f.read()
- i_say_template = lambda fp, file_content: f'接下来请将以下代码中包含的所有中文转化为英文,只输出代码,文件名是{fp},文件代码是 ```{file_content}```'
- try:
- gpt_say = ""
- # break the code file into chunks
- file_content_breakdown = breakdown_txt_to_satisfy_token_limit(file_content, get_token_fn, MAX_TOKEN)
- for file_content_partial in file_content_breakdown:
- i_say = i_say_template(fp, file_content_partial)
- # # ** gpt request **
- gpt_say_partial = predict_no_ui_long_connection(inputs=i_say, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=observe_window[index])
- gpt_say_partial = extract_code_block_carefully(gpt_say_partial)
- gpt_say += gpt_say_partial
- mutable_return[index] = gpt_say
- except ConnectionAbortedError as token_exceed_err:
- print('At least one thread task failed due to token overflow', token_exceed_err)
- except Exception as e:
- print('At least one thread task failed unexpectedly', e)
-
- # Step 7: start all worker threads at once
- handles = [threading.Thread(target=thread_worker, args=(fp,index)) for index, fp in enumerate(file_manifest)]
- for h in handles:
- h.daemon = True
- h.start()
- chatbot.append(('开始了吗?', f'多线程操作已经开始'))
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
- # Step 8: poll until every thread has finished
- cnt = 0
- while True:
- cnt += 1
- time.sleep(0.2)
- th_alive = [h.is_alive() for h in handles]
- if not any(th_alive): break
- # nicer visual effect in the UI
- observe_win = []
- for thread_index, alive in enumerate(th_alive):
- observe_win.append("[ ..."+observe_window[thread_index][0][-60:].replace('\n','').replace('```','...').replace(' ','.').replace('<br/>','.....').replace('$','.')+"... ]")
- stat = [f'执行中: {obs}\n\n' if alive else '已完成\n\n' for alive, obs in zip(th_alive, observe_win)]
- stat_str = ''.join(stat)
- chatbot[-1] = (chatbot[-1][0], f'多线程操作已经开始,完成情况: \n\n{stat_str}' + ''.join(['.']*(cnt%10+1)))
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
- # Step 9: write the results to files
- for index, h in enumerate(handles):
- h.join() # join is not strictly needed here; all threads are certainly finished
- fp = file_manifest[index]
- gpt_say = mutable_return[index]
- i_say_show_user = i_say_show_user_buffer[index]
-
- where_to_relocate = f'gpt_log/generated_english_version/{fp}'
- if gpt_say is not None:
- with open(where_to_relocate, 'w+', encoding='utf-8') as f:
- f.write(gpt_say)
- else: # failed
- shutil.copyfile(file_manifest[index], where_to_relocate)
- chatbot.append((i_say_show_user, f'[Local Message] 已完成{os.path.abspath(fp)}的转化,\n\n存入{os.path.abspath(where_to_relocate)}'))
- history.append(i_say_show_user); history.append(gpt_say)
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- time.sleep(1)
-
- # Step 10: back up a report file
- res = write_results_to_file(history)
- chatbot.append(("生成一份任务执行报告", res))
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
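The deleted plugin above fans work out to one thread per file, collects results in a pre-sized list indexed by thread, and polls `is_alive()` until every worker finishes. A stripped-down sketch of that pattern — the doubling worker is a stand-in for the per-file chunking and GPT request in `thread_worker`:

```python
import threading
import time

def run_workers(inputs):
    # One result slot per task, written by index -- like mutable_return above.
    results = [None] * len(inputs)

    def worker(value, index):
        # Stand-in for the per-file breakdown + GPT request in thread_worker.
        results[index] = value * 2

    handles = [threading.Thread(target=worker, args=(v, i), daemon=True)
               for i, v in enumerate(inputs)]
    for h in handles:
        h.start()
    # Poll liveness, as the plugin's step 8 does to drive UI updates.
    while any(h.is_alive() for h in handles):
        time.sleep(0.01)
    return results

print(run_workers([1, 2, 3]))  # [2, 4, 6]
```

Writing into fixed slots sidesteps locking (each thread owns one index), and polling instead of joining lets the main loop keep updating the UI while workers run.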
diff --git a/spaces/yderre-aubay/midi-player-demo/src/main/actions/control.ts b/spaces/yderre-aubay/midi-player-demo/src/main/actions/control.ts
deleted file mode 100644
index 81f5a8ece96d5f30a840451fdebb02ea09d73442..0000000000000000000000000000000000000000
--- a/spaces/yderre-aubay/midi-player-demo/src/main/actions/control.ts
+++ /dev/null
@@ -1,157 +0,0 @@
-import { maxBy, min, minBy } from "lodash"
-import { ControllerEvent, PitchBendEvent } from "midifile-ts"
-import { isNotUndefined } from "../../common/helpers/array"
-import {
- ControlEventsClipboardData,
- isControlEventsClipboardData,
-} from "../clipboard/clipboardTypes"
-import clipboard from "../services/Clipboard"
-import RootStore from "../stores/RootStore"
-
-export const createOrUpdateControlEventsValue =
- ({
- controlStore: { selectedEventIds, selectedTrack },
- player,
- pushHistory,
- }: RootStore) =>
- return isGreenPixel(data)
- ? loadSerializedSVG$1(createForeignObjectSVG(size, size, 0, 0, node))
- : Promise.reject(false);
- })
- .then(function (img) {
- ctx.drawImage(img, 0, 0);
- // Edge does not render background-images
- return isGreenPixel(ctx.getImageData(0, 0, size, size).data);
- })
- .catch(function () { return false; });
- };
- var createForeignObjectSVG = function (width, height, x, y, node) {
- var xmlns = 'http://www.w3.org/2000/svg';
- var svg = document.createElementNS(xmlns, 'svg');
- var foreignObject = document.createElementNS(xmlns, 'foreignObject');
- svg.setAttributeNS(null, 'width', width.toString());
- svg.setAttributeNS(null, 'height', height.toString());
- foreignObject.setAttributeNS(null, 'width', '100%');
- foreignObject.setAttributeNS(null, 'height', '100%');
- foreignObject.setAttributeNS(null, 'x', x.toString());
- foreignObject.setAttributeNS(null, 'y', y.toString());
- foreignObject.setAttributeNS(null, 'externalResourcesRequired', 'true');
- svg.appendChild(foreignObject);
- foreignObject.appendChild(node);
- return svg;
- };
- var loadSerializedSVG$1 = function (svg) {
- return new Promise(function (resolve, reject) {
- var img = new Image();
- img.onload = function () { return resolve(img); };
- img.onerror = reject;
- img.src = "data:image/svg+xml;charset=utf-8," + encodeURIComponent(new XMLSerializer().serializeToString(svg));
- });
- };
- var FEATURES = {
- get SUPPORT_RANGE_BOUNDS() {
- var value = testRangeBounds(document);
- Object.defineProperty(FEATURES, 'SUPPORT_RANGE_BOUNDS', { value: value });
- return value;
- },
- get SUPPORT_WORD_BREAKING() {
- var value = FEATURES.SUPPORT_RANGE_BOUNDS && testIOSLineBreak(document);
- Object.defineProperty(FEATURES, 'SUPPORT_WORD_BREAKING', { value: value });
- return value;
- },
- get SUPPORT_SVG_DRAWING() {
- var value = testSVG(document);
- Object.defineProperty(FEATURES, 'SUPPORT_SVG_DRAWING', { value: value });
- return value;
- },
- get SUPPORT_FOREIGNOBJECT_DRAWING() {
- var value = typeof Array.from === 'function' && typeof window.fetch === 'function'
- ? testForeignObject(document)
- : Promise.resolve(false);
- Object.defineProperty(FEATURES, 'SUPPORT_FOREIGNOBJECT_DRAWING', { value: value });
- return value;
- },
- get SUPPORT_CORS_IMAGES() {
- var value = testCORS();
- Object.defineProperty(FEATURES, 'SUPPORT_CORS_IMAGES', { value: value });
- return value;
- },
- get SUPPORT_RESPONSE_TYPE() {
- var value = testResponseType();
- Object.defineProperty(FEATURES, 'SUPPORT_RESPONSE_TYPE', { value: value });
- return value;
- },
- get SUPPORT_CORS_XHR() {
- var value = 'withCredentials' in new XMLHttpRequest();
- Object.defineProperty(FEATURES, 'SUPPORT_CORS_XHR', { value: value });
- return value;
- },
- get SUPPORT_NATIVE_TEXT_SEGMENTATION() {
- // eslint-disable-next-line @typescript-eslint/no-explicit-any
- var value = !!(typeof Intl !== 'undefined' && Intl.Segmenter);
- Object.defineProperty(FEATURES, 'SUPPORT_NATIVE_TEXT_SEGMENTATION', { value: value });
- return value;
- }
- };
-
- var TextBounds = /** @class */ (function () {
- function TextBounds(text, bounds) {
- this.text = text;
- this.bounds = bounds;
- }
- return TextBounds;
- }());
- var parseTextBounds = function (context, value, styles, node) {
- var textList = breakText(value, styles);
- var textBounds = [];
- var offset = 0;
- textList.forEach(function (text) {
- if (styles.textDecorationLine.length || text.trim().length > 0) {
- if (FEATURES.SUPPORT_RANGE_BOUNDS) {
- var clientRects = createRange(node, offset, text.length).getClientRects();
- if (clientRects.length > 1) {
- var subSegments = segmentGraphemes(text);
- var subOffset_1 = 0;
- subSegments.forEach(function (subSegment) {
- textBounds.push(new TextBounds(subSegment, Bounds.fromDOMRectList(context, createRange(node, subOffset_1 + offset, subSegment.length).getClientRects())));
- subOffset_1 += subSegment.length;
- });
- }
- else {
- textBounds.push(new TextBounds(text, Bounds.fromDOMRectList(context, clientRects)));
- }
- }
- else {
- var replacementNode = node.splitText(text.length);
- textBounds.push(new TextBounds(text, getWrapperBounds(context, node)));
- node = replacementNode;
- }
- }
- else if (!FEATURES.SUPPORT_RANGE_BOUNDS) {
- node = node.splitText(text.length);
- }
- offset += text.length;
- });
- return textBounds;
- };
- var getWrapperBounds = function (context, node) {
- var ownerDocument = node.ownerDocument;
- if (ownerDocument) {
- var wrapper = ownerDocument.createElement('html2canvaswrapper');
- wrapper.appendChild(node.cloneNode(true));
- var parentNode = node.parentNode;
- if (parentNode) {
- parentNode.replaceChild(wrapper, node);
- var bounds = parseBounds(context, wrapper);
- if (wrapper.firstChild) {
- parentNode.replaceChild(wrapper.firstChild, wrapper);
- }
- return bounds;
- }
- }
- return Bounds.EMPTY;
- };
- var createRange = function (node, offset, length) {
- var ownerDocument = node.ownerDocument;
- if (!ownerDocument) {
- throw new Error('Node has no owner document');
- }
- var range = ownerDocument.createRange();
- range.setStart(node, offset);
- range.setEnd(node, offset + length);
- return range;
- };
- var segmentGraphemes = function (value) {
- if (FEATURES.SUPPORT_NATIVE_TEXT_SEGMENTATION) {
- // eslint-disable-next-line @typescript-eslint/no-explicit-any
- var segmenter = new Intl.Segmenter(void 0, { granularity: 'grapheme' });
- // eslint-disable-next-line @typescript-eslint/no-explicit-any
- return Array.from(segmenter.segment(value)).map(function (segment) { return segment.segment; });
- }
- return splitGraphemes(value);
- };
- var segmentWords = function (value, styles) {
- if (FEATURES.SUPPORT_NATIVE_TEXT_SEGMENTATION) {
- // eslint-disable-next-line @typescript-eslint/no-explicit-any
- var segmenter = new Intl.Segmenter(void 0, {
- granularity: 'word'
- });
- // eslint-disable-next-line @typescript-eslint/no-explicit-any
- return Array.from(segmenter.segment(value)).map(function (segment) { return segment.segment; });
- }
- return breakWords(value, styles);
- };
- var breakText = function (value, styles) {
- return styles.letterSpacing !== 0 ? segmentGraphemes(value) : segmentWords(value, styles);
- };
- // https://drafts.csswg.org/css-text/#word-separator
- var wordSeparators = [0x0020, 0x00a0, 0x1361, 0x10100, 0x10101, 0x1039, 0x1091];
- var breakWords = function (str, styles) {
- var breaker = LineBreaker(str, {
- lineBreak: styles.lineBreak,
- wordBreak: styles.overflowWrap === "break-word" /* BREAK_WORD */ ? 'break-word' : styles.wordBreak
- });
- var words = [];
- var bk;
- var _loop_1 = function () {
- if (bk.value) {
- var value = bk.value.slice();
- var codePoints = toCodePoints$1(value);
- var word_1 = '';
- codePoints.forEach(function (codePoint) {
- if (wordSeparators.indexOf(codePoint) === -1) {
- word_1 += fromCodePoint$1(codePoint);
- }
- else {
- if (word_1.length) {
- words.push(word_1);
- }
- words.push(fromCodePoint$1(codePoint));
- word_1 = '';
- }
- });
- if (word_1.length) {
- words.push(word_1);
- }
- }
- };
- while (!(bk = breaker.next()).done) {
- _loop_1();
- }
- return words;
- };
-
- var TextContainer = /** @class */ (function () {
- function TextContainer(context, node, styles) {
- this.text = transform(node.data, styles.textTransform);
- this.textBounds = parseTextBounds(context, this.text, styles, node);
- }
- return TextContainer;
- }());
- var transform = function (text, transform) {
- switch (transform) {
- case 1 /* LOWERCASE */:
- return text.toLowerCase();
- case 3 /* CAPITALIZE */:
- return text.replace(CAPITALIZE, capitalize);
- case 2 /* UPPERCASE */:
- return text.toUpperCase();
- default:
- return text;
- }
- };
- var CAPITALIZE = /(^|\s|:|-|\(|\))([a-z])/g;
- var capitalize = function (m, p1, p2) {
- if (m.length > 0) {
- return p1 + p2.toUpperCase();
- }
- return m;
- };
-
- var ImageElementContainer = /** @class */ (function (_super) {
- __extends(ImageElementContainer, _super);
- function ImageElementContainer(context, img) {
- var _this = _super.call(this, context, img) || this;
- _this.src = img.currentSrc || img.src;
- _this.intrinsicWidth = img.naturalWidth;
- _this.intrinsicHeight = img.naturalHeight;
- _this.context.cache.addImage(_this.src);
- return _this;
- }
- return ImageElementContainer;
- }(ElementContainer));
-
- var CanvasElementContainer = /** @class */ (function (_super) {
- __extends(CanvasElementContainer, _super);
- function CanvasElementContainer(context, canvas) {
- var _this = _super.call(this, context, canvas) || this;
- _this.canvas = canvas;
- _this.intrinsicWidth = canvas.width;
- _this.intrinsicHeight = canvas.height;
- return _this;
- }
- return CanvasElementContainer;
- }(ElementContainer));
-
- var SVGElementContainer = /** @class */ (function (_super) {
- __extends(SVGElementContainer, _super);
- function SVGElementContainer(context, img) {
- var _this = _super.call(this, context, img) || this;
- var s = new XMLSerializer();
- var bounds = parseBounds(context, img);
- img.setAttribute('width', bounds.width + "px");
- img.setAttribute('height', bounds.height + "px");
- _this.svg = "data:image/svg+xml," + encodeURIComponent(s.serializeToString(img));
- _this.intrinsicWidth = img.width.baseVal.value;
- _this.intrinsicHeight = img.height.baseVal.value;
- _this.context.cache.addImage(_this.svg);
- return _this;
- }
- return SVGElementContainer;
- }(ElementContainer));
-
- var LIElementContainer = /** @class */ (function (_super) {
- __extends(LIElementContainer, _super);
- function LIElementContainer(context, element) {
- var _this = _super.call(this, context, element) || this;
- _this.value = element.value;
- return _this;
- }
- return LIElementContainer;
- }(ElementContainer));
-
- var OLElementContainer = /** @class */ (function (_super) {
- __extends(OLElementContainer, _super);
- function OLElementContainer(context, element) {
- var _this = _super.call(this, context, element) || this;
- _this.start = element.start;
- _this.reversed = typeof element.reversed === 'boolean' && element.reversed === true;
- return _this;
- }
- return OLElementContainer;
- }(ElementContainer));
-
- var CHECKBOX_BORDER_RADIUS = [
- {
- type: 15 /* DIMENSION_TOKEN */,
- flags: 0,
- unit: 'px',
- number: 3
- }
- ];
- var RADIO_BORDER_RADIUS = [
- {
- type: 16 /* PERCENTAGE_TOKEN */,
- flags: 0,
- number: 50
- }
- ];
- var reformatInputBounds = function (bounds) {
- if (bounds.width > bounds.height) {
- return new Bounds(bounds.left + (bounds.width - bounds.height) / 2, bounds.top, bounds.height, bounds.height);
- }
- else if (bounds.width < bounds.height) {
- return new Bounds(bounds.left, bounds.top + (bounds.height - bounds.width) / 2, bounds.width, bounds.width);
- }
- return bounds;
- };
- var getInputValue = function (node) {
- var value = node.type === PASSWORD ? new Array(node.value.length + 1).join('\u2022') : node.value;
- return value.length === 0 ? node.placeholder || '' : value;
- };
- var CHECKBOX = 'checkbox';
- var RADIO = 'radio';
- var PASSWORD = 'password';
- var INPUT_COLOR = 0x2a2a2aff;
- var InputElementContainer = /** @class */ (function (_super) {
- __extends(InputElementContainer, _super);
- function InputElementContainer(context, input) {
- var _this = _super.call(this, context, input) || this;
- _this.type = input.type.toLowerCase();
- _this.checked = input.checked;
- _this.value = getInputValue(input);
- if (_this.type === CHECKBOX || _this.type === RADIO) {
- _this.styles.backgroundColor = 0xdededeff;
- _this.styles.borderTopColor =
- _this.styles.borderRightColor =
- _this.styles.borderBottomColor =
- _this.styles.borderLeftColor =
- 0xa5a5a5ff;
- _this.styles.borderTopWidth =
- _this.styles.borderRightWidth =
- _this.styles.borderBottomWidth =
- _this.styles.borderLeftWidth =
- 1;
- _this.styles.borderTopStyle =
- _this.styles.borderRightStyle =
- _this.styles.borderBottomStyle =
- _this.styles.borderLeftStyle =
- 1 /* SOLID */;
- _this.styles.backgroundClip = [0 /* BORDER_BOX */];
- _this.styles.backgroundOrigin = [0 /* BORDER_BOX */];
- _this.bounds = reformatInputBounds(_this.bounds);
- }
- switch (_this.type) {
- case CHECKBOX:
- _this.styles.borderTopRightRadius =
- _this.styles.borderTopLeftRadius =
- _this.styles.borderBottomRightRadius =
- _this.styles.borderBottomLeftRadius =
- CHECKBOX_BORDER_RADIUS;
- break;
- case RADIO:
- _this.styles.borderTopRightRadius =
- _this.styles.borderTopLeftRadius =
- _this.styles.borderBottomRightRadius =
- _this.styles.borderBottomLeftRadius =
- RADIO_BORDER_RADIUS;
- break;
- }
- return _this;
- }
- return InputElementContainer;
- }(ElementContainer));
-
- var SelectElementContainer = /** @class */ (function (_super) {
- __extends(SelectElementContainer, _super);
- function SelectElementContainer(context, element) {
- var _this = _super.call(this, context, element) || this;
- var option = element.options[element.selectedIndex || 0];
- _this.value = option ? option.text || '' : '';
- return _this;
- }
- return SelectElementContainer;
- }(ElementContainer));
-
- var TextareaElementContainer = /** @class */ (function (_super) {
- __extends(TextareaElementContainer, _super);
- function TextareaElementContainer(context, element) {
- var _this = _super.call(this, context, element) || this;
- _this.value = element.value;
- return _this;
- }
- return TextareaElementContainer;
- }(ElementContainer));
-
- var IFrameElementContainer = /** @class */ (function (_super) {
- __extends(IFrameElementContainer, _super);
- function IFrameElementContainer(context, iframe) {
- var _this = _super.call(this, context, iframe) || this;
- _this.src = iframe.src;
- _this.width = parseInt(iframe.width, 10) || 0;
- _this.height = parseInt(iframe.height, 10) || 0;
- _this.backgroundColor = _this.styles.backgroundColor;
- try {
- if (iframe.contentWindow &&
- iframe.contentWindow.document &&
- iframe.contentWindow.document.documentElement) {
- _this.tree = parseTree(context, iframe.contentWindow.document.documentElement);
- // http://www.w3.org/TR/css3-background/#special-backgrounds
- var documentBackgroundColor = iframe.contentWindow.document.documentElement
- ? parseColor(context, getComputedStyle(iframe.contentWindow.document.documentElement).backgroundColor)
- : COLORS.TRANSPARENT;
- var bodyBackgroundColor = iframe.contentWindow.document.body
- ? parseColor(context, getComputedStyle(iframe.contentWindow.document.body).backgroundColor)
- : COLORS.TRANSPARENT;
- _this.backgroundColor = isTransparent(documentBackgroundColor)
- ? isTransparent(bodyBackgroundColor)
- ? _this.styles.backgroundColor
- : bodyBackgroundColor
- : documentBackgroundColor;
- }
- }
- catch (e) { }
- return _this;
- }
- return IFrameElementContainer;
- }(ElementContainer));
-
- var LIST_OWNERS = ['OL', 'UL', 'MENU'];
- var parseNodeTree = function (context, node, parent, root) {
- for (var childNode = node.firstChild, nextNode = void 0; childNode; childNode = nextNode) {
- nextNode = childNode.nextSibling;
- if (isTextNode(childNode) && childNode.data.trim().length > 0) {
- parent.textNodes.push(new TextContainer(context, childNode, parent.styles));
- }
- else if (isElementNode(childNode)) {
- if (isSlotElement(childNode) && childNode.assignedNodes) {
- childNode.assignedNodes().forEach(function (childNode) { return parseNodeTree(context, childNode, parent, root); });
- }
- else {
- var container = createContainer(context, childNode);
- if (container.styles.isVisible()) {
- if (createsRealStackingContext(childNode, container, root)) {
- container.flags |= 4 /* CREATES_REAL_STACKING_CONTEXT */;
- }
- else if (createsStackingContext(container.styles)) {
- container.flags |= 2 /* CREATES_STACKING_CONTEXT */;
- }
- if (LIST_OWNERS.indexOf(childNode.tagName) !== -1) {
- container.flags |= 8 /* IS_LIST_OWNER */;
- }
- parent.elements.push(container);
-                            childNode.slot; // no-op expression statement; has no effect
- if (childNode.shadowRoot) {
- parseNodeTree(context, childNode.shadowRoot, container, root);
- }
- else if (!isTextareaElement(childNode) &&
- !isSVGElement(childNode) &&
- !isSelectElement(childNode)) {
- parseNodeTree(context, childNode, container, root);
- }
- }
- }
- }
- }
- };
- var createContainer = function (context, element) {
- if (isImageElement(element)) {
- return new ImageElementContainer(context, element);
- }
- if (isCanvasElement(element)) {
- return new CanvasElementContainer(context, element);
- }
- if (isSVGElement(element)) {
- return new SVGElementContainer(context, element);
- }
- if (isLIElement(element)) {
- return new LIElementContainer(context, element);
- }
- if (isOLElement(element)) {
- return new OLElementContainer(context, element);
- }
- if (isInputElement(element)) {
- return new InputElementContainer(context, element);
- }
- if (isSelectElement(element)) {
- return new SelectElementContainer(context, element);
- }
- if (isTextareaElement(element)) {
- return new TextareaElementContainer(context, element);
- }
- if (isIFrameElement(element)) {
- return new IFrameElementContainer(context, element);
- }
- return new ElementContainer(context, element);
- };
- var parseTree = function (context, element) {
- var container = createContainer(context, element);
- container.flags |= 4 /* CREATES_REAL_STACKING_CONTEXT */;
- parseNodeTree(context, element, container, container);
- return container;
- };
- var createsRealStackingContext = function (node, container, root) {
- return (container.styles.isPositionedWithZIndex() ||
- container.styles.opacity < 1 ||
- container.styles.isTransformed() ||
- (isBodyElement(node) && root.styles.isTransparent()));
- };
- var createsStackingContext = function (styles) { return styles.isPositioned() || styles.isFloating(); };
- var isTextNode = function (node) { return node.nodeType === Node.TEXT_NODE; };
- var isElementNode = function (node) { return node.nodeType === Node.ELEMENT_NODE; };
- var isHTMLElementNode = function (node) {
- return isElementNode(node) && typeof node.style !== 'undefined' && !isSVGElementNode(node);
- };
- var isSVGElementNode = function (element) {
- return typeof element.className === 'object';
- };
- var isLIElement = function (node) { return node.tagName === 'LI'; };
- var isOLElement = function (node) { return node.tagName === 'OL'; };
- var isInputElement = function (node) { return node.tagName === 'INPUT'; };
- var isHTMLElement = function (node) { return node.tagName === 'HTML'; };
- var isSVGElement = function (node) { return node.tagName === 'svg'; };
- var isBodyElement = function (node) { return node.tagName === 'BODY'; };
- var isCanvasElement = function (node) { return node.tagName === 'CANVAS'; };
- var isVideoElement = function (node) { return node.tagName === 'VIDEO'; };
- var isImageElement = function (node) { return node.tagName === 'IMG'; };
- var isIFrameElement = function (node) { return node.tagName === 'IFRAME'; };
- var isStyleElement = function (node) { return node.tagName === 'STYLE'; };
- var isScriptElement = function (node) { return node.tagName === 'SCRIPT'; };
- var isTextareaElement = function (node) { return node.tagName === 'TEXTAREA'; };
- var isSelectElement = function (node) { return node.tagName === 'SELECT'; };
- var isSlotElement = function (node) { return node.tagName === 'SLOT'; };
- // https://html.spec.whatwg.org/multipage/custom-elements.html#valid-custom-element-name
- var isCustomElement = function (node) { return node.tagName.indexOf('-') > 0; };
-
- var CounterState = /** @class */ (function () {
- function CounterState() {
- this.counters = {};
- }
- CounterState.prototype.getCounterValue = function (name) {
- var counter = this.counters[name];
- if (counter && counter.length) {
- return counter[counter.length - 1];
- }
- return 1;
- };
- CounterState.prototype.getCounterValues = function (name) {
- var counter = this.counters[name];
- return counter ? counter : [];
- };
- CounterState.prototype.pop = function (counters) {
- var _this = this;
- counters.forEach(function (counter) { return _this.counters[counter].pop(); });
- };
- CounterState.prototype.parse = function (style) {
- var _this = this;
- var counterIncrement = style.counterIncrement;
- var counterReset = style.counterReset;
- var canReset = true;
- if (counterIncrement !== null) {
- counterIncrement.forEach(function (entry) {
- var counter = _this.counters[entry.counter];
- if (counter && entry.increment !== 0) {
- canReset = false;
- if (!counter.length) {
- counter.push(1);
- }
- counter[Math.max(0, counter.length - 1)] += entry.increment;
- }
- });
- }
- var counterNames = [];
- if (canReset) {
- counterReset.forEach(function (entry) {
- var counter = _this.counters[entry.counter];
- counterNames.push(entry.counter);
- if (!counter) {
- counter = _this.counters[entry.counter] = [];
- }
- counter.push(entry.reset);
- });
- }
- return counterNames;
- };
- return CounterState;
- }());
- var ROMAN_UPPER = {
- integers: [1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1],
- values: ['M', 'CM', 'D', 'CD', 'C', 'XC', 'L', 'XL', 'X', 'IX', 'V', 'IV', 'I']
- };
- var ARMENIAN = {
- integers: [
- 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 900, 800, 700, 600, 500, 400, 300, 200, 100, 90, 80, 70,
- 60, 50, 40, 30, 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1
- ],
- values: [
- 'Ք',
- 'Փ',
- 'Ւ',
- 'Ց',
- 'Ր',
- 'Տ',
- 'Վ',
- 'Ս',
- 'Ռ',
- 'Ջ',
- 'Պ',
- 'Չ',
- 'Ո',
- 'Շ',
- 'Ն',
- 'Յ',
- 'Մ',
- 'Ճ',
- 'Ղ',
- 'Ձ',
- 'Հ',
- 'Կ',
- 'Ծ',
- 'Խ',
- 'Լ',
- 'Ի',
- 'Ժ',
- 'Թ',
- 'Ը',
- 'Է',
- 'Զ',
- 'Ե',
- 'Դ',
- 'Գ',
- 'Բ',
- 'Ա'
- ]
- };
- var HEBREW = {
- integers: [
- 10000, 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 400, 300, 200, 100, 90, 80, 70, 60, 50, 40, 30, 20,
- 19, 18, 17, 16, 15, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1
- ],
- values: [
- 'י׳',
- 'ט׳',
- 'ח׳',
- 'ז׳',
- 'ו׳',
- 'ה׳',
- 'ד׳',
- 'ג׳',
- 'ב׳',
- 'א׳',
- 'ת',
- 'ש',
- 'ר',
- 'ק',
- 'צ',
- 'פ',
- 'ע',
- 'ס',
- 'נ',
- 'מ',
- 'ל',
- 'כ',
- 'יט',
- 'יח',
- 'יז',
- 'טז',
- 'טו',
- 'י',
- 'ט',
- 'ח',
- 'ז',
- 'ו',
- 'ה',
- 'ד',
- 'ג',
- 'ב',
- 'א'
- ]
- };
- var GEORGIAN = {
- integers: [
- 10000, 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 900, 800, 700, 600, 500, 400, 300, 200, 100, 90,
- 80, 70, 60, 50, 40, 30, 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1
- ],
- values: [
- 'ჵ',
- 'ჰ',
- 'ჯ',
- 'ჴ',
- 'ხ',
- 'ჭ',
- 'წ',
- 'ძ',
- 'ც',
- 'ჩ',
- 'შ',
- 'ყ',
- 'ღ',
- 'ქ',
- 'ფ',
- 'ჳ',
- 'ტ',
- 'ს',
- 'რ',
- 'ჟ',
- 'პ',
- 'ო',
- 'ჲ',
- 'ნ',
- 'მ',
- 'ლ',
- 'კ',
- 'ი',
- 'თ',
- 'ჱ',
- 'ზ',
- 'ვ',
- 'ე',
- 'დ',
- 'გ',
- 'ბ',
- 'ა'
- ]
- };
- var createAdditiveCounter = function (value, min, max, symbols, fallback, suffix) {
- if (value < min || value > max) {
- return createCounterText(value, fallback, suffix.length > 0);
- }
- return (symbols.integers.reduce(function (string, integer, index) {
- while (value >= integer) {
- value -= integer;
- string += symbols.values[index];
- }
- return string;
- }, '') + suffix);
- };
- var createCounterStyleWithSymbolResolver = function (value, codePointRangeLength, isNumeric, resolver) {
- var string = '';
- do {
- if (!isNumeric) {
- value--;
- }
- string = resolver(value) + string;
- value /= codePointRangeLength;
- } while (value * codePointRangeLength >= codePointRangeLength);
- return string;
- };
- var createCounterStyleFromRange = function (value, codePointRangeStart, codePointRangeEnd, isNumeric, suffix) {
- var codePointRangeLength = codePointRangeEnd - codePointRangeStart + 1;
- return ((value < 0 ? '-' : '') +
- (createCounterStyleWithSymbolResolver(Math.abs(value), codePointRangeLength, isNumeric, function (codePoint) {
- return fromCodePoint$1(Math.floor(codePoint % codePointRangeLength) + codePointRangeStart);
- }) +
- suffix));
- };
- var createCounterStyleFromSymbols = function (value, symbols, suffix) {
- if (suffix === void 0) { suffix = '. '; }
- var codePointRangeLength = symbols.length;
- return (createCounterStyleWithSymbolResolver(Math.abs(value), codePointRangeLength, false, function (codePoint) { return symbols[Math.floor(codePoint % codePointRangeLength)]; }) + suffix);
- };
- var CJK_ZEROS = 1 << 0;
- var CJK_TEN_COEFFICIENTS = 1 << 1;
- var CJK_TEN_HIGH_COEFFICIENTS = 1 << 2;
- var CJK_HUNDRED_COEFFICIENTS = 1 << 3;
- var createCJKCounter = function (value, numbers, multipliers, negativeSign, suffix, flags) {
- if (value < -9999 || value > 9999) {
- return createCounterText(value, 4 /* CJK_DECIMAL */, suffix.length > 0);
- }
- var tmp = Math.abs(value);
- var string = suffix;
- if (tmp === 0) {
- return numbers[0] + string;
- }
- for (var digit = 0; tmp > 0 && digit <= 4; digit++) {
- var coefficient = tmp % 10;
- if (coefficient === 0 && contains(flags, CJK_ZEROS) && string !== '') {
- string = numbers[coefficient] + string;
- }
- else if (coefficient > 1 ||
- (coefficient === 1 && digit === 0) ||
- (coefficient === 1 && digit === 1 && contains(flags, CJK_TEN_COEFFICIENTS)) ||
- (coefficient === 1 && digit === 1 && contains(flags, CJK_TEN_HIGH_COEFFICIENTS) && value > 100) ||
- (coefficient === 1 && digit > 1 && contains(flags, CJK_HUNDRED_COEFFICIENTS))) {
- string = numbers[coefficient] + (digit > 0 ? multipliers[digit - 1] : '') + string;
- }
- else if (coefficient === 1 && digit > 0) {
- string = multipliers[digit - 1] + string;
- }
- tmp = Math.floor(tmp / 10);
- }
- return (value < 0 ? negativeSign : '') + string;
- };
- var CHINESE_INFORMAL_MULTIPLIERS = '十百千萬';
- var CHINESE_FORMAL_MULTIPLIERS = '拾佰仟萬';
- var JAPANESE_NEGATIVE = 'マイナス';
- var KOREAN_NEGATIVE = '마이너스';
- var createCounterText = function (value, type, appendSuffix) {
- var defaultSuffix = appendSuffix ? '. ' : '';
- var cjkSuffix = appendSuffix ? '、' : '';
- var koreanSuffix = appendSuffix ? ', ' : '';
- var spaceSuffix = appendSuffix ? ' ' : '';
- switch (type) {
- case 0 /* DISC */:
- return '•' + spaceSuffix;
- case 1 /* CIRCLE */:
- return '◦' + spaceSuffix;
- case 2 /* SQUARE */:
- return '◾' + spaceSuffix;
- case 5 /* DECIMAL_LEADING_ZERO */:
- var string = createCounterStyleFromRange(value, 48, 57, true, defaultSuffix);
- return string.length < 4 ? "0" + string : string;
- case 4 /* CJK_DECIMAL */:
- return createCounterStyleFromSymbols(value, '〇一二三四五六七八九', cjkSuffix);
- case 6 /* LOWER_ROMAN */:
- return createAdditiveCounter(value, 1, 3999, ROMAN_UPPER, 3 /* DECIMAL */, defaultSuffix).toLowerCase();
- case 7 /* UPPER_ROMAN */:
- return createAdditiveCounter(value, 1, 3999, ROMAN_UPPER, 3 /* DECIMAL */, defaultSuffix);
- case 8 /* LOWER_GREEK */:
- return createCounterStyleFromRange(value, 945, 969, false, defaultSuffix);
- case 9 /* LOWER_ALPHA */:
- return createCounterStyleFromRange(value, 97, 122, false, defaultSuffix);
- case 10 /* UPPER_ALPHA */:
- return createCounterStyleFromRange(value, 65, 90, false, defaultSuffix);
- case 11 /* ARABIC_INDIC */:
- return createCounterStyleFromRange(value, 1632, 1641, true, defaultSuffix);
- case 12 /* ARMENIAN */:
- case 49 /* UPPER_ARMENIAN */:
- return createAdditiveCounter(value, 1, 9999, ARMENIAN, 3 /* DECIMAL */, defaultSuffix);
- case 35 /* LOWER_ARMENIAN */:
- return createAdditiveCounter(value, 1, 9999, ARMENIAN, 3 /* DECIMAL */, defaultSuffix).toLowerCase();
- case 13 /* BENGALI */:
- return createCounterStyleFromRange(value, 2534, 2543, true, defaultSuffix);
- case 14 /* CAMBODIAN */:
- case 30 /* KHMER */:
- return createCounterStyleFromRange(value, 6112, 6121, true, defaultSuffix);
- case 15 /* CJK_EARTHLY_BRANCH */:
- return createCounterStyleFromSymbols(value, '子丑寅卯辰巳午未申酉戌亥', cjkSuffix);
- case 16 /* CJK_HEAVENLY_STEM */:
- return createCounterStyleFromSymbols(value, '甲乙丙丁戊己庚辛壬癸', cjkSuffix);
- case 17 /* CJK_IDEOGRAPHIC */:
- case 48 /* TRAD_CHINESE_INFORMAL */:
- return createCJKCounter(value, '零一二三四五六七八九', CHINESE_INFORMAL_MULTIPLIERS, '負', cjkSuffix, CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS);
- case 47 /* TRAD_CHINESE_FORMAL */:
- return createCJKCounter(value, '零壹貳參肆伍陸柒捌玖', CHINESE_FORMAL_MULTIPLIERS, '負', cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS);
- case 42 /* SIMP_CHINESE_INFORMAL */:
- return createCJKCounter(value, '零一二三四五六七八九', CHINESE_INFORMAL_MULTIPLIERS, '负', cjkSuffix, CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS);
- case 41 /* SIMP_CHINESE_FORMAL */:
- return createCJKCounter(value, '零壹贰叁肆伍陆柒捌玖', CHINESE_FORMAL_MULTIPLIERS, '负', cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS);
- case 26 /* JAPANESE_INFORMAL */:
- return createCJKCounter(value, '〇一二三四五六七八九', '十百千万', JAPANESE_NEGATIVE, cjkSuffix, 0);
- case 25 /* JAPANESE_FORMAL */:
- return createCJKCounter(value, '零壱弐参四伍六七八九', '拾百千万', JAPANESE_NEGATIVE, cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS);
- case 31 /* KOREAN_HANGUL_FORMAL */:
- return createCJKCounter(value, '영일이삼사오육칠팔구', '십백천만', KOREAN_NEGATIVE, koreanSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS);
- case 33 /* KOREAN_HANJA_INFORMAL */:
- return createCJKCounter(value, '零一二三四五六七八九', '十百千萬', KOREAN_NEGATIVE, koreanSuffix, 0);
- case 32 /* KOREAN_HANJA_FORMAL */:
- return createCJKCounter(value, '零壹貳參四五六七八九', '拾百千', KOREAN_NEGATIVE, koreanSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS);
- case 18 /* DEVANAGARI */:
- return createCounterStyleFromRange(value, 0x966, 0x96f, true, defaultSuffix);
- case 20 /* GEORGIAN */:
- return createAdditiveCounter(value, 1, 19999, GEORGIAN, 3 /* DECIMAL */, defaultSuffix);
- case 21 /* GUJARATI */:
- return createCounterStyleFromRange(value, 0xae6, 0xaef, true, defaultSuffix);
- case 22 /* GURMUKHI */:
- return createCounterStyleFromRange(value, 0xa66, 0xa6f, true, defaultSuffix);
-                case 22 /* HEBREW */: // same case value as GURMUKHI above, so this branch is unreachable
- return createAdditiveCounter(value, 1, 10999, HEBREW, 3 /* DECIMAL */, defaultSuffix);
- case 23 /* HIRAGANA */:
- return createCounterStyleFromSymbols(value, 'あいうえおかきくけこさしすせそたちつてとなにぬねのはひふへほまみむめもやゆよらりるれろわゐゑをん');
- case 24 /* HIRAGANA_IROHA */:
- return createCounterStyleFromSymbols(value, 'いろはにほへとちりぬるをわかよたれそつねならむうゐのおくやまけふこえてあさきゆめみしゑひもせす');
- case 27 /* KANNADA */:
- return createCounterStyleFromRange(value, 0xce6, 0xcef, true, defaultSuffix);
- case 28 /* KATAKANA */:
- return createCounterStyleFromSymbols(value, 'アイウエオカキクケコサシスセソタチツテトナニヌネノハヒフヘホマミムメモヤユヨラリルレロワヰヱヲン', cjkSuffix);
- case 29 /* KATAKANA_IROHA */:
- return createCounterStyleFromSymbols(value, 'イロハニホヘトチリヌルヲワカヨタレソツネナラムウヰノオクヤマケフコエテアサキユメミシヱヒモセス', cjkSuffix);
- case 34 /* LAO */:
- return createCounterStyleFromRange(value, 0xed0, 0xed9, true, defaultSuffix);
- case 37 /* MONGOLIAN */:
- return createCounterStyleFromRange(value, 0x1810, 0x1819, true, defaultSuffix);
- case 38 /* MYANMAR */:
- return createCounterStyleFromRange(value, 0x1040, 0x1049, true, defaultSuffix);
- case 39 /* ORIYA */:
- return createCounterStyleFromRange(value, 0xb66, 0xb6f, true, defaultSuffix);
- case 40 /* PERSIAN */:
- return createCounterStyleFromRange(value, 0x6f0, 0x6f9, true, defaultSuffix);
- case 43 /* TAMIL */:
- return createCounterStyleFromRange(value, 0xbe6, 0xbef, true, defaultSuffix);
- case 44 /* TELUGU */:
- return createCounterStyleFromRange(value, 0xc66, 0xc6f, true, defaultSuffix);
- case 45 /* THAI */:
- return createCounterStyleFromRange(value, 0xe50, 0xe59, true, defaultSuffix);
- case 46 /* TIBETAN */:
- return createCounterStyleFromRange(value, 0xf20, 0xf29, true, defaultSuffix);
- case 3 /* DECIMAL */:
- default:
- return createCounterStyleFromRange(value, 48, 57, true, defaultSuffix);
- }
- };
-
- var IGNORE_ATTRIBUTE = 'data-html2canvas-ignore';
- var DocumentCloner = /** @class */ (function () {
- function DocumentCloner(context, element, options) {
- this.context = context;
- this.options = options;
- this.scrolledElements = [];
- this.referenceElement = element;
- this.counters = new CounterState();
- this.quoteDepth = 0;
- if (!element.ownerDocument) {
- throw new Error('Cloned element does not have an owner document');
- }
- this.documentElement = this.cloneNode(element.ownerDocument.documentElement, false);
- }
- DocumentCloner.prototype.toIFrame = function (ownerDocument, windowSize) {
- var _this = this;
- var iframe = createIFrameContainer(ownerDocument, windowSize);
- if (!iframe.contentWindow) {
- return Promise.reject("Unable to find iframe window");
- }
- var scrollX = ownerDocument.defaultView.pageXOffset;
- var scrollY = ownerDocument.defaultView.pageYOffset;
- var cloneWindow = iframe.contentWindow;
- var documentClone = cloneWindow.document;
- /* Chrome doesn't detect relative background-images assigned in inline