diff --git a/spaces/0xqtpie/doodle2vid/README.md b/spaces/0xqtpie/doodle2vid/README.md deleted file mode 100644 index 8ee44ba06dccfec088c6c5f5e8389a5dd56808ab..0000000000000000000000000000000000000000 --- a/spaces/0xqtpie/doodle2vid/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Doodle2vid -emoji: 🐢 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.44.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Cs6 Master Collection Keygen Xforce Rar Zip Learn How to Generate and Apply Xforce Keygen for Adobe Cs6.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Cs6 Master Collection Keygen Xforce Rar Zip Learn How to Generate and Apply Xforce Keygen for Adobe Cs6.md deleted file mode 100644 index 4642871978ceda76403902ec0f1e85ee3eb405af..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Cs6 Master Collection Keygen Xforce Rar Zip Learn How to Generate and Apply Xforce Keygen for Adobe Cs6.md +++ /dev/null @@ -1,148 +0,0 @@ -
-

Adobe Cs6 Master Collection Keygen Xforce Rar Zip

-

If you are looking for a way to get the most out of Adobe Creative Suite 6 Master Collection, you might be interested in using a keygen tool that can generate valid serial numbers and activation codes for you. In this article, we will explain what Adobe Cs6 Master Collection is, what Xforce Keygen is, and how to download and install Adobe Cs6 Master Collection with Xforce Keygen. We will also cover some of the benefits and risks of using this method, and answer some frequently asked questions.

-

What is Adobe Cs6 Master Collection?

-

Adobe Cs6 Master Collection is a software bundle that includes all the Adobe creative tools you need to create stunning digital content for any platform. Whether you are a graphic designer, web developer, video editor, photographer, or animator, you can find the right tool for your project in Adobe Cs6 Master Collection. Some of the applications included in this bundle are:

-

Adobe Cs6 Master Collection Keygen Xforce Rar Zip


Download File » https://byltly.com/2uKv0B



- -

Adobe Cs6 Master Collection also comes with Adobe Bridge CS6, a file management tool that lets you organize and preview your media files; Adobe Media Encoder CS6, a tool that lets you encode your videos to various formats; and Adobe Acrobat X Pro, a tool that lets you create, edit, and sign PDF documents.

-

Features of Adobe Cs6 Master Collection

-

Some of the features that make Adobe Cs6 Master Collection stand out are:

- -

System requirements for Adobe Cs6 Master Collection

-

To run Adobe Cs6 Master Collection smoothly on your computer, you need to meet the following system requirements:

- - - - - - - - -
| Requirement | Windows | Mac OS |
| --- | --- | --- |
| Operating system | Windows | Mac OS |
| Processor | Intel® Pentium® 4 or AMD Athlon® 64 processor (Intel Core™2 Duo or AMD Phenom® II recommended); Intel Core i7 required for Adobe SpeedGrade™ | Multicore Intel processor with 64-bit support |
| RAM | 4 GB of RAM (8 GB recommended) | 4 GB of RAM (8 GB recommended) |
| Hard disk space | 15.5 GB of available hard-disk space for installation; additional free space required during installation (cannot install on removable flash storage devices) | 15.5 GB of available hard-disk space for installation; additional free space required during installation (cannot install on a volume that uses a case-sensitive file system or on removable flash storage devices) |
| Display | 1280 x 900 display (1280 x 1024 recommended) with 16-bit color and 512 MB of VRAM; OpenGL 2.0–capable system | 1280 x 900 display (1680 x 1050 recommended) with 16-bit color and 512 MB of VRAM; OpenGL 2.0–capable system |
| DVD-ROM drive | DVD-ROM drive compatible with dual-layer DVDs (DVD+-R burner for burning DVDs; Blu-ray burner for creating Blu-ray Disc media) | DVD-ROM drive compatible with dual-layer DVDs (SuperDrive for burning DVDs; external Blu-ray burner for creating Blu-ray Disc media) |
| Other requirements | Java™ Runtime Environment 1.6 (included); Eclipse™ 3.7 (for plug-in installation of Adobe Flash® Builder®), with the following distributions supported: Eclipse IDE for Java EE and Java Developers, Eclipse Classic, Eclipse for PHP Developers; QuickTime 7.6.6 software required for QuickTime features; Optional: GPU card for GPU-accelerated performance in Adobe Premiere Pro; Optional: Tangent CP200 family or Tangent Wave control surface for SpeedGrade; Optional: for SDI output, NVIDIA Quadro SDI Output card required for SpeedGrade; Optional: 7200 RPM hard drive (multiple fast disk drives preferred) for video products; This software will not operate without activation: broadband Internet connection and registration are required for software activation | Java Runtime Environment 1.6; Eclipse 3.7 Cocoa version (for plug-in installation of Adobe Flash Builder), with the following distributions supported: Eclipse IDE for Java EE and Java Developers, Eclipse Classic, Eclipse for PHP Developers; QuickTime 7.6.6 software required for QuickTime features; Optional: GPU card for GPU-accelerated performance in Adobe Premiere Pro; Optional: Tangent CP200 family or Tangent Wave control surface for SpeedGrade; Optional: for SDI output, NVIDIA Quadro SDI Output card required for SpeedGrade; Optional: 7200 RPM hard drive (multiple fast disk drives preferred) for video products; This software will not operate without activation: broadband Internet connection and registration are required for software activation |
-

What is Xforce Keygen?

-

Xforce Keygen is a tool that can generate valid serial numbers and activation codes for various software products. It is also known as a crack or a patch because it bypasses the original authentication process of the software. Xforce Keygen is created by a group of hackers called X-Force who are known for cracking many popular software products such as Autodesk AutoCAD, CorelDRAW Graphics Suite, Microsoft Office, etc.

-

How Xforce Keygen works

-
Xforce Keygen works by generating serial numbers and activation codes that the software accepts as genuine, and by patching its activation files to prevent the software from detecting the crack and asking for online activation. Xforce Keygen usually comes in a zip or rar file that contains the keygen executable file and a text file with instructions on how to use it.

-

Benefits of using Xforce Keygen

-

Some of the benefits of using Xforce Keygen are:

- -

Risks of using Xforce Keygen

-

Some of the risks of using Xforce Keygen are:

-

Adobe Cs6 Master Collection Crack Xforce Download
-How to Activate Adobe Cs6 Master Collection with Xforce Keygen
-Adobe Cs6 Master Collection Serial Number Generator by Xforce
-Xforce Keygen for Adobe Cs6 Master Collection Free Download
-Adobe Cs6 Master Collection Full Version with Xforce Crack
-Adobe Cs6 Master Collection Xforce Keygen Only
-Adobe Cs6 Master Collection Activation Code by Xforce
-Adobe Cs6 Master Collection License Key from Xforce
-Adobe Cs6 Master Collection Patch by Xforce Rar
-Adobe Cs6 Master Collection Xforce Keygen 64 Bit
-Adobe Cs6 Master Collection Xforce Keygen 32 Bit
-Adobe Cs6 Master Collection Xforce Keygen Mac
-Adobe Cs6 Master Collection Xforce Keygen Windows
-Adobe Cs6 Master Collection Xforce Keygen Offline Activation
-Adobe Cs6 Master Collection Xforce Keygen Not Working
-Adobe Cs6 Master Collection Xforce Keygen Invalid Request Code
-Adobe Cs6 Master Collection Xforce Keygen Error
-Adobe Cs6 Master Collection Xforce Keygen Virus
-Adobe Cs6 Master Collection Xforce Keygen Password
-Adobe Cs6 Master Collection Xforce Keygen Zip File
-Adobe Cs6 Master Collection Xforce Keygen Rar File
-Adobe Cs6 Master Collection Xforce Keygen Extract
-Adobe Cs6 Master Collection Xforce Keygen Install
-Adobe Cs6 Master Collection Xforce Keygen Tutorial
-Adobe Cs6 Master Collection Xforce Keygen Guide
-Adobe Cs6 Master Collection Xforce Keygen Review
-Adobe Cs6 Master Collection Xforce Keygen Test
-Adobe Cs6 Master Collection Xforce Keygen Forum
-Adobe Cs6 Master Collection Xforce Keygen Support
-Adobe Cs6 Master Collection Xforce Keygen Help
-Adobe Cs6 Master Collection Xforce Keygen Tips
-Adobe Cs6 Master Collection Xforce Keygen Tricks
-Adobe Cs6 Master Collection Xforce Keygen Hacks
-Adobe Cs6 Master Collection Xforce Keygen Cheats
-Adobe Cs6 Master Collection Xforce Keygen Tools
-Adobe Cs6 Master Collection Xforce Keygen Software
-Adobe Cs6 Master Collection Xforce Keygen Program
-Adobe Cs6 Master Collection Xforce Keygen Application
-Adobe Cs6 Master Collection Xforce Keygen Product
-Adobe Cs6 Master Collection Xforce Keygen Solution
-Adobe Cs6 Master Collection Xforce Keygen Alternative
-Adobe Cs6 Master Collection Xforce Keygen Comparison
-Adobe Cs6 Master Collection Xforce Keygen Benefits
-Adobe Cs6 Master Collection Xforce Keygen Features
-Adobe Cs6 Master Collection Xforce Keygen Advantages
-Adobe Cs6 Master Collection Xforce Keygen Disadvantages
-Adobe Cs6 Master Collection Xforce Keygen Pros and Cons
-Adobe Cs6 Master Collection Xforce Keygen Quality
-Adobe Cs6 Master Collection Xforce Keygen Reliability
-Adobe Cs6 Master Collection Xforce Keygen Satisfaction

- -

How to download and install Adobe Cs6 Master Collection with Xforce Keygen

-

If you want to download and install Adobe Cs6 Master Collection with Xforce Keygen, you need to follow these steps carefully:

-

Step 1: Disable your network card or pull the network cable out

-

This is to prevent the software from connecting to the internet and verifying your serial number and activation code. You also need to make sure you don't have any of these entries in your hosts file:

- 127.0.0.1 lmlicenses.wip4.adobe.com 127.0.0.1 lm.licenses.adobe.com -

The hosts file is located in C:\windows\system32\drivers\etc\hosts for Windows and /etc/hosts for Mac OS.

-

Step 2: Install the Master Collection CS6 with a serial generated from Xforce Keygen

-

You need to download Xforce Keygen from a reliable source and run it as administrator. Then, you need to select Adobe Cs6 Master Collection from the drop-down menu and click on Generate Serial. You will get a serial number that you need to copy and paste in the installation window of Adobe Cs6 Master Collection. Do not close the keygen yet. When the error "Please connect to the internet and retry" shows, click on Connect Later.

-

Step 3: Launch an Adobe application and confirm you have a connection problem

-

You need to launch any Adobe application from the Master Collection, such as Photoshop, Illustrator, or InDesign. You will see a message that says "We are unable to start your subscription for Adobe Cs6 Master Collection". Click on Having Trouble Connecting To The Internet. Then, click on Offline Activation and then on Generate Request Code. You will get a request code that you need to copy and paste in the keygen window.

-

Step 4: Generate and validate an activation code with Xforce Keygen

-

In the keygen window, click on Activate and then on Generate Activation Code. You will get an activation code that you need to copy and paste in the Adobe application window. Then, click on Activate and then on Close Application.

-

Step 5: Run disable_activation.cmd or disable_activation_osx as root

-

This is to block Adobe from accessing its servers and checking your activation status. You need to run disable_activation.cmd for Windows or disable_activation_osx for Mac OS as administrator or root. These files are usually included in the zip or rar file of Xforce Keygen. Alternatively, you can manually add these lines to your hosts file:

- # Adobe Blocker 127.0.0.1 lmlicenses.wip4.adobe.com 127.0.0.1 lm.licenses.adobe.com -

Step 6: Re-enable your network card and update your software to the latest version

-

This is to restore your internet connection and enjoy the latest features and updates of Adobe Cs6 Master Collection. You can use Adobe Updater to check for updates and install them without losing the crack.

-

Conclusion

-

In this article, we have explained what Adobe Cs6 Master Collection is, what Xforce Keygen is, and how to download and install Adobe Cs6 Master Collection with Xforce Keygen. We have also covered some of the benefits and risks of using this method, and answered some frequently asked questions. We hope you have found this article helpful and informative.

-

FAQs

-
    -
  1. Q: Is Xforce Keygen legal?
  2. -
  3. A: No, Xforce Keygen is not legal as it violates the software's terms of service and infringes its intellectual property rights.
  4. -
  5. Q: Is Xforce Keygen safe?
  6. -
  7. A: No, Xforce Keygen is not safe as it may contain malware or viruses that can harm your computer or compromise your work.
  8. -
  9. Q: Can I use Xforce Keygen for other software products?
  10. -
  11. A: Yes, Xforce Keygen can generate serial numbers and activation codes for other software products such as Autodesk AutoCAD, CorelDRAW Graphics Suite, Microsoft Office, etc.
  12. -
  13. Q: Can I use Adobe Cs6 Master Collection online after using Xforce Keygen?
  14. -
  15. A: Yes, you can use Adobe Cs6 Master Collection online after using Xforce Keygen, but you may not be able to access some features or services that require online verification or registration.
  16. -
  17. Q: Can I uninstall Adobe Cs6 Master Collection after using Xforce Keygen?
  18. -
  19. A: Yes, you can uninstall Adobe Cs6 Master Collection after using Xforce Keygen, but you need to delete these folders as well:
  20. - -
-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Como Ativar O Malwarebytes Premium.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Como Ativar O Malwarebytes Premium.md deleted file mode 100644 index fa9eb122a113cc3f28b203da4f479a7051b66fb6..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Como Ativar O Malwarebytes Premium.md +++ /dev/null @@ -1,33 +0,0 @@ -
-

How to activate Malwarebytes Premium

-

Malwarebytes Premium is a security program that protects your computer against malware, ransomware, exploits, and other online threats. To activate the premium features of Malwarebytes, you need a valid license, which can be purchased from the official Malwarebytes website or from authorized resellers.

-

In this article, we will show you how to activate Malwarebytes Premium on your computer using two methods: through your Malwarebytes account or with a license key.

-

how to activate malwarebytes premium


Download File: https://byltly.com/2uKw0W



-

Activating Malwarebytes Premium through your Malwarebytes account

-

This method requires an active login for your Malwarebytes account. If you have not created your account yet, see how to do so at this link: Create and manage your Malwarebytes account.

-

Follow the steps below to activate Malwarebytes Premium through your account:

-
    -
  1. Download the Malwarebytes software from the official site: http://downloads.malwarebytes.org/file/mbam/ and install it on your computer.
  2. Open the Malwarebytes application.
  3. In the top-right corner of the Dashboard, click Activate license.
  4. In the Email field, enter the email address used to sign in to your Malwarebytes account.
  5. In the Password field, enter the password used to sign in to your Malwarebytes account.
  6. Click Sign in.
  7. Once your license is activated, click Done.
-

Once activated, Premium will be displayed in the top-left corner of the program's Dashboard.

-

Activating Malwarebytes Premium with a license key

-

This method requires your license key, which can be found in your purchase confirmation email or in your Malwarebytes account. If you don't know where to find your license key, see this link: Find my Malwarebytes license key.

-

Follow the steps below to activate Malwarebytes Premium with a license key:

-
    -
  1. Download the Malwarebytes software from the official site: http://downloads.malwarebytes.org/file/mbam/ and install it on your computer.
  2. Open the Malwarebytes application.
  3. In the top-right corner of the Dashboard, click Activate license.
  4. Click Enter license key.
  5. If your license key has the format XXXXX-XXXXX-XXXXX-XXXXX, enter it in the License key field and click Activate.
  6. If your license key has the format XXXX-XXXX-XXXX-XXXX and came with a License ID in the format XXXXX or XXXXX-XXXXX, select My license came with a License ID below the License key field. Enter your License ID and your license key, then click Activate.
-

To confirm that activation was successful, check that Premium is displayed in the top-left corner of the program's Dashboard.

-

cec2833e83
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack Adobe Photoshop Cs6 Amtlib Dll Files Everything You Need to Know.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack Adobe Photoshop Cs6 Amtlib Dll Files Everything You Need to Know.md deleted file mode 100644 index 5e322f5b2a99b7184829f66237f8fb5f6e7fd8dc..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack Adobe Photoshop Cs6 Amtlib Dll Files Everything You Need to Know.md +++ /dev/null @@ -1,150 +0,0 @@ -
-

How to Crack Adobe Photoshop CS6 with Amtlib.dll Files

-

Adobe Photoshop CS6 is one of the most popular and powerful image editing programs in the world. However, it is also one of the most expensive, costing hundreds of dollars for a single license. If you want to use Adobe Photoshop CS6 without paying for it, you might be tempted to crack it using Amtlib.dll files. But what are these files and how do they work? In this article, we will explain everything you need to know about cracking Adobe Photoshop CS6 with Amtlib.dll files, including the benefits, risks, and alternatives.

-

Crack Adobe Photoshop Cs6 Amtlib Dll Files


DOWNLOAD ✵✵✵ https://byltly.com/2uKyoj



-

What is Adobe Photoshop CS6?

-

Adobe Photoshop CS6 is the 13th major release of the Adobe Photoshop software, which was launched in May 2012. It is a creative image editing suite that offers a range of features and tools for professional and amateur photographers, graphic designers, web developers, and video editors. Some of the new features and enhancements in Adobe Photoshop CS6 include:

- -

What is Amtlib.dll?

-

Amtlib.dll is a dynamic link library file that is part of the Adobe Application Manager. It is responsible for activating and validating the licenses of various Adobe products, such as Photoshop, Illustrator, Dreamweaver, Premiere Pro, After Effects, etc. It is located in the installation folder of each Adobe product.

-

When you crack Adobe Photoshop CS6 with Amtlib.dll files, you are essentially replacing the original Amtlib.dll file with a modified one that bypasses the license verification process. This way, you can use Adobe Photoshop CS6 without entering a serial number or signing in with an Adobe ID.

-

How to crack Adobe Photoshop Cs6 with Amtlib Dll file
-Amtlib Dll file download for Adobe Photoshop Cs6 crack
-Adobe Photoshop Cs6 crack Amtlib Dll file missing error
-Fix Adobe Photoshop Cs6 crack Amtlib Dll file corrupted issue
-Adobe Photoshop Cs6 crack Amtlib Dll file location on Windows
-Adobe Photoshop Cs6 crack Amtlib Dll file location on Mac
-Adobe Photoshop Cs6 crack Amtlib Dll file not working solution
-Adobe Photoshop Cs6 crack Amtlib Dll file virus scan
-Adobe Photoshop Cs6 crack Amtlib Dll file backup and restore
-Adobe Photoshop Cs6 crack Amtlib Dll file alternative methods
-Adobe Photoshop Cs6 crack Amtlib Dll file free download link
-Adobe Photoshop Cs6 crack Amtlib Dll file installation guide
-Adobe Photoshop Cs6 crack Amtlib Dll file compatibility check
-Adobe Photoshop Cs6 crack Amtlib Dll file update and patch
-Adobe Photoshop Cs6 crack Amtlib Dll file license key generator
-Adobe Photoshop Cs6 crack Amtlib Dll file activation code
-Adobe Photoshop Cs6 crack Amtlib Dll file serial number
-Adobe Photoshop Cs6 crack Amtlib Dll file registration code
-Adobe Photoshop Cs6 crack Amtlib Dll file product key
-Adobe Photoshop Cs6 crack Amtlib Dll file full version download
-Adobe Photoshop Cs6 crack Amtlib Dll file trial reset tool
-Adobe Photoshop Cs6 crack Amtlib Dll file offline installer
-Adobe Photoshop Cs6 crack Amtlib Dll file online activation
-Adobe Photoshop Cs6 crack Amtlib Dll file safe and secure download
-Adobe Photoshop Cs6 crack Amtlib Dll file latest version download
-Adobe Photoshop Cs6 crack Amtlib Dll file review and feedback
-Adobe Photoshop Cs6 crack Amtlib Dll file tutorial and tips
-Adobe Photoshop Cs6 crack Amtlib Dll file features and benefits
-Adobe Photoshop Cs6 crack Amtlib Dll file pros and cons
-Adobe Photoshop Cs6 crack Amtlib Dll file comparison and contrast
-Adobe Photoshop Cs6 crack Amtlib Dll file best practices and recommendations
-Adobe Photoshop Cs6 crack Amtlib Dll file troubleshooting and support
-Adobe Photoshop Cs6 crack Amtlib Dll file FAQs and answers
-Adobe Photoshop Cs6 crack Amtlib Dll file forum and community
-Adobe Photoshop Cs6 crack Amtlib Dll file blog and articles
-Adobe Photoshop Cs6 crack Amtlib Dll file video and audio tutorials
-Adobe Photoshop Cs6 crack Amtlib Dll file case studies and testimonials
-Adobe Photoshop Cs6 crack Amtlib Dll file coupons and discounts
-Adobe Photoshop Cs6 crack Amtlib Dll file affiliate program and commission
-Adobe Photoshop Cs6 crack Amtlib Dll file refund policy and guarantee
-How to uninstall Adobe Photoshop Cs6 crack Amtlib Dll file
-How to upgrade from Adobe Photoshop Cs5 to Cs6 with Amtlib Dll file
-How to use Adobe Photoshop Cs6 with other cracked software using Amtlib Dll files
-How to fix common errors and bugs in Adobe Photoshop Cs6 with cracked Amtlib Dll files
-How to optimize the performance of Adobe Photoshop Cs6 with cracked Amtlib Dll files
-How to customize the settings of Adobe Photoshop Cs6 with cracked Amtlib Dll files
-How to create stunning graphics and designs with Adobe Photoshop Cs6 with cracked Amtlib Dll files
-How to edit photos and images with Adobe Photoshop Cs6 with cracked Amtlib Dll files
-How to add filters and effects with Adobe Photoshop Cs6 with cracked Amtlib Dll files
-How to share your work with others using Adobe Photoshop Cs6 with cracked Amtlib Dll files

-

How to Download and Install Adobe Photoshop CS6

-

Before you can crack Adobe Photoshop CS6 with Amtlib.dll files, you need to download and install it on your computer. Here are the steps to do so:

-
    -
  1. Go to https://www.adobe.com/products/photoshop/free-trial-download.html and click on "Download now".
  2. -
  3. Follow the instructions on the screen to download the installer file.
  4. -
  5. Run the installer file and follow the instructions on the screen to install Adobe Photoshop CS6.
  6. -
  7. When prompted, choose "Try" instead of "Buy" or "Enter serial number".
  8. -
  9. Wait for the installation to complete.
  10. -
  11. You have now installed Adobe Photoshop CS6 as a trial version. You can use it for 30 days before it expires.
  12. -
-

How to Crack Adobe Photoshop CS6 with Amtlib.dll Files

-

Now that you have installed Adobe Photoshop CS6 as a trial version, you can crack it using Amtlib.dll files. Here are the steps to do so:

-

Step 1: Download Amtlib.dll Files

-

The first thing you need to do is download the cracked Amtlib.dll files for both 32-bit and 64-bit versions of Adobe Photoshop CS6. You can find them from various sources online, but make sure they are safe and reliable. One possible source is https://davi24.com/download-file-amtlib-dll/, where you can download them for free.

-

Step 2: Locate the Installation Folder of Adobe Photoshop CS6

-

The next thing you need to do is locate the installation folder of Adobe Photoshop CS6 on your computer. The default location depends on your operating system and whether you have installed the 32-bit or 64-bit version of Adobe Photoshop CS6. Here are some possible locations:

- -

Step 3: Replace the Original Amtlib.dll File with the Cracked One

-

The final thing you need to do is replace the original Amtlib.dll file with the cracked one. To do this:

-
    -
  1. Open the installation folder of Adobe Photoshop CS6.
  2. -
  3. Find and rename the original Amtlib.dll file as something else, such as "Amtlib.bak". This way, you can restore it later if needed.
  4. -
  5. Copy and paste the cracked Amtlib.dll file into the same folder.
  6. -
  7. You have now replaced the original Amtlib.dll file with the cracked one.
  8. -
-

Step 4: Run Adobe Photoshop CS6 and Enjoy

-

The last thing you need to do is run Adobe Photoshop CS6 and enjoy using it without any restrictions. To do this:

-
    -
  1. Launch Adobe Photoshop CS6 from your desktop or start menu.
  2. -
  3. You should not see any prompts asking for a serial number or an Adobe ID.
  4. -```html any limitations. -
  5. You have now successfully cracked Adobe Photoshop CS6 with Amtlib.dll files.
  6. -
-

Benefits of Cracking Adobe Photoshop CS6 with Amtlib.dll Files

-

Cracking Adobe Photoshop CS6 with Amtlib.dll files has some benefits, such as:

- -

Risks of Cracking Adobe Photoshop CS6 with Amtlib.dll Files

-

However, cracking Adobe Photoshop CS6 with Amtlib.dll files also has some risks, such as:

- -

Alternatives to Cracking Adobe Photoshop CS6 with Amtlib.dll Files

-

If you are not comfortable with cracking Adobe Photoshop CS6 with Amtlib.dll files, you may want to consider some alternatives, such as:

- -

Conclusion

-

In conclusion, cracking Adobe Photoshop CS6 with Amtlib.dll files is a way to use the software for free without any restrictions. However, it also comes with some drawbacks and dangers that you should be aware of. Therefore, you should weigh the pros and cons carefully before deciding to crack Adobe Photoshop CS6 with Amtlib.dll files. Alternatively, you can opt for some other options that may suit your needs and budget better. We hope this article has been helpful and informative for you. Thank you for reading!

-

FAQs

-

Here are some frequently asked questions and answers about cracking Adobe Photoshop CS6 with Amtlib.dll files:

-

Q: Is cracking Adobe Photoshop CS6 with Amtlib.dll files illegal?

-

A: Yes, cracking Adobe Photoshop CS6 with Amtlib.dll files is illegal. It violates the copyright and license agreement of Adobe and may result in legal action against you. You should respect the intellectual property rights of the software developers and pay for their products if you want to use them.

-

Q: Is cracking Adobe Photoshop CS6 with Amtlib.dll files safe?

-

A: No, cracking Adobe Photoshop CS6 with Amtlib.dll files is not safe. It may expose your computer to malicious software that may harm your system or steal your data. It may also cause errors, bugs, or crashes that may affect your work or damage your files. It may also prevent you from getting updates, patches, or new features that may improve your experience or fix issues. You should protect your computer and data by using only trusted and secure sources of software.

-

Q: Is cracking Adobe Photoshop CS6 with Amtlib.dll files worth it?

-

A: That depends on your personal preference and situation. Cracking Adobe Photoshop CS6 with Amtlib.dll files may save you some money and give you some freedom in using the software. However, it also comes with some risks and disadvantages that you should consider carefully. You may also miss out on some benefits and opportunities that come with using a legitimate version of the software. You should weigh the pros and cons carefully before deciding to crack Adobe Photoshop CS6 with Amtlib.dll files.

-

Q: How can I crack other Adobe products with Amtlib.dll files?

-

A: The process of cracking other Adobe products with Amtlib.dll files is similar to cracking Adobe Photoshop CS6. You need to download and install the trial version of the product you want to crack, then download and replace the original Amtlib.dll file with the cracked one in the installation folder of the product. However, you should be careful about the compatibility and reliability of the cracked Amtlib.dll files for different products and versions. You should also be aware of the risks and consequences of cracking other Adobe products with Amtlib.dll files.

-

Q: Where can I find more information about cracking Adobe Photoshop CS6 with Amtlib.dll files?

-

A: You can find more information about cracking Adobe Photoshop CS6 with Amtlib.dll files from various sources online, such as blogs, forums, videos, etc. However, you should be careful about the accuracy and credibility of these sources. You should also be careful about the safety and security of these sources. You should not download or click on any links or files that may contain viruses, malware, or spyware. You should also not share any personal or sensitive information that may compromise your privacy or identity.

-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Minecraft on Windows 10 Everything You Need to Know.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Minecraft on Windows 10 Everything You Need to Know.md deleted file mode 100644 index b51a878b2f746e676413d8e2dc07e1a64d01d98f..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Minecraft on Windows 10 Everything You Need to Know.md +++ /dev/null @@ -1,37 +0,0 @@ - -

How long does it take to download Minecraft on Windows 10?

-

Minecraft is one of the most popular games in the world, with millions of players exploring, building and fighting in its blocky world. If you want to join them, you might wonder how long it takes to download Minecraft on Windows 10. The answer depends on a few factors, such as your internet speed, the version of Minecraft you want to install, and the size of the game files.

-

how long does it take to download minecraft on windows 10


DOWNLOAD ✑ ✑ ✑ https://byltly.com/2uKvHZ



-

In this article, we will explain how to download Minecraft for Windows 10, and how long you can expect it to take. We will also compare the two versions of Minecraft available for PC: Java Edition and Bedrock Edition (also known as Windows 10 Edition).

-

How to download Minecraft for Windows 10

-

Before you can download Minecraft for Windows 10, you need to purchase the game from either the Microsoft Store or the Minecraft website. The game costs $29.99 / £24.99 / AUS$39.95, but you can get it for free or at a discounted price if you have an Xbox Game Pass subscription.

-

Once you have bought the game, you will need to create a Microsoft account if you don't have one already. This is an Outlook email address that you can use to sign in to the Minecraft Launcher and access online features. You will also need to verify your email address and enter your birthdate and country/region.

-

After creating your Microsoft account, you can download and open the Minecraft Launcher from either the Microsoft Store or the Minecraft website. This is where you can choose which version of Minecraft you want to install: Java Edition or Bedrock Edition.

-

-

Which version of Minecraft should you install?

-

Minecraft Java Edition and Bedrock Edition are both compatible with Windows 10, but they have some differences in features, performance and cross-play options. Here are some of the main differences between them:

- -

The good news is that you don't have to choose between them. If you buy Minecraft for Windows 10 from the Minecraft website, you will get both Java Edition and Bedrock Edition for free. You can install both versions on your PC and switch between them using the Minecraft Launcher.

-

How long does it take to download Minecraft on Windows 10?

-

The download time for Minecraft on Windows 10 depends on your internet speed and the size of the game files. According to our tests, these are the approximate download times for each version of Minecraft:

- - - - -
| Version | File size | Download time |
| --- | --- | --- |
| Java Edition | 500 MB | 5 minutes |
| Bedrock Edition | 300 MB | 3 minutes |
-

Note that these are only estimates based on average internet speeds of 25 Mbps. Your actual download time may vary depending on your internet connection and other factors.
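If you want a rough figure for your own connection, the arithmetic behind these estimates is simple: convert the advertised speed from megabits to megabytes per second and divide the file size by it. The short Python sketch below illustrates this; the function name is made up for the example, the 500 MB / 300 MB / 25 Mbps figures are just the values from the table above, and the result is a best-case lower bound, since real downloads rarely sustain the full advertised speed.

```python
def estimated_download_minutes(file_size_mb: float, speed_mbps: float) -> float:
    """Best-case estimate: convert Mbps (megabits/s) to MB/s, then divide size by speed."""
    speed_mb_per_s = speed_mbps / 8          # 25 Mbps is roughly 3.1 MB/s
    seconds = file_size_mb / speed_mb_per_s  # ignores server throttling and installer overhead
    return seconds / 60

print(round(estimated_download_minutes(500, 25), 1))  # Java Edition: about 2.7 minutes in ideal conditions
print(round(estimated_download_minutes(300, 25), 1))  # Bedrock Edition: about 1.6 minutes
```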

-

If you are having trouble downloading Minecraft on Windows 10, you can try some of these troubleshooting steps:

- -

Conclusion

-

Minecraft is a fun and creative game that you can enjoy on your Windows 10 PC. To download it, you need to buy it from either the Microsoft Store or the Minecraft website, then install it through the Minecraft Launcher. On an average connection, the download itself should only take a few minutes.

ddb901b051
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Cccam 2.3.0 Ipk Vix [BEST].md b/spaces/1gistliPinn/ChatGPT4/Examples/Cccam 2.3.0 Ipk Vix [BEST].md deleted file mode 100644 index d21fc508b0191e3f043d0f18e32c5fa49e365b37..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Cccam 2.3.0 Ipk Vix [BEST].md +++ /dev/null @@ -1,10 +0,0 @@ - -

a cccam is a device that is connected to a dreambox, and it controls the picture output of the dreambox. the cccam allows you to control the dreambox picture quality and settings. you can control the contrast, brightness, and color of the picture. the cccam will allow you to preview the picture quality on the dreambox.

-

Cccam 2.3.0 Ipk Vix


DOWNLOADhttps://imgfil.com/2uxYTr



-

this can be a great option if you have a dreambox that is out of warranty and you want to upgrade the firmware. the cccam firmware upgrade can be installed on your dreambox if you have the dreambox installed and you have the right cables. (the dreambox will not work without the right cables, and the cccam firmware upgrade can only be installed after the dreambox is installed.)

-

there are a few different dreambox models that support cccam firmware upgrades. the dreambox is listed on this website if it supports cccam firmware upgrades. make sure that you are installing the correct cccam firmware for your dreambox model.

-

cccam is a great feature for those that want to upgrade their dreambox firmware. the cccam firmware upgrade will allow you to upgrade the firmware of the dreambox. the dreambox firmware upgrade can be installed on your dreambox if you have the dreambox installed and you have the right cables. the cccam firmware upgrade can only be installed after the dreambox is installed.

-

-


899543212b
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Clash Royale 3.3024.2 APK and Join the Arena with Your Favorite Clash Characters.md b/spaces/1phancelerku/anime-remove-background/Download Clash Royale 3.3024.2 APK and Join the Arena with Your Favorite Clash Characters.md deleted file mode 100644 index 11675180ef4f0d5a6fec7d6c5ce52233f8c6b216..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Clash Royale 3.3024.2 APK and Join the Arena with Your Favorite Clash Characters.md +++ /dev/null @@ -1,123 +0,0 @@ -
-

Clash Royale 3.3024.2 APK: Everything You Need to Know

-

If you are a fan of strategy games, you have probably heard of Clash Royale, one of the most popular and addictive mobile games in the world. Clash Royale is a real-time multiplayer game where you can collect and upgrade dozens of cards featuring your favorite characters from Clash of Clans, as well as spells and defenses. You can also build your own battle deck and challenge other players online in fast-paced duels.

-

But did you know that there is a new version of Clash Royale available for download? Yes, you heard that right. Clash Royale 3.3024.2 APK is the latest update of the game, and it comes with a lot of new features, improvements, and bug fixes that will make your gaming experience even better. In this article, we will tell you everything you need to know about Clash Royale 3.3024.2 APK, including what's new, how to download and install it, and why you should play it. Let's get started!

-

clash royale 3.3024.2 apk


Download File ❤❤❤ https://jinyurl.com/2uNQ0p



-

What is Clash Royale?

-

Before we dive into the details of the new version, let's have a quick recap of what Clash Royale is all about. Clash Royale is a strategy game developed by Supercell, the same company behind the hit game Clash of Clans. It was released in 2016 and has since become one of the most downloaded and played games on both Android and iOS devices.

-

Clash Royale is a game where you can create your own army of troops, spells, and buildings, and use them to attack your opponent's towers and destroy their king tower. You can also defend your own towers from enemy attacks using various strategies and tactics. The game features different arenas, each with its own theme and difficulty level, where you can compete with other players from around the world.

-

Clash Royale is not only a game of skill, but also a game of luck. You never know what cards you will get in your hand or what cards your opponent will play next. You have to think fast and act smart to win the battles. You can also join or create clans, where you can chat with other players, share cards, request donations, and participate in clan wars.

-

Clash Royale is a game that is easy to learn but hard to master. It requires strategic thinking, quick reflexes, and constant adaptation to changing situations. It is also a game that is constantly updated with new content, such as cards, modes, events, rewards, and more.

-

What's new in Clash Royale 3.3024.2 APK?

-

Now that you have a general idea of what Clash Royale is, let's see what's new in the latest version of the game: Clash Royale 3.3024.2 APK. This version was released on June 19th, 2023, and it brings some exciting changes and additions to the game.

-

New cards and balance updates

-

The most noticeable change in Clash Royale 3.3024.2 APK is the introduction of two new cards: the Firecracker and the Royal Delivery. The Firecracker is a common card that costs 3 elixir and shoots fireworks that deal splash damage to enemies. The Royal Delivery is a rare card that costs 4 elixir and drops a Royal Recruit on the battlefield after a short delay. Both cards are available in Arena 7 and above.

-

Another change in Clash Royale 3.3024.2 APK is the balance update that affects several cards in the game. Some of the cards that have been buffed are the Electro Dragon, the Goblin Cage, the Zappies, and the Heal Spirit. Some of the cards that have been nerfed are the Magic Archer, the Battle Healer, the Elixir Golem, and the Skeleton Barrel. You can check the full list of balance changes on the official website of Clash Royale.

-

New game modes and events

-

Clash Royale 3.3024.2 APK also introduces some new game modes and events that will spice up your gameplay. One of them is the Firecracker Rush, where both players start with a Firecracker on each lane, and more Firecrackers spawn throughout the match. Another one is the Royal Delivery Challenge, where you can win the new card by reaching 12 wins. There are also some seasonal events, such as the Summer of 2v2, where you can play different 2v2 modes with your friends or random partners.

-

clash royale 3.3024.2 apk download for android
-clash royale 3.3024.2 apk mod unlimited gold/gems
-clash royale 3.3024.2 apk latest version uptodown
-clash royale 3.3024.2 apk free download softpedia
-clash royale 3.3024.2 apk update new features
-clash royale 3.3024.2 apk hack no root
-clash royale 3.3024.2 apk offline installer
-clash royale 3.3024.2 apk mirror link
-clash royale 3.3024.2 apk file size
-clash royale 3.3024.2 apk gameplay review
-clash royale 3.3024.2 apk old version download
-clash royale 3.3024.2 apk obb data
-clash royale 3.3024.2 apk direct download
-clash royale 3.3024.2 apk for pc windows 10
-clash royale 3.3024.2 apk cheats codes
-clash royale 3.3024.2 apk original from supercell
-clash royale 3.3024.2 apk android requirements
-clash royale 3.3024.2 apk how to install guide
-clash royale 3.3024.2 apk best decks tips
-clash royale 3.3024.2 apk changelog not available
-clash royale 3.3024.2 apk online multiplayer mode
-clash royale 3.3024.2 apk unlimited elixir hack
-clash royale 3.3024.2 apk private server download
-clash royale 3.3024.2 apk full unlocked all cards
-clash royale 3.3024.2 apk bug fixes and improvements
-clash royale 3.3024.2 apk strategy game genre
-clash royale 3.3024.2 apk compatible devices list
-clash royale 3.3024.2 apk safe and secure download
-clash royale 3.3024.2 apk ratings and reviews
-clash royale 3.3024.2 apk screenshots and videos
-clash royale 3.3024.2 apk clan wars update
-clash royale 3.3024.2 apk legendary cards unlock
-clash royale 3.3024.2 apk arena challenges rewards
-clash royale 3.3024.2 apk new characters and skins
-clash royale 3.3024.2 apk fun and addictive gameplay
-clash royale 3.3024.2 apk support and feedback
-clash royale 3.3024.2 apk alternative download links
-clash royale 3.3024.2 apk frequently asked questions
-clash royale 3 .30242.apk no ads version premium

-

Additionally, Clash Royale 3.3024.2 APK brings back some of the classic game modes that have been missing for a while, such as Triple Elixir, Ramp Up, Sudden Death, and Draft. You can play these modes in friendly battles, tournaments, or special challenges.

-

New rewards and improvements

-

Finally, Clash Royale 3.3024.2 APK offers some new rewards and improvements that will make your gaming experience more enjoyable and rewarding. One of them is the Pass Royale Season 11, which gives you access to exclusive perks, such as unlimited entries to special challenges, a golden name, a unique tower skin, and more. You can also unlock new emotes, chests, gold, gems, and cards by completing quests and tiers.

-

Another improvement in Clash Royale 3.3024.2 APK is the Clan Wars 2.0 update, which is coming soon to the game. This update will revamp the clan wars system and make it more fun and competitive for all clans. You can expect new features such as boat battles, river tasks, clan leagues, and more.

-

How to download and install Clash Royale 3.3024.2 APK?

-

Now that you know what's new in Clash Royale 3.3024.2 APK, you might be wondering how to download and install it on your Android device. Don't worry, we have got you covered. Here is a step-by-step guide for you:

-

Requirements and permissions

-

Before you download and install Clash Royale 3.3024.2 APK, you need to make sure that your device meets the following requirements:

- -

You also need to enable the installation of apps from unknown sources on your device settings. To do this, follow these steps:

-
    -
  1. Go to your device settings and tap on Security or Privacy.
  2. -
  3. Find the option that says Unknown Sources or Install Unknown Apps and toggle it on.
  4. -
  5. Confirm your choice by tapping OK or Allow.
  6. -
-

Download link and installation process

-

Once you have met the requirements and enabled the permissions, you can proceed to download and install Clash Royale 3.3024.2 APK by following these steps:

-
    -
  1. Click on this link to download Clash Royale 3.3024.2 APK file on your device.
  2. -
  3. Wait for the download to finish and then locate the file on your device storage or download folder.
  4. -
  5. Tap on the file and then tap on Install to start the installation process.
  6. -
  7. Wait for the installation to finish and then tap on Open to launch the game.
  8. -
-

Troubleshooting tips and FAQs

-

If you encounter any problems or errors while downloading or installing Clash Royale 3.3024.2 APK, here are some troubleshooting tips and FAQs that might help you:

- -

Why should you play Clash Royale 3.3024.2 APK?

-

Now that you know how to download and install Clash Royale 3.3024.2 APK, you might be wondering why you should play it. Well, there are many reasons why you should play the latest version of Clash Royale, and here are some of them:

-

Enjoy the new features and content

-

One of the main reasons why you should play Clash Royale 3.3024.2 APK is to enjoy the new features and content that it offers. You can try out the new cards, such as the Firecracker and the Royal Delivery, and see how they fit in your deck and strategy. You can also play the new game modes and events, such as the Firecracker Rush and the Royal Delivery Challenge, and have fun with different rules and objectives. You can also explore the new seasonal events, such as the Summer of 2v2, and team up with your friends or random partners for some epic battles.

-

Compete with other players online

-

Another reason why you should play Clash Royale 3.3024.2 APK is to compete with other players online and test your skills and knowledge. You can join or create clans, where you can chat with other players, share cards, request donations, and participate in clan wars. You can also enter tournaments, where you can play against players from around the world and win prizes. You can also climb the ladder, where you can rank up and earn trophies and rewards.

-

Have fun and challenge yourself

-

The last reason why you should play Clash Royale 3.3024.2 APK is to have fun and challenge yourself. Clash Royale is a game that is easy to learn but hard to master. It requires strategic thinking, quick reflexes, and constant adaptation to changing situations. It is also a game that is constantly updated with new content, such as cards, modes, events, rewards, and more. You will never get bored or run out of things to do in Clash Royale.

-

Conclusion

-

In conclusion, Clash Royale 3.3024.2 APK is the latest version of the game that brings a lot of new features, improvements, and bug fixes that will make your gaming experience even better. You can download and install it on your Android device by following our guide above. You can also enjoy the new cards, game modes, events, rewards, and more that it offers. You can also compete with other players online, join or create clans, enter tournaments, climb the ladder, and have fun and challenge yourself.

-

So what are you waiting for? Download Clash Royale 3.3024.2 APK now and join the millions of players who are already playing this amazing game!

-

FAQs

-

Here are some frequently asked questions about Clash Royale 3.3024.2 APK:

-

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/1yukikaze/img-to-music/style.css b/spaces/1yukikaze/img-to-music/style.css deleted file mode 100644 index 8f7397fe7f0971636015170df075cd2d070344ec..0000000000000000000000000000000000000000 --- a/spaces/1yukikaze/img-to-music/style.css +++ /dev/null @@ -1,51 +0,0 @@ -#col-container {max-width: 510px; margin-left: auto; margin-right: auto;} -a {text-decoration-line: underline; font-weight: 600;} -div#music-output .h-full { - min-height: 5rem; -} -.footer { - margin-bottom: 45px; - margin-top: 10px; - text-align: center; - border-bottom: 1px solid #e5e5e5; - } - .footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; - } - .dark .footer { - border-color: #303030; - } - .dark .footer>p { - background: #0b0f19; - } -.animate-spin { - animation: spin 1s linear infinite; -} -@keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } -} -#share-btn-container { - display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; -} -#share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important;right:0; -} -#share-btn * { - all: unset; -} -#share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; -} -#share-btn-container .wrap { - display: none !important; -} \ No newline at end of file diff --git a/spaces/801artistry/RVC801/Makefile b/spaces/801artistry/RVC801/Makefile deleted file mode 100644 index 44de020e6feb7fcd58016d7c3c736681f533b597..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/Makefile +++ /dev/null @@ -1,63 +0,0 @@ -.PHONY: -.ONESHELL: - -help: ## Show this help and exit - @grep -hE '^[A-Za-z0-9_ \-]*?:.*##.*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' - -install: ## Install dependencies (Do everytime you start up a paperspace machine) - apt-get -y install build-essential python3-dev ffmpeg - pip install --upgrade setuptools wheel - pip install --upgrade pip - pip install faiss-gpu fairseq gradio ffmpeg ffmpeg-python praat-parselmouth pyworld numpy==1.23.5 numba==0.56.4 librosa==0.9.1 - pip install -r requirements.txt - pip install --upgrade lxml - apt-get update - apt -y install -qq aria2 - -basev1: ## Download version 1 pre-trained models (Do only once after cloning the fork) - mkdir -p pretrained uvr5_weights - git pull - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D32k.pth -d pretrained -o D32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D40k.pth -d pretrained -o D40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D48k.pth -d pretrained -o D48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G32k.pth -d pretrained -o G32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G40k.pth -d 
pretrained -o G40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G48k.pth -d pretrained -o G48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D32k.pth -d pretrained -o f0D32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D40k.pth -d pretrained -o f0D40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D48k.pth -d pretrained -o f0D48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G32k.pth -d pretrained -o f0G32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G40k.pth -d pretrained -o f0G40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G48k.pth -d pretrained -o f0G48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP2-人声vocals+非人声instrumentals.pth -d uvr5_weights -o HP2-人声vocals+非人声instrumentals.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP5-主旋律人声vocals+其他instrumentals.pth -d uvr5_weights -o HP5-主旋律人声vocals+其他instrumentals.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt -d ./ -o hubert_base.pt - -basev2: ## Download version 2 pre-trained models (Do only once after cloning the fork) - mkdir -p pretrained_v2 uvr5_weights - git pull - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D32k.pth -d pretrained_v2 -o D32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D40k.pth -d pretrained_v2 -o D40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D48k.pth -d pretrained_v2 -o D48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G32k.pth -d pretrained_v2 -o G32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G40k.pth -d pretrained_v2 -o G40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G48k.pth -d pretrained_v2 -o G48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D32k.pth -d pretrained_v2 -o f0D32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D40k.pth -d pretrained_v2 -o f0D40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D48k.pth -d pretrained_v2 -o f0D48k.pth - aria2c --console-log-level=error -c -x 16 
-s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G32k.pth -d pretrained_v2 -o f0G32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G40k.pth -d pretrained_v2 -o f0G40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G48k.pth -d pretrained_v2 -o f0G48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP2-人声vocals+非人声instrumentals.pth -d uvr5_weights -o HP2-人声vocals+非人声instrumentals.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP5-主旋律人声vocals+其他instrumentals.pth -d uvr5_weights -o HP5-主旋律人声vocals+其他instrumentals.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt -d ./ -o hubert_base.pt - -run-ui: ## Run the python GUI - python infer-web.py --paperspace --pycmd python - -run-cli: ## Run the python CLI - python infer-web.py --pycmd python --is_cli - -tensorboard: ## Start the tensorboard (Run on separate terminal) - echo https://tensorboard-$$(hostname).clg07azjl.paperspacegradient.com - tensorboard --logdir logs --bind_all \ No newline at end of file diff --git a/spaces/A666sxr/Genshin_TTS/stft.py b/spaces/A666sxr/Genshin_TTS/stft.py deleted file mode 100644 index ef754544a88a1a4ff2e39760000d707c6b160b4b..0000000000000000000000000000000000000000 --- a/spaces/A666sxr/Genshin_TTS/stft.py +++ /dev/null @@ -1,209 +0,0 @@ -""" -BSD 3-Clause License -Copyright (c) 2017, Prem Seetharaman -All rights reserved. -* Redistribution and use in source and binary forms, with or without - modification, are permitted provided that the following conditions are met: -* Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. -* Redistributions in binary form must reproduce the above copyright notice, this - list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. -* Neither the name of the copyright holder nor the names of its - contributors may be used to endorse or promote products derived from this - software without specific prior written permission. -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND -ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED -WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR -ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES -(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON -ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS -SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-""" - -import torch -import numpy as np -import torch.nn.functional as F -from torch.autograd import Variable -from scipy.signal import get_window -from librosa.util import pad_center, tiny -import librosa.util as librosa_util - -def window_sumsquare(window, n_frames, hop_length=200, win_length=800, - n_fft=800, dtype=np.float32, norm=None): - """ - # from librosa 0.6 - Compute the sum-square envelope of a window function at a given hop length. - This is used to estimate modulation effects induced by windowing - observations in short-time fourier transforms. - Parameters - ---------- - window : string, tuple, number, callable, or list-like - Window specification, as in `get_window` - n_frames : int > 0 - The number of analysis frames - hop_length : int > 0 - The number of samples to advance between frames - win_length : [optional] - The length of the window function. By default, this matches `n_fft`. - n_fft : int > 0 - The length of each analysis frame. - dtype : np.dtype - The data type of the output - Returns - ------- - wss : np.ndarray, shape=`(n_fft + hop_length * (n_frames - 1))` - The sum-squared envelope of the window function - """ - if win_length is None: - win_length = n_fft - - n = n_fft + hop_length * (n_frames - 1) - x = np.zeros(n, dtype=dtype) - - # Compute the squared window at the desired length - win_sq = get_window(window, win_length, fftbins=True) - win_sq = librosa_util.normalize(win_sq, norm=norm)**2 - win_sq = librosa_util.pad_center(win_sq, n_fft) - - # Fill the envelope - for i in range(n_frames): - sample = i * hop_length - x[sample:min(n, sample + n_fft)] += win_sq[:max(0, min(n_fft, n - sample))] - return x - - -class STFT(torch.nn.Module): - """adapted from Prem Seetharaman's https://github.com/pseeth/pytorch-stft""" - def __init__(self, filter_length=800, hop_length=200, win_length=800, - window='hann'): - super(STFT, self).__init__() - self.filter_length = filter_length - self.hop_length = hop_length - self.win_length = win_length - self.window = window - self.forward_transform = None - scale = self.filter_length / self.hop_length - fourier_basis = np.fft.fft(np.eye(self.filter_length)) - - cutoff = int((self.filter_length / 2 + 1)) - fourier_basis = np.vstack([np.real(fourier_basis[:cutoff, :]), - np.imag(fourier_basis[:cutoff, :])]) - - forward_basis = torch.FloatTensor(fourier_basis[:, None, :]) - inverse_basis = torch.FloatTensor( - np.linalg.pinv(scale * fourier_basis).T[:, None, :]) - - if window is not None: - assert(filter_length >= win_length) - # get window and zero center pad it to filter_length - fft_window = get_window(window, win_length, fftbins=True) - fft_window = pad_center(fft_window, filter_length) - fft_window = torch.from_numpy(fft_window).float() - - # window the bases - forward_basis *= fft_window - inverse_basis *= fft_window - - self.register_buffer('forward_basis', forward_basis.float()) - self.register_buffer('inverse_basis', inverse_basis.float()) - - def transform(self, input_data): - num_batches = input_data.size(0) - num_samples = input_data.size(1) - - self.num_samples = num_samples - - # similar to librosa, reflect-pad the input - input_data = input_data.view(num_batches, 1, num_samples) - input_data = F.pad( - input_data.unsqueeze(1), - (int(self.filter_length / 2), int(self.filter_length / 2), 0, 0), - mode='reflect') - input_data = input_data.squeeze(1) - - forward_transform = F.conv1d( - input_data, - Variable(self.forward_basis, requires_grad=False), - stride=self.hop_length, - padding=0) - - cutoff = 
int((self.filter_length / 2) + 1) - real_part = forward_transform[:, :cutoff, :] - imag_part = forward_transform[:, cutoff:, :] - - magnitude = torch.sqrt(real_part**2 + imag_part**2) - phase = torch.autograd.Variable( - torch.atan2(imag_part.data, real_part.data)) - - return magnitude, phase - - def inverse(self, magnitude, phase): - recombine_magnitude_phase = torch.cat( - [magnitude*torch.cos(phase), magnitude*torch.sin(phase)], dim=1) - - inverse_transform = F.conv_transpose1d( - recombine_magnitude_phase, - Variable(self.inverse_basis, requires_grad=False), - stride=self.hop_length, - padding=0) - - if self.window is not None: - window_sum = window_sumsquare( - self.window, magnitude.size(-1), hop_length=self.hop_length, - win_length=self.win_length, n_fft=self.filter_length, - dtype=np.float32) - # remove modulation effects - approx_nonzero_indices = torch.from_numpy( - np.where(window_sum > tiny(window_sum))[0]) - window_sum = torch.autograd.Variable( - torch.from_numpy(window_sum), requires_grad=False) - window_sum = window_sum.to(inverse_transform.device()) if magnitude.is_cuda else window_sum - inverse_transform[:, :, approx_nonzero_indices] /= window_sum[approx_nonzero_indices] - - # scale by hop ratio - inverse_transform *= float(self.filter_length) / self.hop_length - - inverse_transform = inverse_transform[:, :, int(self.filter_length/2):] - inverse_transform = inverse_transform[:, :, :-int(self.filter_length/2):] - - return inverse_transform - - def forward(self, input_data): - self.magnitude, self.phase = self.transform(input_data) - reconstruction = self.inverse(self.magnitude, self.phase) - return reconstruction - - -class TorchSTFT(torch.nn.Module): - def __init__(self, filter_length=800, hop_length=200, win_length=800, window='hann'): - super().__init__() - self.filter_length = filter_length - self.hop_length = hop_length - self.win_length = win_length - self.window = torch.from_numpy(get_window(window, win_length, fftbins=True).astype(np.float32)) - - def transform(self, input_data): - forward_transform = torch.stft( - input_data, - self.filter_length, self.hop_length, self.win_length, window=self.window, - return_complex=True) - - return torch.abs(forward_transform), torch.angle(forward_transform) - - def inverse(self, magnitude, phase): - inverse_transform = torch.istft( - magnitude * torch.exp(phase * 1j), - self.filter_length, self.hop_length, self.win_length, window=self.window.to(magnitude.device)) - - return inverse_transform.unsqueeze(-2) # unsqueeze to stay consistent with conv_transpose1d implementation - - def forward(self, input_data): - self.magnitude, self.phase = self.transform(input_data) - reconstruction = self.inverse(self.magnitude, self.phase) - return reconstruction - - diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/loss.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/loss.py deleted file mode 100644 index cc66298a14997da4aa2efc71e37c0a6bcda53fd1..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/loss.py +++ /dev/null @@ -1,398 +0,0 @@ -from multiprocessing.sharedctypes import Value -import torch -import torch.distributed.nn -from torch import distributed as dist, nn as nn -from torch.nn import functional as F -import numpy as np -from sklearn.metrics import average_precision_score, roc_auc_score, accuracy_score - -try: - import horovod.torch as hvd -except ImportError: - hvd = None - - -def gather_features( - 
audio_features, - text_features, - audio_features_mlp=None, - text_features_mlp=None, - local_loss=False, - gather_with_grad=False, - rank=0, - world_size=1, - use_horovod=False, - mlp_loss=False, -): - if use_horovod: - assert hvd is not None, "Please install horovod" - if gather_with_grad: - all_audio_features = hvd.allgather(audio_features) - all_text_features = hvd.allgather(text_features) - if mlp_loss: - all_audio_features_mlp = hvd.allgather(audio_features_mlp) - all_text_features_mlp = hvd.allgather(text_features_mlp) - else: - with torch.no_grad(): - all_audio_features = hvd.allgather(audio_features) - all_text_features = hvd.allgather(text_features) - if mlp_loss: - all_audio_features_mlp = hvd.allgather(audio_features_mlp) - all_text_features_mlp = hvd.allgather(text_features_mlp) - if not local_loss: - # ensure grads for local rank when all_* features don't have a gradient - gathered_audio_features = list( - all_audio_features.chunk(world_size, dim=0) - ) - gathered_text_features = list( - all_text_features.chunk(world_size, dim=0) - ) - gathered_audio_features[rank] = audio_features - gathered_text_features[rank] = text_features - all_audio_features = torch.cat(gathered_audio_features, dim=0) - all_text_features = torch.cat(gathered_text_features, dim=0) - if mlp_loss: - gathered_audio_features_mlp = list( - all_audio_features_mlp.chunk(world_size, dim=0) - ) - gathered_text_features_mlp = list( - all_text_features_mlp.chunk(world_size, dim=0) - ) - gathered_audio_features_mlp[rank] = audio_features_mlp - gathered_text_features_mlp[rank] = text_features_mlp - all_audio_features_mlp = torch.cat( - gathered_audio_features_mlp, dim=0 - ) - all_text_features_mlp = torch.cat(gathered_text_features_mlp, dim=0) - else: - # We gather tensors from all gpus - if gather_with_grad: - all_audio_features = torch.cat( - torch.distributed.nn.all_gather(audio_features), dim=0 - ) - all_text_features = torch.cat( - torch.distributed.nn.all_gather(text_features), dim=0 - ) - if mlp_loss: - all_audio_features_mlp = torch.cat( - torch.distributed.nn.all_gather(audio_features_mlp), dim=0 - ) - all_text_features_mlp = torch.cat( - torch.distributed.nn.all_gather(text_features_mlp), dim=0 - ) - else: - gathered_audio_features = [ - torch.zeros_like(audio_features) for _ in range(world_size) - ] - gathered_text_features = [ - torch.zeros_like(text_features) for _ in range(world_size) - ] - dist.all_gather(gathered_audio_features, audio_features) - dist.all_gather(gathered_text_features, text_features) - if mlp_loss: - gathered_audio_features_mlp = [ - torch.zeros_like(audio_features_mlp) for _ in range(world_size) - ] - gathered_text_features_mlp = [ - torch.zeros_like(text_features_mlp) for _ in range(world_size) - ] - dist.all_gather(gathered_audio_features_mlp, audio_features_mlp) - dist.all_gather(gathered_text_features_mlp, text_features_mlp) - if not local_loss: - # ensure grads for local rank when all_* features don't have a gradient - gathered_audio_features[rank] = audio_features - gathered_text_features[rank] = text_features - if mlp_loss: - gathered_audio_features_mlp[rank] = audio_features_mlp - gathered_text_features_mlp[rank] = text_features_mlp - - all_audio_features = torch.cat(gathered_audio_features, dim=0) - all_text_features = torch.cat(gathered_text_features, dim=0) - if mlp_loss: - all_audio_features_mlp = torch.cat(gathered_audio_features_mlp, dim=0) - all_text_features_mlp = torch.cat(gathered_text_features_mlp, dim=0) - if mlp_loss: - return ( - all_audio_features, - 
all_text_features, - all_audio_features_mlp, - all_text_features_mlp, - ) - else: - return all_audio_features, all_text_features - - -class ClipLoss(nn.Module): - def __init__( - self, - local_loss=False, - gather_with_grad=False, - cache_labels=False, - rank=0, - world_size=1, - use_horovod=False, - mlp_loss=False, - weight_loss_kappa=0, - ): - super().__init__() - self.local_loss = local_loss - self.gather_with_grad = gather_with_grad - self.cache_labels = cache_labels - self.rank = rank - self.world_size = world_size - self.use_horovod = use_horovod - self.mlp_loss = mlp_loss - self.weighted_loss = bool(weight_loss_kappa != 0) - self.weight_loss_kappa = weight_loss_kappa - # cache state - self.prev_num_logits = 0 - self.labels = {} - - def forward( - self, - audio_features, - text_features, - logit_scale_a, - logit_scale_t=None, - audio_features_mlp=None, - text_features_mlp=None, - ): - device = audio_features.device - if self.mlp_loss: - if self.world_size > 1: - ( - all_audio_features, - all_text_features, - all_audio_features_mlp, - all_text_features_mlp, - ) = gather_features( - audio_features=audio_features, - text_features=text_features, - audio_features_mlp=audio_features_mlp, - text_features_mlp=text_features_mlp, - local_loss=self.local_loss, - gather_with_grad=self.gather_with_grad, - rank=self.rank, - world_size=self.world_size, - use_horovod=self.use_horovod, - mlp_loss=self.mlp_loss, - ) - if self.local_loss: - a_logits_per_audio = ( - logit_scale_a * audio_features @ all_text_features_mlp.T - ) - a_logits_per_text = ( - logit_scale_a * text_features_mlp @ all_audio_features.T - ) - t_logits_per_audio = ( - logit_scale_t * audio_features_mlp @ all_text_features.T - ) - t_logits_per_text = ( - logit_scale_t * text_features @ all_audio_features_mlp.T - ) - else: - a_logits_per_audio = ( - logit_scale_a * all_audio_features @ all_text_features_mlp.T - ) - a_logits_per_text = a_logits_per_audio.T - t_logits_per_audio = ( - logit_scale_t * all_audio_features_mlp @ all_text_features.T - ) - t_logits_per_text = t_logits_per_audio.T - else: - a_logits_per_audio = ( - logit_scale_a * audio_features @ text_features_mlp.T - ) - a_logits_per_text = logit_scale_a * text_features_mlp @ audio_features.T - t_logits_per_audio = ( - logit_scale_t * audio_features_mlp @ text_features.T - ) - t_logits_per_text = logit_scale_t * text_features @ audio_features_mlp.T - - # calculated ground-truth and cache if enabled - num_logits = a_logits_per_audio.shape[0] - if self.prev_num_logits != num_logits or device not in self.labels: - labels = torch.arange(num_logits, device=device, dtype=torch.long) - if self.world_size > 1 and self.local_loss: - labels = labels + num_logits * self.rank - if self.cache_labels: - self.labels[device] = labels - self.prev_num_logits = num_logits - else: - labels = self.labels[device] - - if not self.weighted_loss: - total_loss = ( - F.cross_entropy(a_logits_per_audio, labels) - + F.cross_entropy(a_logits_per_text, labels) - + F.cross_entropy(t_logits_per_audio, labels) - + F.cross_entropy(t_logits_per_text, labels) - ) / 4 - else: - audio_weight = (audio_features @ audio_features.T).detach() - audio_weight = ( - torch.exp( - torch.sum(audio_weight, axis=1) - / (self.weight_loss_kappa * len(audio_weight)) - ) - ).detach() - text_weight = (text_features @ text_features.T).detach() - text_weight = ( - torch.exp( - torch.sum(text_weight, axis=1) - / (self.weight_loss_kappa * len(text_features)) - ) - ).detach() - total_loss = ( - F.cross_entropy(a_logits_per_audio, labels, 
weight=audio_weight) - + F.cross_entropy(a_logits_per_text, labels, weight=audio_weight) - + F.cross_entropy(t_logits_per_audio, labels, weight=text_weight) - + F.cross_entropy(t_logits_per_text, labels, weight=text_weight) - ) / 4 - else: - if self.world_size > 1: - all_audio_features, all_text_features = gather_features( - audio_features=audio_features, - text_features=text_features, - local_loss=self.local_loss, - gather_with_grad=self.gather_with_grad, - rank=self.rank, - world_size=self.world_size, - use_horovod=self.use_horovod, - mlp_loss=self.mlp_loss, - ) - - if self.local_loss: - logits_per_audio = ( - logit_scale_a * audio_features @ all_text_features.T - ) - logits_per_text = ( - logit_scale_a * text_features @ all_audio_features.T - ) - else: - logits_per_audio = ( - logit_scale_a * all_audio_features @ all_text_features.T - ) - logits_per_text = logits_per_audio.T - else: - logits_per_audio = logit_scale_a * audio_features @ text_features.T - logits_per_text = logit_scale_a * text_features @ audio_features.T - - # calculated ground-truth and cache if enabled - num_logits = logits_per_audio.shape[0] - if self.prev_num_logits != num_logits or device not in self.labels: - labels = torch.arange(num_logits, device=device, dtype=torch.long) - if self.world_size > 1 and self.local_loss: - labels = labels + num_logits * self.rank - if self.cache_labels: - self.labels[device] = labels - self.prev_num_logits = num_logits - else: - labels = self.labels[device] - if not self.weighted_loss: - total_loss = ( - F.cross_entropy(logits_per_audio, labels) - + F.cross_entropy(logits_per_text, labels) - ) / 2 - else: - audio_weight = (all_audio_features @ all_audio_features.T).detach() - audio_weight = ( - torch.exp( - torch.sum(audio_weight, axis=1) - / (self.weight_loss_kappa * len(all_audio_features)) - ) - ).detach() - text_weight = (all_text_features @ all_text_features.T).detach() - text_weight = ( - torch.exp( - torch.sum(text_weight, axis=1) - / (self.weight_loss_kappa * len(all_text_features)) - ) - ).detach() - total_loss = ( - F.cross_entropy(logits_per_audio, labels, weight=text_weight) - + F.cross_entropy(logits_per_text, labels, weight=audio_weight) - ) / 2 - return total_loss - - -def lp_gather_features(pred, target, world_size=1, use_horovod=False): - if use_horovod: - assert hvd is not None, "Please install horovod" - with torch.no_grad(): - all_preds = hvd.allgather(pred) - all_targets = hvd.allgath(target) - else: - gathered_preds = [torch.zeros_like(pred) for _ in range(world_size)] - gathered_targets = [torch.zeros_like(target) for _ in range(world_size)] - - dist.all_gather(gathered_preds, pred) - dist.all_gather(gathered_targets, target) - all_preds = torch.cat(gathered_preds, dim=0) - all_targets = torch.cat(gathered_targets, dim=0) - - return all_preds, all_targets - - -def get_map(pred, target): - pred = torch.sigmoid(pred).numpy() - target = target.numpy() - return np.mean(average_precision_score(target, pred, average=None)) - - -def get_acc(pred, target): - pred = torch.argmax(pred, 1).numpy() - target = torch.argmax(target, 1).numpy() - return accuracy_score(target, pred) - - -def get_mauc(pred, target): - pred = torch.sigmoid(pred).numpy() - target = target.numpy() - return np.mean(roc_auc_score(target, pred, average=None)) - - -class LPMetrics(object): - def __init__(self, metric_names=["map", "acc", "mauc"]): - self.metrics = [] - for name in metric_names: - self.metrics.append(self.get_metric(name)) - self.metric_names = metric_names - - def get_metric(self, 
name): - if name == "map": - return get_map - elif name == "acc": - return get_acc - elif name == "mauc": - return get_mauc - else: - raise ValueError(f"the metric should be at least one of [map, acc, mauc]") - - def evaluate_mertics(self, pred, target): - metric_dict = {} - for i in range(len(self.metric_names)): - metric_dict[self.metric_names[i]] = self.metrics[i](pred, target) - return metric_dict - - -def calc_celoss(pred, target): - target = torch.argmax(target, 1).long() - return nn.CrossEntropyLoss()(pred, target) - - -class LPLoss(nn.Module): - def __init__(self, loss_name): - super().__init__() - if loss_name == "bce": - self.loss_func = nn.BCEWithLogitsLoss() - elif loss_name == "ce": - self.loss_func = calc_celoss - elif loss_name == "mse": - self.loss_func = nn.MSELoss() - else: - raise ValueError(f"the loss func should be at least one of [bce, ce, mse]") - - def forward(self, pred, target): - loss = self.loss_func(pred, target) - return loss diff --git a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/openai.py b/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/openai.py deleted file mode 100644 index 3f4eb8b55fe960e1792b3da804b60b3d8f70fe26..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/openai.py +++ /dev/null @@ -1,156 +0,0 @@ -""" OpenAI pretrained model functions - -Adapted from https://github.com/openai/CLIP. Originally MIT License, Copyright (c) 2021 OpenAI. -""" - -import os -import warnings -from typing import Union, List - -import torch - -from .model import build_model_from_openai_state_dict -from .pretrained import ( - get_pretrained_url, - list_pretrained_tag_models, - download_pretrained, -) - -__all__ = ["list_openai_models", "load_openai_model"] - - -def list_openai_models() -> List[str]: - """Returns the names of available CLIP models""" - return list_pretrained_tag_models("openai") - - -def load_openai_model( - name: str, - model_cfg, - device: Union[str, torch.device] = "cuda" if torch.cuda.is_available() else "cpu", - jit=True, - cache_dir=os.path.expanduser("~/.cache/clip"), - enable_fusion: bool = False, - fusion_type: str = "None", -): - """Load a CLIP model, preserve its text pretrained part, and set in the CLAP model - - Parameters - ---------- - name : str - A model name listed by `clip.available_models()`, or the path to a model checkpoint containing the state_dict - device : Union[str, torch.device] - The device to put the loaded model - jit : bool - Whether to load the optimized JIT model (default) or more hackable non-JIT model. - - Returns - ------- - model : torch.nn.Module - The CLAP model - preprocess : Callable[[PIL.Image], torch.Tensor] - A torchvision transform that converts a PIL image into a tensor that the returned model can take as its input - """ - if get_pretrained_url(name, "openai"): - model_path = download_pretrained( - get_pretrained_url(name, "openai"), root=cache_dir - ) - elif os.path.isfile(name): - model_path = name - else: - raise RuntimeError( - f"Model {name} not found; available models = {list_openai_models()}" - ) - - try: - # loading JIT archive - model = torch.jit.load(model_path, map_location=device if jit else "cpu").eval() - state_dict = None - except RuntimeError: - # loading saved state dict - if jit: - warnings.warn( - f"File {model_path} is not a JIT archive. 
Loading as a state dict instead" - ) - jit = False - state_dict = torch.load(model_path, map_location="cpu") - - if not jit: - try: - model = build_model_from_openai_state_dict( - state_dict or model.state_dict(), model_cfg, enable_fusion, fusion_type - ).to(device) - except KeyError: - sd = {k[7:]: v for k, v in state_dict["state_dict"].items()} - model = build_model_from_openai_state_dict( - sd, model_cfg, enable_fusion, fusion_type - ).to(device) - - if str(device) == "cpu": - model.float() - return model - - # patch the device names - device_holder = torch.jit.trace( - lambda: torch.ones([]).to(torch.device(device)), example_inputs=[] - ) - device_node = [ - n - for n in device_holder.graph.findAllNodes("prim::Constant") - if "Device" in repr(n) - ][-1] - - def patch_device(module): - try: - graphs = [module.graph] if hasattr(module, "graph") else [] - except RuntimeError: - graphs = [] - - if hasattr(module, "forward1"): - graphs.append(module.forward1.graph) - - for graph in graphs: - for node in graph.findAllNodes("prim::Constant"): - if "value" in node.attributeNames() and str(node["value"]).startswith( - "cuda" - ): - node.copyAttributes(device_node) - - model.apply(patch_device) - patch_device(model.encode_audio) - patch_device(model.encode_text) - - # patch dtype to float32 on CPU - if str(device) == "cpu": - float_holder = torch.jit.trace( - lambda: torch.ones([]).float(), example_inputs=[] - ) - float_input = list(float_holder.graph.findNode("aten::to").inputs())[1] - float_node = float_input.node() - - def patch_float(module): - try: - graphs = [module.graph] if hasattr(module, "graph") else [] - except RuntimeError: - graphs = [] - - if hasattr(module, "forward1"): - graphs.append(module.forward1.graph) - - for graph in graphs: - for node in graph.findAllNodes("aten::to"): - inputs = list(node.inputs()) - for i in [ - 1, - 2, - ]: # dtype can be the second or third argument to aten::to() - if inputs[i].node()["value"] == 5: - inputs[i].node().copyAttributes(float_node) - - model.apply(patch_float) - patch_float(model.encode_audio) - patch_float(model.encode_text) - model.float() - - model.audio_branch.audio_length = model.audio_cfg.audio_length - return model diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/os_utils.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/os_utils.py deleted file mode 100644 index 4567d17c398c535884600cdd86a36a823acb886f..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/os_utils.py +++ /dev/null @@ -1,20 +0,0 @@ -import os -import subprocess - - -def link_file(from_file, to_file): - subprocess.check_call( - f'ln -s "`realpath --relative-to="{os.path.dirname(to_file)}" "{from_file}"`" "{to_file}"', shell=True) - - -def move_file(from_file, to_file): - subprocess.check_call(f'mv "{from_file}" "{to_file}"', shell=True) - - -def copy_file(from_file, to_file): - subprocess.check_call(f'cp -r "{from_file}" "{to_file}"', shell=True) - - -def remove_file(*fns): - for f in fns: - subprocess.check_call(f'rm -rf "{f}"', shell=True) diff --git a/spaces/ASJMO/freegpt/client/css/options.css b/spaces/ASJMO/freegpt/client/css/options.css deleted file mode 100644 index fb015a54e0a7f7ac521517357d812c994621592e..0000000000000000000000000000000000000000 --- a/spaces/ASJMO/freegpt/client/css/options.css +++ /dev/null @@ -1,10 +0,0 @@ -.options-container { - display: flex; - flex-wrap: wrap; -} - -@media screen and (max-width: 990px) { - .options-container { - justify-content: space-between; - } -} diff --git 
a/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/models/builders.py b/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/models/builders.py deleted file mode 100644 index 77ee5f96fea2e3c9e475fe961bc1a5ee473ed8eb..0000000000000000000000000000000000000000 --- a/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/models/builders.py +++ /dev/null @@ -1,218 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -All the functions to build the relevant models and modules -from the Hydra config. -""" - -import typing as tp -import warnings - -import audiocraft -import omegaconf -import torch - -from .encodec import CompressionModel, EncodecModel, FlattenedCompressionModel # noqa -from .lm import LMModel -from ..modules.codebooks_patterns import ( - CodebooksPatternProvider, - DelayedPatternProvider, - ParallelPatternProvider, - UnrolledPatternProvider, - VALLEPattern, - MusicLMPattern, -) -from ..modules.conditioners import ( - BaseConditioner, - ConditioningProvider, - LUTConditioner, - T5Conditioner, - ConditionFuser, - ChromaStemConditioner, -) -from .. import quantization as qt -from ..utils.utils import dict_from_config - - -def get_quantizer(quantizer: str, cfg: omegaconf.DictConfig, dimension: int) -> qt.BaseQuantizer: - klass = { - 'no_quant': qt.DummyQuantizer, - 'rvq': qt.ResidualVectorQuantizer - }[quantizer] - kwargs = dict_from_config(getattr(cfg, quantizer)) - if quantizer != 'no_quant': - kwargs['dimension'] = dimension - return klass(**kwargs) - - -def get_encodec_autoencoder(encoder_name: str, cfg: omegaconf.DictConfig): - if encoder_name == 'seanet': - kwargs = dict_from_config(getattr(cfg, 'seanet')) - encoder_override_kwargs = kwargs.pop('encoder') - decoder_override_kwargs = kwargs.pop('decoder') - encoder_kwargs = {**kwargs, **encoder_override_kwargs} - decoder_kwargs = {**kwargs, **decoder_override_kwargs} - encoder = audiocraft.modules.SEANetEncoder(**encoder_kwargs) - decoder = audiocraft.modules.SEANetDecoder(**decoder_kwargs) - return encoder, decoder - else: - raise KeyError(f'Unexpected compression model {cfg.compression_model}') - - -def get_compression_model(cfg: omegaconf.DictConfig) -> CompressionModel: - """Instantiate a compression model. - """ - if cfg.compression_model == 'encodec': - kwargs = dict_from_config(getattr(cfg, 'encodec')) - encoder_name = kwargs.pop('autoencoder') - quantizer_name = kwargs.pop('quantizer') - encoder, decoder = get_encodec_autoencoder(encoder_name, cfg) - quantizer = get_quantizer(quantizer_name, cfg, encoder.dimension) - frame_rate = kwargs['sample_rate'] // encoder.hop_length - renormalize = kwargs.pop('renormalize', None) - renorm = kwargs.pop('renorm') - if renormalize is None: - renormalize = renorm is not None - warnings.warn("You are using a deprecated EnCodec model. Please migrate to new renormalization.") - return EncodecModel(encoder, decoder, quantizer, - frame_rate=frame_rate, renormalize=renormalize, **kwargs).to(cfg.device) - else: - raise KeyError(f'Unexpected compression model {cfg.compression_model}') - - -def get_lm_model(cfg: omegaconf.DictConfig) -> LMModel: - """Instantiate a transformer LM. 
- """ - if cfg.lm_model == 'transformer_lm': - kwargs = dict_from_config(getattr(cfg, 'transformer_lm')) - n_q = kwargs['n_q'] - q_modeling = kwargs.pop('q_modeling', None) - codebooks_pattern_cfg = getattr(cfg, 'codebooks_pattern') - attribute_dropout = dict_from_config(getattr(cfg, 'attribute_dropout')) - cls_free_guidance = dict_from_config(getattr(cfg, 'classifier_free_guidance')) - cfg_prob, cfg_coef = cls_free_guidance["training_dropout"], cls_free_guidance["inference_coef"] - fuser = get_condition_fuser(cfg) - condition_provider = get_conditioner_provider(kwargs["dim"], cfg).to(cfg.device) - if len(fuser.fuse2cond['cross']) > 0: # enforce cross-att programatically - kwargs['cross_attention'] = True - if codebooks_pattern_cfg.modeling is None: - assert q_modeling is not None, \ - 'LM model should either have a codebook pattern defined or transformer_lm.q_modeling' - codebooks_pattern_cfg = omegaconf.OmegaConf.create( - {'modeling': q_modeling, 'delay': {'delays': list(range(n_q))}} - ) - pattern_provider = get_codebooks_pattern_provider(n_q, codebooks_pattern_cfg) - return LMModel( - pattern_provider=pattern_provider, - condition_provider=condition_provider, - fuser=fuser, - cfg_dropout=cfg_prob, - cfg_coef=cfg_coef, - attribute_dropout=attribute_dropout, - dtype=getattr(torch, cfg.dtype), - device=cfg.device, - **kwargs - ).to(cfg.device) - else: - raise KeyError(f'Unexpected LM model {cfg.lm_model}') - - -def get_conditioner_provider(output_dim: int, cfg: omegaconf.DictConfig) -> ConditioningProvider: - """Instantiate a conditioning model. - """ - device = cfg.device - duration = cfg.dataset.segment_duration - cfg = getattr(cfg, "conditioners") - cfg = omegaconf.OmegaConf.create({}) if cfg is None else cfg - conditioners: tp.Dict[str, BaseConditioner] = {} - with omegaconf.open_dict(cfg): - condition_provider_args = cfg.pop('args', {}) - for cond, cond_cfg in cfg.items(): - model_type = cond_cfg["model"] - model_args = cond_cfg[model_type] - if model_type == "t5": - conditioners[str(cond)] = T5Conditioner(output_dim=output_dim, device=device, **model_args) - elif model_type == "lut": - conditioners[str(cond)] = LUTConditioner(output_dim=output_dim, **model_args) - elif model_type == "chroma_stem": - model_args.pop('cache_path', None) - conditioners[str(cond)] = ChromaStemConditioner( - output_dim=output_dim, - duration=duration, - device=device, - **model_args - ) - else: - raise ValueError(f"unrecognized conditioning model: {model_type}") - conditioner = ConditioningProvider(conditioners, device=device, **condition_provider_args) - return conditioner - - -def get_condition_fuser(cfg: omegaconf.DictConfig) -> ConditionFuser: - """Instantiate a condition fuser object. - """ - fuser_cfg = getattr(cfg, "fuser") - fuser_methods = ["sum", "cross", "prepend", "input_interpolate"] - fuse2cond = {k: fuser_cfg[k] for k in fuser_methods} - kwargs = {k: v for k, v in fuser_cfg.items() if k not in fuser_methods} - fuser = ConditionFuser(fuse2cond=fuse2cond, **kwargs) - return fuser - - -def get_codebooks_pattern_provider(n_q: int, cfg: omegaconf.DictConfig) -> CodebooksPatternProvider: - """Instantiate a codebooks pattern provider object. 
- """ - pattern_providers = { - 'parallel': ParallelPatternProvider, - 'delay': DelayedPatternProvider, - 'unroll': UnrolledPatternProvider, - 'valle': VALLEPattern, - 'musiclm': MusicLMPattern, - } - name = cfg.modeling - kwargs = dict_from_config(cfg.get(name)) if hasattr(cfg, name) else {} - klass = pattern_providers[name] - return klass(n_q, **kwargs) - - -def get_debug_compression_model(device='cpu'): - """Instantiate a debug compression model to be used for unit tests. - """ - seanet_kwargs = { - 'n_filters': 4, - 'n_residual_layers': 1, - 'dimension': 32, - 'ratios': [10, 8, 16] # 25 Hz at 32kHz - } - encoder = audiocraft.modules.SEANetEncoder(**seanet_kwargs) - decoder = audiocraft.modules.SEANetDecoder(**seanet_kwargs) - quantizer = qt.ResidualVectorQuantizer(dimension=32, bins=400, n_q=4) - init_x = torch.randn(8, 32, 128) - quantizer(init_x, 1) # initialize kmeans etc. - compression_model = EncodecModel( - encoder, decoder, quantizer, - frame_rate=25, sample_rate=32000, channels=1).to(device) - return compression_model.eval() - - -def get_debug_lm_model(device='cpu'): - """Instantiate a debug LM to be used for unit tests. - """ - pattern = DelayedPatternProvider(n_q=4) - dim = 16 - providers = { - 'description': LUTConditioner(n_bins=128, dim=dim, output_dim=dim, tokenizer="whitespace"), - } - condition_provider = ConditioningProvider(providers) - fuser = ConditionFuser( - {'cross': ['description'], 'prepend': [], - 'sum': [], 'input_interpolate': []}) - lm = LMModel( - pattern, condition_provider, fuser, - n_q=4, card=400, dim=dim, num_heads=4, custom=True, num_layers=2, - cross_attention=True, causal=True) - return lm.to(device).eval() diff --git a/spaces/Abdulkader/Abdulkader-T5-MedRepAnalyzer/app.py b/spaces/Abdulkader/Abdulkader-T5-MedRepAnalyzer/app.py deleted file mode 100644 index 7002bdb2bbf0aaa2efb06cc0ccf98da3f9ac05da..0000000000000000000000000000000000000000 --- a/spaces/Abdulkader/Abdulkader-T5-MedRepAnalyzer/app.py +++ /dev/null @@ -1,7 +0,0 @@ -import gradio as gr -import requests -import transformers -from transformers import pipeline -model="https://huggingface.co/Abdulkader/autotrain-medical-reports-summarizer-2484176581" -pipe = pipeline( model=model) -gr.Interface.load("pipe").launch() \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/utils/Yoyo.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/utils/Yoyo.js deleted file mode 100644 index 7d1430b877a480359f744be55c9236d527ab5dfd..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/utils/Yoyo.js +++ /dev/null @@ -1,2 +0,0 @@ -import Yoyo from '../../../plugins/utils/math/Yoyo.js' -export default Yoyo; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/clickoutside/ClickOutside.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/clickoutside/ClickOutside.d.ts deleted file mode 100644 index 576f9c7a5f5a5da09443c2fa211303a0c73fcb40..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/clickoutside/ClickOutside.d.ts +++ /dev/null @@ -1,2 +0,0 @@ -import ClickOutside from '../../../plugins/clickoutside'; -export default ClickOutside; \ No newline at end of file diff --git a/spaces/Alfaxad/BioGalacticModels/model_list.py b/spaces/Alfaxad/BioGalacticModels/model_list.py deleted file mode 100644 index 
886991b59a1049d2cfbd3ba35a54966480c4ee71..0000000000000000000000000000000000000000 --- a/spaces/Alfaxad/BioGalacticModels/model_list.py +++ /dev/null @@ -1,106 +0,0 @@ -from __future__ import annotations - -import numpy as np -import pandas as pd -import requests -from huggingface_hub.hf_api import SpaceInfo - -url = 'https://docs.google.com/spreadsheets/d/1XH7Jo3LXXfbSJ14z-QrSIQs21ArJMiV6_hMSAwY85PU/edit#gid=0' -csv_url = url.replace('/edit#gid=', '/export?format=csv&gid=') - -class ModelList: - def __init__(self): - self.table = pd.read_csv(csv_url) - self._preprocess_table() - - self.table_header = ''' - - Model Name - Type - Year - Paper - Code on Github - Weights on 🤗 - Other Weights - ''' - - def _preprocess_table(self) -> None: - self.table['name_lowercase'] = self.table.name.str.lower() - self.table['year'] = self.table['year'].apply(str) - - rows = [] - for row in self.table.itertuples(): - paper = f'Paper' if isinstance( - row.paper, str) else '' - github = f'GitHub' if isinstance( - row.github, str) else '' - hf_model = f'Hub Model' if isinstance( - row.hub, str) else '' - other_model = f'Other Weights' if isinstance( - row.other, str) else '' - data_type = f'{row.data_type}' if isinstance( - row.data_type, str) else '' - base_model = f'{row.base_model}' if isinstance( - row.base_model, str) else '' - year = f'{row.year}' if isinstance( - row.year, str) else '' - row = f''' - - {row.name} - {data_type} - {year} - {paper} - {github} - {hf_model} - {other_model} - ''' - rows.append(row) - self.table['html_table_content'] = rows - - def render(self, search_query: str, - case_sensitive: bool, - filter_names: list[str], - data_types: list[str], - years: list[str], - #model_types: list[str] - ) -> tuple[int, str]: - df = self.table - if search_query: - if case_sensitive: - df = df[df.name.str.contains(search_query)] - else: - df = df[df.name_lowercase.str.contains(search_query.lower())] - has_paper = 'Paper' in filter_names - has_github = 'Code' in filter_names - has_model = 'Model Weights' in filter_names - df = self.filter_table(df, has_paper, has_github, has_model, data_types, years) - #df = self.filter_table(df, has_paper, has_github, has_model, data_types, model_types) - return len(df), self.to_html(df, self.table_header) - - @staticmethod - def filter_table(df: pd.DataFrame, has_paper: bool, has_github: bool, - has_model: bool, - data_types: list[str], - years: list[str], - #model_types: list[str] - ) -> pd.DataFrame: - if has_paper: - df = df[~df.paper.isna()] - if has_github: - df = df[~df.github.isna()] - if has_model: - df = df[~df.hub.isna() | ~df.other.isna()] - df = df[df.data_type.isin(set(data_types))] - #df = df[df.base_model.isin(set(model_types))] - df = df[df.year.isin(set(years))] - return df - - @staticmethod - def to_html(df: pd.DataFrame, table_header: str) -> str: - table_data = ''.join(df.html_table_content) - html = f''' - - {table_header} - {table_data} -
''' - return html \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_dual_guided.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_dual_guided.py deleted file mode 100644 index ed61b37171f1d8b9025b392542db02ac1d5f4c31..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_dual_guided.py +++ /dev/null @@ -1,554 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -import warnings -from typing import Callable, List, Optional, Tuple, Union - -import numpy as np -import PIL -import torch -import torch.utils.checkpoint -from transformers import ( - CLIPImageProcessor, - CLIPTextModelWithProjection, - CLIPTokenizer, - CLIPVisionModelWithProjection, -) - -from ...image_processor import VaeImageProcessor -from ...models import AutoencoderKL, DualTransformer2DModel, Transformer2DModel, UNet2DConditionModel -from ...schedulers import KarrasDiffusionSchedulers -from ...utils import logging, randn_tensor -from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput -from .modeling_text_unet import UNetFlatConditionModel - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -class VersatileDiffusionDualGuidedPipeline(DiffusionPipeline): - r""" - Pipeline for image-text dual-guided generation using Versatile Diffusion. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods - implemented for all pipelines (downloading, saving, running on a particular device, etc.). - - Parameters: - vqvae ([`VQModel`]): - Vector-quantized (VQ) model to encode and decode images to and from latent representations. - bert ([`LDMBertModel`]): - Text-encoder model based on [`~transformers.BERT`]. - tokenizer ([`~transformers.BertTokenizer`]): - A `BertTokenizer` to tokenize text. - unet ([`UNet2DConditionModel`]): - A `UNet2DConditionModel` to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. 
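
Stepping back to the `ModelList` helper that ends a little above: it reads a published Google Sheet as CSV and does all of its filtering with pandas string and null checks. The same pattern in isolation, as a rough sketch; the sheet URL and the `name`/`paper` columns are taken from `model_list.py`, while the example query string is made up for illustration.

```python
# Rough sketch of the read-and-filter pattern used by ModelList; not a drop-in
# replacement for the class above.
import pandas as pd

url = "https://docs.google.com/spreadsheets/d/1XH7Jo3LXXfbSJ14z-QrSIQs21ArJMiV6_hMSAwY85PU/edit#gid=0"
csv_url = url.replace("/edit#gid=", "/export?format=csv&gid=")

df = pd.read_csv(csv_url)

# Case-insensitive substring search on the `name` column, like
# ModelList.render(search_query, case_sensitive=False, ...).
query = "protein"  # illustrative query, not from the source
hits = df[df["name"].str.lower().str.contains(query.lower(), na=False)]

# Keep only rows that link to a paper, like the `has_paper` filter.
hits = hits[~hits["paper"].isna()]

print(len(hits), "matching models")
```

Exporting the sheet with `/export?format=csv` is what lets `pd.read_csv` consume it directly, which is the design choice this Space relies on instead of going through the Sheets API.
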
- """ - tokenizer: CLIPTokenizer - image_feature_extractor: CLIPImageProcessor - text_encoder: CLIPTextModelWithProjection - image_encoder: CLIPVisionModelWithProjection - image_unet: UNet2DConditionModel - text_unet: UNetFlatConditionModel - vae: AutoencoderKL - scheduler: KarrasDiffusionSchedulers - - _optional_components = ["text_unet"] - - def __init__( - self, - tokenizer: CLIPTokenizer, - image_feature_extractor: CLIPImageProcessor, - text_encoder: CLIPTextModelWithProjection, - image_encoder: CLIPVisionModelWithProjection, - image_unet: UNet2DConditionModel, - text_unet: UNetFlatConditionModel, - vae: AutoencoderKL, - scheduler: KarrasDiffusionSchedulers, - ): - super().__init__() - self.register_modules( - tokenizer=tokenizer, - image_feature_extractor=image_feature_extractor, - text_encoder=text_encoder, - image_encoder=image_encoder, - image_unet=image_unet, - text_unet=text_unet, - vae=vae, - scheduler=scheduler, - ) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor) - - if self.text_unet is not None and ( - "dual_cross_attention" not in self.image_unet.config or not self.image_unet.config.dual_cross_attention - ): - # if loading from a universal checkpoint rather than a saved dual-guided pipeline - self._convert_to_dual_attention() - - def remove_unused_weights(self): - self.register_modules(text_unet=None) - - def _convert_to_dual_attention(self): - """ - Replace image_unet's `Transformer2DModel` blocks with `DualTransformer2DModel` that contains transformer blocks - from both `image_unet` and `text_unet` - """ - for name, module in self.image_unet.named_modules(): - if isinstance(module, Transformer2DModel): - parent_name, index = name.rsplit(".", 1) - index = int(index) - - image_transformer = self.image_unet.get_submodule(parent_name)[index] - text_transformer = self.text_unet.get_submodule(parent_name)[index] - - config = image_transformer.config - dual_transformer = DualTransformer2DModel( - num_attention_heads=config.num_attention_heads, - attention_head_dim=config.attention_head_dim, - in_channels=config.in_channels, - num_layers=config.num_layers, - dropout=config.dropout, - norm_num_groups=config.norm_num_groups, - cross_attention_dim=config.cross_attention_dim, - attention_bias=config.attention_bias, - sample_size=config.sample_size, - num_vector_embeds=config.num_vector_embeds, - activation_fn=config.activation_fn, - num_embeds_ada_norm=config.num_embeds_ada_norm, - ) - dual_transformer.transformers[0] = image_transformer - dual_transformer.transformers[1] = text_transformer - - self.image_unet.get_submodule(parent_name)[index] = dual_transformer - self.image_unet.register_to_config(dual_cross_attention=True) - - def _revert_dual_attention(self): - """ - Revert the image_unet `DualTransformer2DModel` blocks back to `Transformer2DModel` with image_unet weights Call - this function if you reuse `image_unet` in another pipeline, e.g. `VersatileDiffusionPipeline` - """ - for name, module in self.image_unet.named_modules(): - if isinstance(module, DualTransformer2DModel): - parent_name, index = name.rsplit(".", 1) - index = int(index) - self.image_unet.get_submodule(parent_name)[index] = module.transformers[0] - - self.image_unet.register_to_config(dual_cross_attention=False) - - def _encode_text_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance): - r""" - Encodes the prompt into text encoder hidden states. 
- - Args: - prompt (`str` or `List[str]`): - prompt to be encoded - device: (`torch.device`): - torch device - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - """ - - def normalize_embeddings(encoder_output): - embeds = self.text_encoder.text_projection(encoder_output.last_hidden_state) - embeds_pooled = encoder_output.text_embeds - embeds = embeds / torch.norm(embeds_pooled.unsqueeze(1), dim=-1, keepdim=True) - return embeds - - batch_size = len(prompt) - - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="max_length", return_tensors="pt").input_ids - - if not torch.equal(text_input_ids, untruncated_ids): - removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = text_inputs.attention_mask.to(device) - else: - attention_mask = None - - prompt_embeds = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - prompt_embeds = normalize_embeddings(prompt_embeds) - - # duplicate text embeddings for each generation per prompt, using mps friendly method - bs_embed, seq_len, _ = prompt_embeds.shape - prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) - prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - uncond_tokens = [""] * batch_size - max_length = text_input_ids.shape[-1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = uncond_input.attention_mask.to(device) - else: - attention_mask = None - - negative_prompt_embeds = self.text_encoder( - uncond_input.input_ids.to(device), - attention_mask=attention_mask, - ) - negative_prompt_embeds = normalize_embeddings(negative_prompt_embeds) - - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = negative_prompt_embeds.shape[1] - negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) - negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) - - return prompt_embeds - - def _encode_image_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance): - r""" - Encodes the prompt into text encoder hidden states. 
- - Args: - prompt (`str` or `List[str]`): - prompt to be encoded - device: (`torch.device`): - torch device - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - """ - - def normalize_embeddings(encoder_output): - embeds = self.image_encoder.vision_model.post_layernorm(encoder_output.last_hidden_state) - embeds = self.image_encoder.visual_projection(embeds) - embeds_pooled = embeds[:, 0:1] - embeds = embeds / torch.norm(embeds_pooled, dim=-1, keepdim=True) - return embeds - - batch_size = len(prompt) if isinstance(prompt, list) else 1 - - # get prompt text embeddings - image_input = self.image_feature_extractor(images=prompt, return_tensors="pt") - pixel_values = image_input.pixel_values.to(device).to(self.image_encoder.dtype) - image_embeddings = self.image_encoder(pixel_values) - image_embeddings = normalize_embeddings(image_embeddings) - - # duplicate image embeddings for each generation per prompt, using mps friendly method - bs_embed, seq_len, _ = image_embeddings.shape - image_embeddings = image_embeddings.repeat(1, num_images_per_prompt, 1) - image_embeddings = image_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - uncond_images = [np.zeros((512, 512, 3)) + 0.5] * batch_size - uncond_images = self.image_feature_extractor(images=uncond_images, return_tensors="pt") - pixel_values = uncond_images.pixel_values.to(device).to(self.image_encoder.dtype) - negative_prompt_embeds = self.image_encoder(pixel_values) - negative_prompt_embeds = normalize_embeddings(negative_prompt_embeds) - - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = negative_prompt_embeds.shape[1] - negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) - negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and conditional embeddings into a single batch - # to avoid doing two forward passes - image_embeddings = torch.cat([negative_prompt_embeds, image_embeddings]) - - return image_embeddings - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents - def decode_latents(self, latents): - warnings.warn( - "The decode_latents method is deprecated and will be removed in a future version. Please" - " use VaeImageProcessor instead", - FutureWarning, - ) - latents = 1 / self.vae.config.scaling_factor * latents - image = self.vae.decode(latents, return_dict=False)[0] - image = (image / 2 + 0.5).clamp(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - return image - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. 
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - def check_inputs(self, prompt, image, height, width, callback_steps): - if not isinstance(prompt, str) and not isinstance(prompt, PIL.Image.Image) and not isinstance(prompt, list): - raise ValueError(f"`prompt` has to be of type `str` `PIL.Image` or `list` but is {type(prompt)}") - if not isinstance(image, str) and not isinstance(image, PIL.Image.Image) and not isinstance(image, list): - raise ValueError(f"`image` has to be of type `str` `PIL.Image` or `list` but is {type(image)}") - - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents - def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): - shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor) - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." 
- ) - - if latents is None: - latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - else: - latents = latents.to(device) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - return latents - - def set_transformer_params(self, mix_ratio: float = 0.5, condition_types: Tuple = ("text", "image")): - for name, module in self.image_unet.named_modules(): - if isinstance(module, DualTransformer2DModel): - module.mix_ratio = mix_ratio - - for i, type in enumerate(condition_types): - if type == "text": - module.condition_lengths[i] = self.text_encoder.config.max_position_embeddings - module.transformer_index_for_condition[i] = 1 # use the second (text) transformer - else: - module.condition_lengths[i] = 257 - module.transformer_index_for_condition[i] = 0 # use the first (image) transformer - - @torch.no_grad() - def __call__( - self, - prompt: Union[PIL.Image.Image, List[PIL.Image.Image]], - image: Union[str, List[str]], - text_to_image_strength: float = 0.5, - height: Optional[int] = None, - width: Optional[int] = None, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - **kwargs, - ): - r""" - The call function to the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide image generation. - height (`int`, *optional*, defaults to `self.image_unet.config.sample_size * self.vae_scale_factor`): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to `self.image_unet.config.sample_size * self.vae_scale_factor`): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - A higher guidance scale value encourages the model to generate images closely linked to the text - `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide what to not include in image generation. If not defined, you need to - pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies - to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers. - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make - generation deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. 
If not provided, a latents - tensor is generated by sampling using the supplied random `generator`. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generated image. Choose between `PIL.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple. - callback (`Callable`, *optional*): - A function that calls every `callback_steps` steps during inference. The function is called with the - following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function is called. If not specified, the callback is called at - every step. - - Examples: - - ```py - >>> from diffusers import VersatileDiffusionDualGuidedPipeline - >>> import torch - >>> import requests - >>> from io import BytesIO - >>> from PIL import Image - - >>> # let's download an initial image - >>> url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg" - - >>> response = requests.get(url) - >>> image = Image.open(BytesIO(response.content)).convert("RGB") - >>> text = "a red car in the sun" - - >>> pipe = VersatileDiffusionDualGuidedPipeline.from_pretrained( - ... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 - ... ) - >>> pipe.remove_unused_weights() - >>> pipe = pipe.to("cuda") - - >>> generator = torch.Generator(device="cuda").manual_seed(0) - >>> text_to_image_strength = 0.75 - - >>> image = pipe( - ... prompt=text, image=image, text_to_image_strength=text_to_image_strength, generator=generator - ... ).images[0] - >>> image.save("./car_variation.png") - ``` - - Returns: - [`~pipelines.ImagePipelineOutput`] or `tuple`: - If `return_dict` is `True`, [`~pipelines.ImagePipelineOutput`] is returned, otherwise a `tuple` is - returned where the first element is a list with the generated images. - """ - # 0. Default height and width to unet - height = height or self.image_unet.config.sample_size * self.vae_scale_factor - width = width or self.image_unet.config.sample_size * self.vae_scale_factor - - # 1. Check inputs. Raise error if not correct - self.check_inputs(prompt, image, height, width, callback_steps) - - # 2. Define call parameters - prompt = [prompt] if not isinstance(prompt, list) else prompt - image = [image] if not isinstance(image, list) else image - batch_size = len(prompt) - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompts - prompt_embeds = self._encode_text_prompt(prompt, device, num_images_per_prompt, do_classifier_free_guidance) - image_embeddings = self._encode_image_prompt(image, device, num_images_per_prompt, do_classifier_free_guidance) - dual_prompt_embeddings = torch.cat([prompt_embeds, image_embeddings], dim=1) - prompt_types = ("text", "image") - - # 4. Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - # 5. 
Prepare latent variables - num_channels_latents = self.image_unet.config.in_channels - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - num_channels_latents, - height, - width, - dual_prompt_embeddings.dtype, - device, - generator, - latents, - ) - - # 6. Prepare extra step kwargs. - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 7. Combine the attention blocks of the image and text UNets - self.set_transformer_params(text_to_image_strength, prompt_types) - - # 8. Denoising loop - for i, t in enumerate(self.progress_bar(timesteps)): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.image_unet(latent_model_input, t, encoder_hidden_states=dual_prompt_embeddings).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # call the callback, if provided - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - if not output_type == "latent": - image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0] - else: - image = latents - - image = self.image_processor.postprocess(image, output_type=output_type) - - if not return_dict: - return (image,) - - return ImagePipelineOutput(images=image) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/detectors/htc_r50_sac_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/detectors/htc_r50_sac_1x_coco.py deleted file mode 100644 index 72d4db963ffd95851b945911b3db9941426583ab..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/detectors/htc_r50_sac_1x_coco.py +++ /dev/null @@ -1,8 +0,0 @@ -_base_ = '../htc/htc_r50_fpn_1x_coco.py' - -model = dict( - backbone=dict( - type='DetectoRS_ResNet', - conv_cfg=dict(type='ConvAWS'), - sac=dict(type='SAC', use_deform=True), - stage_with_sac=(False, True, True, True))) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/standard_roi_head.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/standard_roi_head.py deleted file mode 100644 index c530f2a5ce904439492de12ff7d267cc1e757d3a..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/standard_roi_head.py +++ /dev/null @@ -1,295 +0,0 @@ -import torch - -from mmdet.core import bbox2result, bbox2roi, build_assigner, build_sampler -from ..builder import HEADS, build_head, build_roi_extractor -from .base_roi_head import BaseRoIHead -from .test_mixins import BBoxTestMixin, MaskTestMixin - - -@HEADS.register_module() -class StandardRoIHead(BaseRoIHead, BBoxTestMixin, MaskTestMixin): - """Simplest base roi head including one bbox head and one mask head.""" - - def init_assigner_sampler(self): - """Initialize assigner and sampler.""" - self.bbox_assigner = None - self.bbox_sampler = None - if self.train_cfg: - self.bbox_assigner = build_assigner(self.train_cfg.assigner) - self.bbox_sampler = build_sampler( - self.train_cfg.sampler, context=self) - - def init_bbox_head(self, 
bbox_roi_extractor, bbox_head): - """Initialize ``bbox_head``""" - self.bbox_roi_extractor = build_roi_extractor(bbox_roi_extractor) - self.bbox_head = build_head(bbox_head) - - def init_mask_head(self, mask_roi_extractor, mask_head): - """Initialize ``mask_head``""" - if mask_roi_extractor is not None: - self.mask_roi_extractor = build_roi_extractor(mask_roi_extractor) - self.share_roi_extractor = False - else: - self.share_roi_extractor = True - self.mask_roi_extractor = self.bbox_roi_extractor - self.mask_head = build_head(mask_head) - - def init_weights(self, pretrained): - """Initialize the weights in head. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - if self.with_shared_head: - self.shared_head.init_weights(pretrained=pretrained) - if self.with_bbox: - self.bbox_roi_extractor.init_weights() - self.bbox_head.init_weights() - if self.with_mask: - self.mask_head.init_weights() - if not self.share_roi_extractor: - self.mask_roi_extractor.init_weights() - - def forward_dummy(self, x, proposals): - """Dummy forward function.""" - # bbox head - outs = () - rois = bbox2roi([proposals]) - if self.with_bbox: - bbox_results = self._bbox_forward(x, rois) - outs = outs + (bbox_results['cls_score'], - bbox_results['bbox_pred']) - # mask head - if self.with_mask: - mask_rois = rois[:100] - mask_results = self._mask_forward(x, mask_rois) - outs = outs + (mask_results['mask_pred'], ) - return outs - - def forward_train(self, - x, - img_metas, - proposal_list, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None): - """ - Args: - x (list[Tensor]): list of multi-level img features. - img_metas (list[dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - proposals (list[Tensors]): list of region proposals. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - gt_masks (None | Tensor) : true segmentation masks for each box - used if the architecture supports a segmentation task. 
- - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - # assign gts and sample proposals - if self.with_bbox or self.with_mask: - num_imgs = len(img_metas) - if gt_bboxes_ignore is None: - gt_bboxes_ignore = [None for _ in range(num_imgs)] - sampling_results = [] - for i in range(num_imgs): - assign_result = self.bbox_assigner.assign( - proposal_list[i], gt_bboxes[i], gt_bboxes_ignore[i], - gt_labels[i]) - sampling_result = self.bbox_sampler.sample( - assign_result, - proposal_list[i], - gt_bboxes[i], - gt_labels[i], - feats=[lvl_feat[i][None] for lvl_feat in x]) - sampling_results.append(sampling_result) - - losses = dict() - # bbox head forward and loss - if self.with_bbox: - bbox_results = self._bbox_forward_train(x, sampling_results, - gt_bboxes, gt_labels, - img_metas) - losses.update(bbox_results['loss_bbox']) - - # mask head forward and loss - if self.with_mask: - mask_results = self._mask_forward_train(x, sampling_results, - bbox_results['bbox_feats'], - gt_masks, img_metas) - losses.update(mask_results['loss_mask']) - - return losses - - def _bbox_forward(self, x, rois): - """Box head forward function used in both training and testing.""" - # TODO: a more flexible way to decide which feature maps to use - bbox_feats = self.bbox_roi_extractor( - x[:self.bbox_roi_extractor.num_inputs], rois) - if self.with_shared_head: - bbox_feats = self.shared_head(bbox_feats) - cls_score, bbox_pred = self.bbox_head(bbox_feats) - - bbox_results = dict( - cls_score=cls_score, bbox_pred=bbox_pred, bbox_feats=bbox_feats) - return bbox_results - - def _bbox_forward_train(self, x, sampling_results, gt_bboxes, gt_labels, - img_metas): - """Run forward function and calculate loss for box head in training.""" - rois = bbox2roi([res.bboxes for res in sampling_results]) - bbox_results = self._bbox_forward(x, rois) - - bbox_targets = self.bbox_head.get_targets(sampling_results, gt_bboxes, - gt_labels, self.train_cfg) - loss_bbox = self.bbox_head.loss(bbox_results['cls_score'], - bbox_results['bbox_pred'], rois, - *bbox_targets) - - bbox_results.update(loss_bbox=loss_bbox) - return bbox_results - - def _mask_forward_train(self, x, sampling_results, bbox_feats, gt_masks, - img_metas): - """Run forward function and calculate loss for mask head in - training.""" - if not self.share_roi_extractor: - pos_rois = bbox2roi([res.pos_bboxes for res in sampling_results]) - mask_results = self._mask_forward(x, pos_rois) - else: - pos_inds = [] - device = bbox_feats.device - for res in sampling_results: - pos_inds.append( - torch.ones( - res.pos_bboxes.shape[0], - device=device, - dtype=torch.uint8)) - pos_inds.append( - torch.zeros( - res.neg_bboxes.shape[0], - device=device, - dtype=torch.uint8)) - pos_inds = torch.cat(pos_inds) - - mask_results = self._mask_forward( - x, pos_inds=pos_inds, bbox_feats=bbox_feats) - - mask_targets = self.mask_head.get_targets(sampling_results, gt_masks, - self.train_cfg) - pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results]) - loss_mask = self.mask_head.loss(mask_results['mask_pred'], - mask_targets, pos_labels) - - mask_results.update(loss_mask=loss_mask, mask_targets=mask_targets) - return mask_results - - def _mask_forward(self, x, rois=None, pos_inds=None, bbox_feats=None): - """Mask head forward function used in both training and testing.""" - assert ((rois is not None) ^ - (pos_inds is not None and bbox_feats is not None)) - if rois is not None: - mask_feats = self.mask_roi_extractor( - x[:self.mask_roi_extractor.num_inputs], rois) - if 
self.with_shared_head: - mask_feats = self.shared_head(mask_feats) - else: - assert bbox_feats is not None - mask_feats = bbox_feats[pos_inds] - - mask_pred = self.mask_head(mask_feats) - mask_results = dict(mask_pred=mask_pred, mask_feats=mask_feats) - return mask_results - - async def async_simple_test(self, - x, - proposal_list, - img_metas, - proposals=None, - rescale=False): - """Async test without augmentation.""" - assert self.with_bbox, 'Bbox head must be implemented.' - - det_bboxes, det_labels = await self.async_test_bboxes( - x, img_metas, proposal_list, self.test_cfg, rescale=rescale) - bbox_results = bbox2result(det_bboxes, det_labels, - self.bbox_head.num_classes) - if not self.with_mask: - return bbox_results - else: - segm_results = await self.async_test_mask( - x, - img_metas, - det_bboxes, - det_labels, - rescale=rescale, - mask_test_cfg=self.test_cfg.get('mask')) - return bbox_results, segm_results - - def simple_test(self, - x, - proposal_list, - img_metas, - proposals=None, - rescale=False): - """Test without augmentation.""" - assert self.with_bbox, 'Bbox head must be implemented.' - - det_bboxes, det_labels = self.simple_test_bboxes( - x, img_metas, proposal_list, self.test_cfg, rescale=rescale) - if torch.onnx.is_in_onnx_export(): - if self.with_mask: - segm_results = self.simple_test_mask( - x, img_metas, det_bboxes, det_labels, rescale=rescale) - return det_bboxes, det_labels, segm_results - else: - return det_bboxes, det_labels - - bbox_results = [ - bbox2result(det_bboxes[i], det_labels[i], - self.bbox_head.num_classes) - for i in range(len(det_bboxes)) - ] - - if not self.with_mask: - return bbox_results - else: - segm_results = self.simple_test_mask( - x, img_metas, det_bboxes, det_labels, rescale=rescale) - return list(zip(bbox_results, segm_results)) - - def aug_test(self, x, proposal_list, img_metas, rescale=False): - """Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. 
- """ - det_bboxes, det_labels = self.aug_test_bboxes(x, img_metas, - proposal_list, - self.test_cfg) - - if rescale: - _det_bboxes = det_bboxes - else: - _det_bboxes = det_bboxes.clone() - _det_bboxes[:, :4] *= det_bboxes.new_tensor( - img_metas[0][0]['scale_factor']) - bbox_results = bbox2result(_det_bboxes, det_labels, - self.bbox_head.num_classes) - - # det_bboxes always keep the original scale - if self.with_mask: - segm_results = self.aug_test_mask(x, img_metas, det_bboxes, - det_labels) - return [(bbox_results, segm_results)] - else: - return [bbox_results] diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/dmnet/dmnet_r50-d8_769x769_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/dmnet/dmnet_r50-d8_769x769_40k_cityscapes.py deleted file mode 100644 index 19841547a42315164de547a4121cfd64739cf24b..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/dmnet/dmnet_r50-d8_769x769_40k_cityscapes.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = [ - '../_base_/models/dmnet_r50-d8.py', - '../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py', - '../_base_/schedules/schedule_40k.py' -] -model = dict( - decode_head=dict(align_corners=True), - auxiliary_head=dict(align_corners=True), - test_cfg=dict(mode='slide', crop_size=(769, 769), stride=(513, 513))) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18_480x480_40k_pascal_context.py b/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18_480x480_40k_pascal_context.py deleted file mode 100644 index 5ff05aa595399d77ee51552c243e489f395a820e..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18_480x480_40k_pascal_context.py +++ /dev/null @@ -1,8 +0,0 @@ -_base_ = [ - '../_base_/models/fcn_hr18.py', '../_base_/datasets/pascal_context.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py' -] -model = dict( - decode_head=dict(num_classes=60), - test_cfg=dict(mode='slide', crop_size=(480, 480), stride=(320, 320))) -optimizer = dict(type='SGD', lr=0.004, momentum=0.9, weight_decay=0.0001) diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/fileio/handlers/base.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/fileio/handlers/base.py deleted file mode 100644 index 288878bc57282fbb2f12b32290152ca8e9d3cab0..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/fileio/handlers/base.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from abc import ABCMeta, abstractmethod - - -class BaseFileHandler(metaclass=ABCMeta): - # `str_like` is a flag to indicate whether the type of file object is - # str-like object or bytes-like object. Pickle only processes bytes-like - # objects but json only processes str-like object. If it is str-like - # object, `StringIO` will be used to process the buffer. 
- str_like = True - - @abstractmethod - def load_from_fileobj(self, file, **kwargs): - pass - - @abstractmethod - def dump_to_fileobj(self, obj, file, **kwargs): - pass - - @abstractmethod - def dump_to_str(self, obj, **kwargs): - pass - - def load_from_path(self, filepath, mode='r', **kwargs): - with open(filepath, mode) as f: - return self.load_from_fileobj(f, **kwargs) - - def dump_to_path(self, obj, filepath, mode='w', **kwargs): - with open(filepath, mode) as f: - self.dump_to_fileobj(obj, f, **kwargs) diff --git a/spaces/ArkanDash/rvc-models/app.py b/spaces/ArkanDash/rvc-models/app.py deleted file mode 100644 index 82708cf62d946b368cf5e1af470a1afd1c1735bb..0000000000000000000000000000000000000000 --- a/spaces/ArkanDash/rvc-models/app.py +++ /dev/null @@ -1,178 +0,0 @@ -import os -import json -import argparse -import traceback -import logging -import gradio as gr -import numpy as np -import librosa -import torch -import asyncio -import edge_tts -from datetime import datetime -from fairseq import checkpoint_utils -from infer_pack.models import SynthesizerTrnMs256NSFsid, SynthesizerTrnMs256NSFsid_nono -from vc_infer_pipeline import VC -from config import ( - is_half, - device -) -logging.getLogger("numba").setLevel(logging.WARNING) -limitation = os.getenv("SYSTEM") == "spaces" # limit audio length in huggingface spaces - -def create_vc_fn(tgt_sr, net_g, vc, if_f0, file_index, file_big_npy): - def vc_fn( - input_audio, - f0_up_key, - f0_method, - index_rate, - tts_mode, - tts_text, - tts_voice - ): - try: - if tts_mode: - if len(tts_text) > 100 and limitation: - return "Text is too long", None - if tts_text is None or tts_voice is None: - return "You need to enter text and select a voice", None - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3")) - audio, sr = librosa.load("tts.mp3", sr=16000, mono=True) - else: - if input_audio is None: - return "You need to upload an audio", None - sampling_rate, audio = input_audio - duration = audio.shape[0] / sampling_rate - if duration > 20 and limitation: - return "Please upload an audio file that is less than 20 seconds. 
If you need to generate a longer audio file, please use Colab.", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - times = [0, 0, 0] - f0_up_key = int(f0_up_key) - audio_opt = vc.pipeline( - hubert_model, - net_g, - 0, - audio, - times, - f0_up_key, - f0_method, - file_index, - file_big_npy, - index_rate, - if_f0, - ) - print( - f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s" - ) - return "Success", (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - return info, (None, None) - return vc_fn - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(device) - if is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - -def change_to_tts_mode(tts_mode): - if tts_mode: - return gr.Audio.update(visible=False), gr.Textbox.update(visible=True), gr.Dropdown.update(visible=True) - else: - return gr.Audio.update(visible=True), gr.Textbox.update(visible=False), gr.Dropdown.update(visible=False) - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--api', action="store_true", default=False) - parser.add_argument("--colab", action="store_true", default=False, help="share gradio app") - args, unknown = parser.parse_known_args() - load_hubert() - models = [] - tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices()) - voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list] - with open("weights/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for name, info in models_info.items(): - if not info['enable']: - continue - title = info['title'] - author = info.get("author", None) - cover = f"weights/{name}/{info['cover']}" - index = f"weights/{name}/{info['feature_retrieval_library']}" - npy = f"weights/{name}/{info['feature_file']}" - cpt = torch.load(f"weights/{name}/{name}.pth", map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) # 不加这一行清不干净, 真奇葩 - net_g.eval().to(device) - if is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, device, is_half) - models.append((name, title, author, cover, create_vc_fn(tgt_sr, net_g, vc, if_f0, index, npy))) - with gr.Blocks() as app: - gr.Markdown( - "#
RVC Models (Outdated)\n" - "##
The input audio should be clean and pure voice without background music.\n" - "#
[![New RVC Spaces](https://img.shields.io/badge/%F0%9F%A4%97_Spaces-RVC_Models_new-yellow?style=for-the-badge&logo=https%3A%2F%2Fhuggingface.co%2Ffront%2Fassets%2Fhuggingface_logo.svg&logoColor=yellow)](https://huggingface.co/spaces/ArkanDash/rvc-models-new)\n\n" - "[![Colab](https://img.shields.io/badge/Colab-RVC_Models-blue?style=for-the-badge&logo=googlecolab)](https://colab.research.google.com/drive/1hx6kKvIuv5XNY1Gai2PEuZhpO5z6xpVh?usp=sharing)\n\n" - ) - with gr.Tabs(): - for (name, title, author, cover, vc_fn) in models: - with gr.TabItem(name): - with gr.Row(): - gr.Markdown( - '
' - f'
{title}
\n'+ - (f'
Model author: {author}
' if author else "")+ - (f'' if cover else "")+ - '
' - ) - with gr.Row(): - with gr.Column(): - vc_input = gr.Audio(label="Input audio"+' (less than 20 seconds)' if limitation else '') - vc_transpose = gr.Number(label="Transpose", value=0) - vc_f0method = gr.Radio( - label="Pitch extraction algorithm, PM is fast but Harvest is better for low frequencies", - choices=["pm", "harvest"], - value="pm", - interactive=True, - ) - vc_index_ratio = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - value=0.6, - interactive=True, - ) - tts_mode = gr.Checkbox(label="tts (use edge-tts as input)", value=False) - tts_text = gr.Textbox(visible=False,label="TTS text (100 words limitation)" if limitation else "TTS text") - tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female") - vc_submit = gr.Button("Generate", variant="primary") - with gr.Column(): - vc_output1 = gr.Textbox(label="Output Message") - vc_output2 = gr.Audio(label="Output Audio") - vc_submit.click(vc_fn, [vc_input, vc_transpose, vc_f0method, vc_index_ratio, tts_mode, tts_text, tts_voice], [vc_output1, vc_output2]) - tts_mode.change(change_to_tts_mode, [tts_mode], [vc_input, tts_text, tts_voice]) - app.queue(concurrency_count=1, max_size=20, api_open=args.api).launch(share=args.colab) \ No newline at end of file diff --git a/spaces/ArtyomKhyan/Detection/app.py b/spaces/ArtyomKhyan/Detection/app.py deleted file mode 100644 index c7d5352068a2685842d435ef8d69003ea6d5bb25..0000000000000000000000000000000000000000 --- a/spaces/ArtyomKhyan/Detection/app.py +++ /dev/null @@ -1,218 +0,0 @@ -import argparse -import time -import gradio as gr -import cv2 -import matplotlib.pyplot as plt -import torch -from PIL import Image -from torchvision.datasets import ImageFolder -import matplotlib.pyplot as plt -import torch.backends.cudnn as cudnn -import numpy as np -import matplotlib.pyplot as plt -import torchvision -from torchvision.transforms import transforms -import os -def class_model(): - model = torch.hub.load('pytorch/vision:v0.10.0', 'resnet101', pretrained=True) - - class LinearModel(torch.nn.Module): - - def __init__(self): - super(LinearModel, self).__init__() - self.activation = torch.nn.ReLU() - self.linear1 = torch.nn.Linear(1024, 100) - self.linear2 = torch.nn.Linear(100, 3) - - def forward(self, x): - x = self.activation(x) - x = self.linear1(x) - x = self.activation(x) - x = self.linear2(x) - return x - full_c = torch.nn.Linear(in_features = 2048, out_features = 1024) - full_c.load_state_dict(torch.load('so.pt')) - model.fc = full_c - Linear = LinearModel() - Linear.load_state_dict(torch.load('som.pt')) - model = torch.nn.Sequential(model, Linear) - model.eval() - return model - -transform = transforms.Compose([ - transforms.ToPILImage(), - transforms.Resize((224, 224)), - transforms.ToTensor(), - transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) - ]) - -def box_iou(box1, box2): - # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py - """ - Return intersection-over-union (Jaccard index) of boxes. - Both sets of boxes are expected to be in (x1, y1, x2, y2) format. 
- Arguments: - box1 (Tensor[N, 4]) - box2 (Tensor[M, 4]) - Returns: - iou (Tensor[N, M]): the NxM matrix containing the pairwise - IoU values for every element in boxes1 and boxes2 - """ - - def box_area(box): - # box = 4xn - return (box[2] - box[0]) * (box[3] - box[1]) - - area1 = box_area(box1.t()) - area2 = box_area(box2.t()) - - # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2) - inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2) - return inter / (area1[:, None] + area2 - inter) # iou = inter / (area1 + area2 - inter) - -def xywh2xyxy(x): - # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right - y = torch.zeros_like(x) if isinstance(x, torch.Tensor) else np.zeros_like(x) - y[:, 0] = x[:, 0] - x[:, 2] / 2 # top left x - y[:, 1] = x[:, 1] - x[:, 3] / 2 # top left y - y[:, 2] = x[:, 0] + x[:, 2] / 2 # bottom right x - y[:, 3] = x[:, 1] + x[:, 3] / 2 # bottom right y - return y -def non_max_suppression(prediction, conf_thres=0.1, iou_thres=0.65, merge=False, classes=None, agnostic=False): - """Performs Non-Maximum Suppression (NMS) on inference results - - Returns: - detections with shape: nx6 (x1, y1, x2, y2, conf, cls) - """ - if prediction.dtype is torch.float16: - prediction = prediction.float() # to FP32 - - nc = prediction[0].shape[1] - 5 # number of classes - xc = prediction[..., 4] > conf_thres # candidates - - # Settings - min_wh, max_wh = 2, 4096 # (pixels) minimum and maximum box width and height - max_det = 300 # maximum number of detections per image - time_limit = 10.0 # seconds to quit after - redundant = True # require redundant detections - multi_label = nc > 1 # multiple labels per box (adds 0.5ms/img) - - t = time.time() - output = [None] * prediction.shape[0] - for xi, x in enumerate(prediction): # image index, image inference - # Apply constraints - # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height - x = x[xc[xi]] # confidence - - # If none remain process next image - if not x.shape[0]: - continue - - # Compute conf - x[:, 5:] *= x[:, 4:5] # conf = obj_conf * cls_conf - - # Box (center x, center y, width, height) to (x1, y1, x2, y2) - box = xywh2xyxy(x[:, :4]) - - # Detections matrix nx6 (xyxy, conf, cls) - if multi_label: - i, j = (x[:, 5:] > conf_thres).nonzero().t() - x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1) - else: # best class only - conf, j = x[:, 5:].max(1, keepdim=True) - x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres] - - # Filter by class - if classes: - x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)] - - # Apply finite constraint - # if not torch.isfinite(x).all(): - # x = x[torch.isfinite(x).all(1)] - - # If none remain process next image - n = x.shape[0] # number of boxes - if not n: - continue - - # Sort by confidence - # x = x[x[:, 4].argsort(descending=True)] - - # Batched NMS - c = x[:, 5:6] * (0 if agnostic else max_wh) # classes - boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores - i = torchvision.ops.boxes.nms(boxes, scores, iou_thres) - if i.shape[0] > max_det: # limit detections - i = i[:max_det] - if merge and (1 < n < 3E3): # Merge NMS (boxes merged using weighted mean) - try: # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4) - iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix - weights = iou * scores[None] # box weights - x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # 
merged boxes - if redundant: - i = i[iou.sum(1) > 1] # require redundancy - except: # possible CUDA error https://github.com/ultralytics/yolov3/issues/1139 - print(x, i, x.shape, i.shape) - pass - - output[xi] = x[i] - if (time.time() - t) > time_limit: - break # time limit exceeded - - return output - - - -def run_detector(img): - device = torch.device('cpu') - print(1) - model = torch.load('last_yolov5s_results.pt', map_location=device)['model'].float() - print(2) - image = img - res_image = cv2.resize(image, (416, 416)) - image = torch.tensor(res_image).permute(2, 0, 1).unsqueeze(0).float() / 255. - count_dobri = 0 - count_all = 0 - with torch.no_grad(): - detection = model(image) - pred = non_max_suppression(detection[0], conf_thres=0.4, iou_thres=0.6) - main_model = class_model() - main_model.eval() - - for i in pred[0]: - if i[0] > 0 and i[1] > 0 and i[2] > 0 and i[3] > 0: - if i[3] > i[1] and i[2] > i[0]: - - cropped_image = res_image[int(i[1]):int(i[3]), int(i[0]):int(i[2])] - input_tensor = transform(cropped_image) - input_tensor = input_tensor.reshape(1, 3, 224, 224) - with torch.no_grad(): - input = main_model(input_tensor) - layer = torch.nn.Softmax(dim=1) - output_s = layer(input) - output = torch.argmax(output_s) - print(output) - if output == 0 and output_s[0][0] > 0.55: - count_dobri +=1 - cv2.rectangle(res_image, (int(i[0]), int(i[1])), (int(i[2]), int(i[3])), (0, 255, 0)) - elif output == 1: - count_all +=1 - cv2.rectangle(res_image, (int(i[0]), int(i[1])), (int(i[2]), int(i[3])), (0, 0, 255)) - elif output == 2: - cv2.rectangle(res_image, (int(i[0]), int(i[1])), (int(i[2]), int(i[3])), (255, 0, 0)) - count_all += count_dobri - global dobry_percent - dobry_percent = count_dobri/count_all - return res_image - - -def greet(name): - print(type(name)) - res_image = run_detector(name) - - return res_image, dobry_percent*100 - - -demo = gr.Interface(fn=greet, inputs="image", outputs=['image', 'text']) - -demo.launch() \ No newline at end of file diff --git a/spaces/Ashrafb/translate/app.py b/spaces/Ashrafb/translate/app.py deleted file mode 100644 index 644e3ffae892d8895908745d4bcdf405fc85c86d..0000000000000000000000000000000000000000 --- a/spaces/Ashrafb/translate/app.py +++ /dev/null @@ -1,33 +0,0 @@ -import gradio as gr -from transformers import M2M100ForConditionalGeneration -from tokenization_small100 import SMALL100Tokenizer - -langs = """af,am,ar,ast,az,ba,be,bg,bn,br,bs,ca,ceb,cs,cy,da,de,el,en,es,et,fa,ff,fi,fr,fy,ga,gd,gl,gu,ha,he,hi,hr,ht,hu,hy,id,ig,ilo,is,it,ja,jv,ka,kk,km,kn,ko,lb,lg,ln,lo,lt,lv,mg,mk,ml,mn,mr,ms,my,ne,nl,no,ns,oc,or,pa,pl,ps,pt,ro,ru,sd,si,sk,sl,so,sq,sr,ss,su,sv,sw,ta,th,tl,tn,tr,uk,ur,uz,vi,wo,xh,yi,yo,zh,zu""" -lang_list = langs.split(',') - -model = M2M100ForConditionalGeneration.from_pretrained("alirezamsh/small100") -tokenizer = SMALL100Tokenizer.from_pretrained("alirezamsh/small100") - -def translate(lang, text): - tokenizer.tgt_lang = lang - encoded_text = tokenizer(text, return_tensors="pt") - generated_tokens = model.generate(**encoded_text) - return tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0] - -with gr.Blocks(analytics_enabled=False) as app: - - Source = gr.Textbox( label="Source" ) - Language = gr.Dropdown( lang_list, label="Language" ) - Translate = gr.Button( "Translate" ) - Result = gr.Textbox( label="Result" ) - - - Translate.click( - translate, - inputs=[ Language, Source ], - outputs=[Result], - api_name="translate", - ) - - app.launch( inline=True ) - block.queue( concurrency_count=2 ) diff --git 
a/spaces/Awesimo/jojogan/e4e/criteria/lpips/__init__.py b/spaces/Awesimo/jojogan/e4e/criteria/lpips/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/tutorials/builtin_datasets.md b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/tutorials/builtin_datasets.md deleted file mode 100644 index 0ba82423ad498bdd86274ada56a201134a590d94..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/tutorials/builtin_datasets.md +++ /dev/null @@ -1 +0,0 @@ -../../datasets/README.md \ No newline at end of file diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/tutorials/models.md b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/tutorials/models.md deleted file mode 100644 index 3cf918e7a145ee326c6cccf8a88835b7e02a7c30..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/tutorials/models.md +++ /dev/null @@ -1,180 +0,0 @@ -# Use Models - -## Build Models from Yacs Config -From a yacs config object, -models (and their sub-models) can be built by -functions such as `build_model`, `build_backbone`, `build_roi_heads`: -```python -from detectron2.modeling import build_model -model = build_model(cfg) # returns a torch.nn.Module -``` - -`build_model` only builds the model structure and fills it with random parameters. -See below for how to load an existing checkpoint to the model and how to use the `model` object. - -### Load/Save a Checkpoint -```python -from detectron2.checkpoint import DetectionCheckpointer -DetectionCheckpointer(model).load(file_path_or_url) # load a file, usually from cfg.MODEL.WEIGHTS - -checkpointer = DetectionCheckpointer(model, save_dir="output") -checkpointer.save("model_999") # save to output/model_999.pth -``` - -Detectron2's checkpointer recognizes models in pytorch's `.pth` format, as well as the `.pkl` files -in our model zoo. -See [API doc](../modules/checkpoint.html#detectron2.checkpoint.DetectionCheckpointer) -for more details about its usage. - -The model files can be arbitrarily manipulated using `torch.{load,save}` for `.pth` files or -`pickle.{dump,load}` for `.pkl` files. - -### Use a Model - -A model can be called by `outputs = model(inputs)`, where `inputs` is a `list[dict]`. -Each dict corresponds to one image and the required keys -depend on the type of model, and whether the model is in training or evaluation mode. -For example, in order to do inference, -all existing models expect the "image" key, and optionally "height" and "width". -The detailed format of inputs and outputs of existing models are explained below. - -__Training__: When in training mode, all models are required to be used under an `EventStorage`. -The training statistics will be put into the storage: -```python -from detectron2.utils.events import EventStorage -with EventStorage() as storage: - losses = model(inputs) -``` - -__Inference__: If you only want to do simple inference using an existing model, -[DefaultPredictor](../modules/engine.html#detectron2.engine.defaults.DefaultPredictor) -is a wrapper around model that provides such basic functionality. -It includes default behavior including model loading, preprocessing, -and operates on single image rather than batches. See its documentation for usage. 
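For example, a minimal `DefaultPredictor` sketch looks like the following; the model-zoo entry, score threshold, and image path are illustrative choices, not requirements:

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

# Illustrative model-zoo entry; any config/checkpoint pair from the zoo works the same way.
CONFIG = "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(CONFIG))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(CONFIG)
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # confidence threshold for reported detections

predictor = DefaultPredictor(cfg)
im = cv2.imread("input.jpg")   # placeholder path; DefaultPredictor expects a BGR image by default
outputs = predictor(im)        # dict with an "instances" field (see "Model Output Format" below)
print(outputs["instances"].pred_classes)
print(outputs["instances"].pred_boxes)
```
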
- -You can also run inference directly like this: -``` -model.eval() -with torch.no_grad(): - outputs = model(inputs) -``` - -### Model Input Format - -Users can implement custom models that support any arbitrary input format. -Here we describe the standard input format that all builtin models support in detectron2. -They all take a `list[dict]` as the inputs. Each dict -corresponds to information about one image. - -The dict may contain the following keys: - -* "image": `Tensor` in (C, H, W) format. The meaning of channels are defined by `cfg.INPUT.FORMAT`. - Image normalization, if any, will be performed inside the model using - `cfg.MODEL.PIXEL_{MEAN,STD}`. -* "height", "width": the **desired** output height and width **in inference**, which is not necessarily the same - as the height or width of the `image` field. - For example, the `image` field contains the resized image, if resize is used as a preprocessing step. - But you may want the outputs to be in **original** resolution. - If provided, the model will produce output in this resolution, - rather than in the resolution of the `image` as input into the model. This is more efficient and accurate. -* "instances": an [Instances](../modules/structures.html#detectron2.structures.Instances) - object for training, with the following fields: - + "gt_boxes": a [Boxes](../modules/structures.html#detectron2.structures.Boxes) object storing N boxes, one for each instance. - + "gt_classes": `Tensor` of long type, a vector of N labels, in range [0, num_categories). - + "gt_masks": a [PolygonMasks](../modules/structures.html#detectron2.structures.PolygonMasks) - or [BitMasks](../modules/structures.html#detectron2.structures.BitMasks) object storing N masks, one for each instance. - + "gt_keypoints": a [Keypoints](../modules/structures.html#detectron2.structures.Keypoints) - object storing N keypoint sets, one for each instance. -* "sem_seg": `Tensor[int]` in (H, W) format. The semantic segmentation ground truth for training. - Values represent category labels starting from 0. -* "proposals": an [Instances](../modules/structures.html#detectron2.structures.Instances) - object used only in Fast R-CNN style models, with the following fields: - + "proposal_boxes": a [Boxes](../modules/structures.html#detectron2.structures.Boxes) object storing P proposal boxes. - + "objectness_logits": `Tensor`, a vector of P scores, one for each proposal. - -For inference of builtin models, only "image" key is required, and "width/height" are optional. - -We currently don't define standard input format for panoptic segmentation training, -because models now use custom formats produced by custom data loaders. - -#### How it connects to data loader: - -The output of the default [DatasetMapper]( ../modules/data.html#detectron2.data.DatasetMapper) is a dict -that follows the above format. -After the data loader performs batching, it becomes `list[dict]` which the builtin models support. - - -### Model Output Format - -When in training mode, the builtin models output a `dict[str->ScalarTensor]` with all the losses. - -When in inference mode, the builtin models output a `list[dict]`, one dict for each image. -Based on the tasks the model is doing, each dict may contain the following fields: - -* "instances": [Instances](../modules/structures.html#detectron2.structures.Instances) - object with the following fields: - * "pred_boxes": [Boxes](../modules/structures.html#detectron2.structures.Boxes) object storing N boxes, one for each detected instance. 
- * "scores": `Tensor`, a vector of N confidence scores. - * "pred_classes": `Tensor`, a vector of N labels in range [0, num_categories). - + "pred_masks": a `Tensor` of shape (N, H, W), masks for each detected instance. - + "pred_keypoints": a `Tensor` of shape (N, num_keypoint, 3). - Each row in the last dimension is (x, y, score). Confidence scores are larger than 0. -* "sem_seg": `Tensor` of (num_categories, H, W), the semantic segmentation prediction. -* "proposals": [Instances](../modules/structures.html#detectron2.structures.Instances) - object with the following fields: - * "proposal_boxes": [Boxes](../modules/structures.html#detectron2.structures.Boxes) - object storing N boxes. - * "objectness_logits": a torch vector of N confidence scores. -* "panoptic_seg": A tuple of `(pred: Tensor, segments_info: Optional[list[dict]])`. - The `pred` tensor has shape (H, W), containing the segment id of each pixel. - - * If `segments_info` exists, each dict describes one segment id in `pred` and has the following fields: - - * "id": the segment id - * "isthing": whether the segment is a thing or stuff - * "category_id": the category id of this segment. - - If a pixel's id does not exist in `segments_info`, it is considered to be void label - defined in [Panoptic Segmentation](https://arxiv.org/abs/1801.00868). - - * If `segments_info` is None, all pixel values in `pred` must be ≥ -1. - Pixels with value -1 are assigned void labels. - Otherwise, the category id of each pixel is obtained by - `category_id = pixel // metadata.label_divisor`. - - -### Partially execute a model: - -Sometimes you may want to obtain an intermediate tensor inside a model, -such as the input of certain layer, the output before post-processing. -Since there are typically hundreds of intermediate tensors, there isn't an API that provides you -the intermediate result you need. -You have the following options: - -1. Write a (sub)model. Following the [tutorial](./write-models.md), you can - rewrite a model component (e.g. a head of a model), such that it - does the same thing as the existing component, but returns the output - you need. -2. Partially execute a model. You can create the model as usual, - but use custom code to execute it instead of its `forward()`. For example, - the following code obtains mask features before mask head. - - ```python - images = ImageList.from_tensors(...) # preprocessed input tensor - model = build_model(cfg) - model.eval() - features = model.backbone(images.tensor) - proposals, _ = model.proposal_generator(images, features) - instances, _ = model.roi_heads(images, features, proposals) - mask_features = [features[f] for f in model.roi_heads.in_features] - mask_features = model.roi_heads.mask_pooler(mask_features, [x.pred_boxes for x in instances]) - ``` - -3. Use [forward hooks](https://pytorch.org/tutorials/beginner/former_torchies/nnft_tutorial.html#forward-and-backward-function-hooks). - Forward hooks can help you obtain inputs or outputs of a certain module. - If they are not exactly what you want, they can at least be used together with partial execution - to obtain other tensors. - -All options require you to read documentation and sometimes code -of the existing models to understand the internal logic, -in order to write code to obtain the internal tensors. 
diff --git a/spaces/Baptlem/UCDR-Net/README.md b/spaces/Baptlem/UCDR-Net/README.md deleted file mode 100644 index ef2906b2c09c6783b9ba872eb82de32013f89c26..0000000000000000000000000000000000000000 --- a/spaces/Baptlem/UCDR-Net/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: UCDR-Net -emoji: 🚀 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: true -tags: -- jax-diffusers-event ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Bhool Bhulaiyaa 2 Tono De Llamada.md b/spaces/Benson/text-generation/Examples/Descargar Bhool Bhulaiyaa 2 Tono De Llamada.md deleted file mode 100644 index 48c544072e78a6146eef0420ea944d1a3cb3a4f9..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Bhool Bhulaiyaa 2 Tono De Llamada.md +++ /dev/null @@ -1,78 +0,0 @@ - -

Bhool Bhulaiyaa 2: A Horror-Comedy Sequel That Will Make You Laugh and Scream

If you are looking for a film that entertains you with both comedy and horror, you should not miss Bhool Bhulaiyaa 2, a Hindi-language film released on 20 May 2022. The film is a standalone sequel to Bhool Bhulaiyaa (2007), a box-office hit starring Akshay Kumar and Vidya Balan. Bhool Bhulaiyaa 2 features Tabu, Kartik Aaryan and Kiara Advani in the lead roles, alongside Rajpal Yadav, Sanjay Mishra, Ashwini Kalsekar and others in supporting roles. The film is directed by Anees Bazmee, written by Aakash Kaushik and Farhad Samji, and produced by Bhushan Kumar, Murad Khetani, Krishan Kumar and Anjum Khetani under the banners of T-Series Films and Cine1 Studios.

The plot of Bhool Bhulaiyaa 2 follows Ruhaan Randhawa (Kartik Aaryan), a fake psychic who has to deal with the return of Manjulika (Tabu), a malevolent spirit bent on taking revenge on the Thakur family. Ruhaan meets Reet Rathore (Kiara Advani), a reluctant bride on her way to Rajasthan to marry her fiancé Sagar (Amar Upadhyay). Fate leads them to an abandoned mansion where Manjulika has been trapped for 18 years by a group of priests. As Ruhaan tries to help Reet escape her family's pressure and Manjulika's wrath, he uncovers dark secrets from the past and the truth about his own identity. Can Ruhaan save Reet and himself from Manjulika's curse? Or will he become her next victim?

download bhool bhulaiyaa 2 ringtone

Download Zip ->>->>->> https://bltlly.com/2v6LWr

What Is Bhool Bhulaiyaa 2 About?

The Return of Manjulika

The film's main villain is Manjulika, a vengeful spirit who was once a dancer in the court of Thakur Vikram Singh (Rajendra Gupta). She was in love with him, but he betrayed her and married another woman. She took her own life and vowed to haunt his family forever. She possessed his daughter Radhika (Vidya Balan) in the first film and tried to kill her husband Siddharth (Shiney Ahuja). She was exorcised by Dr. Aditya Shrivastav (Akshay Kumar), a psychiatrist posing as a priest.

In Bhool Bhulaiyaa 2, Manjulika returns after 18 years when some of the priests guarding her tomb are killed by thugs. She escapes her prison and finds a new host in Reet, the granddaughter of Thakur Vikram Singh. She wants revenge on the Thakur family and also on Ruhaan, the son of Dr. Aditya Shrivastav. She uses her supernatural powers to manipulate, torment and kill anyone who stands in her way.

Manjulika is played by Tabu, one of the most versatile and talented actresses in Bollywood. She delivers a brilliant performance as the evil spirit, switching from seductive to terrifying in a matter of seconds. She also shows off her dancing skills in the song "Ami Je Tomar", a remix of the original Bhool Bhulaiyaa track. Tabu has said she enjoyed playing Manjulika, as it was a challenging and fun role for her.

The Fake Psychic Ruhaan Randhawa

The film's hero is Ruhaan Randhawa, a fake psychic who claims to have supernatural abilities but actually uses tricks and gadgets to fool people. He makes money performing séances, exorcisms and readings for his clients. He is also a flirtatious, quick-witted person who likes to have fun and enjoy life.

Ruhaan is played by Kartik Aaryan, one of the most popular and charming actors in Bollywood. He gives a hilarious and heroic performance as the fake psychic who has to face his fears and fight Manjulika. He also shows his chemistry with Kiara Advani, who plays Reet, in the romantic scenes and songs. Kartik Aaryan has said he was excited to be part of Bhool Bhulaiyaa 2, since the original was one of his favourite films as a child.

The Unwilling Bride Reet Rathore

The film's heroine is Reet Rathore, an unwilling bride forced to marry Sagar, a rich and arrogant businessman who is the son of Thakur Vikram Singh's friend. She does not love him and wants to pursue her career as a fashion designer. She is also a kind and brave person who looks after her family and friends.

Reet meets Ruhaan on a train and finds him attractive and funny. She agrees to go with him to Rajasthan to escape her family and Sagar. She also becomes Manjulika's target: the spirit possesses her and wants to use her body to kill the Thakur family. She struggles to fight off Manjulika's influence and to express her feelings for Ruhaan.

Reet is played by Kiara Advani, one of the most beautiful and talented actresses in Bollywood. She gives a sweet yet strong performance as the unwilling bride who has to face many challenges and dangers. She also looks stunning in the traditional outfits and jewellery she wears in the film. Kiara Advani has said she was honoured to be part of Bhool Bhulaiyaa 2, as it was a dream come true for her.

How Is Bhool Bhulaiyaa 2 Different from Bhool Bhulaiyaa?

Bhool Bhulaiyaa 2 is not a direct sequel to Bhool Bhulaiyaa but a standalone film with its own story, characters and style. It differs from its predecessor in many ways, such as:

Comedy vs Horror

Standalone vs Sequel

While Bhool Bhulaiyaa was a remake of the Malayalam film Manichitrathazhu (1993), which was also remade in several other languages, Bhool Bhulaiyaa 2 is not a remake of any other film but an original story with new characters. It does not follow the events of the previous film, though it has some references and connections to it. It also features cameos by Akshay Kumar and Vidya Balan, reprising their roles from Bhool Bhulaiyaa.

Inspiration vs Originality

While Bhool Bhulaiyaa drew on a Malayalam film and a novel it attributes to M.R. James called The Mystery of the Yellow Room, Bhool Bhulaiyaa 2 is inspired by several sources but also has its own twists and surprises. It is loosely based on another Malayalam film, Ezra (2017), which was also a horror comedy about a haunted mansion and a vengeful spirit, and it is influenced by Hollywood films such as The Conjuring, The Exorcist and The Shining. At the same time it has original elements such as the fake-psychic character, the Rajasthan setting and the climax scene.

What Are the Highlights of Bhool Bhulaiyaa 2?

Bhool Bhulaiyaa 2 has many highlights that make it a must-watch for all kinds of viewers. Some of them are:

The Stellar Cast

The film boasts a stellar cast that includes some of Bollywood's finest actors. Tabu, Kartik Aaryan and Kiara Advani give excellent performances as Manjulika, Ruhaan and Reet respectively. They bring their characters to life with their expressions, dialogue and actions, and they share great chemistry and create memorable scenes together.

The Catchy Songs

The film has a catchy, melodious soundtrack of six songs composed by Pritam and Tanishk Bagchi. The songs are sung by popular singers such as Arijit Singh, Shreya Ghoshal, Jubin Nautiyal, Neha Kakkar and others, and remixed by DJ Chetas, Lijo George and others. They are a mix of romantic, dance and horror numbers that suit the film's mood and theme.

The film's title track, "Bhool Bhulaiyaa 2", is a remix of the original Bhool Bhulaiyaa song composed by Pritam and sung by Neeraj Shridhar. The new version is sung by Jubin Nautiyal and Tulsi Kumar, with new lyrics by Tanishk Bagchi. It is a vigorous, energetic number featuring Kartik Aaryan and Kiara Advani dancing on a huge set with many dancers.

Another popular song from the film is "Ami Je Tomar", also a remix of the original Bhool Bhulaiyaa track composed by Pritam and sung by Shreya Ghoshal and K.K. The new version is sung by Arijit Singh and Shreya Ghoshal, with new lyrics by Tanishk Bagchi. It is a romantic, haunting number featuring Tabu performing a classical dance in a traditional costume.

The Stunning Locations

The film has stunning cinematography that shows off the beauty and mystery of Rajasthan and other places. It was shot in several locations, including Jaipur, Jaisalmer, Udaipur, Lucknow, Mumbai and London, capturing the culture, architecture and landscape of these places with vivid colours, angles and lighting. It also uses special effects and sets to create a realistic, eerie atmosphere for the horror scenes.

The Thrilling Climax

The film has a thrilling climax that will keep you on the edge of your seat. It involves a final showdown between Ruhaan and Manjulika that takes place in the mansion. Ruhaan has to use his wit, courage and gadgets to fight Manjulika's supernatural powers and save Reet from her clutches. He also has to face his father, Dr. Aditya Shrivastav, who arrives on the scene to help him.

The climax has many twists and turns that will surprise you and make you gasp. It also reveals some shocking secrets about Ruhaan's past and Manjulika's motive. It has emotional moments that will touch your heart and make you cry, action-packed sequences that will make you cheer and applaud, and funny moments that will make you laugh and ease the tension.

How to Download the Bhool Bhulaiyaa 2 Ringtone?

If you like the songs of Bhool Bhulaiyaa 2 and want to set them as your phone's ringtone, you can follow these simple steps:

1. Go to -
2. Click the download button next to the song's name. You will be redirected to another page where you can preview the ringtone and choose the format that suits your device. You can choose between MP3, M4R, OGG or WAV formats.
3. -
4. Click the download button again and save the ringtone file to your device. You can also scan the QR code on the page to download the ringtone directly to your phone.
5. -

Congratulations! You have successfully downloaded the Bhool Bhulaiyaa 2 ringtone to your phone. Now you can enjoy the film's catchy tunes every time your phone rings.

Conclusion

Bhool Bhulaiyaa 2 is a horror comedy that will make you laugh and scream with its hilarious and frightening scenes. The film has a stellar cast, catchy songs, stunning locations and a thrilling climax that will keep you entertained until the end. It also differs from Bhool Bhulaiyaa in many ways and has its own originality and surprises. It is a perfect choice for anyone who loves the comedy and horror genres and wants a fun, exciting time at the movies.

If you are interested in watching Bhool Bhulaiyaa 2, you can book your tickets online or visit your nearest theatre. You can also download ringtones of the film's songs to your phone and enjoy them anytime, and follow the film's official social media pages and website for more updates and news.

We hope you enjoyed reading this article and learned more about Bhool Bhulaiyaa 2. If you have any questions or comments, feel free to leave them in the comments section below. Thank you for your time and attention.

Frequently Asked Questions

Q: When was Bhool Bhulaiyaa 2 released?
A: Bhool Bhulaiyaa 2 was released on 20 May 2022 in India and other countries.
Q: Who are the main actors in Bhool Bhulaiyaa 2?
A: The main actors in Bhool Bhulaiyaa 2 are Tabu, Kartik Aaryan and Kiara Advani.
Q: Is Bhool Bhulaiyaa 2 a remake of any other film?
A: No, Bhool Bhulaiyaa 2 is not a remake of any other film but an original story with new characters.
Q: What is the genre of Bhool Bhulaiyaa 2?
Q: How can I download the Bhool Bhulaiyaa 2 ringtone to my phone?
A: You can download the Bhool Bhulaiyaa 2 ringtone to your phone by following these steps:
1. Go to https://bltlly.com/2v6JW8



      -

      Si respondiste sí a cualquiera de estas preguntas, entonces es posible que desee intentar descargar Facebook APK Android 4. Esta es una versión modificada de la aplicación original de Facebook que es compatible con los dispositivos Android que se ejecutan en la versión 4.0.3 o superior. En este artículo, vamos a explicar lo que es Facebook APK Android 4, cómo descargarlo, cómo actualizarlo, y cómo solucionarlo. Vamos a empezar!

      -

      ¿Qué es Facebook APK Android 4?

      -

      Un APK (Android Package Kit) es un formato de archivo que contiene todos los componentes de una aplicación Android, como el código, los recursos, los activos y el manifiesto. Puedes instalar un archivo APK en tu dispositivo sin usar Google Play Store, que es la fuente oficial para las aplicaciones de Android.

      -

      Facebook APK Android 4 es una versión no oficial de la aplicación de Facebook que se ha modificado para funcionar en dispositivos Android más antiguos. Tiene algunas ventajas y desventajas en comparación con la aplicación oficial, que vamos a discutir a continuación.

      -

      Los beneficios de usar Facebook APK Android 4

      -
      • It is compatible with Android devices running version 4.0.3 or higher, which means you can use it on devices that do not support the official app.
      • It uses less data than the official app, which means you can save some money on your mobile data plan.
      • It has some features that are not available in the official app, such as downloading videos, customizing themes, and hiding your online status.

      The drawbacks of using Facebook APK Android 4

      -
      • It is not authorized by Facebook, which means it may violate Facebook's terms of service and privacy policy. You risk losing your account or exposing your personal information if you use it.
      • It may not be safe, as it can contain malware, viruses, or spyware that can damage your device or steal your data. You should only download it from trusted sources and scan it with an antivirus before installing it.
      • It may not be stable or reliable, as it can crash, freeze, or malfunction at any time. You may also run into bugs or glitches that affect your user experience.
      • It may not be updated regularly, which means you may miss out on new features or improvements that are added to the official app.

      How to download Facebook APK Android 4

      -

      If you want to try Facebook APK Android 4 on your device, you will need to follow these steps:

      -

      Step 1: Enable unknown sources on your device

      -

      By default, Android devices do not allow apps to be installed from sources other than the Google Play Store. This is a security measure that prevents the installation of malicious or harmful apps. However, if you want to install Facebook APK Android 4, you will need to enable the option to install apps from unknown sources on your device. Here is how to do it:

      -
      • Go to your device settings and tap Security or Privacy.
      • Find the option called Unknown sources or Install unknown apps and toggle it on.

      You have now enabled unknown sources on your device and can proceed to the next step.

      -

      -

      Step 2: Find a reliable source for the APK file

      -

      The next step is to find a reliable source for the Facebook APK Android 4 file. You should be careful when choosing a source, as some websites offer fake or infected files that can damage your device or data. Here are some tips to help you find a reliable source:

      -
      • Look for websites with a good reputation and positive reviews from other users. You can also check the ratings and comments for the APK file on the website.
      • Avoid websites full of pop-ups, ads, or redirects that can take you to unwanted or harmful pages. You can also use an ad blocker, or a browser with a built-in ad blocker, to avoid these annoyances.
      • Check the details of the APK file, such as the file size, version, date, and developer. Make sure they match the official Facebook app or the latest version of Facebook APK Android 4.
      • Scan the APK file with an antivirus or a malware scanner before installing it. You can also use online tools such as VirusTotal to check the file for threats; a small checksum sketch follows this list.

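      Beyond an antivirus scan, one lightweight extra check is to hash the downloaded file and compare it against a checksum published by the source you trust. This is only a sketch: the file name and the expected hash below are placeholders, not real values published by Facebook or any download site:

```python
import hashlib

apk_path = "facebook.apk"  # placeholder: the file you downloaded
expected_sha256 = "<checksum published by the download source>"  # placeholder

sha256 = hashlib.sha256()
with open(apk_path, "rb") as f:
    # Hash in 1 MiB chunks so a large APK never has to fit in memory at once.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

digest = sha256.hexdigest()
print("computed:", digest)
print("match" if digest == expected_sha256 else "MISMATCH - do not install")
```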
      One of the sources we recommend for downloading Facebook APK Android 4 is APKPure, a reputable website that offers safe and verified APK files. You can also use other sources that you trust, but make sure to follow the tips above.

      -

      Step 3: Download and install the APK file

      -

      The final step is to download and install the Facebook APK Android 4 file on your device. Here is how to do it:

      -
      • Open your browser and go to the website where you found the APK file. Tap the download button or link and wait for the file to download.
      • Once the download is complete, go to your device's file manager and locate the APK file. Tap it to open it.
      • A message will appear asking you to install the app. Tap Install and wait for the installation process to finish.
      • Once the installation is done, you can open the app and log in with your Facebook account. You can also create a new account if you do not have one.

      Congratulations! You have successfully downloaded and installed Facebook APK Android 4 on your device. You can now enjoy using the app and its features.

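      If you would rather install from a computer, the same APK can be pushed to a connected phone over adb instead of the on-device file manager. This is a hedged sketch, not a step from the original guide: it assumes the Android SDK platform tools (adb) are installed and on your PATH, USB debugging is enabled on the phone, and "facebook.apk" is a placeholder for the file you downloaded:

```python
import subprocess

# -r reinstalls over an existing copy of the app, keeping its data.
subprocess.run(["adb", "install", "-r", "facebook.apk"], check=True)
```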
      -

      How to update Facebook APK Android 4

      -

      If you want to keep using Facebook APK Android 4, you will need to update it regularly to get the latest features and improvements. There are two ways to update Facebook APK Android 4: use the built-in updater, or download the latest APK file from the official website.

      -

      Option 1: Use the built-in updater

      -

      Some versions of Facebook APK Android 4 have a built-in updater that lets you check for updates and download them directly from the app. Here is how to use it:

      -
      • Open Facebook APK Android 4 and tap the menu icon (three horizontal lines) in the top right corner of the screen.
      • Scroll down and tap Settings & privacy.
      • Tap App updates.
      • If an update is available, tap Update now and wait for the update to download and install.
      • If no update is available, tap Check for updates and wait for the app to scan for any new version.

      This option is convenient and easy, but it may not work for every version of Facebook APK Android 4. If you do not see the App updates option in your Settings & privacy menu, you will need to use option 2 instead.

      -

      Option 2: Download the latest APK file from the official website

      -

      Another way to update Facebook APK Android 4 is to download the latest APK file from the official Facebook website. Here is how to do it:

      -
      • Open your browser and go to https://www.facebook.com/android, the official Facebook website for Android devices.
      • A message will appear asking you to download the APK file. Tap OK or Download and wait for the file to download.
      • Once the download is complete, go to your device's file manager and locate the APK file. Tap it to open it.
      • A message will appear asking you to install the app. Tap Install and wait for the installation process to finish.
      • Once the installation is done, you can open the app and log in with your Facebook account. You can also create a new account if you do not have one.

      This option is reliable and safe, since you are downloading the APK file from the official source. However, it can take longer and use more data than option 1.

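      Whichever option you use, you can confirm which version actually ended up installed by asking the package manager over adb. This is only a sketch under the same assumptions as before (adb on PATH, USB debugging enabled); com.facebook.katana is the package name the Facebook app normally uses:

```python
import subprocess

out = subprocess.run(
    ["adb", "shell", "dumpsys", "package", "com.facebook.katana"],
    capture_output=True, text=True, check=True,
).stdout

# dumpsys prints a lot; keep only the version lines.
for line in out.splitlines():
    if "versionName" in line or "versionCode" in line:
        print(line.strip())
```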
      -

      How to troubleshoot Facebook APK Android 4

      -

      Sometimes you may run into problems or issues when using Facebook APK Android 4. These can include errors, crashes, freezes, or poor performance. Do not worry, as there are ways to troubleshoot Facebook APK Android 4 and fix these problems. Here are some common problems and solutions:

      -

      Common problems and solutions

      • Problem: The app will not install or update. Solution: Make sure you have enough storage space on your device and a stable internet connection. Also check that you have enabled unknown sources on your device and that you downloaded the correct APK file for your device.
      • Problem: The app will not open or load. Solution: Clear the app's cache and data by going to your device settings, Apps, Facebook, Storage, and tapping Clear cache and Clear data. Also check that you have the latest version of the app and that your device meets the minimum requirements to run it.
      • Problem: The app crashes or freezes.
      • Problem: The app is slow or laggy. Solution: Reduce the app's data usage by going to your device settings, Data usage, Facebook, and turning off background data. Also turn off any unnecessary notifications or features that may slow the app down.
      • Problem: The app shows incorrect or outdated information. Solution: Refresh the app by swiping down on the screen or tapping the refresh icon in the top right corner of the screen. Also check that your device's date and time are correct and that you have a good internet connection.

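      The cache-and-data reset above can also be done from a computer when the app is too broken to reach the Settings screen. This is a hedged sketch, not part of the original guide; it assumes adb is set up as described earlier, and note that pm clear wipes the app's local data, so you will have to log in again:

```python
import subprocess

# Equivalent to Settings > Apps > Facebook > Storage > Clear data.
subprocess.run(
    ["adb", "shell", "pm", "clear", "com.facebook.katana"],
    check=True,
)
```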
      Tips and tricks to optimize your experience

      -
      • Use Facebook Lite instead of Facebook APK Android 4 if you have a low-end device or a slow internet connection. Facebook Lite is a lighter, faster version of Facebook that uses less data and fewer resources.
      • Use Messenger Lite instead of Messenger if you want to chat with your Facebook friends without using the main app. Messenger Lite is a simpler, faster version of Messenger that also uses less data and fewer resources.
      • Use Facebook Web instead of Facebook APK Android 4 if you want to access Facebook from your browser without installing any app. Facebook Web is a mobile version of Facebook that works in any browser.
      • Use dark mode instead of light mode if you want to save battery life and reduce eye strain. Dark mode is a feature that changes the app's background color from white to black. You can enable dark mode by going to your device settings, Display, Dark mode, and turning it on.
      • Use shortcuts instead of menus if you want to reach your favorite features faster. Shortcuts are icons that appear at the bottom of the screen and let you switch quickly between News Feed, Groups, Watch, Marketplace, and Notifications. You can customize your shortcuts by tapping and holding them until they move, then dragging them to your preferred position.

      In this article, we have explained what Facebook APK Android 4 is, how to download it, how to update it, and how to troubleshoot it. We have also shared some tips and tricks to optimize your experience with the app. We hope you found this article helpful and informative.

      -

      Facebook APK Android 4 is a useful alternative to the official Facebook app, especially if you have an older Android device or want access to some extra features. However, you should also be aware of the risks and challenges that come with using an unofficial app. Always download it from trusted sources, scan it for threats, and update it regularly. You should also follow the troubleshooting steps if you run into any problems with the app.

      -

      If you have any questions or comments about Facebook APK Android 4, feel free to leave a comment below. We would love to hear from you!

      -

      Frequently asked questions

      -

      Here are some frequently asked questions about Facebook APK Android 4:

      -

      Is Facebook APK Android 4 legal?

      -

      Facebook APK Android 4 is not illegal, but it is not authorized by Facebook either. It may violate Facebook's terms of service and privacy policy, which means you risk losing your account or exposing your personal information if you use it. You should use it at your own discretion and responsibility.

      -

      Is Facebook APK Android 4 safe?

      -

      Facebook APK Android 4 may not be safe, as it can contain malware, viruses, or spyware that can damage your device or steal your data. You should only download it from trusted sources and scan it with an antivirus before installing it. You should also avoid granting the app unnecessary permissions or access.

      -

      Is Facebook APK Android 4 free?

      - -

      How do I uninstall Facebook APK Android 4?

      -

      If you want to uninstall Facebook APK Android 4 from your device, you can follow these steps:

      -
      • Go to your device settings and tap Apps or Applications.
      • Find and tap Facebook.
      • Tap Uninstall and confirm your action.

      You can also delete the APK file from your device's file manager if you no longer need it.

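      The same uninstall can be done from a computer over adb, which can be handy if the app no longer opens at all. As before, this is only a sketch that assumes adb is set up and that the app was installed under its usual package name, com.facebook.katana:

```python
import subprocess

# Removes the app and its data from the connected device.
subprocess.run(["adb", "uninstall", "com.facebook.katana"], check=True)
```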
      -

      Can I use Facebook APK Android 4 on other devices?

      -

      Facebook APK Android 4 is designed for Android devices running version 4.0.3 or higher. You may be able to use it on other devices, such as iOS or Windows, but you would need an emulator or a converter to do so. However, this may not work well, or at all, and it can cause compatibility or performance problems. We recommend using the official Facebook app or the web version on other devices.

      -
      -
      \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/tz/win.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/tz/win.py deleted file mode 100644 index cde07ba792c40903f0c334839140173b39fd8124..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/tz/win.py +++ /dev/null @@ -1,370 +0,0 @@ -# -*- coding: utf-8 -*- -""" -This module provides an interface to the native time zone data on Windows, -including :py:class:`datetime.tzinfo` implementations. - -Attempting to import this module on a non-Windows platform will raise an -:py:obj:`ImportError`. -""" -# This code was originally contributed by Jeffrey Harris. -import datetime -import struct - -from six.moves import winreg -from six import text_type - -try: - import ctypes - from ctypes import wintypes -except ValueError: - # ValueError is raised on non-Windows systems for some horrible reason. - raise ImportError("Running tzwin on non-Windows system") - -from ._common import tzrangebase - -__all__ = ["tzwin", "tzwinlocal", "tzres"] - -ONEWEEK = datetime.timedelta(7) - -TZKEYNAMENT = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Time Zones" -TZKEYNAME9X = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Time Zones" -TZLOCALKEYNAME = r"SYSTEM\CurrentControlSet\Control\TimeZoneInformation" - - -def _settzkeyname(): - handle = winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE) - try: - winreg.OpenKey(handle, TZKEYNAMENT).Close() - TZKEYNAME = TZKEYNAMENT - except WindowsError: - TZKEYNAME = TZKEYNAME9X - handle.Close() - return TZKEYNAME - - -TZKEYNAME = _settzkeyname() - - -class tzres(object): - """ - Class for accessing ``tzres.dll``, which contains timezone name related - resources. - - .. versionadded:: 2.5.0 - """ - p_wchar = ctypes.POINTER(wintypes.WCHAR) # Pointer to a wide char - - def __init__(self, tzres_loc='tzres.dll'): - # Load the user32 DLL so we can load strings from tzres - user32 = ctypes.WinDLL('user32') - - # Specify the LoadStringW function - user32.LoadStringW.argtypes = (wintypes.HINSTANCE, - wintypes.UINT, - wintypes.LPWSTR, - ctypes.c_int) - - self.LoadStringW = user32.LoadStringW - self._tzres = ctypes.WinDLL(tzres_loc) - self.tzres_loc = tzres_loc - - def load_name(self, offset): - """ - Load a timezone name from a DLL offset (integer). - - >>> from dateutil.tzwin import tzres - >>> tzr = tzres() - >>> print(tzr.load_name(112)) - 'Eastern Standard Time' - - :param offset: - A positive integer value referring to a string from the tzres dll. - - .. note:: - - Offsets found in the registry are generally of the form - ``@tzres.dll,-114``. The offset in this case is 114, not -114. - - """ - resource = self.p_wchar() - lpBuffer = ctypes.cast(ctypes.byref(resource), wintypes.LPWSTR) - nchar = self.LoadStringW(self._tzres._handle, offset, lpBuffer, 0) - return resource[:nchar] - - def name_from_string(self, tzname_str): - """ - Parse strings as returned from the Windows registry into the time zone - name as defined in the registry. - - >>> from dateutil.tzwin import tzres - >>> tzr = tzres() - >>> print(tzr.name_from_string('@tzres.dll,-251')) - 'Dateline Daylight Time' - >>> print(tzr.name_from_string('Eastern Standard Time')) - 'Eastern Standard Time' - - :param tzname_str: - A timezone name string as returned from a Windows registry key. - - :return: - Returns the localized timezone string from tzres.dll if the string - is of the form `@tzres.dll,-offset`, else returns the input string. 
- """ - if not tzname_str.startswith('@'): - return tzname_str - - name_splt = tzname_str.split(',-') - try: - offset = int(name_splt[1]) - except: - raise ValueError("Malformed timezone string.") - - return self.load_name(offset) - - -class tzwinbase(tzrangebase): - """tzinfo class based on win32's timezones available in the registry.""" - def __init__(self): - raise NotImplementedError('tzwinbase is an abstract base class') - - def __eq__(self, other): - # Compare on all relevant dimensions, including name. - if not isinstance(other, tzwinbase): - return NotImplemented - - return (self._std_offset == other._std_offset and - self._dst_offset == other._dst_offset and - self._stddayofweek == other._stddayofweek and - self._dstdayofweek == other._dstdayofweek and - self._stdweeknumber == other._stdweeknumber and - self._dstweeknumber == other._dstweeknumber and - self._stdhour == other._stdhour and - self._dsthour == other._dsthour and - self._stdminute == other._stdminute and - self._dstminute == other._dstminute and - self._std_abbr == other._std_abbr and - self._dst_abbr == other._dst_abbr) - - @staticmethod - def list(): - """Return a list of all time zones known to the system.""" - with winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE) as handle: - with winreg.OpenKey(handle, TZKEYNAME) as tzkey: - result = [winreg.EnumKey(tzkey, i) - for i in range(winreg.QueryInfoKey(tzkey)[0])] - return result - - def display(self): - """ - Return the display name of the time zone. - """ - return self._display - - def transitions(self, year): - """ - For a given year, get the DST on and off transition times, expressed - always on the standard time side. For zones with no transitions, this - function returns ``None``. - - :param year: - The year whose transitions you would like to query. - - :return: - Returns a :class:`tuple` of :class:`datetime.datetime` objects, - ``(dston, dstoff)`` for zones with an annual DST transition, or - ``None`` for fixed offset zones. - """ - - if not self.hasdst: - return None - - dston = picknthweekday(year, self._dstmonth, self._dstdayofweek, - self._dsthour, self._dstminute, - self._dstweeknumber) - - dstoff = picknthweekday(year, self._stdmonth, self._stddayofweek, - self._stdhour, self._stdminute, - self._stdweeknumber) - - # Ambiguous dates default to the STD side - dstoff -= self._dst_base_offset - - return dston, dstoff - - def _get_hasdst(self): - return self._dstmonth != 0 - - @property - def _dst_base_offset(self): - return self._dst_base_offset_ - - -class tzwin(tzwinbase): - """ - Time zone object created from the zone info in the Windows registry - - These are similar to :py:class:`dateutil.tz.tzrange` objects in that - the time zone data is provided in the format of a single offset rule - for either 0 or 2 time zone transitions per year. - - :param: name - The name of a Windows time zone key, e.g. "Eastern Standard Time". - The full list of keys can be retrieved with :func:`tzwin.list`. 
- """ - - def __init__(self, name): - self._name = name - - with winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE) as handle: - tzkeyname = text_type("{kn}\\{name}").format(kn=TZKEYNAME, name=name) - with winreg.OpenKey(handle, tzkeyname) as tzkey: - keydict = valuestodict(tzkey) - - self._std_abbr = keydict["Std"] - self._dst_abbr = keydict["Dlt"] - - self._display = keydict["Display"] - - # See http://ww_winreg.jsiinc.com/SUBA/tip0300/rh0398.htm - tup = struct.unpack("=3l16h", keydict["TZI"]) - stdoffset = -tup[0]-tup[1] # Bias + StandardBias * -1 - dstoffset = stdoffset-tup[2] # + DaylightBias * -1 - self._std_offset = datetime.timedelta(minutes=stdoffset) - self._dst_offset = datetime.timedelta(minutes=dstoffset) - - # for the meaning see the win32 TIME_ZONE_INFORMATION structure docs - # http://msdn.microsoft.com/en-us/library/windows/desktop/ms725481(v=vs.85).aspx - (self._stdmonth, - self._stddayofweek, # Sunday = 0 - self._stdweeknumber, # Last = 5 - self._stdhour, - self._stdminute) = tup[4:9] - - (self._dstmonth, - self._dstdayofweek, # Sunday = 0 - self._dstweeknumber, # Last = 5 - self._dsthour, - self._dstminute) = tup[12:17] - - self._dst_base_offset_ = self._dst_offset - self._std_offset - self.hasdst = self._get_hasdst() - - def __repr__(self): - return "tzwin(%s)" % repr(self._name) - - def __reduce__(self): - return (self.__class__, (self._name,)) - - -class tzwinlocal(tzwinbase): - """ - Class representing the local time zone information in the Windows registry - - While :class:`dateutil.tz.tzlocal` makes system calls (via the :mod:`time` - module) to retrieve time zone information, ``tzwinlocal`` retrieves the - rules directly from the Windows registry and creates an object like - :class:`dateutil.tz.tzwin`. - - Because Windows does not have an equivalent of :func:`time.tzset`, on - Windows, :class:`dateutil.tz.tzlocal` instances will always reflect the - time zone settings *at the time that the process was started*, meaning - changes to the machine's time zone settings during the run of a program - on Windows will **not** be reflected by :class:`dateutil.tz.tzlocal`. - Because ``tzwinlocal`` reads the registry directly, it is unaffected by - this issue. - """ - def __init__(self): - with winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE) as handle: - with winreg.OpenKey(handle, TZLOCALKEYNAME) as tzlocalkey: - keydict = valuestodict(tzlocalkey) - - self._std_abbr = keydict["StandardName"] - self._dst_abbr = keydict["DaylightName"] - - try: - tzkeyname = text_type('{kn}\\{sn}').format(kn=TZKEYNAME, - sn=self._std_abbr) - with winreg.OpenKey(handle, tzkeyname) as tzkey: - _keydict = valuestodict(tzkey) - self._display = _keydict["Display"] - except OSError: - self._display = None - - stdoffset = -keydict["Bias"]-keydict["StandardBias"] - dstoffset = stdoffset-keydict["DaylightBias"] - - self._std_offset = datetime.timedelta(minutes=stdoffset) - self._dst_offset = datetime.timedelta(minutes=dstoffset) - - # For reasons unclear, in this particular key, the day of week has been - # moved to the END of the SYSTEMTIME structure. 
- tup = struct.unpack("=8h", keydict["StandardStart"]) - - (self._stdmonth, - self._stdweeknumber, # Last = 5 - self._stdhour, - self._stdminute) = tup[1:5] - - self._stddayofweek = tup[7] - - tup = struct.unpack("=8h", keydict["DaylightStart"]) - - (self._dstmonth, - self._dstweeknumber, # Last = 5 - self._dsthour, - self._dstminute) = tup[1:5] - - self._dstdayofweek = tup[7] - - self._dst_base_offset_ = self._dst_offset - self._std_offset - self.hasdst = self._get_hasdst() - - def __repr__(self): - return "tzwinlocal()" - - def __str__(self): - # str will return the standard name, not the daylight name. - return "tzwinlocal(%s)" % repr(self._std_abbr) - - def __reduce__(self): - return (self.__class__, ()) - - -def picknthweekday(year, month, dayofweek, hour, minute, whichweek): - """ dayofweek == 0 means Sunday, whichweek 5 means last instance """ - first = datetime.datetime(year, month, 1, hour, minute) - - # This will work if dayofweek is ISO weekday (1-7) or Microsoft-style (0-6), - # Because 7 % 7 = 0 - weekdayone = first.replace(day=((dayofweek - first.isoweekday()) % 7) + 1) - wd = weekdayone + ((whichweek - 1) * ONEWEEK) - if (wd.month != month): - wd -= ONEWEEK - - return wd - - -def valuestodict(key): - """Convert a registry key's values to a dictionary.""" - dout = {} - size = winreg.QueryInfoKey(key)[1] - tz_res = None - - for i in range(size): - key_name, value, dtype = winreg.EnumValue(key, i) - if dtype == winreg.REG_DWORD or dtype == winreg.REG_DWORD_LITTLE_ENDIAN: - # If it's a DWORD (32-bit integer), it's stored as unsigned - convert - # that to a proper signed integer - if value & (1 << 31): - value = value - (1 << 32) - elif dtype == winreg.REG_SZ: - # If it's a reference to the tzres DLL, load the actual string - if value.startswith('@tzres'): - tz_res = tz_res or tzres() - value = tz_res.name_from_string(value) - - value = value.rstrip('\x00') # Remove trailing nulls - - dout[key_name] = value - - return dout diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/idna/intranges.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/idna/intranges.py deleted file mode 100644 index 6a43b0475347cb50d0d65ada1000a82eeca9e882..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/idna/intranges.py +++ /dev/null @@ -1,54 +0,0 @@ -""" -Given a list of integers, made up of (hopefully) a small number of long runs -of consecutive integers, compute a representation of the form -((start1, end1), (start2, end2) ...). Then answer the question "was x present -in the original list?" in time O(log(# runs)). -""" - -import bisect -from typing import List, Tuple - -def intranges_from_list(list_: List[int]) -> Tuple[int, ...]: - """Represent a list of integers as a sequence of ranges: - ((start_0, end_0), (start_1, end_1), ...), such that the original - integers are exactly those x such that start_i <= x < end_i for some i. - - Ranges are encoded as single integers (start << 32 | end), not as tuples. 
- """ - - sorted_list = sorted(list_) - ranges = [] - last_write = -1 - for i in range(len(sorted_list)): - if i+1 < len(sorted_list): - if sorted_list[i] == sorted_list[i+1]-1: - continue - current_range = sorted_list[last_write+1:i+1] - ranges.append(_encode_range(current_range[0], current_range[-1] + 1)) - last_write = i - - return tuple(ranges) - -def _encode_range(start: int, end: int) -> int: - return (start << 32) | end - -def _decode_range(r: int) -> Tuple[int, int]: - return (r >> 32), (r & ((1 << 32) - 1)) - - -def intranges_contain(int_: int, ranges: Tuple[int, ...]) -> bool: - """Determine if `int_` falls into one of the ranges in `ranges`.""" - tuple_ = _encode_range(int_, 0) - pos = bisect.bisect_left(ranges, tuple_) - # we could be immediately ahead of a tuple (start, end) - # with start < int_ <= end - if pos > 0: - left, right = _decode_range(ranges[pos-1]) - if left <= int_ < right: - return True - # or we could be immediately behind a tuple (int_, end) - if pos < len(ranges): - left, _ = _decode_range(ranges[pos]) - if left == int_: - return True - return False diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/core/util.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/core/util.h deleted file mode 100644 index ea4ed6400b1d1070f83994db7c57636f14024d03..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/core/util.h +++ /dev/null @@ -1,773 +0,0 @@ -/****************************************************************************** - * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * * Neither the name of the NVIDIA CORPORATION nor the - * names of its contributors may be used to endorse or promote products - * derived from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- * - ******************************************************************************/ -#pragma once - -#include -#include -#include -#include -#include -#include -#include -#include -#include - -namespace thrust -{ - -namespace cuda_cub { -namespace core { - -#ifdef __NVCOMPILER_CUDA__ -# if (__NVCOMPILER_CUDA_ARCH__ >= 600) -# define THRUST_TUNING_ARCH sm60 -# elif (__NVCOMPILER_CUDA_ARCH__ >= 520) -# define THRUST_TUNING_ARCH sm52 -# elif (__NVCOMPILER_CUDA_ARCH__ >= 350) -# define THRUST_TUNING_ARCH sm35 -# else -# define THRUST_TUNING_ARCH sm30 -# endif -#else -# if (__CUDA_ARCH__ >= 600) -# define THRUST_TUNING_ARCH sm60 -# elif (__CUDA_ARCH__ >= 520) -# define THRUST_TUNING_ARCH sm52 -# elif (__CUDA_ARCH__ >= 350) -# define THRUST_TUNING_ARCH sm35 -# elif (__CUDA_ARCH__ >= 300) -# define THRUST_TUNING_ARCH sm30 -# elif !defined (__CUDA_ARCH__) -# define THRUST_TUNING_ARCH sm30 -# endif -#endif - - // Typelist - a container of types, supports up to 10 types - // -------------------------------------------------------------------------- - - class _; - template - struct typelist; - - // ------------------------------------- - - // supported SM arch - // --------------------- - struct sm30 { enum { ver = 300, warpSize = 32 }; }; - struct sm35 { enum { ver = 350, warpSize = 32 }; }; - struct sm52 { enum { ver = 520, warpSize = 32 }; }; - struct sm60 { enum { ver = 600, warpSize = 32 }; }; - - // list of sm, checked from left to right order - // the rightmost is the lowest sm arch supported - // -------------------------------------------- - typedef typelist sm_list; - - // lowest supported SM arch - // -------------------------------------------------------------------------- - - template - struct lowest_supported_sm_arch_impl; - - template - struct lowest_supported_sm_arch_impl > - : lowest_supported_sm_arch_impl<_0, typelist< _1, _2, _3, _4, _5, _6, _7, _8, _9> > {}; - template - struct lowest_supported_sm_arch_impl > - { - typedef SM type; - }; - - typedef typename lowest_supported_sm_arch_impl<_,sm_list>::type lowest_supported_sm_arch; - - // metafunction to match next viable PtxPlan specialization - // -------------------------------------------------------------------------- - - __THRUST_DEFINE_HAS_NESTED_TYPE(has_tuning_t, tuning) - __THRUST_DEFINE_HAS_NESTED_TYPE(has_type_t, type) - - template