diff --git a/spaces/101-5/gpt4free/g4f/Provider/__init__.py b/spaces/101-5/gpt4free/g4f/Provider/__init__.py
deleted file mode 100644
index 3a86291d5d259697f5ed0a4e782f8a5d6193ed78..0000000000000000000000000000000000000000
--- a/spaces/101-5/gpt4free/g4f/Provider/__init__.py
+++ /dev/null
@@ -1,24 +0,0 @@
-from . import Provider
-from .Providers import (
-    Ails,
-    You,
-    Bing,
-    Yqcloud,
-    Theb,
-    Aichat,
-    Bard,
-    Vercel,
-    Forefront,
-    Lockchat,
-    Liaobots,
-    H2o,
-    ChatgptLogin,
-    DeepAi,
-    GetGpt,
-    AItianhu,
-    EasyChat,
-    Acytoo,
-    DFEHub,
-)
-
-Palm = Bard
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Delphi 2015.3 Keygen-activation 2015 Release 2 Cdp Ds150e Cdp Cars Trucks Vci Rarl Delphi Autocom Cars and Trucks 2015 R3 Keygen (fast and direct download) - MHH AUTO - Page 1[2].md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Delphi 2015.3 Keygen-activation 2015 Release 2 Cdp Ds150e Cdp Cars Trucks Vci Rarl Delphi Autocom Cars and Trucks 2015 R3 Keygen (fast and direct download) - MHH AUTO - Page 1[2].md
deleted file mode 100644
index bd56b42e38e83856d4e00ad9142d8c53ad6d472c..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Delphi 2015.3 Keygen-activation 2015 Release 2 Cdp Ds150e Cdp Cars Trucks Vci Rarl Delphi Autocom Cars and Trucks 2015 R3 Keygen (fast and direct download) - MHH AUTO - Page 1[2].md
+++ /dev/null
@@ -1,121 +0,0 @@
-
-

What is Delphi 2015.3 Keygen-activation 2015 Release 2 Cdp Ds150e Cdp Cars Trucks Vci Rarl?

-

If you are looking for a reliable and versatile diagnostic software for cars and trucks, you might have come across Delphi 2015.3 Keygen-activation 2015 Release 2 Cdp Ds150e Cdp Cars Trucks Vci Rarl. But what is it exactly and how does it work?

-

Delphi 2015.3 Keygen-activation 2015 Release 2 Cdp Ds150e Cdp Cars Trucks Vci Rarl


DOWNLOADhttps://byltly.com/2uKw0F



-

Delphi 2015.3 Keygen-activation 2015 Release 2 Cdp Ds150e Cdp Cars Trucks Vci Rarl is a software package that allows you to perform various diagnostic tasks on different vehicles using a compatible device such as a laptop or a tablet. It is based on Autocom / Delphi software, which is one of the most popular and widely used diagnostic tools in the automotive industry.

-

Some of the main features of Delphi 2015.3 Keygen-activation 2015 Release 2 Cdp Ds150e Cdp Cars Trucks Vci Rarl are:

- -

In this article, we will show you how to download and install Delphi 2015.3 Keygen-activation 2015 Release 2 Cdp Ds150e Cdp Cars Trucks Vci Rarl on your device, how to use it for diagnostic purposes, what are its benefits and drawbacks, and what are some tips and tricks for using it effectively.

-

How to download and install Delphi 2015.3 Keygen-activation 2015 Release 2 Cdp Ds150e Cdp Cars Trucks Vci Rarl?

-

To download and install Delphi 2015.3 Keygen-activation 2015 Release 2 Cdp Ds150e Cdp Cars Trucks Vci Rarl on your device, you will need to follow these steps:

-
    -
1. Download the software package from this link: http://www.mailbox.com.pt/obd/files/Delphi%202015%20R3%202015.3.rar. This is a fast and direct download link that does not require any password or registration.
2. Extract the compressed file using a program such as WinRAR or WinZip.
3. Run the setup.exe file and follow the instructions on the screen.
4. When prompted, choose the installation path for the software. The default location is C:\Program Files (x86)\Delphi Diagnostics\DS150E (New VCI).
5. When the installation is complete, do not run the software yet.
6. Copy the file Main.exe from the folder Activation (DS150E New VCI) to the installation folder (C:\Program Files (x86)\Delphi Diagnostics\DS150E (New VCI)). Replace the existing file if asked.
7. Run the file Main.exe from the installation folder.
8. You will see a window with a serial number (for example, DS150E). Copy this serial number.
9. Open the file FileActivation.xml from the folder Activation (DS150E New VCI) with a text editor such as Notepad.
10. Paste the serial number that you copied in step 8 into the line that says .
11. Save and close the file FileActivation.xml.
12. Run the file Main.exe from the installation folder again.
13. You will see a window with an activation request code (for example, A4DB). Copy this code.
14. Open the file FileActivation.xml from the folder Activation (DS150E New VCI) with a text editor such as Notepad again.
15. Paste the activation request code that you copied in step 13 into the line that says .
16. Save and close the file FileActivation.xml again.
17. Run the file Main.exe from the installation folder once more.
18. You will see a window with an activation button. Click on it.
19. You will be asked to select the file FileActivation.xml from the folder Activation (DS150E New VCI). Do so and click Open.
20. You will see a message saying that the activation was successful. Click OK.
-

Congratulations! You have successfully downloaded and installed Delphi 2015.3 Keygen-activation 2015 Release 2 Cdp Ds150e Cdp Cars Trucks Vci Rarl on your device. You can now run the software and enjoy its features.

-

How to use Delphi 2015.3 Keygen-activation 2015 Release 2 Cdp Ds150e Cdp Cars Trucks Vci Rarl for diagnostic purposes?

-

To use Delphi 2015.3 Keygen-activation 2015 Release 2 Cdp Ds150e Cdp Cars Trucks Vci Rarl for diagnostic purposes, you will need to connect your device to your vehicle using an OBD-II cable or a wireless adapter. Then, you can launch the software and select the vehicle make, model and system that you want to diagnose. You can also use the Intelligent System Scan (ISS) function to scan all the control modules on the vehicle and display the fault codes stored in each system. You can then select a specific control system to further analyse the results and perform various functions such as reading and clearing fault codes, viewing live data, performing tests and adjustments, programming keys, resetting service intervals, etc.

-

In the following sections, we will show you some examples of how to diagnose cars and trucks with Delphi 2015.3 Keygen-activation 2015 Release 2 Cdp Ds150e Cdp Cars Trucks Vci Rarl.

-


-

How to diagnose cars with Delphi 2015.3 Keygen-activation 2015 Release 2 Cdp Ds150e Cdp Cars Trucks Vci Rarl?

-

Delphi 2015.3 Keygen-activation 2015 Release 2 Cdp Ds150e Cdp Cars Trucks Vci Rarl can diagnose a wide range of car models and systems, such as engine, transmission, ABS, airbag, steering, climate control, etc. Here are some examples of common car problems and how to solve them with the software:

- -

How to diagnose trucks with Delphi 2015.3 Keygen-activation 2015 Release 2 Cdp Ds150e Cdp Cars Trucks Vci Rarl?

-

Delphi 2015.3 Keygen-activation 2015 Release 2 Cdp Ds150e Cdp Cars Trucks Vci Rarl can also diagnose a wide range of truck models and systems, such as engine, transmission, brakes, suspension, instrument cluster, etc. Here are some examples of common truck problems and how to solve them with the software:

- -

What are the benefits and drawbacks of Delphi 2015.3 Keygen-activation 2015 Release 2 Cdp Ds150e Cdp Cars Trucks Vci Rarl?

-

Delphi 2015.3 Keygen-activation 2015 Release 2 Cdp Ds150e Cdp Cars Trucks Vci Rarl is a powerful and versatile diagnostic software that has many benefits for users such as:

- -

However, Delphi 2015.3 Keygen-activation 2015 Release 2 Cdp Ds150e Cdp Cars Trucks Vci Rarl also has some drawbacks that users should be aware of such as:

- -

What are some tips and tricks for using Delphi 2015.3 Keygen-activation 2015 Release 2 Cdp Ds150e Cdp Cars Trucks Vci Rarl effectively?

-

To use Delphi 2015.3 Keygen-activation 2015 Release 2 Cdp Ds150e Cdp Cars Trucks Vci Rarl effectively, users should follow some tips and tricks such as:

- -

Conclusion

-

In conclusion, Delphi 2015.3 Keygen-activation 2015 Release 2 Cdp Ds150e Cdp Cars Trucks Vci Rarl is a reliable and versatile diagnostic software for cars and trucks that allows users to perform various diagnostic tasks on different vehicles and systems using a compatible device. It has many benefits such as supporting a wide range of vehicles and systems, providing comprehensive information and data, allowing users to perform various diagnostic tasks, having a user-friendly interface, and having a keygen-activator. However, it also has some drawbacks such as requiring a compatible device and an OBD-II cable or a wireless adapter, not supporting some newer or older vehicle models or systems, and not being able to perform some advanced or specific functions. Therefore, users should weigh the pros and cons of the software before using it and follow some tips and tricks to use it effectively.

-

FAQs

-

Here are some frequently asked questions and answers about Delphi 2015.3 Keygen-activation 2015 Release 2 Cdp Ds150e Cdp Cars Trucks Vci Rarl:

-
    -
1. Q: What are the system requirements for running the software on a PC or a laptop?
   A: The minimum system requirements are: Windows XP SP3 / Vista / 7 / 8 / 10, an Intel Core 2 Duo 1.8 GHz or equivalent processor, 2 GB RAM, 5 GB of free disk space, a USB port, and a DVD-ROM drive.
2. Q: What are the compatible devices for running the software on a tablet?
   A: The software can run on any Windows-based tablet that meets the minimum system requirements. However, the recommended device is the Delphi DS450E tablet, which is specially designed for the software and has a 12-inch touch screen, a rugged case, a built-in camera, and a long battery life.
3. Q: What are the compatible OBD-II cables or wireless adapters for connecting the device to the vehicle?
   A: The software can work with any OBD-II cable or wireless adapter that supports ISO 9141-2, ISO 14230-4 (KWP2000), ISO 15765-4 (CAN), SAE J1850 (PWM/VPW), and SAE J2534 (Pass-Thru) protocols. However, the recommended device is the Delphi DS150E VCI, which is specially designed for the software and has a Bluetooth connection, a LED indicator, and a multiplexer function.
4. Q: How can I update the software to get access to new features and functions?
   A: You can update the software by downloading the latest version from the official website of Delphi or by using the built-in update function in the software. You will need to activate the software again after updating it.
5. Q: How can I get technical support or training for using the software?
   A: You can get technical support or training by contacting Delphi customer service or by visiting their official website. You can also find useful information and tips in the help function or in the user manual of the software.
-

-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Movavi Video Editor for Free and Create Stunning Videos in Minutes.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Movavi Video Editor for Free and Create Stunning Videos in Minutes.md
deleted file mode 100644
index 8805c4e548dcbca7ff53da08231962a22ea1761c..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Movavi Video Editor for Free and Create Stunning Videos in Minutes.md
+++ /dev/null
@@ -1,19 +0,0 @@
-
-```html
-

How to Download Movavi Video Editor for Free

-

Movavi Video Editor is a powerful and easy-to-use video editing software that lets you create stunning videos in minutes. You can trim, crop, rotate, add transitions, effects, titles, music, and more to your videos. You can also export your videos in various formats or upload them directly to YouTube, Facebook, Vimeo, or other platforms.

-

how to download movavi video editor for free


DOWNLOAD 🆗 https://byltly.com/2uKzXS



-

But what if you want to try Movavi Video Editor for free before buying it? Is there a way to download Movavi Video Editor for free without compromising the quality or functionality of the software? The answer is yes! In this article, we will show you how to download Movavi Video Editor for free and use it without any limitations.

-

Step 1: Visit the Official Movavi Website

-

The first step to download Movavi Video Editor for free is to visit the official Movavi website at https://www.movavi.com/videoeditor/. Here you will find all the information about the software, its features, pricing, system requirements, and customer reviews. You will also see a big green button that says "Download for Free". Click on it to start downloading the installation file.

-

Step 2: Install Movavi Video Editor on Your Computer

-

The next step is to install Movavi Video Editor on your computer. To do this, locate the downloaded file (usually in your Downloads folder) and double-click on it. Follow the instructions on the screen to complete the installation process. It should take only a few minutes. Once the installation is done, launch Movavi Video Editor by clicking on its icon on your desktop or in your Start menu.

-

Step 3: Activate Your Free Trial

-

The final step is to activate your free trial of Movavi Video Editor. When you launch the software for the first time, you will see a window that asks you to enter your email address and agree to the terms of use. Enter your email address and click on "Start My Trial". You will then receive an email from Movavi with a confirmation link. Click on the link to activate your free trial.

-

-

Congratulations! You have successfully downloaded Movavi Video Editor for free and activated your free trial. You can now use all the features of the software for 7 days without any limitations. You can create as many videos as you want and save them in any format or upload them online. You can also access the built-in library of stock media, filters, transitions, stickers, and more.

-

If you like Movavi Video Editor and want to continue using it after your free trial expires, you can buy a license key from the official Movavi website or from within the software. The license key will unlock the software permanently and allow you to enjoy free updates and technical support. You can also choose between different plans depending on your needs and budget.

-

We hope this article helped you learn how to download Movavi Video Editor for free and use it without any limitations. Movavi Video Editor is a great tool for anyone who wants to create amazing videos with ease. Try it today and see for yourself!

-```

-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Noiseware for Photoshop for Free and Improve Your Photo Quality.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Noiseware for Photoshop for Free and Improve Your Photo Quality.md
deleted file mode 100644
index e6fb9a9fbe351cb55491becf72c29abdb102f252..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Noiseware for Photoshop for Free and Improve Your Photo Quality.md
+++ /dev/null
@@ -1,40 +0,0 @@
-
-

How to Download Noiseware for Photoshop and Reduce Noise in Your Photos

-

Noiseware is a plugin for Photoshop that helps you reduce noise in your photos. Noise is the unwanted grain or speckles that appear in your photos due to low light, high ISO, or poor camera quality. Noise can ruin the quality and detail of your photos and make them look unprofessional.

-

download noiseware for photoshop


Download Zip ———>>> https://byltly.com/2uKyAa



-

In this article, we will show you how to download Noiseware for Photoshop from a reliable source and how to use it to reduce noise in your photos. We will also give you some tips and tricks to get the best results with Noiseware.

-

Where to Download Noiseware for Photoshop

-

There are many websites that claim to offer free downloads of Noiseware for Photoshop, but not all of them are safe or legal. Some of them may contain viruses, malware, or spyware that can harm your computer or steal your personal information. Others may require you to complete surveys, sign up for subscriptions, or pay hidden fees before you can access the download link.

-

To avoid these risks, we recommend you to download Noiseware for Photoshop from Imagenomic, the official website of the plugin developer. Imagenomic is a trusted company that provides high-quality plugins for photo editing and retouching. Imagenomic has a free trial version of Noiseware for Photoshop that you can use for 15 days without any limitations.

-

To download Noiseware for Photoshop from Imagenomic, follow these steps:

-
    -
1. Go to https://imagenomic.com/Products/Noiseware
2. Click on the "Download Trial" button at the top of the page.
3. Fill in your name and email address and click on "Submit".
4. Check your email inbox for a confirmation message from Imagenomic. Click on the link in the message to download Noiseware for Photoshop.
5. Save the downloaded file on your computer. The file size is about 3 MB.
-

How to Install Noiseware for Photoshop on Your PC

-

Once you have downloaded Noiseware for Photoshop from Imagenomic, you need to install it on your PC. To do that, follow these steps:

-

-
    -
1. Close Photoshop if it is running.
2. Open the downloaded file and run the setup file.
3. Follow the instructions on the screen to install Noiseware for Photoshop.
4. Restart Photoshop and check if Noiseware is available in the Filter menu.
-

How to Use Noiseware for Photoshop to Reduce Noise in Your Photos

-

Now that you have installed Noiseware for Photoshop on your PC, you are ready to use it to reduce noise in your photos. To do that, follow these steps:

-
    -
1. Open the photo that you want to edit in Photoshop.
2. Duplicate the background layer by pressing Ctrl+J (Windows) or Command+J (Mac).
3. Select the duplicate layer and go to Filter > Imagenomic > Noiseware.
4. A new window will open with a preview of your photo and some settings. You can adjust the settings manually or use one of the presets from the drop-down menu at the top right corner. The presets are categorized into Landscape, Portrait, Night Scene, etc. depending on the type of photo you are editing.
5. You can also use the Auto Profile button at the bottom left corner to let Noiseware analyze your photo and apply the optimal settings automatically.
6. You can zoom in and out of your photo using the slider at the bottom right corner or by using your mouse wheel. You can also drag your photo around to see different areas of it.
7. When you are satisfied with the result, click on OK to apply Noiseware to your photo.
8. You can compare the before and after images by toggling the visibility of the duplicate layer on and off.
9. You can also fine-tune the effect by changing the opacity or blending mode of the duplicate layer.
10. Save your edited photo as

    -
    -
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Entrare in dfu mode senza tasti i software da scaricare per facilitare loperazione.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Entrare in dfu mode senza tasti i software da scaricare per facilitare loperazione.md
deleted file mode 100644
index 02ae22df1f3887d83d7d30416213bd0e59579a0a..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Entrare in dfu mode senza tasti i software da scaricare per facilitare loperazione.md
+++ /dev/null
@@ -1,16 +0,0 @@
-
    -

How to enter DFU mode without buttons

- If you have an iPhone that won't turn on, won't update, or is malfunctioning, you may need to put it into DFU mode. DFU mode, short for Device Firmware Update, is a special mode that lets you restore the iPhone's firmware while bypassing its boot loader. This way, you can clear errors or blocks that prevent a normal restore through iTunes or Finder. But how do you enter DFU mode if your iPhone's buttons don't work? In this guide we will explain what DFU mode is, what it is for, how to activate it with working buttons, and how to do it without working buttons.

What DFU mode is and what it is for

- DFU mode is an advanced mode that lets you restore the iPhone's firmware, that is, the low-level software that runs the device. Unlike a normal restore, which erases only the user's data and settings, DFU mode also erases the firmware and replaces it with a clean, up-to-date version.

The difference between DFU and a normal restore

- When you restore your iPhone through iTunes or Finder, the device enters a mode called Recovery Mode. In this mode, the iPhone talks to the computer through the boot loader, the program that starts the operating system. The boot loader checks that the firmware is valid and compatible with the device before installing it. If the firmware is damaged or does not match the iPhone model, the boot loader blocks the restore and shows an error message. When you put your iPhone into DFU mode instead, the device does not talk to the computer through the boot loader but directly through the firmware. This lets you bypass the boot loader's checks and install any firmware version compatible with your iPhone, which can be useful for fixing more serious problems or for special operations such as a firmware downgrade or a jailbreak.

When to use DFU mode

- DFU mode is very powerful but also very delicate. If you do not use it correctly, you could damage your iPhone beyond repair. For this reason, we recommend using DFU mode only when you have serious problems with your device and a normal restore does not work. Some cases where you might need DFU mode are: your iPhone does not turn on or stays stuck on the Apple logo screen; your iPhone does not update or freezes during the update; your iPhone has serious or frequent malfunctions; your iPhone has been jailbroken and you want to remove the jailbreak completely; or you want to install an earlier firmware version on your iPhone.

How to enter DFU mode with working buttons

- If your iPhone's buttons work, you can enter DFU mode by following a simple procedure that varies by iPhone model. Before you start, make sure you have a computer with iTunes installed (on a Windows PC) or with Finder (on a Mac). Then connect the iPhone to the computer with the Lightning cable and follow the steps below.

The procedure for iPhone X or later, iPhone SE (2nd generation), iPhone 8 and iPhone 8 Plus

- Quickly press the Volume Up button.
- Quickly press the Volume Down button.
- Press and hold the Side button until the screen goes black.
- Keep holding the Side button and also press the Volume Down button for 5 seconds.
- Release the Side button but keep holding Volume Down until iTunes or Finder detects the iPhone in recovery mode.
- The iPhone's screen should stay black. If the Apple logo or the iTunes logo appears, you have entered Recovery Mode and must repeat the procedure.

The procedure for iPhone 7 and iPhone 7 Plus

- Press and hold the Side and Volume Down buttons at the same time until the screen goes black.
- Keep holding both buttons for 10 seconds.
- Release the Side button but keep holding Volume Down until iTunes or Finder detects the iPhone in recovery mode.
- The iPhone's screen should stay black. If the Apple logo or the iTunes logo appears, you have entered Recovery Mode and must repeat the procedure.

The procedure for iPhone 6s or earlier, iPad and iPod touch

- Press and hold the Home and Sleep/Wake buttons at the same time until the screen goes black.
- Keep holding both buttons for 10 seconds.
- Release the Sleep/Wake button but keep holding the Home button until iTunes or Finder detects the iPhone in recovery mode.
- The iPhone's screen should stay black. If the Apple logo or the iTunes logo appears, you have entered Recovery Mode and must repeat the procedure.

How to enter DFU mode without working buttons

- If your iPhone's buttons do not work, you can try to enter DFU mode using alternative methods that rely on special files or software. These methods are not official and may not work on every device or every firmware version. They may also pose security risks for your computer or your iPhone. Therefore, we recommend using them only if you know what you are doing and have run out of other options.

The method with the dfu iBSS.m68ap.RELEASE.dfu file (Windows only)

- This method uses a file called dfu iBSS.m68ap.RELEASE.dfu, which starts DFU mode without pressing any button on the iPhone. The file is compatible only with certain iPhone models (up to the iPhone 4) and requires a Windows PC. Here is how to use it: download this […] DFU mode is a delicate and potentially dangerous mode, so use it only when necessary and with care. If you have any doubts or questions, you can check the FAQ below or contact Apple support.

    FAQ

- **What is DFU mode?** DFU mode is a special mode that lets you restore the iPhone's firmware while bypassing its boot loader. This way, you can clear errors or blocks that prevent a normal restore through iTunes or Finder.
- **How do you enter DFU mode?** Connect the iPhone to the computer with the Lightning cable and follow a procedure that varies by iPhone model. The procedure involves pressing a combination of buttons until the screen goes black. You can find the detailed instructions in the "How to enter DFU mode with working buttons" section of this guide.
- **How do you exit DFU mode?** Press a different button combination depending on your iPhone model: quickly press Volume Up, Volume Down, and then the Side button (for iPhone X or later, iPhone SE (2nd generation), iPhone 8 and iPhone 8 Plus); the Side button and Volume Down (for iPhone 7 and iPhone 7 Plus); or the Home button and the Sleep/Wake button (for iPhone 6s or earlier, iPad and iPod touch). You can find the detailed instructions in the source.
- **When should you use DFU mode?** DFU mode is very powerful but also very delicate, and using it incorrectly could damage your iPhone beyond repair. Use it only when you have serious problems with your device and a normal restore does not work, for example: your iPhone does not turn on or stays stuck on the Apple logo; it does not update or freezes during the update; it has serious or frequent malfunctions; it has been jailbroken and you want to remove the jailbreak completely; or you want to install an earlier firmware version.
- **What should you do if the iPhone's buttons do not work?** You can try to enter DFU mode using alternative methods that rely on special files or software. These methods are not official, may not work on every device or firmware version, and may pose security risks for your computer or your iPhone, so use them only if you know what you are doing and have run out of other options. You can find the alternative methods in the "How to enter DFU mode without working buttons" section of this guide.

    -

    entrare in dfu mode senza tasti


    Download Zip > https://byltly.com/2uKzSn



    -
    -
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download My Mini Mart and Experience the Joy of Running Your Own Shop - No Ads No Interruptions.md b/spaces/1phancelerku/anime-remove-background/Download My Mini Mart and Experience the Joy of Running Your Own Shop - No Ads No Interruptions.md
deleted file mode 100644
index 1dc21eda1a2d8488ff95a9a7a508f10bfd10dd19..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download My Mini Mart and Experience the Joy of Running Your Own Shop - No Ads No Interruptions.md
+++ /dev/null
@@ -1,78 +0,0 @@
-
-

    How to Download a Minimarket App and Why You Should Do It

    -

    If you are looking for a convenient way to shop for groceries and other goods from your local store, you might want to consider downloading a minimarket app. A minimarket app is a mobile application that allows you to access the products and services of a small store, usually a convenience store or a supermarket, from your smartphone or tablet.

    -

    A minimarket app can offer many benefits for both customers and business owners, such as convenience, loyalty, engagement, and sales. In this article, we will explain what these benefits are, how to choose the best minimarket app for your needs, and how to download and use it.

    -

    download my minimarket no iklan


    Download Zip ->>->>->> https://jinyurl.com/2uNJbP



    -

    Benefits of Minimarket Apps for Customers

    -

    As a customer, you can enjoy several advantages by using a minimarket app to shop from your local store. Here are some of them:

    - -

    Reviews

    -

    The second thing that you should look for in a minimarket app is the reviews and ratings from other users. You should read the feedback and comments from other customers who have used the app and see what they liked and disliked about it. You should also check the ratings and scores that the app has received on the app store or the website of the store. You should look for an app that has positive reviews and high ratings from a large number of users, as this indicates that the app is trustworthy and reliable.

    -

    Comparisons

    -

    The third thing that you should look for in a minimarket app is the comparisons with other apps. You should compare different apps based on their features, reviews, ratings, prices, etc. and see how they stack up against each other. You should look for an app that offers the best value for your money and the best quality for your satisfaction. You should also look for an app that has a competitive edge over other apps, such as unique features, exclusive offers, or innovative solutions.

    -

    How to Download and Use a Minimarket App

    -

    Once you have chosen the best minimarket app for your needs, you can download and use it to shop from your local store. Here are some steps on how to do it:

    -

    Downloading

    -

    The first step is to download the minimarket app from the app store or the website of the store. You should search for the name of the app or the store on your device's app store or browser and follow the instructions to install it. You should make sure that you have enough storage space on your device and a stable internet connection to download the app. You should also check the compatibility of the app with your device's operating system and version.

    -

    Registering

    -

    The second step is to register an account and provide your personal information on the minimarket app. You should open the app and sign up with your email address, phone number, or social media account. You should then fill out your profile with your name, address, payment method, delivery preferences, etc. You should also verify your account with a code or a link that will be sent to your email or phone. You should make sure that you provide accurate and valid information on the app and keep it updated.

    -

    Shopping

    -

    The third step is to shop for products on the minimarket app. You should browse the products by category, brand, price, rating, etc. or search for specific products by name, barcode, or keyword. You should then select the products that you want to buy and add them to your cart. You should also check the product details, such as description, ingredients, nutrition facts, expiration date, etc. before buying them. You should then proceed to checkout and pay for your order with your preferred payment method.

    -

    -

    Delivery

    -

    The fourth step is to choose your delivery option and track your order on the minimarket app. You should choose whether you want to pick up your order from the store or have it delivered to your address. You should also choose when you want to receive your order, such as same-day delivery, next-day delivery, or scheduled delivery. You should then confirm your order details and wait for a confirmation message from the store. You should also track your order status and progress on the app or contact customer service if you have any issues or questions.

    -

    Conclusion

    -

    In conclusion, downloading a minimarket app can be a great way to shop for groceries and other goods from your local store with convenience, loyalty, engagement, and sales benefits. To choose the best minimarket app for your needs, you should consider its features, reviews reviews, and comparisons. To download and use a minimarket app, you should follow the steps of downloading, registering, shopping, and delivery. We hope that this article has helped you understand how to download a minimarket app and why you should do it. If you have any questions or comments, please feel free to contact us. Thank you for reading and happy shopping!

    -

    FAQs

    -

    Here are some frequently asked questions related to downloading a minimarket app:

    -

    How do I contact customer service on the minimarket app?

    -

    Most minimarket apps have a customer service feature that allows you to chat, call, or email the store staff if you have any issues or questions. You can usually find this feature on the app's menu, settings, or help section. You can also check the app's website or social media pages for more contact information.

    -

    How do I update my payment information on the minimarket app?

    -

    To update your payment information on the minimarket app, you should go to your profile or account section and select the payment option. You can then add, edit, or delete your payment methods, such as credit card, debit card, PayPal, etc. You should make sure that your payment information is correct and secure before making any transactions.

    -

    How do I cancel or return my order on the minimarket app?

    -

    To cancel or return your order on the minimarket app, you should check the store's cancellation and return policy first. Some stores may allow you to cancel or return your order within a certain period of time or under certain conditions. You can then contact the store or use the app's order management feature to request a cancellation or return. You may need to provide your order number, reason, and proof of purchase. You may also need to pay for the shipping or restocking fees.

    -

    How do I share my feedback or review on the minimarket app?

    -

    To share your feedback or review on the minimarket app, you should go to the product page or the app's review section and rate and write your opinion about the product or the app. You can also upload photos or videos to show your experience. You should be honest, respectful, and constructive when sharing your feedback or review. You should also avoid spamming, trolling, or abusing other users or the store.

    -

    How do I find the best deals and offers on the minimarket app?

    -

    To find the best deals and offers on the minimarket app, you should check the app's homepage, banner, or notification section for any special events, promotions, or discounts that are available. You can also use the app's search filter, sorting, or recommendation feature to find the products that suit your budget and preferences. You can also join the app's loyalty program or newsletter to get exclusive deals and offers.

    -
    -
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Voyage 4 MOD APK (v2.54) and Experience the Relaxation of Driving - Unlimited Money Included.md b/spaces/1phancelerku/anime-remove-background/Download Voyage 4 MOD APK (v2.54) and Experience the Relaxation of Driving - Unlimited Money Included.md
deleted file mode 100644
index df4d6046e36f8b84bbe4d0c530f4342ae4121238..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Voyage 4 MOD APK (v2.54) and Experience the Relaxation of Driving - Unlimited Money Included.md
+++ /dev/null
@@ -1,106 +0,0 @@
-
-

    Voyage 4 Mod APK Son Sürüm: A Guide to the Ultimate Road Trip Game

    -

    Do you love driving games that let you explore realistic and diverse landscapes? Do you want to experience a cinematic adventure game that captures the essence of shared exploration? Do you want to play a game that is simple, non-violent, and beautiful? If you answered yes to any of these questions, then you should try Voyage 4 Mod APK Son Sürüm, a game that will take you on a memorable road trip across Russia.

    -

    What is Voyage 4 and why should you play it?

    -

    Voyage 4 is a realistic driving simulator game that lets you travel on Russian roads with various cars. You can choose from over 50 vehicles, ranging from sedans and SUVs to trucks and buses. You can also customize your car with tuning parts, visual effects, and sounds.

    -

    voyage 4 mod apk son sürüm


    Download Filehttps://jinyurl.com/2uNN4R



    -

    The game features a large and detailed map of Russia, with over 1000 cities and towns, as well as different regions, weather conditions, and historical landmarks. You can drive on highways, country roads, dirt roads, and even off-road. You can also encounter traffic, police, accidents, and other events that make the game more realistic and challenging.

    -

    Voyage 4 is not just a driving game, but also an adventure game that tells a story through its environment and gameplay. You can discover secrets, mysteries, and surprises along the way. You can also interact with other drivers and passengers, who have their own personalities and stories. You can even play with a friend in co-op mode, where you can share the same car or drive separately.

    -

    What is Voyage 4 Mod APK Son Sürüm and how to download it?

    -

    Voyage 4 Mod APK Son Sürüm is a modified version of the game that gives you unlimited money, unlocked cars, and no ads. This means that you can enjoy the game without any limitations or interruptions. You can access all the cars and tuning parts without spending real money. You can also have more fun and challenge with the game's realistic physics and graphics.

    -

    To download Voyage 4 Mod APK Son Sürüm, you need to follow these simple steps:

    -
      -
    1. Go to a reliable source like APKCombo, which offers safe and fast downloads of APK files.
    2. -
    3. Search for Voyage 4 Mod APK Son Sürüm in the search bar or browse the categories.
    4. -
    5. Click on the download button and wait for the file to be downloaded.
    6. -
    7. Enable unknown sources in your device settings by going to Settings > Security > Unknown Sources.
    8. -
    9. Install the APK file by tapping on it and following the instructions.
    10. -
    11. Launch the game and enjoy!
    12. -
    -

    What are the features and benefits of Voyage 4 Mod APK Son Sürüm?

    -

    Voyage 4 Mod APK Son Sürüm has many features and benefits that make it a great choice for anyone who loves driving games. Here are some of them:

    -
      -
    • You can enjoy the game without any limitations or interruptions. You don't have to worry about running out of money, unlocking cars, or watching ads.
    • -
    • You can access all the cars and tuning parts without spending real money. You can choose from over 50 vehicles, each with its own characteristics and performance. You can also customize your car with tuning parts, visual effects, and sounds.
    • -
    • You can have more fun and challenge with the game's realistic physics and graphics. The game uses advanced physics engine that simulates the behavior of real cars on different surfaces and conditions. The game also has stunning graphics that create a realistic and immersive atmosphere.
    • -
    -

    What are some tips and tricks to play Voyage 4 Mod APK Son Sürüm?

    -

    Voyage 4 Mod APK Son Sürüm is a game that requires skill, patience, and attention. Here are some tips and tricks to help you play better:

    -
      -
    • Use the triangle button to get directions if you are lost or stuck. The game will show you the nearest city or town where you can find gas stations, repair shops, hotels, or other facilities.
    • -
    • Use the console to adjust the settings and optimize the game performance. You can change the graphics quality, sound volume, control sensitivity, camera angle, language, and other options.
    • -
    • Try different cars and routes to discover new places and secrets. The game has a lot of variety and diversity in its map, cars, events, and stories. You can drive on different roads, explore different regions, encounter different situations, and uncover different secrets.
    • -
    -

    What are some reviews and ratings of Voyage 4 Mod APK Son Sürüm?

    -

    Voyage 4 Mod APK Son Sürüm has a high rating and positive feedback from users who downloaded it from APKCombo. Here are some of their reviews:

    -

    - - - - - -
    UserRatingReview
    Mehmet5 starsVery good game, I like the graphics and the physics. The mod apk is also very good, it gives me unlimited money and unlocked cars. I recommend it to everyone who likes driving games.
    Ayşe4 starsI enjoy playing this game, it is very relaxing and fun. The mod apk is also very helpful, it removes the ads and gives me more options to customize my car. The only thing I don't like is that the game sometimes crashes or freezes.
    Ali5 starsThis game is amazing, it is like a real road trip across Russia. The mod apk is also amazing, it gives me everything I need to play the game without any problems. I love the game and the mod apk.
    -

    Conclusion

    -

    Voyage 4 Mod APK Son Sürüm is a great way to experience the game's amazing features and benefits. You can download it easily and safely from APKCombo and enjoy a short and simple game with a beautiful aesthetic. You can also share your adventure with a friend or play solo with an AI companion. Voyage 4 Mod APK Son Sürüm is a game that will make you feel the joy of exploration and discovery.

    -

    FAQs

    -

    Q1. Is Voyage 4 Mod APK Son Sürüm safe to download and install?

    -

    A1. Yes, as long as you download it from a trusted source like APKCombo, which scans all the APK files for viruses and malware.

    -

    Q2. How long is Voyage 4 Mod APK Son Sürüm?

    -

    A2. The game's runtime is under 2 hours, but you can replay it with different cars and routes to see more of the world.

    -

    Q3. Can I play Voyage 4 Mod APK Son Sürüm without internet or Google Play service?

    -

    A3. Yes, you can play offline, but you will not be able to save your results or see other players' results in Google Play.

    -

    Q4. Does Voyage 4 Mod APK Son Sürüm have any dialogue or text?

    -

    A4. No, the game does not use any dialogue or text to tell its story. It relies on visuals, sounds, and gestures instead.

    -

    Q5. What are some other games similar to Voyage 4 Mod APK Son Sürüm?

    -

    A5. Some other games that have a similar style and theme are Journey, Limbo, Inside, Brothers: A Tale of Two Sons, and Unravel.

    197e85843d
    -
    -
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Evertale 2.0.64 Mod Apk Free Shopping and Unlimited Money.md b/spaces/1phancelerku/anime-remove-background/Evertale 2.0.64 Mod Apk Free Shopping and Unlimited Money.md
deleted file mode 100644
index 9a2bfa97b473f3d777d609ad4779576df08b6a10..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Evertale 2.0.64 Mod Apk Free Shopping and Unlimited Money.md
+++ /dev/null
@@ -1,113 +0,0 @@
-
    -

    Evertale 2.0.64 Mod Apk: A Guide for Beginners

    -

    If you are a fan of fantasy RPGs with monster-catching and battling elements, you might have heard of Evertale, a popular game by ZigZaGame Inc. that has been compared to Pokémon and other similar games. But did you know that there is a modified version of the game that gives you access to some developer functions that can make your gameplay easier and more fun? In this article, we will tell you everything you need to know about Evertale 2.0.64 Mod Apk, including what it is, how to download and install it, and how to play it.

    -

    What is Evertale?

    -

    Before we dive into the details of the mod apk, let's first review what Evertale is and why it is worth playing.

    -

    evertale 2.0.64 mod apk


    DOWNLOADhttps://jinyurl.com/2uNPDX



    -

    A fantasy RPG with monster-catching and battling

    -

    Evertale is a game that takes you to the fantasy world of Erden, where you can catch, train, and evolve over 180 monsters and heroes across an impressive story-driven adventure. You can explore sprawling landscapes, bustling cities, and mythical dungeons in this expansive open-world RPG. You can also join a band of unlikely heroes and free the world of Erden from the deadly Pandemonium, an ancient curse that descends once every 100 years.

    -

    A rich story mode and online features

    -

    Evertale has a lot to offer for both offline and online players. You can immerse yourself in the engaging single-player story mode that has a lot of quests, characters, secrets, and rewards to discover. You can also jump online to compete in real-time PvP leagues and form guilds with other players to unlock limited-edition gear, power-ups, and more. You can also participate in weekly online events that offer exclusive unlockables and limited characters to add to your collection.

    -

    A game with positive reviews and high ratings

    -

    Evertale has been well-received by strategy RPG enthusiasts and beginners alike, with over 5 million downloads from the Google Play Store alone. It has also received positive reviews from critics and players who praised its solid writing, lovely art style, strategic combat, and variety of content. It has been rated 4.5 out of 5 stars on both Android and iOS platforms.

    -

    What is Evertale 2.0.64 Mod Apk?

    -

    Now that you have an idea of what Evertale is, let's talk about what Evertale 2.0.64 Mod Apk is and how it differs from the original game.

    -

    -

    A modified version of the game with developer functions

    -

    Evertale 2.0.64 Mod Apk is a modified version of the game that gives you access to some developer functions that are not available in the original game. These functions include:

    -
      -
• Characters can't die
• 100% catch rate
• Free shopping
• Unlimited soul stones
• No ads
    -

    These functions can make your gameplay more convenient and enjoyable, as you can easily catch and upgrade your monsters, buy anything you want from the shop, and avoid annoying ads. However, they also come with some drawbacks that you should be aware of.

    -

    The benefits and risks of using the mod apk

    -

    Using the mod apk can have some benefits, such as:

    • You can save time and money by not having to grind for resources or spend real money on in-app purchases.
    • You can experiment with different combinations of monsters and heroes without worrying about losing battles or wasting soul stones.
    • You can experience the full story mode without any interruptions or difficulties.

    However, using the mod apk can also have some risks, such as:

    • You might lose the thrill and challenge of the game, as you can easily breeze through any obstacle or enemy.
    • You might get bored of the game sooner, as you have nothing to strive for or achieve.
    • You might face legal issues or bans from the game developers, as using the mod apk is against their terms of service and can be considered cheating or hacking.

    Therefore, you should weigh the pros and cons of using the mod apk before deciding to download and install it. You should also be careful about where you download it from, as some sources might contain viruses or malware that can harm your device or steal your personal information.
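    If the page you download from also publishes a checksum for the file, one small precaution is to compare that checksum against the file you actually received before installing it. The sketch below assumes a Linux or macOS terminal; the file name is a placeholder, and whether a reference hash is published at all depends on the site.

    ```bash
    # Hedged sketch: checking a downloaded APK against a published SHA-256 hash.
    # The file name is an assumption; replace it with the name of your actual download.
    sha256sum evertale-2.0.64-mod.apk
    # Compare the printed hash with the one listed on the download page.
    # If they do not match, the file was corrupted or altered, so do not install it.
    ```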


    How to download and install the mod apk


    If you have decided to use the mod apk, here are the steps you need to follow to download and install it:

    1. Uninstall the original Evertale game from your device if you have it installed.
    2. Go to a trusted website that provides the Evertale 2.0.64 Mod Apk file, such as [APKPure] or [APKDone].
    3. Download the mod apk file to your device.
    4. Enable the installation of apps from unknown sources in your device settings.
    5. Locate and tap on the mod apk file to start the installation process.
    6. Follow the instructions on the screen to complete the installation.
    7. Launch the game and enjoy!
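    If you prefer to sideload the file from a computer rather than tapping it on the phone, the rough adb sketch below mirrors the same steps from the command line. The package name and file name are assumptions for illustration, not values confirmed by any particular download site, so substitute whatever your device and download actually use.

    ```bash
    # Hedged sketch: sideloading the downloaded APK over adb (USB debugging must be enabled).
    adb devices                               # confirm the phone is connected and authorized
    adb uninstall com.zigzagame.evertale      # remove the original game first (package name is an assumption)
    adb install -r evertale-2.0.64-mod.apk    # install the downloaded file (file name is an assumption)
    ```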

    How to play Evertale 2.0.64 Mod Apk?


    Now that you have successfully installed the mod apk, you might be wondering how to play it. Here are some tips and tricks that can help you get started and master the game.


    The basics of the battle system


    Evertale has a turn-based battle system that allows you to control up to four characters at a time. Each character has a unique set of skills and abilities that can be activated by spending mana points (MP). You can also switch between different characters during battle by tapping on their icons at the bottom of the screen. You can win battles by defeating all enemies or by capturing them with soul stones.


    The tips and tricks for catching and training monsters


    Evertale has over 180 monsters that you can catch and train to become your allies. You can catch monsters by using soul stones during battle, which have a 100% success rate with the mod apk. You can also find monsters in chests, events, or by exploring the world map. You can train your monsters by leveling them up, evolving them, equipping them with gear, and teaching them new skills. You can also customize your monsters' names, appearances, and personalities.


    The best strategies for progressing in the story mode and online events


    Evertale has a captivating story mode that spans over six chapters, each with its own plot, characters, and locations. You can progress in the story mode by completing quests, solving puzzles, fighting enemies, and collecting items. You can also unlock new areas and secrets by revisiting previous locations with different characters or abilities. You can also participate in online events that offer exclusive rewards and challenges. You can compete in PvP leagues, join guilds, cooperate with other players, and more.


    Conclusion


    If you decide to try the mod apk, you can follow the steps we have provided to download and install it. You can also use our tips and tricks to play it and enjoy its features. Evertale is a game that can offer you hours of fun and entertainment, whether you play it with or without the mod apk. We hope you found this article helpful and informative. Happy gaming!


    FAQs


    Here are some frequently asked questions about Evertale 2.0.64 Mod Apk that you might find useful.


    Q: Is Evertale 2.0.64 Mod Apk safe to use?


    A: Evertale 2.0.64 Mod Apk is not an official version of the game, so it might not be safe to use. It might contain viruses or malware that can harm your device or steal your personal information. It might also lead to legal issues or bans from the game developers, as using it violates their terms of service and can be considered cheating or hacking. Therefore, you should use it at your own risk and discretion.


    Q: Can I play Evertale 2.0.64 Mod Apk offline?


    A: Yes, you can play Evertale 2.0.64 Mod Apk offline, as it does not require an internet connection to run. However, you will not be able to access some online features, such as PvP leagues, guilds, events, and more.


    Q: Can I update Evertale 2.0.64 Mod Apk to the latest version?


    A: No, you cannot update Evertale 2.0.64 Mod Apk to the latest version, as it is not compatible with the original game. If you want to update the game, you will have to uninstall the mod apk and install the original game from the official sources.


    Q: Can I transfer my progress from Evertale 2.0.64 Mod Apk to the original game?


    A: No, you cannot transfer your progress from Evertale 2.0.64 Mod Apk to the original game, as they are not compatible with each other. If you want to play the original game, you will have to start from scratch.


    Q: Can I use Evertale 2.0.64 Mod Apk with other mods or hacks?


    A: No, you cannot use Evertale 2.0.64 Mod Apk with other mods or hacks, as they might cause conflicts or errors in the game. You should only use one mod or hack at a time.

    \ No newline at end of file diff --git a/spaces/2023Liu2023/bingo/README.md b/spaces/2023Liu2023/bingo/README.md deleted file mode 100644 index 218767d1d7debd26932ffddca2ec0f421c0171a9..0000000000000000000000000000000000000000 --- a/spaces/2023Liu2023/bingo/README.md +++ /dev/null @@ -1,195 +0,0 @@ ---- -title: bingo -emoji: 📉 -colorFrom: red -colorTo: red -sdk: docker -pinned: true -license: mit -duplicated_from: hf4all/bingo ---- - -
    - -# Bingo - -Bingo,一个让你呼吸顺畅 New Bing。 - -高度还原 New Bing 网页版的主要操作,国内可用,兼容绝大多数微软 Bing AI 的功能,可自行部署使用。 - -![Github stars](https://badgen.net/github/stars/weaigc/bingo?icon=github&label=stars) -![Gthub issues](https://img.shields.io/github/issues/weaigc/bingo) -[![docker build](https://github.com/weaigc/bingo/actions/workflows/docker.yml/badge.svg)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![docker hub](https://badgen.net/docker/size/weaigc/bingo?icon=docker&label=image%20size)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![MIT License](https://img.shields.io/badge/license-MIT-97c50f)](https://github.com/weaigc/bingo/blob/main/license) - -
    - -## 演示站点 - -https://bing.github1s.tk - - - -[![img](./docs/images/demo.png)](https://bing.github1s.tk) - -## 功能和特点 - -- 完全基于 Next.js 重写,高度还原 New Bing Web 版 UI,使用体验和 Bing AI 基本一致。 -- 支持 Docker 构建,方便快捷地部署和访问。 -- Cookie 可全局配置,全局共享。 -- 支持持续语音对话 - -## RoadMap - - - [x] 支持 wss 转发 - - [x] 支持一键部署 - - [x] 优化移动端展示 - - [x] 支持画图 - - [x] 支持语音输入(支持语音指令,目前仅支持 PC 版 Edge 及 Chrome 浏览器) - - [x] 支持语音输出(需要手动开启) - - [x] 支持图片输入 - - [x] 支持自定义域名 - - [ ] 支持历史记录 - - [ ] 适配深色模式 - - [ ] 支持内置提示词 - - [ ] 支持离线访问 - - [ ] 国际化翻译 - -## 一键部署 -你也可以一键部署自己的 New Bing AI 到 🤗 HuggingFace 。 - -### 部署到 Huggingface -1. 点击此图标 -[![Deploy to HuggingFace](https://img.shields.io/badge/%E7%82%B9%E5%87%BB%E9%83%A8%E7%BD%B2-%F0%9F%A4%97-fff)](https://huggingface.co/login?next=%2Fspaces%2Fhf4all%2Fbingo%3Fduplicate%3Dtrue%26visibility%3Dpublic),配置可以不改。 - -2. 部署署完成后,点击“设置” 》“站点域名”,点一下,复制一下 HF 域名信息,然后分享给别人即可。 - -> Huggingface 不支持绑定自己的域名,不过我们可以使用曲线救国的方式来达到这个目的 -> 1. 方式二,借助 Cloudflare Workers [部署Cloudflare Workers](#使用Cloudflare-Workers自定义域名) -> 2. 方式一,借助 Github Pages 及 iframe [如何绑定域名](https://github.com/weaigc/bingo/issues/4) - -### 使用Cloudflare Workers自定义域名 - -> 核心代码 [worker.js](./cloudflare/worker.js) - -- [注册 Cloudflare 账号](https://dash.cloudflare.com/sign-up) - -- 添加一个新的网站,需要你有自己的域名并且将域名`Name Server`托管给 Cloudflare 才行(更多信息可自行 Google) - -- 通过左侧菜单进入「Workers」,并点击「Create a Worker」。 - -- 创建 Worker 服务,复制 [worker.js](./cloudflare/worker.js) 全部代码,粘贴至创建的服务中,根据注释进行改动,保存并部署。 - -- 触发器 中自定义访问域名。 - -### 部署其它平台 -
    - -由于其他平台目前遭到 New Bing 封杀,会遇到很多问题,不再做推荐,有需要的可以自行查看 - - -#### 部署到 Netlify -[![Deploy to Netlify Button](https://www.netlify.com/img/deploy/button.svg)](https://app.netlify.com/start/deploy?repository=https://github.com/weaigc/bingo) - -#### 部署到 Vercel -如果你是 Vercel 付费用户,可以点以下链接一键部署到 Vercel。免费版本有[接口超时限制](https://vercel.com/docs/concepts/limits/overview),不推荐使用 - -[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?demo-title=bingo&demo-description=bingo&demo-url=https%3A%2F%2Fbing.github1s.tk%2F&project-name=bingo&repository-name=bingo&repository-url=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo&from=templates&skippable-integrations=1&env=BING_HEADER&envDescription=%E5%A6%82%E6%9E%9C%E4%B8%8D%E7%9F%A5%E9%81%93%E6%80%8E%E4%B9%88%E9%85%8D%E7%BD%AE%E8%AF%B7%E7%82%B9%E5%8F%B3%E4%BE%A7Learn+More&envLink=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo%2Fblob%2Fmain%2F.env.example) - -#### 部署到 Render - -[![Deploy to Render](https://render.com/images/deploy-to-render-button.svg)](https://render.com/deploy?repo=https://github.com/weaigc/bingo) -
    - -## 环境和依赖 - -- Node.js >= 18 -- Bing AI 的[身份信息](#如何获取-BING_HEADER)) - -## 安装和使用 - -* 使用 Node 启动 - -```bash -git clone https://github.com/weaigc/bingo.git -npm i # 推荐使用 pnpm i -npm run build -npm run start -``` - -* 使用 Docker 启动 -```bash -docker pull weaigc/bingo -docker run --rm -it -p 7860:7860 weaigc/bingo -# 或者 -docker run --rm -it -e BING_HEADER=xxxx -p 7860:7860 weaigc/bingo -``` - -## 如何获取 BING_HEADER -> 配置了 BING_HEADER 意味着你将自己的账号共享给所有使用此服务的人,如果不需要免登录画图的功能,不建议设置此变量 - -打开 https://www.bing.com 并登录,然后访问 https://www.bing.com/turing/captcha/challenge,通过人机校验,然后 - -![BING HEADER](./docs/images/curl.png) - -> 复制出来的内容应该如下所示。确认格式无误后,打开 https://effulgent-bubblegum-e2f5df.netlify.app/#dialog=%22settings%22 ,粘贴进去,点击“转成 BING_HEADER 并复制”,然后从剪切板粘贴即可得到。(你也可以先在网页上进行验证) - -以下是格式参考,需要注意的是,网页端保存的格式是以`curl`开头, 而服务端配置的 `BING_HEADER` 是 `base64` 格式,两者不能互通。 -
    -正常格式/网页端保存的格式(格式仅供参考) - -``` -curl 'https://www.bing.com/turing/captcha/challenge' \ - -H 'authority: www.bing.com' \ - -H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7' \ - -H 'accept-language: zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6' \ - -H 'cache-control: max-age=0' \ - -H 'cookie: MicrosoftApplicationsTelemetryDeviceId=3399c004-fd0e-48ec-bb92-d82a27b2bbd4; _EDGE_V=1; SRCHD=AF=NOFORM; SRCHUID=V=2&GUID=29EBDDA4E6674329ACCF1A0A423C3E98&dmnchg=1; _UR=QS=0&TQS=0; _HPVN=CS=eyJQbiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiUCJ9LCJTYyI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiSCJ9LCJReiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiVCJ9LCJBcCI6dHJ1ZSwiTXV0ZSI6dHJ1ZSwiTGFkIjoiMjAyMy0wNy0yNVQwMDowMDowMFoiLCJJb3RkIjowLCJHd2IiOjAsIkRmdCI6bnVsbCwiTXZzIjowLCJGbHQiOjAsIkltcCI6Mn0=; _RwBf=ilt=1&ihpd=1&ispd=0&rc=0&rb=0&gb=0&rg=200&pc=0&mtu=0&rbb=0&g=0&cid=&clo=0&v=1&l=2023-07-25T07:00:00.0000000Z&lft=0001-01-01T00:00:00.0000000&aof=0&o=2&p=&c=&t=0&s=0001-01-01T00:00:00.0000000+00:00&ts=2023-07-25T11:00:31.7111548+00:00&rwred=0&wls=&lka=0&lkt=0&TH=&dci=0; ANON=A=0043C6590EA808ED6E395059FFFFFFFF&E=1c8b&W=1; NAP=V=1.9&E=1c31&C=DnaMSbDN_4efZ_xXqBF3Daorjr53kYqYoaP8YHsupjmiXnysX7a37A&W=1; PPLState=1; KievRPSSecAuth=FABSBBRaTOJILtFsMkpLVWSG6AN6C/svRwNmAAAEgAAACMGUA7EGVSjGEAQBGHtNsc5sNL7unmJsfPJ2t6imfo4BeUJlAia3IpMTtMUy4PU/C5QAzRI5pODtsIee0+blgllXt/5IiWwGjwmdhivsFM597pRPkjARPfwsPhNLPNbJrCPNPHdje4Is78MnCADXw6/NBq2FL8V2/byw2fH6IuAMD2MvN/VvqpEa9ZxiDjZtENj4HEj0mO2SgzjfyEhVAkjvznJqU2rw/Q2tHmX94NAM2kzlzKF/hWPhCCUmu8IHLvCnHDS6mSptvJDDP/sp3ovtzOXkP1mlM/Xju5ftesUvccVEQGffXORa1dE5hEMbKIiKXz1tDdduSXE19g9/+mRMAjaQhpwhI8XmilCTx1adb1Ll5qK+VjC9GNfEZzcbsGBPVaOl+anG8rEMq+Xnhjo7J+NqTNolavHgcuV8kJsCeJZIged33UA8eOZeFo+wAECMguxMoSqgpGH+sthqynvD/FJD6r/tiU2N3uqVq8NE8V37asrN6T14Z0FGBJOe6ET1+PGApm3s11OY9/xhFEB9T5BEPUGEbvRcLcW2ncFQX0EU+xweiPqo1Q1hNUg/dCtSI+lZ7c2H8XheePZavZ0TJQ8oNCSAuKiTqJmI0fVGpwbXwfaADkEipuawz3fIuMJBNgMU0OtA7Hm59v2fGLIBuvi6YeKS6GgVk3BIPf+P/eKahwozrxQZaFnoHTSqMkvct7xCP4atBROfXKf5Ww0CcFKp+2WX9BIskTOo2jjk6bAyyYJ+ElUB1fgLKNk5m/YSMc9iYCLIBMIGN8F0Yvy3tZ7cvh7Ue5Klo98US/I+nW1G7ZJMHRgUO8h8lpneHqEMegKd8gynO4VF7RpCjJkunDmW0Ta+RkXAP619pg0dqHMFkoOgknN78oBbGTV6fJUKotv+vi61kLhAeXZGWoHGCRXh2wUC6YgfPgKA6ESRNHtFn7E5B3HHpLc5rVMDSNhKZYfdhupV4Ezf6+5DhMcZLZhi0kk+ivDiN1gdHlVtSN55xpvf+c+XZDzR0uhgcvgy0LAbmzgk6y4WbYH+LQsMpzNNj+aC72vMiWovWrKh9jY4MYCmdgxsS/skPtLdp18muiEIRXTbZQGUmhxFpJAIbBIsCscMpzL0BgeujxUwM5wr79Sd9r4xwbgSMwmBlBfUHRVBdNyg8feepeJbCS63nD6eHOuLqMRsPIio3w/ki/EAa92UUEiZeavLsMUD/y/qAvWUdzdP5Y+C/TM+CMGS/kGL4LEdY/28MQeTvU1qv1X21kQt2aiaj3pPVL36hAzxbcLgqcMo9oymDRy87kdCXW/+g4oKLtMh6fm/G6W6Y/B01JlxohyyvueHQIG557uzkEkTJ3FnOVODSKBKpb3WZ65rExfV71zSZa25F3GmpaIG6HiYrX2YYhQAkIE9pKEQBHbnwHuwNDGottZTXZw=; WLS=C=9df3f9d8518fae19&N=wen; WLID=pGY8HgWCu4p5XYCOk2oa0+DBdftkMUfmNIn8XtSjSTKsgv/Il7GUlYs0Jpjf/E12jZMgV7x44Dy3fXOgjjUoJx7Y/ClLrLhsk20THksJJoI=; _EDGE_S=F=1&SID=17CF6EE006426448213C7DB907436588&mkt=zh-CN; MUID=225621093D8A6C27301632413C0E6D08; MUIDB=225621093D8A6C27301632413C0E6D08; SUID=A; SNRHOP=I=&TS=; _U=nGyzKQruEsDwLiu65fZFIG6e12hf2lwTJmroW__k8joUJIKmG3OIjayXKGW9dCVR3sNhF76mEVxyW6yjUGPodOfjtSa3s3J_DxMOrEK1BqXCOBI9bC66spAIASV7prsYFlVAJz73jVNENp_tBubLHJy6EbT0BKRe4AjrYkH-9uMnmCKB8Zmyg; _SS=SID=17CF6EE006426448213C7DB907436588&R=0&RB=0&GB=0&RG=200&RP=0&PC=U531; SRCHS=PC=U531; 
USRLOC=HS=1&ELOC=LAT=22.501529693603516|LON=113.9263687133789|N=%E5%8D%97%E5%B1%B1%E5%8C%BA%EF%BC%8C%E5%B9%BF%E4%B8%9C%E7%9C%81|ELT=2|&CLOC=LAT=22.50153029046461|LON=113.92637070632928|A=733.4464586120832|TS=230726151034|SRC=W; SRCHUSR=DOB=20230725&T=1690384908000&POEX=W; ipv6=hit=1690388509974&t=6; SRCHHPGUSR=HV=1690384945&SRCHLANG=zh-Hans&PV=15.0.0&BRW=MW&BRH=MT&CW=410&CH=794&SCW=410&SCH=794&DPR=1.5&UTC=480&DM=0&WTS=63825879627&PRVCW=410&PRVCH=794&PR=1.5; cct=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpny6Y_CVyi_MSyM94VyMWnjdYkkccVtm3czoIAtXUGQA; GC=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpR3Y_D9Ytcks4Ht6XhadXk75dvhzP4YOUS0UmoEyqyxw' \ - -H 'dnt: 1' \ - -H 'sec-ch-ua: "Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"' \ - -H 'sec-ch-ua-arch: "x86"' \ - -H 'sec-ch-ua-bitness: "64"' \ - -H 'sec-ch-ua-full-version: "116.0.1938.29"' \ - -H 'sec-ch-ua-full-version-list: "Chromium";v="116.0.5845.42", "Not)A;Brand";v="24.0.0.0", "Microsoft Edge";v="116.0.1938.29"' \ - -H 'sec-ch-ua-mobile: ?0' \ - -H 'sec-ch-ua-model: ""' \ - -H 'sec-ch-ua-platform: "Windows"' \ - -H 'sec-ch-ua-platform-version: "15.0.0"' \ - -H 'sec-fetch-dest: document' \ - -H 'sec-fetch-mode: navigate' \ - -H 'sec-fetch-site: none' \ - -H 'sec-fetch-user: ?1' \ - -H 'sec-ms-gec: B3F47AD4A283CAB374C0451C46AAFD147C6A4DACAFF6A1C13F34B2C72B024494' \ - -H 'sec-ms-gec-version: 1-116.0.1938.29' \ - -H 'upgrade-insecure-requests: 1' \ - -H 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.0.0' \ - -H 'x-client-data: eyIxIjoiMiIsIjEwIjoiXCJTMGg3R05HOTF2aDQ1TUZSUnZ5NHN2akRmMWdlaVJKenNxNlA3aU1WbnF3PVwiIiwiMiI6IjEiLCIzIjoiMSIsIjQiOiIyMTU4ODQ5NTM4MjY4OTM5NTA3IiwiNSI6IlwiSm9GUWpPTDk3OS9MbkRRZnlCd2N1M2FsOUN3eTZTQmdaMGNYMXBtOWVMZz1cIiIsIjYiOiJiZXRhIiwiNyI6IjE4MDM4ODYyNjQzNSIsIjkiOiJkZXNrdG9wIn0=' \ - -H 'x-edge-shopping-flag: 1' \ - --compressed -``` -
    - -
    -转成base64之后的格式(BING_HEADER只能使用 base64 之后的格式) - -``` -Y3VybCAnaHR0cHM6Ly93d3cuYmluZy5jb20vdHVyaW5nL2NvbnZlcnNhdGlvbi9jcmVhdGUnIFwgICAtSCAnYXV0aG9yaXR5OiB3d3cuYmluZy5jb20nIFwgICAtSCAnYWNjZXB0OiB0ZXh0L2h0bWwsYXBwbGljYXRpb24veGh0bWwreG1sLGFwcGxpY2F0aW9uL3htbDtxPTAuOSxpbWFnZS93ZWJwLGltYWdlL2FwbmcsKi8qO3E9MC44LGFwcGxpY2F0aW9uL3NpZ25lZC1leGNoYW5nZTt2PWIzO3E9MC43JyBcICAgLUggJ2FjY2VwdC1sYW5ndWFnZTogemgtQ04semg7cT0wLjksZW47cT0wLjgsZW4tR0I7cT0wLjcsZW4tVVM7cT0wLjYnIFwgICAtSCAnY2FjaGUtY29udHJvbDogbWF4LWFnZT0wJyBcICAgLUggJ2Nvb2tpZTogTWljcm9zb2Z0QXBwbGljYXRpb25zVGVsZW1ldHJ5RGV2aWNlSWQ9MzM5OWMwMDQtZmQwZS00OGVjLWJiOTItZDgyYTI3YjJiYmQ0OyBfRURHRV9WPTE7IFNSQ0hEPUFGPU5PRk9STTsgU1JDSFVJRD1WPTImR1VJRD0yOUVCRERBNEU2Njc0MzI5QUNDRjFBMEE0MjNDM0U5OCZkbW5jaGc9MTsgX1VSPVFTPTAmVFFTPTA7IF9IUFZOPUNTPWV5SlFiaUk2ZXlKRGJpSTZNU3dpVTNRaU9qQXNJbEZ6SWpvd0xDSlFjbTlrSWpvaVVDSjlMQ0pUWXlJNmV5SkRiaUk2TVN3aVUzUWlPakFzSWxGeklqb3dMQ0pRY205a0lqb2lTQ0o5TENKUmVpSTZleUpEYmlJNk1Td2lVM1FpT2pBc0lsRnpJam93TENKUWNtOWtJam9pVkNKOUxDSkJjQ0k2ZEhKMVpTd2lUWFYwWlNJNmRISjFaU3dpVEdGa0lqb2lNakF5TXkwd055MHlOVlF3TURvd01Eb3dNRm9pTENKSmIzUmtJam93TENKSGQySWlPakFzSWtSbWRDSTZiblZzYkN3aVRYWnpJam93TENKR2JIUWlPakFzSWtsdGNDSTZNbjA9OyBfUndCZj1pbHQ9MSZpaHBkPTEmaXNwZD0wJnJjPTAmcmI9MCZnYj0wJnJnPTIwMCZwYz0wJm10dT0wJnJiYj0wJmc9MCZjaWQ9JmNsbz0wJnY9MSZsPTIwMjMtMDctMjVUMDc6MDA6MDAuMDAwMDAwMFombGZ0PTAwMDEtMDEtMDFUMDA6MDA6MDAuMDAwMDAwMCZhb2Y9MCZvPTImcD0mYz0mdD0wJnM9MDAwMS0wMS0wMVQwMDowMDowMC4wMDAwMDAwKzAwOjAwJnRzPTIwMjMtMDctMjVUMTE6MDA6MzEuNzExMTU0OCswMDowMCZyd3JlZD0wJndscz0mbGthPTAmbGt0PTAmVEg9JmRjaT0wOyBBTk9OPUE9MDA0M0M2NTkwRUE4MDhFRDZFMzk1MDU5RkZGRkZGRkYmRT0xYzhiJlc9MTsgTkFQPVY9MS45JkU9MWMzMSZDPURuYU1TYkROXzRlZlpfeFhxQkYzRGFvcmpyNTNrWXFZb2FQOFlIc3Vwam1pWG55c1g3YTM3QSZXPTE7IFBQTFN0YXRlPTE7IEtpZXZSUFNTZWNBdXRoPUZBQlNCQlJhVE9KSUx0RnNNa3BMVldTRzZBTjZDL3N2UndObUFBQUVnQUFBQ01HVUE3RUdWU2pHRUFRQkdIdE5zYzVzTkw3dW5tSnNmUEoydDZpbWZvNEJlVUpsQWlhM0lwTVR0TVV5NFBVL0M1UUF6Ukk1cE9EdHNJZWUwK2JsZ2xsWHQvNUlpV3dHandtZGhpdnNGTTU5N3BSUGtqQVJQZndzUGhOTFBOYkpyQ1BOUEhkamU0SXM3OE1uQ0FEWHc2L05CcTJGTDhWMi9ieXcyZkg2SXVBTUQyTXZOL1Z2cXBFYTlaeGlEalp0RU5qNEhFajBtTzJTZ3pqZnlFaFZBa2p2em5KcVUycncvUTJ0SG1YOTROQU0ya3psektGL2hXUGhDQ1VtdThJSEx2Q25IRFM2bVNwdHZKRERQL3NwM292dHpPWGtQMW1sTS9YanU1ZnRlc1V2Y2NWRVFHZmZYT1JhMWRFNWhFTWJLSWlLWHoxdERkZHVTWEUxOWc5LyttUk1BamFRaHB3aEk4WG1pbENUeDFhZGIxTGw1cUsrVmpDOUdOZkVaemNic0dCUFZhT2wrYW5HOHJFTXErWG5oam83SitOcVROb2xhdkhnY3VWOGtKc0NlSlpJZ2VkMzNVQThlT1plRm8rd0FFQ01ndXhNb1NxZ3BHSCtzdGhxeW52RC9GSkQ2ci90aVUyTjN1cVZxOE5FOFYzN2Fzck42VDE0WjBGR0JKT2U2RVQxK1BHQXBtM3MxMU9ZOS94aEZFQjlUNUJFUFVHRWJ2UmNMY1cybmNGUVgwRVUreHdlaVBxbzFRMWhOVWcvZEN0U0krbFo3YzJIOFhoZWVQWmF2WjBUSlE4b05DU0F1S2lUcUptSTBmVkdwd2JYd2ZhQURrRWlwdWF3ejNmSXVNSkJOZ01VME90QTdIbTU5djJmR0xJQnV2aTZZZUtTNkdnVmszQklQZitQL2VLYWh3b3pyeFFaYUZub0hUU3FNa3ZjdDd4Q1A0YXRCUk9mWEtmNVd3MENjRktwKzJXWDlCSXNrVE9vMmpqazZiQXl5WUorRWxVQjFmZ0xLTms1bS9ZU01jOWlZQ0xJQk1JR044RjBZdnkzdFo3Y3ZoN1VlNUtsbzk4VVMvSStuVzFHN1pKTUhSZ1VPOGg4bHBuZUhxRU1lZ0tkOGd5bk80VkY3UnBDakprdW5EbVcwVGErUmtYQVA2MTlwZzBkcUhNRmtvT2drbk43OG9CYkdUVjZmSlVLb3R2K3ZpNjFrTGhBZVhaR1dvSEdDUlhoMndVQzZZZ2ZQZ0tBNkVTUk5IdEZuN0U1QjNISHBMYzVyVk1EU05oS1pZZmRodXBWNEV6ZjYrNURoTWNaTFpoaTBraytpdkRpTjFnZEhsVnRTTjU1eHB2ZitjK1haRHpSMHVoZ2N2Z3kwTEFibXpnazZ5NFdiWUgrTFFzTXB6Tk5qK2FDNzJ2TWlXb3ZXcktoOWpZNE1ZQ21kZ3hzUy9za1B0TGRwMThtdWlFSVJYVGJaUUdVbWh4RnBKQUliQklzQ3NjTXB6TDBCZ2V1anhVd001d3I3OVNkOXI0eHdiZ1NNd21CbEJmVUhSVkJkTnlnOGZlZXBlSmJDUzYzbkQ2ZUhPdUxxTVJzUElpbzN3L2tpL0VBYTkyVVVFaVplYXZMc01VRC95L3FBdldVZHpkUDVZK0MvVE0rQ01HUy9rR0w0TEVkWS8yOE1RZVR2VTFxdjFYMjFrUXQyYWlhajNwUFZMMzZoQXp4YmNMZ3FjTW85b3ltRFJ5ODdrZE
NYVy8rZzRvS0x0TWg2Zm0vRzZXNlkvQjAxSmx4b2h5eXZ1ZUhRSUc1NTd1emtFa1RKM0ZuT1ZPRFNLQktwYjNXWjY1ckV4ZlY3MXpTWmEyNUYzR21wYUlHNkhpWXJYMllZaFFBa0lFOXBLRVFCSGJud0h1d05ER290dFpUWFp3PTsgV0xTPUM9OWRmM2Y5ZDg1MThmYWUxOSZOPXdlbjsgV0xJRD1wR1k4SGdXQ3U0cDVYWUNPazJvYTArREJkZnRrTVVmbU5JbjhYdFNqU1RLc2d2L0lsN0dVbFlzMEpwamYvRTEyalpNZ1Y3eDQ0RHkzZlhPZ2pqVW9KeDdZL0NsTHJMaHNrMjBUSGtzSkpvST07IF9FREdFX1M9Rj0xJlNJRD0xN0NGNkVFMDA2NDI2NDQ4MjEzQzdEQjkwNzQzNjU4OCZta3Q9emgtQ047IE1VSUQ9MjI1NjIxMDkzRDhBNkMyNzMwMTYzMjQxM0MwRTZEMDg7IE1VSURCPTIyNTYyMTA5M0Q4QTZDMjczMDE2MzI0MTNDMEU2RDA4OyBTVUlEPUE7IFNOUkhPUD1JPSZUUz07IF9VPW5HeXpLUXJ1RXNEd0xpdTY1ZlpGSUc2ZTEyaGYybHdUSm1yb1dfX2s4am9VSklLbUczT0lqYXlYS0dXOWRDVlIzc05oRjc2bUVWeHlXNnlqVUdQb2RPZmp0U2EzczNKX0R4TU9yRUsxQnFYQ09CSTliQzY2c3BBSUFTVjdwcnNZRmxWQUp6NzNqVk5FTnBfdEJ1YkxISnk2RWJUMEJLUmU0QWpyWWtILTl1TW5tQ0tCOFpteWc7IF9TUz1TSUQ9MTdDRjZFRTAwNjQyNjQ0ODIxM0M3REI5MDc0MzY1ODgmUj0wJlJCPTAmR0I9MCZSRz0yMDAmUlA9MCZQQz1VNTMxOyBTUkNIUz1QQz1VNTMxOyBVU1JMT0M9SFM9MSZFTE9DPUxBVD0yMi41MDE1Mjk2OTM2MDM1MTZ8TE9OPTExMy45MjYzNjg3MTMzNzg5fE49JUU1JThEJTk3JUU1JUIxJUIxJUU1JThDJUJBJUVGJUJDJThDJUU1JUI5JUJGJUU0JUI4JTlDJUU3JTlDJTgxfEVMVD0yfCZDTE9DPUxBVD0yMi41MDE1MzAyOTA0NjQ2MXxMT049MTEzLjkyNjM3MDcwNjMyOTI4fEE9NzMzLjQ0NjQ1ODYxMjA4MzJ8VFM9MjMwNzI2MTUxMDM0fFNSQz1XOyBTUkNIVVNSPURPQj0yMDIzMDcyNSZUPTE2OTAzODQ5MDgwMDAmUE9FWD1XOyBpcHY2PWhpdD0xNjkwMzg4NTA5OTc0JnQ9NjsgU1JDSEhQR1VTUj1IVj0xNjkwMzg0OTQ1JlNSQ0hMQU5HPXpoLUhhbnMmUFY9MTUuMC4wJkJSVz1NVyZCUkg9TVQmQ1c9NDEwJkNIPTc5NCZTQ1c9NDEwJlNDSD03OTQmRFBSPTEuNSZVVEM9NDgwJkRNPTAmV1RTPTYzODI1ODc5NjI3JlBSVkNXPTQxMCZQUlZDSD03OTQmUFI9MS41OyBjY3Q9QWpXSUJZT29WUC1BZnE2Z1d3dHg4MElmNnlIbjZpQnVFVkhBMVhIZEFLcG55NllfQ1Z5aV9NU3lNOTRWeU1XbmpkWWtrY2NWdG0zY3pvSUF0WFVHUUE7IEdDPUFqV0lCWU9vVlAtQWZxNmdXd3R4ODBJZjZ5SG42aUJ1RVZIQTFYSGRBS3BSM1lfRDlZdGNrczRIdDZYaGFkWGs3NWR2aHpQNFlPVVMwVW1vRXlxeXh3JyBcICAgLUggJ2RudDogMScgXCAgIC1IICdzZWMtY2gtdWE6ICJDaHJvbWl1bSI7dj0iMTE2IiwgIk5vdClBO0JyYW5kIjt2PSIyNCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2IicgXCAgIC1IICdzZWMtY2gtdWEtYXJjaDogIng4NiInIFwgICAtSCAnc2VjLWNoLXVhLWJpdG5lc3M6ICI2NCInIFwgICAtSCAnc2VjLWNoLXVhLWZ1bGwtdmVyc2lvbjogIjExNi4wLjE5MzguMjkiJyBcICAgLUggJ3NlYy1jaC11YS1mdWxsLXZlcnNpb24tbGlzdDogIkNocm9taXVtIjt2PSIxMTYuMC41ODQ1LjQyIiwgIk5vdClBO0JyYW5kIjt2PSIyNC4wLjAuMCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2LjAuMTkzOC4yOSInIFwgICAtSCAnc2VjLWNoLXVhLW1vYmlsZTogPzAnIFwgICAtSCAnc2VjLWNoLXVhLW1vZGVsOiAiIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm06ICJXaW5kb3dzIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm0tdmVyc2lvbjogIjE1LjAuMCInIFwgICAtSCAnc2VjLWZldGNoLWRlc3Q6IGRvY3VtZW50JyBcICAgLUggJ3NlYy1mZXRjaC1tb2RlOiBuYXZpZ2F0ZScgXCAgIC1IICdzZWMtZmV0Y2gtc2l0ZTogbm9uZScgXCAgIC1IICdzZWMtZmV0Y2gtdXNlcjogPzEnIFwgICAtSCAnc2VjLW1zLWdlYzogQjNGNDdBRDRBMjgzQ0FCMzc0QzA0NTFDNDZBQUZEMTQ3QzZBNERBQ0FGRjZBMUMxM0YzNEIyQzcyQjAyNDQ5NCcgXCAgIC1IICdzZWMtbXMtZ2VjLXZlcnNpb246IDEtMTE2LjAuMTkzOC4yOScgXCAgIC1IICd1cGdyYWRlLWluc2VjdXJlLXJlcXVlc3RzOiAxJyBcICAgLUggJ3VzZXItYWdlbnQ6IE1vemlsbGEvNS4wIChXaW5kb3dzIE5UIDEwLjA7IFdpbjY0OyB4NjQpIEFwcGxlV2ViS2l0LzUzNy4zNiAoS0hUTUwsIGxpa2UgR2Vja28pIENocm9tZS8xMTYuMC4wLjAgU2FmYXJpLzUzNy4zNiBFZGcvMTE2LjAuMC4wJyBcICAgLUggJ3gtY2xpZW50LWRhdGE6IGV5SXhJam9pTWlJc0lqRXdJam9pWENKVE1HZzNSMDVIT1RGMmFEUTFUVVpTVW5aNU5ITjJha1JtTVdkbGFWSktlbk54TmxBM2FVMVdibkYzUFZ3aUlpd2lNaUk2SWpFaUxDSXpJam9pTVNJc0lqUWlPaUl5TVRVNE9EUTVOVE00TWpZNE9UTTVOVEEzSWl3aU5TSTZJbHdpU205R1VXcFBURGszT1M5TWJrUlJabmxDZDJOMU0yRnNPVU4zZVRaVFFtZGFNR05ZTVhCdE9XVk1aejFjSWlJc0lqWWlPaUppWlhSaElpd2lOeUk2SWpFNE1ETTRPRFl5TmpRek5TSXNJamtpT2lKa1pYTnJkRzl3SW4wPScgXCAgIC1IICd4LWVkZ2Utc2hvcHBpbmctZmxhZzogMScgXCAgIC0tY29tcHJlc3NlZA== -``` -
    - - -## 鸣谢 - - 感谢 [EdgeGPT](https://github.com/acheong08/EdgeGPT) 提供的代理 API 的方法。 - - 感谢 [Vercel AI](https://github.com/vercel-labs/ai-chatbot) 提供的基础脚手架和 [ChatHub](https://github.com/chathub-dev/chathub) [go-proxy-bingai](https://github.com/adams549659584/go-proxy-bingai) 提供的部分代码。 - - -## 答疑及交流 - - - -## License - -MIT © [LICENSE](https://github.com/weaigc/bingo/blob/main/LICENSE). - - diff --git a/spaces/22h/vintedois-diffusion-v0-2/app.py b/spaces/22h/vintedois-diffusion-v0-2/app.py deleted file mode 100644 index f4c735ca1fe08ee3ad8d8ac2ccfa989a9065506b..0000000000000000000000000000000000000000 --- a/spaces/22h/vintedois-diffusion-v0-2/app.py +++ /dev/null @@ -1,137 +0,0 @@ -from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, EulerAncestralDiscreteScheduler -import gradio as gr -import torch -from PIL import Image - -model_id = '22h/vintedois-diffusion-v0-2' -prefix = 'estilovintedois' - -scheduler = EulerAncestralDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler") - -pipe = StableDiffusionPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) -pipe.enable_attention_slicing(1) - -pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) -pipe_i2i.enable_attention_slicing(1) - -if torch.cuda.is_available(): - pipe = pipe.to("cuda") - pipe_i2i = pipe_i2i.to("cuda") - -def error_str(error, title="Error"): - return f"""#### {title} - {error}""" if error else "" - -def inference(prompt, guidance, steps, width=640, height=640, seed=0, img=None, strength=0.5, neg_prompt="", auto_prefix=False): - - generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None - prompt = f"{prefix} {prompt}" if auto_prefix else prompt - - try: - if img is not None: - return img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None - else: - return txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator), None - except Exception as e: - return None, error_str(e) - -def txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator): - - result = pipe( - prompt, - negative_prompt = neg_prompt, - num_inference_steps = int(steps), - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -def img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator): - - ratio = min(height / img.height, width / img.width) - img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS) - result = pipe_i2i( - prompt, - negative_prompt = neg_prompt, - image = img, - num_inference_steps = int(steps), - strength = strength, - guidance_scale = guidance, - generator = generator) - - return result.images[0] - -css = """.main-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.main-div div h1{font-weight:900;margin-bottom:7px}.main-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem} -""" -with gr.Blocks(css=css) as demo: - gr.HTML( - f""" -
    -
    -

    22h Diffusion v0.2

    -
    -

    - Demo for 22h Diffusion v0-2 Stable Diffusion model.
    - {"Add the following tokens to your prompts for the model to work properly: estilovintedois" if prefix else ""} -

    - Running on {"GPU 🔥" if torch.cuda.is_available() else f"CPU 🥶. For faster inference it is recommended to upgrade to GPU in Settings"}

    - Duplicate Space -
    - """ - ) - with gr.Row(): - - with gr.Column(scale=55): - with gr.Group(): - with gr.Row(): - prompt = gr.Textbox(label="Prompt", show_label=False, max_lines=2,placeholder=f"{prefix} [your prompt]").style(container=False) - generate = gr.Button(value="Generate").style(rounded=(False, True, True, False)) - - image_out = gr.Image(height=512) - error_output = gr.Markdown() - - with gr.Column(scale=45): - with gr.Tab("Options"): - with gr.Group(): - neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image") - auto_prefix = gr.Checkbox(label="Prefix styling tokens automatically ()", value=prefix, visible=prefix) - - with gr.Row(): - guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15) - steps = gr.Slider(label="Steps", value=50, minimum=2, maximum=75, step=1) - - with gr.Row(): - width = gr.Slider(label="Width", value=640, minimum=64, maximum=1024, step=8) - height = gr.Slider(label="Height", value=960, minimum=64, maximum=1024, step=8) - - seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1) - - with gr.Tab("Image to image"): - with gr.Group(): - image = gr.Image(label="Image", height=256, tool="editor", type="pil") - strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5) - - auto_prefix.change(lambda x: gr.update(placeholder=f"{prefix} [your prompt]" if x else "[Your prompt]"), inputs=auto_prefix, outputs=prompt, queue=False) - - inputs = [prompt, guidance, steps, width, height, seed, image, strength, neg_prompt, auto_prefix] - outputs = [image_out, error_output] - prompt.submit(inference, inputs=inputs, outputs=outputs) - generate.click(inference, inputs=inputs, outputs=outputs) - - gr.HTML(""" -
    -
    -

    This space was created using SD Space Creator.

    -
    - """) - -demo.queue(concurrency_count=1) -demo.launch() \ No newline at end of file diff --git a/spaces/44ov41za8i/FreeVC/speaker_encoder/data_objects/utterance.py b/spaces/44ov41za8i/FreeVC/speaker_encoder/data_objects/utterance.py deleted file mode 100644 index 0768c3420f422a7464f305b4c1fb6752c57ceda7..0000000000000000000000000000000000000000 --- a/spaces/44ov41za8i/FreeVC/speaker_encoder/data_objects/utterance.py +++ /dev/null @@ -1,26 +0,0 @@ -import numpy as np - - -class Utterance: - def __init__(self, frames_fpath, wave_fpath): - self.frames_fpath = frames_fpath - self.wave_fpath = wave_fpath - - def get_frames(self): - return np.load(self.frames_fpath) - - def random_partial(self, n_frames): - """ - Crops the frames into a partial utterance of n_frames - - :param n_frames: The number of frames of the partial utterance - :return: the partial utterance frames and a tuple indicating the start and end of the - partial utterance in the complete utterance. - """ - frames = self.get_frames() - if frames.shape[0] == n_frames: - start = 0 - else: - start = np.random.randint(0, frames.shape[0] - n_frames) - end = start + n_frames - return frames[start:end], (start, end) \ No newline at end of file diff --git a/spaces/4eJIoBek/Stable_Diffusion_1.4_openvino/Dockerfile b/spaces/4eJIoBek/Stable_Diffusion_1.4_openvino/Dockerfile deleted file mode 100644 index d6eb1d37aa999ff0fe9b11d60bcf6e26722f62fb..0000000000000000000000000000000000000000 --- a/spaces/4eJIoBek/Stable_Diffusion_1.4_openvino/Dockerfile +++ /dev/null @@ -1,19 +0,0 @@ -FROM python:3.9.9-bullseye - -WORKDIR /src - -RUN apt-get update && \ - apt-get install -y \ - libgl1 libglib2.0-0 - -COPY requirements.txt /src/ - -RUN pip3 install -r requirements.txt - -COPY stable_diffusion_engine.py demo.py demo_web.py /src/ -COPY data/ /src/data/ - -# download models -RUN python3 demo.py --num-inference-steps 1 --prompt "test" --output /tmp/test.jpg - -ENTRYPOINT ["python3", "demo.py"] diff --git a/spaces/7hao/bingo/src/components/ui/badge.tsx b/spaces/7hao/bingo/src/components/ui/badge.tsx deleted file mode 100644 index d9a84b394090e5b4b3bd34f6135b9a2f2ead0aa2..0000000000000000000000000000000000000000 --- a/spaces/7hao/bingo/src/components/ui/badge.tsx +++ /dev/null @@ -1,36 +0,0 @@ -import * as React from 'react' -import { cva, type VariantProps } from 'class-variance-authority' - -import { cn } from '@/lib/utils' - -const badgeVariants = cva( - 'inline-flex items-center rounded-full border px-2.5 py-0.5 text-xs font-semibold transition-colors focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2', - { - variants: { - variant: { - default: - 'border-transparent bg-primary text-primary-foreground hover:bg-primary/80', - secondary: - 'border-transparent bg-secondary text-secondary-foreground hover:bg-secondary/80', - destructive: - 'border-transparent bg-destructive text-destructive-foreground hover:bg-destructive/80', - outline: 'text-foreground' - } - }, - defaultVariants: { - variant: 'default' - } - } -) - -export interface BadgeProps - extends React.HTMLAttributes, - VariantProps {} - -function Badge({ className, variant, ...props }: BadgeProps) { - return ( -
    - ) -} - -export { Badge, badgeVariants } diff --git a/spaces/7hao/bingo/src/pages/api/sydney.ts b/spaces/7hao/bingo/src/pages/api/sydney.ts deleted file mode 100644 index 0e7bbf23d77c2e1a6635185a060eeee58b8c8e66..0000000000000000000000000000000000000000 --- a/spaces/7hao/bingo/src/pages/api/sydney.ts +++ /dev/null @@ -1,62 +0,0 @@ -import { NextApiRequest, NextApiResponse } from 'next' -import { WebSocket, debug } from '@/lib/isomorphic' -import { BingWebBot } from '@/lib/bots/bing' -import { websocketUtils } from '@/lib/bots/bing/utils' -import { WatchDog, createHeaders } from '@/lib/utils' - - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - const conversationContext = req.body - const headers = createHeaders(req.cookies) - debug(headers) - res.setHeader('Content-Type', 'text/stream; charset=UTF-8') - - const ws = new WebSocket('wss://sydney.bing.com/sydney/ChatHub', { - headers: { - ...headers, - 'accept-language': 'zh-CN,zh;q=0.9', - 'cache-control': 'no-cache', - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - pragma: 'no-cache', - } - }) - - const closeDog = new WatchDog() - const timeoutDog = new WatchDog() - ws.onmessage = (event) => { - timeoutDog.watch(() => { - ws.send(websocketUtils.packMessage({ type: 6 })) - }, 1500) - closeDog.watch(() => { - ws.close() - }, 10000) - res.write(event.data) - if (/\{"type":([367])\}/.test(String(event.data))) { - const type = parseInt(RegExp.$1, 10) - debug('connection type', type) - if (type === 3) { - ws.close() - } else { - ws.send(websocketUtils.packMessage({ type })) - } - } - } - - ws.onclose = () => { - timeoutDog.reset() - closeDog.reset() - debug('connection close') - res.end() - } - - await new Promise((resolve) => ws.onopen = resolve) - ws.send(websocketUtils.packMessage({ protocol: 'json', version: 1 })) - ws.send(websocketUtils.packMessage({ type: 6 })) - ws.send(websocketUtils.packMessage(BingWebBot.buildChatRequest(conversationContext!))) - req.socket.once('close', () => { - ws.close() - if (!res.closed) { - res.end() - } - }) -} diff --git a/spaces/A00001/bingothoo/src/components/ui/icons.tsx b/spaces/A00001/bingothoo/src/components/ui/icons.tsx deleted file mode 100644 index 742b489b50437c5b64c86082f2ebc712eeb6a2b0..0000000000000000000000000000000000000000 --- a/spaces/A00001/bingothoo/src/components/ui/icons.tsx +++ /dev/null @@ -1,504 +0,0 @@ -'use client' - -import * as React from 'react' - -import { cn } from '@/lib/utils' - -function IconNextChat({ - className, - inverted, - ...props -}: React.ComponentProps<'svg'> & { inverted?: boolean }) { - const id = React.useId() - - return ( - - - - - - - - - - - - - - - - - - - - - - ) -} - -function IconOpenAI({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - OpenAI icon - - - ) -} - -function IconGitHub({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - GitHub - - - ) -} - -function IconSeparator({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - ) -} - -function IconArrowDown({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconArrowRight({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconUser({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconPlus({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconArrowElbow({ className, ...props }: 
React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSpinner({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMessage({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconTrash({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMore({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconRefresh({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconStop({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSidebar({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMoon({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSun({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconCopy({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconCheck({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconDownload({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconClose({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconEdit({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconShare({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconUsers({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconExternalLink({ - className, - ...props -}: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconChevronUpDown({ - className, - ...props -}: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -export { - IconEdit, - IconNextChat, - IconOpenAI, - IconGitHub, - IconSeparator, - IconArrowDown, - IconArrowRight, - IconUser, - IconPlus, - IconArrowElbow, - IconSpinner, - IconMessage, - IconTrash, - IconMore, - IconRefresh, - IconStop, - IconSidebar, - IconMoon, - IconSun, - IconCopy, - IconCheck, - IconDownload, - IconClose, - IconShare, - IconUsers, - IconExternalLink, - IconChevronUpDown -} diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/losses_audio/vggishish/train_melception.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/losses_audio/vggishish/train_melception.py deleted file mode 100644 index 8adc5aa6e0e32a66cdbb7b449483a3b23d9b0ef9..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/losses_audio/vggishish/train_melception.py +++ /dev/null @@ -1,241 +0,0 @@ -import random - -import numpy as np -import torch -import torchvision -from omegaconf import OmegaConf -from torch.utils.data.dataloader import DataLoader -from torchvision.models.inception import BasicConv2d, Inception3 -from tqdm import tqdm - -from dataset import VGGSound -from logger import LoggerWithTBoard -from loss import WeightedCrossEntropy -from metrics import metrics -from transforms import Crop, StandardNormalizeAudio, ToTensor - - -# TODO: refactor ./evaluation/feature_extractors/melception.py to handle this class as well. 
-# So far couldn't do it because of the difference in outputs -class Melception(Inception3): - - def __init__(self, num_classes, **kwargs): - # inception = Melception(num_classes=309) - super().__init__(num_classes=num_classes, **kwargs) - # the same as https://github.com/pytorch/vision/blob/5339e63148/torchvision/models/inception.py#L95 - # but for 1-channel input instead of RGB. - self.Conv2d_1a_3x3 = BasicConv2d(1, 32, kernel_size=3, stride=2) - # also the 'hight' of the mel spec is 80 (vs 299 in RGB) we remove all max pool from Inception - self.maxpool1 = torch.nn.Identity() - self.maxpool2 = torch.nn.Identity() - - def forward(self, x): - x = x.unsqueeze(1) - return super().forward(x) - -def train_inception_scorer(cfg): - logger = LoggerWithTBoard(cfg) - - random.seed(cfg.seed) - np.random.seed(cfg.seed) - torch.manual_seed(cfg.seed) - torch.cuda.manual_seed_all(cfg.seed) - # makes iterations faster (in this case 30%) if your inputs are of a fixed size - # https://discuss.pytorch.org/t/what-does-torch-backends-cudnn-benchmark-do/5936/3 - torch.backends.cudnn.benchmark = True - - meta_path = './data/vggsound.csv' - train_ids_path = './data/vggsound_train.txt' - cache_path = './data/' - splits_path = cache_path - - transforms = [ - StandardNormalizeAudio(cfg.mels_path, train_ids_path, cache_path), - ] - if cfg.cropped_size not in [None, 'None', 'none']: - logger.print_logger.info(f'Using cropping {cfg.cropped_size}') - transforms.append(Crop(cfg.cropped_size)) - transforms.append(ToTensor()) - transforms = torchvision.transforms.transforms.Compose(transforms) - - datasets = { - 'train': VGGSound('train', cfg.mels_path, transforms, splits_path, meta_path), - 'valid': VGGSound('valid', cfg.mels_path, transforms, splits_path, meta_path), - 'test': VGGSound('test', cfg.mels_path, transforms, splits_path, meta_path), - } - - loaders = { - 'train': DataLoader(datasets['train'], batch_size=cfg.batch_size, shuffle=True, drop_last=True, - num_workers=cfg.num_workers, pin_memory=True), - 'valid': DataLoader(datasets['valid'], batch_size=cfg.batch_size, - num_workers=cfg.num_workers, pin_memory=True), - 'test': DataLoader(datasets['test'], batch_size=cfg.batch_size, - num_workers=cfg.num_workers, pin_memory=True), - } - - device = torch.device(cfg.device if torch.cuda.is_available() else 'cpu') - - model = Melception(num_classes=len(datasets['train'].target2label)) - model = model.to(device) - param_num = logger.log_param_num(model) - - if cfg.optimizer == 'adam': - optimizer = torch.optim.Adam( - model.parameters(), lr=cfg.learning_rate, betas=cfg.betas, weight_decay=cfg.weight_decay) - elif cfg.optimizer == 'sgd': - optimizer = torch.optim.SGD( - model.parameters(), lr=cfg.learning_rate, momentum=cfg.momentum, weight_decay=cfg.weight_decay) - else: - raise NotImplementedError - - if cfg.cls_weights_in_loss: - weights = 1 / datasets['train'].class_counts - else: - weights = torch.ones(len(datasets['train'].target2label)) - criterion = WeightedCrossEntropy(weights.to(device)) - - # loop over the train and validation multiple times (typical PT boilerplate) - no_change_epochs = 0 - best_valid_loss = float('inf') - early_stop_triggered = False - - for epoch in range(cfg.num_epochs): - - for phase in ['train', 'valid']: - if phase == 'train': - model.train() - else: - model.eval() - - running_loss = 0 - preds_from_each_batch = [] - targets_from_each_batch = [] - - prog_bar = tqdm(loaders[phase], f'{phase} ({epoch})', ncols=0) - for i, batch in enumerate(prog_bar): - inputs = batch['input'].to(device) 
- targets = batch['target'].to(device) - - # zero the parameter gradients - optimizer.zero_grad() - - # forward + backward + optimize - with torch.set_grad_enabled(phase == 'train'): - # inception v3 - if phase == 'train': - outputs, aux_outputs = model(inputs) - loss1 = criterion(outputs, targets) - loss2 = criterion(aux_outputs, targets) - loss = loss1 + 0.4*loss2 - loss = criterion(outputs, targets, to_weight=True) - else: - outputs = model(inputs) - loss = criterion(outputs, targets, to_weight=False) - - if phase == 'train': - loss.backward() - optimizer.step() - - # loss - running_loss += loss.item() - - # for metrics calculation later on - preds_from_each_batch += [outputs.detach().cpu()] - targets_from_each_batch += [targets.cpu()] - - # iter logging - if i % 50 == 0: - logger.log_iter_loss(loss.item(), epoch*len(loaders[phase])+i, phase) - # tracks loss in the tqdm progress bar - prog_bar.set_postfix(loss=loss.item()) - - # logging loss - epoch_loss = running_loss / len(loaders[phase]) - logger.log_epoch_loss(epoch_loss, epoch, phase) - - # logging metrics - preds_from_each_batch = torch.cat(preds_from_each_batch) - targets_from_each_batch = torch.cat(targets_from_each_batch) - metrics_dict = metrics(targets_from_each_batch, preds_from_each_batch) - logger.log_epoch_metrics(metrics_dict, epoch, phase) - - # Early stopping - if phase == 'valid': - if epoch_loss < best_valid_loss: - no_change_epochs = 0 - best_valid_loss = epoch_loss - logger.log_best_model(model, epoch_loss, epoch, optimizer, metrics_dict) - else: - no_change_epochs += 1 - logger.print_logger.info( - f'Valid loss hasnt changed for {no_change_epochs} patience: {cfg.patience}' - ) - if no_change_epochs >= cfg.patience: - early_stop_triggered = True - - if early_stop_triggered: - logger.print_logger.info(f'Training is early stopped @ {epoch}') - break - - logger.print_logger.info('Finished Training') - - # loading the best model - ckpt = torch.load(logger.best_model_path) - model.load_state_dict(ckpt['model']) - logger.print_logger.info(f'Loading the best model from {logger.best_model_path}') - logger.print_logger.info((f'The model was trained for {ckpt["epoch"]} epochs. 
Loss: {ckpt["loss"]:.4f}')) - - # Testing the model - model.eval() - running_loss = 0 - preds_from_each_batch = [] - targets_from_each_batch = [] - - for i, batch in enumerate(loaders['test']): - inputs = batch['input'].to(device) - targets = batch['target'].to(device) - - # zero the parameter gradients - optimizer.zero_grad() - - # forward + backward + optimize - with torch.set_grad_enabled(False): - outputs = model(inputs) - loss = criterion(outputs, targets, to_weight=False) - - # loss - running_loss += loss.item() - - # for metrics calculation later on - preds_from_each_batch += [outputs.detach().cpu()] - targets_from_each_batch += [targets.cpu()] - - # logging metrics - preds_from_each_batch = torch.cat(preds_from_each_batch) - targets_from_each_batch = torch.cat(targets_from_each_batch) - test_metrics_dict = metrics(targets_from_each_batch, preds_from_each_batch) - test_metrics_dict['avg_loss'] = running_loss / len(loaders['test']) - test_metrics_dict['param_num'] = param_num - # TODO: I have no idea why tboard doesn't keep metrics (hparams) when - # I run this experiment from cli: `python train_melception.py config=./configs/vggish.yaml` - # while when I run it in vscode debugger the metrics are logger (wtf) - logger.log_test_metrics(test_metrics_dict, dict(cfg), ckpt['epoch']) - - logger.print_logger.info('Finished the experiment') - - -if __name__ == '__main__': - # input = torch.rand(16, 1, 80, 848) - # output, aux = inception(input) - # print(output.shape, aux.shape) - # Expected input size: (3, 299, 299) in RGB -> (1, 80, 848) in Mel Spec - # train_inception_scorer() - - cfg_cli = OmegaConf.from_cli() - cfg_yml = OmegaConf.load(cfg_cli.config) - # the latter arguments are prioritized - cfg = OmegaConf.merge(cfg_yml, cfg_cli) - OmegaConf.set_readonly(cfg, True) - print(OmegaConf.to_yaml(cfg)) - - train_inception_scorer(cfg) diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-210e_deepfashion2_sling_dress_256x192/__init__.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-210e_deepfashion2_sling_dress_256x192/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/hr_4xb32_1024e_4channel.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/hr_4xb32_1024e_4channel.py deleted file mode 100644 index 0a4b8d3739c2612647dbd0a01b43d709e51a7da4..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/hr_4xb32_1024e_4channel.py +++ /dev/null @@ -1,115 +0,0 @@ -_base_ = [ # 此配置文件将继承所有 `_base_` 中的配置 - '../configs/_base_/schedules/custom_schedule.py', # 训练策略配置 - '../configs/_base_/default_runtime.py' # 默认运行设置 -] - -default_hooks = dict( - # print log every 50 iterations. - logger=dict(type='LoggerHook', interval=10), - # save checkpoint per 8 epochs. 
- checkpoint=dict(save_best='auto', interval=16) -) - -visualizer = dict( - vis_backends=[dict(type='LocalVisBackend'), - dict(type='WandbVisBackend')]) - -dataset_type = 'CustomDataset' - -# config of pipline -train_pipeline = [ - dict(type='LoadImageFromFile', imdecode_backend='pillow', color_type='unchanged'), # 读取图像 - dict(type='RandomResizedCrop', scale=224), # 随机放缩裁剪 - dict(type='RandomFlip', prob=0.5, direction='horizontal'), # 随机水平翻转 - dict(type='PackInputs'), # 准备图像以及标签 -] - -test_pipeline = [ - dict(type='LoadImageFromFile', imdecode_backend='pillow', color_type='unchanged'), # 读取图像 - dict(type='ResizeEdge', scale=256, edge='short'), # 缩放短边尺寸至 256px - dict(type='CenterCrop', crop_size=224), # 中心裁剪 - dict(type='PackInputs'), # 准备图像以及标签 -] - -# config of dataloader -train_dataloader = dict( - batch_size=32, # 每张 GPU 的 batchsize - num_workers=5, # 每个 GPU 的线程数 - dataset=dict( # 训练数据集 - type=dataset_type, - data_root='../2_preprocess_data_3000', - with_label=True, - ann_file='', - data_prefix='train', - pipeline=train_pipeline), - sampler=dict(type='DefaultSampler', shuffle=True), # 默认采样器 - persistent_workers=True, # 是否保持进程,可以缩短每个 epoch 的准备时间 -) - -# 构造验证集 dataloader -val_dataloader = dict( - batch_size=32, - num_workers=5, - dataset=dict( - type=dataset_type, - data_root='../2_preprocess_data_3000', - with_label=True, - ann_file='', - data_prefix='val', - pipeline=test_pipeline), - sampler=dict(type='DefaultSampler', shuffle=False), - persistent_workers=True, -) - -# set evaluator of validation dataset. Here uses top1 and top3 accuracy -val_evaluator = dict(type='Accuracy', topk=(1, 3)) - -test_dataloader = val_dataloader -test_evaluator = val_evaluator - -model = dict( - type='ImageClassifier', # 主模型类型(对于图像分类任务,使用 `ImageClassifier`) - backbone=dict( - type='HRNet', # 主干网络类型 - arch='w32', # 主干网络架构 - in_channels=4, - extra=dict( - stage1=dict( - num_modules=1, - num_branches=1, - block='BOTTLENECK', - num_blocks=(4, ), - num_channels=(64, )), - stage2=dict( - num_modules=1, - num_branches=2, - block='BASIC', - num_blocks=(4, 4), - num_channels=(32, 64)), - stage3=dict( - num_modules=4, - num_branches=3, - block='BASIC', - num_blocks=(4, 4, 4), - num_channels=(32, 64, 128)), - stage4=dict( - num_modules=3, - num_branches=4, - block='BASIC', - num_blocks=(4, 4, 4, 4), - num_channels=(32, 64, 128, 256))), - ), - neck=dict(type='GlobalAveragePooling'), # 颈网络类型 - head=dict( - type='LinearClsHead', # 分类颈网络类型 - # 除了 `type` 之外的所有字段都来自 `LinearClsHead` 类的 __init__ 方法 - # 可查阅 https://mmpretrain.readthedocs.io/zh_CN/latest/api/generated/mmpretrain.models.heads.LinearClsHead.html - num_classes=7, # 分类类别数 - in_channels=256, - loss=dict(type='CrossEntropyLoss', loss_weight=1.0), # 损失函数配置信息 - topk=(1, 3), # 评估指标,Top-k 准确率 - )) - -optim_wrapper = dict( - accumulative_counts=8 -) diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/deprecated/Lockchat.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/deprecated/Lockchat.py deleted file mode 100644 index 4bd7c5fe57c3f5d8b210c901440ab0be8d17b51b..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/deprecated/Lockchat.py +++ /dev/null @@ -1,64 +0,0 @@ -from __future__ import annotations - -import json - -import requests - -from ...typing import Any, CreateResult -from ..base_provider import BaseProvider - - -class Lockchat(BaseProvider): - url: str = "http://supertest.lockchat.app" - supports_stream = True - supports_gpt_35_turbo = True - supports_gpt_4 = True - - @staticmethod - def 
create_completion( - model: str, - messages: list[dict[str, str]], - stream: bool, **kwargs: Any) -> CreateResult: - - temperature = float(kwargs.get("temperature", 0.7)) - payload = { - "temperature": temperature, - "messages" : messages, - "model" : model, - "stream" : True, - } - - headers = { - "user-agent": "ChatX/39 CFNetwork/1408.0.4 Darwin/22.5.0", - } - response = requests.post("http://supertest.lockchat.app/v1/chat/completions", - json=payload, headers=headers, stream=True) - - response.raise_for_status() - for token in response.iter_lines(): - if b"The model: `gpt-4` does not exist" in token: - print("error, retrying...") - Lockchat.create_completion( - model = model, - messages = messages, - stream = stream, - temperature = temperature, - **kwargs) - - if b"content" in token: - token = json.loads(token.decode("utf-8").split("data: ")[1]) - token = token["choices"][0]["delta"].get("content") - if token: - yield (token) - - @classmethod - @property - def params(cls): - params = [ - ("model", "str"), - ("messages", "list[dict[str, str]]"), - ("stream", "bool"), - ("temperature", "float"), - ] - param = ", ".join([": ".join(p) for p in params]) - return f"g4f.provider.{cls.__name__} supports: ({param})" diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/unfinished/PerplexityAi.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/unfinished/PerplexityAi.py deleted file mode 100644 index 3e0968084ea65f46a30d8f7527c63ae4efd1b146..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/unfinished/PerplexityAi.py +++ /dev/null @@ -1,100 +0,0 @@ -from __future__ import annotations - -import json -import time -import base64 -from curl_cffi.requests import AsyncSession - -from ..base_provider import AsyncProvider, format_prompt, get_cookies - - -class PerplexityAi(AsyncProvider): - url = "https://www.perplexity.ai" - supports_gpt_35_turbo = True - _sources = [] - - @classmethod - async def create_async( - cls, - model: str, - messages: list[dict[str, str]], - proxy: str = None, - **kwargs - ) -> str: - url = cls.url + "/socket.io/?EIO=4&transport=polling" - headers = { - "Referer": f"{cls.url}/" - } - async with AsyncSession(headers=headers, proxies={"https": proxy}, impersonate="chrome107") as session: - url_session = "https://www.perplexity.ai/api/auth/session" - response = await session.get(url_session) - response.raise_for_status() - - url_session = "https://www.perplexity.ai/api/auth/session" - response = await session.get(url_session) - response.raise_for_status() - - response = await session.get(url, params={"t": timestamp()}) - response.raise_for_status() - sid = json.loads(response.text[1:])["sid"] - - response = await session.get(url, params={"t": timestamp(), "sid": sid}) - response.raise_for_status() - - data = '40{"jwt":"anonymous-ask-user"}' - response = await session.post(url, params={"t": timestamp(), "sid": sid}, data=data) - response.raise_for_status() - - response = await session.get(url, params={"t": timestamp(), "sid": sid}) - response.raise_for_status() - - data = "424" + json.dumps([ - "perplexity_ask", - format_prompt(messages), - { - "version":"2.1", - "source":"default", - "language":"en", - "timezone": time.tzname[0], - "search_focus":"internet", - "mode":"concise" - } - ]) - response = await session.post(url, params={"t": timestamp(), "sid": sid}, data=data) - response.raise_for_status() - - while True: - response = await session.get(url, params={"t": timestamp(), "sid": sid}) - response.raise_for_status() - for line in 
response.text.splitlines(): - if line.startswith("434"): - result = json.loads(json.loads(line[3:])[0]["text"]) - - cls._sources = [{ - "title": source["name"], - "url": source["url"], - "snippet": source["snippet"] - } for source in result["web_results"]] - - return result["answer"] - - @classmethod - def get_sources(cls): - return cls._sources - - - @classmethod - @property - def params(cls): - params = [ - ("model", "str"), - ("messages", "list[dict[str, str]]"), - ("stream", "bool"), - ("proxy", "str"), - ] - param = ", ".join([": ".join(p) for p in params]) - return f"g4f.provider.{cls.__name__} supports: ({param})" - - -def timestamp() -> str: - return base64.urlsafe_b64encode(int(time.time()-1407782612).to_bytes(4, 'big')).decode() \ No newline at end of file diff --git a/spaces/Adapter/T2I-Adapter/ldm/modules/image_degradation/__init__.py b/spaces/Adapter/T2I-Adapter/ldm/modules/image_degradation/__init__.py deleted file mode 100644 index 7836cada81f90ded99c58d5942eea4c3477f58fc..0000000000000000000000000000000000000000 --- a/spaces/Adapter/T2I-Adapter/ldm/modules/image_degradation/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from ldm.modules.image_degradation.bsrgan import degradation_bsrgan_variant as degradation_fn_bsr -from ldm.modules.image_degradation.bsrgan_light import degradation_bsrgan_variant as degradation_fn_bsr_light diff --git a/spaces/AkashKhamkar/QnA-generator/app.py b/spaces/AkashKhamkar/QnA-generator/app.py deleted file mode 100644 index 38b8fe289d8a01f0f521be35e61acbaf4f1eb906..0000000000000000000000000000000000000000 --- a/spaces/AkashKhamkar/QnA-generator/app.py +++ /dev/null @@ -1,188 +0,0 @@ -import streamlit as st -from youtube_transcript_api import YouTubeTranscriptApi -from transformers import AutoTokenizer, AutoModelWithLMHead -import torch -import nltk -import before_run -#nltk.download('wordnet') -#nltk.download('punkt') -#nltk.download('brown') -#nltk.download('stopwords') -from nltk.tokenize import sent_tokenize -from flashtext import KeywordProcessor -from nltk.corpus import stopwords -from urllib import response -import requests -import string -import traceback -import pke - -link = "http://127.0.0.1:8000/question" - -summary_tokenizer = AutoTokenizer.from_pretrained("t5-base") -summary_model = AutoModelWithLMHead.from_pretrained("t5-base") -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -summary_model = summary_model.to(device) -question_model = AutoModelWithLMHead.from_pretrained('ramsrigouthamg/t5_squad_v1') -question_tokenizer = AutoTokenizer.from_pretrained('ramsrigouthamg/t5_squad_v1') -question_model = question_model.to(device) - - -def query(url, payload): - return requests.post(url, json=payload) - -def fetch_transcript(url): - vid = url.split("=")[1] - transcript = YouTubeTranscriptApi.get_transcript(vid) - result = "" - for i in transcript: - result += ' ' + i['text'] - return result - -def postprocesstext (content): - final="" - for sent in sent_tokenize(content): - sent = sent.capitalize() - final = final +" "+sent - return final - - -def summarizer(text,model,tokenizer): - text = text.strip().replace("\n"," ") - text = "summarize: "+text - # print (text) - max_len = 512 - encoding = tokenizer.encode_plus(text,max_length=max_len, pad_to_max_length=False,truncation=True, return_tensors="pt").to(device) - - input_ids, attention_mask = encoding["input_ids"], encoding["attention_mask"] - - outs = model.generate(input_ids=input_ids, - attention_mask=attention_mask, - early_stopping=True, - num_beams=3, - 
num_return_sequences=1, - no_repeat_ngram_size=2, - min_length = 75, - max_length=300) - - - dec = [tokenizer.decode(ids,skip_special_tokens=True) for ids in outs] - summary = dec[0] - summary = postprocesstext(summary) - summary= summary.strip() - - return summary - -def get_nouns_multipartite(content): - out=[] - try: - extractor = pke.unsupervised.MultipartiteRank() - stoplist = list(string.punctuation) - stoplist += ['-lrb-', '-rrb-', '-lcb-', '-rcb-', '-lsb-', '-rsb-'] - stoplist += stopwords.words('english') - extractor.load_document(input=content, stoplist=stoplist) - # not contain punctuation marks or stopwords as candidates. - pos = {'PROPN','NOUN'} - - - extractor.candidate_selection(pos=pos) - - extractor.candidate_weighting(alpha=1.1, - threshold=0.75, - method='average') - keyphrases = extractor.get_n_best(n=15) - - - for val in keyphrases: - out.append(val[0]) - except: - out = [] - traceback.print_exc() - - return out - -def get_keywords(originaltext,summarytext,count): - keywords = get_nouns_multipartite(originaltext) - print ("keywords unsummarized: ",keywords) - keyword_processor = KeywordProcessor() - for keyword in keywords: - keyword_processor.add_keyword(keyword) - - keywords_found = keyword_processor.extract_keywords(summarytext) - keywords_found = list(set(keywords_found)) - print ("keywords_found in summarized: ",keywords_found) - - important_keywords =[] - for keyword in keywords: - if keyword in keywords_found: - important_keywords.append(keyword) - - return important_keywords[:int(count)] - -def get_question(context,answer,model,tokenizer): - text = "context: {} answer: {}".format(context,answer) - encoding = tokenizer.encode_plus(text,max_length=384, pad_to_max_length=False,truncation=True, return_tensors="pt").to(device) - input_ids, attention_mask = encoding["input_ids"], encoding["attention_mask"] - - outs = model.generate(input_ids=input_ids, - attention_mask=attention_mask, - early_stopping=True, - num_beams=5, - num_return_sequences=1, - no_repeat_ngram_size=2, - max_length=72) - - - dec = [tokenizer.decode(ids,skip_special_tokens=True) for ids in outs] - - - Question = dec[0].replace("question:","") - Question= Question.strip() - return Question - -def all(url,count): - transcript = fetch_transcript(url) - summarized_text = summarizer(transcript, summary_model, summary_tokenizer) - keywords = get_keywords(transcript,summarized_text,count) - qna = [] - for answer in keywords: - qna.append(get_question(summarized_text,answer,question_model,question_tokenizer)+' : '+answer) - - return qna - - - -def main(): - - if 'submitted' not in st.session_state: - st.session_state.submitted = False - - if 'opt' not in st.session_state: - st.session_state.opt = [] - - def callback(): - st.session_state.submitted = True - - st.title('QnA pair Generator') - url = st.text_input('Enter the Video Link') - count = st.text_input('Enter the number of questions you want to generate') - - if (st.button("Submit URL", on_click=callback) and url and count) : - st.write("Thanks for submission !") - opt = all(url, count) - st.session_state.opt = opt - - if st.session_state.submitted and st.session_state.opt: - option = st.multiselect('Select the question you want to add to database ', st.session_state.opt) - if option: - if st.button("Add question"): - for i in range(len(option)): - files = { - "question": option[i].split(":")[0], - "answer": option[i].split(":")[1] - } - response = query(link, files) - st.write(response.text) - - -main() diff --git 
a/spaces/Aki004/herta-so-vits/app.py b/spaces/Aki004/herta-so-vits/app.py deleted file mode 100644 index 19f4d83597ffedda46c0e11e643a91a9fd9f61f7..0000000000000000000000000000000000000000 --- a/spaces/Aki004/herta-so-vits/app.py +++ /dev/null @@ -1,93 +0,0 @@ -import gradio as gr -import edge_tts -import asyncio -import librosa -import soundfile -import io -import argparse -import numpy as np -from inference.infer_tool import Svc - -def get_or_create_eventloop(): - try: - return asyncio.get_event_loop() - except RuntimeError as ex: - if "There is no current event loop in thread" in str(ex): - loop = asyncio.new_event_loop() - asyncio.set_event_loop(loop) - return asyncio.get_event_loop() - -def tts_get_voices_list(): - voices = [] - tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices()) - for item in tts_voice_list: - voices.append(item['ShortName']) - - return voices - -def infer(txt, tts_voice, input_audio, predict_f0, audio_mode): - if audio_mode: - if input_audio is None: - return 'Please upload your audio file' - - sampling_rate, audio = input_audio - duration = audio.shape[0] / sampling_rate - - if duration > 30: - return 'The audio file is too long, Please upload audio file that less than 30 seconds' - - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - - raw_path = io.BytesIO() - soundfile.write(raw_path, audio, 16000, format="wav") - raw_path.seek(0) - model = Svc(fr"Herta-Svc/G_10000.pth", f"Herta-Svc/config.json", device = 'cpu') - out_audio, out_sr = model.infer('speaker0', 0, raw_path, auto_predict_f0 = predict_f0,) - return (44100, out_audio.cpu().numpy()) - - tts = asyncio.run(edge_tts.Communicate(txt, tts_voice).save('audio.mp3')) - audio, sr = librosa.load('audio.mp3', sr=16000, mono=True) - raw_path = io.BytesIO() - soundfile.write(raw_path, audio, 16000, format="wav") - raw_path.seek(0) - model = Svc(fr"Herta-Svc/G_10000.pth", f"Herta-Svc/config.json", device = 'cpu') - out_audio, out_sr = model.infer('speaker0', 0, raw_path, auto_predict_f0 = True,) - return (44100, out_audio.cpu().numpy()) - -def change_to_audio_mode(audio_mode): - if audio_mode: - return gr.Audio.update(visible = True), gr.Textbox.update(visible= False), gr.Dropdown.update(visible = False), gr.Checkbox.update(value = True) - else: - return gr.Audio.update(visible = False), gr.Textbox.update(visible= True), gr.Dropdown.update(visible = True), gr.Checkbox.update(value = False) - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--device', type=str, default='cpu') - parser.add_argument('--api', action="store_true", default=False) - parser.add_argument("--share", action="store_true", default=False, help="share gradio app") - args = parser.parse_args() - loop = asyncio.new_event_loop() - asyncio.set_event_loop(loop) - with gr.Blocks() as app: - with gr.Tabs(): - with gr.TabItem('Herta'): - title = gr.Label('Herta Sovits Model') - cover = gr.Markdown('
    ' - f'' - '
    ') - tts_text = gr.Textbox(label="TTS text (100 words limitation)") - audio_input = gr.Audio(label = 'Please upload audio file that less than 30 seconds', visible = False) - tts_voice = gr.Dropdown(choices= tts_get_voices_list()) - predict_f0 = gr.Checkbox(label = 'Auto predict F0', value = False) - audio_mode = gr.Checkbox(label = 'Upload audio instead', value = False) - audio_output = gr.Audio(label="Output Audio") - btn_submit = gr.Button("Generate") - - btn_submit.click(infer, [tts_text, tts_voice, audio_input, predict_f0, audio_mode], [audio_output]) - audio_mode.change(change_to_audio_mode, audio_mode, [audio_input, tts_text, tts_voice]) - - app.queue(concurrency_count=1, api_open=args.api).launch(share=args.share) diff --git a/spaces/AlekseyKorshuk/instagram-filter-removal/configs/default.py b/spaces/AlekseyKorshuk/instagram-filter-removal/configs/default.py deleted file mode 100644 index b46610ba35392fdbab20e208d28b5d93d1dcf547..0000000000000000000000000000000000000000 --- a/spaces/AlekseyKorshuk/instagram-filter-removal/configs/default.py +++ /dev/null @@ -1,88 +0,0 @@ -from yacs.config import CfgNode as CN - -_C = CN() - -_C.SYSTEM = CN() -_C.SYSTEM.NUM_GPU = 2 -_C.SYSTEM.NUM_WORKERS = 4 - -_C.WANDB = CN() -_C.WANDB.PROJECT_NAME = "instagram-filter-removal" -_C.WANDB.ENTITY = "vvgl-ozu" -_C.WANDB.RUN = 12 -_C.WANDB.LOG_DIR = "" -_C.WANDB.NUM_ROW = 0 - -_C.TRAIN = CN() -_C.TRAIN.NUM_TOTAL_STEP = 120000 -_C.TRAIN.START_STEP = 0 -_C.TRAIN.BATCH_SIZE = 8 -_C.TRAIN.SHUFFLE = True -_C.TRAIN.LOG_INTERVAL = 100 -_C.TRAIN.SAVE_INTERVAL = 5000 -_C.TRAIN.SAVE_DIR = "./weights" -_C.TRAIN.RESUME = True -_C.TRAIN.VISUALIZE_INTERVAL = 100 -_C.TRAIN.TUNE = False - -_C.MODEL = CN() -_C.MODEL.NAME = "ifr-no-aux" -_C.MODEL.IS_TRAIN = True -_C.MODEL.NUM_CLASS = 17 -_C.MODEL.CKPT = "" - -_C.MODEL.IFR = CN() -_C.MODEL.IFR.NAME = "InstaFilterRemovalNetwork" -_C.MODEL.IFR.NUM_CHANNELS = 32 -_C.MODEL.IFR.DESTYLER_CHANNELS = 32 -_C.MODEL.IFR.SOLVER = CN() -_C.MODEL.IFR.SOLVER.LR = 2e-4 -_C.MODEL.IFR.SOLVER.BETAS = (0.5, 0.9) -_C.MODEL.IFR.SOLVER.SCHEDULER = [] -_C.MODEL.IFR.SOLVER.DECAY_RATE = 0. - -_C.MODEL.D = CN() -_C.MODEL.D.NAME = "1-ChOutputDiscriminator" -_C.MODEL.D.NUM_CHANNELS = 32 -_C.MODEL.D.NUM_CRITICS = 5 -_C.MODEL.D.SOLVER = CN() -_C.MODEL.D.SOLVER.LR = 1e-3 -_C.MODEL.D.SOLVER.BETAS = (0.5, 0.9) -_C.MODEL.D.SOLVER.SCHEDULER = [] -_C.MODEL.D.SOLVER.DECAY_RATE = 0.5 - -_C.OPTIM = CN() -_C.OPTIM.GP = 10 -_C.OPTIM.MASK = 1 -_C.OPTIM.RECON = 1.4 -_C.OPTIM.SEMANTIC = 1e-4 -_C.OPTIM.TEXTURE = 1e-3 -_C.OPTIM.ADVERSARIAL = 1e-3 -_C.OPTIM.AUX = 0.5 - -_C.DATASET = CN() -_C.DATASET.NAME = "IFFI" # "IFFI" # "DIV2K?" 
# -_C.DATASET.ROOT = "../../Datasets/IFFI-dataset/train" # "../../Datasets/IFFI-dataset" # "/media/birdortyedi/e5042b8f-ca5e-4a22-ac68-7e69ff648bc4/IFFI-dataset" -_C.DATASET.TEST_ROOT = "../../Datasets/IFFI-dataset" -_C.DATASET.SIZE = 256 -_C.DATASET.CROP_SIZE = 512 -_C.DATASET.MEAN = [0.5, 0.5, 0.5] -_C.DATASET.STD = [0.5, 0.5, 0.5] - -_C.TEST = CN() -_C.TEST.OUTPUT_DIR = "./outputs" -_C.TEST.ABLATION = False -_C.TEST.WEIGHTS = "" -_C.TEST.BATCH_SIZE = 64 -_C.TEST.IMG_ID = 52 - - -def get_cfg_defaults(): - """Get a yacs CfgNode object with default values for my_project.""" - # Return a clone so that the defaults will not be altered - # This is for the "local variable" use pattern - return _C.clone() - - -# provide a way to import the defaults as a global singleton: -cfg = _C # users can `from config import cfg` diff --git a/spaces/Andy1621/uniformer_image_detection/configs/guided_anchoring/ga_retinanet_x101_64x4d_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/guided_anchoring/ga_retinanet_x101_64x4d_fpn_1x_coco.py deleted file mode 100644 index 1b18c2ba41d1493380bab3515be8e29547988ebf..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/guided_anchoring/ga_retinanet_x101_64x4d_fpn_1x_coco.py +++ /dev/null @@ -1,13 +0,0 @@ -_base_ = './ga_retinanet_r50_fpn_1x_coco.py' -model = dict( - pretrained='open-mmlab://resnext101_64x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=64, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - style='pytorch')) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/evaluation/recall.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/evaluation/recall.py deleted file mode 100644 index 23ec744f552db1a4a76bfa63b7cc8b357deb3140..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/evaluation/recall.py +++ /dev/null @@ -1,189 +0,0 @@ -from collections.abc import Sequence - -import numpy as np -from mmcv.utils import print_log -from terminaltables import AsciiTable - -from .bbox_overlaps import bbox_overlaps - - -def _recalls(all_ious, proposal_nums, thrs): - - img_num = all_ious.shape[0] - total_gt_num = sum([ious.shape[0] for ious in all_ious]) - - _ious = np.zeros((proposal_nums.size, total_gt_num), dtype=np.float32) - for k, proposal_num in enumerate(proposal_nums): - tmp_ious = np.zeros(0) - for i in range(img_num): - ious = all_ious[i][:, :proposal_num].copy() - gt_ious = np.zeros((ious.shape[0])) - if ious.size == 0: - tmp_ious = np.hstack((tmp_ious, gt_ious)) - continue - for j in range(ious.shape[0]): - gt_max_overlaps = ious.argmax(axis=1) - max_ious = ious[np.arange(0, ious.shape[0]), gt_max_overlaps] - gt_idx = max_ious.argmax() - gt_ious[j] = max_ious[gt_idx] - box_idx = gt_max_overlaps[gt_idx] - ious[gt_idx, :] = -1 - ious[:, box_idx] = -1 - tmp_ious = np.hstack((tmp_ious, gt_ious)) - _ious[k, :] = tmp_ious - - _ious = np.fliplr(np.sort(_ious, axis=1)) - recalls = np.zeros((proposal_nums.size, thrs.size)) - for i, thr in enumerate(thrs): - recalls[:, i] = (_ious >= thr).sum(axis=1) / float(total_gt_num) - - return recalls - - -def set_recall_param(proposal_nums, iou_thrs): - """Check proposal_nums and iou_thrs and set correct format.""" - if isinstance(proposal_nums, Sequence): - _proposal_nums = np.array(proposal_nums) - elif isinstance(proposal_nums, int): - _proposal_nums = np.array([proposal_nums]) - else: - _proposal_nums = 
proposal_nums - - if iou_thrs is None: - _iou_thrs = np.array([0.5]) - elif isinstance(iou_thrs, Sequence): - _iou_thrs = np.array(iou_thrs) - elif isinstance(iou_thrs, float): - _iou_thrs = np.array([iou_thrs]) - else: - _iou_thrs = iou_thrs - - return _proposal_nums, _iou_thrs - - -def eval_recalls(gts, - proposals, - proposal_nums=None, - iou_thrs=0.5, - logger=None): - """Calculate recalls. - - Args: - gts (list[ndarray]): a list of arrays of shape (n, 4) - proposals (list[ndarray]): a list of arrays of shape (k, 4) or (k, 5) - proposal_nums (int | Sequence[int]): Top N proposals to be evaluated. - iou_thrs (float | Sequence[float]): IoU thresholds. Default: 0.5. - logger (logging.Logger | str | None): The way to print the recall - summary. See `mmcv.utils.print_log()` for details. Default: None. - - Returns: - ndarray: recalls of different ious and proposal nums - """ - - img_num = len(gts) - assert img_num == len(proposals) - - proposal_nums, iou_thrs = set_recall_param(proposal_nums, iou_thrs) - - all_ious = [] - for i in range(img_num): - if proposals[i].ndim == 2 and proposals[i].shape[1] == 5: - scores = proposals[i][:, 4] - sort_idx = np.argsort(scores)[::-1] - img_proposal = proposals[i][sort_idx, :] - else: - img_proposal = proposals[i] - prop_num = min(img_proposal.shape[0], proposal_nums[-1]) - if gts[i] is None or gts[i].shape[0] == 0: - ious = np.zeros((0, img_proposal.shape[0]), dtype=np.float32) - else: - ious = bbox_overlaps(gts[i], img_proposal[:prop_num, :4]) - all_ious.append(ious) - all_ious = np.array(all_ious) - recalls = _recalls(all_ious, proposal_nums, iou_thrs) - - print_recall_summary(recalls, proposal_nums, iou_thrs, logger=logger) - return recalls - - -def print_recall_summary(recalls, - proposal_nums, - iou_thrs, - row_idxs=None, - col_idxs=None, - logger=None): - """Print recalls in a table. - - Args: - recalls (ndarray): calculated from `bbox_recalls` - proposal_nums (ndarray or list): top N proposals - iou_thrs (ndarray or list): iou thresholds - row_idxs (ndarray): which rows(proposal nums) to print - col_idxs (ndarray): which cols(iou thresholds) to print - logger (logging.Logger | str | None): The way to print the recall - summary. See `mmcv.utils.print_log()` for details. Default: None. - """ - proposal_nums = np.array(proposal_nums, dtype=np.int32) - iou_thrs = np.array(iou_thrs) - if row_idxs is None: - row_idxs = np.arange(proposal_nums.size) - if col_idxs is None: - col_idxs = np.arange(iou_thrs.size) - row_header = [''] + iou_thrs[col_idxs].tolist() - table_data = [row_header] - for i, num in enumerate(proposal_nums[row_idxs]): - row = [f'{val:.3f}' for val in recalls[row_idxs[i], col_idxs].tolist()] - row.insert(0, num) - table_data.append(row) - table = AsciiTable(table_data) - print_log('\n' + table.table, logger=logger) - - -def plot_num_recall(recalls, proposal_nums): - """Plot Proposal_num-Recalls curve. - - Args: - recalls(ndarray or list): shape (k,) - proposal_nums(ndarray or list): same shape as `recalls` - """ - if isinstance(proposal_nums, np.ndarray): - _proposal_nums = proposal_nums.tolist() - else: - _proposal_nums = proposal_nums - if isinstance(recalls, np.ndarray): - _recalls = recalls.tolist() - else: - _recalls = recalls - - import matplotlib.pyplot as plt - f = plt.figure() - plt.plot([0] + _proposal_nums, [0] + _recalls) - plt.xlabel('Proposal num') - plt.ylabel('Recall') - plt.axis([0, proposal_nums.max(), 0, 1]) - f.show() - - -def plot_iou_recall(recalls, iou_thrs): - """Plot IoU-Recalls curve. 
- - Args: - recalls(ndarray or list): shape (k,) - iou_thrs(ndarray or list): same shape as `recalls` - """ - if isinstance(iou_thrs, np.ndarray): - _iou_thrs = iou_thrs.tolist() - else: - _iou_thrs = iou_thrs - if isinstance(recalls, np.ndarray): - _recalls = recalls.tolist() - else: - _recalls = recalls - - import matplotlib.pyplot as plt - f = plt.figure() - plt.plot(_iou_thrs + [1.0], _recalls + [0.]) - plt.xlabel('IoU') - plt.ylabel('Recall') - plt.axis([iou_thrs.min(), 1, 0, 1]) - f.show() diff --git a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/scripts/test.sh b/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/scripts/test.sh deleted file mode 100644 index b70d6e93277362e285e86d65e7fdf066f3cb88a2..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/scripts/test.sh +++ /dev/null @@ -1,15 +0,0 @@ -python test.py \ ---name celeba \ ---img_file ./examples/imagenet/img/ \ ---mask_file ./examples/imagenet/mask/ \ ---results_dir ./results \ ---model tc \ ---coarse_or_refine refine \ ---gpu_id 0 \ ---no_shuffle \ ---batch_size 1 \ ---preprocess scale_shortside \ ---mask_type 3 \ ---load_size 512 \ ---attn_G \ ---add_noise diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/configs/_base_/datasets/cityscapes.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/configs/_base_/datasets/cityscapes.py deleted file mode 100644 index f21867c63e1835f6fceb61f066e802fd8fd2a735..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/configs/_base_/datasets/cityscapes.py +++ /dev/null @@ -1,54 +0,0 @@ -# dataset settings -dataset_type = 'CityscapesDataset' -data_root = 'data/cityscapes/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -crop_size = (512, 1024) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations'), - dict(type='Resize', img_scale=(2048, 1024), ratio_range=(0.5, 2.0)), - dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), - dict(type='RandomFlip', prob=0.5), - dict(type='PhotoMetricDistortion'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_semantic_seg']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(2048, 1024), - # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75], - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type=dataset_type, - data_root=data_root, - img_dir='leftImg8bit/train', - ann_dir='gtFine/train', - pipeline=train_pipeline), - val=dict( - type=dataset_type, - data_root=data_root, - img_dir='leftImg8bit/val', - ann_dir='gtFine/val', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - data_root=data_root, - img_dir='leftImg8bit/val', - ann_dir='gtFine/val', - pipeline=test_pipeline)) diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/parallel/__init__.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/parallel/__init__.py deleted file mode 100644 index 
2ed2c17ad357742e423beeaf4d35db03fe9af469..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/parallel/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .collate import collate -from .data_container import DataContainer -from .data_parallel import MMDataParallel -from .distributed import MMDistributedDataParallel -from .registry import MODULE_WRAPPERS -from .scatter_gather import scatter, scatter_kwargs -from .utils import is_module_wrapper - -__all__ = [ - 'collate', 'DataContainer', 'MMDataParallel', 'MMDistributedDataParallel', - 'scatter', 'scatter_kwargs', 'is_module_wrapper', 'MODULE_WRAPPERS' -] diff --git a/spaces/Armandoliv/whisper-biomedical-ner/app.py b/spaces/Armandoliv/whisper-biomedical-ner/app.py deleted file mode 100644 index 29a9c5a91da4c3ae1080ea62664e8ea1f41e9deb..0000000000000000000000000000000000000000 --- a/spaces/Armandoliv/whisper-biomedical-ner/app.py +++ /dev/null @@ -1,56 +0,0 @@ -import gradio as gr -import torch -import spacy -import os -import whisper - -os.system('pip install https://huggingface.co/Armandoliv/es_pipeline/resolve/main/es_pipeline-any-py3-none-any.whl') -device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") -model_whisper = whisper.load_model("small") -nlp_ner = spacy.load("es_pipeline") - -def main_generator(youtube_id:str): - YouTubeID = youtube_id.split("https://www.youtube.com/watch?v=") # - if len(YouTubeID)>1: - YouTubeID = YouTubeID[1] - else: - YouTubeID ='XfyGv-xwjlI' - - OutputFile = f'test_audio_youtube_{YouTubeID}.m4a' - - os.system(f"youtube-dl -o {OutputFile} {YouTubeID} --extract-audio --restrict-filenames -f 'bestaudio[ext=m4a]'") - - result = model_whisper.transcribe(OutputFile) - text = result['text'] - doc = nlp_ner(text) - - output_list = [] - for ent in doc.ents: - result_dict = { - 'entity': ent.label_, - 'word': ent.text, - 'start':ent.start_char, - 'end': ent.end_char - } - output_list.append(result_dict) - - return {"text": text, "entities": output_list} -inputs = [gr.Textbox(lines=1, placeholder="Link of youtube video here...", label="Input")] -outputs = gr.HighlightedText() -title="ASR FOR SPANISH MEDICAL RECORDS" -description = "This demo uses AI Models to create an AUDIO ANNOTATION FOR MEDICAL RECORDS" -examples = ['https://www.youtube.com/watch?v=xOZM-1p-jAk'] - -io = gr.Interface(fn=main_generator, inputs=inputs, outputs=outputs, title=title, description = description, examples = examples, - - css= """.gr-button-primary { background: -webkit-linear-gradient( - 90deg, #355764 0%, #55a8a1 100% ) !important; background: #355764; - background: linear-gradient( - 90deg, #355764 0%, #55a8a1 100% ) !important; - background: -moz-linear-gradient( 90deg, #355764 0%, #55a8a1 100% ) !important; - background: -webkit-linear-gradient( - 90deg, #355764 0%, #55a8a1 100% ) !important; - color:white !important}""" - ) - -io.launch() \ No newline at end of file diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/_log.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/_log.py deleted file mode 100644 index 92c4c6a193873ce09629f6cfaa2dabc4f14ecb03..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/_log.py +++ /dev/null @@ -1,38 +0,0 @@ -"""Customize logging - -Defines custom logger class for the `logger.verbose(...)` 
method. - -init_logging() must be called before any other modules that call logging.getLogger. -""" - -import logging -from typing import Any, cast - -# custom log level for `--verbose` output -# between DEBUG and INFO -VERBOSE = 15 - - -class VerboseLogger(logging.Logger): - """Custom Logger, defining a verbose log-level - - VERBOSE is between INFO and DEBUG. - """ - - def verbose(self, msg: str, *args: Any, **kwargs: Any) -> None: - return self.log(VERBOSE, msg, *args, **kwargs) - - -def getLogger(name: str) -> VerboseLogger: - """logging.getLogger, but ensures our VerboseLogger class is returned""" - return cast(VerboseLogger, logging.getLogger(name)) - - -def init_logging() -> None: - """Register our VerboseLogger and VERBOSE log level. - - Should be called before any calls to getLogger(), - i.e. in pip._internal.__init__ - """ - logging.setLoggerClass(VerboseLogger) - logging.addLevelName(VERBOSE, "VERBOSE") diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/util/response.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/util/response.py deleted file mode 100644 index 5ea609ccedf18eb4ab70f8fc6990448eb6407237..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/util/response.py +++ /dev/null @@ -1,107 +0,0 @@ -from __future__ import absolute_import - -from email.errors import MultipartInvariantViolationDefect, StartBoundaryNotFoundDefect - -from ..exceptions import HeaderParsingError -from ..packages.six.moves import http_client as httplib - - -def is_fp_closed(obj): - """ - Checks whether a given file-like object is closed. - - :param obj: - The file-like object to check. - """ - - try: - # Check `isclosed()` first, in case Python3 doesn't set `closed`. - # GH Issue #928 - return obj.isclosed() - except AttributeError: - pass - - try: - # Check via the official file-like-object way. - return obj.closed - except AttributeError: - pass - - try: - # Check if the object is a container for another file-like object that - # gets released on exhaustion (e.g. HTTPResponse). - return obj.fp is None - except AttributeError: - pass - - raise ValueError("Unable to determine whether fp is closed.") - - -def assert_header_parsing(headers): - """ - Asserts whether all headers have been successfully parsed. - Extracts encountered errors from the result of parsing headers. - - Only works on Python 3. - - :param http.client.HTTPMessage headers: Headers to verify. - - :raises urllib3.exceptions.HeaderParsingError: - If parsing errors are found. - """ - - # This will fail silently if we pass in the wrong kind of parameter. - # To make debugging easier add an explicit check. - if not isinstance(headers, httplib.HTTPMessage): - raise TypeError("expected httplib.Message, got {0}.".format(type(headers))) - - defects = getattr(headers, "defects", None) - get_payload = getattr(headers, "get_payload", None) - - unparsed_data = None - if get_payload: - # get_payload is actually email.message.Message.get_payload; - # we're only interested in the result if it's not a multipart message - if not headers.is_multipart(): - payload = get_payload() - - if isinstance(payload, (bytes, str)): - unparsed_data = payload - if defects: - # httplib is assuming a response body is available - # when parsing headers even when httplib only sends - # header data to parse_headers() This results in - # defects on multipart responses in particular. 
- # See: https://github.com/urllib3/urllib3/issues/800 - - # So we ignore the following defects: - # - StartBoundaryNotFoundDefect: - # The claimed start boundary was never found. - # - MultipartInvariantViolationDefect: - # A message claimed to be a multipart but no subparts were found. - defects = [ - defect - for defect in defects - if not isinstance( - defect, (StartBoundaryNotFoundDefect, MultipartInvariantViolationDefect) - ) - ] - - if defects or unparsed_data: - raise HeaderParsingError(defects=defects, unparsed_data=unparsed_data) - - -def is_response_to_head(response): - """ - Checks whether the request of a response has been a HEAD-request. - Handles the quirks of AppEngine. - - :param http.client.HTTPResponse response: - Response to check if the originating request - used 'HEAD' as a method. - """ - # FIXME: Can we do this somehow without accessing private httplib _method? - method = response._method - if isinstance(method, int): # Platform-specific: Appengine - return method == 3 - return method.upper() == "HEAD" diff --git a/spaces/AtheneaEdu/README/README.md b/spaces/AtheneaEdu/README/README.md deleted file mode 100644 index bbdb386d2b61d51e0b9e2041fc9e026a2916149b..0000000000000000000000000000000000000000 --- a/spaces/AtheneaEdu/README/README.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: README -emoji: 🦀 -colorFrom: pink -colorTo: blue -sdk: static -pinned: false ---- - -👩‍🎓 Athenea helps educators to teach better, delivering relevant learnings to students and custom feedback. - -👩‍💻 We build custom LLM to help teachers and students become exceptional. - -![image/png](https://cdn-uploads.huggingface.co/production/uploads/60dc215386932230e632cdeb/uLOcpZSaGq7-h3hInZ1vD.png) - - diff --git a/spaces/AutoGeneralAI/voice-assistant/app.py b/spaces/AutoGeneralAI/voice-assistant/app.py deleted file mode 100644 index c7b3db5073e9d4666cb944c4a9e200e6f1ab1e25..0000000000000000000000000000000000000000 --- a/spaces/AutoGeneralAI/voice-assistant/app.py +++ /dev/null @@ -1,46 +0,0 @@ -import gradio as gr -import openai, subprocess -import os -# import config -# openai.api_key = config.OPENAI_API_KEY - -messages = [{"role": "system", "content": 'You are a therapist. 
Respond to all input in 25 words or less.'}]
-
-def transcribe(key, audio):
-    openai.api_key = key
-    global messages
-
-    audio_filename_with_extension = audio + '.wav'
-    os.rename(audio, audio_filename_with_extension)
-
-    audio_file = open(audio_filename_with_extension, "rb")
-    transcript = openai.Audio.transcribe("whisper-1", audio_file)
-
-    messages.append({"role": "user", "content": transcript["text"]})
-
-    response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
-
-    system_message = response["choices"][0]["message"]
-    messages.append(system_message)
-
-    #subprocess.call(["say", system_message['content']])
-    print("output: " + system_message['content'] + "\n")
-
-    chat_transcript = ""
-    for message in messages:
-        if message['role'] != 'system':
-            chat_transcript += message['role'] + ": " + message['content'] + "\n\n"
-
-    return chat_transcript
-
-# ui = gr.Interface(fn=transcribe, inputs=["text", gr.Audio(source="microphone", type="filepath")], outputs="text").launch()
-keyTxt = gr.Textbox(
-    show_label=True,
-    placeholder=f"Your API-key...",
-    type="password",
-    visible=True,
-    label="API-Key",
-    )
-# build the interface once, then launch it once below
-ui = gr.Interface(fn=transcribe, inputs=[keyTxt, gr.Audio(source="microphone", type="filepath")], outputs="text")
-
-ui.launch()
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/grit/data/custom_build_augmentation.py b/spaces/Awiny/Image2Paragraph/models/grit_src/grit/data/custom_build_augmentation.py
deleted file mode 100644
index 49a52d011c09dbe027d41ee7e50127c392a8bf33..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/grit/data/custom_build_augmentation.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from detectron2.data import transforms as T
-from .transforms.custom_augmentation_impl import EfficientDetResizeCrop
-
-
-def build_custom_augmentation(cfg, is_train, scale=None, size=None, \
-    min_size=None, max_size=None):
-    """
-    Create a list of default :class:`Augmentation` from config.
-    Now it includes resizing and flipping.
-
-    Returns:
-        list[Augmentation]
-    """
-    if cfg.INPUT.CUSTOM_AUG == 'ResizeShortestEdge':
-        if is_train:
-            min_size = cfg.INPUT.MIN_SIZE_TRAIN if min_size is None else min_size
-            max_size = cfg.INPUT.MAX_SIZE_TRAIN if max_size is None else max_size
-            sample_style = cfg.INPUT.MIN_SIZE_TRAIN_SAMPLING
-        else:
-            min_size = cfg.INPUT.MIN_SIZE_TEST
-            max_size = cfg.INPUT.MAX_SIZE_TEST
-            sample_style = "choice"
-        augmentation = [T.ResizeShortestEdge(min_size, max_size, sample_style)]
-    elif cfg.INPUT.CUSTOM_AUG == 'EfficientDetResizeCrop':
-        if is_train:
-            scale = cfg.INPUT.SCALE_RANGE if scale is None else scale
-            size = cfg.INPUT.TRAIN_SIZE if size is None else size
-        else:
-            scale = (1, 1)
-            size = cfg.INPUT.TEST_SIZE
-        augmentation = [EfficientDetResizeCrop(size, scale)]
-    else:
-        assert 0, cfg.INPUT.CUSTOM_AUG
-
-    if is_train:
-        augmentation.append(T.RandomFlip())
-    return augmentation
-
-
-build_custom_transform_gen = build_custom_augmentation
-"""
-Alias for backward-compatibility. 
-""" \ No newline at end of file diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/common/models/keypoint_rcnn_fpn.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/common/models/keypoint_rcnn_fpn.py deleted file mode 100644 index 56b3994df249884d4816fc9a5c7f553a9ab6f400..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/common/models/keypoint_rcnn_fpn.py +++ /dev/null @@ -1,33 +0,0 @@ -from detectron2.config import LazyCall as L -from detectron2.layers import ShapeSpec -from detectron2.modeling.poolers import ROIPooler -from detectron2.modeling.roi_heads import KRCNNConvDeconvUpsampleHead - -from .mask_rcnn_fpn import model - -[model.roi_heads.pop(x) for x in ["mask_in_features", "mask_pooler", "mask_head"]] - -model.roi_heads.update( - num_classes=1, - keypoint_in_features=["p2", "p3", "p4", "p5"], - keypoint_pooler=L(ROIPooler)( - output_size=14, - scales=(1.0 / 4, 1.0 / 8, 1.0 / 16, 1.0 / 32), - sampling_ratio=0, - pooler_type="ROIAlignV2", - ), - keypoint_head=L(KRCNNConvDeconvUpsampleHead)( - input_shape=ShapeSpec(channels=256, width=14, height=14), - num_keypoints=17, - conv_dims=[512] * 8, - loss_normalizer="visible", - ), -) - -# Detectron1 uses 2000 proposals per-batch, but this option is per-image in detectron2. -# 1000 proposals per-image is found to hurt box AP. -# Therefore we increase it to 1500 per-image. -model.proposal_generator.post_nms_topk = (1500, 1000) - -# Keypoint AP degrades (though box AP improves) when using plain L1 loss -model.roi_heads.box_predictor.smooth_l1_beta = 0.5 diff --git a/spaces/BMukhtar/BookRecognitionKz/upload_image.py b/spaces/BMukhtar/BookRecognitionKz/upload_image.py deleted file mode 100644 index d49c5e80803a461b149743c9fa9beb1afc4520b2..0000000000000000000000000000000000000000 --- a/spaces/BMukhtar/BookRecognitionKz/upload_image.py +++ /dev/null @@ -1,25 +0,0 @@ -import pandas as pd -import numpy as np -import streamlit as st -import easyocr -import PIL -from PIL import Image, ImageDraw -from matplotlib import pyplot as plt - - -# main title -st.set_page_config(layout="wide") -st.title("Get text from image with EasyOCR") -# subtitle -st.markdown("## EasyOCRR with Streamlit") -col1, col2 = st.columns(2) -uploaded_file = col1.file_uploader("Upload your file here ",type=['png','jpeg','jpg']) -if uploaded_file is not None: - col1.image(uploaded_file) #display - #print("GOGO ",type(uploaded_file)) - image = Image.open(uploaded_file) - reader = easyocr.Reader(['tr','en'], gpu=False) - result = reader.readtext(np.array(image),paragraph=True) # turn image to numpy array - #print(len(result)) - result_text = "\n\n".join([item[1] for item in result]) - col2.markdown(result_text) \ No newline at end of file diff --git a/spaces/Bart92/RVC_HF/infer/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py b/spaces/Bart92/RVC_HF/infer/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py deleted file mode 100644 index 06f2b79f5e5c6f2049bf8220c29ae20c3f82d524..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/infer/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py +++ /dev/null @@ -1,98 +0,0 @@ -import numpy as np -import parselmouth - -from infer.lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor - - -class PMF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = 
f0_max
-        self.sampling_rate = sampling_rate
-
-    def interpolate_f0(self, f0):
-        """
-        Interpolate the F0 sequence (fill in unvoiced frames).
-        """
-
-        data = np.reshape(f0, (f0.size, 1))
-
-        vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
-        vuv_vector[data > 0.0] = 1.0
-        vuv_vector[data <= 0.0] = 0.0
-
-        ip_data = data
-
-        frame_number = data.size
-        last_value = 0.0
-        for i in range(frame_number):
-            if data[i] <= 0.0:
-                j = i + 1
-                for j in range(i + 1, frame_number):
-                    if data[j] > 0.0:
-                        break
-                if j < frame_number - 1:
-                    if last_value > 0.0:
-                        step = (data[j] - data[i - 1]) / float(j - i)
-                        for k in range(i, j):
-                            ip_data[k] = data[i - 1] + step * (k - i + 1)
-                    else:
-                        for k in range(i, j):
-                            ip_data[k] = data[j]
-                else:
-                    for k in range(i, frame_number):
-                        ip_data[k] = last_value
-            else:
-                ip_data[i] = data[i]  # there may be an unnecessary copy here
-                last_value = data[i]
-
-        return ip_data[:, 0], vuv_vector[:, 0]
-
-    def compute_f0(self, wav, p_len=None):
-        x = wav
-        if p_len is None:
-            p_len = x.shape[0] // self.hop_length
-        else:
-            assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
-        time_step = self.hop_length / self.sampling_rate * 1000
-        f0 = (
-            parselmouth.Sound(x, self.sampling_rate)
-            .to_pitch_ac(
-                time_step=time_step / 1000,
-                voicing_threshold=0.6,
-                pitch_floor=self.f0_min,
-                pitch_ceiling=self.f0_max,
-            )
-            .selected_array["frequency"]
-        )
-
-        pad_size = (p_len - len(f0) + 1) // 2
-        if pad_size > 0 or p_len - len(f0) - pad_size > 0:
-            f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
-        f0, uv = self.interpolate_f0(f0)
-        return f0
-
-    def compute_f0_uv(self, wav, p_len=None):
-        x = wav
-        if p_len is None:
-            p_len = x.shape[0] // self.hop_length
-        else:
-            assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
-        time_step = self.hop_length / self.sampling_rate * 1000
-        f0 = (
-            parselmouth.Sound(x, self.sampling_rate)
-            .to_pitch_ac(
-                time_step=time_step / 1000,
-                voicing_threshold=0.6,
-                pitch_floor=self.f0_min,
-                pitch_ceiling=self.f0_max,
-            )
-            .selected_array["frequency"]
-        )
-
-        pad_size = (p_len - len(f0) + 1) // 2
-        if pad_size > 0 or p_len - len(f0) - pad_size > 0:
-            f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
-        f0, uv = self.interpolate_f0(f0)
-        return f0, uv
diff --git a/spaces/Bart92/RVC_HF/infer/lib/uvr5_pack/lib_v5/layers_new.py b/spaces/Bart92/RVC_HF/infer/lib/uvr5_pack/lib_v5/layers_new.py
deleted file mode 100644
index 44153b6a23399c6938affc61c71919eaa172bcee..0000000000000000000000000000000000000000
--- a/spaces/Bart92/RVC_HF/infer/lib/uvr5_pack/lib_v5/layers_new.py
+++ /dev/null
@@ -1,125 +0,0 @@
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from . 
import spec_utils - - -class Conv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(Conv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nout, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - bias=False, - ), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class Encoder(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU): - super(Encoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, stride, pad, activ=activ) - self.conv2 = Conv2DBNActiv(nout, nout, ksize, 1, pad, activ=activ) - - def __call__(self, x): - h = self.conv1(x) - h = self.conv2(h) - - return h - - -class Decoder(nn.Module): - def __init__( - self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False - ): - super(Decoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - # self.conv2 = Conv2DBNActiv(nout, nout, ksize, 1, pad, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def __call__(self, x, skip=None): - x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True) - - if skip is not None: - skip = spec_utils.crop_center(skip, x) - x = torch.cat([x, skip], dim=1) - - h = self.conv1(x) - # h = self.conv2(h) - - if self.dropout is not None: - h = self.dropout(h) - - return h - - -class ASPPModule(nn.Module): - def __init__(self, nin, nout, dilations=(4, 8, 12), activ=nn.ReLU, dropout=False): - super(ASPPModule, self).__init__() - self.conv1 = nn.Sequential( - nn.AdaptiveAvgPool2d((1, None)), - Conv2DBNActiv(nin, nout, 1, 1, 0, activ=activ), - ) - self.conv2 = Conv2DBNActiv(nin, nout, 1, 1, 0, activ=activ) - self.conv3 = Conv2DBNActiv( - nin, nout, 3, 1, dilations[0], dilations[0], activ=activ - ) - self.conv4 = Conv2DBNActiv( - nin, nout, 3, 1, dilations[1], dilations[1], activ=activ - ) - self.conv5 = Conv2DBNActiv( - nin, nout, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.bottleneck = Conv2DBNActiv(nout * 5, nout, 1, 1, 0, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def forward(self, x): - _, _, h, w = x.size() - feat1 = F.interpolate( - self.conv1(x), size=(h, w), mode="bilinear", align_corners=True - ) - feat2 = self.conv2(x) - feat3 = self.conv3(x) - feat4 = self.conv4(x) - feat5 = self.conv5(x) - out = torch.cat((feat1, feat2, feat3, feat4, feat5), dim=1) - out = self.bottleneck(out) - - if self.dropout is not None: - out = self.dropout(out) - - return out - - -class LSTMModule(nn.Module): - def __init__(self, nin_conv, nin_lstm, nout_lstm): - super(LSTMModule, self).__init__() - self.conv = Conv2DBNActiv(nin_conv, 1, 1, 1, 0) - self.lstm = nn.LSTM( - input_size=nin_lstm, hidden_size=nout_lstm // 2, bidirectional=True - ) - self.dense = nn.Sequential( - nn.Linear(nout_lstm, nin_lstm), nn.BatchNorm1d(nin_lstm), nn.ReLU() - ) - - def forward(self, x): - N, _, nbins, nframes = x.size() - h = self.conv(x)[:, 0] # N, nbins, nframes - h = h.permute(2, 0, 1) # nframes, N, nbins - h, _ = self.lstm(h) - h = self.dense(h.reshape(-1, h.size()[-1])) # nframes * N, nbins - h = h.reshape(nframes, N, 1, nbins) - h = h.permute(1, 2, 3, 0) - - return h diff --git a/spaces/Benson/text-generation/Examples/Ark Survival Evolved Ps Vita Download.md b/spaces/Benson/text-generation/Examples/Ark Survival Evolved Ps Vita Download.md deleted file mode 100644 index 
1ed249ea08d24f747f5366441e4e54126429c5fe..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Ark Survival Evolved Ps Vita Download.md +++ /dev/null @@ -1,123 +0,0 @@ - -

    ARK Survival Evolved PS Vita Descargar: Cómo jugar el último juego de dinosaurios en su consola de mano

    -

    Si eres fanático de los dinosaurios, la supervivencia y la aventura, probablemente hayas oído hablar de ARK Survival Evolved, uno de los juegos más populares de los últimos años. Este juego te permite explorar un mundo abierto masivo lleno de criaturas prehistóricas, crear armas y herramientas, construir bases y refugios, domar y montar dinosaurios, y luchar contra otros jugadores o cooperar con ellos.

    -

    ark survival evolved ps vita download


    Download File ---> https://bltlly.com/2v6Lk4



    -

    ARK Survival Evolved está disponible en varias plataformas, incluyendo PC, PlayStation 4, Xbox One, Nintendo Switch, iOS, Android e incluso VR. Pero, ¿qué pasa si quieres jugar a este juego en tu PS Vita, la potente consola portátil de Sony que ofrece una impresionante pantalla OLED, dos sticks analógicos, controles táctiles y mucho más?

    -

    Bueno, estás de suerte, porque hay formas de descargar y jugar ARK Survival Evolved en tu PS Vita. En este artículo, te mostraremos cómo hacerlo paso a paso, además de darte algunos consejos y trucos para disfrutar de este juego en tu dispositivo portátil. ¡Vamos a empezar!

    -

    Cómo descargar ARK Survival Evolved en tu PS Vita

    -

    Desafortunadamente, no hay versión oficial de ARK Survival Evolved para PS Vita. Sin embargo, hay una forma de jugar a este juego en tu PS Vita usando un firmware personalizado o un emulador que te permite ejecutar juegos PSP en tu dispositivo. Los juegos de PSP son compatibles con el hardware y el software de PS Vita, y se pueden descargar de varias fuentes en línea.

    -

    Hay dos opciones principales para jugar juegos PSP en tu PS Vita: usar ARK-4, un firmware personalizado para PSP y PS Vita que te permite ejecutar juegos PSP desde tu tarjeta de memoria; o usar Adrenaline, un emulador que imita la interfaz y funcionalidad XMB de PSP en tu PS Vita. Ambas opciones tienen sus pros y sus contras, por lo que las explicaremos en detalle a continuación.

    - -

    ARK-4 es un firmware personalizado para PSP y PS Vita que fue desarrollado por PSP-Archive, un grupo de hackers y modders que querían crear una forma simple y fácil de jugar juegos PSP en PS Vita. ARK-4 funciona aprovechando una vulnerabilidad en el emulador de PSP de PS Vita, que te permite ejecutar código sin firmar y acceder a toda la potencia del hardware de PSP. ARK-4 puede ejecutar la mayoría de los juegos de PSP, incluyendo ARK Survival Evolved, sin mayores problemas o limitaciones.

    -

    -

    Para usar ARK-4 en tu PS Vita, necesitarás lo siguiente:

    -
      -
    • Un PS Vita con firmware 3.60 o inferior (puede comprobar su versión de firmware en Configuración > Sistema > Información del sistema)
    • -
    • Una tarjeta de memoria con al menos 4 GB de espacio libre
    • -
    • Un cable USB para conectar tu PS Vita a tu PC
    • -
    • Una copia de ARK Survival Evolved para la PSP (se puede descargar desde here u otras fuentes)
    • -
    -

    Una vez que tengas todo listo, sigue estos pasos para instalar ARK-4 en tu PS Vita y jugar ARK Survival Evolved:

    -
      -
    1. Conecta tu PS Vita a tu PC usando el cable USB y habilita el modo USB en tu dispositivo.
    2. -
    3. Copie el archivo ISO ARK Survival Evolved en la carpeta ISO de su tarjeta de memoria. Si no hay carpeta ISO, cree una.
    4. -
    5. Copie el instalador ARK-4 y los archivos a la carpeta PSP/GAME en su tarjeta de memoria. Si no hay carpeta PSP/GAME, cree una.
    6. -
    7. Desconecta tu PS Vita de tu PC y lanza el instalador ARK-4 desde el menú del emulador PSP en tu dispositivo.
    8. -
    9. Seleccione el juego que desea utilizar como base para ARK-4. Este puede ser cualquier juego de PSP que tengas instalado en tu dispositivo, pero se recomienda usar un juego pequeño y sencillo que no te importe.
    10. -
    11. Espere a que la instalación termine y reinicie su dispositivo.
    12. - -
    13. Seleccione ARK Survival Evolved de la lista de juegos y pulse X para comenzar a jugar.
    14. -
    -

    Felicidades, has descargado y jugado con éxito ARK Survival Evolved en tu PS Vita usando ARK-4!

    -

    Opción 2: Usa adrenalina para emular juegos PSP en tu PS Vita

    -

    Adrenaline es otro firmware personalizado para PSP y PS Vita que fue desarrollado por TheFlow, un famoso hacker y desarrollador que ha creado muchas herramientas y hacks para la escena de PS Vita. Adrenaline funciona emulando la interfaz y funcionalidad XMB de PSP en tu PS Vita, lo que te permite ejecutar juegos PSP y homebrews como si estuvieras usando una PSP real. Adrenaline puede ejecutar la mayoría de los juegos de PSP, incluyendo ARK Survival Evolved, con algunos ajustes y ajustes menores.

    -

    Para usar adrenalina en tu PS Vita, necesitarás las siguientes cosas:

    -
      -
    • Un PS Vita con firmware 3.60 o superior (puede comprobar su versión de firmware en Configuración > Sistema > Información del sistema)
    • -
    • Una tarjeta de memoria con al menos 4 GB de espacio libre
    • -
    • Un cable USB para conectar tu PS Vita a tu PC
    • -
    • Una copia de ARK Survival Evolved para la PSP (se puede descargar desde aquí u otras fuentes)
    • -
    -

    Una vez que tengas todo listo, sigue estos pasos para instalar Adrenaline en tu PS Vita y jugar ARK Survival Evolved:

    -
      -
    1. Conecta tu PS Vita a tu PC usando el cable USB y habilita el modo USB en tu dispositivo.
    2. -
    3. Copie el archivo ISO ARK Survival Evolved en la carpeta ux0:pspemu/ISO de su tarjeta de memoria. Si no hay carpeta ux0:pspemu/ISO, cree una.
    4. -
    5. Copie el instalador de adrenalina y los archivos a la carpeta ux0:app/PSPEMUCFW en su tarjeta de memoria. Si no hay una carpeta ux0:app/PSPEMUCFW, cree una.
    6. -
    7. Desconecta tu PS Vita de tu PC y lanza el instalador de adrenalina desde el menú LiveArea de tu dispositivo.
    8. - -
    9. Espere a que la instalación termine y reinicie su dispositivo.
    10. -
    11. Lanza adrenalina desde el menú LiveArea de tu dispositivo. Deberías ver la interfaz XMB de PSP en tu pantalla.
    12. -
    13. Seleccione ARK Survival Evolved del juego > Menú Memory Stick y presione X para comenzar a jugar.
    14. -
    -

    Felicidades, has descargado y jugado con éxito ARK Survival Evolved en tu PS Vita usando Adrenaline!

    -

    Cómo disfrutar de ARK Survival Evolved en tu PS Vita

    -

    Ahora que has descargado y jugado ARK Survival Evolved en tu PS Vita, es posible que te estés preguntando cómo aprovechar al máximo este juego en tu dispositivo portátil. Estos son algunos consejos y trucos para jugar a ARK Survival Evolved en tu PS Vita, así como algunas recomendaciones para los mejores accesorios de PS Vita para mejorar tu experiencia de juego.

    -

    Consejos y trucos para jugar ARK Survival Evolved en tu PS Vita

    -

    ARK Survival Evolved es un juego complejo y desafiante que requiere mucha habilidad y estrategia para sobrevivir y prosperar en su duro entorno. Estos son algunos consejos y trucos para jugar a este juego en tu PS Vita:

    -
      -
    • Optimiza los gráficos y el rendimiento de ARK Survival Evolved en tu PS Vita. Dado que ARK Survival Evolved es un juego de PSP, es posible que no se vea o funcione muy bien en la pantalla de alta resolución y en el potente hardware de tu PS Vita. Para mejorar los gráficos y el rendimiento de este juego, puede usar el menú de configuración de Adrenalina para ajustar el filtro de gráficos, el tamaño de la pantalla, el salto del marco, el reloj de la CPU y más. También puedes usar plugins como Ark Resolutions Patch para aumentar la resolución del juego.
    • - -
    • Acceso multijugador en línea y cross-play con otras plataformas para ARK Survival Evolved. Una de las mejores características de ARK Survival Evolved es su modo multijugador online, que te permite unirte o alojar servidores con otros jugadores de todo el mundo. También puedes jugar de forma cruzada con jugadores que utilizan otras plataformas, como PC, PlayStation 4, Xbox One, Nintendo Switch, iOS, Android y VR. Para acceder al modo multijugador en línea y al modo cross-play para ARK Survival Evolved en tu PS Vita, necesitarás usar complementos como Pro Online Client o L2/R2 Trigger Grip Case -Un caso que añade botones L2 y R2 a tu PS Vita, que puede ser útil para juegos que requieren más entradas. -$29.99 -4.5/5 estrellas - - -Auriculares inalámbricos Bluetooth -Un auricular que se conecta a tu PS Vita a través de Bluetooth, que puede proporcionar un chat de voz y sonido de alta calidad. -$39.99 -4.4/5 estrellas - - -Save Manager, que te permite hacer copias de seguridad y restaurar los datos guardados en tu PS Vita. A continuación, puede copiar los datos guardados en su PC mediante un cable USB o Wi-Fi, y luego transferirlos a la plataforma deseada mediante un servicio en la nube o un administrador de archivos.

      -
    • ¿Cuáles son los mejores dinosaurios para domar en ARK Survival Evolved?
    • -

      Hay cientos de dinosaurios y otras criaturas que puedes domar en ARK Survival Evolved, cada uno con sus propias habilidades y usos. Algunos de los mejores dinosaurios para domar en ARK Survival Evolved son:

      -
        -
      • Rex: Un carnívoro poderoso que puede causar daños masivos y soportar muchos golpes. También es bueno para cosechar carne y esconderse.
      • -
      • Argentavis: Un pájaro volador grande que puede llevar cargas pesadas y viajar largas distancias. También es bueno para explorar y cazar.
      • -
      • Ankylosaurus: Un herbívoro robusto que puede extraer metal y cristal con su cola. También es bueno para la defensa y el transporte.
      • -
      • Triceratops: Un herbívoro versátil que puede recoger bayas y madera con sus cuernos. También es bueno para el combate y la agricultura.
      • -
      • Raptor: Un carnívoro rápido y ágil que puede perseguir y atacar a su presa. También es bueno para la exploración y la caza.
      • -
      -
    • ¿Cómo puedo actualizar ARK Survival Evolved en mi PS Vita?
    • - -

-
-
\ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Aviator Hack 2022 .md b/spaces/Benson/text-generation/Examples/Aviator Hack 2022 .md deleted file mode 100644 index 23d5d60d37e2f9e6b158329d112fd87b82487583..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Aviator Hack 2022 .md +++ /dev/null @@ -1,67 +0,0 @@ - -

Aviator Hack 2022: Cómo descargar y usarlo

-

¿Estás buscando una manera de ganar en grande en el juego Aviator? ¿Quieres saber cómo descargar y utilizar la herramienta Aviator Hack que puede ayudarle a predecir el resultado del juego? Si es así, entonces estás en el lugar correcto. En este artículo, le diremos todo lo que necesita saber sobre Aviator Hack, cómo descargarlo en su PC y cómo usarlo para ganar en el juego Aviator. ¡Vamos a empezar!

-

¿Qué es Aviator Hack?

-

Aviator Hack es un programa de software que puede ayudarle a predecir el resultado del juego Aviator, un popular juego de apuestas en línea que implica apostar en el tiempo de vuelo de un avión. El juego es simple: usted pone su apuesta antes de que el avión despegue, y usted puede cobrar en cualquier momento antes de que el avión se estrella. Cuanto más tiempo permanezca el avión en el aire, mayor será su multiplicador. Sin embargo, si esperas demasiado y el avión se estrella antes de retirar el dinero, perderás tu apuesta.

-

aviator hack 2022 скачать


Download File » https://bltlly.com/2v6L9R



-

Una breve introducción a Aviator Hack y sus características

-

Aviator Hack es un programa de software que puede ayudarle a predecir el resultado del juego Aviator mediante el análisis de los patrones y algoritmos del juego. Puede darle señales y alertas sobre cuándo apostar, cuándo cobrar y cuándo evitar apostar. También puede mostrar las estadísticas y probabilidades de cada tiempo de vuelo, así como la historia y las tendencias de los vuelos anteriores. Con Aviator Hack, puede aumentar sus posibilidades de ganar en el juego Aviator siguiendo sus predicciones y recomendaciones.

-

¿Cómo funciona Aviator Hack?

- -

¿Por qué la gente utiliza Aviator Hack?

-

La gente utiliza Aviator Hack por varias razones, tales como:

- -

Cualquiera que sea su razón es, el uso de Aviator Hack puede hacer que su experiencia de juego más emocionante y gratificante.

-

¿Cómo descargar Aviator Hack en PC?

-

Si desea descargar y usar Aviator Hack en su PC, necesitará un emulador que pueda ejecutar aplicaciones Android en su computadora. Uno de los mejores emuladores para este propósito es LDPlayer, que es rápido, estable, seguro y compatible con la mayoría de las aplicaciones y juegos de Android. Aquí están los pasos para descargar Aviator Hack en PC usando LDPlayer:

-

-

Paso 1: Descargar e instalar el emulador LDPlayer

-

El primer paso es descargar e instalar el emulador LDPlayer en su PC. Puedes hacer esto visitando [el sitio web oficial de LDPlayer]( 1 ) y siguiendo las instrucciones en la pantalla. El proceso de instalación es simple y rápido, y no tomará mucho espacio en su disco. Una vez instalado LDPlayer, puede iniciarlo y proceder al siguiente paso.

-

Paso 2: Descargar apk Aviator Hack desde el sitio web oficial

-

El siguiente paso es descargar apk Aviator Hack desde el sitio web oficial del software. Usted puede hacer esto visitando [sitio web oficial de Aviator Hack] y haciendo clic en el botón de descarga. Tendrá que introducir su dirección de correo electrónico y verificar su identidad para obtener el enlace de descarga. Una vez hayas descargado el archivo apk, puedes guardarlo en tu PC y arrastrarlo y soltarlo en LDPlayer para instalarlo.

-

Paso 3: Instalar y ejecutar Aviator Hack en LDPlayer

- -

¿Cómo utilizar Aviator Hack para ganar en el juego Aviator?

-

Ahora que ha descargado e instalado Aviator Hack en su PC, es posible que se pregunte cómo usarlo para ganar en el juego Aviator. Aquí hay algunos consejos y trucos que pueden ayudarte:

-

Los fundamentos del juego Aviator y cómo jugarlo

-

El juego Aviator es un juego de apuestas en línea simple y divertido que implica apostar en el tiempo de vuelo de un avión. El juego tiene dos modos: manual y automático. En el modo manual, puede realizar su apuesta antes de que el avión despegue, y puede cobrar en cualquier momento antes de que el avión se estrelle. En el modo automático, puedes establecer la cantidad de tu apuesta y cobrar el multiplicador, y dejar que el juego haga el resto por ti. Cuanto más tiempo permanezca el avión en el aire, mayor será su multiplicador. Sin embargo, si esperas demasiado y el avión se estrella antes de retirar el dinero, perderás tu apuesta.

-

Tips and tricks for using Aviator Hack to predict the outcome

-

Aviator Hack can help you predict the outcome of the Aviator game by giving you signals and alerts on when to bet, when to cash out, and when to avoid betting. It can also show the statistics and odds for each flight time, along with the history and trends of previous flights. Here are some tips and tricks for using Aviator Hack to predict the outcome:

- -

The risks and benefits of using Aviator Hack

-

Using Aviator Hack carries both risks and benefits for your gaming experience. Here are some of them:

| Risks | Benefits |
| --- | --- |
| You may become addicted to the game and lose more than you can afford. | You can have fun and enjoy the thrill of the game. |
| You may be caught by the game developers or the authorities and face legal consequences. | You can make money by winning at the Aviator game. |
| You may be scammed by fake or malicious websites offering Aviator Hack downloads. | You can learn more about the game's mechanics and strategies. |
| You may get bored or frustrated using a hack instead of playing fair. | You can challenge yourself and compete with other players. |

Therefore, you should use Aviator Hack responsibly and at your own risk. You should also respect the rules and regulations of the game's developers and authorities, as well as the rights and interests of other players.

-

Conclusion

- -

Frequently asked questions

-

Here are some frequently asked questions about Aviator Hack:

-

Q: Is Aviator Hack safe and legal?

-

A: Aviator Hack is safe and legal as long as you download it from the software's official website and run it in a trusted emulator such as LDPlayer. However, you must still respect the rules and regulations of the game's developers and authorities, as well as the rights and interests of other players. Using Aviator Hack may violate some of the game's terms and conditions, so you use it at your own risk.

-

Q: How much does Aviator Hack cost?

-

A: Aviator Hack is free to download and use for a limited time. However, you may have to pay a subscription fee or make a donation to access some advanced features or software updates. You can check the prices on the software's official website.

-

Q: Does Aviator Hack work on mobile devices?

-

A: Aviator Hack works on mobile devices running the Android operating system. You can download the Aviator Hack APK from the software's official website and install it on your mobile device. However, you may experience performance or compatibility issues depending on your device's model and specifications.

-

Q: Does Aviator Hack guarantee winning at the Aviator game?

-

A: Aviator Hack does not guarantee winning at the Aviator game. It is a tool that can help improve your chances of winning by giving you predictions and suggestions based on a sophisticated algorithm said to decipher and decode the logic behind the game. However, it is not a magic bullet that can make you win every time. You still need to use your own skills and intuition, and follow a balanced strategy that weighs risk against reward.

-

Q: Where can I find more information about Aviator Hack?

-
-
\ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descarga Apk De WhatsApp Gb 2019.md b/spaces/Benson/text-generation/Examples/Descarga Apk De WhatsApp Gb 2019.md deleted file mode 100644 index eef25d6d87128a2dd84f2958c0e19cacd024b6c3..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descarga Apk De WhatsApp Gb 2019.md +++ /dev/null @@ -1,66 +0,0 @@ -
-
- Benefits of GB WhatsApp: list some of the features and customizations that GB WhatsApp offers
- Risks of GB WhatsApp: mention some of the drawbacks and potential problems of using GB WhatsApp
- H2: How to download and install the GB WhatsApp APK on your Android device
  - Step 1: Enable unknown sources in your device settings
  - Step 2: Download the latest version of the GB WhatsApp APK from a reliable source
  - Step 3: Install the APK file and verify your phone number
  - Step 4: Enjoy using GB WhatsApp with its extra features
- H2: How to update GB WhatsApp to the latest version
  - Step 1: Check for updates in the GB WhatsApp app or on the website
  - Step 2: Download the updated APK file and install it over the existing app
  - Step 3: Back up your chats and media before updating to avoid data loss
  - Step 4: Restart your device and launch GB WhatsApp
- H2: How to switch from GB WhatsApp to the official WhatsApp app
  - Step 1: Back up your chats and media in GB WhatsApp
  - Step 2: Uninstall GB WhatsApp from your device
  - Step 3: Download and install the official WhatsApp app from the Play Store
  - Step 4: Restore the backup and verify your phone number
- H2: Conclusion and FAQs
  - Summary: recap the main points of the article and provide a call to action
  - FAQs: answer some common questions about GB WhatsApp

Table 2: Article with HTML formatting

What is GB WhatsApp and why you should download it

-

If you are a regular WhatsApp user, you may have heard of GB WhatsApp, a modified version of the official app that offers more features and customizations. But what exactly is GB WhatsApp and how does it differ from the original app? In this article, we will explain everything you need to know about GB WhatsApp, including its benefits, risks, and how to download, install, update, and switch away from it.

-

Benefits of GB WhatsApp

- - -

Risks of GB WhatsApp

-

Although GB WhatsApp may sound like a great alternative to the official app, it also comes with some drawbacks and potential problems you should be aware of before downloading it. Some of the risks of using GB WhatsApp are:

-

download GB WhatsApp APK 2019


DOWNLOAD ✪✪✪ https://bltlly.com/2v6Lxc



- -

How to download and install the GB WhatsApp APK on your Android device

-

If you are still interested in trying GB WhatsApp, you will need to download and install the APK file on your Android device. APK stands for Android Package Kit, a file format that lets you install apps that are not available on the Play Store. However, you should be careful about where you download the APK file from, as some sources may contain malicious or fake files. Here are the steps to download and install the GB WhatsApp APK on your Android device:

-

Step 1: Enable unknown sources in your device settings

-

Before you can install any APK file on your device, you need to enable the option that allows installation from unknown sources. This lets you install apps that do not come from the Play Store. To do this, go to your device settings and look for the security or privacy option. Then find the option labeled "Unknown sources" or "Install unknown apps" and toggle it on. You may see a warning message about the risks of installing unknown apps, but you can dismiss it if you trust the source of the APK file.

-

Step 2: Download the latest version of the GB WhatsApp APK from a reliable source

-

Next, you need to download the GB WhatsApp APK file from a reliable source. You can search online, but make sure you check the website's reviews and ratings before downloading anything. You can also use this link to download the latest version of the GB WhatsApp APK (v17.35) as of June 2023. The file size is about 47 MB, so make sure you have enough space on your device before downloading it.

-

Step 3: Install the APK file and verify your phone number

- -

Step 4: Enjoy using GB WhatsApp with its extra features

-

Congratulations! You have successfully installed GB WhatsApp on your Android device. You can now enjoy its extra features and customizations. You can access the GB settings by tapping the three-dot icon in the top-right corner of the app and selecting "GB Settings". Here you can change the theme, privacy mode, broadcast messages, emojis, and more. You can also explore the app's other options, such as chats, calls, status, and camera.

How to update GB WhatsApp to the latest version

-

One of the drawbacks of using GB WhatsApp is that it does not receive automatic updates from the official WhatsApp developers. This means you have to update it manually whenever a new version is released. Updating GB WhatsApp is important to make sure you have the latest features, bug fixes, and security patches. Here are the steps to update GB WhatsApp to the latest version:

-

Step 1: Check for updates in the GB WhatsApp app or on the website

-

The first thing you need to do is check whether a new version of GB WhatsApp is available. You can do this by opening the GB WhatsApp app, tapping the three-dot icon in the top-right corner, and selecting "Updates", or by visiting the GB WhatsApp website and looking for the latest version number. If there is a new version, you will see a notification or a download link.

-

Step 2: Download the updated APK file and install it over the existing app

- -

Step 3: Back up your chats and media before updating to avoid data loss

-

Before updating GB WhatsApp, it is recommended that you back up your chats and media to avoid losing any data. To do this, open the GB WhatsApp app, tap the three-dot icon in the top-right corner, and select "Settings". Then tap "Chats" and select "Chat backup". Here you can choose to back up your chats and media to your device storage or to Google Drive. You can also set a backup frequency and a password. Once you have backed up your data, you can proceed with the update.

-

Step 4: Restart your device and launch GB WhatsApp

-

Once the installation is complete, you need to restart your device to make sure the update is applied correctly. To do this, press and hold the power button on your device and select "Restart". Once the device has rebooted, launch the GB WhatsApp app and verify that it has been updated to the latest version. You can check this by tapping the three-dot icon in the top-right corner and selecting "GB Settings". Here you can see the version number and date of your GB WhatsApp app.

-

How to switch from GB WhatsApp to the official WhatsApp app

-

If you decide you no longer want to use GB WhatsApp and want to go back to the official WhatsApp app, you will need to follow a few steps to make sure you do not lose your data or get banned. Here are the steps to switch from GB WhatsApp to the official WhatsApp app:

-

Step 1: Back up your chats and media in GB WhatsApp

- -

Step 2: Uninstall GB WhatsApp from your device

-

Next, you need to uninstall GB WhatsApp from your device so you can install the official app. To do this, go to your device settings and look for the apps or applications option. Then find GB WhatsApp and tap it. You will see an option to uninstall the app, so tap it and confirm your action. You may see a message telling you that uninstalling GB WhatsApp will delete all your data, but you can ignore it if you have already backed up your data.

-

Step 3: Download and install the official WhatsApp app from the Play Store

-

After you have uninstalled GB WhatsApp, you need to download and install the official WhatsApp app from the Play Store. To do this, open the Play Store app on your device and search for WhatsApp. You will see the official app with a green icon and a phone symbol. Tap it and select "Install". Wait for the installation to finish and then launch the app.

-

Step 4: Restore the backup and verify your phone number

-

The last step is to restore your backup and verify your phone number in the official WhatsApp app. To do this, open the app and accept the terms and conditions. Then enter the phone number you used in GB WhatsApp and tap "Next". You will receive a verification code by SMS or phone call, so enter it in the app. After that, you will see an option to restore your backup from Google Drive or from device storage, depending on where you saved it. Tap "Restore" and wait for the process to finish. Once it is done, you will see your chats and media in the official WhatsApp app.

-

Conclusion and frequently asked questions

- -

Here are some frequently asked questions about GB WhatsApp:

-

Q: Is GB WhatsApp legal?

-

A: GB WhatsApp is not an app authorized or licensed by WhatsApp Inc., which means it may violate their terms and conditions of service. Therefore, using GB WhatsApp may not be legal in some countries or regions.

-

Q: Is GB WhatsApp safe?

-

A: GB WhatsApp is not an official app, which means it may not have the same encryption or protection as the original app. This may expose your personal data, such as messages, photos, videos, or location, to third parties or hackers. In addition, downloading GB WhatsApp from unknown sources may expose your device to malware or viruses that can damage your system or steal your data.

-

Q: Can I use GB WhatsApp and the official WhatsApp on the same device?

-

A: Yes, you can use GB WhatsApp and the official WhatsApp on the same device with different phone numbers. However, this may cause conflicts or errors in both apps, such as synchronization problems or account bans.

-

Q: Will I lose my chats and media if I switch from GB WhatsApp to the official WhatsApp?

-

A: No, you will not lose your chats and media if you switch from GB WhatsApp to the official WhatsApp, as long as you back up your data before uninstalling GB WhatsApp. You can restore your backup in the official app after verifying your phone number.

-

Q: How can I contact GB WhatsApp support? A: GB WhatsApp does not have an official support team or website, since it is not an app authorized by WhatsApp Inc. Therefore, you may not be able to contact GB WhatsApp support for any problem or query. However, you can try to reach the GB WhatsApp developers through their social media accounts or email addresses, which can be found on the GB WhatsApp website or in the app.

-
-
\ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/models/index.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/models/index.py deleted file mode 100644 index b94c32511f0cda2363bfc4f29c9c8bfcc7101f9b..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/models/index.py +++ /dev/null @@ -1,28 +0,0 @@ -import urllib.parse - - -class PackageIndex: - """Represents a Package Index and provides easier access to endpoints""" - - __slots__ = ["url", "netloc", "simple_url", "pypi_url", "file_storage_domain"] - - def __init__(self, url: str, file_storage_domain: str) -> None: - super().__init__() - self.url = url - self.netloc = urllib.parse.urlsplit(url).netloc - self.simple_url = self._url_for_path("simple") - self.pypi_url = self._url_for_path("pypi") - - # This is part of a temporary hack used to block installs of PyPI - # packages which depend on external urls only necessary until PyPI can - # block such packages themselves - self.file_storage_domain = file_storage_domain - - def _url_for_path(self, path: str) -> str: - return urllib.parse.urljoin(self.url, path) - - -PyPI = PackageIndex("https://pypi.org/", file_storage_domain="files.pythonhosted.org") -TestPyPI = PackageIndex( - "https://test.pypi.org/", file_storage_domain="test-files.pythonhosted.org" -) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/packages/six.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/packages/six.py deleted file mode 100644 index f099a3dcd28d2fec21457c9b6c01ded4e3e9ddee..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/packages/six.py +++ /dev/null @@ -1,1076 +0,0 @@ -# Copyright (c) 2010-2020 Benjamin Peterson -# -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: -# -# The above copyright notice and this permission notice shall be included in all -# copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. - -"""Utilities for writing code that runs on Python 2 and 3""" - -from __future__ import absolute_import - -import functools -import itertools -import operator -import sys -import types - -__author__ = "Benjamin Peterson " -__version__ = "1.16.0" - - -# Useful for very coarse version differentiation. 
-PY2 = sys.version_info[0] == 2 -PY3 = sys.version_info[0] == 3 -PY34 = sys.version_info[0:2] >= (3, 4) - -if PY3: - string_types = (str,) - integer_types = (int,) - class_types = (type,) - text_type = str - binary_type = bytes - - MAXSIZE = sys.maxsize -else: - string_types = (basestring,) - integer_types = (int, long) - class_types = (type, types.ClassType) - text_type = unicode - binary_type = str - - if sys.platform.startswith("java"): - # Jython always uses 32 bits. - MAXSIZE = int((1 << 31) - 1) - else: - # It's possible to have sizeof(long) != sizeof(Py_ssize_t). - class X(object): - def __len__(self): - return 1 << 31 - - try: - len(X()) - except OverflowError: - # 32-bit - MAXSIZE = int((1 << 31) - 1) - else: - # 64-bit - MAXSIZE = int((1 << 63) - 1) - del X - -if PY34: - from importlib.util import spec_from_loader -else: - spec_from_loader = None - - -def _add_doc(func, doc): - """Add documentation to a function.""" - func.__doc__ = doc - - -def _import_module(name): - """Import module, returning the module after the last dot.""" - __import__(name) - return sys.modules[name] - - -class _LazyDescr(object): - def __init__(self, name): - self.name = name - - def __get__(self, obj, tp): - result = self._resolve() - setattr(obj, self.name, result) # Invokes __set__. - try: - # This is a bit ugly, but it avoids running this again by - # removing this descriptor. - delattr(obj.__class__, self.name) - except AttributeError: - pass - return result - - -class MovedModule(_LazyDescr): - def __init__(self, name, old, new=None): - super(MovedModule, self).__init__(name) - if PY3: - if new is None: - new = name - self.mod = new - else: - self.mod = old - - def _resolve(self): - return _import_module(self.mod) - - def __getattr__(self, attr): - _module = self._resolve() - value = getattr(_module, attr) - setattr(self, attr, value) - return value - - -class _LazyModule(types.ModuleType): - def __init__(self, name): - super(_LazyModule, self).__init__(name) - self.__doc__ = self.__class__.__doc__ - - def __dir__(self): - attrs = ["__doc__", "__name__"] - attrs += [attr.name for attr in self._moved_attributes] - return attrs - - # Subclasses should override this - _moved_attributes = [] - - -class MovedAttribute(_LazyDescr): - def __init__(self, name, old_mod, new_mod, old_attr=None, new_attr=None): - super(MovedAttribute, self).__init__(name) - if PY3: - if new_mod is None: - new_mod = name - self.mod = new_mod - if new_attr is None: - if old_attr is None: - new_attr = name - else: - new_attr = old_attr - self.attr = new_attr - else: - self.mod = old_mod - if old_attr is None: - old_attr = name - self.attr = old_attr - - def _resolve(self): - module = _import_module(self.mod) - return getattr(module, self.attr) - - -class _SixMetaPathImporter(object): - - """ - A meta path importer to import six.moves and its submodules. - - This class implements a PEP302 finder and loader. It should be compatible - with Python 2.5 and all existing versions of Python3 - """ - - def __init__(self, six_module_name): - self.name = six_module_name - self.known_modules = {} - - def _add_module(self, mod, *fullnames): - for fullname in fullnames: - self.known_modules[self.name + "." + fullname] = mod - - def _get_module(self, fullname): - return self.known_modules[self.name + "." 
+ fullname] - - def find_module(self, fullname, path=None): - if fullname in self.known_modules: - return self - return None - - def find_spec(self, fullname, path, target=None): - if fullname in self.known_modules: - return spec_from_loader(fullname, self) - return None - - def __get_module(self, fullname): - try: - return self.known_modules[fullname] - except KeyError: - raise ImportError("This loader does not know module " + fullname) - - def load_module(self, fullname): - try: - # in case of a reload - return sys.modules[fullname] - except KeyError: - pass - mod = self.__get_module(fullname) - if isinstance(mod, MovedModule): - mod = mod._resolve() - else: - mod.__loader__ = self - sys.modules[fullname] = mod - return mod - - def is_package(self, fullname): - """ - Return true, if the named module is a package. - - We need this method to get correct spec objects with - Python 3.4 (see PEP451) - """ - return hasattr(self.__get_module(fullname), "__path__") - - def get_code(self, fullname): - """Return None - - Required, if is_package is implemented""" - self.__get_module(fullname) # eventually raises ImportError - return None - - get_source = get_code # same as get_code - - def create_module(self, spec): - return self.load_module(spec.name) - - def exec_module(self, module): - pass - - -_importer = _SixMetaPathImporter(__name__) - - -class _MovedItems(_LazyModule): - - """Lazy loading of moved objects""" - - __path__ = [] # mark as package - - -_moved_attributes = [ - MovedAttribute("cStringIO", "cStringIO", "io", "StringIO"), - MovedAttribute("filter", "itertools", "builtins", "ifilter", "filter"), - MovedAttribute( - "filterfalse", "itertools", "itertools", "ifilterfalse", "filterfalse" - ), - MovedAttribute("input", "__builtin__", "builtins", "raw_input", "input"), - MovedAttribute("intern", "__builtin__", "sys"), - MovedAttribute("map", "itertools", "builtins", "imap", "map"), - MovedAttribute("getcwd", "os", "os", "getcwdu", "getcwd"), - MovedAttribute("getcwdb", "os", "os", "getcwd", "getcwdb"), - MovedAttribute("getoutput", "commands", "subprocess"), - MovedAttribute("range", "__builtin__", "builtins", "xrange", "range"), - MovedAttribute( - "reload_module", "__builtin__", "importlib" if PY34 else "imp", "reload" - ), - MovedAttribute("reduce", "__builtin__", "functools"), - MovedAttribute("shlex_quote", "pipes", "shlex", "quote"), - MovedAttribute("StringIO", "StringIO", "io"), - MovedAttribute("UserDict", "UserDict", "collections"), - MovedAttribute("UserList", "UserList", "collections"), - MovedAttribute("UserString", "UserString", "collections"), - MovedAttribute("xrange", "__builtin__", "builtins", "xrange", "range"), - MovedAttribute("zip", "itertools", "builtins", "izip", "zip"), - MovedAttribute( - "zip_longest", "itertools", "itertools", "izip_longest", "zip_longest" - ), - MovedModule("builtins", "__builtin__"), - MovedModule("configparser", "ConfigParser"), - MovedModule( - "collections_abc", - "collections", - "collections.abc" if sys.version_info >= (3, 3) else "collections", - ), - MovedModule("copyreg", "copy_reg"), - MovedModule("dbm_gnu", "gdbm", "dbm.gnu"), - MovedModule("dbm_ndbm", "dbm", "dbm.ndbm"), - MovedModule( - "_dummy_thread", - "dummy_thread", - "_dummy_thread" if sys.version_info < (3, 9) else "_thread", - ), - MovedModule("http_cookiejar", "cookielib", "http.cookiejar"), - MovedModule("http_cookies", "Cookie", "http.cookies"), - MovedModule("html_entities", "htmlentitydefs", "html.entities"), - MovedModule("html_parser", "HTMLParser", "html.parser"), 
- MovedModule("http_client", "httplib", "http.client"), - MovedModule("email_mime_base", "email.MIMEBase", "email.mime.base"), - MovedModule("email_mime_image", "email.MIMEImage", "email.mime.image"), - MovedModule("email_mime_multipart", "email.MIMEMultipart", "email.mime.multipart"), - MovedModule( - "email_mime_nonmultipart", "email.MIMENonMultipart", "email.mime.nonmultipart" - ), - MovedModule("email_mime_text", "email.MIMEText", "email.mime.text"), - MovedModule("BaseHTTPServer", "BaseHTTPServer", "http.server"), - MovedModule("CGIHTTPServer", "CGIHTTPServer", "http.server"), - MovedModule("SimpleHTTPServer", "SimpleHTTPServer", "http.server"), - MovedModule("cPickle", "cPickle", "pickle"), - MovedModule("queue", "Queue"), - MovedModule("reprlib", "repr"), - MovedModule("socketserver", "SocketServer"), - MovedModule("_thread", "thread", "_thread"), - MovedModule("tkinter", "Tkinter"), - MovedModule("tkinter_dialog", "Dialog", "tkinter.dialog"), - MovedModule("tkinter_filedialog", "FileDialog", "tkinter.filedialog"), - MovedModule("tkinter_scrolledtext", "ScrolledText", "tkinter.scrolledtext"), - MovedModule("tkinter_simpledialog", "SimpleDialog", "tkinter.simpledialog"), - MovedModule("tkinter_tix", "Tix", "tkinter.tix"), - MovedModule("tkinter_ttk", "ttk", "tkinter.ttk"), - MovedModule("tkinter_constants", "Tkconstants", "tkinter.constants"), - MovedModule("tkinter_dnd", "Tkdnd", "tkinter.dnd"), - MovedModule("tkinter_colorchooser", "tkColorChooser", "tkinter.colorchooser"), - MovedModule("tkinter_commondialog", "tkCommonDialog", "tkinter.commondialog"), - MovedModule("tkinter_tkfiledialog", "tkFileDialog", "tkinter.filedialog"), - MovedModule("tkinter_font", "tkFont", "tkinter.font"), - MovedModule("tkinter_messagebox", "tkMessageBox", "tkinter.messagebox"), - MovedModule("tkinter_tksimpledialog", "tkSimpleDialog", "tkinter.simpledialog"), - MovedModule("urllib_parse", __name__ + ".moves.urllib_parse", "urllib.parse"), - MovedModule("urllib_error", __name__ + ".moves.urllib_error", "urllib.error"), - MovedModule("urllib", __name__ + ".moves.urllib", __name__ + ".moves.urllib"), - MovedModule("urllib_robotparser", "robotparser", "urllib.robotparser"), - MovedModule("xmlrpc_client", "xmlrpclib", "xmlrpc.client"), - MovedModule("xmlrpc_server", "SimpleXMLRPCServer", "xmlrpc.server"), -] -# Add windows specific modules. -if sys.platform == "win32": - _moved_attributes += [ - MovedModule("winreg", "_winreg"), - ] - -for attr in _moved_attributes: - setattr(_MovedItems, attr.name, attr) - if isinstance(attr, MovedModule): - _importer._add_module(attr, "moves." 
+ attr.name) -del attr - -_MovedItems._moved_attributes = _moved_attributes - -moves = _MovedItems(__name__ + ".moves") -_importer._add_module(moves, "moves") - - -class Module_six_moves_urllib_parse(_LazyModule): - - """Lazy loading of moved objects in six.moves.urllib_parse""" - - -_urllib_parse_moved_attributes = [ - MovedAttribute("ParseResult", "urlparse", "urllib.parse"), - MovedAttribute("SplitResult", "urlparse", "urllib.parse"), - MovedAttribute("parse_qs", "urlparse", "urllib.parse"), - MovedAttribute("parse_qsl", "urlparse", "urllib.parse"), - MovedAttribute("urldefrag", "urlparse", "urllib.parse"), - MovedAttribute("urljoin", "urlparse", "urllib.parse"), - MovedAttribute("urlparse", "urlparse", "urllib.parse"), - MovedAttribute("urlsplit", "urlparse", "urllib.parse"), - MovedAttribute("urlunparse", "urlparse", "urllib.parse"), - MovedAttribute("urlunsplit", "urlparse", "urllib.parse"), - MovedAttribute("quote", "urllib", "urllib.parse"), - MovedAttribute("quote_plus", "urllib", "urllib.parse"), - MovedAttribute("unquote", "urllib", "urllib.parse"), - MovedAttribute("unquote_plus", "urllib", "urllib.parse"), - MovedAttribute( - "unquote_to_bytes", "urllib", "urllib.parse", "unquote", "unquote_to_bytes" - ), - MovedAttribute("urlencode", "urllib", "urllib.parse"), - MovedAttribute("splitquery", "urllib", "urllib.parse"), - MovedAttribute("splittag", "urllib", "urllib.parse"), - MovedAttribute("splituser", "urllib", "urllib.parse"), - MovedAttribute("splitvalue", "urllib", "urllib.parse"), - MovedAttribute("uses_fragment", "urlparse", "urllib.parse"), - MovedAttribute("uses_netloc", "urlparse", "urllib.parse"), - MovedAttribute("uses_params", "urlparse", "urllib.parse"), - MovedAttribute("uses_query", "urlparse", "urllib.parse"), - MovedAttribute("uses_relative", "urlparse", "urllib.parse"), -] -for attr in _urllib_parse_moved_attributes: - setattr(Module_six_moves_urllib_parse, attr.name, attr) -del attr - -Module_six_moves_urllib_parse._moved_attributes = _urllib_parse_moved_attributes - -_importer._add_module( - Module_six_moves_urllib_parse(__name__ + ".moves.urllib_parse"), - "moves.urllib_parse", - "moves.urllib.parse", -) - - -class Module_six_moves_urllib_error(_LazyModule): - - """Lazy loading of moved objects in six.moves.urllib_error""" - - -_urllib_error_moved_attributes = [ - MovedAttribute("URLError", "urllib2", "urllib.error"), - MovedAttribute("HTTPError", "urllib2", "urllib.error"), - MovedAttribute("ContentTooShortError", "urllib", "urllib.error"), -] -for attr in _urllib_error_moved_attributes: - setattr(Module_six_moves_urllib_error, attr.name, attr) -del attr - -Module_six_moves_urllib_error._moved_attributes = _urllib_error_moved_attributes - -_importer._add_module( - Module_six_moves_urllib_error(__name__ + ".moves.urllib.error"), - "moves.urllib_error", - "moves.urllib.error", -) - - -class Module_six_moves_urllib_request(_LazyModule): - - """Lazy loading of moved objects in six.moves.urllib_request""" - - -_urllib_request_moved_attributes = [ - MovedAttribute("urlopen", "urllib2", "urllib.request"), - MovedAttribute("install_opener", "urllib2", "urllib.request"), - MovedAttribute("build_opener", "urllib2", "urllib.request"), - MovedAttribute("pathname2url", "urllib", "urllib.request"), - MovedAttribute("url2pathname", "urllib", "urllib.request"), - MovedAttribute("getproxies", "urllib", "urllib.request"), - MovedAttribute("Request", "urllib2", "urllib.request"), - MovedAttribute("OpenerDirector", "urllib2", "urllib.request"), - 
MovedAttribute("HTTPDefaultErrorHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPRedirectHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPCookieProcessor", "urllib2", "urllib.request"), - MovedAttribute("ProxyHandler", "urllib2", "urllib.request"), - MovedAttribute("BaseHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPPasswordMgr", "urllib2", "urllib.request"), - MovedAttribute("HTTPPasswordMgrWithDefaultRealm", "urllib2", "urllib.request"), - MovedAttribute("AbstractBasicAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPBasicAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("ProxyBasicAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("AbstractDigestAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPDigestAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("ProxyDigestAuthHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPSHandler", "urllib2", "urllib.request"), - MovedAttribute("FileHandler", "urllib2", "urllib.request"), - MovedAttribute("FTPHandler", "urllib2", "urllib.request"), - MovedAttribute("CacheFTPHandler", "urllib2", "urllib.request"), - MovedAttribute("UnknownHandler", "urllib2", "urllib.request"), - MovedAttribute("HTTPErrorProcessor", "urllib2", "urllib.request"), - MovedAttribute("urlretrieve", "urllib", "urllib.request"), - MovedAttribute("urlcleanup", "urllib", "urllib.request"), - MovedAttribute("URLopener", "urllib", "urllib.request"), - MovedAttribute("FancyURLopener", "urllib", "urllib.request"), - MovedAttribute("proxy_bypass", "urllib", "urllib.request"), - MovedAttribute("parse_http_list", "urllib2", "urllib.request"), - MovedAttribute("parse_keqv_list", "urllib2", "urllib.request"), -] -for attr in _urllib_request_moved_attributes: - setattr(Module_six_moves_urllib_request, attr.name, attr) -del attr - -Module_six_moves_urllib_request._moved_attributes = _urllib_request_moved_attributes - -_importer._add_module( - Module_six_moves_urllib_request(__name__ + ".moves.urllib.request"), - "moves.urllib_request", - "moves.urllib.request", -) - - -class Module_six_moves_urllib_response(_LazyModule): - - """Lazy loading of moved objects in six.moves.urllib_response""" - - -_urllib_response_moved_attributes = [ - MovedAttribute("addbase", "urllib", "urllib.response"), - MovedAttribute("addclosehook", "urllib", "urllib.response"), - MovedAttribute("addinfo", "urllib", "urllib.response"), - MovedAttribute("addinfourl", "urllib", "urllib.response"), -] -for attr in _urllib_response_moved_attributes: - setattr(Module_six_moves_urllib_response, attr.name, attr) -del attr - -Module_six_moves_urllib_response._moved_attributes = _urllib_response_moved_attributes - -_importer._add_module( - Module_six_moves_urllib_response(__name__ + ".moves.urllib.response"), - "moves.urllib_response", - "moves.urllib.response", -) - - -class Module_six_moves_urllib_robotparser(_LazyModule): - - """Lazy loading of moved objects in six.moves.urllib_robotparser""" - - -_urllib_robotparser_moved_attributes = [ - MovedAttribute("RobotFileParser", "robotparser", "urllib.robotparser"), -] -for attr in _urllib_robotparser_moved_attributes: - setattr(Module_six_moves_urllib_robotparser, attr.name, attr) -del attr - -Module_six_moves_urllib_robotparser._moved_attributes = ( - _urllib_robotparser_moved_attributes -) - -_importer._add_module( - Module_six_moves_urllib_robotparser(__name__ + ".moves.urllib.robotparser"), - 
"moves.urllib_robotparser", - "moves.urllib.robotparser", -) - - -class Module_six_moves_urllib(types.ModuleType): - - """Create a six.moves.urllib namespace that resembles the Python 3 namespace""" - - __path__ = [] # mark as package - parse = _importer._get_module("moves.urllib_parse") - error = _importer._get_module("moves.urllib_error") - request = _importer._get_module("moves.urllib_request") - response = _importer._get_module("moves.urllib_response") - robotparser = _importer._get_module("moves.urllib_robotparser") - - def __dir__(self): - return ["parse", "error", "request", "response", "robotparser"] - - -_importer._add_module( - Module_six_moves_urllib(__name__ + ".moves.urllib"), "moves.urllib" -) - - -def add_move(move): - """Add an item to six.moves.""" - setattr(_MovedItems, move.name, move) - - -def remove_move(name): - """Remove item from six.moves.""" - try: - delattr(_MovedItems, name) - except AttributeError: - try: - del moves.__dict__[name] - except KeyError: - raise AttributeError("no such move, %r" % (name,)) - - -if PY3: - _meth_func = "__func__" - _meth_self = "__self__" - - _func_closure = "__closure__" - _func_code = "__code__" - _func_defaults = "__defaults__" - _func_globals = "__globals__" -else: - _meth_func = "im_func" - _meth_self = "im_self" - - _func_closure = "func_closure" - _func_code = "func_code" - _func_defaults = "func_defaults" - _func_globals = "func_globals" - - -try: - advance_iterator = next -except NameError: - - def advance_iterator(it): - return it.next() - - -next = advance_iterator - - -try: - callable = callable -except NameError: - - def callable(obj): - return any("__call__" in klass.__dict__ for klass in type(obj).__mro__) - - -if PY3: - - def get_unbound_function(unbound): - return unbound - - create_bound_method = types.MethodType - - def create_unbound_method(func, cls): - return func - - Iterator = object -else: - - def get_unbound_function(unbound): - return unbound.im_func - - def create_bound_method(func, obj): - return types.MethodType(func, obj, obj.__class__) - - def create_unbound_method(func, cls): - return types.MethodType(func, None, cls) - - class Iterator(object): - def next(self): - return type(self).__next__(self) - - callable = callable -_add_doc( - get_unbound_function, """Get the function out of a possibly unbound function""" -) - - -get_method_function = operator.attrgetter(_meth_func) -get_method_self = operator.attrgetter(_meth_self) -get_function_closure = operator.attrgetter(_func_closure) -get_function_code = operator.attrgetter(_func_code) -get_function_defaults = operator.attrgetter(_func_defaults) -get_function_globals = operator.attrgetter(_func_globals) - - -if PY3: - - def iterkeys(d, **kw): - return iter(d.keys(**kw)) - - def itervalues(d, **kw): - return iter(d.values(**kw)) - - def iteritems(d, **kw): - return iter(d.items(**kw)) - - def iterlists(d, **kw): - return iter(d.lists(**kw)) - - viewkeys = operator.methodcaller("keys") - - viewvalues = operator.methodcaller("values") - - viewitems = operator.methodcaller("items") -else: - - def iterkeys(d, **kw): - return d.iterkeys(**kw) - - def itervalues(d, **kw): - return d.itervalues(**kw) - - def iteritems(d, **kw): - return d.iteritems(**kw) - - def iterlists(d, **kw): - return d.iterlists(**kw) - - viewkeys = operator.methodcaller("viewkeys") - - viewvalues = operator.methodcaller("viewvalues") - - viewitems = operator.methodcaller("viewitems") - -_add_doc(iterkeys, "Return an iterator over the keys of a dictionary.") -_add_doc(itervalues, "Return 
an iterator over the values of a dictionary.") -_add_doc(iteritems, "Return an iterator over the (key, value) pairs of a dictionary.") -_add_doc( - iterlists, "Return an iterator over the (key, [values]) pairs of a dictionary." -) - - -if PY3: - - def b(s): - return s.encode("latin-1") - - def u(s): - return s - - unichr = chr - import struct - - int2byte = struct.Struct(">B").pack - del struct - byte2int = operator.itemgetter(0) - indexbytes = operator.getitem - iterbytes = iter - import io - - StringIO = io.StringIO - BytesIO = io.BytesIO - del io - _assertCountEqual = "assertCountEqual" - if sys.version_info[1] <= 1: - _assertRaisesRegex = "assertRaisesRegexp" - _assertRegex = "assertRegexpMatches" - _assertNotRegex = "assertNotRegexpMatches" - else: - _assertRaisesRegex = "assertRaisesRegex" - _assertRegex = "assertRegex" - _assertNotRegex = "assertNotRegex" -else: - - def b(s): - return s - - # Workaround for standalone backslash - - def u(s): - return unicode(s.replace(r"\\", r"\\\\"), "unicode_escape") - - unichr = unichr - int2byte = chr - - def byte2int(bs): - return ord(bs[0]) - - def indexbytes(buf, i): - return ord(buf[i]) - - iterbytes = functools.partial(itertools.imap, ord) - import StringIO - - StringIO = BytesIO = StringIO.StringIO - _assertCountEqual = "assertItemsEqual" - _assertRaisesRegex = "assertRaisesRegexp" - _assertRegex = "assertRegexpMatches" - _assertNotRegex = "assertNotRegexpMatches" -_add_doc(b, """Byte literal""") -_add_doc(u, """Text literal""") - - -def assertCountEqual(self, *args, **kwargs): - return getattr(self, _assertCountEqual)(*args, **kwargs) - - -def assertRaisesRegex(self, *args, **kwargs): - return getattr(self, _assertRaisesRegex)(*args, **kwargs) - - -def assertRegex(self, *args, **kwargs): - return getattr(self, _assertRegex)(*args, **kwargs) - - -def assertNotRegex(self, *args, **kwargs): - return getattr(self, _assertNotRegex)(*args, **kwargs) - - -if PY3: - exec_ = getattr(moves.builtins, "exec") - - def reraise(tp, value, tb=None): - try: - if value is None: - value = tp() - if value.__traceback__ is not tb: - raise value.with_traceback(tb) - raise value - finally: - value = None - tb = None - -else: - - def exec_(_code_, _globs_=None, _locs_=None): - """Execute code in a namespace.""" - if _globs_ is None: - frame = sys._getframe(1) - _globs_ = frame.f_globals - if _locs_ is None: - _locs_ = frame.f_locals - del frame - elif _locs_ is None: - _locs_ = _globs_ - exec ("""exec _code_ in _globs_, _locs_""") - - exec_( - """def reraise(tp, value, tb=None): - try: - raise tp, value, tb - finally: - tb = None -""" - ) - - -if sys.version_info[:2] > (3,): - exec_( - """def raise_from(value, from_value): - try: - raise value from from_value - finally: - value = None -""" - ) -else: - - def raise_from(value, from_value): - raise value - - -print_ = getattr(moves.builtins, "print", None) -if print_ is None: - - def print_(*args, **kwargs): - """The new-style print function for Python 2.4 and 2.5.""" - fp = kwargs.pop("file", sys.stdout) - if fp is None: - return - - def write(data): - if not isinstance(data, basestring): - data = str(data) - # If the file has an encoding, encode unicode with it. 
- if ( - isinstance(fp, file) - and isinstance(data, unicode) - and fp.encoding is not None - ): - errors = getattr(fp, "errors", None) - if errors is None: - errors = "strict" - data = data.encode(fp.encoding, errors) - fp.write(data) - - want_unicode = False - sep = kwargs.pop("sep", None) - if sep is not None: - if isinstance(sep, unicode): - want_unicode = True - elif not isinstance(sep, str): - raise TypeError("sep must be None or a string") - end = kwargs.pop("end", None) - if end is not None: - if isinstance(end, unicode): - want_unicode = True - elif not isinstance(end, str): - raise TypeError("end must be None or a string") - if kwargs: - raise TypeError("invalid keyword arguments to print()") - if not want_unicode: - for arg in args: - if isinstance(arg, unicode): - want_unicode = True - break - if want_unicode: - newline = unicode("\n") - space = unicode(" ") - else: - newline = "\n" - space = " " - if sep is None: - sep = space - if end is None: - end = newline - for i, arg in enumerate(args): - if i: - write(sep) - write(arg) - write(end) - - -if sys.version_info[:2] < (3, 3): - _print = print_ - - def print_(*args, **kwargs): - fp = kwargs.get("file", sys.stdout) - flush = kwargs.pop("flush", False) - _print(*args, **kwargs) - if flush and fp is not None: - fp.flush() - - -_add_doc(reraise, """Reraise an exception.""") - -if sys.version_info[0:2] < (3, 4): - # This does exactly the same what the :func:`py3:functools.update_wrapper` - # function does on Python versions after 3.2. It sets the ``__wrapped__`` - # attribute on ``wrapper`` object and it doesn't raise an error if any of - # the attributes mentioned in ``assigned`` and ``updated`` are missing on - # ``wrapped`` object. - def _update_wrapper( - wrapper, - wrapped, - assigned=functools.WRAPPER_ASSIGNMENTS, - updated=functools.WRAPPER_UPDATES, - ): - for attr in assigned: - try: - value = getattr(wrapped, attr) - except AttributeError: - continue - else: - setattr(wrapper, attr, value) - for attr in updated: - getattr(wrapper, attr).update(getattr(wrapped, attr, {})) - wrapper.__wrapped__ = wrapped - return wrapper - - _update_wrapper.__doc__ = functools.update_wrapper.__doc__ - - def wraps( - wrapped, - assigned=functools.WRAPPER_ASSIGNMENTS, - updated=functools.WRAPPER_UPDATES, - ): - return functools.partial( - _update_wrapper, wrapped=wrapped, assigned=assigned, updated=updated - ) - - wraps.__doc__ = functools.wraps.__doc__ - -else: - wraps = functools.wraps - - -def with_metaclass(meta, *bases): - """Create a base class with a metaclass.""" - # This requires a bit of explanation: the basic idea is to make a dummy - # metaclass for one level of class instantiation that replaces itself with - # the actual metaclass. - class metaclass(type): - def __new__(cls, name, this_bases, d): - if sys.version_info[:2] >= (3, 7): - # This version introduced PEP 560 that requires a bit - # of extra care (we mimic what is done by __build_class__). 
- resolved_bases = types.resolve_bases(bases) - if resolved_bases is not bases: - d["__orig_bases__"] = bases - else: - resolved_bases = bases - return meta(name, resolved_bases, d) - - @classmethod - def __prepare__(cls, name, this_bases): - return meta.__prepare__(name, bases) - - return type.__new__(metaclass, "temporary_class", (), {}) - - -def add_metaclass(metaclass): - """Class decorator for creating a class with a metaclass.""" - - def wrapper(cls): - orig_vars = cls.__dict__.copy() - slots = orig_vars.get("__slots__") - if slots is not None: - if isinstance(slots, str): - slots = [slots] - for slots_var in slots: - orig_vars.pop(slots_var) - orig_vars.pop("__dict__", None) - orig_vars.pop("__weakref__", None) - if hasattr(cls, "__qualname__"): - orig_vars["__qualname__"] = cls.__qualname__ - return metaclass(cls.__name__, cls.__bases__, orig_vars) - - return wrapper - - -def ensure_binary(s, encoding="utf-8", errors="strict"): - """Coerce **s** to six.binary_type. - - For Python 2: - - `unicode` -> encoded to `str` - - `str` -> `str` - - For Python 3: - - `str` -> encoded to `bytes` - - `bytes` -> `bytes` - """ - if isinstance(s, binary_type): - return s - if isinstance(s, text_type): - return s.encode(encoding, errors) - raise TypeError("not expecting type '%s'" % type(s)) - - -def ensure_str(s, encoding="utf-8", errors="strict"): - """Coerce *s* to `str`. - - For Python 2: - - `unicode` -> encoded to `str` - - `str` -> `str` - - For Python 3: - - `str` -> `str` - - `bytes` -> decoded to `str` - """ - # Optimization: Fast return for the common case. - if type(s) is str: - return s - if PY2 and isinstance(s, text_type): - return s.encode(encoding, errors) - elif PY3 and isinstance(s, binary_type): - return s.decode(encoding, errors) - elif not isinstance(s, (text_type, binary_type)): - raise TypeError("not expecting type '%s'" % type(s)) - return s - - -def ensure_text(s, encoding="utf-8", errors="strict"): - """Coerce *s* to six.text_type. - - For Python 2: - - `unicode` -> `unicode` - - `str` -> `unicode` - - For Python 3: - - `str` -> `str` - - `bytes` -> decoded to `str` - """ - if isinstance(s, binary_type): - return s.decode(encoding, errors) - elif isinstance(s, text_type): - return s - else: - raise TypeError("not expecting type '%s'" % type(s)) - - -def python_2_unicode_compatible(klass): - """ - A class decorator that defines __unicode__ and __str__ methods under Python 2. - Under Python 3 it does nothing. - - To support Python 2 and 3 with a single code base, define a __str__ method - returning text and apply this decorator to the class. - """ - if PY2: - if "__str__" not in klass.__dict__: - raise ValueError( - "@python_2_unicode_compatible cannot be applied " - "to %s because it doesn't define __str__()." % klass.__name__ - ) - klass.__unicode__ = klass.__str__ - klass.__str__ = lambda self: self.__unicode__().encode("utf-8") - return klass - - -# Complete the moves implementation. -# This code is at the end of this module to speed up module loading. -# Turn this module into a package. -__path__ = [] # required for PEP 302 and PEP 451 -__package__ = __name__ # see PEP 366 @ReservedAssignment -if globals().get("__spec__") is not None: - __spec__.submodule_search_locations = [] # PEP 451 @UndefinedVariable -# Remove other six meta path importers, since they cause problems. This can -# happen if six is removed from sys.modules and then reloaded. (Setuptools does -# this for some reason.) 
-if sys.meta_path: - for i, importer in enumerate(sys.meta_path): - # Here's some real nastiness: Another "instance" of the six module might - # be floating around. Therefore, we can't use isinstance() to check for - # the six meta path importer, since the other six instance will have - # inserted an importer with different class. - if ( - type(importer).__name__ == "_SixMetaPathImporter" - and importer.name == __name__ - ): - del sys.meta_path[i] - break - del i, importer -# Finally, add the importer to the meta path import hook. -sys.meta_path.append(_importer) diff --git a/spaces/CC123123/blip2_t/utils.py b/spaces/CC123123/blip2_t/utils.py deleted file mode 100644 index a5a67d654a67ee37847d428c94524c7cabee3e1d..0000000000000000000000000000000000000000 --- a/spaces/CC123123/blip2_t/utils.py +++ /dev/null @@ -1,27 +0,0 @@ -import os - - -class Endpoint: - def __init__(self): - self._url = None - - @property - def url(self): - if self._url is None: - self._url = self.get_url() - - return self._url - - def get_url(self): - endpoint = os.environ.get("endpoint") - - return endpoint - - -def get_token(): - token = os.environ.get("auth_token") - - if token is None: - raise ValueError("auth-token not found in environment variables") - - return token diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/utils/serialize.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/utils/serialize.py deleted file mode 100644 index 734a62c2c4ecfd520eb9e8b941857b6f7e17d4c8..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/utils/serialize.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import cloudpickle - - -class PicklableWrapper(object): - """ - Wrap an object to make it more picklable, note that it uses - heavy weight serialization libraries that are slower than pickle. - It's best to use it only on closures (which are usually not picklable). - - This is a simplified version of - https://github.com/joblib/joblib/blob/master/joblib/externals/loky/cloudpickle_wrapper.py - """ - - def __init__(self, obj): - self._obj = obj - - def __reduce__(self): - s = cloudpickle.dumps(self._obj) - return cloudpickle.loads, (s,) - - def __call__(self, *args, **kwargs): - return self._obj(*args, **kwargs) - - def __getattr__(self, attr): - # Ensure that the wrapped object can be used seamlessly as the previous object. - if attr not in ["_obj"]: - return getattr(self._obj, attr) - return getattr(self, attr) diff --git a/spaces/CVPR/LIVE/pybind11/include/pybind11/detail/internals.h b/spaces/CVPR/LIVE/pybind11/include/pybind11/detail/internals.h deleted file mode 100644 index cf40e9fe995cd952e0dec8378b44b3ac8477f235..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/include/pybind11/detail/internals.h +++ /dev/null @@ -1,352 +0,0 @@ -/* - pybind11/detail/internals.h: Internal data structure and related functions - - Copyright (c) 2017 Wenzel Jakob - - All rights reserved. Use of this source code is governed by a - BSD-style license that can be found in the LICENSE file. 
-*/ - -#pragma once - -#include "../pytypes.h" - -PYBIND11_NAMESPACE_BEGIN(PYBIND11_NAMESPACE) -PYBIND11_NAMESPACE_BEGIN(detail) -// Forward declarations -inline PyTypeObject *make_static_property_type(); -inline PyTypeObject *make_default_metaclass(); -inline PyObject *make_object_base_type(PyTypeObject *metaclass); - -// The old Python Thread Local Storage (TLS) API is deprecated in Python 3.7 in favor of the new -// Thread Specific Storage (TSS) API. -#if PY_VERSION_HEX >= 0x03070000 -# define PYBIND11_TLS_KEY_INIT(var) Py_tss_t *var = nullptr -# define PYBIND11_TLS_GET_VALUE(key) PyThread_tss_get((key)) -# define PYBIND11_TLS_REPLACE_VALUE(key, value) PyThread_tss_set((key), (value)) -# define PYBIND11_TLS_DELETE_VALUE(key) PyThread_tss_set((key), nullptr) -# define PYBIND11_TLS_FREE(key) PyThread_tss_free(key) -#else - // Usually an int but a long on Cygwin64 with Python 3.x -# define PYBIND11_TLS_KEY_INIT(var) decltype(PyThread_create_key()) var = 0 -# define PYBIND11_TLS_GET_VALUE(key) PyThread_get_key_value((key)) -# if PY_MAJOR_VERSION < 3 -# define PYBIND11_TLS_DELETE_VALUE(key) \ - PyThread_delete_key_value(key) -# define PYBIND11_TLS_REPLACE_VALUE(key, value) \ - do { \ - PyThread_delete_key_value((key)); \ - PyThread_set_key_value((key), (value)); \ - } while (false) -# else -# define PYBIND11_TLS_DELETE_VALUE(key) \ - PyThread_set_key_value((key), nullptr) -# define PYBIND11_TLS_REPLACE_VALUE(key, value) \ - PyThread_set_key_value((key), (value)) -# endif -# define PYBIND11_TLS_FREE(key) (void)key -#endif - -// Python loads modules by default with dlopen with the RTLD_LOCAL flag; under libc++ and possibly -// other STLs, this means `typeid(A)` from one module won't equal `typeid(A)` from another module -// even when `A` is the same, non-hidden-visibility type (e.g. from a common include). Under -// libstdc++, this doesn't happen: equality and the type_index hash are based on the type name, -// which works. If not under a known-good stl, provide our own name-based hash and equality -// functions that use the type name. -#if defined(__GLIBCXX__) -inline bool same_type(const std::type_info &lhs, const std::type_info &rhs) { return lhs == rhs; } -using type_hash = std::hash; -using type_equal_to = std::equal_to; -#else -inline bool same_type(const std::type_info &lhs, const std::type_info &rhs) { - return lhs.name() == rhs.name() || std::strcmp(lhs.name(), rhs.name()) == 0; -} - -struct type_hash { - size_t operator()(const std::type_index &t) const { - size_t hash = 5381; - const char *ptr = t.name(); - while (auto c = static_cast(*ptr++)) - hash = (hash * 33) ^ c; - return hash; - } -}; - -struct type_equal_to { - bool operator()(const std::type_index &lhs, const std::type_index &rhs) const { - return lhs.name() == rhs.name() || std::strcmp(lhs.name(), rhs.name()) == 0; - } -}; -#endif - -template -using type_map = std::unordered_map; - -struct overload_hash { - inline size_t operator()(const std::pair& v) const { - size_t value = std::hash()(v.first); - value ^= std::hash()(v.second) + 0x9e3779b9 + (value<<6) + (value>>2); - return value; - } -}; - -/// Internal data structure used to track registered instances and types. -/// Whenever binary incompatible changes are made to this structure, -/// `PYBIND11_INTERNALS_VERSION` must be incremented. 
-struct internals { - type_map registered_types_cpp; // std::type_index -> pybind11's type information - std::unordered_map> registered_types_py; // PyTypeObject* -> base type_info(s) - std::unordered_multimap registered_instances; // void * -> instance* - std::unordered_set, overload_hash> inactive_overload_cache; - type_map> direct_conversions; - std::unordered_map> patients; - std::forward_list registered_exception_translators; - std::unordered_map shared_data; // Custom data to be shared across extensions - std::vector loader_patient_stack; // Used by `loader_life_support` - std::forward_list static_strings; // Stores the std::strings backing detail::c_str() - PyTypeObject *static_property_type; - PyTypeObject *default_metaclass; - PyObject *instance_base; -#if defined(WITH_THREAD) - PYBIND11_TLS_KEY_INIT(tstate); - PyInterpreterState *istate = nullptr; - ~internals() { - // This destructor is called *after* Py_Finalize() in finalize_interpreter(). - // That *SHOULD BE* fine. The following details what happens whe PyThread_tss_free is called. - // PYBIND11_TLS_FREE is PyThread_tss_free on python 3.7+. On older python, it does nothing. - // PyThread_tss_free calls PyThread_tss_delete and PyMem_RawFree. - // PyThread_tss_delete just calls TlsFree (on Windows) or pthread_key_delete (on *NIX). Neither - // of those have anything to do with CPython internals. - // PyMem_RawFree *requires* that the `tstate` be allocated with the CPython allocator. - PYBIND11_TLS_FREE(tstate); - } -#endif -}; - -/// Additional type information which does not fit into the PyTypeObject. -/// Changes to this struct also require bumping `PYBIND11_INTERNALS_VERSION`. -struct type_info { - PyTypeObject *type; - const std::type_info *cpptype; - size_t type_size, type_align, holder_size_in_ptrs; - void *(*operator_new)(size_t); - void (*init_instance)(instance *, const void *); - void (*dealloc)(value_and_holder &v_h); - std::vector implicit_conversions; - std::vector> implicit_casts; - std::vector *direct_conversions; - buffer_info *(*get_buffer)(PyObject *, void *) = nullptr; - void *get_buffer_data = nullptr; - void *(*module_local_load)(PyObject *, const type_info *) = nullptr; - /* A simple type never occurs as a (direct or indirect) parent - * of a class that makes use of multiple inheritance */ - bool simple_type : 1; - /* True if there is no multiple inheritance in this type's inheritance tree */ - bool simple_ancestors : 1; - /* for base vs derived holder_type checks */ - bool default_holder : 1; - /* true if this is a type registered with py::module_local */ - bool module_local : 1; -}; - -/// Tracks the `internals` and `type_info` ABI version independent of the main library version -#define PYBIND11_INTERNALS_VERSION 4 - -/// On MSVC, debug and release builds are not ABI-compatible! -#if defined(_MSC_VER) && defined(_DEBUG) -# define PYBIND11_BUILD_TYPE "_debug" -#else -# define PYBIND11_BUILD_TYPE "" -#endif - -/// Let's assume that different compilers are ABI-incompatible. 
-#if defined(_MSC_VER) -# define PYBIND11_COMPILER_TYPE "_msvc" -#elif defined(__INTEL_COMPILER) -# define PYBIND11_COMPILER_TYPE "_icc" -#elif defined(__clang__) -# define PYBIND11_COMPILER_TYPE "_clang" -#elif defined(__PGI) -# define PYBIND11_COMPILER_TYPE "_pgi" -#elif defined(__MINGW32__) -# define PYBIND11_COMPILER_TYPE "_mingw" -#elif defined(__CYGWIN__) -# define PYBIND11_COMPILER_TYPE "_gcc_cygwin" -#elif defined(__GNUC__) -# define PYBIND11_COMPILER_TYPE "_gcc" -#else -# define PYBIND11_COMPILER_TYPE "_unknown" -#endif - -#if defined(_LIBCPP_VERSION) -# define PYBIND11_STDLIB "_libcpp" -#elif defined(__GLIBCXX__) || defined(__GLIBCPP__) -# define PYBIND11_STDLIB "_libstdcpp" -#else -# define PYBIND11_STDLIB "" -#endif - -/// On Linux/OSX, changes in __GXX_ABI_VERSION__ indicate ABI incompatibility. -#if defined(__GXX_ABI_VERSION) -# define PYBIND11_BUILD_ABI "_cxxabi" PYBIND11_TOSTRING(__GXX_ABI_VERSION) -#else -# define PYBIND11_BUILD_ABI "" -#endif - -#if defined(WITH_THREAD) -# define PYBIND11_INTERNALS_KIND "" -#else -# define PYBIND11_INTERNALS_KIND "_without_thread" -#endif - -#define PYBIND11_INTERNALS_ID "__pybind11_internals_v" \ - PYBIND11_TOSTRING(PYBIND11_INTERNALS_VERSION) PYBIND11_INTERNALS_KIND PYBIND11_COMPILER_TYPE PYBIND11_STDLIB PYBIND11_BUILD_ABI PYBIND11_BUILD_TYPE "__" - -#define PYBIND11_MODULE_LOCAL_ID "__pybind11_module_local_v" \ - PYBIND11_TOSTRING(PYBIND11_INTERNALS_VERSION) PYBIND11_INTERNALS_KIND PYBIND11_COMPILER_TYPE PYBIND11_STDLIB PYBIND11_BUILD_ABI PYBIND11_BUILD_TYPE "__" - -/// Each module locally stores a pointer to the `internals` data. The data -/// itself is shared among modules with the same `PYBIND11_INTERNALS_ID`. -inline internals **&get_internals_pp() { - static internals **internals_pp = nullptr; - return internals_pp; -} - -inline void translate_exception(std::exception_ptr p) { - try { - if (p) std::rethrow_exception(p); - } catch (error_already_set &e) { e.restore(); return; - } catch (const builtin_exception &e) { e.set_error(); return; - } catch (const std::bad_alloc &e) { PyErr_SetString(PyExc_MemoryError, e.what()); return; - } catch (const std::domain_error &e) { PyErr_SetString(PyExc_ValueError, e.what()); return; - } catch (const std::invalid_argument &e) { PyErr_SetString(PyExc_ValueError, e.what()); return; - } catch (const std::length_error &e) { PyErr_SetString(PyExc_ValueError, e.what()); return; - } catch (const std::out_of_range &e) { PyErr_SetString(PyExc_IndexError, e.what()); return; - } catch (const std::range_error &e) { PyErr_SetString(PyExc_ValueError, e.what()); return; - } catch (const std::overflow_error &e) { PyErr_SetString(PyExc_OverflowError, e.what()); return; - } catch (const std::exception &e) { PyErr_SetString(PyExc_RuntimeError, e.what()); return; - } catch (...) { - PyErr_SetString(PyExc_RuntimeError, "Caught an unknown exception!"); - return; - } -} - -#if !defined(__GLIBCXX__) -inline void translate_local_exception(std::exception_ptr p) { - try { - if (p) std::rethrow_exception(p); - } catch (error_already_set &e) { e.restore(); return; - } catch (const builtin_exception &e) { e.set_error(); return; - } -} -#endif - -/// Return a reference to the current `internals` data -PYBIND11_NOINLINE inline internals &get_internals() { - auto **&internals_pp = get_internals_pp(); - if (internals_pp && *internals_pp) - return **internals_pp; - - // Ensure that the GIL is held since we will need to make Python calls. - // Cannot use py::gil_scoped_acquire here since that constructor calls get_internals. 
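`translate_exception` above is the last link in a chain of translators stored in `internals.registered_exception_translators`; extension authors normally add their own link through pybind11's documented `register_exception_translator`. A hedged sketch follows — the exception type `MyError` and the module name `demo` are invented for this example:

```cpp
#include <pybind11/pybind11.h>
#include <stdexcept>

namespace py = pybind11;

// A user-defined exception type that Python knows nothing about.
struct MyError : std::runtime_error {
    using std::runtime_error::runtime_error;
};

PYBIND11_MODULE(demo, m) {
    // Translators are pushed to the front of the list, so the newest one runs
    // first; anything it does not handle falls through to translate_exception.
    py::register_exception_translator([](std::exception_ptr p) {
        try {
            if (p) std::rethrow_exception(p);
        } catch (const MyError &e) {
            PyErr_SetString(PyExc_ValueError, e.what());
        }
    });

    m.def("fail", [] { throw MyError("surfaces in Python as ValueError"); });
}
```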
- struct gil_scoped_acquire_local { - gil_scoped_acquire_local() : state (PyGILState_Ensure()) {} - ~gil_scoped_acquire_local() { PyGILState_Release(state); } - const PyGILState_STATE state; - } gil; - - constexpr auto *id = PYBIND11_INTERNALS_ID; - auto builtins = handle(PyEval_GetBuiltins()); - if (builtins.contains(id) && isinstance(builtins[id])) { - internals_pp = static_cast(capsule(builtins[id])); - - // We loaded builtins through python's builtins, which means that our `error_already_set` - // and `builtin_exception` may be different local classes than the ones set up in the - // initial exception translator, below, so add another for our local exception classes. - // - // libstdc++ doesn't require this (types there are identified only by name) -#if !defined(__GLIBCXX__) - (*internals_pp)->registered_exception_translators.push_front(&translate_local_exception); -#endif - } else { - if (!internals_pp) internals_pp = new internals*(); - auto *&internals_ptr = *internals_pp; - internals_ptr = new internals(); -#if defined(WITH_THREAD) - - #if PY_VERSION_HEX < 0x03090000 - PyEval_InitThreads(); - #endif - PyThreadState *tstate = PyThreadState_Get(); - #if PY_VERSION_HEX >= 0x03070000 - internals_ptr->tstate = PyThread_tss_alloc(); - if (!internals_ptr->tstate || PyThread_tss_create(internals_ptr->tstate)) - pybind11_fail("get_internals: could not successfully initialize the TSS key!"); - PyThread_tss_set(internals_ptr->tstate, tstate); - #else - internals_ptr->tstate = PyThread_create_key(); - if (internals_ptr->tstate == -1) - pybind11_fail("get_internals: could not successfully initialize the TLS key!"); - PyThread_set_key_value(internals_ptr->tstate, tstate); - #endif - internals_ptr->istate = tstate->interp; -#endif - builtins[id] = capsule(internals_pp); - internals_ptr->registered_exception_translators.push_front(&translate_exception); - internals_ptr->static_property_type = make_static_property_type(); - internals_ptr->default_metaclass = make_default_metaclass(); - internals_ptr->instance_base = make_object_base_type(internals_ptr->default_metaclass); - } - return **internals_pp; -} - -/// Works like `internals.registered_types_cpp`, but for module-local registered types: -inline type_map ®istered_local_types_cpp() { - static type_map locals{}; - return locals; -} - -/// Constructs a std::string with the given arguments, stores it in `internals`, and returns its -/// `c_str()`. Such strings objects have a long storage duration -- the internal strings are only -/// cleared when the program exits or after interpreter shutdown (when embedding), and so are -/// suitable for c-style strings needed by Python internals (such as PyTypeObject's tp_name). -template -const char *c_str(Args &&...args) { - auto &strings = get_internals().static_strings; - strings.emplace_front(std::forward(args)...); - return strings.front().c_str(); -} - -PYBIND11_NAMESPACE_END(detail) - -/// Returns a named pointer that is shared among all extension modules (using the same -/// pybind11 version) running in the current interpreter. Names starting with underscores -/// are reserved for internal usage. Returns `nullptr` if no matching entry was found. -inline PYBIND11_NOINLINE void *get_shared_data(const std::string &name) { - auto &internals = detail::get_internals(); - auto it = internals.shared_data.find(name); - return it != internals.shared_data.end() ? it->second : nullptr; -} - -/// Set the shared data that can be later recovered by `get_shared_data()`. 
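Together with `set_shared_data` and `get_or_create_shared_data` defined just below, `get_shared_data` gives extension modules a small interpreter-wide key/value store. A usage sketch — the module name, key, and stored type are invented, and keys starting with an underscore are reserved for pybind11 itself:

```cpp
#include <pybind11/pybind11.h>

namespace py = pybind11;

PYBIND11_MODULE(demo_shared, m) {
    m.def("bump", []() {
        // Default-constructs an int (value 0) the first time any module asks
        // for this key; afterwards every module in the interpreter sees the
        // same entry.
        auto &counter = py::get_or_create_shared_data<int>("demo_counter");
        return ++counter;
    });

    m.def("peek", []() -> int {
        void *p = py::get_shared_data("demo_counter");
        return p ? *static_cast<int *>(p) : -1;  // -1: nothing stored yet
    });
}
```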
-inline PYBIND11_NOINLINE void *set_shared_data(const std::string &name, void *data) { - detail::get_internals().shared_data[name] = data; - return data; -} - -/// Returns a typed reference to a shared data entry (by using `get_shared_data()`) if -/// such entry exists. Otherwise, a new object of default-constructible type `T` is -/// added to the shared data under the given name and a reference to it is returned. -template <typename T> -T &get_or_create_shared_data(const std::string &name) { - auto &internals = detail::get_internals(); - auto it = internals.shared_data.find(name); - T *ptr = (T *) (it != internals.shared_data.end() ? it->second : nullptr); - if (!ptr) { - ptr = new T(); - internals.shared_data[name] = ptr; - } - return *ptr; -} - -PYBIND11_NAMESPACE_END(PYBIND11_NAMESPACE) diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/async/transform.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/async/transform.h deleted file mode 100644 index abb2163ead0654a805deb3b31ca29f8c576ac9e9..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/async/transform.h +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Copyright 2008-2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -// The purpose of this header is to #include the async/transform.h header of the -// sequential, host, and device systems. It should be #included in any code -// which uses ADL to dispatch async transform.
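The header whose license block appears above only wires up argument-dependent dispatch; user code goes through the public `<thrust/async/transform.h>` entry point instead. A rough usage sketch, assuming a Thrust release recent enough to ship the async algorithms (this example is not taken from the repository, treats the returned handle generically as "event-like", and needs nvcc to build):

```cpp
// Sketch: an asynchronous element-wise transform that returns an event-like
// handle instead of blocking the calling host thread.
#include <thrust/async/transform.h>
#include <thrust/device_vector.h>
#include <thrust/execution_policy.h>
#include <thrust/functional.h>

int main() {
    thrust::device_vector<int> in(4, 21);
    thrust::device_vector<int> out(4);

    auto ev = thrust::async::transform(thrust::device,
                                       in.begin(), in.end(),
                                       out.begin(),
                                       thrust::negate<int>());

    ev.wait();                     // synchronize before touching the results
    return out[0] == -21 ? 0 : 1;  // values were negated on the device
}
```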
- -#pragma once - -#include - -//#include - -//#define __THRUST_HOST_SYSTEM_ASYNC_TRANSFORM_HEADER <__THRUST_HOST_SYSTEM_ROOT/detail/async/transform.h> -//#include __THRUST_HOST_SYSTEM_ASYNC_TRANSFORM_HEADER -//#undef __THRUST_HOST_SYSTEM_ASYNC_TRANSFORM_HEADER - -#define __THRUST_DEVICE_SYSTEM_ASYNC_TRANSFORM_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/detail/async/transform.h> -#include __THRUST_DEVICE_SYSTEM_ASYNC_TRANSFORM_HEADER -#undef __THRUST_DEVICE_SYSTEM_ASYNC_TRANSFORM_HEADER - diff --git a/spaces/CVPR/lama-example/saicinpainting/utils.py b/spaces/CVPR/lama-example/saicinpainting/utils.py deleted file mode 100644 index d0914320eab96e197ae379b94ea7eeb2fe5dfd79..0000000000000000000000000000000000000000 --- a/spaces/CVPR/lama-example/saicinpainting/utils.py +++ /dev/null @@ -1,174 +0,0 @@ -import bisect -import functools -import logging -import numbers -import os -import signal -import sys -import traceback -import warnings - -import torch -from pytorch_lightning import seed_everything - -LOGGER = logging.getLogger(__name__) - - -def check_and_warn_input_range(tensor, min_value, max_value, name): - actual_min = tensor.min() - actual_max = tensor.max() - if actual_min < min_value or actual_max > max_value: - warnings.warn(f"{name} must be in {min_value}..{max_value} range, but it ranges {actual_min}..{actual_max}") - - -def sum_dict_with_prefix(target, cur_dict, prefix, default=0): - for k, v in cur_dict.items(): - target_key = prefix + k - target[target_key] = target.get(target_key, default) + v - - -def average_dicts(dict_list): - result = {} - norm = 1e-3 - for dct in dict_list: - sum_dict_with_prefix(result, dct, '') - norm += 1 - for k in list(result): - result[k] /= norm - return result - - -def add_prefix_to_keys(dct, prefix): - return {prefix + k: v for k, v in dct.items()} - - -def set_requires_grad(module, value): - for param in module.parameters(): - param.requires_grad = value - - -def flatten_dict(dct): - result = {} - for k, v in dct.items(): - if isinstance(k, tuple): - k = '_'.join(k) - if isinstance(v, dict): - for sub_k, sub_v in flatten_dict(v).items(): - result[f'{k}_{sub_k}'] = sub_v - else: - result[k] = v - return result - - -class LinearRamp: - def __init__(self, start_value=0, end_value=1, start_iter=-1, end_iter=0): - self.start_value = start_value - self.end_value = end_value - self.start_iter = start_iter - self.end_iter = end_iter - - def __call__(self, i): - if i < self.start_iter: - return self.start_value - if i >= self.end_iter: - return self.end_value - part = (i - self.start_iter) / (self.end_iter - self.start_iter) - return self.start_value * (1 - part) + self.end_value * part - - -class LadderRamp: - def __init__(self, start_iters, values): - self.start_iters = start_iters - self.values = values - assert len(values) == len(start_iters) + 1, (len(values), len(start_iters)) - - def __call__(self, i): - segment_i = bisect.bisect_right(self.start_iters, i) - return self.values[segment_i] - - -def get_ramp(kind='ladder', **kwargs): - if kind == 'linear': - return LinearRamp(**kwargs) - if kind == 'ladder': - return LadderRamp(**kwargs) - raise ValueError(f'Unexpected ramp kind: {kind}') - - -def print_traceback_handler(sig, frame): - LOGGER.warning(f'Received signal {sig}') - bt = ''.join(traceback.format_stack()) - LOGGER.warning(f'Requested stack trace:\n{bt}') - - -def register_debug_signal_handlers(sig=signal.SIGUSR1, handler=print_traceback_handler): - LOGGER.warning(f'Setting signal {sig} handler {handler}') - signal.signal(sig, handler) - - -def 
handle_deterministic_config(config): - seed = dict(config).get('seed', None) - if seed is None: - return False - - seed_everything(seed) - return True - - -def get_shape(t): - if torch.is_tensor(t): - return tuple(t.shape) - elif isinstance(t, dict): - return {n: get_shape(q) for n, q in t.items()} - elif isinstance(t, (list, tuple)): - return [get_shape(q) for q in t] - elif isinstance(t, numbers.Number): - return type(t) - else: - raise ValueError('unexpected type {}'.format(type(t))) - - -def get_has_ddp_rank(): - master_port = os.environ.get('MASTER_PORT', None) - node_rank = os.environ.get('NODE_RANK', None) - local_rank = os.environ.get('LOCAL_RANK', None) - world_size = os.environ.get('WORLD_SIZE', None) - has_rank = master_port is not None or node_rank is not None or local_rank is not None or world_size is not None - return has_rank - - -def handle_ddp_subprocess(): - def main_decorator(main_func): - @functools.wraps(main_func) - def new_main(*args, **kwargs): - # Trainer sets MASTER_PORT, NODE_RANK, LOCAL_RANK, WORLD_SIZE - parent_cwd = os.environ.get('TRAINING_PARENT_WORK_DIR', None) - has_parent = parent_cwd is not None - has_rank = get_has_ddp_rank() - assert has_parent == has_rank, f'Inconsistent state: has_parent={has_parent}, has_rank={has_rank}' - - if has_parent: - # we are in the worker - sys.argv.extend([ - f'hydra.run.dir={parent_cwd}', - # 'hydra/hydra_logging=disabled', - # 'hydra/job_logging=disabled' - ]) - # do nothing if this is a top-level process - # TRAINING_PARENT_WORK_DIR is set in handle_ddp_parent_process after hydra initialization - - main_func(*args, **kwargs) - return new_main - return main_decorator - - -def handle_ddp_parent_process(): - parent_cwd = os.environ.get('TRAINING_PARENT_WORK_DIR', None) - has_parent = parent_cwd is not None - has_rank = get_has_ddp_rank() - assert has_parent == has_rank, f'Inconsistent state: has_parent={has_parent}, has_rank={has_rank}' - - if parent_cwd is None: - os.environ['TRAINING_PARENT_WORK_DIR'] = os.getcwd() - - return has_parent diff --git a/spaces/CVPR/monoscene_lite/monoscene/unet3d_nyu.py b/spaces/CVPR/monoscene_lite/monoscene/unet3d_nyu.py deleted file mode 100644 index e9e3b3718999248efa1b2925658465ba59801b13..0000000000000000000000000000000000000000 --- a/spaces/CVPR/monoscene_lite/monoscene/unet3d_nyu.py +++ /dev/null @@ -1,90 +0,0 @@ -# encoding: utf-8 -import torch -import torch.nn as nn -import torch.nn.functional as F -import numpy as np -from monoscene.CRP3D import CPMegaVoxels -from monoscene.modules import ( - Process, - Upsample, - Downsample, - SegmentationHead, - ASPP, -) - - -class UNet3D(nn.Module): - def __init__( - self, - class_num, - norm_layer, - feature, - full_scene_size, - n_relations=4, - project_res=[], - context_prior=True, - bn_momentum=0.1, - ): - super(UNet3D, self).__init__() - self.business_layer = [] - self.project_res = project_res - - self.feature_1_4 = feature - self.feature_1_8 = feature * 2 - self.feature_1_16 = feature * 4 - - self.feature_1_16_dec = self.feature_1_16 - self.feature_1_8_dec = self.feature_1_8 - self.feature_1_4_dec = self.feature_1_4 - - self.process_1_4 = nn.Sequential( - Process(self.feature_1_4, norm_layer, bn_momentum, dilations=[1, 2, 3]), - Downsample(self.feature_1_4, norm_layer, bn_momentum), - ) - self.process_1_8 = nn.Sequential( - Process(self.feature_1_8, norm_layer, bn_momentum, dilations=[1, 2, 3]), - Downsample(self.feature_1_8, norm_layer, bn_momentum), - ) - self.up_1_16_1_8 = Upsample( - self.feature_1_16_dec, self.feature_1_8_dec, 
norm_layer, bn_momentum - ) - self.up_1_8_1_4 = Upsample( - self.feature_1_8_dec, self.feature_1_4_dec, norm_layer, bn_momentum - ) - self.ssc_head_1_4 = SegmentationHead( - self.feature_1_4_dec, self.feature_1_4_dec, class_num, [1, 2, 3] - ) - - self.context_prior = context_prior - size_1_16 = tuple(np.ceil(i / 4).astype(int) for i in full_scene_size) - - if context_prior: - self.CP_mega_voxels = CPMegaVoxels( - self.feature_1_16, - size_1_16, - n_relations=n_relations, - bn_momentum=bn_momentum, - ) - - # - def forward(self, input_dict): - res = {} - - x3d_1_4 = input_dict["x3d"] - x3d_1_8 = self.process_1_4(x3d_1_4) - x3d_1_16 = self.process_1_8(x3d_1_8) - - if self.context_prior: - ret = self.CP_mega_voxels(x3d_1_16) - x3d_1_16 = ret["x"] - for k in ret.keys(): - res[k] = ret[k] - - x3d_up_1_8 = self.up_1_16_1_8(x3d_1_16) + x3d_1_8 - x3d_up_1_4 = self.up_1_8_1_4(x3d_up_1_8) + x3d_1_4 - - ssc_logit_1_4 = self.ssc_head_1_4(x3d_up_1_4) - - res["ssc_logit"] = ssc_logit_1_4 - - return res diff --git a/spaces/ChandraMohanNayal/AutoGPT/benchmark/__init__.py b/spaces/ChandraMohanNayal/AutoGPT/benchmark/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/ChandraMohanNayal/AutoGPT/tests/test_prompt_generator.py b/spaces/ChandraMohanNayal/AutoGPT/tests/test_prompt_generator.py deleted file mode 100644 index 6a0bfd6c7bbdbfaa3750e9dee621bd25e17a448b..0000000000000000000000000000000000000000 --- a/spaces/ChandraMohanNayal/AutoGPT/tests/test_prompt_generator.py +++ /dev/null @@ -1,114 +0,0 @@ -from unittest import TestCase - -from autogpt.promptgenerator import PromptGenerator - - -class TestPromptGenerator(TestCase): - """ - Test cases for the PromptGenerator class, which is responsible for generating - prompts for the AI with constraints, commands, resources, and performance evaluations. - """ - - @classmethod - def setUpClass(cls): - """ - Set up the initial state for each test method by creating an instance of PromptGenerator. - """ - cls.generator = PromptGenerator() - - # Test whether the add_constraint() method adds a constraint to the generator's constraints list - def test_add_constraint(self): - """ - Test if the add_constraint() method adds a constraint to the generator's constraints list. - """ - constraint = "Constraint1" - self.generator.add_constraint(constraint) - self.assertIn(constraint, self.generator.constraints) - - # Test whether the add_command() method adds a command to the generator's commands list - def test_add_command(self): - """ - Test if the add_command() method adds a command to the generator's commands list. - """ - command_label = "Command Label" - command_name = "command_name" - args = {"arg1": "value1", "arg2": "value2"} - self.generator.add_command(command_label, command_name, args) - command = { - "label": command_label, - "name": command_name, - "args": args, - } - self.assertIn(command, self.generator.commands) - - def test_add_resource(self): - """ - Test if the add_resource() method adds a resource to the generator's resources list. - """ - resource = "Resource1" - self.generator.add_resource(resource) - self.assertIn(resource, self.generator.resources) - - def test_add_performance_evaluation(self): - """ - Test if the add_performance_evaluation() method adds an evaluation to the generator's - performance_evaluation list. 
- """ - evaluation = "Evaluation1" - self.generator.add_performance_evaluation(evaluation) - self.assertIn(evaluation, self.generator.performance_evaluation) - - def test_generate_prompt_string(self): - """ - Test if the generate_prompt_string() method generates a prompt string with all the added - constraints, commands, resources, and evaluations. - """ - # Define the test data - constraints = ["Constraint1", "Constraint2"] - commands = [ - { - "label": "Command1", - "name": "command_name1", - "args": {"arg1": "value1"}, - }, - { - "label": "Command2", - "name": "command_name2", - "args": {}, - }, - ] - resources = ["Resource1", "Resource2"] - evaluations = ["Evaluation1", "Evaluation2"] - - # Add test data to the generator - for constraint in constraints: - self.generator.add_constraint(constraint) - for command in commands: - self.generator.add_command( - command["label"], command["name"], command["args"] - ) - for resource in resources: - self.generator.add_resource(resource) - for evaluation in evaluations: - self.generator.add_performance_evaluation(evaluation) - - # Generate the prompt string and verify its correctness - prompt_string = self.generator.generate_prompt_string() - self.assertIsNotNone(prompt_string) - - # Check if all constraints, commands, resources, and evaluations are present in the prompt string - for constraint in constraints: - self.assertIn(constraint, prompt_string) - for command in commands: - self.assertIn(command["name"], prompt_string) - for key, value in command["args"].items(): - self.assertIn(f'"{key}": "{value}"', prompt_string) - for resource in resources: - self.assertIn(resource, prompt_string) - for evaluation in evaluations: - self.assertIn(evaluation, prompt_string) - - self.assertIn("constraints", prompt_string.lower()) - self.assertIn("commands", prompt_string.lower()) - self.assertIn("resources", prompt_string.lower()) - self.assertIn("performance evaluation", prompt_string.lower()) diff --git a/spaces/Chomkwoy/Nilkessye/cpool_new/src/bottom_pool.cpp b/spaces/Chomkwoy/Nilkessye/cpool_new/src/bottom_pool.cpp deleted file mode 100644 index 607d6366b063f4e59195085be5718cf6e22974c4..0000000000000000000000000000000000000000 --- a/spaces/Chomkwoy/Nilkessye/cpool_new/src/bottom_pool.cpp +++ /dev/null @@ -1,90 +0,0 @@ -#include - -#include - -std::vector pool_forward( - torch::Tensor input -) { - // Initialize output - torch::Tensor output = torch::zeros_like(input); - - // Get height - int64_t height = input.size(2); - - // Copy the last column - torch::Tensor input_temp = input.select(2, 0); - torch::Tensor output_temp = output.select(2, 0); - output_temp.copy_(input_temp); - - torch::Tensor max_temp; - for (int64_t ind = 0; ind < height - 1; ++ind) { - input_temp = input.select(2, ind + 1); - output_temp = output.select(2, ind); - max_temp = output.select(2, ind + 1); - - torch::max_out(max_temp, input_temp, output_temp); - } - - return { - output - }; -} - -std::vector pool_backward( - torch::Tensor input, - torch::Tensor grad_output -) { - auto output = torch::zeros_like(input); - - int32_t batch = input.size(0); - int32_t channel = input.size(1); - int32_t height = input.size(2); - int32_t width = input.size(3); - - // auto max_val = torch::zeros(torch::CUDA(torch::kFloat), {batch, channel, width}); - // auto max_ind = torch::zeros(torch::CUDA(torch::kLong), {batch, channel, width}); - auto max_val = torch::zeros({batch, channel, width}, torch::TensorOptions().dtype(torch::kFloat).device(torch::kCUDA)); - auto max_ind = torch::zeros({batch, 
channel, width}, torch::TensorOptions().dtype(torch::kLong).device(torch::kCUDA)); - - auto input_temp = input.select(2, 0); - max_val.copy_(input_temp); - - max_ind.fill_(0); - - auto output_temp = output.select(2, 0); - auto grad_output_temp = grad_output.select(2, 0); - output_temp.copy_(grad_output_temp); - - auto un_max_ind = max_ind.unsqueeze(2); - // auto gt_mask = torch::zeros(torch::CUDA(torch::kByte), {batch, channel, width}); - // auto max_temp = torch::zeros(torch::CUDA(torch::kFloat), {batch, channel, width}); - auto gt_mask = torch::zeros({batch, channel, width}, torch::TensorOptions().dtype(torch::kByte).device(torch::kCUDA)); - auto max_temp = torch::zeros({batch, channel, width}, torch::TensorOptions().dtype(torch::kFloat).device(torch::kCUDA)); - - for (int32_t ind = 0; ind < height - 1; ++ind) { - input_temp = input.select(2, ind + 1); - torch::gt_out(gt_mask, input_temp, max_val); - - torch::masked_select_out(max_temp, input_temp, gt_mask); - max_val.masked_scatter_(gt_mask, max_temp); - max_ind.masked_fill_(gt_mask, ind + 1); - - grad_output_temp = grad_output.select(2, ind + 1).unsqueeze(2); - output.scatter_add_(2, un_max_ind, grad_output_temp); - } - - return { - output - }; -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def( - "forward", &pool_forward, "Bottom Pool Forward", - py::call_guard() - ); - m.def( - "backward", &pool_backward, "Bottom Pool Backward", - py::call_guard() - ); -} diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/CHANGELOG.md b/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/CHANGELOG.md deleted file mode 100644 index 4fbfdfef0354dc77f132f7224be3b60b2f8facae..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/CHANGELOG.md +++ /dev/null @@ -1,295 +0,0 @@ -# 0.5.15 - -* RedProtocol 尝试适配喵崽 - -# 0.5.14 - -* `#ws重新连接` 改为指定 - -# 0.5.13 - -* red 非win系统可以使用`#ws设置red转发2`使发送伪造消息为直接发送 - -# 0.5.12 - -* 修改red连接过程 -* onebot增加快速操作api -* 尝试存储trss icqq插件发送的消息 - -# 0.5.11 - -* Chronocat 0.0.48 支持私聊导入抽卡记录等 - -# 0.5.10 - -* Chronocat响应群临时消息 - -# 0.5.9 - -* 将消息存储到数据库,默认存储7天 - * 仅支持 Miao-Yunzai & TRSS-Yunzai, Yunzai-Bot还是存储在redis -* Chronocat 0.0.47之前的版本获取群历史消息没有user_id,可能会导致获取失败,建议更新至0.0.47 - -# 0.5.8 - -* 增加一个设置项`#ws设置禁言拦截 开启/关闭`,详细请查看`#ws设置` - -# 0.5.7 - -* 支持正向http 反向http -* 部分细节优化 - -# 0.5.6 - -* 正向ws连接支持/api /event接口 - -# 0.5.5 - -* QQNT 转发消息发送者改为固定 -* 可能适配了ReadStream的file - -# 0.5.4 - -* Chronocat 0.0.43 发送转发消息有message_id返回了 - -# 0.5.3 - -* 支持了TRSS-Yunzai 的 ICQQ-Plugin - * 目前 TRSS-Yunzai 的可用协议为 - * GenshinUID Core 全部协议均可使用 - * onebot 仅支持 QQNT 以及 ICQQ - -# 0.5.2 - -* QQNT 拦截配置项中没有uin的连接 - * uin为需要连接的账号 -* 优化添加连接之后的提示词 - -# 0.5.1 - -* QQNT 将嵌套转发消息改成不嵌套 - * 暂时不支持嵌套转发 - -# 0.5.0 - -* 重构一下QQNT相关的代码 -* 优化一下onebotApi相关的代码 - * 之后可能会用得上 -* QQNT支持正向和反向ws连接,需要对QQNT连接的账号发送#ws添加连接 -* QQNT增加群禁言,全体禁言,踢群 - * 如果QQNT不在前台且焦点不在指定的群,获取的群成员列表可能为空 - * 如果为空的话则只会获取启动机器人之后在群内发送过消息的人 -* QQNT修复使用Chronocat0.0.42时的某些bug -* QQNT增加请求Api超时输出日志 - -# 0.4.20 - -* QQNT 简单支持一下文字的转发消息,(图片会把消息发给自己然后撤回取直链) - * 需要更新Chronocat 0.0.40 - * 需要64位QQNT,如果不是64位不能伪造转发消息 - * 机器人所在的QQNT看不了发送的转发消息 - * 发送转发消息之前至少要有一条消息 - * 转发消息目前不能主动撤回 - -# 0.4.19 - -* QQNT 增加获取群历史消息 - * 因为回复消息没有msgId,所以将seq存在redis获取对应的msgId,默认存储10分钟 -* QQNT 优化使用TRSS脚本自动获取Token - -# 0.4.18 - -* QQNT 增加发送视频 需要ffmpeg - -# 0.4.17 - -* QQNT 增加发送文件 - -# 0.4.16 - -* QQNT 发送语音修改为使用ffmpeg,请自行配置ffmpeg,否则可能无法发送语音 - * 更新后需要重新安装依赖 pnpm install --filter=ws-plugin - -# 0.4.15 - -* QQNT 优化进群通知,增加禁言通知 - -# 0.4.14 - -* 增加 TRSS-Yunzai 连接 gsuid_core -* 优化CQ:node - -# 0.4.13 - -* QQNT 增加发送回复消息 - * 回复会默认带一个 
`@` - -# 0.4.12 - -* QQNT 若不填Token则自动获取Token - -# 0.4.11 - -* QQNT 增加发送语音,小表情 - * 语音只有手机上能听 - -# 0.4.10 - -* QQNT 增加入群通知 - -# 0.4.9 - -* QQNT 增加主动撤回消息 - -# 0.4.8 - -* 优化`#ws添加连接` - -# 0.4.7 - -* 增加 TRSS-Yunzai 连接 QQNT - -# 0.4.6 - -* 增加白名单 - -# 0.4.5 - -* 优化代码 - -# 0.4.4 - -* 增加api - * get_essence_msg_list 获取精华消息列表 - * _get_group_notice 获取群公告 - -# 0.4.3 - -* 增加匿名以及群匿名用户禁言API - -# 0.4.2 - -* 增加设置项`#ws设置全部 开启/关闭` 一键操作所有设置项 - -# 0.4.1 - -* 优化存储消息,其他插件使用e.reply时也能存储,防止get_msg报错 - -# 0.4.0 - -* 适配锅巴 -* 增加大部分api -* 增加请求上报 -* 增加`#ws更新日志` - -# 0.3.12 - -* 修复更换为Bot.on之后仅at不生效的问题 -* 增加可单独禁用群聊 - * `#ws禁用群123456` 不带群号则默认为当前群 - * `#ws启用群123456` 不带群号则默认为当前群 - * `#ws查看禁用群` - * 详细可查看`#ws帮助` - -# 0.3.11 - -* 新增CQ码[CQ:music] 音乐自定义分享 Copyright xiaofei-plugin - -# 0.3.10 - -* 修改message_id存储于redis -* 增加设置项`#ws设置存储600` -* 增加对message_id为null的判断 -* 获取消息仅针对`用户发送`以及`ws-plugin插件发送`,如果是其他插件发送的消息会获取不到 - -# 0.3.9 - -* 修改为Bot.on -* 删除设置项`#ws设置优先级` - -# 0.3.8 - -* 修复前缀判断时错误的匹配e.msg的问题 -* 反向ws连接添加缺少的请求头X-Client-Role - -# 0.3.7 - -* 增加api - * get_group_root_files 获取群根目录文件列表 - * get_group_files_by_folder 获取群子目录文件列表 - * get_group_file_url 获取群文件资源链接 - -# 0.3.6 - -* 增加CQ码[CQ:record] 语音 - -# 0.3.5 - -* 增加发送消息成功后的日志输出 -* 适配根据歌曲id分享音乐 - -# 0.3.4 - -* 可能修复了连接失败时关闭和删除连接无效的bug -* 增加一个设置项`#ws设置优先级1` 设置完重启后生效 -* 增加了几个bug - -# 0.3.3 - -* 增加CQ码[CQ:face] QQ表情 - -# 0.3.2 - -* 重新开放一下正向ws连接 - -# 0.3.1 - -* 增加指令`#ws帮助` Copyright miao-plugin - -# 0.3.0 - -* 增加指令`#ws关闭连接``#ws打开连接``#ws查看连接` - *`#ws关闭连接` 不会删除已有连接,同时不进行连接 - *`#ws打开连接` 打开已关闭的连接 - *`#ws查看连接` 查看已有的所有连接名字和状态 - *`#ws添加连接` 添加一个新的连接 - *`#ws删除连接` 删除一个已有的连接 - *`#ws重新连接` 强制断开已有的所有连接并重新连接 -* 暂时关闭正向ws连接 - -# 0.2.0 - -* 增加通知事件上报,默认关闭,需要可自行使用`#ws设置`进行开启 - * 增加以下通知事件 - * 群管理员变动,群成员减少,群成员增加 - * 群禁言,好友添加,群消息撤回 - * 好友消息撤回,群内戳一戳 - -# 0.1.0 - -* 增加指令`#ws版本``#ws设置` Copyright miao-plugin - -# 0.0.5 - -* 增加指令`#ws重新连接` -* 增加首次连接时将结果通知主人设置 - -# 0.0.4 - -* 增加断线自动重新连接 -* 增加断线和重连通知主人设置 - -# 0.0.3 - -* 适配gsuid群聊导出抽卡记录和私聊发送抽卡记录json文件 - -# 0.0.2 - -* 增加指令`#ws添加连接``#ws删除连接` - -# 0.0.1 - -* 初始化插件 -* 可连接支持onebotv11协议的bot以及gsuid_core -* 适配了部分onebot api - diff --git a/spaces/CofAI/chat.b4/client/css/main.css b/spaces/CofAI/chat.b4/client/css/main.css deleted file mode 100644 index ec1f1dd80247747912e1976413a1e3897f1308db..0000000000000000000000000000000000000000 --- a/spaces/CofAI/chat.b4/client/css/main.css +++ /dev/null @@ -1,14 +0,0 @@ -.main-container { - display: flex; - padding: var(--section-gap); - height: 100vh; - justify-content: center; - box-sizing: border-box; -} - -@media screen and (max-width: 360px) { - .main-container { - padding: 0px; - height: 90vh; - } -} \ No newline at end of file diff --git a/spaces/Coweed/BadTrip/README.md b/spaces/Coweed/BadTrip/README.md deleted file mode 100644 index 6cf0b15ad1a17a3bae11beefcea69b4a42eedaa7..0000000000000000000000000000000000000000 --- a/spaces/Coweed/BadTrip/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: BadTrip -emoji: 👁 -colorFrom: purple -colorTo: green -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/attr/exceptions.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/attr/exceptions.py deleted file mode 100644 index 2883493085143c64c95d60a249e5444f8840cbc6..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/attr/exceptions.py +++ /dev/null @@ -1,91 +0,0 @@ -# SPDX-License-Identifier: MIT - - -class FrozenError(AttributeError): - 
""" - A frozen/immutable instance or attribute have been attempted to be - modified. - - It mirrors the behavior of ``namedtuples`` by using the same error message - and subclassing `AttributeError`. - - .. versionadded:: 20.1.0 - """ - - msg = "can't set attribute" - args = [msg] - - -class FrozenInstanceError(FrozenError): - """ - A frozen instance has been attempted to be modified. - - .. versionadded:: 16.1.0 - """ - - -class FrozenAttributeError(FrozenError): - """ - A frozen attribute has been attempted to be modified. - - .. versionadded:: 20.1.0 - """ - - -class AttrsAttributeNotFoundError(ValueError): - """ - An *attrs* function couldn't find an attribute that the user asked for. - - .. versionadded:: 16.2.0 - """ - - -class NotAnAttrsClassError(ValueError): - """ - A non-*attrs* class has been passed into an *attrs* function. - - .. versionadded:: 16.2.0 - """ - - -class DefaultAlreadySetError(RuntimeError): - """ - A default has been set when defining the field and is attempted to be reset - using the decorator. - - .. versionadded:: 17.1.0 - """ - - -class UnannotatedAttributeError(RuntimeError): - """ - A class with ``auto_attribs=True`` has a field without a type annotation. - - .. versionadded:: 17.3.0 - """ - - -class PythonTooOldError(RuntimeError): - """ - It was attempted to use an *attrs* feature that requires a newer Python - version. - - .. versionadded:: 18.2.0 - """ - - -class NotCallableError(TypeError): - """ - A field requiring a callable has been set with a value that is not - callable. - - .. versionadded:: 19.2.0 - """ - - def __init__(self, msg, value): - super(TypeError, self).__init__(msg, value) - self.msg = msg - self.value = value - - def __str__(self): - return str(self.msg) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/cu2qu/cu2qu.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/cu2qu/cu2qu.py deleted file mode 100644 index e620b48a55bd0ce720a34c309d295839edabe5aa..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/cu2qu/cu2qu.py +++ /dev/null @@ -1,534 +0,0 @@ -# cython: language_level=3 -# distutils: define_macros=CYTHON_TRACE_NOGIL=1 - -# Copyright 2015 Google Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -try: - import cython - - COMPILED = cython.compiled -except (AttributeError, ImportError): - # if cython not installed, use mock module with no-op decorators and types - from fontTools.misc import cython - - COMPILED = False - -import math - -from .errors import Error as Cu2QuError, ApproxNotFoundError - - -__all__ = ["curve_to_quadratic", "curves_to_quadratic"] - -MAX_N = 100 - -NAN = float("NaN") - - -@cython.cfunc -@cython.inline -@cython.returns(cython.double) -@cython.locals(v1=cython.complex, v2=cython.complex) -def dot(v1, v2): - """Return the dot product of two vectors. - - Args: - v1 (complex): First vector. - v2 (complex): Second vector. - - Returns: - double: Dot product. 
- """ - return (v1 * v2.conjugate()).real - - -@cython.cfunc -@cython.inline -@cython.locals(a=cython.complex, b=cython.complex, c=cython.complex, d=cython.complex) -@cython.locals( - _1=cython.complex, _2=cython.complex, _3=cython.complex, _4=cython.complex -) -def calc_cubic_points(a, b, c, d): - _1 = d - _2 = (c / 3.0) + d - _3 = (b + c) / 3.0 + _2 - _4 = a + d + c + b - return _1, _2, _3, _4 - - -@cython.cfunc -@cython.inline -@cython.locals( - p0=cython.complex, p1=cython.complex, p2=cython.complex, p3=cython.complex -) -@cython.locals(a=cython.complex, b=cython.complex, c=cython.complex, d=cython.complex) -def calc_cubic_parameters(p0, p1, p2, p3): - c = (p1 - p0) * 3.0 - b = (p2 - p1) * 3.0 - c - d = p0 - a = p3 - d - c - b - return a, b, c, d - - -@cython.cfunc -@cython.inline -@cython.locals( - p0=cython.complex, p1=cython.complex, p2=cython.complex, p3=cython.complex -) -def split_cubic_into_n_iter(p0, p1, p2, p3, n): - """Split a cubic Bezier into n equal parts. - - Splits the curve into `n` equal parts by curve time. - (t=0..1/n, t=1/n..2/n, ...) - - Args: - p0 (complex): Start point of curve. - p1 (complex): First handle of curve. - p2 (complex): Second handle of curve. - p3 (complex): End point of curve. - - Returns: - An iterator yielding the control points (four complex values) of the - subcurves. - """ - # Hand-coded special-cases - if n == 2: - return iter(split_cubic_into_two(p0, p1, p2, p3)) - if n == 3: - return iter(split_cubic_into_three(p0, p1, p2, p3)) - if n == 4: - a, b = split_cubic_into_two(p0, p1, p2, p3) - return iter( - split_cubic_into_two(a[0], a[1], a[2], a[3]) - + split_cubic_into_two(b[0], b[1], b[2], b[3]) - ) - if n == 6: - a, b = split_cubic_into_two(p0, p1, p2, p3) - return iter( - split_cubic_into_three(a[0], a[1], a[2], a[3]) - + split_cubic_into_three(b[0], b[1], b[2], b[3]) - ) - - return _split_cubic_into_n_gen(p0, p1, p2, p3, n) - - -@cython.locals( - p0=cython.complex, - p1=cython.complex, - p2=cython.complex, - p3=cython.complex, - n=cython.int, -) -@cython.locals(a=cython.complex, b=cython.complex, c=cython.complex, d=cython.complex) -@cython.locals( - dt=cython.double, delta_2=cython.double, delta_3=cython.double, i=cython.int -) -@cython.locals( - a1=cython.complex, b1=cython.complex, c1=cython.complex, d1=cython.complex -) -def _split_cubic_into_n_gen(p0, p1, p2, p3, n): - a, b, c, d = calc_cubic_parameters(p0, p1, p2, p3) - dt = 1 / n - delta_2 = dt * dt - delta_3 = dt * delta_2 - for i in range(n): - t1 = i * dt - t1_2 = t1 * t1 - # calc new a, b, c and d - a1 = a * delta_3 - b1 = (3 * a * t1 + b) * delta_2 - c1 = (2 * b * t1 + c + 3 * a * t1_2) * dt - d1 = a * t1 * t1_2 + b * t1_2 + c * t1 + d - yield calc_cubic_points(a1, b1, c1, d1) - - -@cython.cfunc -@cython.inline -@cython.locals( - p0=cython.complex, p1=cython.complex, p2=cython.complex, p3=cython.complex -) -@cython.locals(mid=cython.complex, deriv3=cython.complex) -def split_cubic_into_two(p0, p1, p2, p3): - """Split a cubic Bezier into two equal parts. - - Splits the curve into two equal parts at t = 0.5 - - Args: - p0 (complex): Start point of curve. - p1 (complex): First handle of curve. - p2 (complex): Second handle of curve. - p3 (complex): End point of curve. - - Returns: - tuple: Two cubic Beziers (each expressed as a tuple of four complex - values). 
- """ - mid = (p0 + 3 * (p1 + p2) + p3) * 0.125 - deriv3 = (p3 + p2 - p1 - p0) * 0.125 - return ( - (p0, (p0 + p1) * 0.5, mid - deriv3, mid), - (mid, mid + deriv3, (p2 + p3) * 0.5, p3), - ) - - -@cython.cfunc -@cython.inline -@cython.locals( - p0=cython.complex, - p1=cython.complex, - p2=cython.complex, - p3=cython.complex, -) -@cython.locals( - mid1=cython.complex, - deriv1=cython.complex, - mid2=cython.complex, - deriv2=cython.complex, -) -def split_cubic_into_three(p0, p1, p2, p3): - """Split a cubic Bezier into three equal parts. - - Splits the curve into three equal parts at t = 1/3 and t = 2/3 - - Args: - p0 (complex): Start point of curve. - p1 (complex): First handle of curve. - p2 (complex): Second handle of curve. - p3 (complex): End point of curve. - - Returns: - tuple: Three cubic Beziers (each expressed as a tuple of four complex - values). - """ - mid1 = (8 * p0 + 12 * p1 + 6 * p2 + p3) * (1 / 27) - deriv1 = (p3 + 3 * p2 - 4 * p0) * (1 / 27) - mid2 = (p0 + 6 * p1 + 12 * p2 + 8 * p3) * (1 / 27) - deriv2 = (4 * p3 - 3 * p1 - p0) * (1 / 27) - return ( - (p0, (2 * p0 + p1) / 3.0, mid1 - deriv1, mid1), - (mid1, mid1 + deriv1, mid2 - deriv2, mid2), - (mid2, mid2 + deriv2, (p2 + 2 * p3) / 3.0, p3), - ) - - -@cython.cfunc -@cython.inline -@cython.returns(cython.complex) -@cython.locals( - t=cython.double, - p0=cython.complex, - p1=cython.complex, - p2=cython.complex, - p3=cython.complex, -) -@cython.locals(_p1=cython.complex, _p2=cython.complex) -def cubic_approx_control(t, p0, p1, p2, p3): - """Approximate a cubic Bezier using a quadratic one. - - Args: - t (double): Position of control point. - p0 (complex): Start point of curve. - p1 (complex): First handle of curve. - p2 (complex): Second handle of curve. - p3 (complex): End point of curve. - - Returns: - complex: Location of candidate control point on quadratic curve. - """ - _p1 = p0 + (p1 - p0) * 1.5 - _p2 = p3 + (p2 - p3) * 1.5 - return _p1 + (_p2 - _p1) * t - - -@cython.cfunc -@cython.inline -@cython.returns(cython.complex) -@cython.locals(a=cython.complex, b=cython.complex, c=cython.complex, d=cython.complex) -@cython.locals(ab=cython.complex, cd=cython.complex, p=cython.complex, h=cython.double) -def calc_intersect(a, b, c, d): - """Calculate the intersection of two lines. - - Args: - a (complex): Start point of first line. - b (complex): End point of first line. - c (complex): Start point of second line. - d (complex): End point of second line. - - Returns: - complex: Location of intersection if one present, ``complex(NaN,NaN)`` - if no intersection was found. - """ - ab = b - a - cd = d - c - p = ab * 1j - try: - h = dot(p, a - c) / dot(p, cd) - except ZeroDivisionError: - return complex(NAN, NAN) - return c + cd * h - - -@cython.cfunc -@cython.returns(cython.int) -@cython.locals( - tolerance=cython.double, - p0=cython.complex, - p1=cython.complex, - p2=cython.complex, - p3=cython.complex, -) -@cython.locals(mid=cython.complex, deriv3=cython.complex) -def cubic_farthest_fit_inside(p0, p1, p2, p3, tolerance): - """Check if a cubic Bezier lies within a given distance of the origin. - - "Origin" means *the* origin (0,0), not the start of the curve. Note that no - checks are made on the start and end positions of the curve; this function - only checks the inside of the curve. - - Args: - p0 (complex): Start point of curve. - p1 (complex): First handle of curve. - p2 (complex): Second handle of curve. - p3 (complex): End point of curve. - tolerance (double): Distance from origin. 
- - Returns: - bool: True if the cubic Bezier ``p`` entirely lies within a distance - ``tolerance`` of the origin, False otherwise. - """ - # First check p2 then p1, as p2 has higher error early on. - if abs(p2) <= tolerance and abs(p1) <= tolerance: - return True - - # Split. - mid = (p0 + 3 * (p1 + p2) + p3) * 0.125 - if abs(mid) > tolerance: - return False - deriv3 = (p3 + p2 - p1 - p0) * 0.125 - return cubic_farthest_fit_inside( - p0, (p0 + p1) * 0.5, mid - deriv3, mid, tolerance - ) and cubic_farthest_fit_inside(mid, mid + deriv3, (p2 + p3) * 0.5, p3, tolerance) - - -@cython.cfunc -@cython.inline -@cython.locals(tolerance=cython.double) -@cython.locals( - q1=cython.complex, - c0=cython.complex, - c1=cython.complex, - c2=cython.complex, - c3=cython.complex, -) -def cubic_approx_quadratic(cubic, tolerance): - """Approximate a cubic Bezier with a single quadratic within a given tolerance. - - Args: - cubic (sequence): Four complex numbers representing control points of - the cubic Bezier curve. - tolerance (double): Permitted deviation from the original curve. - - Returns: - Three complex numbers representing control points of the quadratic - curve if it fits within the given tolerance, or ``None`` if no suitable - curve could be calculated. - """ - - q1 = calc_intersect(cubic[0], cubic[1], cubic[2], cubic[3]) - if math.isnan(q1.imag): - return None - c0 = cubic[0] - c3 = cubic[3] - c1 = c0 + (q1 - c0) * (2 / 3) - c2 = c3 + (q1 - c3) * (2 / 3) - if not cubic_farthest_fit_inside(0, c1 - cubic[1], c2 - cubic[2], 0, tolerance): - return None - return c0, q1, c3 - - -@cython.cfunc -@cython.locals(n=cython.int, tolerance=cython.double) -@cython.locals(i=cython.int) -@cython.locals(all_quadratic=cython.int) -@cython.locals( - c0=cython.complex, c1=cython.complex, c2=cython.complex, c3=cython.complex -) -@cython.locals( - q0=cython.complex, - q1=cython.complex, - next_q1=cython.complex, - q2=cython.complex, - d1=cython.complex, -) -def cubic_approx_spline(cubic, n, tolerance, all_quadratic): - """Approximate a cubic Bezier curve with a spline of n quadratics. - - Args: - cubic (sequence): Four complex numbers representing control points of - the cubic Bezier curve. - n (int): Number of quadratic Bezier curves in the spline. - tolerance (double): Permitted deviation from the original curve. - - Returns: - A list of ``n+2`` complex numbers, representing control points of the - quadratic spline if it fits within the given tolerance, or ``None`` if - no suitable spline could be calculated. - """ - - if n == 1: - return cubic_approx_quadratic(cubic, tolerance) - if n == 2 and all_quadratic == False: - return cubic - - cubics = split_cubic_into_n_iter(cubic[0], cubic[1], cubic[2], cubic[3], n) - - # calculate the spline of quadratics and check errors at the same time. 
- next_cubic = next(cubics) - next_q1 = cubic_approx_control( - 0, next_cubic[0], next_cubic[1], next_cubic[2], next_cubic[3] - ) - q2 = cubic[0] - d1 = 0j - spline = [cubic[0], next_q1] - for i in range(1, n + 1): - # Current cubic to convert - c0, c1, c2, c3 = next_cubic - - # Current quadratic approximation of current cubic - q0 = q2 - q1 = next_q1 - if i < n: - next_cubic = next(cubics) - next_q1 = cubic_approx_control( - i / (n - 1), next_cubic[0], next_cubic[1], next_cubic[2], next_cubic[3] - ) - spline.append(next_q1) - q2 = (q1 + next_q1) * 0.5 - else: - q2 = c3 - - # End-point deltas - d0 = d1 - d1 = q2 - c3 - - if abs(d1) > tolerance or not cubic_farthest_fit_inside( - d0, - q0 + (q1 - q0) * (2 / 3) - c1, - q2 + (q1 - q2) * (2 / 3) - c2, - d1, - tolerance, - ): - return None - spline.append(cubic[3]) - - return spline - - -@cython.locals(max_err=cython.double) -@cython.locals(n=cython.int) -@cython.locals(all_quadratic=cython.int) -def curve_to_quadratic(curve, max_err, all_quadratic=True): - """Approximate a cubic Bezier curve with a spline of n quadratics. - - Args: - cubic (sequence): Four 2D tuples representing control points of - the cubic Bezier curve. - max_err (double): Permitted deviation from the original curve. - all_quadratic (bool): If True (default) returned value is a - quadratic spline. If False, it's either a single quadratic - curve or a single cubic curve. - - Returns: - If all_quadratic is True: A list of 2D tuples, representing - control points of the quadratic spline if it fits within the - given tolerance, or ``None`` if no suitable spline could be - calculated. - - If all_quadratic is False: Either a quadratic curve (if length - of output is 3), or a cubic curve (if length of output is 4). - """ - - curve = [complex(*p) for p in curve] - - for n in range(1, MAX_N + 1): - spline = cubic_approx_spline(curve, n, max_err, all_quadratic) - if spline is not None: - # done. go home - return [(s.real, s.imag) for s in spline] - - raise ApproxNotFoundError(curve) - - -@cython.locals(l=cython.int, last_i=cython.int, i=cython.int) -@cython.locals(all_quadratic=cython.int) -def curves_to_quadratic(curves, max_errors, all_quadratic=True): - """Return quadratic Bezier splines approximating the input cubic Beziers. - - Args: - curves: A sequence of *n* curves, each curve being a sequence of four - 2D tuples. - max_errors: A sequence of *n* floats representing the maximum permissible - deviation from each of the cubic Bezier curves. - all_quadratic (bool): If True (default) returned values are a - quadratic spline. If False, they are either a single quadratic - curve or a single cubic curve. - - Example:: - - >>> curves_to_quadratic( [ - ... [ (50,50), (100,100), (150,100), (200,50) ], - ... [ (75,50), (120,100), (150,75), (200,60) ] - ... ], [1,1] ) - [[(50.0, 50.0), (75.0, 75.0), (125.0, 91.66666666666666), (175.0, 75.0), (200.0, 50.0)], [(75.0, 50.0), (97.5, 75.0), (135.41666666666666, 82.08333333333333), (175.0, 67.5), (200.0, 60.0)]] - - The returned splines have "implied oncurve points" suitable for use in - TrueType ``glif`` outlines - i.e. in the first spline returned above, - the first quadratic segment runs from (50,50) to - ( (75 + 125)/2 , (120 + 91.666..)/2 ) = (100, 83.333...). - - Returns: - If all_quadratic is True, a list of splines, each spline being a list - of 2D tuples. - - If all_quadratic is False, a list of curves, each curve being a quadratic - (length 3), or cubic (length 4). 
- - Raises: - fontTools.cu2qu.Errors.ApproxNotFoundError: if no suitable approximation - can be found for all curves with the given parameters. - """ - - curves = [[complex(*p) for p in curve] for curve in curves] - assert len(max_errors) == len(curves) - - l = len(curves) - splines = [None] * l - last_i = i = 0 - n = 1 - while True: - spline = cubic_approx_spline(curves[i], n, max_errors[i], all_quadratic) - if spline is None: - if n == MAX_N: - break - n += 1 - last_i = i - continue - splines[i] = spline - i = (i + 1) % l - if i == last_i: - # done. go home - return [[(s.real, s.imag) for s in spline] for spline in splines] - - raise ApproxNotFoundError(curves) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/G__l_o_c.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/G__l_o_c.py deleted file mode 100644 index 7973b9be911d450f2504e83704705c9bb8e4b810..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/G__l_o_c.py +++ /dev/null @@ -1,84 +0,0 @@ -from fontTools.misc import sstruct -from fontTools.misc.textTools import safeEval -from . import DefaultTable -import array -import sys - - -Gloc_header = """ - > # big endian - version: 16.16F # Table version - flags: H # bit 0: 1=long format, 0=short format - # bit 1: 1=attribute names, 0=no names - numAttribs: H # NUmber of attributes -""" - - -class table_G__l_o_c(DefaultTable.DefaultTable): - """ - Support Graphite Gloc tables - """ - - dependencies = ["Glat"] - - def __init__(self, tag=None): - DefaultTable.DefaultTable.__init__(self, tag) - self.attribIds = None - self.numAttribs = 0 - - def decompile(self, data, ttFont): - _, data = sstruct.unpack2(Gloc_header, data, self) - flags = self.flags - del self.flags - self.locations = array.array("I" if flags & 1 else "H") - self.locations.frombytes(data[: len(data) - self.numAttribs * (flags & 2)]) - if sys.byteorder != "big": - self.locations.byteswap() - self.attribIds = array.array("H") - if flags & 2: - self.attribIds.frombytes(data[-self.numAttribs * 2 :]) - if sys.byteorder != "big": - self.attribIds.byteswap() - - def compile(self, ttFont): - data = sstruct.pack( - Gloc_header, - dict( - version=1.0, - flags=(bool(self.attribIds) << 1) + (self.locations.typecode == "I"), - numAttribs=self.numAttribs, - ), - ) - if sys.byteorder != "big": - self.locations.byteswap() - data += self.locations.tobytes() - if sys.byteorder != "big": - self.locations.byteswap() - if self.attribIds: - if sys.byteorder != "big": - self.attribIds.byteswap() - data += self.attribIds.tobytes() - if sys.byteorder != "big": - self.attribIds.byteswap() - return data - - def set(self, locations): - long_format = max(locations) >= 65536 - self.locations = array.array("I" if long_format else "H", locations) - - def toXML(self, writer, ttFont): - writer.simpletag("attributes", number=self.numAttribs) - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if name == "attributes": - self.numAttribs = int(safeEval(attrs["number"])) - - def __getitem__(self, index): - return self.locations[index] - - def __len__(self): - return len(self.locations) - - def __iter__(self): - return iter(self.locations) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-8dee978a.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-8dee978a.js deleted file mode 100644 
index 8678b06f01a5441601d37b931c6d04dc10386908..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-8dee978a.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as c,e as m,s as v,a9 as b,N as f,K as r,L as o,U as d,p as g,M as p,ab as h,ac as w,ad as y,z as G,v as j,A as k}from"./index-3370be2a.js";function C(n){let s,l,u,i;const _=n[4].default,a=b(_,n,n[3],null);return{c(){s=f("div"),l=f("div"),a&&a.c(),r(l,"class","styler svelte-iyf88w"),o(l,"--block-radius","0px"),o(l,"--block-border-width","0px"),o(l,"--layout-gap","1px"),o(l,"--form-gap-width","1px"),o(l,"--button-border-width","0px"),o(l,"--button-large-radius","0px"),o(l,"--button-small-radius","0px"),r(s,"id",n[0]),r(s,"class",u="gr-group "+n[1].join(" ")+" svelte-iyf88w"),d(s,"hide",!n[2])},m(e,t){g(e,s,t),p(s,l),a&&a.m(l,null),i=!0},p(e,[t]){a&&a.p&&(!i||t&8)&&h(a,_,e,e[3],i?y(_,e[3],t,null):w(e[3]),null),(!i||t&1)&&r(s,"id",e[0]),(!i||t&2&&u!==(u="gr-group "+e[1].join(" ")+" svelte-iyf88w"))&&r(s,"class",u),(!i||t&6)&&d(s,"hide",!e[2])},i(e){i||(G(a,e),i=!0)},o(e){j(a,e),i=!1},d(e){e&&k(s),a&&a.d(e)}}}function S(n,s,l){let{$$slots:u={},$$scope:i}=s,{elem_id:_=""}=s,{elem_classes:a=[]}=s,{visible:e=!0}=s;return n.$$set=t=>{"elem_id"in t&&l(0,_=t.elem_id),"elem_classes"in t&&l(1,a=t.elem_classes),"visible"in t&&l(2,e=t.visible),"$$scope"in t&&l(3,i=t.$$scope)},[_,a,e,i,u]}class q extends c{constructor(s){super(),m(this,s,S,C,v,{elem_id:0,elem_classes:1,visible:2})}}const A=q,K=["static"];export{A as Component,K as modes}; -//# sourceMappingURL=index-8dee978a.js.map diff --git a/spaces/DaleChen/AutoGPT/autogpt/cli.py b/spaces/DaleChen/AutoGPT/autogpt/cli.py deleted file mode 100644 index a2e99cb421cad005528cb160e948ce59ccfcdb66..0000000000000000000000000000000000000000 --- a/spaces/DaleChen/AutoGPT/autogpt/cli.py +++ /dev/null @@ -1,145 +0,0 @@ -"""Main script for the autogpt package.""" -import click - - -@click.group(invoke_without_command=True) -@click.option("-c", "--continuous", is_flag=True, help="Enable Continuous Mode") -@click.option( - "--skip-reprompt", - "-y", - is_flag=True, - help="Skips the re-prompting messages at the beginning of the script", -) -@click.option( - "--ai-settings", - "-C", - help="Specifies which ai_settings.yaml file to use, will also automatically skip the re-prompt.", -) -@click.option( - "-l", - "--continuous-limit", - type=int, - help="Defines the number of times to run in continuous mode", -) -@click.option("--speak", is_flag=True, help="Enable Speak Mode") -@click.option("--debug", is_flag=True, help="Enable Debug Mode") -@click.option("--gpt3only", is_flag=True, help="Enable GPT3.5 Only Mode") -@click.option("--gpt4only", is_flag=True, help="Enable GPT4 Only Mode") -@click.option( - "--use-memory", - "-m", - "memory_type", - type=str, - help="Defines which Memory backend to use", -) -@click.option( - "-b", - "--browser-name", - help="Specifies which web-browser to use when using selenium to scrape the web.", -) -@click.option( - "--allow-downloads", - is_flag=True, - help="Dangerous: Allows Auto-GPT to download files natively.", -) -@click.option( - "--skip-news", - is_flag=True, - help="Specifies whether to suppress the output of latest news on startup.", -) -@click.pass_context -def main( - ctx: click.Context, - continuous: bool, - continuous_limit: int, - ai_settings: str, - skip_reprompt: bool, - speak: bool, - debug: bool, - gpt3only: bool, - gpt4only: bool, - memory_type: str, - browser_name: str, - 
allow_downloads: bool, - skip_news: bool, -) -> None: - """ - Welcome to AutoGPT an experimental open-source application showcasing the capabilities of the GPT-4 pushing the boundaries of AI. - - Start an Auto-GPT assistant. - """ - # Put imports inside function to avoid importing everything when starting the CLI - import logging - - from colorama import Fore - - from autogpt.agent.agent import Agent - from autogpt.config import Config, check_openai_api_key - from autogpt.configurator import create_config - from autogpt.logs import logger - from autogpt.memory import get_memory - from autogpt.prompt import construct_prompt - from autogpt.utils import get_current_git_branch, get_latest_bulletin - - if ctx.invoked_subcommand is None: - cfg = Config() - # TODO: fill in llm values here - check_openai_api_key() - create_config( - continuous, - continuous_limit, - ai_settings, - skip_reprompt, - speak, - debug, - gpt3only, - gpt4only, - memory_type, - browser_name, - allow_downloads, - skip_news, - ) - logger.set_level(logging.DEBUG if cfg.debug_mode else logging.INFO) - ai_name = "" - if not cfg.skip_news: - motd = get_latest_bulletin() - if motd: - logger.typewriter_log("NEWS: ", Fore.GREEN, motd) - git_branch = get_current_git_branch() - if git_branch and git_branch != "stable": - logger.typewriter_log( - "WARNING: ", - Fore.RED, - f"You are running on `{git_branch}` branch " - "- this is not a supported branch.", - ) - system_prompt = construct_prompt() - # print(prompt) - # Initialize variables - full_message_history = [] - next_action_count = 0 - # Make a constant: - triggering_prompt = ( - "Determine which next command to use, and respond using the" - " format specified above:" - ) - # Initialize memory and make sure it is empty. - # this is particularly important for indexing and referencing pinecone memory - memory = get_memory(cfg, init=True) - logger.typewriter_log( - "Using memory of type:", Fore.GREEN, f"{memory.__class__.__name__}" - ) - logger.typewriter_log("Using Browser:", Fore.GREEN, cfg.selenium_web_browser) - agent = Agent( - ai_name=ai_name, - memory=memory, - full_message_history=full_message_history, - next_action_count=next_action_count, - system_prompt=system_prompt, - triggering_prompt=triggering_prompt, - ) - agent.start_interaction_loop() - - -if __name__ == "__main__": - main() diff --git a/spaces/DamarJati/DamarJati-NSFW-filter-DecentScan/README.md b/spaces/DamarJati/DamarJati-NSFW-filter-DecentScan/README.md deleted file mode 100644 index 4625ca074aa5cfac803813452d565cd75f751cac..0000000000000000000000000000000000000000 --- a/spaces/DamarJati/DamarJati-NSFW-filter-DecentScan/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: NSFW Filter DecentScan -emoji: 👀 -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.45.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/DeeeTeeee01/VODAFONE-CUSTOMER-CHURN-PREDICTION-APP/app.py b/spaces/DeeeTeeee01/VODAFONE-CUSTOMER-CHURN-PREDICTION-APP/app.py deleted file mode 100644 index 3cdac0af665e66c74140f5059c0e17fedd297e13..0000000000000000000000000000000000000000 --- a/spaces/DeeeTeeee01/VODAFONE-CUSTOMER-CHURN-PREDICTION-APP/app.py +++ /dev/null @@ -1,139 +0,0 @@ -#Importing the libraries -import gradio as gr -import pickle -import pandas as pd -import numpy as np -import joblib -from PIL import Image - -#using joblib to load the model: -num_imputer = 
joblib.load('num_imputer.joblib') # loading the imputer -cat_imputer = joblib.load('cat_imputer.joblib') # loading the imputer -encoder = joblib.load('encoder.joblib') # loading the encoder -scaler = joblib.load('scaler.joblib') # loading the scaler -model = joblib.load('ml.joblib') # loading the model - - -# Create a function that applies the ML pipeline and makes predictions -def predict(gender,SeniorCitizen,Partner,Dependents, tenure, PhoneService,MultipleLines, - InternetService,OnlineSecurity,OnlineBackup,DeviceProtection,TechSupport,StreamingTV,StreamingMovies, - Contract,PaperlessBilling,PaymentMethod,MonthlyCharges,TotalCharges): - - - - # Create a dataframe with the input data - input_df = pd.DataFrame({ - 'gender': [gender], - 'SeniorCitizen': [SeniorCitizen], - 'Partner': [Partner], - 'Dependents': [Dependents], - 'tenure': [tenure], - 'PhoneService': [PhoneService], - 'MultipleLines': [MultipleLines], - 'InternetService': [InternetService], - 'OnlineSecurity': [OnlineSecurity], - 'OnlineBackup': [OnlineBackup], - 'DeviceProtection': [DeviceProtection], - 'TechSupport': [TechSupport], - 'StreamingTV': [StreamingTV], - 'StreamingMovies': [StreamingMovies], - 'Contract': [Contract], - 'PaperlessBilling': [PaperlessBilling], - 'PaymentMethod': [PaymentMethod], - 'MonthlyCharges': [MonthlyCharges], - 'TotalCharges': [TotalCharges] - - }) - -# Create a list with the categorical and numerical columns - cat_columns = [col for col in input_df.columns if input_df[col].dtype == 'object'] - num_columns = [col for col in input_df.columns if input_df[col].dtype != 'object'] - - # Impute the missing values - input_df_imputed_cat = cat_imputer.transform(input_df[cat_columns]) - input_df_imputed_num = num_imputer.transform(input_df[num_columns]) - - # Encode the categorical columns - input_encoded_df = pd.DataFrame(encoder.transform(input_df_imputed_cat).toarray(), - columns=encoder.get_feature_names_out(cat_columns)) - - # Scale the numerical columns - input_df_scaled = scaler.transform(input_df_imputed_num) - input_scaled_df = pd.DataFrame(input_df_scaled , columns = num_columns) - - - #joining the cat encoded and num scaled - final_df = pd.concat([input_encoded_df, input_scaled_df], axis=1) - - final_df = final_df.reindex(columns=['SeniorCitizen','tenure','MonthlyCharges','TotalCharges', - 'gender_Female','gender_Male','Partner_No','Partner_Yes','Dependents_No','Dependents_Yes','PhoneService_No', - 'PhoneService_Yes','MultipleLines_No','MultipleLines_Yes','InternetService_DSL','InternetService_Fiber optic', - 'InternetService_No','OnlineSecurity_No','OnlineSecurity_Yes','OnlineBackup_No','OnlineBackup_Yes','DeviceProtection_No', - 'DeviceProtection_Yes','TechSupport_No','TechSupport_Yes','StreamingTV_No','StreamingTV_Yes','StreamingMovies_No', - 'StreamingMovies_Yes','Contract_Month-to-month','Contract_One year','Contract_Two year','PaperlessBilling_No', - 'PaperlessBilling_Yes','PaymentMethod_Bank transfer (automatic)','PaymentMethod_Credit card (automatic)','PaymentMethod_Electronic check', - 'PaymentMethod_Mailed check']) - - # Make predictions using the model - predict = model.predict(final_df) - - - prediction_label = "THIS CUSTOMER WILL CHURN" if predict.item() == "Yes" else "THIS CUSTOMER WILL NOT CHURN" - - - return prediction_label - - #return predictions - -#define the input interface - - -input_interface = [] - -with gr.Blocks(css=".gradio-container {background-color:silver}") as app: - title = gr.Label('VODAFONE CUSTOMER CHURN PREDICTION') - img = 
gr.Image("VODA.png").style(height= 210 , width= 1250) - - - with gr.Row(): - gr.Markdown("This application provides predictions on whether a customer will churn or remain with the Company. Please enter the customer's information below and click PREDICT to view the prediction outcome.") - - with gr.Row(): - with gr.Column(scale=3.5, min_width=500): - input_interface = [ - gr.components.Radio(['male', 'female'], label='What is your Gender?'), - gr.components.Number(label="Are you a Seniorcitizen? (No=0 and Yes=1), 55years and above"), - gr.components.Radio(['Yes', 'No'], label='Do you have a Partner?'), - gr.components.Dropdown(['No', 'Yes'], label='Do you have any Dependents?'), - gr.components.Number(label='Length of Tenure (No. of months with Vodafone)'), - gr.components.Radio(['No', 'Yes'], label='Do you use Phone Service?'), - gr.components.Radio(['No', 'Yes'], label='Do you use Multiple Lines?'), - gr.components.Radio(['DSL', 'Fiber optic', 'No'], label='Do you use Internet Service?'), - gr.components.Radio(['No', 'Yes'], label='Do you use Online Security?'), - gr.components.Radio(['No', 'Yes'], label='Do you use Online Backup?'), - gr.components.Radio(['No', 'Yes'], label='Do you use Device Protection?'), - gr.components.Radio(['No', 'Yes'], label='Do you use the Tech Support?'), - gr.components.Radio(['No', 'Yes'], label='Do you Streaming TV?'), - gr.components.Radio(['No', 'Yes'], label='Do you Streaming Movies?'), - gr.components.Dropdown(['Month-to-month', 'One year', 'Two year'], label='Please what Contract Type do you Subscribe to?'), - gr.components.Radio(['Yes', 'No'], label='Do you use Paperless Billing?'), - gr.components.Dropdown(['Electronic check', 'Mailed check', 'Bank transfer (automatic)', - 'Credit card (automatic)'], label='What type of Payment Method do you use please?'), - gr.components.Number(label="How much is you Monthly Charges?"), - gr.components.Number(label="How much is your Total Charges?") - ] - - with gr.Row(): - predict_btn = gr.Button('Predict') - - - -# Define the output interfaces - output_interface = gr.Label(label="churn") - - predict_btn.click(fn=predict, inputs=input_interface, outputs=output_interface) - - - app.launch(share=False) - - diff --git a/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/training_stats.py b/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/training_stats.py deleted file mode 100644 index 26f467f9eaa074ee13de1cf2625cd7da44880847..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/PTI/torch_utils/training_stats.py +++ /dev/null @@ -1,268 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Facilities for reporting and collecting training statistics across -multiple processes and devices. The interface is designed to minimize -synchronization overhead as well as the amount of boilerplate in user -code.""" - -import re -import numpy as np -import torch -import dnnlib - -from . 
import misc - -#---------------------------------------------------------------------------- - -_num_moments = 3 # [num_scalars, sum_of_scalars, sum_of_squares] -_reduce_dtype = torch.float32 # Data type to use for initial per-tensor reduction. -_counter_dtype = torch.float64 # Data type to use for the internal counters. -_rank = 0 # Rank of the current process. -_sync_device = None # Device to use for multiprocess communication. None = single-process. -_sync_called = False # Has _sync() been called yet? -_counters = dict() # Running counters on each device, updated by report(): name => device => torch.Tensor -_cumulative = dict() # Cumulative counters on the CPU, updated by _sync(): name => torch.Tensor - -#---------------------------------------------------------------------------- - -def init_multiprocessing(rank, sync_device): - r"""Initializes `torch_utils.training_stats` for collecting statistics - across multiple processes. - - This function must be called after - `torch.distributed.init_process_group()` and before `Collector.update()`. - The call is not necessary if multi-process collection is not needed. - - Args: - rank: Rank of the current process. - sync_device: PyTorch device to use for inter-process - communication, or None to disable multi-process - collection. Typically `torch.device('cuda', rank)`. - """ - global _rank, _sync_device - assert not _sync_called - _rank = rank - _sync_device = sync_device - -#---------------------------------------------------------------------------- - -@misc.profiled_function -def report(name, value): - r"""Broadcasts the given set of scalars to all interested instances of - `Collector`, across device and process boundaries. - - This function is expected to be extremely cheap and can be safely - called from anywhere in the training loop, loss function, or inside a - `torch.nn.Module`. - - Warning: The current implementation expects the set of unique names to - be consistent across processes. Please make sure that `report()` is - called at least once for each unique name by each process, and in the - same order. If a given process has no scalars to broadcast, it can do - `report(name, [])` (empty list). - - Args: - name: Arbitrary string specifying the name of the statistic. - Averages are accumulated separately for each unique name. - value: Arbitrary set of scalars. Can be a list, tuple, - NumPy array, PyTorch tensor, or Python scalar. - - Returns: - The same `value` that was passed in. - """ - if name not in _counters: - _counters[name] = dict() - - elems = torch.as_tensor(value) - if elems.numel() == 0: - return value - - elems = elems.detach().flatten().to(_reduce_dtype) - moments = torch.stack([ - torch.ones_like(elems).sum(), - elems.sum(), - elems.square().sum(), - ]) - assert moments.ndim == 1 and moments.shape[0] == _num_moments - moments = moments.to(_counter_dtype) - - device = moments.device - if device not in _counters[name]: - _counters[name][device] = torch.zeros_like(moments) - _counters[name][device].add_(moments) - return value - -#---------------------------------------------------------------------------- - -def report0(name, value): - r"""Broadcasts the given set of scalars by the first process (`rank = 0`), - but ignores any scalars provided by the other processes. - See `report()` for further details. 
- """ - report(name, value if _rank == 0 else []) - return value - -#---------------------------------------------------------------------------- - -class Collector: - r"""Collects the scalars broadcasted by `report()` and `report0()` and - computes their long-term averages (mean and standard deviation) over - user-defined periods of time. - - The averages are first collected into internal counters that are not - directly visible to the user. They are then copied to the user-visible - state as a result of calling `update()` and can then be queried using - `mean()`, `std()`, `as_dict()`, etc. Calling `update()` also resets the - internal counters for the next round, so that the user-visible state - effectively reflects averages collected between the last two calls to - `update()`. - - Args: - regex: Regular expression defining which statistics to - collect. The default is to collect everything. - keep_previous: Whether to retain the previous averages if no - scalars were collected on a given round - (default: True). - """ - def __init__(self, regex='.*', keep_previous=True): - self._regex = re.compile(regex) - self._keep_previous = keep_previous - self._cumulative = dict() - self._moments = dict() - self.update() - self._moments.clear() - - def names(self): - r"""Returns the names of all statistics broadcasted so far that - match the regular expression specified at construction time. - """ - return [name for name in _counters if self._regex.fullmatch(name)] - - def update(self): - r"""Copies current values of the internal counters to the - user-visible state and resets them for the next round. - - If `keep_previous=True` was specified at construction time, the - operation is skipped for statistics that have received no scalars - since the last update, retaining their previous averages. - - This method performs a number of GPU-to-CPU transfers and one - `torch.distributed.all_reduce()`. It is intended to be called - periodically in the main training loop, typically once every - N training steps. - """ - if not self._keep_previous: - self._moments.clear() - for name, cumulative in _sync(self.names()): - if name not in self._cumulative: - self._cumulative[name] = torch.zeros([_num_moments], dtype=_counter_dtype) - delta = cumulative - self._cumulative[name] - self._cumulative[name].copy_(cumulative) - if float(delta[0]) != 0: - self._moments[name] = delta - - def _get_delta(self, name): - r"""Returns the raw moments that were accumulated for the given - statistic between the last two calls to `update()`, or zero if - no scalars were collected. - """ - assert self._regex.fullmatch(name) - if name not in self._moments: - self._moments[name] = torch.zeros([_num_moments], dtype=_counter_dtype) - return self._moments[name] - - def num(self, name): - r"""Returns the number of scalars that were accumulated for the given - statistic between the last two calls to `update()`, or zero if - no scalars were collected. - """ - delta = self._get_delta(name) - return int(delta[0]) - - def mean(self, name): - r"""Returns the mean of the scalars that were accumulated for the - given statistic between the last two calls to `update()`, or NaN if - no scalars were collected. - """ - delta = self._get_delta(name) - if int(delta[0]) == 0: - return float('nan') - return float(delta[1] / delta[0]) - - def std(self, name): - r"""Returns the standard deviation of the scalars that were - accumulated for the given statistic between the last two calls to - `update()`, or NaN if no scalars were collected. 
- """ - delta = self._get_delta(name) - if int(delta[0]) == 0 or not np.isfinite(float(delta[1])): - return float('nan') - if int(delta[0]) == 1: - return float(0) - mean = float(delta[1] / delta[0]) - raw_var = float(delta[2] / delta[0]) - return np.sqrt(max(raw_var - np.square(mean), 0)) - - def as_dict(self): - r"""Returns the averages accumulated between the last two calls to - `update()` as an `dnnlib.EasyDict`. The contents are as follows: - - dnnlib.EasyDict( - NAME = dnnlib.EasyDict(num=FLOAT, mean=FLOAT, std=FLOAT), - ... - ) - """ - stats = dnnlib.EasyDict() - for name in self.names(): - stats[name] = dnnlib.EasyDict(num=self.num(name), mean=self.mean(name), std=self.std(name)) - return stats - - def __getitem__(self, name): - r"""Convenience getter. - `collector[name]` is a synonym for `collector.mean(name)`. - """ - return self.mean(name) - -#---------------------------------------------------------------------------- - -def _sync(names): - r"""Synchronize the global cumulative counters across devices and - processes. Called internally by `Collector.update()`. - """ - if len(names) == 0: - return [] - global _sync_called - _sync_called = True - - # Collect deltas within current rank. - deltas = [] - device = _sync_device if _sync_device is not None else torch.device('cpu') - for name in names: - delta = torch.zeros([_num_moments], dtype=_counter_dtype, device=device) - for counter in _counters[name].values(): - delta.add_(counter.to(device)) - counter.copy_(torch.zeros_like(counter)) - deltas.append(delta) - deltas = torch.stack(deltas) - - # Sum deltas across ranks. - if _sync_device is not None: - torch.distributed.all_reduce(deltas) - - # Update cumulative values. - deltas = deltas.cpu() - for idx, name in enumerate(names): - if name not in _cumulative: - _cumulative[name] = torch.zeros([_num_moments], dtype=_counter_dtype) - _cumulative[name].add_(deltas[idx]) - - # Return name-value pairs. - return [(name, _cumulative[name]) for name in names] - -#---------------------------------------------------------------------------- diff --git a/spaces/ECCV2022/bytetrack/yolox/utils/metric.py b/spaces/ECCV2022/bytetrack/yolox/utils/metric.py deleted file mode 100644 index 4840b8dd0e97d26891fb8c515b6999cf35bd9544..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/yolox/utils/metric.py +++ /dev/null @@ -1,123 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- -# Copyright (c) 2014-2021 Megvii Inc. All rights reserved. -import numpy as np - -import torch - -import functools -import os -import time -from collections import defaultdict, deque - -__all__ = [ - "AverageMeter", - "MeterBuffer", - "get_total_and_free_memory_in_Mb", - "occupy_mem", - "gpu_mem_usage", -] - - -def get_total_and_free_memory_in_Mb(cuda_device): - devices_info_str = os.popen( - "nvidia-smi --query-gpu=memory.total,memory.used --format=csv,nounits,noheader" - ) - devices_info = devices_info_str.read().strip().split("\n") - total, used = devices_info[int(cuda_device)].split(",") - return int(total), int(used) - - -def occupy_mem(cuda_device, mem_ratio=0.95): - """ - pre-allocate gpu memory for training to avoid memory Fragmentation. - """ - total, used = get_total_and_free_memory_in_Mb(cuda_device) - max_mem = int(total * mem_ratio) - block_mem = max_mem - used - x = torch.cuda.FloatTensor(256, 1024, block_mem) - del x - time.sleep(5) - - -def gpu_mem_usage(): - """ - Compute the GPU memory usage for the current device (MB). 
- """ - mem_usage_bytes = torch.cuda.max_memory_allocated() - return mem_usage_bytes / (1024 * 1024) - - -class AverageMeter: - """Track a series of values and provide access to smoothed values over a - window or the global series average. - """ - - def __init__(self, window_size=50): - self._deque = deque(maxlen=window_size) - self._total = 0.0 - self._count = 0 - - def update(self, value): - self._deque.append(value) - self._count += 1 - self._total += value - - @property - def median(self): - d = np.array(list(self._deque)) - return np.median(d) - - @property - def avg(self): - # if deque is empty, nan will be returned. - d = np.array(list(self._deque)) - return d.mean() - - @property - def global_avg(self): - return self._total / max(self._count, 1e-5) - - @property - def latest(self): - return self._deque[-1] if len(self._deque) > 0 else None - - @property - def total(self): - return self._total - - def reset(self): - self._deque.clear() - self._total = 0.0 - self._count = 0 - - def clear(self): - self._deque.clear() - - -class MeterBuffer(defaultdict): - """Computes and stores the average and current value""" - - def __init__(self, window_size=20): - factory = functools.partial(AverageMeter, window_size=window_size) - super().__init__(factory) - - def reset(self): - for v in self.values(): - v.reset() - - def get_filtered_meter(self, filter_key="time"): - return {k: v for k, v in self.items() if filter_key in k} - - def update(self, values=None, **kwargs): - if values is None: - values = {} - values.update(kwargs) - for k, v in values.items(): - if isinstance(v, torch.Tensor): - v = v.detach() - self[k].update(v) - - def clear_meters(self): - for v in self.values(): - v.clear() diff --git a/spaces/EronSamez/RVC_HFmeu/train/data_utils.py b/spaces/EronSamez/RVC_HFmeu/train/data_utils.py deleted file mode 100644 index 71c0eff1815469a52399dc90a093a2f8a29223eb..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/train/data_utils.py +++ /dev/null @@ -1,512 +0,0 @@ -import os, traceback -import numpy as np -import torch -import torch.utils.data - -from mel_processing import spectrogram_torch -from utils import load_wav_to_torch, load_filepaths_and_text - - -class TextAudioLoaderMultiNSFsid(torch.utils.data.Dataset): - """ - 1) loads audio, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. 
- """ - - def __init__(self, audiopaths_and_text, hparams): - self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text) - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 5000) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - audiopaths_and_text_new = [] - lengths = [] - for audiopath, text, pitch, pitchf, dv in self.audiopaths_and_text: - if self.min_text_len <= len(text) and len(text) <= self.max_text_len: - audiopaths_and_text_new.append([audiopath, text, pitch, pitchf, dv]) - lengths.append(os.path.getsize(audiopath) // (3 * self.hop_length)) - self.audiopaths_and_text = audiopaths_and_text_new - self.lengths = lengths - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def get_audio_text_pair(self, audiopath_and_text): - # separate filename and text - file = audiopath_and_text[0] - phone = audiopath_and_text[1] - pitch = audiopath_and_text[2] - pitchf = audiopath_and_text[3] - dv = audiopath_and_text[4] - - phone, pitch, pitchf = self.get_labels(phone, pitch, pitchf) - spec, wav = self.get_audio(file) - dv = self.get_sid(dv) - - len_phone = phone.size()[0] - len_spec = spec.size()[-1] - # print(123,phone.shape,pitch.shape,spec.shape) - if len_phone != len_spec: - len_min = min(len_phone, len_spec) - # amor - len_wav = len_min * self.hop_length - - spec = spec[:, :len_min] - wav = wav[:, :len_wav] - - phone = phone[:len_min, :] - pitch = pitch[:len_min] - pitchf = pitchf[:len_min] - - return (spec, wav, phone, pitch, pitchf, dv) - - def get_labels(self, phone, pitch, pitchf): - phone = np.load(phone) - phone = np.repeat(phone, 2, axis=0) - pitch = np.load(pitch) - pitchf = np.load(pitchf) - n_num = min(phone.shape[0], 900) # DistributedBucketSampler - # print(234,phone.shape,pitch.shape) - phone = phone[:n_num, :] - pitch = pitch[:n_num] - pitchf = pitchf[:n_num] - phone = torch.FloatTensor(phone) - pitch = torch.LongTensor(pitch) - pitchf = torch.FloatTensor(pitchf) - return phone, pitch, pitchf - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError( - "{} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate - ) - ) - audio_norm = audio - # audio_norm = audio / self.max_wav_value - # audio_norm = audio / np.abs(audio).max() - - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - try: - spec = torch.load(spec_filename) - except: - print(spec_filename, traceback.format_exc()) - spec = spectrogram_torch( - audio_norm, - self.filter_length, - self.sampling_rate, - self.hop_length, - self.win_length, - center=False, - ) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename, _use_new_zipfile_serialization=False) - else: - spec = spectrogram_torch( - audio_norm, - self.filter_length, - self.sampling_rate, - self.hop_length, - self.win_length, - center=False, - ) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename, 
_use_new_zipfile_serialization=False) - return spec, audio_norm - - def __getitem__(self, index): - return self.get_audio_text_pair(self.audiopaths_and_text[index]) - - def __len__(self): - return len(self.audiopaths_and_text) - - -class TextAudioCollateMultiNSFsid: - """Zero-pads model inputs and targets""" - - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text and aduio - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[0].size(1) for x in batch]), dim=0, descending=True - ) - - max_spec_len = max([x[0].size(1) for x in batch]) - max_wave_len = max([x[1].size(1) for x in batch]) - spec_lengths = torch.LongTensor(len(batch)) - wave_lengths = torch.LongTensor(len(batch)) - spec_padded = torch.FloatTensor(len(batch), batch[0][0].size(0), max_spec_len) - wave_padded = torch.FloatTensor(len(batch), 1, max_wave_len) - spec_padded.zero_() - wave_padded.zero_() - - max_phone_len = max([x[2].size(0) for x in batch]) - phone_lengths = torch.LongTensor(len(batch)) - phone_padded = torch.FloatTensor( - len(batch), max_phone_len, batch[0][2].shape[1] - ) # (spec, wav, phone, pitch) - pitch_padded = torch.LongTensor(len(batch), max_phone_len) - pitchf_padded = torch.FloatTensor(len(batch), max_phone_len) - phone_padded.zero_() - pitch_padded.zero_() - pitchf_padded.zero_() - # dv = torch.FloatTensor(len(batch), 256)#gin=256 - sid = torch.LongTensor(len(batch)) - - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - spec = row[0] - spec_padded[i, :, : spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wave = row[1] - wave_padded[i, :, : wave.size(1)] = wave - wave_lengths[i] = wave.size(1) - - phone = row[2] - phone_padded[i, : phone.size(0), :] = phone - phone_lengths[i] = phone.size(0) - - pitch = row[3] - pitch_padded[i, : pitch.size(0)] = pitch - pitchf = row[4] - pitchf_padded[i, : pitchf.size(0)] = pitchf - - # dv[i] = row[5] - sid[i] = row[5] - - return ( - phone_padded, - phone_lengths, - pitch_padded, - pitchf_padded, - spec_padded, - spec_lengths, - wave_padded, - wave_lengths, - # dv - sid, - ) - - -class TextAudioLoader(torch.utils.data.Dataset): - """ - 1) loads audio, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. 
- """ - - def __init__(self, audiopaths_and_text, hparams): - self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text) - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 5000) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - audiopaths_and_text_new = [] - lengths = [] - for audiopath, text, dv in self.audiopaths_and_text: - if self.min_text_len <= len(text) and len(text) <= self.max_text_len: - audiopaths_and_text_new.append([audiopath, text, dv]) - lengths.append(os.path.getsize(audiopath) // (3 * self.hop_length)) - self.audiopaths_and_text = audiopaths_and_text_new - self.lengths = lengths - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def get_audio_text_pair(self, audiopath_and_text): - # separate filename and text - file = audiopath_and_text[0] - phone = audiopath_and_text[1] - dv = audiopath_and_text[2] - - phone = self.get_labels(phone) - spec, wav = self.get_audio(file) - dv = self.get_sid(dv) - - len_phone = phone.size()[0] - len_spec = spec.size()[-1] - if len_phone != len_spec: - len_min = min(len_phone, len_spec) - len_wav = len_min * self.hop_length - spec = spec[:, :len_min] - wav = wav[:, :len_wav] - phone = phone[:len_min, :] - return (spec, wav, phone, dv) - - def get_labels(self, phone): - phone = np.load(phone) - phone = np.repeat(phone, 2, axis=0) - n_num = min(phone.shape[0], 900) # DistributedBucketSampler - phone = phone[:n_num, :] - phone = torch.FloatTensor(phone) - return phone - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError( - "{} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate - ) - ) - audio_norm = audio - # audio_norm = audio / self.max_wav_value - # audio_norm = audio / np.abs(audio).max() - - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - try: - spec = torch.load(spec_filename) - except: - print(spec_filename, traceback.format_exc()) - spec = spectrogram_torch( - audio_norm, - self.filter_length, - self.sampling_rate, - self.hop_length, - self.win_length, - center=False, - ) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename, _use_new_zipfile_serialization=False) - else: - spec = spectrogram_torch( - audio_norm, - self.filter_length, - self.sampling_rate, - self.hop_length, - self.win_length, - center=False, - ) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename, _use_new_zipfile_serialization=False) - return spec, audio_norm - - def __getitem__(self, index): - return self.get_audio_text_pair(self.audiopaths_and_text[index]) - - def __len__(self): - return len(self.audiopaths_and_text) - - -class TextAudioCollate: - """Zero-pads model inputs and targets""" - - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text and aduio - PARAMS - ------ - batch: 
[text_normalized, spec_normalized, wav_normalized] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[0].size(1) for x in batch]), dim=0, descending=True - ) - - max_spec_len = max([x[0].size(1) for x in batch]) - max_wave_len = max([x[1].size(1) for x in batch]) - spec_lengths = torch.LongTensor(len(batch)) - wave_lengths = torch.LongTensor(len(batch)) - spec_padded = torch.FloatTensor(len(batch), batch[0][0].size(0), max_spec_len) - wave_padded = torch.FloatTensor(len(batch), 1, max_wave_len) - spec_padded.zero_() - wave_padded.zero_() - - max_phone_len = max([x[2].size(0) for x in batch]) - phone_lengths = torch.LongTensor(len(batch)) - phone_padded = torch.FloatTensor( - len(batch), max_phone_len, batch[0][2].shape[1] - ) - phone_padded.zero_() - sid = torch.LongTensor(len(batch)) - - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - spec = row[0] - spec_padded[i, :, : spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wave = row[1] - wave_padded[i, :, : wave.size(1)] = wave - wave_lengths[i] = wave.size(1) - - phone = row[2] - phone_padded[i, : phone.size(0), :] = phone - phone_lengths[i] = phone.size(0) - - sid[i] = row[3] - - return ( - phone_padded, - phone_lengths, - spec_padded, - spec_lengths, - wave_padded, - wave_lengths, - sid, - ) - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. - - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. 
- """ - - def __init__( - self, - dataset, - batch_size, - boundaries, - num_replicas=None, - rank=None, - shuffle=True, - ): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, -1, -1): # - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i + 1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = ( - total_batch_size - (len_bucket % total_batch_size) - ) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ( - ids_bucket - + ids_bucket * (rem // len_bucket) - + ids_bucket[: (rem % len_bucket)] - ) - - # subsample - ids_bucket = ids_bucket[self.rank :: self.num_replicas] - - # batching - for j in range(len(ids_bucket) // self.batch_size): - batch = [ - bucket[idx] - for idx in ids_bucket[ - j * self.batch_size : (j + 1) * self.batch_size - ] - ] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size diff --git a/spaces/FFZG-cleopatra/latvian-twitter-sentiment-classifier/app.py b/spaces/FFZG-cleopatra/latvian-twitter-sentiment-classifier/app.py deleted file mode 100644 index 1df372cac0b7ca30b6962d63231ef6a82cfa66ca..0000000000000000000000000000000000000000 --- a/spaces/FFZG-cleopatra/latvian-twitter-sentiment-classifier/app.py +++ /dev/null @@ -1,113 +0,0 @@ -import torch -from utils import label_full_decoder -import sys -import dataset -import engine -from model import BERTBaseUncased - -import config -from transformers import pipeline, AutoTokenizer, AutoModel -import gradio as gr - -from ekphrasis.classes.preprocessor import TextPreProcessor -from ekphrasis.classes.tokenizer import SocialTokenizer 
-from ekphrasis.dicts.emoticons import emoticons - -device = config.device -model = BERTBaseUncased() -model.load_state_dict(torch.load(config.MODEL_PATH, map_location=torch.device(device)),strict=False) -model.to(device) - -# T = tokenizer.TweetTokenizer( -# preserve_handles=True, preserve_hashes=True, preserve_case=False, preserve_url=False) - -# text_processor = TextPreProcessor( -# # terms that will be normalized -# normalize=['url', 'email', 'percent', 'money', 'phone', 'user'], -# # terms that will be annotated -# annotate={}, -# fix_html=True, # fix HTML tokens - -# # corpus from which the word statistics are going to be used -# # for word segmentation -# segmenter="twitter", - -# # corpus from which the word statistics are going to be used -# # for spell correction -# corrector="twitter", - -# unpack_hashtags=False, # perform word segmentation on hashtags -# unpack_contractions=False, # Unpack contractions (can't -> can not) -# spell_correct_elong=False, # spell correction for elongated words - -# # select a tokenizer. You can use SocialTokenizer, or pass your own -# # the tokenizer, should take as input a string and return a list of tokens -# tokenizer=SocialTokenizer(lowercase=True).tokenize, - -# # list of dictionaries, for replacing tokens extracted from the text, -# # with other expressions. You can pass more than one dictionaries. -# dicts=[] -# ) - - -social_tokenizer=SocialTokenizer(lowercase=True).tokenize - -def preprocess(text): - # tokens = T.tokenize(text) - # tokens = text_processor.pre_process_docs(text) - - tokens = social_tokenizer(text) - print(tokens, file=sys.stderr) - ptokens = [] - for index, token in enumerate(tokens): - if "@" in token: - if index > 0: - # check if previous token was mention - if "@" in tokens[index-1]: - pass - else: - ptokens.append("mention_0") - else: - ptokens.append("mention_0") - else: - ptokens.append(token) - - print(ptokens, file=sys.stderr) - return " ".join(ptokens) - - -def predict_sentiment(sentence = ""): - sentence = preprocess(sentence) - - model_path = config.MODEL_PATH - - test_dataset = dataset.BERTDataset( - review=[sentence], - target=[0] - ) - - test_data_loader = torch.utils.data.DataLoader( - test_dataset, - batch_size=config.VALID_BATCH_SIZE, - num_workers=2 - ) - - outputs, [] = engine.predict_fn(test_data_loader, model, device) - - print(outputs) - return label_full_decoder(outputs[0]) #{"label":outputs[0]} - - - - -interface = gr.Interface( - fn=predict_sentiment, - inputs='text', - outputs=['label'], - title='Latvian Twitter Sentiment Analysis', - examples= ["Es mīlu Tevi","Es ienīstu kafiju"], - description='Get the positive/neutral/negative sentiment for the given input.' -) - -interface.launch(inline = False) - diff --git a/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/ops/dcn/deform_conv.py b/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/ops/dcn/deform_conv.py deleted file mode 100644 index 734154f9ed9447d585eae7df6886acb136f8a3cf..0000000000000000000000000000000000000000 --- a/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/ops/dcn/deform_conv.py +++ /dev/null @@ -1,377 +0,0 @@ -import math -import torch -from torch import nn as nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn import functional as F -from torch.nn.modules.utils import _pair, _single - -try: - from . 
import deform_conv_ext -except ImportError: - import os - BASICSR_JIT = os.getenv('BASICSR_JIT') - if BASICSR_JIT == 'True': - from torch.utils.cpp_extension import load - module_path = os.path.dirname(__file__) - deform_conv_ext = load( - 'deform_conv', - sources=[ - os.path.join(module_path, 'src', 'deform_conv_ext.cpp'), - os.path.join(module_path, 'src', 'deform_conv_cuda.cpp'), - os.path.join(module_path, 'src', 'deform_conv_cuda_kernel.cu'), - ], - ) - - -class DeformConvFunction(Function): - - @staticmethod - def forward(ctx, - input, - offset, - weight, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1, - im2col_step=64): - if input is not None and input.dim() != 4: - raise ValueError(f'Expected 4D tensor as input, got {input.dim()}' 'D tensor instead.') - ctx.stride = _pair(stride) - ctx.padding = _pair(padding) - ctx.dilation = _pair(dilation) - ctx.groups = groups - ctx.deformable_groups = deformable_groups - ctx.im2col_step = im2col_step - - ctx.save_for_backward(input, offset, weight) - - output = input.new_empty(DeformConvFunction._output_size(input, weight, ctx.padding, ctx.dilation, ctx.stride)) - - ctx.bufs_ = [input.new_empty(0), input.new_empty(0)] # columns, ones - - if not input.is_cuda: - raise NotImplementedError - else: - cur_im2col_step = min(ctx.im2col_step, input.shape[0]) - assert (input.shape[0] % cur_im2col_step) == 0, 'im2col step must divide batchsize' - deform_conv_ext.deform_conv_forward(input, weight, - offset, output, ctx.bufs_[0], ctx.bufs_[1], weight.size(3), - weight.size(2), ctx.stride[1], ctx.stride[0], ctx.padding[1], - ctx.padding[0], ctx.dilation[1], ctx.dilation[0], ctx.groups, - ctx.deformable_groups, cur_im2col_step) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - input, offset, weight = ctx.saved_tensors - - grad_input = grad_offset = grad_weight = None - - if not grad_output.is_cuda: - raise NotImplementedError - else: - cur_im2col_step = min(ctx.im2col_step, input.shape[0]) - assert (input.shape[0] % cur_im2col_step) == 0, 'im2col step must divide batchsize' - - if ctx.needs_input_grad[0] or ctx.needs_input_grad[1]: - grad_input = torch.zeros_like(input) - grad_offset = torch.zeros_like(offset) - deform_conv_ext.deform_conv_backward_input(input, offset, grad_output, grad_input, - grad_offset, weight, ctx.bufs_[0], weight.size(3), - weight.size(2), ctx.stride[1], ctx.stride[0], ctx.padding[1], - ctx.padding[0], ctx.dilation[1], ctx.dilation[0], ctx.groups, - ctx.deformable_groups, cur_im2col_step) - - if ctx.needs_input_grad[2]: - grad_weight = torch.zeros_like(weight) - deform_conv_ext.deform_conv_backward_parameters(input, offset, grad_output, grad_weight, - ctx.bufs_[0], ctx.bufs_[1], weight.size(3), - weight.size(2), ctx.stride[1], ctx.stride[0], - ctx.padding[1], ctx.padding[0], ctx.dilation[1], - ctx.dilation[0], ctx.groups, ctx.deformable_groups, 1, - cur_im2col_step) - - return (grad_input, grad_offset, grad_weight, None, None, None, None, None) - - @staticmethod - def _output_size(input, weight, padding, dilation, stride): - channels = weight.size(0) - output_size = (input.size(0), channels) - for d in range(input.dim() - 2): - in_size = input.size(d + 2) - pad = padding[d] - kernel = dilation[d] * (weight.size(d + 2) - 1) + 1 - stride_ = stride[d] - output_size += ((in_size + (2 * pad) - kernel) // stride_ + 1, ) - if not all(map(lambda s: s > 0, output_size)): - raise ValueError('convolution input is too small (output would be ' f'{"x".join(map(str, 
output_size))})') - return output_size - - -class ModulatedDeformConvFunction(Function): - - @staticmethod - def forward(ctx, - input, - offset, - mask, - weight, - bias=None, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1): - ctx.stride = stride - ctx.padding = padding - ctx.dilation = dilation - ctx.groups = groups - ctx.deformable_groups = deformable_groups - ctx.with_bias = bias is not None - if not ctx.with_bias: - bias = input.new_empty(1) # fake tensor - if not input.is_cuda: - raise NotImplementedError - if weight.requires_grad or mask.requires_grad or offset.requires_grad \ - or input.requires_grad: - ctx.save_for_backward(input, offset, mask, weight, bias) - output = input.new_empty(ModulatedDeformConvFunction._infer_shape(ctx, input, weight)) - ctx._bufs = [input.new_empty(0), input.new_empty(0)] - deform_conv_ext.modulated_deform_conv_forward(input, weight, bias, ctx._bufs[0], offset, mask, output, - ctx._bufs[1], weight.shape[2], weight.shape[3], ctx.stride, - ctx.stride, ctx.padding, ctx.padding, ctx.dilation, ctx.dilation, - ctx.groups, ctx.deformable_groups, ctx.with_bias) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - if not grad_output.is_cuda: - raise NotImplementedError - input, offset, mask, weight, bias = ctx.saved_tensors - grad_input = torch.zeros_like(input) - grad_offset = torch.zeros_like(offset) - grad_mask = torch.zeros_like(mask) - grad_weight = torch.zeros_like(weight) - grad_bias = torch.zeros_like(bias) - deform_conv_ext.modulated_deform_conv_backward(input, weight, bias, ctx._bufs[0], offset, mask, ctx._bufs[1], - grad_input, grad_weight, grad_bias, grad_offset, grad_mask, - grad_output, weight.shape[2], weight.shape[3], ctx.stride, - ctx.stride, ctx.padding, ctx.padding, ctx.dilation, ctx.dilation, - ctx.groups, ctx.deformable_groups, ctx.with_bias) - if not ctx.with_bias: - grad_bias = None - - return (grad_input, grad_offset, grad_mask, grad_weight, grad_bias, None, None, None, None, None) - - @staticmethod - def _infer_shape(ctx, input, weight): - n = input.size(0) - channels_out = weight.size(0) - height, width = input.shape[2:4] - kernel_h, kernel_w = weight.shape[2:4] - height_out = (height + 2 * ctx.padding - (ctx.dilation * (kernel_h - 1) + 1)) // ctx.stride + 1 - width_out = (width + 2 * ctx.padding - (ctx.dilation * (kernel_w - 1) + 1)) // ctx.stride + 1 - return n, channels_out, height_out, width_out - - -deform_conv = DeformConvFunction.apply -modulated_deform_conv = ModulatedDeformConvFunction.apply - - -class DeformConv(nn.Module): - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1, - bias=False): - super(DeformConv, self).__init__() - - assert not bias - assert in_channels % groups == 0, \ - f'in_channels {in_channels} is not divisible by groups {groups}' - assert out_channels % groups == 0, \ - f'out_channels {out_channels} is not divisible ' \ - f'by groups {groups}' - - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.stride = _pair(stride) - self.padding = _pair(padding) - self.dilation = _pair(dilation) - self.groups = groups - self.deformable_groups = deformable_groups - # enable compatibility with nn.Conv2d - self.transposed = False - self.output_padding = _single(0) - - self.weight = nn.Parameter(torch.Tensor(out_channels, in_channels // self.groups, *self.kernel_size)) - - self.reset_parameters() - - def 
reset_parameters(self): - n = self.in_channels - for k in self.kernel_size: - n *= k - stdv = 1. / math.sqrt(n) - self.weight.data.uniform_(-stdv, stdv) - - def forward(self, x, offset): - # To fix an assert error in deform_conv_cuda.cpp:128 - # input image is smaller than kernel - input_pad = (x.size(2) < self.kernel_size[0] or x.size(3) < self.kernel_size[1]) - if input_pad: - pad_h = max(self.kernel_size[0] - x.size(2), 0) - pad_w = max(self.kernel_size[1] - x.size(3), 0) - x = F.pad(x, (0, pad_w, 0, pad_h), 'constant', 0).contiguous() - offset = F.pad(offset, (0, pad_w, 0, pad_h), 'constant', 0).contiguous() - out = deform_conv(x, offset, self.weight, self.stride, self.padding, self.dilation, self.groups, - self.deformable_groups) - if input_pad: - out = out[:, :, :out.size(2) - pad_h, :out.size(3) - pad_w].contiguous() - return out - - -class DeformConvPack(DeformConv): - """A Deformable Conv Encapsulation that acts as normal Conv layers. - - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): Same as nn.Conv2d. - kernel_size (int or tuple[int]): Same as nn.Conv2d. - stride (int or tuple[int]): Same as nn.Conv2d. - padding (int or tuple[int]): Same as nn.Conv2d. - dilation (int or tuple[int]): Same as nn.Conv2d. - groups (int): Same as nn.Conv2d. - bias (bool or str): If specified as `auto`, it will be decided by the - norm_cfg. Bias will be set as True if norm_cfg is None, otherwise - False. - """ - - _version = 2 - - def __init__(self, *args, **kwargs): - super(DeformConvPack, self).__init__(*args, **kwargs) - - self.conv_offset = nn.Conv2d( - self.in_channels, - self.deformable_groups * 2 * self.kernel_size[0] * self.kernel_size[1], - kernel_size=self.kernel_size, - stride=_pair(self.stride), - padding=_pair(self.padding), - dilation=_pair(self.dilation), - bias=True) - self.init_offset() - - def init_offset(self): - self.conv_offset.weight.data.zero_() - self.conv_offset.bias.data.zero_() - - def forward(self, x): - offset = self.conv_offset(x) - return deform_conv(x, offset, self.weight, self.stride, self.padding, self.dilation, self.groups, - self.deformable_groups) - - -class ModulatedDeformConv(nn.Module): - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1, - bias=True): - super(ModulatedDeformConv, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.stride = stride - self.padding = padding - self.dilation = dilation - self.groups = groups - self.deformable_groups = deformable_groups - self.with_bias = bias - # enable compatibility with nn.Conv2d - self.transposed = False - self.output_padding = _single(0) - - self.weight = nn.Parameter(torch.Tensor(out_channels, in_channels // groups, *self.kernel_size)) - if bias: - self.bias = nn.Parameter(torch.Tensor(out_channels)) - else: - self.register_parameter('bias', None) - self.init_weights() - - def init_weights(self): - n = self.in_channels - for k in self.kernel_size: - n *= k - stdv = 1. / math.sqrt(n) - self.weight.data.uniform_(-stdv, stdv) - if self.bias is not None: - self.bias.data.zero_() - - def forward(self, x, offset, mask): - return modulated_deform_conv(x, offset, mask, self.weight, self.bias, self.stride, self.padding, self.dilation, - self.groups, self.deformable_groups) - - -class ModulatedDeformConvPack(ModulatedDeformConv): - """A ModulatedDeformable Conv Encapsulation that acts as normal Conv layers. 
- - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): Same as nn.Conv2d. - kernel_size (int or tuple[int]): Same as nn.Conv2d. - stride (int or tuple[int]): Same as nn.Conv2d. - padding (int or tuple[int]): Same as nn.Conv2d. - dilation (int or tuple[int]): Same as nn.Conv2d. - groups (int): Same as nn.Conv2d. - bias (bool or str): If specified as `auto`, it will be decided by the - norm_cfg. Bias will be set as True if norm_cfg is None, otherwise - False. - """ - - _version = 2 - - def __init__(self, *args, **kwargs): - super(ModulatedDeformConvPack, self).__init__(*args, **kwargs) - - self.conv_offset = nn.Conv2d( - self.in_channels, - self.deformable_groups * 3 * self.kernel_size[0] * self.kernel_size[1], - kernel_size=self.kernel_size, - stride=_pair(self.stride), - padding=_pair(self.padding), - dilation=_pair(self.dilation), - bias=True) - self.init_weights() - - def init_weights(self): - super(ModulatedDeformConvPack, self).init_weights() - if hasattr(self, 'conv_offset'): - self.conv_offset.weight.data.zero_() - self.conv_offset.bias.data.zero_() - - def forward(self, x): - out = self.conv_offset(x) - o1, o2, mask = torch.chunk(out, 3, dim=1) - offset = torch.cat((o1, o2), dim=1) - mask = torch.sigmoid(mask) - return modulated_deform_conv(x, offset, mask, self.weight, self.bias, self.stride, self.padding, self.dilation, - self.groups, self.deformable_groups) diff --git a/spaces/Fernando22/freegpt-webui/g4f/Provider/Provider.py b/spaces/Fernando22/freegpt-webui/g4f/Provider/Provider.py deleted file mode 100644 index d24df76b6a6ccfc9b244f13a51bfc124b398a271..0000000000000000000000000000000000000000 --- a/spaces/Fernando22/freegpt-webui/g4f/Provider/Provider.py +++ /dev/null @@ -1,16 +0,0 @@ -import os -from ..typing import sha256, Dict, get_type_hints - -url = None -model = None -supports_stream = False -needs_auth = False - - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - return - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join( - [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) diff --git a/spaces/FrankZxShen/vits-fast-finetuning-umamusume/text/korean.py b/spaces/FrankZxShen/vits-fast-finetuning-umamusume/text/korean.py deleted file mode 100644 index edee07429a450c55e3d8e246997faaa1e0b89cc9..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/vits-fast-finetuning-umamusume/text/korean.py +++ /dev/null @@ -1,210 +0,0 @@ -import re -from jamo import h2j, j2hcj -import ko_pron - - -# This is a list of Korean classifiers preceded by pure Korean numerals. 
-_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (ipa, lazy ipa) pairs: -_ipa_to_lazy_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('t͡ɕ','ʧ'), - ('d͡ʑ','ʥ'), - ('ɲ','n^'), - ('ɕ','ʃ'), - ('ʷ','w'), - ('ɭ','l`'), - ('ʎ','ɾ'), - ('ɣ','ŋ'), - ('ɰ','ɯ'), - ('ʝ','j'), - ('ʌ','ə'), - ('ɡ','g'), - ('\u031a','#'), - ('\u0348','='), - ('\u031e',''), - ('\u0320',''), - ('\u0339','') -]] - - -def latin_to_hangul(text): - for regex, replacement in _latin_to_hangul: - text = re.sub(regex, replacement, text) - return text - - -def divide_hangul(text): - text = j2hcj(h2j(text)) - for regex, replacement in _hangul_divided: - text = re.sub(regex, replacement, text) - return text - - -def hangul_number(num, sino=True): - '''Reference https://github.com/Kyubyong/g2pK''' - num = re.sub(',', '', num) - - if num == '0': - return '영' - if not sino and num == '20': - return '스무' - - digits = '123456789' - names = '일이삼사오육칠팔구' - digit2name = {d: n for d, n in zip(digits, names)} - - modifiers = '한 두 세 네 다섯 여섯 일곱 여덟 아홉' - decimals = '열 스물 서른 마흔 쉰 예순 일흔 여든 아흔' - digit2mod = {d: mod for d, mod in zip(digits, modifiers.split())} - digit2dec = {d: dec for d, dec in zip(digits, decimals.split())} - - spelledout = [] - for i, digit in enumerate(num): - i = len(num) - i - 1 - if sino: - if i == 0: - name = digit2name.get(digit, '') - elif i == 1: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - else: - if i == 0: - name = digit2mod.get(digit, '') - elif i == 1: - name = digit2dec.get(digit, '') - if digit == '0': - if i % 4 == 0: - last_three = spelledout[-min(3, len(spelledout)):] - if ''.join(last_three) == '': - spelledout.append('') - continue - else: - spelledout.append('') - continue - if i == 2: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 3: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 4: - name = digit2name.get(digit, '') + '만' - name = name.replace('일만', '만') - elif i == 5: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - elif i == 6: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 7: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 8: - name = digit2name.get(digit, '') + '억' - elif i == 9: - name = digit2name.get(digit, '') + '십' - elif i == 10: - name = digit2name.get(digit, '') + '백' - elif i == 11: - name = digit2name.get(digit, '') + '천' - elif i == 12: - name 
= digit2name.get(digit, '') + '조' - elif i == 13: - name = digit2name.get(digit, '') + '십' - elif i == 14: - name = digit2name.get(digit, '') + '백' - elif i == 15: - name = digit2name.get(digit, '') + '천' - spelledout.append(name) - return ''.join(elem for elem in spelledout) - - -def number_to_hangul(text): - '''Reference https://github.com/Kyubyong/g2pK''' - tokens = set(re.findall(r'(\d[\d,]*)([\uac00-\ud71f]+)', text)) - for token in tokens: - num, classifier = token - if classifier[:2] in _korean_classifiers or classifier[0] in _korean_classifiers: - spelledout = hangul_number(num, sino=False) - else: - spelledout = hangul_number(num, sino=True) - text = text.replace(f'{num}{classifier}', f'{spelledout}{classifier}') - # digit by digit for remaining digits - digits = '0123456789' - names = '영일이삼사오육칠팔구' - for d, n in zip(digits, names): - text = text.replace(d, n) - return text - - -def korean_to_lazy_ipa(text): - text = latin_to_hangul(text) - text = number_to_hangul(text) - text=re.sub('[\uac00-\ud7af]+',lambda x:ko_pron.romanise(x.group(0),'ipa').split('] ~ [')[0],text) - for regex, replacement in _ipa_to_lazy_ipa: - text = re.sub(regex, replacement, text) - return text - - -def korean_to_ipa(text): - text = korean_to_lazy_ipa(text) - return text.replace('ʧ','tʃ').replace('ʥ','dʑ') diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/two_stage.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/two_stage.py deleted file mode 100644 index ba5bdde980dc0cd76375455c9c7ffaae4b25531e..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/detectors/two_stage.py +++ /dev/null @@ -1,215 +0,0 @@ -import torch -import torch.nn as nn - -# from mmdet.core import bbox2result, bbox2roi, build_assigner, build_sampler -from ..builder import DETECTORS, build_backbone, build_head, build_neck -from .base import BaseDetector - - -@DETECTORS.register_module() -class TwoStageDetector(BaseDetector): - """Base class for two-stage detectors. - - Two-stage detectors typically consisting of a region proposal network and a - task-specific regression head. - """ - - def __init__(self, - backbone, - neck=None, - rpn_head=None, - roi_head=None, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(TwoStageDetector, self).__init__() - self.backbone = build_backbone(backbone) - - if neck is not None: - self.neck = build_neck(neck) - - if rpn_head is not None: - rpn_train_cfg = train_cfg.rpn if train_cfg is not None else None - rpn_head_ = rpn_head.copy() - rpn_head_.update(train_cfg=rpn_train_cfg, test_cfg=test_cfg.rpn) - self.rpn_head = build_head(rpn_head_) - - if roi_head is not None: - # update train and test cfg here for now - # TODO: refactor assigner & sampler - rcnn_train_cfg = train_cfg.rcnn if train_cfg is not None else None - roi_head.update(train_cfg=rcnn_train_cfg) - roi_head.update(test_cfg=test_cfg.rcnn) - self.roi_head = build_head(roi_head) - - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - self.init_weights(pretrained=pretrained) - - @property - def with_rpn(self): - """bool: whether the detector has RPN""" - return hasattr(self, 'rpn_head') and self.rpn_head is not None - - @property - def with_roi_head(self): - """bool: whether the detector has a RoI head""" - return hasattr(self, 'roi_head') and self.roi_head is not None - - def init_weights(self, pretrained=None): - """Initialize the weights in detector. 
- - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - super(TwoStageDetector, self).init_weights(pretrained) - self.backbone.init_weights(pretrained=pretrained) - if self.with_neck: - if isinstance(self.neck, nn.Sequential): - for m in self.neck: - m.init_weights() - else: - self.neck.init_weights() - if self.with_rpn: - self.rpn_head.init_weights() - if self.with_roi_head: - self.roi_head.init_weights(pretrained) - - def extract_feat(self, img): - """Directly extract features from the backbone+neck.""" - x = self.backbone(img) - if self.with_neck: - x = self.neck(x) - return x - - def forward_dummy(self, img): - """Used for computing network flops. - - See `mmdetection/tools/analysis_tools/get_flops.py` - """ - outs = () - # backbone - x = self.extract_feat(img) - # rpn - if self.with_rpn: - rpn_outs = self.rpn_head(x) - outs = outs + (rpn_outs, ) - proposals = torch.randn(1000, 4).to(img.device) - # roi_head - roi_outs = self.roi_head.forward_dummy(x, proposals) - outs = outs + (roi_outs, ) - return outs - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None, - proposals=None, - **kwargs): - """ - Args: - img (Tensor): of shape (N, C, H, W) encoding input images. - Typically these should be mean centered and std scaled. - - img_metas (list[dict]): list of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - - gt_labels (list[Tensor]): class indices corresponding to each box - - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - gt_masks (None | Tensor) : true segmentation masks for each box - used if the architecture supports a segmentation task. - - proposals : override rpn proposals with custom proposals. Use when - `with_rpn` is False. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - x = self.extract_feat(img) - - losses = dict() - - # RPN forward and loss - if self.with_rpn: - proposal_cfg = self.train_cfg.get('rpn_proposal', - self.test_cfg.rpn) - rpn_losses, proposal_list = self.rpn_head.forward_train( - x, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_ignore=gt_bboxes_ignore, - proposal_cfg=proposal_cfg) - losses.update(rpn_losses) - else: - proposal_list = proposals - - roi_losses = self.roi_head.forward_train(x, img_metas, proposal_list, - gt_bboxes, gt_labels, - gt_bboxes_ignore, gt_masks, - **kwargs) - losses.update(roi_losses) - - return losses - - async def async_simple_test(self, - img, - img_meta, - proposals=None, - rescale=False): - """Async test without augmentation.""" - assert self.with_bbox, 'Bbox head must be implemented.' - x = self.extract_feat(img) - - if proposals is None: - proposal_list = await self.rpn_head.async_simple_test_rpn( - x, img_meta) - else: - proposal_list = proposals - - return await self.roi_head.async_simple_test( - x, proposal_list, img_meta, rescale=rescale) - - def simple_test(self, img, img_metas, proposals=None, rescale=False): - """Test without augmentation.""" - assert self.with_bbox, 'Bbox head must be implemented.' 
- - x = self.extract_feat(img) - - # get origin input shape to onnx dynamic input shape - if torch.onnx.is_in_onnx_export(): - img_shape = torch._shape_as_tensor(img)[2:] - img_metas[0]['img_shape_for_onnx'] = img_shape - - if proposals is None: - proposal_list = self.rpn_head.simple_test_rpn(x, img_metas) - else: - proposal_list = proposals - - return self.roi_head.simple_test( - x, proposal_list, img_metas, rescale=rescale) - - def aug_test(self, imgs, img_metas, rescale=False): - """Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. - """ - x = self.extract_feats(imgs) - proposal_list = self.rpn_head.aug_test_rpn(x, img_metas) - return self.roi_head.aug_test( - x, proposal_list, img_metas, rescale=rescale) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/decode_heads/dnl_head.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/decode_heads/dnl_head.py deleted file mode 100644 index 52a662ccb6ae8ff00930eb54ed71113724b6494e..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/decode_heads/dnl_head.py +++ /dev/null @@ -1,131 +0,0 @@ -import torch -from mmcv.cnn import NonLocal2d -from torch import nn - -from ..builder import HEADS -from .fcn_head import FCNHead - - -class DisentangledNonLocal2d(NonLocal2d): - """Disentangled Non-Local Blocks. - - Args: - temperature (float): Temperature to adjust attention. Default: 0.05 - """ - - def __init__(self, *arg, temperature, **kwargs): - super().__init__(*arg, **kwargs) - self.temperature = temperature - self.conv_mask = nn.Conv2d(self.in_channels, 1, kernel_size=1) - - def embedded_gaussian(self, theta_x, phi_x): - """Embedded gaussian with temperature.""" - - # NonLocal2d pairwise_weight: [N, HxW, HxW] - pairwise_weight = torch.matmul(theta_x, phi_x) - if self.use_scale: - # theta_x.shape[-1] is `self.inter_channels` - pairwise_weight /= theta_x.shape[-1]**0.5 - pairwise_weight /= self.temperature - pairwise_weight = pairwise_weight.softmax(dim=-1) - return pairwise_weight - - def forward(self, x): - # x: [N, C, H, W] - n = x.size(0) - - # g_x: [N, HxW, C] - g_x = self.g(x).view(n, self.inter_channels, -1) - g_x = g_x.permute(0, 2, 1) - - # theta_x: [N, HxW, C], phi_x: [N, C, HxW] - if self.mode == 'gaussian': - theta_x = x.view(n, self.in_channels, -1) - theta_x = theta_x.permute(0, 2, 1) - if self.sub_sample: - phi_x = self.phi(x).view(n, self.in_channels, -1) - else: - phi_x = x.view(n, self.in_channels, -1) - elif self.mode == 'concatenation': - theta_x = self.theta(x).view(n, self.inter_channels, -1, 1) - phi_x = self.phi(x).view(n, self.inter_channels, 1, -1) - else: - theta_x = self.theta(x).view(n, self.inter_channels, -1) - theta_x = theta_x.permute(0, 2, 1) - phi_x = self.phi(x).view(n, self.inter_channels, -1) - - # subtract mean - theta_x -= theta_x.mean(dim=-2, keepdim=True) - phi_x -= phi_x.mean(dim=-1, keepdim=True) - - pairwise_func = getattr(self, self.mode) - # pairwise_weight: [N, HxW, HxW] - pairwise_weight = pairwise_func(theta_x, phi_x) - - # y: [N, HxW, C] - y = torch.matmul(pairwise_weight, g_x) - # y: [N, C, H, W] - y = y.permute(0, 2, 1).contiguous().reshape(n, self.inter_channels, - *x.size()[2:]) - - # unary_mask: [N, 1, HxW] - unary_mask = self.conv_mask(x) - unary_mask = unary_mask.view(n, 1, -1) - unary_mask = unary_mask.softmax(dim=-1) - # unary_x: [N, 1, C] - unary_x = torch.matmul(unary_mask, g_x) - # unary_x: [N, C, 1, 1] - unary_x = 
unary_x.permute(0, 2, 1).contiguous().reshape( - n, self.inter_channels, 1, 1) - - output = x + self.conv_out(y + unary_x) - - return output - - -@HEADS.register_module() -class DNLHead(FCNHead): - """Disentangled Non-Local Neural Networks. - - This head is the implementation of `DNLNet - `_. - - Args: - reduction (int): Reduction factor of projection transform. Default: 2. - use_scale (bool): Whether to scale pairwise_weight by - sqrt(1/inter_channels). Default: False. - mode (str): The nonlocal mode. Options are 'embedded_gaussian', - 'dot_product'. Default: 'embedded_gaussian.'. - temperature (float): Temperature to adjust attention. Default: 0.05 - """ - - def __init__(self, - reduction=2, - use_scale=True, - mode='embedded_gaussian', - temperature=0.05, - **kwargs): - super(DNLHead, self).__init__(num_convs=2, **kwargs) - self.reduction = reduction - self.use_scale = use_scale - self.mode = mode - self.temperature = temperature - self.dnl_block = DisentangledNonLocal2d( - in_channels=self.channels, - reduction=self.reduction, - use_scale=self.use_scale, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - mode=self.mode, - temperature=self.temperature) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - output = self.convs[0](x) - output = self.dnl_block(output) - output = self.convs[1](output) - if self.concat_input: - output = self.conv_cat(torch.cat([x, output], dim=1)) - output = self.cls_seg(output) - return output diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/adversarial/discriminators/__init__.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/adversarial/discriminators/__init__.py deleted file mode 100644 index f9e5ff59950ee0b1d1a67c9b3831d67d08048148..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/adversarial/discriminators/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# flake8: noqa -from .mpd import MultiPeriodDiscriminator -from .msd import MultiScaleDiscriminator -from .msstftd import MultiScaleSTFTDiscriminator diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/metrics/__init__.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/metrics/__init__.py deleted file mode 100644 index 3474bdc4f1c88b21904d2a21ba077c93a8a70c8b..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/metrics/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -"""Metrics like CLAP score, FAD, KLD, Visqol, Chroma similarity, etc. 
-""" -# flake8: noqa -from .clap_consistency import CLAPTextConsistencyMetric, TextConsistencyMetric -from .chroma_cosinesim import ChromaCosineSimilarityMetric -from .fad import FrechetAudioDistanceMetric -from .kld import KLDivergenceMetric, PasstKLDivergenceMetric -from .rvm import RelativeVolumeMel -from .visqol import ViSQOL diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/scripts/mos.py b/spaces/GrandaddyShmax/AudioCraft_Plus/scripts/mos.py deleted file mode 100644 index a711c9ece23e72ed3a07032c7834ef7c56ab4f11..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/scripts/mos.py +++ /dev/null @@ -1,286 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - - -""" -To run this script, from the root of the repo. Make sure to have Flask installed - - FLASK_DEBUG=1 FLASK_APP=scripts.mos flask run -p 4567 - # or if you have gunicorn - gunicorn -w 4 -b 127.0.0.1:8895 -t 120 'scripts.mos:app' --access-logfile - - -""" -from collections import defaultdict -from functools import wraps -from hashlib import sha1 -import json -import math -from pathlib import Path -import random -import typing as tp - -from flask import Flask, redirect, render_template, request, session, url_for - -from audiocraft import train -from audiocraft.utils.samples.manager import get_samples_for_xps - - -SAMPLES_PER_PAGE = 8 -MAX_RATING = 5 -storage = Path(train.main.dora.dir / 'mos_storage') -storage.mkdir(exist_ok=True) -surveys = storage / 'surveys' -surveys.mkdir(exist_ok=True) -magma_root = Path(train.__file__).parent.parent -app = Flask('mos', static_folder=str(magma_root / 'scripts/static'), - template_folder=str(magma_root / 'scripts/templates')) -app.secret_key = b'audiocraft makes the best songs' - - -def normalize_path(path: Path): - """Just to make path a bit nicer, make them relative to the Dora root dir. - """ - path = path.resolve() - dora_dir = train.main.dora.dir.resolve() / 'xps' - return path.relative_to(dora_dir) - - -def get_full_path(normalized_path: Path): - """Revert `normalize_path`. - """ - return train.main.dora.dir.resolve() / 'xps' / normalized_path - - -def get_signature(xps: tp.List[str]): - """Return a signature for a list of XP signatures. - """ - return sha1(json.dumps(xps).encode()).hexdigest()[:10] - - -def ensure_logged(func): - """Ensure user is logged in. - """ - @wraps(func) - def _wrapped(*args, **kwargs): - user = session.get('user') - if user is None: - return redirect(url_for('login', redirect_to=request.url)) - return func(*args, **kwargs) - return _wrapped - - -@app.route('/login', methods=['GET', 'POST']) -def login(): - """Login user if not already, then redirect. - """ - user = session.get('user') - if user is None: - error = None - if request.method == 'POST': - user = request.form['user'] - if not user: - error = 'User cannot be empty' - if user is None or error: - return render_template('login.html', error=error) - assert user - session['user'] = user - redirect_to = request.args.get('redirect_to') - if redirect_to is None: - redirect_to = url_for('index') - return redirect(redirect_to) - - -@app.route('/', methods=['GET', 'POST']) -@ensure_logged -def index(): - """Offer to create a new study. 
- """ - errors = [] - if request.method == 'POST': - xps_or_grids = [part.strip() for part in request.form['xps'].split()] - xps = set() - for xp_or_grid in xps_or_grids: - xp_path = train.main.dora.dir / 'xps' / xp_or_grid - if xp_path.exists(): - xps.add(xp_or_grid) - continue - grid_path = train.main.dora.dir / 'grids' / xp_or_grid - if grid_path.exists(): - for child in grid_path.iterdir(): - if child.is_symlink(): - xps.add(child.name) - continue - errors.append(f'{xp_or_grid} is neither an XP nor a grid!') - assert xps or errors - blind = 'true' if request.form.get('blind') == 'on' else 'false' - xps = list(xps) - if not errors: - signature = get_signature(xps) - manifest = { - 'xps': xps, - } - survey_path = surveys / signature - survey_path.mkdir(exist_ok=True) - with open(survey_path / 'manifest.json', 'w') as f: - json.dump(manifest, f, indent=2) - return redirect(url_for('survey', blind=blind, signature=signature)) - return render_template('index.html', errors=errors) - - -@app.route('/survey/', methods=['GET', 'POST']) -@ensure_logged -def survey(signature): - success = request.args.get('success', False) - seed = int(request.args.get('seed', 4321)) - blind = request.args.get('blind', 'false') in ['true', 'on', 'True'] - exclude_prompted = request.args.get('exclude_prompted', 'false') in ['true', 'on', 'True'] - exclude_unprompted = request.args.get('exclude_unprompted', 'false') in ['true', 'on', 'True'] - max_epoch = int(request.args.get('max_epoch', '-1')) - survey_path = surveys / signature - assert survey_path.exists(), survey_path - - user = session['user'] - result_folder = survey_path / 'results' - result_folder.mkdir(exist_ok=True) - result_file = result_folder / f'{user}_{seed}.json' - - with open(survey_path / 'manifest.json') as f: - manifest = json.load(f) - - xps = [train.main.get_xp_from_sig(xp) for xp in manifest['xps']] - names, ref_name = train.main.get_names(xps) - - samples_kwargs = { - 'exclude_prompted': exclude_prompted, - 'exclude_unprompted': exclude_unprompted, - 'max_epoch': max_epoch, - } - matched_samples = get_samples_for_xps(xps, epoch=-1, **samples_kwargs) # fetch latest epoch - models_by_id = { - id: [{ - 'xp': xps[idx], - 'xp_name': names[idx], - 'model_id': f'{xps[idx].sig}-{sample.id}', - 'sample': sample, - 'is_prompted': sample.prompt is not None, - 'errors': [], - } for idx, sample in enumerate(samples)] - for id, samples in matched_samples.items() - } - experiments = [ - {'xp': xp, 'name': names[idx], 'epoch': list(matched_samples.values())[0][idx].epoch} - for idx, xp in enumerate(xps) - ] - - keys = list(matched_samples.keys()) - keys.sort() - rng = random.Random(seed) - rng.shuffle(keys) - model_ids = keys[:SAMPLES_PER_PAGE] - - if blind: - for key in model_ids: - rng.shuffle(models_by_id[key]) - - ok = True - if request.method == 'POST': - all_samples_results = [] - for id in model_ids: - models = models_by_id[id] - result = { - 'id': id, - 'is_prompted': models[0]['is_prompted'], - 'models': {} - } - all_samples_results.append(result) - for model in models: - rating = request.form[model['model_id']] - if rating: - rating = int(rating) - assert rating <= MAX_RATING and rating >= 1 - result['models'][model['xp'].sig] = rating - model['rating'] = rating - else: - ok = False - model['errors'].append('Please rate this model.') - if ok: - result = { - 'results': all_samples_results, - 'seed': seed, - 'user': user, - 'blind': blind, - 'exclude_prompted': exclude_prompted, - 'exclude_unprompted': exclude_unprompted, - } - print(result) - with 
open(result_file, 'w') as f: - json.dump(result, f) - seed = seed + 1 - return redirect(url_for( - 'survey', signature=signature, blind=blind, seed=seed, - exclude_prompted=exclude_prompted, exclude_unprompted=exclude_unprompted, - max_epoch=max_epoch, success=True)) - - ratings = list(range(1, MAX_RATING + 1)) - return render_template( - 'survey.html', ratings=ratings, blind=blind, seed=seed, signature=signature, success=success, - exclude_prompted=exclude_prompted, exclude_unprompted=exclude_unprompted, max_epoch=max_epoch, - experiments=experiments, models_by_id=models_by_id, model_ids=model_ids, errors=[], - ref_name=ref_name, already_filled=result_file.exists()) - - -@app.route('/audio/') -def audio(path: str): - full_path = Path('/') / path - assert full_path.suffix in [".mp3", ".wav"] - return full_path.read_bytes(), {'Content-Type': 'audio/mpeg'} - - -def mean(x): - return sum(x) / len(x) - - -def std(x): - m = mean(x) - return math.sqrt(sum((i - m)**2 for i in x) / len(x)) - - -@app.route('/results/') -@ensure_logged -def results(signature): - - survey_path = surveys / signature - assert survey_path.exists(), survey_path - result_folder = survey_path / 'results' - result_folder.mkdir(exist_ok=True) - - # ratings per model, then per user. - ratings_per_model = defaultdict(list) - users = [] - for result_file in result_folder.iterdir(): - if result_file.suffix != '.json': - continue - with open(result_file) as f: - results = json.load(f) - users.append(results['user']) - for result in results['results']: - for sig, rating in result['models'].items(): - ratings_per_model[sig].append(rating) - - fmt = '{:.2f}' - models = [] - for model in sorted(ratings_per_model.keys()): - ratings = ratings_per_model[model] - - models.append({ - 'sig': model, - 'samples': len(ratings), - 'mean_rating': fmt.format(mean(ratings)), - # the value 1.96 was probably chosen to achieve some - # confidence interval assuming gaussianity. - 'std_rating': fmt.format(1.96 * std(ratings) / len(ratings)**0.5), - }) - return render_template('results.html', signature=signature, models=models, users=users) diff --git a/spaces/Grezz/generate_human_motion/VQ-Trans/models/pos_encoding.py b/spaces/Grezz/generate_human_motion/VQ-Trans/models/pos_encoding.py deleted file mode 100644 index 066be3e1f8a1636f7eaabd1c534b9c618ee3e9f8..0000000000000000000000000000000000000000 --- a/spaces/Grezz/generate_human_motion/VQ-Trans/models/pos_encoding.py +++ /dev/null @@ -1,43 +0,0 @@ -""" -Various positional encodings for the transformer. -""" -import math -import torch -from torch import nn - -def PE1d_sincos(seq_length, dim): - """ - :param d_model: dimension of the model - :param length: length of positions - :return: length*d_model position matrix - """ - if dim % 2 != 0: - raise ValueError("Cannot use sin/cos positional encoding with " - "odd dim (got dim={:d})".format(dim)) - pe = torch.zeros(seq_length, dim) - position = torch.arange(0, seq_length).unsqueeze(1) - div_term = torch.exp((torch.arange(0, dim, 2, dtype=torch.float) * - -(math.log(10000.0) / dim))) - pe[:, 0::2] = torch.sin(position.float() * div_term) - pe[:, 1::2] = torch.cos(position.float() * div_term) - - return pe.unsqueeze(1) - - -class PositionEmbedding(nn.Module): - """ - Absolute pos embedding (standard), learned. 
- """ - def __init__(self, seq_length, dim, dropout, grad=False): - super().__init__() - self.embed = nn.Parameter(data=PE1d_sincos(seq_length, dim), requires_grad=grad) - self.dropout = nn.Dropout(p=dropout) - - def forward(self, x): - # x.shape: bs, seq_len, feat_dim - l = x.shape[1] - x = x.permute(1, 0, 2) + self.embed[:l].expand(x.permute(1, 0, 2).shape) - x = self.dropout(x.permute(1, 0, 2)) - return x - - \ No newline at end of file diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/generate_saliency_maps.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/generate_saliency_maps.py deleted file mode 100644 index a1cbee97c5d425c44d07a8f944208e7c62acc100..0000000000000000000000000000000000000000 --- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/generate_saliency_maps.py +++ /dev/null @@ -1,113 +0,0 @@ -from typing import List -import torch, os -from tqdm import tqdm -from torch.utils.data import DataLoader -from torch import nn, Tensor -import cv2 -import numpy as np -import torch.nn.functional as F - -from .rgbd_model import RGBDModel -from .configs.base_config import base_cfg -from .checkpoint import load_checkpoint -from .utils import clean_dir -from .dataset_fn import TestDataset -from .device import device - -@torch.no_grad() -def generate_saliency_maps_per_dataloader( - cfg: base_cfg, - dataloader: DataLoader, - model: RGBDModel, - save_dataset_dir: str, - is_padding: bool = True, is_fp16: bool = False -) -> None: - os.makedirs(save_dataset_dir, exist_ok=True) - for i_batch, (images, depths, gts, indices, image_sizes, image_names) in tqdm( - enumerate(dataloader), total=len(dataloader.dataset) // dataloader.batch_size - ): - gpu_images: Tensor = images.to(device) - gpu_depths: Tensor = depths.to(device) - gpu_gts: Tensor = depths.to(device) - with torch.cuda.amp.autocast(enabled=is_fp16): - preds_no_sigmoid: Tensor = model.inference(gpu_images, gpu_depths) - - for pred_no_sigmoid, image_name, image_size in zip(preds_no_sigmoid, image_names, image_sizes): - if is_padding: - w, h = image_size.numpy() - k = max(w, h) - res: Tensor = F.interpolate( - pred_no_sigmoid.unsqueeze(0), size=(k, k), - mode='bilinear', align_corners=False - ) - res = res[:, :, int((k-h)/2.): int((k+h)/2.), int((k-w)/2.): int((k+w)/2.)] - else: - res: Tensor = F.interpolate( - pred_no_sigmoid.unsqueeze(0), size=(image_size[1], image_size[0]), - mode='bilinear', align_corners=False - ) - res = res.sigmoid().data.cpu().numpy().squeeze() - res = (res - res.min()) / (res.max() - res.min() + 1e-8) - - if is_fp16: - res = np.float32(res) - cv2.imwrite(os.path.join(save_dataset_dir, image_name),res*255) - - del gpu_images, gpu_depths, gpu_gts, preds_no_sigmoid - -def get_experiment_saliency_maps_working_dir(cfg: base_cfg, epoch: int) -> str: - rs = f'{cfg.experiment_name}_epoch{epoch}{"_padding" if cfg.is_padding_for_test else ""}' - if cfg.is_inference_with_no_depth: - rs += '_nodepth' - return rs - -@torch.no_grad() -def generate_saliency_maps( - cfg: base_cfg, model: nn.Module, - epochs_lst: List[int], # List of epochs [400, 500, ...] 
- data_augmentation_version: int, - set_version: int = 1, # Set version 1, 2 -) -> List[str]: - experiment_names: List[str] = [] - - test_dataset_names = cfg.test_dataset_names - - for epoch in epochs_lst: - ckpt_path = os.path.join(cfg.experiment_dir_path, cfg.experiment_name, f'checkpoint_{epoch}.pt') - load_checkpoint(model, None, None, None, ckpt_path) - experiment_name = get_experiment_saliency_maps_working_dir(cfg, epoch) - experiment_names.append(experiment_name) - experiment_saliency_maps_working_dir = os.path.join( - cfg.sotas_working_dir, experiment_name - ) - clean_dir(experiment_saliency_maps_working_dir) - print(f'Output to directory {experiment_saliency_maps_working_dir}') - - model.to(device) - model.eval() - - batch_size = cfg.test_batch_size - - dataset_working_dir_paths: List[str] = [ - os.path.join(cfg.test_datasets_working_dir_path, dataset_name) \ - for dataset_name in test_dataset_names - ] # Colab - - for dataset_name in test_dataset_names: - print(f'Dataset {dataset_name}') - dataset_working_dir_path = os.path.join( - cfg.test_datasets_working_dir_path, dataset_name - ) - dataset = TestDataset(cfg, dataset_working_dir_path) - dataloader = DataLoader( - dataset, batch_size=batch_size, - shuffle=False, num_workers=cfg.num_workers - ) - generate_saliency_maps_per_dataloader( - cfg, dataloader, model, - os.path.join(experiment_saliency_maps_working_dir, dataset_name), - is_fp16 = cfg.is_fp16, - is_padding = cfg.is_padding_for_test - ) - return experiment_names - \ No newline at end of file diff --git a/spaces/HaloMaster/chinesesummary/fengshen/data/hubert/hubert_dataset.py b/spaces/HaloMaster/chinesesummary/fengshen/data/hubert/hubert_dataset.py deleted file mode 100644 index d8eaa25a5238740cc86a05af257aa3e0996f1499..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/data/hubert/hubert_dataset.py +++ /dev/null @@ -1,361 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import itertools -import logging -import os -import sys -from typing import Any, List, Optional, Union - -import numpy as np - -import torch -import torch.nn.functional as F -from fairseq.data import data_utils -from fairseq.data.fairseq_dataset import FairseqDataset - -logger = logging.getLogger(__name__) - - -def add_data_specific_args(parent_args): - parser = parent_args.add_argument_group('Hubert Dataset') - parser.add_argument('--data', type=str) - parser.add_argument('--sample_rate', type=float, default=16000) - parser.add_argument('--label_dir', type=str) - parser.add_argument('--labels', type=str, nargs='+') - parser.add_argument('--label_rate', type=float) - parser.add_argument('--max_keep_size', type=int, default=None) - parser.add_argument('--min_sample_size', type=int) - parser.add_argument('--max_sample_size', type=int) - parser.add_argument('--pad_audio', type=bool) - parser.add_argument('--normalize', type=bool) - parser.add_argument('--random_crop', type=bool) - parser.add_argument('--single_target', type=bool, default=False) - return parent_args - - -def load_audio(manifest_path, max_keep, min_keep): - n_long, n_short = 0, 0 - names, inds, sizes = [], [], [] - with open(manifest_path) as f: - root = f.readline().strip() - for ind, line in enumerate(f): - items = line.strip().split("\t") - assert len(items) == 2, line - sz = int(items[1]) - if min_keep is not None and sz < min_keep: - n_short += 1 - elif max_keep is not None and sz > max_keep: - n_long += 1 - else: - names.append(items[0]) - inds.append(ind) - sizes.append(sz) - tot = ind + 1 - logger.info( - ( - f"max_keep={max_keep}, min_keep={min_keep}, " - f"loaded {len(names)}, skipped {n_short} short and {n_long} long, " - f"longest-loaded={max(sizes)}, shortest-loaded={min(sizes)}" - ) - ) - return root, names, inds, tot, sizes - - -def load_label(label_path, inds, tot): - with open(label_path) as f: - labels = [line.rstrip() for line in f] - assert ( - len(labels) == tot - ), f"number of labels does not match ({len(labels)} != {tot})" - labels = [labels[i] for i in inds] - return labels - - -def load_label_offset(label_path, inds, tot): - with open(label_path) as f: - code_lengths = [len(line.encode("utf-8")) for line in f] - assert ( - len(code_lengths) == tot - ), f"number of labels does not match ({len(code_lengths)} != {tot})" - offsets = list(itertools.accumulate([0] + code_lengths)) - offsets = [(offsets[i], offsets[i + 1]) for i in inds] - return offsets - - -def verify_label_lengths( - audio_sizes, - audio_rate, - label_path, - label_rate, - inds, - tot, - tol=0.1, # tolerance in seconds -): - if label_rate < 0: - logger.info(f"{label_path} is sequence label. skipped") - return - - with open(label_path) as f: - lengths = [len(line.rstrip().split()) for line in f] - assert len(lengths) == tot - lengths = [lengths[i] for i in inds] - num_invalid = 0 - for i, ind in enumerate(inds): - dur_from_audio = audio_sizes[i] / audio_rate - dur_from_label = lengths[i] / label_rate - if abs(dur_from_audio - dur_from_label) > tol: - logger.warning( - ( - f"audio and label duration differ too much " - f"(|{dur_from_audio} - {dur_from_label}| > {tol}) " - f"in line {ind+1} of {label_path}. Check if `label_rate` " - f"is correctly set (currently {label_rate}). " - f"num. 
of samples = {audio_sizes[i]}; " - f"label length = {lengths[i]}" - ) - ) - num_invalid += 1 - if num_invalid > 0: - logger.warning( - f"total {num_invalid} (audio, label) pairs with mismatched lengths" - ) - - -class HubertDataset(FairseqDataset): - def __init__( - self, - manifest_path: str, - sample_rate: float, - label_paths: List[str], - label_rates: Union[List[float], float], # -1 for sequence labels - pad_list: List[str], - eos_list: List[str], - label_processors: Optional[List[Any]] = None, - max_keep_sample_size: Optional[int] = None, - min_keep_sample_size: Optional[int] = None, - max_sample_size: Optional[int] = None, - shuffle: bool = True, - pad_audio: bool = False, - normalize: bool = False, - store_labels: bool = True, - random_crop: bool = False, - single_target: bool = False, - ): - self.audio_root, self.audio_names, inds, tot, self.sizes = load_audio( - manifest_path, max_keep_sample_size, min_keep_sample_size - ) - self.sample_rate = sample_rate - self.shuffle = shuffle - self.random_crop = random_crop - - self.num_labels = len(label_paths) - self.pad_list = pad_list - self.eos_list = eos_list - self.label_processors = label_processors - self.single_target = single_target - self.label_rates = ( - [label_rates for _ in range(len(label_paths))] - if isinstance(label_rates, float) - else label_rates - ) - self.store_labels = store_labels - if store_labels: - self.label_list = [load_label(p, inds, tot) for p in label_paths] - else: - self.label_paths = label_paths - self.label_offsets_list = [ - load_label_offset(p, inds, tot) for p in label_paths - ] - assert label_processors is None or len(label_processors) == self.num_labels - for label_path, label_rate in zip(label_paths, self.label_rates): - verify_label_lengths( - self.sizes, sample_rate, label_path, label_rate, inds, tot - ) - - self.max_sample_size = ( - max_sample_size if max_sample_size is not None else sys.maxsize - ) - self.pad_audio = pad_audio - self.normalize = normalize - logger.info( - f"pad_audio={pad_audio}, random_crop={random_crop}, " - f"normalize={normalize}, max_sample_size={self.max_sample_size}" - ) - - def get_audio(self, index): - import soundfile as sf - - wav_path = os.path.join(self.audio_root, self.audio_names[index]) - wav, cur_sample_rate = sf.read(wav_path) - wav = torch.from_numpy(wav).float() - wav = self.postprocess(wav, cur_sample_rate) - return wav - - def get_label(self, index, label_idx): - if self.store_labels: - label = self.label_list[label_idx][index] - else: - with open(self.label_paths[label_idx]) as f: - offset_s, offset_e = self.label_offsets_list[label_idx][index] - f.seek(offset_s) - label = f.read(offset_e - offset_s) - - if self.label_processors is not None: - label = self.label_processors[label_idx](label) - return label - - def get_labels(self, index): - return [self.get_label(index, i) for i in range(self.num_labels)] - - def __getitem__(self, index): - wav = self.get_audio(index) - labels = self.get_labels(index) - return {"id": index, "source": wav, "label_list": labels} - - def __len__(self): - return len(self.sizes) - - def crop_to_max_size(self, wav, target_size): - size = len(wav) - diff = size - target_size - if diff <= 0: - return wav, 0 - - start, end = 0, target_size - if self.random_crop: - start = np.random.randint(0, diff + 1) - end = size - diff + start - return wav[start:end], start - - def collater(self, samples): - # target = max(sizes) -> random_crop not used - # target = max_sample_size -> random_crop used for long - samples = [s for s in samples if 
s["source"] is not None] - if len(samples) == 0: - return {} - - audios = [s["source"] for s in samples] - audio_sizes = [len(s) for s in audios] - if self.pad_audio: - audio_size = min(max(audio_sizes), self.max_sample_size) - else: - audio_size = min(min(audio_sizes), self.max_sample_size) - collated_audios, padding_mask, audio_starts = self.collater_audio( - audios, audio_size - ) - - targets_by_label = [ - [s["label_list"][i] for s in samples] for i in range(self.num_labels) - ] - targets_list, lengths_list, ntokens_list = self.collater_label( - targets_by_label, audio_size, audio_starts - ) - - net_input = {"source": collated_audios, "padding_mask": padding_mask} - batch = { - "id": torch.LongTensor([s["id"] for s in samples]), - "net_input": net_input, - } - - if self.single_target: - batch["target_lengths"] = lengths_list[0] - batch["ntokens"] = ntokens_list[0] - batch["target"] = targets_list[0] - else: - batch["target_lengths_list"] = lengths_list - batch["ntokens_list"] = ntokens_list - batch["target_list"] = targets_list - return batch - - def collater_audio(self, audios, audio_size): - collated_audios = audios[0].new_zeros(len(audios), audio_size) - padding_mask = ( - torch.BoolTensor(collated_audios.shape).fill_(False) - # if self.pad_audio else None - ) - audio_starts = [0 for _ in audios] - for i, audio in enumerate(audios): - diff = len(audio) - audio_size - if diff == 0: - collated_audios[i] = audio - elif diff < 0: - assert self.pad_audio - collated_audios[i] = torch.cat([audio, audio.new_full((-diff,), 0.0)]) - padding_mask[i, diff:] = True - else: - collated_audios[i], audio_starts[i] = self.crop_to_max_size( - audio, audio_size - ) - return collated_audios, padding_mask, audio_starts - - def collater_frm_label(self, targets, audio_size, audio_starts, label_rate, pad): - assert label_rate > 0 - s2f = label_rate / self.sample_rate - frm_starts = [int(round(s * s2f)) for s in audio_starts] - frm_size = int(round(audio_size * s2f)) - if not self.pad_audio: - rem_size = [len(t) - s for t, s in zip(targets, frm_starts)] - frm_size = min(frm_size, *rem_size) - targets = [t[s: s + frm_size] for t, s in zip(targets, frm_starts)] - logger.debug(f"audio_starts={audio_starts}") - logger.debug(f"frame_starts={frm_starts}") - logger.debug(f"frame_size={frm_size}") - - lengths = torch.LongTensor([len(t) for t in targets]) - ntokens = lengths.sum().item() - targets = data_utils.collate_tokens(targets, pad_idx=pad, left_pad=False) - return targets, lengths, ntokens - - def collater_seq_label(self, targets, pad): - lengths = torch.LongTensor([len(t) for t in targets]) - ntokens = lengths.sum().item() - targets = data_utils.collate_tokens(targets, pad_idx=pad, left_pad=False) - return targets, lengths, ntokens - - def collater_label(self, targets_by_label, audio_size, audio_starts): - targets_list, lengths_list, ntokens_list = [], [], [] - itr = zip(targets_by_label, self.label_rates, self.pad_list) - for targets, label_rate, pad in itr: - if label_rate == -1.0: - targets, lengths, ntokens = self.collater_seq_label(targets, pad) - else: - targets, lengths, ntokens = self.collater_frm_label( - targets, audio_size, audio_starts, label_rate, pad - ) - targets_list.append(targets) - lengths_list.append(lengths) - ntokens_list.append(ntokens) - return targets_list, lengths_list, ntokens_list - - def num_tokens(self, index): - return self.size(index) - - def size(self, index): - if self.pad_audio: - return self.sizes[index] - return min(self.sizes[index], self.max_sample_size) - - def 
ordered_indices(self): - if self.shuffle: - order = [np.random.permutation(len(self))] - else: - order = [np.arange(len(self))] - - order.append(self.sizes) - return np.lexsort(order)[::-1] - - def postprocess(self, wav, cur_sample_rate): - if wav.dim() == 2: - wav = wav.mean(-1) - assert wav.dim() == 1, wav.dim() - - if cur_sample_rate != self.sample_rate: - raise Exception(f"sr {cur_sample_rate} != {self.sample_rate}") - - if self.normalize: - with torch.no_grad(): - wav = F.layer_norm(wav, wav.shape) - return wav diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/criss/README.md b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/criss/README.md deleted file mode 100644 index 4689ed7c10497a5100b28fe6d6801a7c089da569..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/criss/README.md +++ /dev/null @@ -1,61 +0,0 @@ -# Cross-lingual Retrieval for Iterative Self-Supervised Training - -https://arxiv.org/pdf/2006.09526.pdf - -## Introduction - -CRISS is a multilingual sequence-to-sequnce pretraining method where mining and training processes are applied iteratively, improving cross-lingual alignment and translation ability at the same time. - -## Requirements: - -* faiss: https://github.com/facebookresearch/faiss -* mosesdecoder: https://github.com/moses-smt/mosesdecoder -* flores: https://github.com/facebookresearch/flores -* LASER: https://github.com/facebookresearch/LASER - -## Unsupervised Machine Translation -##### 1. Download and decompress CRISS checkpoints -``` -cd examples/criss -wget https://dl.fbaipublicfiles.com/criss/criss_3rd_checkpoints.tar.gz -tar -xf criss_checkpoints.tar.gz -``` -##### 2. Download and preprocess Flores test dataset -Make sure to run all scripts from examples/criss directory -``` -bash download_and_preprocess_flores_test.sh -``` - -##### 3. Run Evaluation on Sinhala-English -``` -bash unsupervised_mt/eval.sh -``` - -## Sentence Retrieval -##### 1. Download and preprocess Tatoeba dataset -``` -bash download_and_preprocess_tatoeba.sh -``` - -##### 2. Run Sentence Retrieval on Tatoeba Kazakh-English -``` -bash sentence_retrieval/sentence_retrieval_tatoeba.sh -``` - -## Mining -##### 1. Install faiss -Follow instructions on https://github.com/facebookresearch/faiss/blob/master/INSTALL.md -##### 2. Mine pseudo-parallel data between Kazakh and English -``` -bash mining/mine_example.sh -``` - -## Citation -```bibtex -@article{tran2020cross, - title={Cross-lingual retrieval for iterative self-supervised training}, - author={Tran, Chau and Tang, Yuqing and Li, Xian and Gu, Jiatao}, - journal={arXiv preprint arXiv:2006.09526}, - year={2020} -} -``` diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/w2vu_generate.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/w2vu_generate.py deleted file mode 100644 index 6177239dc75f6937d036462a5a2379aaee202e7d..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/w2vu_generate.py +++ /dev/null @@ -1,707 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Run inference for pre-processed data with a trained model. 
-""" - -import ast -from collections import namedtuple -from dataclasses import dataclass, field -from enum import Enum, auto -import hydra -from hydra.core.config_store import ConfigStore -import logging -import math -import os -from omegaconf import OmegaConf -from typing import Optional -import sys - -import editdistance -import torch - -from hydra.core.hydra_config import HydraConfig - -from fairseq import checkpoint_utils, progress_bar, tasks, utils -from fairseq.data.data_utils import post_process -from fairseq.dataclass.configs import FairseqDataclass, FairseqConfig -from fairseq.logging.meters import StopwatchMeter -from omegaconf import open_dict - -from examples.speech_recognition.kaldi.kaldi_decoder import KaldiDecoderConfig - -logging.root.setLevel(logging.INFO) -logging.basicConfig(stream=sys.stdout, level=logging.INFO) -logger = logging.getLogger(__name__) - - -class DecoderType(Enum): - VITERBI = auto() - KENLM = auto() - FAIRSEQ = auto() - KALDI = auto() - - -@dataclass -class UnsupGenerateConfig(FairseqDataclass): - fairseq: FairseqConfig = FairseqConfig() - lm_weight: float = field( - default=2.0, - metadata={"help": "language model weight"}, - ) - w2l_decoder: DecoderType = field( - default=DecoderType.VITERBI, - metadata={"help": "type of decoder to use"}, - ) - kaldi_decoder_config: Optional[KaldiDecoderConfig] = None - lexicon: Optional[str] = field( - default=None, - metadata={ - "help": "path to lexicon. This is also used to 'phonemize' for unsupvised param tuning" - }, - ) - lm_model: Optional[str] = field( - default=None, - metadata={"help": "path to language model (kenlm or fairseq)"}, - ) - unit_lm: bool = field( - default=False, - metadata={"help": "whether to use unit lm"}, - ) - beam_threshold: float = field( - default=50.0, - metadata={"help": "beam score threshold"}, - ) - beam_size_token: float = field( - default=100.0, - metadata={"help": "max tokens per beam"}, - ) - beam: int = field( - default=5, - metadata={"help": "decoder beam size"}, - ) - nbest: int = field( - default=1, - metadata={"help": "number of results to return"}, - ) - word_score: float = field( - default=1.0, - metadata={"help": "word score to add at end of word"}, - ) - unk_weight: float = field( - default=-math.inf, - metadata={"help": "unknown token weight"}, - ) - sil_weight: float = field( - default=0.0, - metadata={"help": "silence token weight"}, - ) - targets: Optional[str] = field( - default=None, - metadata={"help": "extension of ground truth labels to compute UER"}, - ) - results_path: Optional[str] = field( - default=None, - metadata={"help": "where to store results"}, - ) - post_process: Optional[str] = field( - default=None, - metadata={"help": "how to post process results"}, - ) - vocab_usage_power: float = field( - default=2, - metadata={"help": "for unsupervised param tuning"}, - ) - - viterbi_transcript: Optional[str] = field( - default=None, - metadata={"help": "for unsupervised param tuning"}, - ) - min_lm_ppl: float = field( - default=0, - metadata={"help": "for unsupervised param tuning"}, - ) - min_vt_uer: float = field( - default=0, - metadata={"help": "for unsupervised param tuning"}, - ) - - blank_weight: float = field( - default=0, - metadata={"help": "value to add or set for blank emission"}, - ) - blank_mode: str = field( - default="set", - metadata={ - "help": "can be add or set, how to modify blank emission with blank weight" - }, - ) - sil_is_blank: bool = field( - default=False, - metadata={"help": "if true, token is same as blank token"}, - ) - - 
unsupervised_tuning: bool = field( - default=False, - metadata={ - "help": "if true, returns a score based on unsupervised param selection metric instead of UER" - }, - ) - is_ax: bool = field( - default=False, - metadata={ - "help": "if true, assumes we are using ax for tuning and returns a tuple for ax to consume" - }, - ) - - -def get_dataset_itr(cfg, task): - return task.get_batch_iterator( - dataset=task.dataset(cfg.fairseq.dataset.gen_subset), - max_tokens=cfg.fairseq.dataset.max_tokens, - max_sentences=cfg.fairseq.dataset.batch_size, - max_positions=(sys.maxsize, sys.maxsize), - ignore_invalid_inputs=cfg.fairseq.dataset.skip_invalid_size_inputs_valid_test, - required_batch_size_multiple=cfg.fairseq.dataset.required_batch_size_multiple, - num_shards=cfg.fairseq.dataset.num_shards, - shard_id=cfg.fairseq.dataset.shard_id, - num_workers=cfg.fairseq.dataset.num_workers, - data_buffer_size=cfg.fairseq.dataset.data_buffer_size, - ).next_epoch_itr(shuffle=False) - - -def process_predictions( - cfg: UnsupGenerateConfig, - hypos, - tgt_dict, - target_tokens, - res_files, -): - retval = [] - word_preds = [] - transcriptions = [] - dec_scores = [] - - for i, hypo in enumerate(hypos[: min(len(hypos), cfg.nbest)]): - if torch.is_tensor(hypo["tokens"]): - tokens = hypo["tokens"].int().cpu() - tokens = tokens[tokens >= tgt_dict.nspecial] - hyp_pieces = tgt_dict.string(tokens) - else: - hyp_pieces = " ".join(hypo["tokens"]) - - if "words" in hypo and len(hypo["words"]) > 0: - hyp_words = " ".join(hypo["words"]) - else: - hyp_words = post_process(hyp_pieces, cfg.post_process) - - to_write = {} - if res_files is not None: - to_write[res_files["hypo.units"]] = hyp_pieces - to_write[res_files["hypo.words"]] = hyp_words - - tgt_words = "" - if target_tokens is not None: - if isinstance(target_tokens, str): - tgt_pieces = tgt_words = target_tokens - else: - tgt_pieces = tgt_dict.string(target_tokens) - tgt_words = post_process(tgt_pieces, cfg.post_process) - - if res_files is not None: - to_write[res_files["ref.units"]] = tgt_pieces - to_write[res_files["ref.words"]] = tgt_words - - if not cfg.fairseq.common_eval.quiet: - logger.info(f"HYPO {i}:" + hyp_words) - if tgt_words: - logger.info("TARGET:" + tgt_words) - - if "am_score" in hypo and "lm_score" in hypo: - logger.info( - f"DECODER AM SCORE: {hypo['am_score']}, DECODER LM SCORE: {hypo['lm_score']}, DECODER SCORE: {hypo['score']}" - ) - elif "score" in hypo: - logger.info(f"DECODER SCORE: {hypo['score']}") - - logger.info("___________________") - - hyp_words_arr = hyp_words.split() - tgt_words_arr = tgt_words.split() - - retval.append( - ( - editdistance.eval(hyp_words_arr, tgt_words_arr), - len(hyp_words_arr), - len(tgt_words_arr), - hyp_pieces, - hyp_words, - ) - ) - word_preds.append(hyp_words_arr) - transcriptions.append(to_write) - dec_scores.append(-hypo.get("score", 0)) # negate cuz kaldi returns NLL - - if len(retval) > 1: - best = None - for r, t in zip(retval, transcriptions): - if best is None or r[0] < best[0][0]: - best = r, t - for dest, tran in best[1].items(): - print(tran, file=dest) - dest.flush() - return best[0] - - assert len(transcriptions) == 1 - for dest, tran in transcriptions[0].items(): - print(tran, file=dest) - - return retval[0] - - -def prepare_result_files(cfg: UnsupGenerateConfig): - def get_res_file(file_prefix): - if cfg.fairseq.dataset.num_shards > 1: - file_prefix = f"{cfg.fairseq.dataset.shard_id}_{file_prefix}" - path = os.path.join( - cfg.results_path, - "{}{}.txt".format( - cfg.fairseq.dataset.gen_subset, - 
file_prefix, - ), - ) - return open(path, "w", buffering=1) - - if not cfg.results_path: - return None - - return { - "hypo.words": get_res_file(""), - "hypo.units": get_res_file("_units"), - "ref.words": get_res_file("_ref"), - "ref.units": get_res_file("_ref_units"), - "hypo.nbest.words": get_res_file("_nbest_words"), - } - - -def optimize_models(cfg: UnsupGenerateConfig, use_cuda, models): - """Optimize ensemble for generation""" - for model in models: - model.eval() - if cfg.fairseq.common.fp16: - model.half() - if use_cuda: - model.cuda() - - -GenResult = namedtuple( - "GenResult", - [ - "count", - "errs_t", - "gen_timer", - "lengths_hyp_unit_t", - "lengths_hyp_t", - "lengths_t", - "lm_score_t", - "num_feats", - "num_sentences", - "num_symbols", - "vt_err_t", - "vt_length_t", - ], -) - - -def generate(cfg: UnsupGenerateConfig, models, saved_cfg, use_cuda): - task = tasks.setup_task(cfg.fairseq.task) - saved_cfg.task.labels = cfg.fairseq.task.labels - task.load_dataset(cfg.fairseq.dataset.gen_subset, task_cfg=saved_cfg.task) - # Set dictionary - tgt_dict = task.target_dictionary - logger.info( - "| {} {} {} examples".format( - cfg.fairseq.task.data, - cfg.fairseq.dataset.gen_subset, - len(task.dataset(cfg.fairseq.dataset.gen_subset)), - ) - ) - # Load dataset (possibly sharded) - itr = get_dataset_itr(cfg, task) - # Initialize generator - gen_timer = StopwatchMeter() - - def build_generator(cfg: UnsupGenerateConfig): - w2l_decoder = cfg.w2l_decoder - if w2l_decoder == DecoderType.VITERBI: - from examples.speech_recognition.w2l_decoder import W2lViterbiDecoder - - return W2lViterbiDecoder(cfg, task.target_dictionary) - elif w2l_decoder == DecoderType.KENLM: - from examples.speech_recognition.w2l_decoder import W2lKenLMDecoder - - return W2lKenLMDecoder(cfg, task.target_dictionary) - elif w2l_decoder == DecoderType.FAIRSEQ: - from examples.speech_recognition.w2l_decoder import W2lFairseqLMDecoder - - return W2lFairseqLMDecoder(cfg, task.target_dictionary) - elif w2l_decoder == DecoderType.KALDI: - from examples.speech_recognition.kaldi.kaldi_decoder import KaldiDecoder - - assert cfg.kaldi_decoder_config is not None - - return KaldiDecoder( - cfg.kaldi_decoder_config, - cfg.beam, - ) - else: - raise NotImplementedError( - "only wav2letter decoders with (viterbi, kenlm, fairseqlm) options are supported at the moment but found " - + str(w2l_decoder) - ) - - generator = build_generator(cfg) - - kenlm = None - fairseq_lm = None - if cfg.lm_model is not None: - import kenlm - - kenlm = kenlm.Model(cfg.lm_model) - - num_sentences = 0 - if cfg.results_path is not None and not os.path.exists(cfg.results_path): - os.makedirs(cfg.results_path) - - res_files = prepare_result_files(cfg) - errs_t = 0 - lengths_hyp_t = 0 - lengths_hyp_unit_t = 0 - lengths_t = 0 - count = 0 - num_feats = 0 - all_hyp_pieces = [] - all_hyp_words = [] - - num_symbols = ( - len([s for s in tgt_dict.symbols if not s.startswith("madeup")]) - - tgt_dict.nspecial - ) - targets = None - if cfg.targets is not None: - tgt_path = os.path.join( - cfg.fairseq.task.data, cfg.fairseq.dataset.gen_subset + "." 
+ cfg.targets - ) - if os.path.exists(tgt_path): - with open(tgt_path, "r") as f: - targets = f.read().splitlines() - viterbi_transcript = None - if cfg.viterbi_transcript is not None and len(cfg.viterbi_transcript) > 0: - logger.info(f"loading viterbi transcript from {cfg.viterbi_transcript}") - with open(cfg.viterbi_transcript, "r") as vf: - viterbi_transcript = vf.readlines() - viterbi_transcript = [v.rstrip().split() for v in viterbi_transcript] - - gen_timer.start() - - start = 0 - end = len(itr) - - hypo_futures = None - if cfg.w2l_decoder == DecoderType.KALDI: - logger.info("Extracting features") - hypo_futures = [] - samples = [] - with progress_bar.build_progress_bar(cfg.fairseq.common, itr) as t: - for i, sample in enumerate(t): - if "net_input" not in sample or i < start or i >= end: - continue - if "padding_mask" not in sample["net_input"]: - sample["net_input"]["padding_mask"] = None - - hypos, num_feats = gen_hypos( - generator, models, num_feats, sample, task, use_cuda - ) - hypo_futures.append(hypos) - samples.append(sample) - itr = list(zip(hypo_futures, samples)) - start = 0 - end = len(itr) - logger.info("Finished extracting features") - - with progress_bar.build_progress_bar(cfg.fairseq.common, itr) as t: - for i, sample in enumerate(t): - if i < start or i >= end: - continue - - if hypo_futures is not None: - hypos, sample = sample - hypos = [h.result() for h in hypos] - else: - if "net_input" not in sample: - continue - - hypos, num_feats = gen_hypos( - generator, models, num_feats, sample, task, use_cuda - ) - - for i, sample_id in enumerate(sample["id"].tolist()): - if targets is not None: - target_tokens = targets[sample_id] - elif "target" in sample or "target_label" in sample: - toks = ( - sample["target"][i, :] - if "target_label" not in sample - else sample["target_label"][i, :] - ) - - target_tokens = utils.strip_pad(toks, tgt_dict.pad()).int().cpu() - else: - target_tokens = None - - # Process top predictions - ( - errs, - length_hyp, - length, - hyp_pieces, - hyp_words, - ) = process_predictions( - cfg, - hypos[i], - tgt_dict, - target_tokens, - res_files, - ) - errs_t += errs - lengths_hyp_t += length_hyp - lengths_hyp_unit_t += ( - len(hyp_pieces) if len(hyp_pieces) > 0 else len(hyp_words) - ) - lengths_t += length - count += 1 - all_hyp_pieces.append(hyp_pieces) - all_hyp_words.append(hyp_words) - - num_sentences += ( - sample["nsentences"] if "nsentences" in sample else sample["id"].numel() - ) - - lm_score_sum = 0 - if kenlm is not None: - - if cfg.unit_lm: - lm_score_sum = sum(kenlm.score(w) for w in all_hyp_pieces) - else: - lm_score_sum = sum(kenlm.score(w) for w in all_hyp_words) - elif fairseq_lm is not None: - lm_score_sum = sum(fairseq_lm.score([h.split() for h in all_hyp_words])[0]) - - vt_err_t = 0 - vt_length_t = 0 - if viterbi_transcript is not None: - unit_hyps = [] - if cfg.targets is not None and cfg.lexicon is not None: - lex = {} - with open(cfg.lexicon, "r") as lf: - for line in lf: - items = line.rstrip().split() - lex[items[0]] = items[1:] - for h in all_hyp_pieces: - hyp_ws = [] - for w in h.split(): - assert w in lex, w - hyp_ws.extend(lex[w]) - unit_hyps.append(hyp_ws) - - else: - unit_hyps.extend([h.split() for h in all_hyp_words]) - - vt_err_t = sum( - editdistance.eval(vt, h) for vt, h in zip(viterbi_transcript, unit_hyps) - ) - - vt_length_t = sum(len(h) for h in viterbi_transcript) - - if res_files is not None: - for r in res_files.values(): - r.close() - - gen_timer.stop(lengths_hyp_t) - - return GenResult( - count, - errs_t, 
- gen_timer, - lengths_hyp_unit_t, - lengths_hyp_t, - lengths_t, - lm_score_sum, - num_feats, - num_sentences, - num_symbols, - vt_err_t, - vt_length_t, - ) - - -def gen_hypos(generator, models, num_feats, sample, task, use_cuda): - sample = utils.move_to_cuda(sample) if use_cuda else sample - - if "features" in sample["net_input"]: - sample["net_input"]["dense_x_only"] = True - num_feats += ( - sample["net_input"]["features"].shape[0] - * sample["net_input"]["features"].shape[1] - ) - hypos = task.inference_step(generator, models, sample, None) - return hypos, num_feats - - -def main(cfg: UnsupGenerateConfig, model=None): - if ( - cfg.fairseq.dataset.max_tokens is None - and cfg.fairseq.dataset.batch_size is None - ): - cfg.fairseq.dataset.max_tokens = 1024000 - - use_cuda = torch.cuda.is_available() and not cfg.fairseq.common.cpu - - task = tasks.setup_task(cfg.fairseq.task) - - overrides = ast.literal_eval(cfg.fairseq.common_eval.model_overrides) - - if cfg.fairseq.task._name == "unpaired_audio_text": - overrides["model"] = { - "blank_weight": cfg.blank_weight, - "blank_mode": cfg.blank_mode, - "blank_is_sil": cfg.sil_is_blank, - "no_softmax": True, - "segmentation": { - "type": "NONE", - }, - } - else: - overrides["model"] = { - "blank_weight": cfg.blank_weight, - "blank_mode": cfg.blank_mode, - } - - if model is None: - # Load ensemble - logger.info("| loading model(s) from {}".format(cfg.fairseq.common_eval.path)) - models, saved_cfg = checkpoint_utils.load_model_ensemble( - cfg.fairseq.common_eval.path.split("\\"), - arg_overrides=overrides, - task=task, - suffix=cfg.fairseq.checkpoint.checkpoint_suffix, - strict=(cfg.fairseq.checkpoint.checkpoint_shard_count == 1), - num_shards=cfg.fairseq.checkpoint.checkpoint_shard_count, - ) - optimize_models(cfg, use_cuda, models) - else: - models = [model] - saved_cfg = cfg.fairseq - - with open_dict(saved_cfg.task): - saved_cfg.task.shuffle = False - saved_cfg.task.sort_by_length = False - - gen_result = generate(cfg, models, saved_cfg, use_cuda) - - wer = None - if gen_result.lengths_t > 0: - wer = gen_result.errs_t * 100.0 / gen_result.lengths_t - logger.info(f"WER: {wer}") - - lm_ppl = float("inf") - - if gen_result.lm_score_t != 0 and gen_result.lengths_hyp_t > 0: - hyp_len = gen_result.lengths_hyp_t - lm_ppl = math.pow( - 10, -gen_result.lm_score_t / (hyp_len + gen_result.num_sentences) - ) - logger.info(f"LM PPL: {lm_ppl}") - - logger.info( - "| Processed {} sentences ({} tokens) in {:.1f}s ({:.2f}" - " sentences/s, {:.2f} tokens/s)".format( - gen_result.num_sentences, - gen_result.gen_timer.n, - gen_result.gen_timer.sum, - gen_result.num_sentences / gen_result.gen_timer.sum, - 1.0 / gen_result.gen_timer.avg, - ) - ) - - vt_diff = None - if gen_result.vt_length_t > 0: - vt_diff = gen_result.vt_err_t / gen_result.vt_length_t - vt_diff = max(cfg.min_vt_uer, vt_diff) - - lm_ppl = max(cfg.min_lm_ppl, lm_ppl) - - if not cfg.unsupervised_tuning == 0: - weighted_score = wer - else: - weighted_score = math.log(lm_ppl) * (vt_diff or 1.0) - - res = ( - f"| Generate {cfg.fairseq.dataset.gen_subset} with beam={cfg.beam}, " - f"lm_weight={cfg.kaldi_decoder_config.acoustic_scale if cfg.kaldi_decoder_config else cfg.lm_weight}, " - f"word_score={cfg.word_score}, sil_weight={cfg.sil_weight}, blank_weight={cfg.blank_weight}, " - f"WER: {wer}, LM_PPL: {lm_ppl}, num feats: {gen_result.num_feats}, " - f"length: {gen_result.lengths_hyp_t}, UER to viterbi: {(vt_diff or 0) * 100}, score: {weighted_score}" - ) - - logger.info(res) - # print(res) - - return 
task, weighted_score - - -@hydra.main( - config_path=os.path.join("../../..", "fairseq", "config"), config_name="config" -) -def hydra_main(cfg): - with open_dict(cfg): - # make hydra logging work with ddp (see # see https://github.com/facebookresearch/hydra/issues/1126) - cfg.job_logging_cfg = OmegaConf.to_container( - HydraConfig.get().job_logging, resolve=True - ) - - cfg = OmegaConf.create( - OmegaConf.to_container(cfg, resolve=False, enum_to_str=False) - ) - OmegaConf.set_struct(cfg, True) - logger.info(cfg) - - utils.import_user_module(cfg.fairseq.common) - - _, score = main(cfg) - - if cfg.is_ax: - return score, None - return score - - -def cli_main(): - try: - from hydra._internal.utils import get_args - - cfg_name = get_args().config_name or "config" - except: - logger.warning("Failed to get config name from hydra args") - cfg_name = "config" - - cs = ConfigStore.instance() - cs.store(name=cfg_name, node=UnsupGenerateConfig) - hydra_main() - - -if __name__ == "__main__": - cli_main() diff --git a/spaces/Hazem/Pub_face/README.md b/spaces/Hazem/Pub_face/README.md deleted file mode 100644 index fcc481fcbcc054b0f552030fc6efd94791b86910..0000000000000000000000000000000000000000 --- a/spaces/Hazem/Pub_face/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Image Face Upscale Restoration-GFPGAN -emoji: 📈 -colorFrom: blue -colorTo: gray -sdk: gradio -sdk_version: 3.1.7 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: nightfury/Image_Face_Upscale_Restoration-GFPGAN ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Himanshusingh/KernAI-stock-news-distilbert/app.py b/spaces/Himanshusingh/KernAI-stock-news-distilbert/app.py deleted file mode 100644 index 3ea07041ef10da45cf98369b0aa1c9f219f547b2..0000000000000000000000000000000000000000 --- a/spaces/Himanshusingh/KernAI-stock-news-distilbert/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/KernAI/stock-news-distilbert").launch() \ No newline at end of file diff --git a/spaces/HuangLab/CELL-E_2-Sequence_Prediction/celle/transformer.py b/spaces/HuangLab/CELL-E_2-Sequence_Prediction/celle/transformer.py deleted file mode 100644 index 9186ab4772261591cbe58c9db5882f14cf3bd66a..0000000000000000000000000000000000000000 --- a/spaces/HuangLab/CELL-E_2-Sequence_Prediction/celle/transformer.py +++ /dev/null @@ -1,213 +0,0 @@ -from functools import partial - -import torch -from torch import nn -import torch.nn.functional as F -from einops import rearrange - -from celle.reversible import SequentialSequence -from celle.attention import Attention - -from rotary_embedding_torch import RotaryEmbedding, broadcat -from celle.utils import exists, default, cast_tuple - -# https://arxiv.org/abs/2103.17239 -class LayerScale(nn.Module): - def __init__(self, dim, depth, fn): - super().__init__() - if depth <= 18: - init_eps = 0.1 - elif depth > 18 and depth <= 24: - init_eps = 1e-5 - else: - init_eps = 1e-6 - - scale = torch.zeros(1, 1, dim).fill_(init_eps) - self.scale = nn.Parameter(scale) - self.fn = fn - - def forward(self, x, **kwargs): - return self.fn(x, **kwargs) * self.scale - - -# layer norm -class PreNorm(nn.Module): - def __init__(self, dim, fn): - super().__init__() - self.norm = nn.LayerNorm(dim) - self.norm_out = nn.Identity() - self.fn = fn - - def forward(self, x, **kwargs): - x = self.norm(x) - x = self.fn(x, **kwargs) - return self.norm_out(x) - - -# feed forward - - -class GEGLU(nn.Module): - def forward(self, 
x): - x, gates = x.chunk(2, dim=-1) - return x * F.gelu(gates) - - -class FeedForward(nn.Module): - def __init__(self, dim, dropout=0.0, mult=4.0): - super().__init__() - self.net = nn.Sequential( - nn.Linear(dim, dim * mult * 2), - GEGLU(), - nn.Dropout(dropout), - nn.Linear(dim * mult, dim), - ) - - def forward(self, x): - return self.net(x) - - -# main transformer class -class Transformer(nn.Module): - def __init__( - self, - *, - dim, - depth, - seq_len, - causal=True, - heads=8, - dim_head=64, - ff_mult=4, - attn_dropout=0.0, - ff_dropout=0.0, - image_fmap_size=None, - num_images=None, - stable=False, - rotary_emb=True, - ): - super().__init__() - layers = nn.ModuleList([]) - - self.seq_len = seq_len - self.image_fmap_size = image_fmap_size - - for ind in range(depth): - - attn_class = partial(Attention, stable=stable) - - attn = attn_class( - dim, - causal=causal, - seq_len=seq_len, - heads=heads, - dim_head=dim_head, - dropout=attn_dropout, - ) - - ff = FeedForward(dim, mult=ff_mult, dropout=ff_dropout) - - layers.append( - nn.ModuleList( - [ - LayerScale( - dim, ind + 1, PreNorm(dim, attn) - ), - LayerScale( - dim, ind + 1, PreNorm(dim, ff) - ), - ] - ) - ) - - # pairs arguments with attention layer - route_attn = ((True, False),) * depth - attn_route_map = { - "mask": route_attn, - "rotary_pos_emb": route_attn, - } - - self.layers = SequentialSequence(layers, args_route=attn_route_map) - - # generate positional embeddings for rotary - - pos_emb = None - if rotary_emb: - rot_dim = dim_head // 3 - img_seq_len = ((image_fmap_size // num_images) ** 2) * num_images - - text_len = seq_len - img_seq_len + 1 - - text_pos_emb = RotaryEmbedding(dim=rot_dim) - - img_axial_pos_emb = RotaryEmbedding(dim=rot_dim, freqs_for="pixel") - - text_freqs = text_pos_emb(torch.arange(text_len)) - - img_to_text_freqs = text_pos_emb( - torch.full((img_seq_len,), 8192) - ) # image is given a position far away from text - - text_freqs = torch.cat((text_freqs, img_to_text_freqs), dim=0) - - img_freqs_axial = img_axial_pos_emb( - torch.linspace(-1, 1, steps=image_fmap_size) - ) - - if num_images > 1: - split_img_freqs_axial = torch.split( - img_freqs_axial, image_fmap_size // num_images, dim=0 - ) - - split_img_freqs = [ - broadcat( - ( - rearrange(img_freqs_axial_per_image, "i d -> i () d"), - rearrange(img_freqs_axial_per_image, "j d -> () j d"), - ), - dim=-1, - ) - for img_freqs_axial_per_image in split_img_freqs_axial - ] - - split_img_freqs = [ - rearrange(img_freqs_per_image, "h w d -> (h w) d") - for img_freqs_per_image in split_img_freqs - ] - - # concat per image-image_freqs - - img_freqs = torch.cat(split_img_freqs, dim=0) - - elif num_images == 1: - img_freqs = broadcat( - ( - rearrange(img_freqs_axial, "i d -> i () d"), - rearrange(img_freqs_axial, "j d -> () j d"), - ), - dim=-1, - ) - - img_freqs = rearrange(img_freqs, "h w d -> (h w) d") - - else: - assert False, "num_images must be int greater than 0" - self.img_axial_pos_emb = img_axial_pos_emb - self.text_pos_emb = text_pos_emb - - text_axial_freqs = img_axial_pos_emb( - torch.full((text_len,), -10.0) - ) # text is given a position of -10 apart from the image axial positions, which is from range [-1, 1] - - text_axial_freqs = torch.cat((text_axial_freqs, text_axial_freqs), dim=-1) - - img_freqs = torch.cat((text_axial_freqs, img_freqs), dim=0) - - pos_emb = torch.cat((text_freqs, img_freqs), dim=-1) - - pos_emb = rearrange(pos_emb, "n d -> () n d") - - self.register_buffer("pos_emb", pos_emb) - - def forward(self, x, **kwargs): - return 
self.layers(x, rotary_pos_emb=self.pos_emb, **kwargs) \ No newline at end of file diff --git a/spaces/ICML2022/resefa/utils/visualizers/grid_visualizer.py b/spaces/ICML2022/resefa/utils/visualizers/grid_visualizer.py deleted file mode 100644 index 291e5fee45816a9775242c3a138ebd0f55f1df20..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/resefa/utils/visualizers/grid_visualizer.py +++ /dev/null @@ -1,232 +0,0 @@ -# python3.7 -"""Contains the visualizer to visualize images by composing them as a gird.""" - -from ..image_utils import get_blank_image -from ..image_utils import get_grid_shape -from ..image_utils import parse_image_size -from ..image_utils import load_image -from ..image_utils import save_image -from ..image_utils import resize_image -from ..image_utils import list_images_from_dir - -__all__ = ['GridVisualizer'] - - -class GridVisualizer(object): - """Defines the visualizer that visualizes images as a grid. - - Basically, given a collection of images, this visualizer stitches them one - by one. Notably, this class also supports adding spaces between images, - adding borders around images, and using white/black background. - - Example: - - grid = GridVisualizer(num_rows, num_cols) - for i in range(num_rows): - for j in range(num_cols): - grid.add(i, j, image) - grid.save('visualize.jpg') - """ - - def __init__(self, - grid_size=0, - num_rows=0, - num_cols=0, - is_portrait=False, - image_size=None, - image_channels=0, - row_spacing=0, - col_spacing=0, - border_left=0, - border_right=0, - border_top=0, - border_bottom=0, - use_black_background=True): - """Initializes the grid visualizer. - - Args: - grid_size: Total number of cells, i.e., height * width. (default: 0) - num_rows: Number of rows. (default: 0) - num_cols: Number of columns. (default: 0) - is_portrait: Whether the grid should be portrait or landscape. - This is only used when it requires to compute `num_rows` and - `num_cols` automatically. See function `get_grid_shape()` in - file `./image_utils.py` for details. (default: False) - image_size: Size to visualize each image. (default: 0) - image_channels: Number of image channels. (default: 0) - row_spacing: Spacing between rows. (default: 0) - col_spacing: Spacing between columns. (default: 0) - border_left: Width of left border. (default: 0) - border_right: Width of right border. (default: 0) - border_top: Width of top border. (default: 0) - border_bottom: Width of bottom border. (default: 0) - use_black_background: Whether to use black background. 
- (default: True) - """ - self.reset(grid_size, num_rows, num_cols, is_portrait) - self.set_image_size(image_size) - self.set_image_channels(image_channels) - self.set_row_spacing(row_spacing) - self.set_col_spacing(col_spacing) - self.set_border_left(border_left) - self.set_border_right(border_right) - self.set_border_top(border_top) - self.set_border_bottom(border_bottom) - self.set_background(use_black_background) - self.grid = None - - def reset(self, - grid_size=0, - num_rows=0, - num_cols=0, - is_portrait=False): - """Resets the grid shape, i.e., number of rows/columns.""" - if grid_size > 0: - num_rows, num_cols = get_grid_shape(grid_size, - height=num_rows, - width=num_cols, - is_portrait=is_portrait) - self.grid_size = num_rows * num_cols - self.num_rows = num_rows - self.num_cols = num_cols - self.grid = None - - def set_image_size(self, image_size=None): - """Sets the image size of each cell in the grid.""" - height, width = parse_image_size(image_size) - self.image_height = height - self.image_width = width - - def set_image_channels(self, image_channels=0): - """Sets the number of channels of the grid.""" - self.image_channels = image_channels - - def set_row_spacing(self, row_spacing=0): - """Sets the spacing between grid rows.""" - self.row_spacing = row_spacing - - def set_col_spacing(self, col_spacing=0): - """Sets the spacing between grid columns.""" - self.col_spacing = col_spacing - - def set_border_left(self, border_left=0): - """Sets the width of the left border of the grid.""" - self.border_left = border_left - - def set_border_right(self, border_right=0): - """Sets the width of the right border of the grid.""" - self.border_right = border_right - - def set_border_top(self, border_top=0): - """Sets the width of the top border of the grid.""" - self.border_top = border_top - - def set_border_bottom(self, border_bottom=0): - """Sets the width of the bottom border of the grid.""" - self.border_bottom = border_bottom - - def set_background(self, use_black=True): - """Sets the grid background.""" - self.use_black_background = use_black - - def init_grid(self): - """Initializes the grid with a blank image.""" - assert self.num_rows > 0 - assert self.num_cols > 0 - assert self.image_height > 0 - assert self.image_width > 0 - assert self.image_channels > 0 - grid_height = (self.image_height * self.num_rows + - self.row_spacing * (self.num_rows - 1) + - self.border_top + self.border_bottom) - grid_width = (self.image_width * self.num_cols + - self.col_spacing * (self.num_cols - 1) + - self.border_left + self.border_right) - self.grid = get_blank_image(grid_height, grid_width, - channels=self.image_channels, - use_black=self.use_black_background) - - def add(self, i, j, image): - """Adds an image into the grid. - - NOTE: The input image is assumed to be with `RGB` channel order. 
- """ - if self.grid is None: - height, width = image.shape[0:2] - channels = 1 if image.ndim == 2 else image.shape[2] - height = self.image_height or height - width = self.image_width or width - channels = self.image_channels or channels - self.set_image_size((height, width)) - self.set_image_channels(channels) - self.init_grid() - if image.shape[0:2] != (self.image_height, self.image_width): - image = resize_image(image, (self.image_width, self.image_height)) - y = self.border_top + i * (self.image_height + self.row_spacing) - x = self.border_left + j * (self.image_width + self.col_spacing) - self.grid[y:y + self.image_height, x:x + self.image_width] = image - - def visualize_collection(self, - images, - save_path=None, - num_rows=0, - num_cols=0, - is_portrait=False, - is_row_major=True): - """Visualizes a collection of images one by one.""" - self.grid = None - self.reset(grid_size=len(images), - num_rows=num_rows, - num_cols=num_cols, - is_portrait=is_portrait) - for idx, image in enumerate(images): - if is_row_major: - row_idx, col_idx = divmod(idx, self.num_cols) - else: - col_idx, row_idx = divmod(idx, self.num_rows) - self.add(row_idx, col_idx, image) - if save_path: - self.save(save_path) - - def visualize_list(self, - image_list, - save_path=None, - num_rows=0, - num_cols=0, - is_portrait=False, - is_row_major=True): - """Visualizes a list of image files.""" - self.grid = None - self.reset(grid_size=len(image_list), - num_rows=num_rows, - num_cols=num_cols, - is_portrait=is_portrait) - for idx, filename in enumerate(image_list): - image = load_image(filename) - if is_row_major: - row_idx, col_idx = divmod(idx, self.num_cols) - else: - col_idx, row_idx = divmod(idx, self.num_rows) - self.add(row_idx, col_idx, image) - if save_path: - self.save(save_path) - - def visualize_directory(self, - directory, - save_path=None, - num_rows=0, - num_cols=0, - is_portrait=False, - is_row_major=True): - """Visualizes all images under a directory.""" - image_list = list_images_from_dir(directory) - self.visualize_list(image_list=image_list, - save_path=save_path, - num_rows=num_rows, - num_cols=num_cols, - is_portrait=is_portrait, - is_row_major=is_row_major) - - def save(self, path): - """Saves the grid.""" - save_image(path, self.grid) diff --git a/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/models/__init__.py b/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/models/__init__.py deleted file mode 100644 index e3413961d1d184b99835eb1e919b052d70298bc6..0000000000000000000000000000000000000000 --- a/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/models/__init__.py +++ /dev/null @@ -1,18 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -from .GroundingDINO import build_groundingdino - - -def build_model(args): - # we use register to maintain models from catdet6 on. 
- from .registry import MODULE_BUILD_FUNCS - - assert args.modelname in MODULE_BUILD_FUNCS._module_dict - build_func = MODULE_BUILD_FUNCS.get(args.modelname) - model = build_func(args) - return model diff --git a/spaces/IXIAOHEII/NB/README.md b/spaces/IXIAOHEII/NB/README.md deleted file mode 100644 index afc64c1a83b524f02feac26a499f0d5089476943..0000000000000000000000000000000000000000 --- a/spaces/IXIAOHEII/NB/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: NB -emoji: 🔥 -colorFrom: blue -colorTo: yellow -sdk: docker -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/models/ade20k/segm_lib/utils/__init__.py b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/models/ade20k/segm_lib/utils/__init__.py deleted file mode 100644 index abe3cbe49477fe37d4fc16249de8a10f4fb4a013..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/models/ade20k/segm_lib/utils/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .th import * diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/demo/src/components/helpers/onnxModelAPI.tsx b/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/demo/src/components/helpers/onnxModelAPI.tsx deleted file mode 100644 index 2e006c95b407ff4a7c0c071badf6a9cf2fe34ef0..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/demo/src/components/helpers/onnxModelAPI.tsx +++ /dev/null @@ -1,71 +0,0 @@ -// Copyright (c) Meta Platforms, Inc. and affiliates. -// All rights reserved. - -// This source code is licensed under the license found in the -// LICENSE file in the root directory of this source tree. - -import { Tensor } from "onnxruntime-web"; -import { modeDataProps } from "./Interfaces"; - -const modelData = ({ clicks, tensor, modelScale }: modeDataProps) => { - const imageEmbedding = tensor; - let pointCoords; - let pointLabels; - let pointCoordsTensor; - let pointLabelsTensor; - - // Check there are input click prompts - if (clicks) { - let n = clicks.length; - - // If there is no box input, a single padding point with - // label -1 and coordinates (0.0, 0.0) should be concatenated - // so initialize the array to support (n + 1) points. 
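  // Illustrative example -- the click values here are hypothetical, not taken from the demo.
  // With two foreground clicks (clickType 1) and no box input, n = 2, so both arrays hold
  // n + 1 entries: the two scaled clicks plus the padding point (0.0, 0.0) labelled -1:
  //   pointCoords -> [x0 * s, y0 * s, x1 * s, y1 * s, 0.0, 0.0]   (s = modelScale.samScale)
  //   pointLabels -> [1, 1, -1]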
- pointCoords = new Float32Array(2 * (n + 1)); - pointLabels = new Float32Array(n + 1); - - // Add clicks and scale to what SAM expects - for (let i = 0; i < n; i++) { - pointCoords[2 * i] = clicks[i].x * modelScale.samScale; - pointCoords[2 * i + 1] = clicks[i].y * modelScale.samScale; - pointLabels[i] = clicks[i].clickType; - } - - // Add in the extra point/label when only clicks and no box - // The extra point is at (0, 0) with label -1 - pointCoords[2 * n] = 0.0; - pointCoords[2 * n + 1] = 0.0; - pointLabels[n] = -1.0; - - // Create the tensor - pointCoordsTensor = new Tensor("float32", pointCoords, [1, n + 1, 2]); - pointLabelsTensor = new Tensor("float32", pointLabels, [1, n + 1]); - } - const imageSizeTensor = new Tensor("float32", [ - modelScale.height, - modelScale.width, - ]); - - if (pointCoordsTensor === undefined || pointLabelsTensor === undefined) - return; - - // There is no previous mask, so default to an empty tensor - const maskInput = new Tensor( - "float32", - new Float32Array(256 * 256), - [1, 1, 256, 256] - ); - // There is no previous mask, so default to 0 - const hasMaskInput = new Tensor("float32", [0]); - - return { - image_embeddings: imageEmbedding, - point_coords: pointCoordsTensor, - point_labels: pointLabelsTensor, - orig_im_size: imageSizeTensor, - mask_input: maskInput, - has_mask_input: hasMaskInput, - }; -}; - -export { modelData }; diff --git a/spaces/JadAssaf/STPIzeimer/app.py b/spaces/JadAssaf/STPIzeimer/app.py deleted file mode 100644 index e083af925b92f96e68f54d5066ea91c38ba06017..0000000000000000000000000000000000000000 --- a/spaces/JadAssaf/STPIzeimer/app.py +++ /dev/null @@ -1,62 +0,0 @@ -# %% -import gradio as gr -import joblib -loaded_rf_2way = joblib.load("STPI_2WAY_RandomForest.joblib") -loaded_rf_3way = joblib.load("STPI_3WAY_RandomForest.joblib") - - -def STPI(t_0_5_MaxValue,t_1_0_MaxValue,t_2_0_MaxValue, -# Acc_0_5__1_0_MaxValue, -Abs_Diff_t_0_5_MaxValue,Abs_Diff_t_1_0_MaxValue,Abs_Diff_t_2_0_MaxValue): - print('------------------') - - X = [t_0_5_MaxValue,t_1_0_MaxValue,t_2_0_MaxValue, - # Acc_0_5__1_0_MaxValue, - Abs_Diff_t_0_5_MaxValue,Abs_Diff_t_1_0_MaxValue,Abs_Diff_t_2_0_MaxValue] - print(X) - outcome_decoded = ['Normal','Keratoconic','Suspect'] - file_object = open('stpi_data.txt', 'a') - file_object.write(str(t_0_5_MaxValue)) - file_object.write(';') - file_object.write(str(t_1_0_MaxValue)) - file_object.write(';') - file_object.write(str(t_2_0_MaxValue)) - file_object.write(';') - # file_object.write(str(Acc_0_5__1_0_MaxValue)) - # file_object.write(';') - file_object.write(str(Abs_Diff_t_0_5_MaxValue)) - file_object.write(';') - file_object.write(str(Abs_Diff_t_1_0_MaxValue)) - file_object.write(';') - file_object.write(str(Abs_Diff_t_2_0_MaxValue)) - file_object.write(';') - file_object.write('\n') - file_object.close() - - result_2way = loaded_rf_2way.predict([X]) - print('The patient is ', outcome_decoded[int(result_2way)], ' through the 2way method') - - result_3way = loaded_rf_3way.predict([X]) - if result_2way == 0: - print('The patient is ', outcome_decoded[int(result_3way)], 'through the 3way method') - # result = 'The 3-way classification resulted in a ', outcome_decoded[int(result_3way)] + ' patient.' - # further_analysis = 'Futher analysis using the 2-way classification resulted in a ' + outcome_decoded[int(result_2way)] + ' label.' - return 'The patient is ' + outcome_decoded[int(result_3way)] + '.' - - # result = 'The 2-way classification resulted in a ', outcome_decoded[int(result_2way)] + ' patient.' 
- # further_analysis = 'Futher analysis using the 3-way classification resulted in a ' + outcome_decoded[int(result_3way)] + ' label.' - - return 'The patient is ' + outcome_decoded[int(result_2way)] + '.' - -iface = gr.Interface( - fn=STPI, - title='TSPI Calculator', - description='Calculates the Thickness Speed Progression Index (TSPI) through summarized tomographic parameters. Beta version made for Zeimer by Prof. Shady Awwad and Jad Assaf MD.', - inputs=["number", "number","number", - # "number", - "number", "number","number"], - outputs="text") -iface.launch( - # share=True - ) -# %% diff --git a/spaces/Jean-Baptiste/email_parser/README.md b/spaces/Jean-Baptiste/email_parser/README.md deleted file mode 100644 index 4c1fe5b42995d26ce750bfa089a24c1ff7f475e0..0000000000000000000000000000000000000000 --- a/spaces/Jean-Baptiste/email_parser/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Email_parser -emoji: 🌖 -colorFrom: yellow -colorTo: red -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/JeffJing/ZookChatBot/steamship/invocable/invocable.py b/spaces/JeffJing/ZookChatBot/steamship/invocable/invocable.py deleted file mode 100644 index 757ea1c0a19b4a5c50c238f53e0a467d055c5f92..0000000000000000000000000000000000000000 --- a/spaces/JeffJing/ZookChatBot/steamship/invocable/invocable.py +++ /dev/null @@ -1,264 +0,0 @@ -"""Please see https://docs.steamship.com/ for information about building a Steamship Package""" -import inspect -import logging -import pathlib -import time -from abc import ABC -from collections import defaultdict -from functools import wraps -from http import HTTPStatus -from typing import Any, Dict, Optional, Type, Union - -import toml - -from steamship.base.package_spec import MethodSpec, PackageSpec -from steamship.client.steamship import Steamship -from steamship.invocable import Config -from steamship.invocable.config import ConfigParameter -from steamship.invocable.invocable_request import InvocableRequest, InvocationContext -from steamship.invocable.invocable_response import InvocableResponse -from steamship.utils.url import Verb - - -def make_registering_decorator(decorator): - """ - Returns a copy of foreignDecorator, which is identical in every - way(*), except also appends a .decorator property to the callable it - spits out. - - (*)We can be somewhat "hygienic", but newDecorator still isn't signature-preserving, - i.e. you will not be able to get a runtime list of parameters. 
- For that, you need hackish libraries...but in this case, the only argument is func, so it's not a big issue - """ - - def new_decorator(func): - # Call to newDecorator(method) - # Exactly like old decorator, but output keeps track of what decorated it - output = decorator( - func - ) # apply foreignDecorator, like call to foreignDecorator(method) would have done - output.decorator = new_decorator # keep track of decorator - # R.original = func # might as well keep track of everything! - return output - - new_decorator.__name__ = decorator.__name__ - new_decorator.__doc__ = decorator.__doc__ - new_decorator.__is_endpoint__ = True - return new_decorator - - -# https://stackoverflow.com/questions/2366713/can-a-decorator-of-an-instance-method-access-the-class -# noinspection PyUnusedLocal -def endpoint(verb: str = None, path: str = None, **kwargs): - """By using ``kwargs`` we can tag the function with Any parameters.""" # noqa: RST210 - - def decorator(function): - # This is used in conjunction with the __init_subclass__ code! - # Otherwise the __name__ won't be correct in maybeDecorated.__name__! - # noinspection PyShadowingNames - @wraps(function) - def wrap(self, *args, **kwargs): - return function(self, *args, **kwargs) - - # Build a dictionary of String->Primitive Types to pass back with endpoint - # This enables the Engine to add support for features like public=True, etc, without the Client changing. - config: Dict[str, Union[str, bool, int, float]] = {} - for key, val in kwargs.items(): - if isinstance(val, (str, bool, int, float)): - config[key] = val - - wrap.__path__ = path - wrap.__verb__ = verb - wrap.__endpoint_config__ = config - - return wrap - - decorator = make_registering_decorator(decorator) - return decorator - - -def get(path: str, **kwargs): - return endpoint(verb=Verb.GET, path=path, **kwargs) - - -def post(path: str, **kwargs): - return endpoint(verb=Verb.POST, path=path, **kwargs) - - -class Invocable(ABC): - """A Steamship microservice. - - This model.py class: - - 1. Provide a pre-authenticated instance of the Steamship client - 2. Provides a Lambda handler that routes to registered functions - 3. Provides useful methods connecting functions to the router. - """ - - _method_mappings = defaultdict(dict) - _package_spec: PackageSpec - config: Config - context: InvocationContext - - def __init__( - self, - client: Steamship = None, - config: Dict[str, Any] = None, - context: InvocationContext = None, - ): - self.context = context - - try: - secret_kwargs = toml.load(".steamship/secrets.toml") - except FileNotFoundError: # Support local secret loading - try: - local_secrets_file = ( - pathlib.Path(inspect.getfile(type(self))).parent / ".steamship" / "secrets.toml" - ) - secret_kwargs = toml.load(str(local_secrets_file)) - except (TypeError, FileNotFoundError): - secret_kwargs = {} - - # The configuration for the Invocable is the union of: - # - # 1) The `secret_kwargs` dict, read in from .steamship/secrets.toml, if it exists, and - # 2) The `config` dict, provided upon instantiation. - # - # When invoked from within Steamship, the `config` dict is frozen, at the instance level, upon instance - # creation. All subsequent method invocations reuse that frozen config. 
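        # A minimal illustration of the precedence implemented just below -- the keys and
        # values are hypothetical, not taken from this package. Secrets form the base dict,
        # non-empty values passed via `config` override them, and empty strings are filtered
        # out so they never clobber a secret:
        #   secret_kwargs = {"api_key": "s3cr3t"}
        #   config        = {"api_key": "", "model_name": "demo"}
        #   merged        = {"api_key": "s3cr3t", "model_name": "demo"}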
- config = { - **secret_kwargs, - **{k: v for k, v in (config or {}).items() if v != ""}, - } - - # Finally, we set the config object to an instance of the class returned by `self.config_cls` - if config: - self.config = self.config_cls()(**config) - else: - self.config = self.config_cls()() - - self.client = client - - def __init_subclass__(cls, **kwargs): - super().__init_subclass__(**kwargs) - - start_time = time.time() - cls._package_spec = PackageSpec(name=cls.__name__, doc=cls.__doc__, methods=[]) - cls._method_mappings = defaultdict(dict) - base_fn_list = [ - may_be_decorated - for base_cls in cls.__bases__ - for may_be_decorated in base_cls.__dict__.values() - ] - for attribute in base_fn_list + list(cls.__dict__.values()): - decorator = getattr(attribute, "decorator", None) - if decorator: - if getattr(decorator, "__is_endpoint__", False): - path = getattr(attribute, "__path__", None) - verb = getattr(attribute, "__verb__", None) - config = getattr(attribute, "__endpoint_config__", {}) - method_spec = cls._register_mapping( - name=attribute.__name__, verb=verb, path=path, config=config - ) - cls._package_spec.methods.append(method_spec) - - # Add the HTTP GET /__dir__ method which returns a serialization of the PackageSpec. - # Wired up to both GET and POST for convenience (since POST is the default from the Python client, but - # GET is the default if using from a browser). - cls._register_mapping(name="__steamship_dir__", verb=Verb.GET, path="/__dir__") - cls._register_mapping(name="__steamship_dir__", verb=Verb.POST, path="/__dir__") - end_time = time.time() - logging.info(f"Registered package functions in {end_time - start_time} seconds.") - - def __steamship_dir__(self) -> dict: - """Return this Invocable's PackageSpec for remote inspection -- e.g. documentation or OpenAPI generation.""" - return self._package_spec.dict() - - @classmethod - def config_cls(cls) -> Type[Config]: - """Returns the configuration object for the Invocable. - - By default, Steamship packages and plugins will not take any configuration. Steamship packages and plugins may - declare a configuration object which extends from Config, if needed, as follows: - - class MyPackageOrPlugin: - class MyConfig(Config): - ... - - @classmethod - def config_cls(cls): - return MyPackageOrPlugin.MyConfig - """ # noqa: RST301 - return Config - - @classmethod - def _register_mapping( - cls, - name: str, - verb: Optional[Verb] = None, - path: str = "", - config: Dict[str, Union[int, float, bool, str]] = None, - ) -> MethodSpec: - """Registering a mapping permits the method to be invoked via HTTP.""" - method_spec = MethodSpec(cls, name, path=path, verb=verb, config=config) - # It's important to use method_spec.path below since that's the CLEANED path. - cls._method_mappings[verb][method_spec.path] = name - logging.info(f"[{cls.__name__}] {verb} {path} => {name}") - return method_spec - - def __call__(self, request: InvocableRequest, context: Any = None) -> InvocableResponse: - """Invokes a method call if it is registered.""" - if not hasattr(self.__class__, "_method_mappings"): - logging.error("__call__: No mappings available on invocable.") - return InvocableResponse.error( - code=HTTPStatus.NOT_FOUND, message="No mappings available for invocable." - ) - - if request.invocation is None: - logging.error("__call__: No invocation on request.") - return InvocableResponse.error( - code=HTTPStatus.NOT_FOUND, message="No invocation was found." 
- ) - - verb = Verb(request.invocation.http_verb.strip().upper()) - path = request.invocation.invocation_path - - path = MethodSpec.clean_path(path) - - logging.info(f"[{verb}] {path}") - - method_mappings = self.__class__._method_mappings - - if verb not in method_mappings: - logging.error(f"__call__: Verb '{verb}' not found in method_mappings.") - return InvocableResponse.error( - code=HTTPStatus.NOT_FOUND, - message=f"No methods for verb {verb} available.", - ) - - if path not in method_mappings[verb]: - logging.error(f"__call__: Path '{path}' not found in method_mappings[{verb}].") - return InvocableResponse.error( - code=HTTPStatus.NOT_FOUND, - message=f"No handler for {verb} {path} available.", - ) - - method = method_mappings[verb][path] - if not (hasattr(self, method) and callable(getattr(self, method))): - logging.error( - f"__call__: Method not found or not callable for '{path}' in method_mappings[{verb}]." - ) - return InvocableResponse.error( - code=HTTPStatus.INTERNAL_SERVER_ERROR, - message=f"Handler for {verb} {path} not callable.", - ) - - arguments = request.invocation.arguments - if arguments is None: - return getattr(self, method)() - else: - return getattr(self, method)(**arguments) - - @classmethod - def get_config_parameters(cls) -> Dict[str, ConfigParameter]: - return cls.config_cls().get_config_parameters() diff --git a/spaces/Justin-Choo/QuickGen-Photo/README.md b/spaces/Justin-Choo/QuickGen-Photo/README.md deleted file mode 100644 index 1b82bad4ba4950496b43cb31e967cb21c305b7ff..0000000000000000000000000000000000000000 --- a/spaces/Justin-Choo/QuickGen-Photo/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Photo-Gen -emoji: 💩 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false -license: creativeml-openrail-m -duplicated_from: pulpapps/QuickGen-Photo ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/KAIST-Geometric-AI-Lab/syncdiffusion-demo/syncdiffusion/syncdiffusion_model.py b/spaces/KAIST-Geometric-AI-Lab/syncdiffusion-demo/syncdiffusion/syncdiffusion_model.py deleted file mode 100644 index 79dff962674eee942a447f8a4a9a3d76b7fc40f6..0000000000000000000000000000000000000000 --- a/spaces/KAIST-Geometric-AI-Lab/syncdiffusion-demo/syncdiffusion/syncdiffusion_model.py +++ /dev/null @@ -1,232 +0,0 @@ -import torch -import torch.nn as nn -import torchvision.transforms as T -from torch.autograd import grad -import argparse -from tqdm import tqdm - -from syncdiffusion.utils import * -import lpips -from transformers import CLIPTextModel, CLIPTokenizer -from diffusers import AutoencoderKL, UNet2DConditionModel, DDIMScheduler - -class SyncDiffusion(nn.Module): - def __init__(self, device='cuda', sd_version='2.0', hf_key=None): - super().__init__() - - self.device = device - self.sd_version = sd_version - - print(f'[INFO] loading stable diffusion...') - if hf_key is not None: - print(f'[INFO] using hugging face custom model key: {hf_key}') - model_key = hf_key - elif self.sd_version == '2.1': - model_key = "stabilityai/stable-diffusion-2-1-base" - elif self.sd_version == '2.0': - model_key = "stabilityai/stable-diffusion-2-base" - elif self.sd_version == '1.5': - model_key = "runwayml/stable-diffusion-v1-5" - else: - raise ValueError(f'Stable-diffusion version {self.sd_version} not supported.') - - # Load pretrained models from HuggingFace - self.vae = AutoencoderKL.from_pretrained(model_key, subfolder="vae").to(self.device) 
- self.tokenizer = CLIPTokenizer.from_pretrained(model_key, subfolder="tokenizer") - self.text_encoder = CLIPTextModel.from_pretrained(model_key, subfolder="text_encoder").to(self.device) - self.unet = UNet2DConditionModel.from_pretrained(model_key, subfolder="unet").to(self.device) - - # Freeze models - for p in self.unet.parameters(): - p.requires_grad_(False) - for p in self.vae.parameters(): - p.requires_grad_(False) - for p in self.text_encoder.parameters(): - p.requires_grad_(False) - - self.unet.eval() - self.vae.eval() - self.text_encoder.eval() - print(f'[INFO] loaded stable diffusion!') - - # Set DDIM scheduler - self.scheduler = DDIMScheduler.from_pretrained(model_key, subfolder="scheduler") - - # load perceptual loss (LPIPS) - self.percept_loss = lpips.LPIPS(net='vgg').to(self.device) - print(f'[INFO] loaded perceptual loss!') - - def get_text_embeds(self, prompt, negative_prompt): - # Tokenize text and get embeddings - text_input = self.tokenizer(prompt, padding='max_length', max_length=self.tokenizer.model_max_length, - truncation=True, return_tensors='pt') - text_embeddings = self.text_encoder(text_input.input_ids.to(self.device))[0] - - # Repeat for unconditional embeddings - uncond_input = self.tokenizer(negative_prompt, padding='max_length', max_length=self.tokenizer.model_max_length, - return_tensors='pt') - uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0] - - # Concatenate for final embeddings - text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) - return text_embeddings - - def decode_latents(self, latents): - latents = 1 / 0.18215 * latents - imgs = self.vae.decode(latents).sample - imgs = (imgs / 2 + 0.5).clamp(0, 1) - return imgs - - def sample_syncdiffusion( - self, - prompts, - negative_prompts="", - height=512, - width=2048, - latent_size=64, # fix latent size to 64 for Stable Diffusion - num_inference_steps=50, - guidance_scale=7.5, - sync_weight=20, # gradient descent weight 'w' in the paper - sync_freq=1, # sync_freq=n: perform gradient descent every n steps - sync_thres=50, # sync_thres=n: compute SyncDiffusion only for the first n steps - sync_decay_rate=0.95, # decay rate for sync_weight, set as 0.95 in the paper - stride=16, # stride for latents, set as 16 in the paper - ): - assert height >= 512 and width >= 512, 'height and width must be at least 512' - assert height % (stride * 8) == 0 and width % (stride * 8) == 0, 'height and width must be divisible by the stride multiplied by 8' - assert stride % 8 == 0 and stride < 64, 'stride must be divisible by 8 and smaller than the latent size of Stable Diffusion' - - if isinstance(prompts, str): - prompts = [prompts] - - if isinstance(negative_prompts, str): - negative_prompts = [negative_prompts] - - # obtain text embeddings - text_embeds = self.get_text_embeds(prompts, negative_prompts) # [2, 77, 768] - - # define a list of windows to process in parallel - views = get_views(height, width, stride=stride) - print(f"[INFO] number of views to process: {len(views)}") - - # Initialize latent - latent = torch.randn((1, self.unet.in_channels, height // 8, width // 8)) - - count = torch.zeros_like(latent, requires_grad=False, device=self.device) - value = torch.zeros_like(latent, requires_grad=False, device=self.device) - latent = latent.to(self.device) - - # set DDIM scheduler - self.scheduler.set_timesteps(num_inference_steps) - - # set the anchor view as the middle view - anchor_view_idx = len(views) // 2 - - # set SyncDiffusion scheduler - sync_scheduler = 
exponential_decay_list( - init_weight=sync_weight, - decay_rate=sync_decay_rate, - num_steps=num_inference_steps - ) - print(f'[INFO] using exponential decay scheduler with decay rate {sync_decay_rate}') - - with torch.autocast('cuda'): - for i, t in enumerate(tqdm(self.scheduler.timesteps)): - count.zero_() - value.zero_() - - ''' - (1) First, obtain the reference anchor view (for computing the perceptual loss) - ''' - with torch.no_grad(): - if (i + 1) % sync_freq == 0 and i < sync_thres: - # decode the anchor view - h_start, h_end, w_start, w_end = views[anchor_view_idx] - latent_view = latent[:, :, h_start:h_end, w_start:w_end].detach() - - latent_model_input = torch.cat([latent_view] * 2) # 2 x 4 x 64 x 64 - noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeds)['sample'] - - # perform guidance - noise_pred_uncond, noise_pred_cond = noise_pred.chunk(2) - noise_pred_new = noise_pred_uncond + guidance_scale * (noise_pred_cond - noise_pred_uncond) - - # predict the 'foreseen denoised' latent (x0) of the anchor view - latent_pred_x0 = self.scheduler.step(noise_pred_new, t, latent_view)["pred_original_sample"] - decoded_image_anchor = self.decode_latents(latent_pred_x0) # 1 x 3 x 512 x 512 - - ''' - (2) Then perform SyncDiffusion and run a single denoising step - ''' - for view_idx, (h_start, h_end, w_start, w_end) in enumerate(views): - latent_view = latent[:, :, h_start:h_end, w_start:w_end].detach() - - ############################## BEGIN: PERFORM GRADIENT DESCENT (SyncDiffusion) ############################## - latent_view_copy = latent_view.clone().detach() - - #### TODO: TEST #### - # if i % sync_freq == 0 and i < sync_thres: - if (i + 1) % sync_freq == 0 and i < sync_thres: - - # gradient on latent_view - latent_view = latent_view.requires_grad_() - - # expand the latents for classifier-free guidance - latent_model_input = torch.cat([latent_view] * 2) - - # predict the noise residual - noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeds)['sample'] - - # perform guidance - noise_pred_uncond, noise_pred_cond = noise_pred.chunk(2) - noise_pred_new = noise_pred_uncond + guidance_scale * (noise_pred_cond - noise_pred_uncond) - - # compute the denoising step with the reference model - out = self.scheduler.step(noise_pred_new, t, latent_view) - - # predict the 'foreseen denoised' latent (x0) - latent_view_x0 = out['pred_original_sample'] - - # decode the denoised latent - decoded_x0 = self.decode_latents(latent_view_x0) # 1 x 3 x 512 x 512 - - # compute the perceptual loss (LPIPS) - percept_loss = self.percept_loss( - decoded_x0 * 2.0 - 1.0, - decoded_image_anchor * 2.0 - 1.0 - ) - - # compute the gradient of the perceptual loss w.r.t. 
the latent - norm_grad = grad(outputs=percept_loss, inputs=latent_view)[0] - - # SyncDiffusion: update the original latent - if view_idx != anchor_view_idx: - latent_view_copy = latent_view_copy - sync_scheduler[i] * norm_grad # 1 x 4 x 64 x 64 - ############################## END: PERFORM GRADIENT DESCENT (SyncDiffusion) ############################## - - # after gradient descent, perform a single denoising step - with torch.no_grad(): - latent_model_input = torch.cat([latent_view_copy] * 2) - noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeds)['sample'] - - noise_pred_uncond, noise_pred_cond = noise_pred.chunk(2) - noise_pred_new = noise_pred_uncond + guidance_scale * (noise_pred_cond - noise_pred_uncond) - - out = self.scheduler.step(noise_pred_new, t, latent_view_copy) - latent_view_denoised = out['prev_sample'] - - # merge the latent views - value[:, :, h_start:h_end, w_start:w_end] += latent_view_denoised - count[:, :, h_start:h_end, w_start:w_end] += 1 - - # take the MultiDiffusion step (average the latents) - latent = torch.where(count > 0, value / count, value) - - # decode latents to panorama image - with torch.no_grad(): - imgs = self.decode_latents(latent) # [1, 3, 512, 512] - img = T.ToPILImage()(imgs[0].cpu()) - - print(f"[INFO] Done!") - - return img diff --git a/spaces/Kangarroar/ApplioRVC-Inference/lib/uvr5_pack/lib_v5/layers_537238KB.py b/spaces/Kangarroar/ApplioRVC-Inference/lib/uvr5_pack/lib_v5/layers_537238KB.py deleted file mode 100644 index a38b7bb3ae3136b07eadfc2db445fef4c2de186b..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/lib/uvr5_pack/lib_v5/layers_537238KB.py +++ /dev/null @@ -1,126 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from . 
import spec_utils - - -class Conv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(Conv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nout, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - bias=False, - ), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class SeperableConv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(SeperableConv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nin, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - groups=nin, - bias=False, - ), - nn.Conv2d(nin, nout, kernel_size=1, bias=False), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class Encoder(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU): - super(Encoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ) - - def __call__(self, x): - skip = self.conv1(x) - h = self.conv2(skip) - - return h, skip - - -class Decoder(nn.Module): - def __init__( - self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False - ): - super(Decoder, self).__init__() - self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def __call__(self, x, skip=None): - x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True) - if skip is not None: - skip = spec_utils.crop_center(skip, x) - x = torch.cat([x, skip], dim=1) - h = self.conv(x) - - if self.dropout is not None: - h = self.dropout(h) - - return h - - -class ASPPModule(nn.Module): - def __init__(self, nin, nout, dilations=(4, 8, 16, 32, 64), activ=nn.ReLU): - super(ASPPModule, self).__init__() - self.conv1 = nn.Sequential( - nn.AdaptiveAvgPool2d((1, None)), - Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ), - ) - self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ) - self.conv3 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[0], dilations[0], activ=activ - ) - self.conv4 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[1], dilations[1], activ=activ - ) - self.conv5 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.conv6 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.conv7 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.bottleneck = nn.Sequential( - Conv2DBNActiv(nin * 7, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1) - ) - - def forward(self, x): - _, _, h, w = x.size() - feat1 = F.interpolate( - self.conv1(x), size=(h, w), mode="bilinear", align_corners=True - ) - feat2 = self.conv2(x) - feat3 = self.conv3(x) - feat4 = self.conv4(x) - feat5 = self.conv5(x) - feat6 = self.conv6(x) - feat7 = self.conv7(x) - out = torch.cat((feat1, feat2, feat3, feat4, feat5, feat6, feat7), dim=1) - bottle = self.bottleneck(out) - return bottle diff --git a/spaces/Karumoon/test007/app.py b/spaces/Karumoon/test007/app.py deleted file mode 100644 index 04b63907d6d8ee00b1ebc71972d9867d6d781405..0000000000000000000000000000000000000000 --- a/spaces/Karumoon/test007/app.py +++ /dev/null @@ -1,98 +0,0 @@ -import gradio as gr - -import os -import sys -from pathlib import Path 
-import time -import random -from PIL import Image - -from diffusers import DiffusionPipeline - -#repo_id = "Karumoon/test00a1" -repo_id = "runwayml/stable-diffusion-v1-5" -pipe = DiffusionPipeline.from_pretrained(repo_id) -print(pipe) - -m_pdir="/content/drive/MyDrive/aipic001/" - -models =[ - "", - "CompVis/stable-diffusion-v1-4", - "runwayml/stable-diffusion-v1-5", - "prompthero/openjourney", -#4 - "stabilityai/stable-diffusion-2-1", - "stabilityai/stable-diffusion-2-1-base", - "andite/anything-v4.0", - - "Linaqruf/anything-v3.0", - "eimiss/EimisAnimeDiffusion_1.0v", - "nitrosocke/Nitro-Diffusion", -#10 - "wavymulder/portraitplus", - "22h/vintedois-diffusion-v0-1", - "dreamlike-art/dreamlike-photoreal-2.0", -#11 - "dreamlike-art/dreamlike-diffusion-1.0", - "wavymulder/Analog-Diffusion", - "nitrosocke/redshift-diffusion", - "claudfuen/photorealistic-fuen-v1", - "prompthero/openjourney-v2", - "johnslegers/epic-diffusion", - "nitrosocke/Arcane-Diffusion", - "darkstorm2150/Protogen_x5.8_Official_Release", - -] - -model_1=models[1] -model_2=models[2] -model_3=models[3] -model_4=models[4] -model_5=models[5] -model_6=models[6] -model_7=models[7] -model_8=models[8] -model_9=models[9] -model_10=models[10] -model_11=models[11] -model_12=models[12] -model_13=models[13] -model_14=models[14] -model_15=models[15] -model_16=models[16] -model_17=models[17] -model_18=models[18] -model_19=models[19] -model_20=models[20] - - -text_gen=gr.Interface.load("spaces/Omnibus/MagicPrompt-Stable-Diffusion_link",live=True, preprocess=True) - -proc1=gr.Interface.load(f"models/{model_1}",live=False,preprocess=True, postprocess=False) -proc2=gr.Interface.load(f"models/{model_2}",live=False,preprocess=True, postprocess=False) -proc3=gr.Interface.load(f"models/{model_3}",live=False,preprocess=True, postprocess=False) -""" -proc4=gr.Interface.load(f"models/{model_4}",live=False,preprocess=True, postprocess=False) -proc5=gr.Interface.load(f"models/{model_5}",live=False,preprocess=True, postprocess=False) -proc6=gr.Interface.load(f"models/{model_6}",live=False,preprocess=True, postprocess=False) -proc7=gr.Interface.load(f"models/{model_7}",live=False,preprocess=True, postprocess=False) -proc8=gr.Interface.load(f"models/{model_8}",live=False,preprocess=True, postprocess=False) -proc9=gr.Interface.load(f"models/{model_9}",live=False,preprocess=True, postprocess=False) -proc10=gr.Interface.load(f"models/{model_10}",live=False,preprocess=True, postprocess=False) -proc11=gr.Interface.load(f"models/{model_11}",live=False,preprocess=True, postprocess=False) -proc12=gr.Interface.load(f"models/{model_12}",live=False,preprocess=True, postprocess=False) -proc13=gr.Interface.load(f"models/{model_13}",live=False,preprocess=True, postprocess=False) -proc14=gr.Interface.load(f"models/{model_14}",live=False,preprocess=True, postprocess=False) -proc15=gr.Interface.load(f"models/{model_15}",live=False,preprocess=True, postprocess=False) -proc16=gr.Interface.load(f"models/{model_16}",live=False,preprocess=True, postprocess=False) -proc17=gr.Interface.load(f"models/{model_17}",live=False,preprocess=True, postprocess=False) -proc18=gr.Interface.load(f"models/{model_18}",live=False,preprocess=True, postprocess=False) -proc19=gr.Interface.load(f"models/{model_19}",live=False,preprocess=True, postprocess=False) -proc20=gr.Interface.load(f"models/{model_20}",live=False,preprocess=True, postprocess=False) -""" -#https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading - - -proc1.launch() -#gr.Parallel(proc1, proc2, proc3).launch() \ No 
newline at end of file diff --git a/spaces/KdaiP/yolov8-deepsort-tracking/deep_sort/deep_sort/deep/train.py b/spaces/KdaiP/yolov8-deepsort-tracking/deep_sort/deep_sort/deep/train.py deleted file mode 100644 index c95b55d7dce1f2f12a6c315bec9101faaeb45d6b..0000000000000000000000000000000000000000 --- a/spaces/KdaiP/yolov8-deepsort-tracking/deep_sort/deep_sort/deep/train.py +++ /dev/null @@ -1,192 +0,0 @@ -import argparse -import os -import time - -import numpy as np -import matplotlib.pyplot as plt -import torch -import torch.backends.cudnn as cudnn -import torchvision - -from model import Net - -parser = argparse.ArgumentParser(description="Train on market1501") -parser.add_argument("--data-dir",default='data',type=str) -parser.add_argument("--no-cuda",action="store_true") -parser.add_argument("--gpu-id",default=0,type=int) -parser.add_argument("--lr",default=0.1, type=float) -parser.add_argument("--interval",'-i',default=20,type=int) -parser.add_argument('--resume', '-r',action='store_true') -args = parser.parse_args() - -# device -device = "cuda:{}".format(args.gpu_id) if torch.cuda.is_available() and not args.no_cuda else "cpu" -if torch.cuda.is_available() and not args.no_cuda: - cudnn.benchmark = True - -# data loading -root = args.data_dir -train_dir = os.path.join(root,"train") -test_dir = os.path.join(root,"test") - -transform_train = torchvision.transforms.Compose([ - torchvision.transforms.RandomCrop((128,64),padding=4), - torchvision.transforms.RandomHorizontalFlip(), - torchvision.transforms.ToTensor(), - torchvision.transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) -]) -transform_test = torchvision.transforms.Compose([ - torchvision.transforms.Resize((128,64)), - torchvision.transforms.ToTensor(), - torchvision.transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) -]) -trainloader = torch.utils.data.DataLoader( - torchvision.datasets.ImageFolder(train_dir, transform=transform_train), - batch_size=64,shuffle=True -) -testloader = torch.utils.data.DataLoader( - torchvision.datasets.ImageFolder(test_dir, transform=transform_test), - batch_size=64,shuffle=True -) -num_classes = max(len(trainloader.dataset.classes), len(testloader.dataset.classes)) -print("num_classes = %s" %num_classes) - -# net definition -start_epoch = 0 -net = Net(num_classes=num_classes) -if args.resume: - assert os.path.isfile("./checkpoint/ckpt.t7"), "Error: no checkpoint file found!" - print('Loading from checkpoint/ckpt.t7') - checkpoint = torch.load("./checkpoint/ckpt.t7") - # import ipdb; ipdb.set_trace() - net_dict = checkpoint['net_dict'] - net.load_state_dict(net_dict) - best_acc = checkpoint['acc'] - start_epoch = checkpoint['epoch'] -net.to(device) - -# loss and optimizer -criterion = torch.nn.CrossEntropyLoss() -optimizer = torch.optim.SGD(net.parameters(), args.lr, momentum=0.9, weight_decay=5e-4) -best_acc = 0. - -# train function for each epoch -def train(epoch): - print("\nEpoch : %d"%(epoch+1)) - net.train() - training_loss = 0. - train_loss = 0. 
- correct = 0 - total = 0 - interval = args.interval - start = time.time() - for idx, (inputs, labels) in enumerate(trainloader): - # forward - inputs,labels = inputs.to(device),labels.to(device) - outputs = net(inputs) - loss = criterion(outputs, labels) - - # backward - optimizer.zero_grad() - loss.backward() - optimizer.step() - - # accumurating - training_loss += loss.item() - train_loss += loss.item() - correct += outputs.max(dim=1)[1].eq(labels).sum().item() - total += labels.size(0) - - # print - if (idx+1)%interval == 0: - end = time.time() - print("[progress:{:.1f}%]time:{:.2f}s Loss:{:.5f} Correct:{}/{} Acc:{:.3f}%".format( - 100.*(idx+1)/len(trainloader), end-start, training_loss/interval, correct, total, 100.*correct/total - )) - training_loss = 0. - start = time.time() - - return train_loss/len(trainloader), 1.- correct/total - -def test(epoch): - global best_acc - net.eval() - test_loss = 0. - correct = 0 - total = 0 - start = time.time() - with torch.no_grad(): - for idx, (inputs, labels) in enumerate(testloader): - inputs, labels = inputs.to(device), labels.to(device) - outputs = net(inputs) - loss = criterion(outputs, labels) - - test_loss += loss.item() - correct += outputs.max(dim=1)[1].eq(labels).sum().item() - total += labels.size(0) - - print("Testing ...") - end = time.time() - print("[progress:{:.1f}%]time:{:.2f}s Loss:{:.5f} Correct:{}/{} Acc:{:.3f}%".format( - 100.*(idx+1)/len(testloader), end-start, test_loss/len(testloader), correct, total, 100.*correct/total - )) - - # saving checkpoint - acc = 100.*correct/total - if acc > best_acc: - best_acc = acc - print("Saving parameters to checkpoint/ckpt.t7") - checkpoint = { - 'net_dict':net.state_dict(), - 'acc':acc, - 'epoch':epoch, - } - if not os.path.isdir('checkpoint'): - os.mkdir('checkpoint') - torch.save(checkpoint, './checkpoint/ckpt.t7') - - return test_loss/len(testloader), 1.- correct/total - -# plot figure -x_epoch = [] -record = {'train_loss':[], 'train_err':[], 'test_loss':[], 'test_err':[]} -fig = plt.figure() -ax0 = fig.add_subplot(121, title="loss") -ax1 = fig.add_subplot(122, title="top1err") -def draw_curve(epoch, train_loss, train_err, test_loss, test_err): - global record - record['train_loss'].append(train_loss) - record['train_err'].append(train_err) - record['test_loss'].append(test_loss) - record['test_err'].append(test_err) - - x_epoch.append(epoch) - ax0.plot(x_epoch, record['train_loss'], 'bo-', label='train') - ax0.plot(x_epoch, record['test_loss'], 'ro-', label='val') - ax1.plot(x_epoch, record['train_err'], 'bo-', label='train') - ax1.plot(x_epoch, record['test_err'], 'ro-', label='val') - if epoch == 0: - ax0.legend() - ax1.legend() - fig.savefig("train.jpg") - -# lr decay -def lr_decay(): - global optimizer - for params in optimizer.param_groups: - params['lr'] *= 0.1 - lr = params['lr'] - print("Learning rate adjusted to {}".format(lr)) - -def main(): - total_epoches = 40 - for epoch in range(start_epoch, start_epoch+total_epoches): - train_loss, train_err = train(epoch) - test_loss, test_err = test(epoch) - draw_curve(epoch, train_loss, train_err, test_loss, test_err) - if (epoch+1)%(total_epoches//2)==0: - lr_decay() - - -if __name__ == '__main__': - main() diff --git a/spaces/Kevin676/Real-Time-Voice-Cloning/vocoder/gen_wavernn.py b/spaces/Kevin676/Real-Time-Voice-Cloning/vocoder/gen_wavernn.py deleted file mode 100644 index 2036737f805f6055893812e48f99d524624aab07..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/Real-Time-Voice-Cloning/vocoder/gen_wavernn.py +++ 
/dev/null @@ -1,31 +0,0 @@ -from vocoder.models.fatchord_version import WaveRNN -from vocoder.audio import * - - -def gen_testset(model: WaveRNN, test_set, samples, batched, target, overlap, save_path): - k = model.get_step() // 1000 - - for i, (m, x) in enumerate(test_set, 1): - if i > samples: - break - - print('\n| Generating: %i/%i' % (i, samples)) - - x = x[0].numpy() - - bits = 16 if hp.voc_mode == 'MOL' else hp.bits - - if hp.mu_law and hp.voc_mode != 'MOL' : - x = decode_mu_law(x, 2**bits, from_labels=True) - else : - x = label_2_float(x, bits) - - save_wav(x, save_path.joinpath("%dk_steps_%d_target.wav" % (k, i))) - - batch_str = "gen_batched_target%d_overlap%d" % (target, overlap) if batched else \ - "gen_not_batched" - save_str = save_path.joinpath("%dk_steps_%d_%s.wav" % (k, i, batch_str)) - - wav = model.generate(m, batched, target, overlap, hp.mu_law) - save_wav(wav, save_str) - diff --git a/spaces/KyanChen/RSPrompter/mmdet/engine/optimizers/layer_decay_optimizer_constructor.py b/spaces/KyanChen/RSPrompter/mmdet/engine/optimizers/layer_decay_optimizer_constructor.py deleted file mode 100644 index 73028a0aef698d63dcba8c4935d6ef6c577d0f46..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/engine/optimizers/layer_decay_optimizer_constructor.py +++ /dev/null @@ -1,158 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import json -from typing import List - -import torch.nn as nn -from mmengine.dist import get_dist_info -from mmengine.logging import MMLogger -from mmengine.optim import DefaultOptimWrapperConstructor - -from mmdet.registry import OPTIM_WRAPPER_CONSTRUCTORS - - -def get_layer_id_for_convnext(var_name, max_layer_id): - """Get the layer id to set the different learning rates in ``layer_wise`` - decay_type. - - Args: - var_name (str): The key of the model. - max_layer_id (int): Maximum layer id. - - Returns: - int: The id number corresponding to different learning rate in - ``LearningRateDecayOptimizerConstructor``. - """ - - if var_name in ('backbone.cls_token', 'backbone.mask_token', - 'backbone.pos_embed'): - return 0 - elif var_name.startswith('backbone.downsample_layers'): - stage_id = int(var_name.split('.')[2]) - if stage_id == 0: - layer_id = 0 - elif stage_id == 1: - layer_id = 2 - elif stage_id == 2: - layer_id = 3 - elif stage_id == 3: - layer_id = max_layer_id - return layer_id - elif var_name.startswith('backbone.stages'): - stage_id = int(var_name.split('.')[2]) - block_id = int(var_name.split('.')[3]) - if stage_id == 0: - layer_id = 1 - elif stage_id == 1: - layer_id = 2 - elif stage_id == 2: - layer_id = 3 + block_id // 3 - elif stage_id == 3: - layer_id = max_layer_id - return layer_id - else: - return max_layer_id + 1 - - -def get_stage_id_for_convnext(var_name, max_stage_id): - """Get the stage id to set the different learning rates in ``stage_wise`` - decay_type. - - Args: - var_name (str): The key of the model. - max_stage_id (int): Maximum stage id. - - Returns: - int: The id number corresponding to different learning rate in - ``LearningRateDecayOptimizerConstructor``. 
- """ - - if var_name in ('backbone.cls_token', 'backbone.mask_token', - 'backbone.pos_embed'): - return 0 - elif var_name.startswith('backbone.downsample_layers'): - return 0 - elif var_name.startswith('backbone.stages'): - stage_id = int(var_name.split('.')[2]) - return stage_id + 1 - else: - return max_stage_id - 1 - - -@OPTIM_WRAPPER_CONSTRUCTORS.register_module() -class LearningRateDecayOptimizerConstructor(DefaultOptimWrapperConstructor): - # Different learning rates are set for different layers of backbone. - # Note: Currently, this optimizer constructor is built for ConvNeXt. - - def add_params(self, params: List[dict], module: nn.Module, - **kwargs) -> None: - """Add all parameters of module to the params list. - - The parameters of the given module will be added to the list of param - groups, with specific rules defined by paramwise_cfg. - - Args: - params (list[dict]): A list of param groups, it will be modified - in place. - module (nn.Module): The module to be added. - """ - logger = MMLogger.get_current_instance() - - parameter_groups = {} - logger.info(f'self.paramwise_cfg is {self.paramwise_cfg}') - num_layers = self.paramwise_cfg.get('num_layers') + 2 - decay_rate = self.paramwise_cfg.get('decay_rate') - decay_type = self.paramwise_cfg.get('decay_type', 'layer_wise') - logger.info('Build LearningRateDecayOptimizerConstructor ' - f'{decay_type} {decay_rate} - {num_layers}') - weight_decay = self.base_wd - for name, param in module.named_parameters(): - if not param.requires_grad: - continue # frozen weights - if len(param.shape) == 1 or name.endswith('.bias') or name in ( - 'pos_embed', 'cls_token'): - group_name = 'no_decay' - this_weight_decay = 0. - else: - group_name = 'decay' - this_weight_decay = weight_decay - if 'layer_wise' in decay_type: - if 'ConvNeXt' in module.backbone.__class__.__name__: - layer_id = get_layer_id_for_convnext( - name, self.paramwise_cfg.get('num_layers')) - logger.info(f'set param {name} as id {layer_id}') - else: - raise NotImplementedError() - elif decay_type == 'stage_wise': - if 'ConvNeXt' in module.backbone.__class__.__name__: - layer_id = get_stage_id_for_convnext(name, num_layers) - logger.info(f'set param {name} as id {layer_id}') - else: - raise NotImplementedError() - group_name = f'layer_{layer_id}_{group_name}' - - if group_name not in parameter_groups: - scale = decay_rate**(num_layers - layer_id - 1) - - parameter_groups[group_name] = { - 'weight_decay': this_weight_decay, - 'params': [], - 'param_names': [], - 'lr_scale': scale, - 'group_name': group_name, - 'lr': scale * self.base_lr, - } - - parameter_groups[group_name]['params'].append(param) - parameter_groups[group_name]['param_names'].append(name) - rank, _ = get_dist_info() - if rank == 0: - to_display = {} - for key in parameter_groups: - to_display[key] = { - 'param_names': parameter_groups[key]['param_names'], - 'lr_scale': parameter_groups[key]['lr_scale'], - 'lr': parameter_groups[key]['lr'], - 'weight_decay': parameter_groups[key]['weight_decay'], - } - logger.info(f'Param groups = {json.dumps(to_display, indent=2)}') - params.extend(parameter_groups.values()) diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/bbox_heads/convfc_bbox_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/bbox_heads/convfc_bbox_head.py deleted file mode 100644 index cb6aadd86d34af3605d432492931442026432cc8..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/bbox_heads/convfc_bbox_head.py +++ /dev/null @@ -1,249 +0,0 @@ -# 
Copyright (c) OpenMMLab. All rights reserved. -from typing import Optional, Tuple, Union - -import torch.nn as nn -from mmcv.cnn import ConvModule -from mmengine.config import ConfigDict -from torch import Tensor - -from mmdet.registry import MODELS -from .bbox_head import BBoxHead - - -@MODELS.register_module() -class ConvFCBBoxHead(BBoxHead): - r"""More general bbox head, with shared conv and fc layers and two optional - separated branches. - - .. code-block:: none - - /-> cls convs -> cls fcs -> cls - shared convs -> shared fcs - \-> reg convs -> reg fcs -> reg - """ # noqa: W605 - - def __init__(self, - num_shared_convs: int = 0, - num_shared_fcs: int = 0, - num_cls_convs: int = 0, - num_cls_fcs: int = 0, - num_reg_convs: int = 0, - num_reg_fcs: int = 0, - conv_out_channels: int = 256, - fc_out_channels: int = 1024, - conv_cfg: Optional[Union[dict, ConfigDict]] = None, - norm_cfg: Optional[Union[dict, ConfigDict]] = None, - init_cfg: Optional[Union[dict, ConfigDict]] = None, - *args, - **kwargs) -> None: - super().__init__(*args, init_cfg=init_cfg, **kwargs) - assert (num_shared_convs + num_shared_fcs + num_cls_convs + - num_cls_fcs + num_reg_convs + num_reg_fcs > 0) - if num_cls_convs > 0 or num_reg_convs > 0: - assert num_shared_fcs == 0 - if not self.with_cls: - assert num_cls_convs == 0 and num_cls_fcs == 0 - if not self.with_reg: - assert num_reg_convs == 0 and num_reg_fcs == 0 - self.num_shared_convs = num_shared_convs - self.num_shared_fcs = num_shared_fcs - self.num_cls_convs = num_cls_convs - self.num_cls_fcs = num_cls_fcs - self.num_reg_convs = num_reg_convs - self.num_reg_fcs = num_reg_fcs - self.conv_out_channels = conv_out_channels - self.fc_out_channels = fc_out_channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - - # add shared convs and fcs - self.shared_convs, self.shared_fcs, last_layer_dim = \ - self._add_conv_fc_branch( - self.num_shared_convs, self.num_shared_fcs, self.in_channels, - True) - self.shared_out_channels = last_layer_dim - - # add cls specific branch - self.cls_convs, self.cls_fcs, self.cls_last_dim = \ - self._add_conv_fc_branch( - self.num_cls_convs, self.num_cls_fcs, self.shared_out_channels) - - # add reg specific branch - self.reg_convs, self.reg_fcs, self.reg_last_dim = \ - self._add_conv_fc_branch( - self.num_reg_convs, self.num_reg_fcs, self.shared_out_channels) - - if self.num_shared_fcs == 0 and not self.with_avg_pool: - if self.num_cls_fcs == 0: - self.cls_last_dim *= self.roi_feat_area - if self.num_reg_fcs == 0: - self.reg_last_dim *= self.roi_feat_area - - self.relu = nn.ReLU(inplace=True) - # reconstruct fc_cls and fc_reg since input channels are changed - if self.with_cls: - if self.custom_cls_channels: - cls_channels = self.loss_cls.get_cls_channels(self.num_classes) - else: - cls_channels = self.num_classes + 1 - cls_predictor_cfg_ = self.cls_predictor_cfg.copy() - cls_predictor_cfg_.update( - in_features=self.cls_last_dim, out_features=cls_channels) - self.fc_cls = MODELS.build(cls_predictor_cfg_) - if self.with_reg: - box_dim = self.bbox_coder.encode_size - out_dim_reg = box_dim if self.reg_class_agnostic else \ - box_dim * self.num_classes - reg_predictor_cfg_ = self.reg_predictor_cfg.copy() - if isinstance(reg_predictor_cfg_, (dict, ConfigDict)): - reg_predictor_cfg_.update( - in_features=self.reg_last_dim, out_features=out_dim_reg) - self.fc_reg = MODELS.build(reg_predictor_cfg_) - - if init_cfg is None: - # when init_cfg is None, - # It has been set to - # [[dict(type='Normal', std=0.01, 
override=dict(name='fc_cls'))], - # [dict(type='Normal', std=0.001, override=dict(name='fc_reg'))] - # after `super(ConvFCBBoxHead, self).__init__()` - # we only need to append additional configuration - # for `shared_fcs`, `cls_fcs` and `reg_fcs` - self.init_cfg += [ - dict( - type='Xavier', - distribution='uniform', - override=[ - dict(name='shared_fcs'), - dict(name='cls_fcs'), - dict(name='reg_fcs') - ]) - ] - - def _add_conv_fc_branch(self, - num_branch_convs: int, - num_branch_fcs: int, - in_channels: int, - is_shared: bool = False) -> tuple: - """Add shared or separable branch. - - convs -> avg pool (optional) -> fcs - """ - last_layer_dim = in_channels - # add branch specific conv layers - branch_convs = nn.ModuleList() - if num_branch_convs > 0: - for i in range(num_branch_convs): - conv_in_channels = ( - last_layer_dim if i == 0 else self.conv_out_channels) - branch_convs.append( - ConvModule( - conv_in_channels, - self.conv_out_channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - last_layer_dim = self.conv_out_channels - # add branch specific fc layers - branch_fcs = nn.ModuleList() - if num_branch_fcs > 0: - # for shared branch, only consider self.with_avg_pool - # for separated branches, also consider self.num_shared_fcs - if (is_shared - or self.num_shared_fcs == 0) and not self.with_avg_pool: - last_layer_dim *= self.roi_feat_area - for i in range(num_branch_fcs): - fc_in_channels = ( - last_layer_dim if i == 0 else self.fc_out_channels) - branch_fcs.append( - nn.Linear(fc_in_channels, self.fc_out_channels)) - last_layer_dim = self.fc_out_channels - return branch_convs, branch_fcs, last_layer_dim - - def forward(self, x: Tuple[Tensor]) -> tuple: - """Forward features from the upstream network. - - Args: - x (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: A tuple of classification scores and bbox prediction. - - - cls_score (Tensor): Classification scores for all \ - scale levels, each is a 4D-tensor, the channels number \ - is num_base_priors * num_classes. - - bbox_pred (Tensor): Box energies / deltas for all \ - scale levels, each is a 4D-tensor, the channels number \ - is num_base_priors * 4. 
- """ - # shared part - if self.num_shared_convs > 0: - for conv in self.shared_convs: - x = conv(x) - - if self.num_shared_fcs > 0: - if self.with_avg_pool: - x = self.avg_pool(x) - - x = x.flatten(1) - - for fc in self.shared_fcs: - x = self.relu(fc(x)) - # separate branches - x_cls = x - x_reg = x - - for conv in self.cls_convs: - x_cls = conv(x_cls) - if x_cls.dim() > 2: - if self.with_avg_pool: - x_cls = self.avg_pool(x_cls) - x_cls = x_cls.flatten(1) - for fc in self.cls_fcs: - x_cls = self.relu(fc(x_cls)) - - for conv in self.reg_convs: - x_reg = conv(x_reg) - if x_reg.dim() > 2: - if self.with_avg_pool: - x_reg = self.avg_pool(x_reg) - x_reg = x_reg.flatten(1) - for fc in self.reg_fcs: - x_reg = self.relu(fc(x_reg)) - - cls_score = self.fc_cls(x_cls) if self.with_cls else None - bbox_pred = self.fc_reg(x_reg) if self.with_reg else None - return cls_score, bbox_pred - - -@MODELS.register_module() -class Shared2FCBBoxHead(ConvFCBBoxHead): - - def __init__(self, fc_out_channels: int = 1024, *args, **kwargs) -> None: - super().__init__( - num_shared_convs=0, - num_shared_fcs=2, - num_cls_convs=0, - num_cls_fcs=0, - num_reg_convs=0, - num_reg_fcs=0, - fc_out_channels=fc_out_channels, - *args, - **kwargs) - - -@MODELS.register_module() -class Shared4Conv1FCBBoxHead(ConvFCBBoxHead): - - def __init__(self, fc_out_channels: int = 1024, *args, **kwargs) -> None: - super().__init__( - num_shared_convs=4, - num_shared_fcs=1, - num_cls_convs=0, - num_cls_fcs=0, - num_reg_convs=0, - num_reg_fcs=0, - fc_out_channels=fc_out_channels, - *args, - **kwargs) diff --git a/spaces/LaynzKunz/Advanced-RVC-Inference/lib/infer_pack/commons.py b/spaces/LaynzKunz/Advanced-RVC-Inference/lib/infer_pack/commons.py deleted file mode 100644 index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Advanced-RVC-Inference/lib/infer_pack/commons.py +++ /dev/null @@ -1,166 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - 
ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/Lbin123/Lbingo/src/lib/storage.ts b/spaces/Lbin123/Lbingo/src/lib/storage.ts deleted file mode 100644 index a5b7825c4f76a28c704da512ae39e8bb45addd09..0000000000000000000000000000000000000000 --- 
a/spaces/Lbin123/Lbingo/src/lib/storage.ts +++ /dev/null @@ -1,27 +0,0 @@ -import { getMany, set, del, clear } from 'idb-keyval'; - -export const Storage = { - async get(key: string | string[] | null): Promise { - if (key === null) return null; - if (typeof key === 'string') { - key = [key] - } - const returnData: Record = {} - const values = await getMany(key) - key.forEach((k, idx)=> { - returnData[k] = values[idx] - }) - return returnData; - }, - async set(object: any) { - for (let key of Object.keys(object)) { - await set(key, object[key]) - } - }, - async remove(key: string) { - return del(key); - }, - async clear() { - return clear(); - } -} diff --git a/spaces/Lianjd/stock_dashboard/backtrader/stores/ibstore.py b/spaces/Lianjd/stock_dashboard/backtrader/stores/ibstore.py deleted file mode 100644 index c261493eac61c82aceff29647f746374149625fa..0000000000000000000000000000000000000000 --- a/spaces/Lianjd/stock_dashboard/backtrader/stores/ibstore.py +++ /dev/null @@ -1,1512 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8; py-indent-offset:4 -*- -############################################################################### -# -# Copyright (C) 2015-2020 Daniel Rodriguez -# -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . 
-# -############################################################################### -from __future__ import (absolute_import, division, print_function, - unicode_literals) - -import collections -from copy import copy -from datetime import date, datetime, timedelta -import inspect -import itertools -import random -import threading -import time - -from ib.ext.Contract import Contract -import ib.opt as ibopt - -from backtrader import TimeFrame, Position -from backtrader.metabase import MetaParams -from backtrader.utils.py3 import bytes, bstr, queue, with_metaclass, long -from backtrader.utils import AutoDict, UTC - -bytes = bstr # py2/3 need for ibpy - - -def _ts2dt(tstamp=None): - # Transforms a RTVolume timestamp to a datetime object - if not tstamp: - return datetime.utcnow() - - sec, msec = divmod(long(tstamp), 1000) - usec = msec * 1000 - return datetime.utcfromtimestamp(sec).replace(microsecond=usec) - - -class RTVolume(object): - '''Parses a tickString tickType 48 (RTVolume) event from the IB API into its - constituent fields - - Supports using a "price" to simulate an RTVolume from a tickPrice event - ''' - _fields = [ - ('price', float), - ('size', int), - ('datetime', _ts2dt), - ('volume', int), - ('vwap', float), - ('single', bool) - ] - - def __init__(self, rtvol='', price=None, tmoffset=None): - # Use a provided string or simulate a list of empty tokens - tokens = iter(rtvol.split(';')) - - # Put the tokens as attributes using the corresponding func - for name, func in self._fields: - setattr(self, name, func(next(tokens)) if rtvol else func()) - - # If price was provided use it - if price is not None: - self.price = price - - if tmoffset is not None: - self.datetime += tmoffset - - -class MetaSingleton(MetaParams): - '''Metaclass to make a metaclassed class a singleton''' - def __init__(cls, name, bases, dct): - super(MetaSingleton, cls).__init__(name, bases, dct) - cls._singleton = None - - def __call__(cls, *args, **kwargs): - if cls._singleton is None: - cls._singleton = ( - super(MetaSingleton, cls).__call__(*args, **kwargs)) - - return cls._singleton - - -# Decorator to mark methods to register with ib.opt -def ibregister(f): - f._ibregister = True - return f - - -class IBStore(with_metaclass(MetaSingleton, object)): - '''Singleton class wrapping an ibpy ibConnection instance. - - The parameters can also be specified in the classes which use this store, - like ``IBData`` and ``IBBroker`` - - Params: - - - ``host`` (default:``127.0.0.1``): where IB TWS or IB Gateway are - actually running. And although this will usually be the localhost, it - must not be - - - ``port`` (default: ``7496``): port to connect to. The demo system uses - ``7497`` - - - ``clientId`` (default: ``None``): which clientId to use to connect to - TWS. - - ``None``: generates a random id between 1 and 65535 - An ``integer``: will be passed as the value to use. - - - ``notifyall`` (default: ``False``) - - If ``False`` only ``error`` messages will be sent to the - ``notify_store`` methods of ``Cerebro`` and ``Strategy``. 
- - If ``True``, each and every message received from TWS will be notified - - - ``_debug`` (default: ``False``) - - Print all messages received from TWS to standard output - - - ``reconnect`` (default: ``3``) - - Number of attempts to try to reconnect after the 1st connection attempt - fails - - Set it to a ``-1`` value to keep on reconnecting forever - - - ``timeout`` (default: ``3.0``) - - Time in seconds between reconnection attemps - - - ``timeoffset`` (default: ``True``) - - If True, the time obtained from ``reqCurrentTime`` (IB Server time) - will be used to calculate the offset to localtime and this offset will - be used for the price notifications (tickPrice events, for example for - CASH markets) to modify the locally calculated timestamp. - - The time offset will propagate to other parts of the ``backtrader`` - ecosystem like the **resampling** to align resampling timestamps using - the calculated offset. - - - ``timerefresh`` (default: ``60.0``) - - Time in seconds: how often the time offset has to be refreshed - - - ``indcash`` (default: ``True``) - - Manage IND codes as if they were cash for price retrieval - ''' - - # Set a base for the data requests (historical/realtime) to distinguish the - # id in the error notifications from orders, where the basis (usually - # starting at 1) is set by TWS - REQIDBASE = 0x01000000 - - BrokerCls = None # broker class will autoregister - DataCls = None # data class will auto register - - params = ( - ('host', '127.0.0.1'), - ('port', 7496), - ('clientId', None), # None generates a random clientid 1 -> 2^16 - ('notifyall', False), - ('_debug', False), - ('reconnect', 3), # -1 forever, 0 No, > 0 number of retries - ('timeout', 3.0), # timeout between reconnections - ('timeoffset', True), # Use offset to server for timestamps if needed - ('timerefresh', 60.0), # How often to refresh the timeoffset - ('indcash', True), # Treat IND codes as CASH elements - ) - - @classmethod - def getdata(cls, *args, **kwargs): - '''Returns ``DataCls`` with args, kwargs''' - return cls.DataCls(*args, **kwargs) - - @classmethod - def getbroker(cls, *args, **kwargs): - '''Returns broker with *args, **kwargs from registered ``BrokerCls``''' - return cls.BrokerCls(*args, **kwargs) - - def __init__(self): - super(IBStore, self).__init__() - - self._lock_q = threading.Lock() # sync access to _tickerId/Queues - self._lock_accupd = threading.Lock() # sync account updates - self._lock_pos = threading.Lock() # sync account updates - self._lock_notif = threading.Lock() # sync access to notif queue - - # Account list received - self._event_managed_accounts = threading.Event() - self._event_accdownload = threading.Event() - - self.dontreconnect = False # for non-recoverable connect errors - - self._env = None # reference to cerebro for general notifications - self.broker = None # broker instance - self.datas = list() # datas that have registered over start - self.ccount = 0 # requests to start (from cerebro or datas) - - self._lock_tmoffset = threading.Lock() - self.tmoffset = timedelta() # to control time difference with server - - # Structures to hold datas requests - self.qs = collections.OrderedDict() # key: tickerId -> queues - self.ts = collections.OrderedDict() # key: queue -> tickerId - self.iscash = dict() # tickerIds from cash products (for ex: EUR.JPY) - - self.histexreq = dict() # holds segmented historical requests - self.histfmt = dict() # holds datetimeformat for request - self.histsend = dict() # holds sessionend (data time) for request - self.histtz = 
dict() # holds sessionend (data time) for request - - self.acc_cash = AutoDict() # current total cash per account - self.acc_value = AutoDict() # current total value per account - self.acc_upds = AutoDict() # current account valueinfos per account - - self.port_update = False # indicate whether to signal to broker - - self.positions = collections.defaultdict(Position) # actual positions - - self._tickerId = itertools.count(self.REQIDBASE) # unique tickerIds - self.orderid = None # next possible orderid (will be itertools.count) - - self.cdetails = collections.defaultdict(list) # hold cdetails requests - - self.managed_accounts = list() # received via managedAccounts - - self.notifs = queue.Queue() # store notifications for cerebro - - # Use the provided clientId or a random one - if self.p.clientId is None: - self.clientId = random.randint(1, pow(2, 16) - 1) - else: - self.clientId = self.p.clientId - - # ibpy connection object - self.conn = ibopt.ibConnection( - host=self.p.host, port=self.p.port, clientId=self.clientId) - - # register a printall method if requested - if self.p._debug or self.p.notifyall: - self.conn.registerAll(self.watcher) - - # Register decorated methods with the conn - methods = inspect.getmembers(self, inspect.ismethod) - for name, method in methods: - if not getattr(method, '_ibregister', False): - continue - - message = getattr(ibopt.message, name) - self.conn.register(method, message) - - # This utility key function transforms a barsize into a: - # (Timeframe, Compression) tuple which can be sorted - def keyfn(x): - n, t = x.split() - tf, comp = self._sizes[t] - return (tf, int(n) * comp) - - # This utility key function transforms a duration into a: - # (Timeframe, Compression) tuple which can be sorted - def key2fn(x): - n, d = x.split() - tf = self._dur2tf[d] - return (tf, int(n)) - - # Generate a table of reverse durations - self.revdur = collections.defaultdict(list) - # The table (dict) is a ONE to MANY relation of - # duration -> barsizes - # Here it is reversed to get a ONE to MANY relation of - # barsize -> durations - for duration, barsizes in self._durations.items(): - for barsize in barsizes: - self.revdur[keyfn(barsize)].append(duration) - - # Once managed, sort the durations according to real duration and not - # to the text form using the utility key above - for barsize in self.revdur: - self.revdur[barsize].sort(key=key2fn) - - def start(self, data=None, broker=None): - self.reconnect(fromstart=True) # reconnect should be an invariant - - # Datas require some processing to kickstart data reception - if data is not None: - self._env = data._env - # For datas simulate a queue with None to kickstart co - self.datas.append(data) - - # if connection fails, get a fake registration that will force the - # datas to try to reconnect or else bail out - return self.getTickerQueue(start=True) - - elif broker is not None: - self.broker = broker - - def stop(self): - try: - self.conn.disconnect() # disconnect should be an invariant - except AttributeError: - pass # conn may have never been connected and lack "disconnect" - - # Unblock any calls set on these events - self._event_managed_accounts.set() - self._event_accdownload.set() - - def logmsg(self, *args): - # for logging purposes - if self.p._debug: - print(*args) - - def watcher(self, msg): - # will be registered to see all messages if debug is requested - self.logmsg(str(msg)) - if self.p.notifyall: - self.notifs.put((msg, tuple(msg.values()), dict(msg.items()))) - - def connected(self): - # The 
isConnected method is available through __getattr__ indirections - # and may not be present, which indicates that no connection has been - # made because the subattribute sender has not yet been created, hence - # the check for the AttributeError exception - try: - return self.conn.isConnected() - except AttributeError: - pass - - return False # non-connected (including non-initialized) - - def reconnect(self, fromstart=False, resub=False): - # This method must be an invariant in that it can be called several - # times from the same source and must be consistent. An exampler would - # be 5 datas which are being received simultaneously and all request a - # reconnect - - # Policy: - # - if dontreconnect has been set, no option to connect is possible - # - check connection and use the absence of isConnected as signal of - # first ever connection (add 1 to retries too) - # - Calculate the retries (forever or not) - # - Try to connct - # - If achieved and fromstart is false, the datas will be - # re-kickstarted to recreate the subscription - firstconnect = False - try: - if self.conn.isConnected(): - if resub: - self.startdatas() - return True # nothing to do - except AttributeError: - # Not connected, several __getattr__ indirections to - # self.conn.sender.client.isConnected - firstconnect = True - - if self.dontreconnect: - return False - - # This is only invoked from the main thread by datas and therefore no - # lock is needed to control synchronicity to it - retries = self.p.reconnect - if retries >= 0: - retries += firstconnect - - while retries < 0 or retries: - if not firstconnect: - time.sleep(self.p.timeout) - - firstconnect = False - - if self.conn.connect(): - if not fromstart or resub: - self.startdatas() - return True # connection successful - - if retries > 0: - retries -= 1 - - self.dontreconnect = True - return False # connection/reconnection failed - - def startdatas(self): - # kickstrat datas, not returning until all of them have been done - ts = list() - for data in self.datas: - t = threading.Thread(target=data.reqdata) - t.start() - ts.append(t) - - for t in ts: - t.join() - - def stopdatas(self): - # stop subs and force datas out of the loop (in LIFO order) - qs = list(self.qs.values()) - ts = list() - for data in self.datas: - t = threading.Thread(target=data.canceldata) - t.start() - ts.append(t) - - for t in ts: - t.join() - - for q in reversed(qs): # datamaster the last one to get a None - q.put(None) - - def get_notifications(self): - '''Return the pending "store" notifications''' - # The background thread could keep on adding notifications. 
The None - # mark allows to identify which is the last notification to deliver - self.notifs.put(None) # put a mark - notifs = list() - while True: - notif = self.notifs.get() - if notif is None: # mark is reached - break - notifs.append(notif) - - return notifs - - @ibregister - def error(self, msg): - # 100-199 Order/Data/Historical related - # 200-203 tickerId and Order Related - # 300-399 A mix of things: orders, connectivity, tickers, misc errors - # 400-449 Seem order related again - # 500-531 Connectivity/Communication Errors - # 10000-100027 Mix of special orders/routing - # 1100-1102 TWS connectivy to the outside - # 1300- Socket dropped in client-TWS communication - # 2100-2110 Informative about Data Farm status (id=-1) - - # All errors are logged to the environment (cerebro), because many - # errors in Interactive Brokers are actually informational and many may - # actually be of interest to the user - if not self.p.notifyall: - self.notifs.put((msg, tuple(msg.values()), dict(msg.items()))) - - # Manage those events which have to do with connection - if msg.errorCode is None: - # Usually received as an error in connection of just before disconn - pass - elif msg.errorCode in [200, 203, 162, 320, 321, 322]: - # cdetails 200 security not found, notify over right queue - # cdetails 203 security not allowed for acct - try: - q = self.qs[msg.id] - except KeyError: - pass # should not happend but it can - else: - self.cancelQueue(q, True) - - elif msg.errorCode in [354, 420]: - # 354 no subscription, 420 no real-time bar for contract - # the calling data to let the data know ... it cannot resub - try: - q = self.qs[msg.id] - except KeyError: - pass # should not happend but it can - else: - q.put(-msg.errorCode) - self.cancelQueue(q) - - elif msg.errorCode == 10225: - # 10225-Bust event occurred, current subscription is deactivated. - # Please resubscribe real-time bars immediately. - try: - q = self.qs[msg.id] - except KeyError: - pass # should not happend but it can - else: - q.put(-msg.errorCode) - - elif msg.errorCode == 326: # not recoverable, clientId in use - self.dontreconnect = True - self.conn.disconnect() - self.stopdatas() - - elif msg.errorCode == 502: - # Cannot connect to TWS: port, config not open, tws off (504 then) - self.conn.disconnect() - self.stopdatas() - - elif msg.errorCode == 504: # Not Connected for data op - # Once for each data - pass # don't need to manage it - - elif msg.errorCode == 1300: - # TWS has been closed. The port for a new connection is there - # newport = int(msg.errorMsg.split('-')[-1]) # bla bla bla -7496 - self.conn.disconnect() - self.stopdatas() - - elif msg.errorCode == 1100: - # Connection lost - Notify ... 
datas will wait on the queue - # with no messages arriving - for q in self.ts: # key: queue -> ticker - q.put(-msg.errorCode) - - elif msg.errorCode == 1101: - # Connection restored and tickerIds are gone - for q in self.ts: # key: queue -> ticker - q.put(-msg.errorCode) - - elif msg.errorCode == 1102: - # Connection restored and tickerIds maintained - for q in self.ts: # key: queue -> ticker - q.put(-msg.errorCode) - - elif msg.errorCode < 500: - # Given the myriad of errorCodes, start by assuming is an order - # error and if not, the checks there will let it go - if msg.id < self.REQIDBASE: - if self.broker is not None: - self.broker.push_ordererror(msg) - else: - # Cancel the queue if a "data" reqId error is given: sanity - q = self.qs[msg.id] - self.cancelQueue(q, True) - - @ibregister - def connectionClosed(self, msg): - # Sometmes this comes without 1300/502 or any other and will not be - # seen in error hence the need to manage the situation independently - self.conn.disconnect() - self.stopdatas() - - @ibregister - def managedAccounts(self, msg): - # 1st message in the stream - self.managed_accounts = msg.accountsList.split(',') - self._event_managed_accounts.set() - - # Request time to avoid synchronization issues - self.reqCurrentTime() - - def reqCurrentTime(self): - self.conn.reqCurrentTime() - - @ibregister - def currentTime(self, msg): - if not self.p.timeoffset: # only if requested ... apply timeoffset - return - curtime = datetime.fromtimestamp(float(msg.time)) - with self._lock_tmoffset: - self.tmoffset = curtime - datetime.now() - - threading.Timer(self.p.timerefresh, self.reqCurrentTime).start() - - def timeoffset(self): - with self._lock_tmoffset: - return self.tmoffset - - def nextTickerId(self): - # Get the next ticker using next on the itertools.count - return next(self._tickerId) - - @ibregister - def nextValidId(self, msg): - # Create a counter from the TWS notified value to apply to orders - self.orderid = itertools.count(msg.orderId) - - def nextOrderId(self): - # Get the next ticker using next on the itertools.count made with the - # notified value from TWS - return next(self.orderid) - - def reuseQueue(self, tickerId): - '''Reuses queue for tickerId, returning the new tickerId and q''' - with self._lock_q: - # Invalidate tickerId in qs (where it is a key) - q = self.qs.pop(tickerId, None) # invalidate old - iscash = self.iscash.pop(tickerId, None) - - # Update ts: q -> ticker - tickerId = self.nextTickerId() # get new tickerId - self.ts[q] = tickerId # Update ts: q -> tickerId - self.qs[tickerId] = q # Update qs: tickerId -> q - self.iscash[tickerId] = iscash - - return tickerId, q - - def getTickerQueue(self, start=False): - '''Creates ticker/Queue for data delivery to a data feed''' - q = queue.Queue() - if start: - q.put(None) - return q - - with self._lock_q: - tickerId = self.nextTickerId() - self.qs[tickerId] = q # can be managed from other thread - self.ts[q] = tickerId - self.iscash[tickerId] = False - - return tickerId, q - - def cancelQueue(self, q, sendnone=False): - '''Cancels a Queue for data delivery''' - # pop ts (tickers) and with the result qs (queues) - tickerId = self.ts.pop(q, None) - self.qs.pop(tickerId, None) - - self.iscash.pop(tickerId, None) - - if sendnone: - q.put(None) - - def validQueue(self, q): - '''Returns (bool) if a queue is still valid''' - return q in self.ts # queue -> ticker - - def getContractDetails(self, contract, maxcount=None): - cds = list() - q = self.reqContractDetails(contract) - while True: - msg = q.get() - if 
msg is None: - break - cds.append(msg) - - if not cds or (maxcount and len(cds) > maxcount): - err = 'Ambiguous contract: none/multiple answers received' - self.notifs.put((err, cds, {})) - return None - - return cds - - def reqContractDetails(self, contract): - # get a ticker/queue for identification/data delivery - tickerId, q = self.getTickerQueue() - self.conn.reqContractDetails(tickerId, contract) - return q - - @ibregister - def contractDetailsEnd(self, msg): - '''Signal end of contractdetails''' - self.cancelQueue(self.qs[msg.reqId], True) - - @ibregister - def contractDetails(self, msg): - '''Receive answer and pass it to the queue''' - self.qs[msg.reqId].put(msg) - - def reqHistoricalDataEx(self, contract, enddate, begindate, - timeframe, compression, - what=None, useRTH=False, tz='', sessionend=None, - tickerId=None): - ''' - Extension of the raw reqHistoricalData proxy, which takes two dates - rather than a duration, barsize and date - - It uses the IB published valid duration/barsizes to make a mapping and - spread a historical request over several historical requests if needed - ''' - # Keep a copy for error reporting purposes - kwargs = locals().copy() - kwargs.pop('self', None) # remove self, no need to report it - - if timeframe < TimeFrame.Seconds: - # Ticks are not supported - return self.getTickerQueue(start=True) - - if enddate is None: - enddate = datetime.now() - - if begindate is None: - duration = self.getmaxduration(timeframe, compression) - if duration is None: - err = ('No duration for historical data request for ' - 'timeframe/compresison') - self.notifs.put((err, (), kwargs)) - return self.getTickerQueue(start=True) - barsize = self.tfcomp_to_size(timeframe, compression) - if barsize is None: - err = ('No supported barsize for historical data request for ' - 'timeframe/compresison') - self.notifs.put((err, (), kwargs)) - return self.getTickerQueue(start=True) - - return self.reqHistoricalData(contract=contract, enddate=enddate, - duration=duration, barsize=barsize, - what=what, useRTH=useRTH, tz=tz, - sessionend=sessionend) - - # Check if the requested timeframe/compression is supported by IB - durations = self.getdurations(timeframe, compression) - if not durations: # return a queue and put a None in it - return self.getTickerQueue(start=True) - - # Get or reuse a queue - if tickerId is None: - tickerId, q = self.getTickerQueue() - else: - tickerId, q = self.reuseQueue(tickerId) # reuse q for old tickerId - - # Get the best possible duration to reduce number of requests - duration = None - for dur in durations: - intdate = self.dt_plus_duration(begindate, dur) - if intdate >= enddate: - intdate = enddate - duration = dur # begin -> end fits in single request - break - - if duration is None: # no duration large enough to fit the request - duration = durations[-1] - - # Store the calculated data - self.histexreq[tickerId] = dict( - contract=contract, enddate=enddate, begindate=intdate, - timeframe=timeframe, compression=compression, - what=what, useRTH=useRTH, tz=tz, sessionend=sessionend) - - barsize = self.tfcomp_to_size(timeframe, compression) - self.histfmt[tickerId] = timeframe >= TimeFrame.Days - self.histsend[tickerId] = sessionend - self.histtz[tickerId] = tz - - if contract.m_secType in ['CASH', 'CFD']: - self.iscash[tickerId] = 1 # msg.field code - if not what: - what = 'BID' # default for cash unless otherwise specified - - elif contract.m_secType in ['IND'] and self.p.indcash: - self.iscash[tickerId] = 4 # msg.field code - - what = what or 'TRADES' - 
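Every request in this store is keyed by a tickerId and answered through a dedicated Queue, with self.qs and self.ts holding the two-way mapping so the TWS callback thread can route each message to the feed that asked for it. A stripped-down sketch of that bookkeeping, with simplified names and none of the IB plumbing:

import itertools
import queue
import threading

class TickerRegistry:
    """Hedged sketch of the tickerId <-> Queue registry used by the store."""

    REQIDBASE = 0x01000000  # same base as above, keeping data ids apart from order ids

    def __init__(self):
        self._lock = threading.Lock()
        self._ids = itertools.count(self.REQIDBASE)
        self.qs = {}  # tickerId -> Queue
        self.ts = {}  # Queue -> tickerId

    def get_ticker_queue(self):
        # Hand out a fresh id and queue for one data request.
        q = queue.Queue()
        with self._lock:
            ticker_id = next(self._ids)
            self.qs[ticker_id] = q
            self.ts[q] = ticker_id
        return ticker_id, q

    def cancel_queue(self, q, sendnone=False):
        # Drop both directions of the mapping; None tells the consumer to stop.
        with self._lock:
            ticker_id = self.ts.pop(q, None)
            self.qs.pop(ticker_id, None)
        if sendnone:
            q.put(None)

# A callback thread routes messages with registry.qs[msg.tickerId].put(msg);
# the data feed blocks on q.get() until it receives None.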
- self.conn.reqHistoricalData( - tickerId, - contract, - bytes(intdate.strftime('%Y%m%d %H:%M:%S') + ' GMT'), - bytes(duration), - bytes(barsize), - bytes(what), - int(useRTH), - 2) # dateformat 1 for string, 2 for unix time in seconds - - return q - - def reqHistoricalData(self, contract, enddate, duration, barsize, - what=None, useRTH=False, tz='', sessionend=None): - '''Proxy to reqHistorical Data''' - - # get a ticker/queue for identification/data delivery - tickerId, q = self.getTickerQueue() - - if contract.m_secType in ['CASH', 'CFD']: - self.iscash[tickerId] = True - if not what: - what = 'BID' # TRADES doesn't work - elif what == 'ASK': - self.iscash[tickerId] = 2 - else: - what = what or 'TRADES' - - # split barsize "x time", look in sizes for (tf, comp) get tf - tframe = self._sizes[barsize.split()[1]][0] - self.histfmt[tickerId] = tframe >= TimeFrame.Days - self.histsend[tickerId] = sessionend - self.histtz[tickerId] = tz - - self.conn.reqHistoricalData( - tickerId, - contract, - bytes(enddate.strftime('%Y%m%d %H:%M:%S') + ' GMT'), - bytes(duration), - bytes(barsize), - bytes(what), - int(useRTH), - 2) - - return q - - def cancelHistoricalData(self, q): - '''Cancels an existing HistoricalData request - - Params: - - q: the Queue returned by reqMktData - ''' - with self._lock_q: - self.conn.cancelHistoricalData(self.ts[q]) - self.cancelQueue(q, True) - - def reqRealTimeBars(self, contract, useRTH=False, duration=5): - '''Creates a request for (5 seconds) Real Time Bars - - Params: - - contract: a ib.ext.Contract.Contract intance - - useRTH: (default: False) passed to TWS - - duration: (default: 5) passed to TWS, no other value works in 2016) - - Returns: - - a Queue the client can wait on to receive a RTVolume instance - ''' - # get a ticker/queue for identification/data delivery - tickerId, q = self.getTickerQueue() - - # 20150929 - Only 5 secs supported for duration - self.conn.reqRealTimeBars( - tickerId, - contract, - duration, - bytes('TRADES'), - int(useRTH)) - - return q - - def cancelRealTimeBars(self, q): - '''Cancels an existing MarketData subscription - - Params: - - q: the Queue returned by reqMktData - ''' - with self._lock_q: - tickerId = self.ts.get(q, None) - if tickerId is not None: - self.conn.cancelRealTimeBars(tickerId) - - self.cancelQueue(q, True) - - def reqMktData(self, contract, what=None): - '''Creates a MarketData subscription - - Params: - - contract: a ib.ext.Contract.Contract intance - - Returns: - - a Queue the client can wait on to receive a RTVolume instance - ''' - # get a ticker/queue for identification/data delivery - tickerId, q = self.getTickerQueue() - ticks = '233' # request RTVOLUME tick delivered over tickString - - if contract.m_secType in ['CASH', 'CFD']: - self.iscash[tickerId] = True - ticks = '' # cash markets do not get RTVOLUME - if what == 'ASK': - self.iscash[tickerId] = 2 - - # q.put(None) # to kickstart backfilling - # Can request 233 also for cash ... 
nothing will arrive - self.conn.reqMktData(tickerId, contract, bytes(ticks), False) - return q - - def cancelMktData(self, q): - '''Cancels an existing MarketData subscription - - Params: - - q: the Queue returned by reqMktData - ''' - with self._lock_q: - tickerId = self.ts.get(q, None) - if tickerId is not None: - self.conn.cancelMktData(tickerId) - - self.cancelQueue(q, True) - - @ibregister - def tickString(self, msg): - # Receive and process a tickString message - if msg.tickType == 48: # RTVolume - try: - rtvol = RTVolume(msg.value) - except ValueError: # price not in message ... - pass - else: - # Don't need to adjust the time, because it is in "timestamp" - # form in the message - self.qs[msg.tickerId].put(rtvol) - - @ibregister - def tickPrice(self, msg): - '''Cash Markets have no notion of "last_price"/"last_size" and the - tracking of the price is done (industry de-facto standard at least with - the IB API) following the BID price - - A RTVolume which will only contain a price is put into the client's - queue to have a consistent cross-market interface - ''' - # Used for "CASH" markets - # The price field has been seen to be missing in some instances even if - # "field" is 1 - tickerId = msg.tickerId - fieldcode = self.iscash[tickerId] - if fieldcode: - if msg.field == fieldcode: # Expected cash field code - try: - if msg.price == -1.0: - # seems to indicate the stream is halted for example in - # between 23:00 - 23:15 CET for FOREX - return - except AttributeError: - pass - - try: - rtvol = RTVolume(price=msg.price, tmoffset=self.tmoffset) - # print('rtvol with datetime:', rtvol.datetime) - except ValueError: # price not in message ... - pass - else: - self.qs[tickerId].put(rtvol) - - @ibregister - def realtimeBar(self, msg): - '''Receives x seconds Real Time Bars (at the time of writing only 5 - seconds are supported) - - Not valid for cash markets - ''' - # Get a naive localtime object - msg.time = datetime.utcfromtimestamp(float(msg.time)) - self.qs[msg.reqId].put(msg) - - @ibregister - def historicalData(self, msg): - '''Receives the events of a historical data request''' - # For multi-tiered downloads we'd need to rebind the queue to a new - # tickerId (in case tickerIds are not reusable) and instead of putting - # None, issue a new reqHistData with the new data and move formward - tickerId = msg.reqId - q = self.qs[tickerId] - if msg.date.startswith('finished-'): - self.histfmt.pop(tickerId, None) - self.histsend.pop(tickerId, None) - self.histtz.pop(tickerId, None) - kargs = self.histexreq.pop(tickerId, None) - if kargs is not None: - self.reqHistoricalDataEx(tickerId=tickerId, **kargs) - return - - msg.date = None - self.cancelQueue(q) - else: - dtstr = msg.date # Format when string req: YYYYMMDD[ HH:MM:SS] - if self.histfmt[tickerId]: - sessionend = self.histsend[tickerId] - dt = datetime.strptime(dtstr, '%Y%m%d') - dteos = datetime.combine(dt, sessionend) - tz = self.histtz[tickerId] - if tz: - dteostz = tz.localize(dteos) - dteosutc = dteostz.astimezone(UTC).replace(tzinfo=None) - # When requesting for example daily bars, the current day - # will be returned with the already happened data. 
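The tickString handler above only reacts to tickType 48, whose payload is the semicolon-separated RTVolume string that the RTVolume helper near the top of this file splits into typed fields. A hedged sketch of that parsing with a made-up sample value (field order taken from RTVolume._fields; the real class also tolerates an empty string and an injected price):

from datetime import datetime, timezone

# Made-up RTVolume payload: price;size;timestamp(ms);volume;vwap;single
sample = '701.28;1;1348075471534;67854;701.46;true'

price, size, tstamp, volume, vwap, single = sample.split(';')
tick = {
    'price': float(price),
    'size': int(size),
    'datetime': datetime.fromtimestamp(int(tstamp) / 1000.0, tz=timezone.utc),
    'volume': int(volume),
    'vwap': float(vwap),
    'single': single == 'true',
}
print(tick['price'], tick['size'], tick['datetime'])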
If the - # session end were added, the new ticks wouldn't make it - # through, because they happen before the end of time - else: - dteosutc = dteos - - if dteosutc <= datetime.utcnow(): - dt = dteosutc - - msg.date = dt - else: - msg.date = datetime.utcfromtimestamp(long(dtstr)) - - q.put(msg) - - # The _durations are meant to calculate the needed historical data to - # perform backfilling at the start of a connetion or a connection is lost. - # Using a timedelta as a key allows to quickly find out which bar size - # bar size (values in the tuples int the dict) can be used. - - _durations = dict([ - # 60 seconds - 1 min - ('60 S', - ('1 secs', '5 secs', '10 secs', '15 secs', '30 secs', - '1 min')), - - # 120 seconds - 2 mins - ('120 S', - ('1 secs', '5 secs', '10 secs', '15 secs', '30 secs', - '1 min', '2 mins')), - - # 180 seconds - 3 mins - ('180 S', - ('1 secs', '5 secs', '10 secs', '15 secs', '30 secs', - '1 min', '2 mins', '3 mins')), - - # 300 seconds - 5 mins - ('300 S', - ('1 secs', '5 secs', '10 secs', '15 secs', '30 secs', - '1 min', '2 mins', '3 mins', '5 mins')), - - # 600 seconds - 10 mins - ('600 S', - ('1 secs', '5 secs', '10 secs', '15 secs', '30 secs', - '1 min', '2 mins', '3 mins', '5 mins', '10 mins')), - - # 900 seconds - 15 mins - ('900 S', - ('1 secs', '5 secs', '10 secs', '15 secs', '30 secs', - '1 min', '2 mins', '3 mins', '5 mins', '10 mins', '15 mins')), - - # 1200 seconds - 20 mins - ('1200 S', - ('1 secs', '5 secs', '10 secs', '15 secs', '30 secs', - '1 min', '2 mins', '3 mins', '5 mins', '10 mins', '15 mins', - '20 mins')), - - # 1800 seconds - 30 mins - ('1800 S', - ('1 secs', '5 secs', '10 secs', '15 secs', '30 secs', - '1 min', '2 mins', '3 mins', '5 mins', '10 mins', '15 mins', - '20 mins', '30 mins')), - - # 3600 seconds - 1 hour - ('3600 S', - ('5 secs', '10 secs', '15 secs', '30 secs', - '1 min', '2 mins', '3 mins', '5 mins', '10 mins', '15 mins', - '20 mins', '30 mins', - '1 hour')), - - # 7200 seconds - 2 hours - ('7200 S', - ('5 secs', '10 secs', '15 secs', '30 secs', - '1 min', '2 mins', '3 mins', '5 mins', '10 mins', '15 mins', - '20 mins', '30 mins', - '1 hour', '2 hours')), - - # 10800 seconds - 3 hours - ('10800 S', - ('10 secs', '15 secs', '30 secs', - '1 min', '2 mins', '3 mins', '5 mins', '10 mins', '15 mins', - '20 mins', '30 mins', - '1 hour', '2 hours', '3 hours')), - - # 14400 seconds - 4 hours - ('14400 S', - ('15 secs', '30 secs', - '1 min', '2 mins', '3 mins', '5 mins', '10 mins', '15 mins', - '20 mins', '30 mins', - '1 hour', '2 hours', '3 hours', '4 hours')), - - # 28800 seconds - 8 hours - ('28800 S', - ('30 secs', - '1 min', '2 mins', '3 mins', '5 mins', '10 mins', '15 mins', - '20 mins', '30 mins', - '1 hour', '2 hours', '3 hours', '4 hours', '8 hours')), - - # 1 days - ('1 D', - ('1 min', '2 mins', '3 mins', '5 mins', '10 mins', '15 mins', - '20 mins', '30 mins', - '1 hour', '2 hours', '3 hours', '4 hours', '8 hours', - '1 day')), - - # 2 days - ('2 D', - ('2 mins', '3 mins', '5 mins', '10 mins', '15 mins', - '20 mins', '30 mins', - '1 hour', '2 hours', '3 hours', '4 hours', '8 hours', - '1 day')), - - # 1 weeks - ('1 W', - ('3 mins', '5 mins', '10 mins', '15 mins', - '20 mins', '30 mins', - '1 hour', '2 hours', '3 hours', '4 hours', '8 hours', - '1 day', '1 W')), - - # 2 weeks - ('2 W', - ('15 mins', '20 mins', '30 mins', - '1 hour', '2 hours', '3 hours', '4 hours', '8 hours', - '1 day', '1 W')), - - # 1 months - ('1 M', - ('30 mins', - '1 hour', '2 hours', '3 hours', '4 hours', '8 hours', - '1 day', '1 W', '1 M')), - - # 2+ 
months - ('2 M', ('1 day', '1 W', '1 M')), - ('3 M', ('1 day', '1 W', '1 M')), - ('4 M', ('1 day', '1 W', '1 M')), - ('5 M', ('1 day', '1 W', '1 M')), - ('6 M', ('1 day', '1 W', '1 M')), - ('7 M', ('1 day', '1 W', '1 M')), - ('8 M', ('1 day', '1 W', '1 M')), - ('9 M', ('1 day', '1 W', '1 M')), - ('10 M', ('1 day', '1 W', '1 M')), - ('11 M', ('1 day', '1 W', '1 M')), - - # 1+ years - ('1 Y', ('1 day', '1 W', '1 M')), - ]) - - # Sizes allow for quick translation from bar sizes above to actual - # timeframes to make a comparison with the actual data - _sizes = { - 'secs': (TimeFrame.Seconds, 1), - 'min': (TimeFrame.Minutes, 1), - 'mins': (TimeFrame.Minutes, 1), - 'hour': (TimeFrame.Minutes, 60), - 'hours': (TimeFrame.Minutes, 60), - 'day': (TimeFrame.Days, 1), - 'W': (TimeFrame.Weeks, 1), - 'M': (TimeFrame.Months, 1), - } - - _dur2tf = { - 'S': TimeFrame.Seconds, - 'D': TimeFrame.Days, - 'W': TimeFrame.Weeks, - 'M': TimeFrame.Months, - 'Y': TimeFrame.Years, - } - - def getdurations(self, timeframe, compression): - key = (timeframe, compression) - if key not in self.revdur: - return [] - - return self.revdur[key] - - def getmaxduration(self, timeframe, compression): - key = (timeframe, compression) - try: - return self.revdur[key][-1] - except (KeyError, IndexError): - pass - - return None - - def tfcomp_to_size(self, timeframe, compression): - if timeframe == TimeFrame.Months: - return '{} M'.format(compression) - - if timeframe == TimeFrame.Weeks: - return '{} W'.format(compression) - - if timeframe == TimeFrame.Days: - if not compression % 7: - return '{} W'.format(compression // 7) - - return '{} day'.format(compression) - - if timeframe == TimeFrame.Minutes: - if not compression % 60: - hours = compression // 60 - return ('{} hour'.format(hours)) + ('s' * (hours > 1)) - - return ('{} min'.format(compression)) + ('s' * (compression > 1)) - - if timeframe == TimeFrame.Seconds: - return '{} secs'.format(compression) - - # Microseconds or ticks - return None - - def dt_plus_duration(self, dt, duration): - size, dim = duration.split() - size = int(size) - if dim == 'S': - return dt + timedelta(seconds=size) - - if dim == 'D': - return dt + timedelta(days=size) - - if dim == 'W': - return dt + timedelta(days=size * 7) - - if dim == 'M': - month = dt.month - 1 + size # -1 to make it 0 based, readd below - years, month = divmod(month, 12) - return dt.replace(year=dt.year + years, month=month + 1, day=1) + timedelta(dt.day - 1) - - if dim == 'Y': - return dt.replace(year=dt.year + size) - - return dt # could do nothing with it ... return it intact - - def calcdurations(self, dtbegin, dtend): - '''Calculate a duration in between 2 datetimes''' - duration = self.histduration(dtbegin, dtend) - - if duration[-1] == 'M': - m = int(duration.split()[0]) - m1 = min(2, m) # (2, 1) -> 1, (2, 7) -> 2. Bottomline: 1 or 2 - m2 = max(1, m1) # m1 can only be 1 or 2 - checkdur = '{} M'.format(m2) - elif duration[-1] == 'Y': - checkdur = '1 Y' - else: - checkdur = duration - - sizes = self._durations[checkduration] - return duration, sizes - - def calcduration(self, dtbegin, dtend): - '''Calculate a duration in between 2 datetimes. 
Returns single size''' - duration, sizes = self._calcdurations(dtbegin, dtend) - return duration, sizes[0] - - def histduration(self, dt1, dt2): - # Given two dates calculates the smallest possible duration according - # to the table from the Historical Data API limitations provided by IB - # - # Seconds: 'x S' (x: [60, 120, 180, 300, 600, 900, 1200, 1800, 3600, - # 7200, 10800, 14400, 28800]) - # Days: 'x D' (x: [1, 2] - # Weeks: 'x W' (x: [1, 2]) - # Months: 'x M' (x: [1, 11]) - # Years: 'x Y' (x: [1]) - - td = dt2 - dt1 # get a timedelta for calculations - - # First: array of secs - tsecs = td.total_seconds() - secs = [60, 120, 180, 300, 600, 900, 1200, 1800, 3600, 7200, 10800, - 14400, 28800] - - idxsec = bisect.bisect_left(secs, tsecs) - if idxsec < len(secs): - return '{} S'.format(secs[idxsec]) - - tdextra = bool(td.seconds or td.microseconds) # over days/weeks - - # Next: 1 or 2 days - days = td.days + tdextra - if td.days <= 2: - return '{} D'.format(days) - - # Next: 1 or 2 weeks - weeks, d = divmod(td.days, 7) - weeks += bool(d or tdextra) - if weeks <= 2: - return '{} W'.format(weeks) - - # Get references to dt components - y2, m2, d2 = dt2.year, dt2.month, dt2.day - y1, m1, d1 = dt1.year, dt1.month, dt2.day - - H2, M2, S2, US2 = dt2.hour, dt2.minute, dt2.second, dt2.microsecond - H1, M1, S1, US1 = dt1.hour, dt1.minute, dt1.second, dt1.microsecond - - # Next: 1 -> 11 months (11 incl) - months = (y2 * 12 + m2) - (y1 * 12 + m1) + ( - (d2, H2, M2, S2, US2) > (d1, H1, M1, S1, US1)) - if months <= 1: # months <= 11 - return '1 M' # return '{} M'.format(months) - elif months <= 11: - return '2 M' # cap at 2 months to keep the table clean - - # Next: years - # y = y2 - y1 + (m2, d2, H2, M2, S2, US2) > (m1, d1, H1, M1, S1, US1) - # return '{} Y'.format(y) - - return '1 Y' # to keep the table clean - - def makecontract(self, symbol, sectype, exch, curr, - expiry='', strike=0.0, right='', mult=1): - '''returns a contract from the parameters without check''' - - contract = Contract() - contract.m_symbol = bytes(symbol) - contract.m_secType = bytes(sectype) - contract.m_exchange = bytes(exch) - if curr: - contract.m_currency = bytes(curr) - if sectype in ['FUT', 'OPT', 'FOP']: - contract.m_expiry = bytes(expiry) - if sectype in ['OPT', 'FOP']: - contract.m_strike = strike - contract.m_right = bytes(right) - if mult: - contract.m_multiplier = bytes(mult) - return contract - - def cancelOrder(self, orderid): - '''Proxy to cancelOrder''' - self.conn.cancelOrder(orderid) - - def placeOrder(self, orderid, contract, order): - '''Proxy to placeOrder''' - self.conn.placeOrder(orderid, contract, order) - - @ibregister - def openOrder(self, msg): - '''Receive the event ``openOrder`` events''' - self.broker.push_orderstate(msg) - - @ibregister - def execDetails(self, msg): - '''Receive execDetails''' - self.broker.push_execution(msg.execution) - - @ibregister - def orderStatus(self, msg): - '''Receive the event ``orderStatus``''' - self.broker.push_orderstatus(msg) - - @ibregister - def commissionReport(self, msg): - '''Receive the event commissionReport''' - self.broker.push_commissionreport(msg.commissionReport) - - def reqPositions(self): - '''Proxy to reqPositions''' - self.conn.reqPositions() - - @ibregister - def position(self, msg): - '''Receive event positions''' - pass # Not implemented yet - - def reqAccountUpdates(self, subscribe=True, account=None): - '''Proxy to reqAccountUpdates - - If ``account`` is ``None``, wait for the ``managedAccounts`` message to - set the account codes - ''' - 
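The comment block above documents the smallest-duration lookup: histduration bisects the list of IB-accepted second counts and only then falls back to days, weeks, months and years, while tfcomp_to_size maps a (timeframe, compression) pair back to a bar-size string. A self-contained sketch of the seconds-and-days part of that selection (values copied from the table above; an illustration, not the full method):

import bisect
from datetime import datetime

# 'x S' durations accepted by the historical data API, copied from the table above.
VALID_SECS = [60, 120, 180, 300, 600, 900, 1200, 1800,
              3600, 7200, 10800, 14400, 28800]

def smallest_duration(dt1, dt2):
    """Hedged sketch of the seconds/days branch of histduration."""
    td = dt2 - dt1
    tsecs = td.total_seconds()
    idx = bisect.bisect_left(VALID_SECS, tsecs)
    if idx < len(VALID_SECS):
        return '{} S'.format(VALID_SECS[idx])
    # Past 8 hours the real method moves on to days, weeks, months and years;
    # only the day step is shown here.
    days = td.days + bool(td.seconds or td.microseconds)
    return '{} D'.format(days)

print(smallest_duration(datetime(2016, 1, 1, 9, 0), datetime(2016, 1, 1, 9, 45)))  # 3600 S
print(smallest_duration(datetime(2016, 1, 1), datetime(2016, 1, 2, 12)))           # 2 D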
if account is None: - self._event_managed_accounts.wait() - account = self.managed_accounts[0] - - self.conn.reqAccountUpdates(subscribe, bytes(account)) - - @ibregister - def accountDownloadEnd(self, msg): - # Signals the end of an account update - # the event indicates it's over. It's only false once, and can be used - # to find out if it has at least been downloaded once - self._event_accdownload.set() - if False: - if self.port_update: - self.broker.push_portupdate() - - self.port_update = False - - @ibregister - def updatePortfolio(self, msg): - # Lock access to the position dicts. This is called in sub-thread and - # can kick in at any time - with self._lock_pos: - if not self._event_accdownload.is_set(): # 1st event seen - position = Position(msg.position, msg.averageCost) - self.positions[msg.contract.m_conId] = position - else: - position = self.positions[msg.contract.m_conId] - if not position.fix(msg.position, msg.averageCost): - err = ('The current calculated position and ' - 'the position reported by the broker do not match. ' - 'Operation can continue, but the trades ' - 'calculated in the strategy may be wrong') - - self.notifs.put((err, (), {})) - - # Flag signal to broker at the end of account download - # self.port_update = True - self.broker.push_portupdate() - - def getposition(self, contract, clone=False): - # Lock access to the position dicts. This is called from main thread - # and updates could be happening in the background - with self._lock_pos: - position = self.positions[contract.m_conId] - if clone: - return copy(position) - - return position - - @ibregister - def updateAccountValue(self, msg): - # Lock access to the dicts where values are updated. This happens in a - # sub-thread and could kick it at anytime - with self._lock_accupd: - try: - value = float(msg.value) - except ValueError: - value = msg.value - - self.acc_upds[msg.accountName][msg.key][msg.currency] = value - - if msg.key == 'NetLiquidation': - # NetLiquidationByCurrency and currency == 'BASE' is the same - self.acc_value[msg.accountName] = value - elif msg.key == 'TotalCashBalance' and msg.currency == 'BASE': - self.acc_cash[msg.accountName] = value - - def get_acc_values(self, account=None): - '''Returns all account value infos sent by TWS during regular updates - Waits for at least 1 successful download - - If ``account`` is ``None`` then a dictionary with accounts as keys will - be returned containing all accounts - - If account is specified or the system has only 1 account the dictionary - corresponding to that account is returned - ''' - # Wait for at least 1 account update download to have been finished - # before the account infos can be returned to the calling client - if self.connected(): - self._event_accdownload.wait() - # Lock access to acc_cash to avoid an event intefering - with self._updacclock: - if account is None: - # wait for the managedAccount Messages - if self.connected(): - self._event_managed_accounts.wait() - - if not self.managed_accounts: - return self.acc_upds.copy() - - elif len(self.managed_accounts) > 1: - return self.acc_upds.copy() - - # Only 1 account, fall through to return only 1 - account = self.managed_accounts[0] - - try: - return self.acc_upds[account].copy() - except KeyError: - pass - - return self.acc_upds.copy() - - def get_acc_value(self, account=None): - '''Returns the net liquidation value sent by TWS during regular updates - Waits for at least 1 successful download - - If ``account`` is ``None`` then a dictionary with accounts as keys will - be 
returned containing all accounts - - If account is specified or the system has only 1 account the dictionary - corresponding to that account is returned - ''' - # Wait for at least 1 account update download to have been finished - # before the value can be returned to the calling client - if self.connected(): - self._event_accdownload.wait() - # Lock access to acc_cash to avoid an event interfering - with self._lock_accupd: - if account is None: - # wait for the managedAccount Messages - if self.connected(): - self._event_managed_accounts.wait() - - if not self.managed_accounts: - return float() - - elif len(self.managed_accounts) > 1: - return sum(self.acc_value.values()) - - # Only 1 account, fall through to return only 1 - account = self.managed_accounts[0] - - try: - return self.acc_value[account] - except KeyError: - pass - - return float() - - def get_acc_cash(self, account=None): - '''Returns the total cash value sent by TWS during regular updates - Waits for at least 1 successful download - - If ``account`` is ``None`` then a dictionary with accounts as keys will - be returned containing all accounts - - If account is specified or the system has only 1 account the dictionary - corresponding to that account is returned - ''' - # Wait for at least 1 account update download to have been finished - # before the cash can be returned to the calling client - if self.connected(): - self._event_accdownload.wait() - # Lock access to acc_cash to avoid an event interfering - with self._lock_accupd: - if account is None: - # wait for the managedAccount Messages - if self.connected(): - self._event_managed_accounts.wait() - - if not self.managed_accounts: - return float() - - elif len(self.managed_accounts) > 1: - return sum(self.acc_cash.values()) - - # Only 1 account, fall through to return only 1 - account = self.managed_accounts[0] - - try: - return self.acc_cash[account] - except KeyError: - pass diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/det_pipelines/fcenet_pipeline.py b/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/det_pipelines/fcenet_pipeline.py deleted file mode 100644 index badb4536b10bd74760fdf519fe03f5c8d2bd7767..0000000000000000000000000000000000000000 --- a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/det_pipelines/fcenet_pipeline.py +++ /dev/null @@ -1,118 +0,0 @@ -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) - -# for icdar2015 -leval_prop_range_icdar2015 = ((0, 0.4), (0.3, 0.7), (0.6, 1.0)) -train_pipeline_icdar2015 = [ - dict(type='LoadImageFromFile', color_type='color_ignore_orientation'), - dict( - type='LoadTextAnnotations', - with_bbox=True, - with_mask=True, - poly2mask=False), - dict( - type='ColorJitter', - brightness=32.0 / 255, - saturation=0.5, - contrast=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='RandomScaling', size=800, scale=(3. / 4, 5.
/ 2)), - dict( - type='RandomCropFlip', crop_ratio=0.5, iter_num=1, min_area_ratio=0.2), - dict( - type='RandomCropPolyInstances', - instance_key='gt_masks', - crop_ratio=0.8, - min_side_ratio=0.3), - dict( - type='RandomRotatePolyInstances', - rotate_ratio=0.5, - max_angle=30, - pad_with_fixed_color=False), - dict(type='SquareResizePad', target_size=800, pad_ratio=0.6), - dict(type='RandomFlip', flip_ratio=0.5, direction='horizontal'), - dict(type='Pad', size_divisor=32), - dict( - type='FCENetTargets', - fourier_degree=5, - level_proportion_range=leval_prop_range_icdar2015), - dict( - type='CustomFormatBundle', - keys=['p3_maps', 'p4_maps', 'p5_maps'], - visualize=dict(flag=False, boundary_key=None)), - dict(type='Collect', keys=['img', 'p3_maps', 'p4_maps', 'p5_maps']) -] - -img_scale_icdar2015 = (2260, 2260) -test_pipeline_icdar2015 = [ - dict(type='LoadImageFromFile', color_type='color_ignore_orientation'), - dict( - type='MultiScaleFlipAug', - img_scale=img_scale_icdar2015, # used by Resize - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] - -# for ctw1500 -leval_prop_range_ctw1500 = ((0, 0.25), (0.2, 0.65), (0.55, 1.0)) -train_pipeline_ctw1500 = [ - dict(type='LoadImageFromFile', color_type='color_ignore_orientation'), - dict( - type='LoadTextAnnotations', - with_bbox=True, - with_mask=True, - poly2mask=False), - dict( - type='ColorJitter', - brightness=32.0 / 255, - saturation=0.5, - contrast=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='RandomScaling', size=800, scale=(3. / 4, 5. / 2)), - dict( - type='RandomCropFlip', crop_ratio=0.5, iter_num=1, min_area_ratio=0.2), - dict( - type='RandomCropPolyInstances', - instance_key='gt_masks', - crop_ratio=0.8, - min_side_ratio=0.3), - dict( - type='RandomRotatePolyInstances', - rotate_ratio=0.5, - max_angle=30, - pad_with_fixed_color=False), - dict(type='SquareResizePad', target_size=800, pad_ratio=0.6), - dict(type='RandomFlip', flip_ratio=0.5, direction='horizontal'), - dict(type='Pad', size_divisor=32), - dict( - type='FCENetTargets', - fourier_degree=5, - level_proportion_range=leval_prop_range_ctw1500), - dict( - type='CustomFormatBundle', - keys=['p3_maps', 'p4_maps', 'p5_maps'], - visualize=dict(flag=False, boundary_key=None)), - dict(type='Collect', keys=['img', 'p3_maps', 'p4_maps', 'p5_maps']) -] - -img_scale_ctw1500 = (1080, 736) -test_pipeline_ctw1500 = [ - dict(type='LoadImageFromFile', color_type='color_ignore_orientation'), - dict( - type='MultiScaleFlipAug', - img_scale=img_scale_ctw1500, # used by Resize - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/psenet/psenet_r50_fpnf_600e_ctw1500.py b/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/psenet/psenet_r50_fpnf_600e_ctw1500.py deleted file mode 100644 index 483a2b2e1e7e584dfba26c7c5f506ce544953db8..0000000000000000000000000000000000000000 --- a/spaces/Loren/Streamlit_OCR_comparator/configs/textdet/psenet/psenet_r50_fpnf_600e_ctw1500.py +++ /dev/null @@ -1,35 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/schedules/schedule_adam_step_600e.py', - '../../_base_/det_models/psenet_r50_fpnf.py', 
- '../../_base_/det_datasets/ctw1500.py', - '../../_base_/det_pipelines/psenet_pipeline.py' -] - -model = {{_base_.model_poly}} - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline = {{_base_.train_pipeline}} -test_pipeline_ctw1500 = {{_base_.test_pipeline_ctw1500}} - -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1), - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_ctw1500), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_ctw1500)) - -evaluation = dict(interval=10, metric='hmean-iou') diff --git a/spaces/MAGAer13/mPLUG-Owl2/model_worker.py b/spaces/MAGAer13/mPLUG-Owl2/model_worker.py deleted file mode 100644 index 057fc6d1564acb021b5d21eeab6582ac9f72fc9b..0000000000000000000000000000000000000000 --- a/spaces/MAGAer13/mPLUG-Owl2/model_worker.py +++ /dev/null @@ -1,143 +0,0 @@ -""" -A model worker executes the model. -""" -import argparse -import asyncio -import json -import time -import threading -import uuid - -import requests -import torch -from functools import partial - -from mplug_owl2.constants import WORKER_HEART_BEAT_INTERVAL -from mplug_owl2.utils import (build_logger, server_error_msg, - pretty_print_semaphore) -from mplug_owl2.model.builder import load_pretrained_model -from mplug_owl2.mm_utils import process_images, load_image_from_base64, tokenizer_image_token, KeywordsStoppingCriteria -from mplug_owl2.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN -from transformers import TextIteratorStreamer -from threading import Thread - -GB = 1 << 30 - -worker_id = str(uuid.uuid4())[:6] -logger = build_logger("model_worker", f"model_worker_{worker_id}.log") - -class ModelWorker: - def __init__(self, model_path, model_base, model_name, load_8bit, load_4bit, device): - self.worker_id = worker_id - if model_path.endswith("/"): - model_path = model_path[:-1] - if model_name is None: - model_paths = model_path.split("/") - if model_paths[-1].startswith('checkpoint-'): - self.model_name = model_paths[-2] + "_" + model_paths[-1] - else: - self.model_name = model_paths[-1] - else: - self.model_name = model_name - - self.device = device - logger.info(f"Loading the model {self.model_name} on worker {worker_id} ...") - self.tokenizer, self.model, self.image_processor, self.context_len = load_pretrained_model( - model_path, model_base, self.model_name, load_8bit, load_4bit, device=self.device) - self.is_multimodal = True - - @torch.inference_mode() - def generate_stream(self, params): - tokenizer, model, image_processor = self.tokenizer, self.model, self.image_processor - - prompt = params["prompt"] - ori_prompt = prompt - images = params.get("images", None) - num_image_tokens = 0 - if images is not None and len(images) > 0 and self.is_multimodal: - if len(images) > 0: - if len(images) != prompt.count(DEFAULT_IMAGE_TOKEN): - raise ValueError("Number of images does not match number of <|image|> tokens in prompt") - - images = [load_image_from_base64(image) for image in images] - images = process_images(images, image_processor, model.config) - - if type(images) is list: - images = [image.to(self.model.device, dtype=torch.float16) for image in images] - else: - images = images.to(self.model.device, dtype=torch.float16) - - replace_token = DEFAULT_IMAGE_TOKEN - prompt = 
prompt.replace(DEFAULT_IMAGE_TOKEN, replace_token) - - num_image_tokens = prompt.count(replace_token) * (model.get_model().visual_abstractor.config.num_learnable_queries + 1) - else: - images = None - image_args = {"images": images} - else: - images = None - image_args = {} - - temperature = float(params.get("temperature", 1.0)) - top_p = float(params.get("top_p", 1.0)) - max_context_length = getattr(model.config, 'max_position_embeddings', 4096) - max_new_tokens = min(int(params.get("max_new_tokens", 256)), 1024) - stop_str = params.get("stop", None) - do_sample = True if temperature > 0.001 else False - - input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).to(self.device) - keywords = [stop_str] - stopping_criteria = KeywordsStoppingCriteria(keywords, tokenizer, input_ids) - streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True, timeout=15) - - max_new_tokens = min(max_new_tokens, max_context_length - input_ids.shape[-1] - num_image_tokens) - - if max_new_tokens < 1: - yield json.dumps({"text": ori_prompt + "Exceeds max token length. Please start a new conversation, thanks.", "error_code": 0}).encode() + b"\0" - return - - thread = Thread(target=model.generate, kwargs=dict( - inputs=input_ids, - do_sample=do_sample, - temperature=temperature, - top_p=top_p, - max_new_tokens=max_new_tokens, - streamer=streamer, - stopping_criteria=[stopping_criteria], - use_cache=True, - **image_args - )) - thread.start() - - generated_text = ori_prompt - for new_text in streamer: - generated_text += new_text - if generated_text.endswith(stop_str): - generated_text = generated_text[:-len(stop_str)] - yield json.dumps({"text": generated_text, "error_code": 0}).encode() - - def generate_stream_gate(self, params): - try: - for x in self.generate_stream(params): - yield x - except ValueError as e: - print("Caught ValueError:", e) - ret = { - "text": server_error_msg, - "error_code": 1, - } - yield json.dumps(ret).encode() - except torch.cuda.CudaError as e: - print("Caught torch.cuda.CudaError:", e) - ret = { - "text": server_error_msg, - "error_code": 1, - } - yield json.dumps(ret).encode() - except Exception as e: - print("Caught Unknown Error", e) - ret = { - "text": server_error_msg, - "error_code": 1, - } - yield json.dumps(ret).encode() \ No newline at end of file diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/__init__.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Manjushri/MusicGen/audiocraft/utils/utils.py b/spaces/Manjushri/MusicGen/audiocraft/utils/utils.py deleted file mode 100644 index 86e1448d065fa182ca69aae00d2f2a7eea55d8a4..0000000000000000000000000000000000000000 --- a/spaces/Manjushri/MusicGen/audiocraft/utils/utils.py +++ /dev/null @@ -1,234 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -from concurrent.futures import ProcessPoolExecutor -from functools import wraps -import hashlib -import logging -import typing as tp - -import flashy -import flashy.distrib -import omegaconf -import torch -from torch.nn.utils.rnn import pad_sequence - - -logger = logging.getLogger(__name__) - - -def dict_from_config(cfg: omegaconf.DictConfig) -> dict: - """Convenience function to map an omegaconf configuration to a dictionary. - - Args: - cfg (omegaconf.DictConfig): Original configuration to map to dict. - Returns: - dict: Config as dictionary object. - """ - dct = omegaconf.OmegaConf.to_container(cfg, resolve=True) - assert isinstance(dct, dict) - return dct - - -def random_subset(dataset, max_samples: int, seed: int = 42) -> torch.utils.data.Subset: - if max_samples >= len(dataset): - return dataset - - generator = torch.Generator().manual_seed(seed) - perm = torch.randperm(len(dataset), generator=generator) - return torch.utils.data.Subset(dataset, perm[:max_samples].tolist()) - - -def get_loader(dataset, num_samples: tp.Optional[int], batch_size: int, - num_workers: int, seed: int, **kwargs) -> torch.utils.data.DataLoader: - """Convenience function to load dataset into a dataloader with optional subset sampling. - - Args: - dataset: Dataset to load. - num_samples (Optional[int]): Number of samples to limit subset size. - batch_size (int): Batch size. - num_workers (int): Number of workers for data loading. - seed (int): Random seed. - """ - if num_samples is not None: - dataset = random_subset(dataset, num_samples, seed) - - dataloader = flashy.distrib.loader( - dataset, - batch_size=batch_size, - num_workers=num_workers, - **kwargs - ) - return dataloader - - -def get_dataset_from_loader(dataloader): - dataset = dataloader.dataset - if isinstance(dataset, torch.utils.data.Subset): - return dataset.dataset - else: - return dataset - - -def multinomial(input: torch.Tensor, num_samples: int, replacement=False, *, generator=None): - """torch.multinomial with arbitrary number of dimensions, and number of candidates on the last dimension. - - Args: - input (torch.Tensor): The input tensor containing probabilities. - num_samples (int): Number of samples to draw. - replacement (bool): Whether to draw with replacement or not. - Keywords args: - generator (torch.Generator): A pseudorandom number generator for sampling. - Returns: - torch.Tensor: Last dimension contains num_samples indices - sampled from the multinomial probability distribution - located in the last dimension of tensor input. - """ - input_ = input.reshape(-1, input.shape[-1]) - output_ = torch.multinomial(input_, num_samples=num_samples, replacement=replacement, generator=generator) - output = output_.reshape(*list(input.shape[:-1]), -1) - return output - - -def sample_top_k(probs: torch.Tensor, k: int) -> torch.Tensor: - """Sample next token from top K values along the last dimension of the input probs tensor. - - Args: - probs (torch.Tensor): Input probabilities with token candidates on the last dimension. - k (int): The k in “top-k”. - Returns: - torch.Tensor: Sampled tokens. - """ - top_k_value, _ = torch.topk(probs, k, dim=-1) - min_value_top_k = top_k_value[..., [-1]] - probs *= (probs >= min_value_top_k).float() - probs.div_(probs.sum(dim=-1, keepdim=True)) - next_token = multinomial(probs, num_samples=1) - return next_token - - -def sample_top_p(probs: torch.Tensor, p: float) -> torch.Tensor: - """Sample next token from top P probabilities along the last dimension of the input probs tensor. 
- - Args: - probs (torch.Tensor): Input probabilities with token candidates on the last dimension. - p (int): The p in “top-p”. - Returns: - torch.Tensor: Sampled tokens. - """ - probs_sort, probs_idx = torch.sort(probs, dim=-1, descending=True) - probs_sum = torch.cumsum(probs_sort, dim=-1) - mask = probs_sum - probs_sort > p - probs_sort *= (~mask).float() - probs_sort.div_(probs_sort.sum(dim=-1, keepdim=True)) - next_token = multinomial(probs_sort, num_samples=1) - next_token = torch.gather(probs_idx, -1, next_token) - return next_token - - -class DummyPoolExecutor: - """Dummy pool executor to use when we actually have only 1 worker. - (e.g. instead of ProcessPoolExecutor). - """ - class DummyResult: - def __init__(self, func, *args, **kwargs): - self.func = func - self.args = args - self.kwargs = kwargs - - def result(self): - return self.func(*self.args, **self.kwargs) - - def __init__(self, workers, mp_context=None): - pass - - def submit(self, func, *args, **kwargs): - return DummyPoolExecutor.DummyResult(func, *args, **kwargs) - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_value, exc_tb): - return - - -def get_pool_executor(num_workers: int, mp_context=None): - return ProcessPoolExecutor(num_workers, mp_context) if num_workers > 1 else DummyPoolExecutor(1) - - -def length_to_mask(lengths: torch.Tensor, max_len: tp.Optional[int] = None) -> torch.Tensor: - """Utility function to convert a tensor of sequence lengths to a mask (useful when working on padded sequences). - For example: [3, 5] => [[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]] - - Args: - lengths (torch.Tensor): tensor with lengths - max_len (int): can set the max length manually. Defaults to None. - Returns: - torch.Tensor: mask with 0s where there is pad tokens else 1s - """ - assert len(lengths.shape) == 1, "Length shape should be 1 dimensional." - final_length = lengths.max().item() if not max_len else max_len - final_length = max(final_length, 1) # if all seqs are of len zero we don't want a zero-size tensor - return torch.arange(final_length)[None, :].to(lengths.device) < lengths[:, None] - - -def hash_trick(word: str, vocab_size: int) -> int: - """Hash trick to pair each word with an index - - Args: - word (str): word we wish to convert to an index - vocab_size (int): size of the vocabulary - Returns: - int: index of the word in the embedding LUT - """ - hash = int(hashlib.sha256(word.encode("utf-8")).hexdigest(), 16) - return hash % vocab_size - - -def with_rank_rng(base_seed: int = 1234): - """Decorator for a function so that the function will use a Random Number Generator - whose state depend on the GPU rank. The original RNG state is restored upon returning. - - Args: - base_seed (int): Random seed. - """ - def _decorator(fun: tp.Callable): - @wraps(fun) - def _decorated(*args, **kwargs): - state = torch.get_rng_state() - seed = base_seed ^ flashy.distrib.rank() - torch.manual_seed(seed) - logger.debug('Rank dependent seed set to %d', seed) - try: - return fun(*args, **kwargs) - finally: - torch.set_rng_state(state) - logger.debug('RNG state restored.') - return _decorated - return _decorator - - -def collate(tensors: tp.List[torch.Tensor], dim: int = 0) -> tp.Tuple[torch.Tensor, torch.Tensor]: - """Get a list of tensors and collate them to a single tensor. according to the following logic: - - `dim` specifies the time dimension which will be stacked and padded. - - The output will contain 1 new dimension (dimension index 0) which will be the size of - of the original list. 
- - Args: - tensors (tp.List[torch.Tensor]): List of tensors to collate. - dim (int): Dimension which will be stacked and padded. - Returns: - tp.Tuple[torch.Tensor, torch.Tensor]: - torch.Tensor: Stacked and padded tensor. The output will contain 1 new dimension - (dimension index 0) which will be the size of the original list. - torch.Tensor: Tensor containing length of original tensor sizes (without padding). - """ - tensors = [x.transpose(0, dim) for x in tensors] - lens = torch.LongTensor([len(x) for x in tensors]) - padded_tensors = pad_sequence(tensors) - padded_tensors = padded_tensors.transpose(0, 1) - padded_tensors = padded_tensors.transpose(1, dim + 1) - return padded_tensors, lens diff --git a/spaces/Manjushri/PhotoReal-V3.6/README.md b/spaces/Manjushri/PhotoReal-V3.6/README.md deleted file mode 100644 index 02141e8c4ab20abd0f8eca803afb4405e0dbd7ef..0000000000000000000000000000000000000000 --- a/spaces/Manjushri/PhotoReal-V3.6/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: PhotoReal V3.6 -emoji: 👀 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Marshalls/testmtd/feature_extraction/apply_transforms.py b/spaces/Marshalls/testmtd/feature_extraction/apply_transforms.py deleted file mode 100644 index 1a72080b0f88596fbdfd6073c0838b5364f340b0..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/feature_extraction/apply_transforms.py +++ /dev/null @@ -1,66 +0,0 @@ -import librosa -import numpy as np -from pathlib import Path -import json -import os.path -import sys -import argparse -import pickle - -THIS_DIR = os.path.dirname(os.path.abspath(__file__)) -ROOT_DIR = os.path.abspath(os.path.join(os.path.join(THIS_DIR, os.pardir), os.pardir)) -DATA_DIR = os.path.join(ROOT_DIR, 'data') -EXTRACT_DIR = os.path.join(DATA_DIR, 'extracted_data') -if not os.path.isdir(DATA_DIR): - os.mkdir(DATA_DIR) -if not os.path.isdir(EXTRACT_DIR): - os.mkdir(EXTRACT_DIR) -sys.path.append(ROOT_DIR) -from audio_feature_utils import extract_features_hybrid, extract_features_mel, extract_features_multi_mel -from utils import distribute_tasks - -parser = argparse.ArgumentParser(description="Preprocess songs data") - -parser.add_argument("data_path", type=str, help="features path") -parser.add_argument("--feature_name", metavar='', type=str, default="mel", help="coma separated list of names of features to combine") -parser.add_argument("--transform_name", metavar='', type=str, default="scaler", help="pca_transform,scaler") -parser.add_argument("--pca_dims", metavar='', type=int, default=2, help="number of pca dimensions to keep, if applying pca transform") -parser.add_argument("--keep_feature_name", action="store_true") -parser.add_argument("--new_feature_name", metavar='', type=str, default=None) -parser.add_argument("--replace_existing", action="store_true") -args = parser.parse_args() - -# makes arugments into global variables of the same name, used later in the code -globals().update(vars(args)) -data_path = Path(data_path) - -## distributing tasks accross nodes ## -from mpi4py import MPI -comm = MPI.COMM_WORLD -rank = comm.Get_rank() -size = comm.Get_size() -print(rank) - -#assuming mp3 for now. 
TODO: generalize -candidate_files = sorted(data_path.glob('**/*'+feature_name+'.npy'), key=lambda path: path.parent.__str__()) -tasks = distribute_tasks(candidate_files,rank,size) - -for i in tasks: - path = candidate_files[i] - print(path) - feature_file = path.__str__() - if new_feature_name is None: - if keep_feature_name: - new_feature_name = feature_name - else: - new_feature_name = feature_name+"_applied_"+transform_name - base_filename = feature_file[:-(len(feature_name)+4)] - new_feature_file = base_filename+new_feature_name+".npy" - if replace_existing or not os.path.isfile(new_feature_file): - features = np.load(feature_file) - transform = pickle.load(open(data_path.joinpath(feature_name+'_'+transform_name+'.pkl'), "rb")) - pickle.dump(transform, open(data_path.joinpath(new_feature_name+'_scaler.pkl'), "wb")) - features = transform.transform(features) - if transform_name == "pca_transform": - features = features[:,:pca_dims] - np.save(new_feature_file,features) diff --git a/spaces/Matthijs/mms-tts-demo/uroman/bin/uroman-tsv.sh b/spaces/Matthijs/mms-tts-demo/uroman/bin/uroman-tsv.sh deleted file mode 100644 index adb81f4894a0539d44ad4370eda029694211e82b..0000000000000000000000000000000000000000 --- a/spaces/Matthijs/mms-tts-demo/uroman/bin/uroman-tsv.sh +++ /dev/null @@ -1,28 +0,0 @@ -#!/usr/bin/env bash -# Created by Thamme Gowda on June 17, 2019 - -DIR=$(dirname "${BASH_SOURCE[0]}") # get the directory name -# DIR=$(realpath "${DIR}") # resolve its full path if need be - -if [[ $# -lt 1 || $# -gt 2 ]]; then - >&2 echo "ERROR: invalid args" - >&2 echo "Usage: []" - exit 2 -fi - -INP=$1 -OUT=$2 - -CMD=$DIR/uroman.pl - -function romanize(){ - paste <(cut -f1 $INP) <(cut -f2 $INP | $CMD) -} - -if [[ -n $OUT ]]; then - romanize > $OUT -else - romanize -fi - - diff --git a/spaces/MattiaSangermano/IncentiveAI/README.md b/spaces/MattiaSangermano/IncentiveAI/README.md deleted file mode 100644 index 901f0aa9a23a4c14abe952018938668c569a9deb..0000000000000000000000000000000000000000 --- a/spaces/MattiaSangermano/IncentiveAI/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: IncentiveAI -emoji: 💻 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MattiaSangermano/IncentiveAI/app.py b/spaces/MattiaSangermano/IncentiveAI/app.py deleted file mode 100644 index c9dd79e99e013d5ae21f4092be5f5235829bdef8..0000000000000000000000000000000000000000 --- a/spaces/MattiaSangermano/IncentiveAI/app.py +++ /dev/null @@ -1,51 +0,0 @@ -import gradio as gr -import torch -from transformers import AutoFeatureExtractor, SwinModel -import pandas as pd -import numpy as np -from PIL import Image - - -extractor = AutoFeatureExtractor.from_pretrained("Neruoy/swin-finetuned-food101-e3") -model = SwinModel.from_pretrained("Neruoy/swin-finetuned-food101-e3") -dataset = pd.read_csv("./kb.csv") -embeddings = dataset['embeddings'].apply(eval).tolist() -embeddings = np.array(embeddings) -embeddings = torch.from_numpy(embeddings) -filenames = dataset['filename'].tolist() - - -def compute_scores(emb_one, emb_two): - """Computes cosine similarity between two vectors.""" - scores = torch.nn.functional.cosine_similarity(emb_one, emb_two) - return scores.numpy().tolist() - -def inference(img): - processed_img = extractor(img, return_tensors="pt") - with torch.no_grad(): - query_embeddings = model(**processed_img).last_hidden_state[:, 0].cpu() - - sim_scores 
= compute_scores(embeddings, query_embeddings) - similarity_mapping = dict(zip(filenames, sim_scores)) - - # Sort the mapping dictionary and return `top_k` candidates. - similarity_mapping_sorted = dict( - sorted(similarity_mapping.items(), key=lambda x: x[1], reverse=True) - ) - id_entries = list(similarity_mapping_sorted.keys())[0:3] - scores = list(similarity_mapping_sorted.values())[3:3] - images = [Image.open(f"./data/{directory}") for directory in id_entries] - return images - -title = "IncentiveAI" -description = "Demo" - -demo = gr.Interface( - fn=inference, - inputs=gr.inputs.Image(type="pil"), - outputs=[gr.outputs.Image(type="pil", label="First"), gr.outputs.Image(type="pil",label="Second"), gr.outputs.Image(type="pil",label="Third")], - title=title, - description=description -) -#app = gr.mount_gradio_app(app, demo, path="/incentive/") -demo.launch() \ No newline at end of file diff --git a/spaces/Mcdimmy/Clothing-Identifier/app.py b/spaces/Mcdimmy/Clothing-Identifier/app.py deleted file mode 100644 index 06893fb6a924a6894c6bedf831dfe5a673f2d75a..0000000000000000000000000000000000000000 --- a/spaces/Mcdimmy/Clothing-Identifier/app.py +++ /dev/null @@ -1,18 +0,0 @@ -from fastai.vision.all import * -import gradio as gr - -learn = load_learner('export.pkl') - -categories = ('Blouse', 'Dress', 'Pants', 'Shirt', 'Shorts') -title = "Clothing Identifier" - -def classify_image(img): - pred, idx, probs = learn.predict(img) - return dict(zip(categories, map(float, probs))) - -image = gr.Image(shape=(512, 512)) -label = gr.Label() -examples = ['dress.jpg', 'shirt.jpg', 'pants.jpg'] - -intf = gr.Interface(fn=classify_image, title=title, inputs=image, outputs=label, examples=examples) -intf.launch(inline=False) diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/apis/test.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/apis/test.py deleted file mode 100644 index e574eb7da04f09a59cf99ff953c36468ae87a326..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/apis/test.py +++ /dev/null @@ -1,238 +0,0 @@ -import os.path as osp -import pickle -import shutil -import tempfile - -import annotator.uniformer.mmcv as mmcv -import numpy as np -import torch -import torch.distributed as dist -from annotator.uniformer.mmcv.image import tensor2imgs -from annotator.uniformer.mmcv.runner import get_dist_info - - -def np2tmp(array, temp_file_name=None): - """Save ndarray to local numpy file. - - Args: - array (ndarray): Ndarray to save. - temp_file_name (str): Numpy file name. If 'temp_file_name=None', this - function will generate a file name with tempfile.NamedTemporaryFile - to save ndarray. Default: None. - - Returns: - str: The numpy file name. - """ - - if temp_file_name is None: - temp_file_name = tempfile.NamedTemporaryFile( - suffix='.npy', delete=False).name - np.save(temp_file_name, array) - return temp_file_name - - -def single_gpu_test(model, - data_loader, - show=False, - out_dir=None, - efficient_test=False, - opacity=0.5): - """Test with single GPU. - - Args: - model (nn.Module): Model to be tested. - data_loader (utils.data.Dataloader): Pytorch data loader. - show (bool): Whether show results during inference. Default: False. - out_dir (str, optional): If specified, the results will be dumped into - the directory to save output results. - efficient_test (bool): Whether save the results as local numpy files to - save CPU memory during evaluation. Default: False. 
- opacity(float): Opacity of painted segmentation map. - Default 0.5. - Must be in (0, 1] range. - Returns: - list: The prediction results. - """ - - model.eval() - results = [] - dataset = data_loader.dataset - prog_bar = mmcv.ProgressBar(len(dataset)) - for i, data in enumerate(data_loader): - with torch.no_grad(): - result = model(return_loss=False, **data) - - if show or out_dir: - img_tensor = data['img'][0] - img_metas = data['img_metas'][0].data[0] - imgs = tensor2imgs(img_tensor, **img_metas[0]['img_norm_cfg']) - assert len(imgs) == len(img_metas) - - for img, img_meta in zip(imgs, img_metas): - h, w, _ = img_meta['img_shape'] - img_show = img[:h, :w, :] - - ori_h, ori_w = img_meta['ori_shape'][:-1] - img_show = mmcv.imresize(img_show, (ori_w, ori_h)) - - if out_dir: - out_file = osp.join(out_dir, img_meta['ori_filename']) - else: - out_file = None - - model.module.show_result( - img_show, - result, - palette=dataset.PALETTE, - show=show, - out_file=out_file, - opacity=opacity) - - if isinstance(result, list): - if efficient_test: - result = [np2tmp(_) for _ in result] - results.extend(result) - else: - if efficient_test: - result = np2tmp(result) - results.append(result) - - batch_size = len(result) - for _ in range(batch_size): - prog_bar.update() - return results - - -def multi_gpu_test(model, - data_loader, - tmpdir=None, - gpu_collect=False, - efficient_test=False): - """Test model with multiple gpus. - - This method tests model with multiple gpus and collects the results - under two different modes: gpu and cpu modes. By setting 'gpu_collect=True' - it encodes results to gpu tensors and use gpu communication for results - collection. On cpu mode it saves the results on different gpus to 'tmpdir' - and collects them by the rank 0 worker. - - Args: - model (nn.Module): Model to be tested. - data_loader (utils.data.Dataloader): Pytorch data loader. - tmpdir (str): Path of directory to save the temporary results from - different gpus under cpu mode. - gpu_collect (bool): Option to use either gpu or cpu to collect results. - efficient_test (bool): Whether save the results as local numpy files to - save CPU memory during evaluation. Default: False. - - Returns: - list: The prediction results. 
- """ - - model.eval() - results = [] - dataset = data_loader.dataset - rank, world_size = get_dist_info() - if rank == 0: - prog_bar = mmcv.ProgressBar(len(dataset)) - for i, data in enumerate(data_loader): - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - - if isinstance(result, list): - if efficient_test: - result = [np2tmp(_) for _ in result] - results.extend(result) - else: - if efficient_test: - result = np2tmp(result) - results.append(result) - - if rank == 0: - batch_size = data['img'][0].size(0) - for _ in range(batch_size * world_size): - prog_bar.update() - - # collect results from all ranks - if gpu_collect: - results = collect_results_gpu(results, len(dataset)) - else: - results = collect_results_cpu(results, len(dataset), tmpdir) - return results - - -def collect_results_cpu(result_part, size, tmpdir=None): - """Collect results with CPU.""" - rank, world_size = get_dist_info() - # create a tmp dir if it is not specified - if tmpdir is None: - MAX_LEN = 512 - # 32 is whitespace - dir_tensor = torch.full((MAX_LEN, ), - 32, - dtype=torch.uint8, - device='cuda') - if rank == 0: - tmpdir = tempfile.mkdtemp() - tmpdir = torch.tensor( - bytearray(tmpdir.encode()), dtype=torch.uint8, device='cuda') - dir_tensor[:len(tmpdir)] = tmpdir - dist.broadcast(dir_tensor, 0) - tmpdir = dir_tensor.cpu().numpy().tobytes().decode().rstrip() - else: - mmcv.mkdir_or_exist(tmpdir) - # dump the part result to the dir - mmcv.dump(result_part, osp.join(tmpdir, 'part_{}.pkl'.format(rank))) - dist.barrier() - # collect all parts - if rank != 0: - return None - else: - # load results of all parts from tmp dir - part_list = [] - for i in range(world_size): - part_file = osp.join(tmpdir, 'part_{}.pkl'.format(i)) - part_list.append(mmcv.load(part_file)) - # sort the results - ordered_results = [] - for res in zip(*part_list): - ordered_results.extend(list(res)) - # the dataloader may pad some samples - ordered_results = ordered_results[:size] - # remove tmp dir - shutil.rmtree(tmpdir) - return ordered_results - - -def collect_results_gpu(result_part, size): - """Collect results with GPU.""" - rank, world_size = get_dist_info() - # dump result part to tensor with pickle - part_tensor = torch.tensor( - bytearray(pickle.dumps(result_part)), dtype=torch.uint8, device='cuda') - # gather all result part tensor shape - shape_tensor = torch.tensor(part_tensor.shape, device='cuda') - shape_list = [shape_tensor.clone() for _ in range(world_size)] - dist.all_gather(shape_list, shape_tensor) - # padding result part tensor to max length - shape_max = torch.tensor(shape_list).max() - part_send = torch.zeros(shape_max, dtype=torch.uint8, device='cuda') - part_send[:shape_tensor[0]] = part_tensor - part_recv_list = [ - part_tensor.new_zeros(shape_max) for _ in range(world_size) - ] - # gather all result part - dist.all_gather(part_recv_list, part_send) - - if rank == 0: - part_list = [] - for recv, shape in zip(part_recv_list, shape_list): - part_list.append( - pickle.loads(recv[:shape[0]].cpu().numpy().tobytes())) - # sort the results - ordered_results = [] - for res in zip(*part_list): - ordered_results.extend(list(res)) - # the dataloader may pad some samples - ordered_results = ordered_results[:size] - return ordered_results diff --git a/spaces/MoonQiu/LongerCrafter/lvdm/models/samplers/ddim_mp.py b/spaces/MoonQiu/LongerCrafter/lvdm/models/samplers/ddim_mp.py deleted file mode 100644 index 86e0d45601c2d638a438253aa9f90d4d366012a8..0000000000000000000000000000000000000000 --- 
a/spaces/MoonQiu/LongerCrafter/lvdm/models/samplers/ddim_mp.py +++ /dev/null @@ -1,340 +0,0 @@ -import numpy as np -from tqdm import tqdm -import torch -from lvdm.models.utils_diffusion import make_ddim_sampling_parameters, make_ddim_timesteps -from lvdm.common import noise_like - - -class DDIMSampler(object): - def __init__(self, model, schedule="linear", **kwargs): - super().__init__() - self.model = model - self.ddpm_num_timesteps = model.num_timesteps - self.schedule = schedule - self.counter = 0 - - def register_buffer(self, name, attr): - if type(attr) == torch.Tensor: - if attr.device != torch.device("cuda"): - attr = attr.to(torch.device("cuda")) - setattr(self, name, attr) - - def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True): - self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps, - num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose) - alphas_cumprod = self.model.alphas_cumprod - assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep' - to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device) - - self.register_buffer('betas', to_torch(self.model.betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev)) - self.use_scale = self.model.use_scale - print('DDIM scale', self.use_scale) - - if self.use_scale: - self.register_buffer('scale_arr', to_torch(self.model.scale_arr)) - ddim_scale_arr = self.scale_arr.cpu()[self.ddim_timesteps] - self.register_buffer('ddim_scale_arr', ddim_scale_arr) - ddim_scale_arr = np.asarray([self.scale_arr.cpu()[0]] + self.scale_arr.cpu()[self.ddim_timesteps[:-1]].tolist()) - self.register_buffer('ddim_scale_arr_prev', ddim_scale_arr) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu()))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu()))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu() - 1))) - - # ddim sampling parameters - ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(), - ddim_timesteps=self.ddim_timesteps, - eta=ddim_eta,verbose=verbose) - self.register_buffer('ddim_sigmas', ddim_sigmas) - self.register_buffer('ddim_alphas', ddim_alphas) - self.register_buffer('ddim_alphas_prev', ddim_alphas_prev) - self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. 
- ddim_alphas)) - sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt( - (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * ( - 1 - self.alphas_cumprod / self.alphas_cumprod_prev)) - self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps) - - @torch.no_grad() - def sample(self, - S, - batch_size, - shape, - conditioning=None, - callback=None, - normals_sequence=None, - img_callback=None, - quantize_x0=False, - eta=0., - mask=None, - x0=None, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - verbose=True, - schedule_verbose=False, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1., - unconditional_conditioning=None, - # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ... - **kwargs - ): - - # check condition bs - # if conditioning is not None: - # if isinstance(conditioning, dict): - # try: - # cbs = conditioning[list(conditioning.keys())[0]].shape[0] - # except: - # cbs = conditioning[list(conditioning.keys())[0]][0].shape[0] - - # if cbs != batch_size: - # print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - # else: - # if conditioning.shape[0] != batch_size: - # print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}") - - self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=schedule_verbose) - - # make shape - if len(shape) == 3: - C, H, W = shape - size = (batch_size, C, H, W) - elif len(shape) == 4: - C, T, H, W = shape - size = (batch_size, C, T, H, W) - # print(f'Data shape for DDIM sampling is {size}, eta {eta}') - - samples, intermediates = self.ddim_sampling(conditioning, size, - callback=callback, - img_callback=img_callback, - quantize_denoised=quantize_x0, - mask=mask, x0=x0, - ddim_use_original_steps=False, - noise_dropout=noise_dropout, - temperature=temperature, - score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - x_T=x_T, - log_every_t=log_every_t, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - verbose=verbose, - **kwargs) - return samples, intermediates - - @torch.no_grad() - def ddim_sampling(self, cond, shape, - x_T=None, ddim_use_original_steps=False, - callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, log_every_t=100, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None, verbose=True, - cond_tau=1., target_size=None, start_timesteps=None, - **kwargs): - device = self.model.betas.device - print('ddim device', device) - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - if timesteps is None: - timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps - elif timesteps is not None and not ddim_use_original_steps: - subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1 - timesteps = self.ddim_timesteps[:subset_end] - - intermediates = {'x_inter': [img], 'pred_x0': [img]} - time_range = reversed(range(0,timesteps)) if ddim_use_original_steps else np.flip(timesteps) - total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0] - if verbose: - iterator = tqdm(time_range, desc='DDIM Sampler', total=total_steps) - else: - iterator = time_range - - init_x0 = False - clean_cond = kwargs.pop("clean_cond", 
False) - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((b,), step, device=device, dtype=torch.long) - if start_timesteps is not None: - assert x0 is not None - if step > start_timesteps*time_range[0]: - continue - elif not init_x0: - img = self.model.q_sample(x0, ts) - init_x0 = True - - # use mask to blend noised original latent (img_orig) & new sampled latent (img) - if mask is not None: - assert x0 is not None - if clean_cond: - img_orig = x0 - else: - img_orig = self.model.q_sample(x0, ts) # TODO: deterministic forward pass? - img = img_orig * mask + (1. - mask) * img # keep original & modify use img - - index_clip = int((1 - cond_tau) * total_steps) - if index <= index_clip and target_size is not None: - target_size_ = [target_size[0], target_size[1]//8, target_size[2]//8] - img = torch.nn.functional.interpolate( - img, - size=target_size_, - mode="nearest", - ) - outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps, - quantize_denoised=quantize_denoised, temperature=temperature, - noise_dropout=noise_dropout, score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - x0=x0, - step=i, - **kwargs) - - img, pred_x0 = outs - if callback: callback(i) - if img_callback: img_callback(pred_x0, i) - - if index % log_every_t == 0 or index == total_steps - 1: - intermediates['x_inter'].append(img) - intermediates['pred_x0'].append(pred_x0) - - return img, intermediates - - @torch.no_grad() - def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None, - uc_type=None, conditional_guidance_scale_temporal=None, step=0, **kwargs): - b, *_, device = *x.shape, x.device - if x.dim() == 5: - is_video = True - else: - is_video = False - if unconditional_conditioning is None or unconditional_guidance_scale == 1.: - e_t = self.model.apply_model(x, t, c, **kwargs) # unet denoiser - else: - # with unconditional condition - if step < 5 or step > 15: - e_t = self.model.apply_model(x, t, c, use_injection=True, **kwargs) - e_t_uncond = self.model.apply_model(x, t, unconditional_conditioning, **kwargs) - elif isinstance(c, torch.Tensor): - e_t = self.model.apply_model(x, t, c, **kwargs) - e_t_uncond = self.model.apply_model(x, t, unconditional_conditioning, **kwargs) - elif isinstance(c, dict): - e_t = self.model.apply_model(x, t, c, **kwargs) - e_t_uncond = self.model.apply_model(x, t, unconditional_conditioning, **kwargs) - else: - raise NotImplementedError - # text cfg - if uc_type is None: - e_t = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond) - else: - if uc_type == 'cfg_original': - e_t = e_t + unconditional_guidance_scale * (e_t - e_t_uncond) - elif uc_type == 'cfg_ours': - e_t = e_t + unconditional_guidance_scale * (e_t_uncond - e_t) - else: - raise NotImplementedError - # temporal guidance - if conditional_guidance_scale_temporal is not None: - e_t_temporal = self.model.apply_model(x, t, c, **kwargs) - e_t_image = self.model.apply_model(x, t, c, no_temporal_attn=True, **kwargs) - e_t = e_t + conditional_guidance_scale_temporal * (e_t_temporal - e_t_image) - - if score_corrector is not None: - assert self.model.parameterization == "eps" - e_t = score_corrector.modify_score(self.model, 
e_t, x, t, c, **corrector_kwargs) - - alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas - alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev - sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas - sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas - # select parameters corresponding to the currently considered timestep - - if is_video: - size = (b, 1, 1, 1, 1) - else: - size = (b, 1, 1, 1) - a_t = torch.full(size, alphas[index], device=device) - a_prev = torch.full(size, alphas_prev[index], device=device) - sigma_t = torch.full(size, sigmas[index], device=device) - sqrt_one_minus_at = torch.full(size, sqrt_one_minus_alphas[index],device=device) - - # current prediction for x_0 - pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt() - if quantize_denoised: - pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0) - # direction pointing to x_t - dir_xt = (1. - a_prev - sigma_t**2).sqrt() * e_t - - noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - - alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas - if self.use_scale: - scale_arr = self.model.scale_arr if use_original_steps else self.ddim_scale_arr - scale_t = torch.full(size, scale_arr[index], device=device) - scale_arr_prev = self.model.scale_arr_prev if use_original_steps else self.ddim_scale_arr_prev - scale_t_prev = torch.full(size, scale_arr_prev[index], device=device) - pred_x0 /= scale_t - x_prev = a_prev.sqrt() * scale_t_prev * pred_x0 + dir_xt + noise - else: - x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise - - return x_prev, pred_x0 - - - @torch.no_grad() - def stochastic_encode(self, x0, t, use_original_steps=False, noise=None): - # fast, but does not allow for exact reconstruction - # t serves as an index to gather the correct alphas - if use_original_steps: - sqrt_alphas_cumprod = self.sqrt_alphas_cumprod - sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod - else: - sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas) - sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas - - if noise is None: - noise = torch.randn_like(x0) - - def extract_into_tensor(a, t, x_shape): - b, *_ = t.shape - out = a.gather(-1, t) - return out.reshape(b, *((1,) * (len(x_shape) - 1))) - - return (extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0 + - extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise) - - @torch.no_grad() - def decode(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None, - use_original_steps=False): - - timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps - timesteps = timesteps[:t_start] - - time_range = np.flip(timesteps) - total_steps = timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='Decoding image', total=total_steps) - x_dec = x_latent - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((x_latent.shape[0],), step, device=x_latent.device, dtype=torch.long) - x_dec, _ = self.p_sample_ddim(x_dec, cond, ts, index=index, use_original_steps=use_original_steps, - unconditional_guidance_scale=unconditional_guidance_scale, - 
unconditional_conditioning=unconditional_conditioning) - return x_dec - diff --git a/spaces/MrBodean/VoiceClone/encoder/preprocess.py b/spaces/MrBodean/VoiceClone/encoder/preprocess.py deleted file mode 100644 index 551a8b29c4d84c0e1430f285a1c8b5e10c98ee5f..0000000000000000000000000000000000000000 --- a/spaces/MrBodean/VoiceClone/encoder/preprocess.py +++ /dev/null @@ -1,175 +0,0 @@ -from multiprocess.pool import ThreadPool -from encoder.params_data import * -from encoder.config import librispeech_datasets, anglophone_nationalites -from datetime import datetime -from encoder import audio -from pathlib import Path -from tqdm import tqdm -import numpy as np - - -class DatasetLog: - """ - Registers metadata about the dataset in a text file. - """ - def __init__(self, root, name): - self.text_file = open(Path(root, "Log_%s.txt" % name.replace("/", "_")), "w") - self.sample_data = dict() - - start_time = str(datetime.now().strftime("%A %d %B %Y at %H:%M")) - self.write_line("Creating dataset %s on %s" % (name, start_time)) - self.write_line("-----") - self._log_params() - - def _log_params(self): - from encoder import params_data - self.write_line("Parameter values:") - for param_name in (p for p in dir(params_data) if not p.startswith("__")): - value = getattr(params_data, param_name) - self.write_line("\t%s: %s" % (param_name, value)) - self.write_line("-----") - - def write_line(self, line): - self.text_file.write("%s\n" % line) - - def add_sample(self, **kwargs): - for param_name, value in kwargs.items(): - if not param_name in self.sample_data: - self.sample_data[param_name] = [] - self.sample_data[param_name].append(value) - - def finalize(self): - self.write_line("Statistics:") - for param_name, values in self.sample_data.items(): - self.write_line("\t%s:" % param_name) - self.write_line("\t\tmin %.3f, max %.3f" % (np.min(values), np.max(values))) - self.write_line("\t\tmean %.3f, median %.3f" % (np.mean(values), np.median(values))) - self.write_line("-----") - end_time = str(datetime.now().strftime("%A %d %B %Y at %H:%M")) - self.write_line("Finished on %s" % end_time) - self.text_file.close() - - -def _init_preprocess_dataset(dataset_name, datasets_root, out_dir) -> (Path, DatasetLog): - dataset_root = datasets_root.joinpath(dataset_name) - if not dataset_root.exists(): - print("Couldn\'t find %s, skipping this dataset." % dataset_root) - return None, None - return dataset_root, DatasetLog(out_dir, dataset_name) - - -def _preprocess_speaker_dirs(speaker_dirs, dataset_name, datasets_root, out_dir, extension, - skip_existing, logger): - print("%s: Preprocessing data for %d speakers." % (dataset_name, len(speaker_dirs))) - - # Function to preprocess utterances for one speaker - def preprocess_speaker(speaker_dir: Path): - # Give a name to the speaker that includes its dataset - speaker_name = "_".join(speaker_dir.relative_to(datasets_root).parts) - - # Create an output directory with that name, as well as a txt file containing a - # reference to each source file. - speaker_out_dir = out_dir.joinpath(speaker_name) - speaker_out_dir.mkdir(exist_ok=True) - sources_fpath = speaker_out_dir.joinpath("_sources.txt") - - # There's a possibility that the preprocessing was interrupted earlier, check if - # there already is a sources file. 
- if sources_fpath.exists(): - try: - with sources_fpath.open("r") as sources_file: - existing_fnames = {line.split(",")[0] for line in sources_file} - except: - existing_fnames = {} - else: - existing_fnames = {} - - # Gather all audio files for that speaker recursively - sources_file = sources_fpath.open("a" if skip_existing else "w") - for in_fpath in speaker_dir.glob("**/*.%s" % extension): - # Check if the target output file already exists - out_fname = "_".join(in_fpath.relative_to(speaker_dir).parts) - out_fname = out_fname.replace(".%s" % extension, ".npy") - if skip_existing and out_fname in existing_fnames: - continue - - # Load and preprocess the waveform - wav = audio.preprocess_wav(in_fpath) - if len(wav) == 0: - continue - - # Create the mel spectrogram, discard those that are too short - frames = audio.wav_to_mel_spectrogram(wav) - if len(frames) < partials_n_frames: - continue - - out_fpath = speaker_out_dir.joinpath(out_fname) - np.save(out_fpath, frames) - logger.add_sample(duration=len(wav) / sampling_rate) - sources_file.write("%s,%s\n" % (out_fname, in_fpath)) - - sources_file.close() - - # Process the utterances for each speaker - with ThreadPool(8) as pool: - list(tqdm(pool.imap(preprocess_speaker, speaker_dirs), dataset_name, len(speaker_dirs), - unit="speakers")) - logger.finalize() - print("Done preprocessing %s.\n" % dataset_name) - - -def preprocess_librispeech(datasets_root: Path, out_dir: Path, skip_existing=False): - for dataset_name in librispeech_datasets["train"]["other"]: - # Initialize the preprocessing - dataset_root, logger = _init_preprocess_dataset(dataset_name, datasets_root, out_dir) - if not dataset_root: - return - - # Preprocess all speakers - speaker_dirs = list(dataset_root.glob("*")) - _preprocess_speaker_dirs(speaker_dirs, dataset_name, datasets_root, out_dir, "flac", - skip_existing, logger) - - -def preprocess_voxceleb1(datasets_root: Path, out_dir: Path, skip_existing=False): - # Initialize the preprocessing - dataset_name = "VoxCeleb1" - dataset_root, logger = _init_preprocess_dataset(dataset_name, datasets_root, out_dir) - if not dataset_root: - return - - # Get the contents of the meta file - with dataset_root.joinpath("vox1_meta.csv").open("r") as metafile: - metadata = [line.split("\t") for line in metafile][1:] - - # Select the ID and the nationality, filter out non-anglophone speakers - nationalities = {line[0]: line[3] for line in metadata} - keep_speaker_ids = [speaker_id for speaker_id, nationality in nationalities.items() if - nationality.lower() in anglophone_nationalites] - print("VoxCeleb1: using samples from %d (presumed anglophone) speakers out of %d." % - (len(keep_speaker_ids), len(nationalities))) - - # Get the speaker directories for anglophone speakers only - speaker_dirs = dataset_root.joinpath("wav").glob("*") - speaker_dirs = [speaker_dir for speaker_dir in speaker_dirs if - speaker_dir.name in keep_speaker_ids] - print("VoxCeleb1: found %d anglophone speakers on the disk, %d missing (this is normal)." 
% - (len(speaker_dirs), len(keep_speaker_ids) - len(speaker_dirs))) - - # Preprocess all speakers - _preprocess_speaker_dirs(speaker_dirs, dataset_name, datasets_root, out_dir, "wav", - skip_existing, logger) - - -def preprocess_voxceleb2(datasets_root: Path, out_dir: Path, skip_existing=False): - # Initialize the preprocessing - dataset_name = "VoxCeleb2" - dataset_root, logger = _init_preprocess_dataset(dataset_name, datasets_root, out_dir) - if not dataset_root: - return - - # Get the speaker directories - # Preprocess all speakers - speaker_dirs = list(dataset_root.joinpath("dev", "aac").glob("*")) - _preprocess_speaker_dirs(speaker_dirs, dataset_name, datasets_root, out_dir, "m4a", - skip_existing, logger) diff --git a/spaces/NMEX/rvc-hoyogame-v2/app.py b/spaces/NMEX/rvc-hoyogame-v2/app.py deleted file mode 100644 index 4e286df3b5a8843b0e948686fe94d86c4b336dcb..0000000000000000000000000000000000000000 --- a/spaces/NMEX/rvc-hoyogame-v2/app.py +++ /dev/null @@ -1,518 +0,0 @@ -import os -import glob -import json -import traceback -import logging -import gradio as gr -import numpy as np -import librosa -import torch -import asyncio -import edge_tts -import yt_dlp -import ffmpeg -import subprocess -import sys -import io -import wave -from datetime import datetime -from fairseq import checkpoint_utils -from lib.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) -from vc_infer_pipeline import VC -from config import Config -config = Config() -logging.getLogger("numba").setLevel(logging.WARNING) -limitation = os.getenv("SYSTEM") == "spaces" - -audio_mode = [] -f0method_mode = [] -f0method_info = "" -if limitation is True: - audio_mode = ["Upload audio", "TTS Audio"] - f0method_mode = ["pm", "harvest"] - f0method_info = "PM is fast, Harvest is good but extremely slow. (Default: PM)" -else: - audio_mode = ["Input path", "Upload audio", "Youtube", "TTS Audio"] - f0method_mode = ["pm", "harvest", "crepe"] - f0method_info = "PM is fast, Harvest is good but extremely slow, and Crepe effect is good but requires GPU (Default: PM)" - -def create_vc_fn(model_name, tgt_sr, net_g, vc, if_f0, version, file_index): - def vc_fn( - vc_audio_mode, - vc_input, - vc_upload, - tts_text, - tts_voice, - f0_up_key, - f0_method, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - ): - try: - print(f"Converting using {model_name}...") - if vc_audio_mode == "Input path" or "Youtube" and vc_input != "": - audio, sr = librosa.load(vc_input, sr=16000, mono=True) - elif vc_audio_mode == "Upload audio": - if vc_upload is None: - return "You need to upload an audio", None - sampling_rate, audio = vc_upload - duration = audio.shape[0] / sampling_rate - if duration > 20 and limitation: - return "Please upload an audio file that is less than 20 seconds. 
If you need to generate a longer audio file, please use Colab.", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - elif vc_audio_mode == "TTS Audio": - if len(tts_text) > 100 and limitation: - return "Text is too long", None - if tts_text is None or tts_voice is None: - return "You need to enter text and select a voice", None - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3")) - audio, sr = librosa.load("tts.mp3", sr=16000, mono=True) - vc_input = "tts.mp3" - times = [0, 0, 0] - f0_up_key = int(f0_up_key) - audio_opt = vc.pipeline( - hubert_model, - net_g, - 0, - audio, - vc_input, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - f0_file=None, - ) - info = f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s" - print(f"{model_name} | {info}") - return info, (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - return info, None - return vc_fn - -def load_model(): - categories = [] - with open("weights/folder_info.json", "r", encoding="utf-8") as f: - folder_info = json.load(f) - for category_name, category_info in folder_info.items(): - if not category_info['enable']: - continue - category_title = category_info['title'] - category_folder = category_info['folder_path'] - description = category_info['description'] - models = [] - with open(f"weights/{category_folder}/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for character_name, info in models_info.items(): - if not info['enable']: - continue - model_title = info['title'] - model_name = info['model_path'] - model_author = info.get("author", None) - model_cover = f"weights/{category_folder}/{character_name}/{info['cover']}" - model_index = f"weights/{category_folder}/{character_name}/{info['feature_retrieval_library']}" - cpt = torch.load(f"weights/{category_folder}/{character_name}/{model_name}", map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - model_version = "V1" - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - model_version = "V2" - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) - net_g.eval().to(config.device) - if config.is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, config) - print(f"Model loaded: {character_name} / {info['feature_retrieval_library']} | ({model_version})") - models.append((character_name, model_title, model_author, model_cover, model_version, create_vc_fn(model_name, tgt_sr, net_g, vc, if_f0, version, model_index))) - categories.append([category_title, category_folder, description, models]) - return categories - -def cut_vocal_and_inst(url, audio_provider, split_model): - if url != "": - if not os.path.exists("dl_audio"): - os.mkdir("dl_audio") 
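# A minimal standalone sketch (not taken from this app) of the input normalization the
# "Upload audio" branch of vc_fn applies above: Gradio returns (sampling_rate, PCM array),
# and the pipeline expects 16 kHz mono float32 in [-1, 1]. The helper name is hypothetical
# and an integer PCM dtype is assumed.
import numpy as np
import librosa

def normalize_upload(sampling_rate, audio):
    audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)  # int PCM -> [-1, 1]
    if audio.ndim > 1:
        audio = librosa.to_mono(audio.transpose(1, 0))              # stereo -> mono
    if sampling_rate != 16000:
        audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
    return audio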
- if audio_provider == "Youtube": - ydl_opts = { - 'noplaylist': True, - 'format': 'bestaudio/best', - 'postprocessors': [{ - 'key': 'FFmpegExtractAudio', - 'preferredcodec': 'wav', - }], - "outtmpl": 'dl_audio/youtube_audio', - } - with yt_dlp.YoutubeDL(ydl_opts) as ydl: - ydl.download([url]) - audio_path = "dl_audio/youtube_audio.wav" - if split_model == "htdemucs": - command = f"demucs --two-stems=vocals {audio_path} -o output" - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return "output/htdemucs/youtube_audio/vocals.wav", "output/htdemucs/youtube_audio/no_vocals.wav", audio_path, "output/htdemucs/youtube_audio/vocals.wav" - else: - command = f"demucs --two-stems=vocals -n mdx_extra_q {audio_path} -o output" - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return "output/mdx_extra_q/youtube_audio/vocals.wav", "output/mdx_extra_q/youtube_audio/no_vocals.wav", audio_path, "output/mdx_extra_q/youtube_audio/vocals.wav" - else: - raise gr.Error("URL Required!") - return None, None, None, None - -def combine_vocal_and_inst(audio_data, audio_volume, split_model): - if not os.path.exists("output/result"): - os.mkdir("output/result") - vocal_path = "output/result/output.wav" - output_path = "output/result/combine.mp3" - if split_model == "htdemucs": - inst_path = "output/htdemucs/youtube_audio/no_vocals.wav" - else: - inst_path = "output/mdx_extra_q/youtube_audio/no_vocals.wav" - with wave.open(vocal_path, "w") as wave_file: - wave_file.setnchannels(1) - wave_file.setsampwidth(2) - wave_file.setframerate(audio_data[0]) - wave_file.writeframes(audio_data[1].tobytes()) - command = f'ffmpeg -y -i {inst_path} -i {vocal_path} -filter_complex [1:a]volume={audio_volume}dB[v];[0:a][v]amix=inputs=2:duration=longest -b:a 320k -c:a libmp3lame {output_path}' - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return output_path - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(config.device) - if config.is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - -def change_audio_mode(vc_audio_mode): - if vc_audio_mode == "Input path": - return ( - # Input & Upload - gr.Textbox.update(visible=True), - gr.Checkbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "Upload audio": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=True), - gr.Audio.update(visible=True), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - 
gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "Youtube": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=True), - gr.Textbox.update(visible=True), - gr.Dropdown.update(visible=True), - gr.Button.update(visible=True), - gr.Audio.update(visible=True), - gr.Audio.update(visible=True), - gr.Audio.update(visible=True), - gr.Slider.update(visible=True), - gr.Audio.update(visible=True), - gr.Button.update(visible=True), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "TTS Audio": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=True), - gr.Dropdown.update(visible=True) - ) - else: - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=True), - gr.Audio.update(visible=True), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - -def use_microphone(microphone): - if microphone == True: - return gr.Audio.update(source="microphone") - else: - return gr.Audio.update(source="upload") - -if __name__ == '__main__': - load_hubert() - categories = load_model() - tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices()) - voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list] - with gr.Blocks() as app: - gr.Markdown( - "
\n\n"+ - "# RVC Genshin Impact\n\n"+ - "### Recommended to use Google Colab to use other character and feature.\n\n"+ - "[![Colab](https://img.shields.io/badge/Colab-RVC%20Genshin%20Impact-blue?style=for-the-badge&logo=googlecolab)](https://colab.research.google.com/drive/110kiMZTdP6Ri1lY9-NbQf17GVPPhHyeT?usp=sharing)\n\n"+ - "
\n\n"+ - "[![Repository](https://img.shields.io/badge/Github-Multi%20Model%20RVC%20Inference-blue?style=for-the-badge&logo=github)](https://github.com/ArkanDash/Multi-Model-RVC-Inference)" - ) - for (folder_title, folder, description, models) in categories: - with gr.TabItem(folder_title): - if description: - gr.Markdown(f"###
{description}") - with gr.Tabs(): - if not models: - gr.Markdown("#
No Model Loaded.") - gr.Markdown("##
Please add a model or fix your model path.") - continue - for (name, title, author, cover, model_version, vc_fn) in models: - with gr.TabItem(name): - with gr.Row(): - gr.Markdown( - '
' - f'
{title}
\n'+ - f'
RVC {model_version} Model
\n'+ - (f'
Model author: {author}
' if author else "")+ - (f'' if cover else "")+ - '
' - ) - with gr.Row(): - with gr.Column(): - vc_audio_mode = gr.Dropdown(label="Input voice", choices=audio_mode, allow_custom_value=False, value="Upload audio") - # Input - vc_input = gr.Textbox(label="Input audio path", visible=False) - # Upload - vc_microphone_mode = gr.Checkbox(label="Use Microphone", value=False, visible=True, interactive=True) - vc_upload = gr.Audio(label="Upload audio file", source="upload", visible=True, interactive=True) - # Youtube - vc_download_audio = gr.Dropdown(label="Provider", choices=["Youtube"], allow_custom_value=False, visible=False, value="Youtube", info="Select provider (Default: Youtube)") - vc_link = gr.Textbox(label="Youtube URL", visible=False, info="Example: https://www.youtube.com/watch?v=Nc0sB1Bmf-A", placeholder="https://www.youtube.com/watch?v=...") - vc_split_model = gr.Dropdown(label="Splitter Model", choices=["htdemucs", "mdx_extra_q"], allow_custom_value=False, visible=False, value="htdemucs", info="Select the splitter model (Default: htdemucs)") - vc_split = gr.Button("Split Audio", variant="primary", visible=False) - vc_vocal_preview = gr.Audio(label="Vocal Preview", visible=False) - vc_inst_preview = gr.Audio(label="Instrumental Preview", visible=False) - vc_audio_preview = gr.Audio(label="Audio Preview", visible=False) - # TTS - tts_text = gr.Textbox(visible=False, label="TTS text", info="Text to speech input") - tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female") - with gr.Column(): - vc_transform0 = gr.Number(label="Transpose", value=0, info='Type "12" to change from male to female voice. Type "-12" to change female to male voice') - f0method0 = gr.Radio( - label="Pitch extraction algorithm", - info=f0method_info, - choices=f0method_mode, - value="pm", - interactive=True - ) - index_rate1 = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - info="(Default: 0.7)", - value=0.7, - interactive=True, - ) - filter_radius0 = gr.Slider( - minimum=0, - maximum=7, - label="Apply Median Filtering", - info="The value represents the filter radius and can reduce breathiness.", - value=3, - step=1, - interactive=True, - ) - resample_sr0 = gr.Slider( - minimum=0, - maximum=48000, - label="Resample the output audio", - info="Resample the output audio in post-processing to the final sample rate. Set to 0 for no resampling", - value=0, - step=1, - interactive=True, - ) - rms_mix_rate0 = gr.Slider( - minimum=0, - maximum=1, - label="Volume Envelope", - info="Use the volume envelope of the input to replace or mix with the volume envelope of the output. The closer the ratio is to 1, the more the output envelope is used", - value=1, - interactive=True, - ) - protect0 = gr.Slider( - minimum=0, - maximum=0.5, - label="Voice Protection", - info="Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music. Set to 0.5 to disable. 
Decrease the value to increase protection, but it may reduce indexing accuracy", - value=0.5, - step=0.01, - interactive=True, - ) - with gr.Column(): - vc_log = gr.Textbox(label="Output Information", interactive=False) - vc_output = gr.Audio(label="Output Audio", interactive=False) - vc_convert = gr.Button("Convert", variant="primary") - vc_volume = gr.Slider( - minimum=0, - maximum=10, - label="Vocal volume", - value=4, - interactive=True, - step=1, - info="Adjust vocal volume (Default: 4}", - visible=False - ) - vc_combined_output = gr.Audio(label="Output Combined Audio", visible=False) - vc_combine = gr.Button("Combine",variant="primary", visible=False) - vc_convert.click( - fn=vc_fn, - inputs=[ - vc_audio_mode, - vc_input, - vc_upload, - tts_text, - tts_voice, - vc_transform0, - f0method0, - index_rate1, - filter_radius0, - resample_sr0, - rms_mix_rate0, - protect0, - ], - outputs=[vc_log ,vc_output] - ) - vc_split.click( - fn=cut_vocal_and_inst, - inputs=[vc_link, vc_download_audio, vc_split_model], - outputs=[vc_vocal_preview, vc_inst_preview, vc_audio_preview, vc_input] - ) - vc_combine.click( - fn=combine_vocal_and_inst, - inputs=[vc_output, vc_volume, vc_split_model], - outputs=[vc_combined_output] - ) - vc_microphone_mode.change( - fn=use_microphone, - inputs=vc_microphone_mode, - outputs=vc_upload - ) - vc_audio_mode.change( - fn=change_audio_mode, - inputs=[vc_audio_mode], - outputs=[ - vc_input, - vc_microphone_mode, - vc_upload, - vc_download_audio, - vc_link, - vc_split_model, - vc_split, - vc_vocal_preview, - vc_inst_preview, - vc_audio_preview, - vc_volume, - vc_combined_output, - vc_combine, - tts_text, - tts_voice - ] - ) - app.queue(concurrency_count=1, max_size=20, api_open=config.api).launch(share=config.colab) \ No newline at end of file diff --git a/spaces/NachtYoru/Linaqruf-anything-v3-better-vae/app.py b/spaces/NachtYoru/Linaqruf-anything-v3-better-vae/app.py deleted file mode 100644 index c9d41763c79cbb4bd6d7be45ea74e3e161a49774..0000000000000000000000000000000000000000 --- a/spaces/NachtYoru/Linaqruf-anything-v3-better-vae/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Linaqruf/anything-v3-better-vae").launch() \ No newline at end of file diff --git a/spaces/Nikita22121671/stabilityai-stablecode-instruct-alpha-3b/README.md b/spaces/Nikita22121671/stabilityai-stablecode-instruct-alpha-3b/README.md deleted file mode 100644 index 5121952f288b0ec178b669b4dc0f657008805efc..0000000000000000000000000000000000000000 --- a/spaces/Nikita22121671/stabilityai-stablecode-instruct-alpha-3b/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Stabilityai Stablecode Instruct Alpha 3b -emoji: ⚡ -colorFrom: pink -colorTo: gray -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/OAOA/DifFace/basicsr/utils/img_util.py b/spaces/OAOA/DifFace/basicsr/utils/img_util.py deleted file mode 100644 index fbce5dba5b01deb78f2453edc801a76e6a126998..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/basicsr/utils/img_util.py +++ /dev/null @@ -1,172 +0,0 @@ -import cv2 -import math -import numpy as np -import os -import torch -from torchvision.utils import make_grid - - -def img2tensor(imgs, bgr2rgb=True, float32=True): - """Numpy array to tensor. - - Args: - imgs (list[ndarray] | ndarray): Input images. - bgr2rgb (bool): Whether to change bgr to rgb. - float32 (bool): Whether to change to float32. 
- - Returns: - list[tensor] | tensor: Tensor images. If returned results only have - one element, just return tensor. - """ - - def _totensor(img, bgr2rgb, float32): - if img.shape[2] == 3 and bgr2rgb: - if img.dtype == 'float64': - img = img.astype('float32') - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - img = torch.from_numpy(img.transpose(2, 0, 1)) - if float32: - img = img.float() - return img - - if isinstance(imgs, list): - return [_totensor(img, bgr2rgb, float32) for img in imgs] - else: - return _totensor(imgs, bgr2rgb, float32) - - -def tensor2img(tensor, rgb2bgr=True, out_type=np.uint8, min_max=(0, 1)): - """Convert torch Tensors into image numpy arrays. - - After clamping to [min, max], values will be normalized to [0, 1]. - - Args: - tensor (Tensor or list[Tensor]): Accept shapes: - 1) 4D mini-batch Tensor of shape (B x 3/1 x H x W); - 2) 3D Tensor of shape (3/1 x H x W); - 3) 2D Tensor of shape (H x W). - Tensor channel should be in RGB order. - rgb2bgr (bool): Whether to change rgb to bgr. - out_type (numpy type): output types. If ``np.uint8``, transform outputs - to uint8 type with range [0, 255]; otherwise, float type with - range [0, 1]. Default: ``np.uint8``. - min_max (tuple[int]): min and max values for clamp. - - Returns: - (Tensor or list): 3D ndarray of shape (H x W x C) OR 2D ndarray of - shape (H x W). The channel order is BGR. - """ - if not (torch.is_tensor(tensor) or (isinstance(tensor, list) and all(torch.is_tensor(t) for t in tensor))): - raise TypeError(f'tensor or list of tensors expected, got {type(tensor)}') - - if torch.is_tensor(tensor): - tensor = [tensor] - result = [] - for _tensor in tensor: - _tensor = _tensor.squeeze(0).float().detach().cpu().clamp_(*min_max) - _tensor = (_tensor - min_max[0]) / (min_max[1] - min_max[0]) - - n_dim = _tensor.dim() - if n_dim == 4: - img_np = make_grid(_tensor, nrow=int(math.sqrt(_tensor.size(0))), normalize=False).numpy() - img_np = img_np.transpose(1, 2, 0) - if rgb2bgr: - img_np = cv2.cvtColor(img_np, cv2.COLOR_RGB2BGR) - elif n_dim == 3: - img_np = _tensor.numpy() - img_np = img_np.transpose(1, 2, 0) - if img_np.shape[2] == 1: # gray image - img_np = np.squeeze(img_np, axis=2) - else: - if rgb2bgr: - img_np = cv2.cvtColor(img_np, cv2.COLOR_RGB2BGR) - elif n_dim == 2: - img_np = _tensor.numpy() - else: - raise TypeError(f'Only support 4D, 3D or 2D tensor. But received with dimension: {n_dim}') - if out_type == np.uint8: - # Unlike MATLAB, numpy.unit8() WILL NOT round by default. - img_np = (img_np * 255.0).round() - img_np = img_np.astype(out_type) - result.append(img_np) - if len(result) == 1 and torch.is_tensor(tensor): - result = result[0] - return result - - -def tensor2img_fast(tensor, rgb2bgr=True, min_max=(0, 1)): - """This implementation is slightly faster than tensor2img. - It now only supports torch tensor with shape (1, c, h, w). - - Args: - tensor (Tensor): Now only support torch tensor with (1, c, h, w). - rgb2bgr (bool): Whether to change rgb to bgr. Default: True. - min_max (tuple[int]): min and max values for clamp. - """ - output = tensor.squeeze(0).detach().clamp_(*min_max).permute(1, 2, 0) - output = (output - min_max[0]) / (min_max[1] - min_max[0]) * 255 - output = output.type(torch.uint8).cpu().numpy() - if rgb2bgr: - output = cv2.cvtColor(output, cv2.COLOR_RGB2BGR) - return output - - -def imfrombytes(content, flag='color', float32=False): - """Read an image from bytes. - - Args: - content (bytes): Image bytes got from files or other streams. 
- flag (str): Flags specifying the color type of a loaded image, - candidates are `color`, `grayscale` and `unchanged`. - float32 (bool): Whether to change to float32., If True, will also norm - to [0, 1]. Default: False. - - Returns: - ndarray: Loaded image array. - """ - img_np = np.frombuffer(content, np.uint8) - imread_flags = {'color': cv2.IMREAD_COLOR, 'grayscale': cv2.IMREAD_GRAYSCALE, 'unchanged': cv2.IMREAD_UNCHANGED} - img = cv2.imdecode(img_np, imread_flags[flag]) - if float32: - img = img.astype(np.float32) / 255. - return img - - -def imwrite(img, file_path, params=None, auto_mkdir=True): - """Write image to file. - - Args: - img (ndarray): Image array to be written. - file_path (str): Image file path. - params (None or list): Same as opencv's :func:`imwrite` interface. - auto_mkdir (bool): If the parent folder of `file_path` does not exist, - whether to create it automatically. - - Returns: - bool: Successful or not. - """ - if auto_mkdir: - dir_name = os.path.abspath(os.path.dirname(file_path)) - os.makedirs(dir_name, exist_ok=True) - ok = cv2.imwrite(file_path, img, params) - if not ok: - raise IOError('Failed in writing images.') - - -def crop_border(imgs, crop_border): - """Crop borders of images. - - Args: - imgs (list[ndarray] | ndarray): Images with shape (h, w, c). - crop_border (int): Crop border for each end of height and weight. - - Returns: - list[ndarray]: Cropped images. - """ - if crop_border == 0: - return imgs - else: - if isinstance(imgs, list): - return [v[crop_border:-crop_border, crop_border:-crop_border, ...] for v in imgs] - else: - return imgs[crop_border:-crop_border, crop_border:-crop_border, ...] diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/fairseq_incremental_decoder.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/fairseq_incremental_decoder.py deleted file mode 100644 index cc72a0f8f3da238a8ce846240e5008d91ce1bc1a..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/fairseq_incremental_decoder.py +++ /dev/null @@ -1,118 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -from typing import Dict, Optional - -from fairseq.incremental_decoding_utils import with_incremental_state -from fairseq.models import FairseqDecoder -from torch import Tensor - - -logger = logging.getLogger(__name__) - - -@with_incremental_state -class FairseqIncrementalDecoder(FairseqDecoder): - """Base class for incremental decoders. - - Incremental decoding is a special mode at inference time where the Model - only receives a single timestep of input corresponding to the previous - output token (for teacher forcing) and must produce the next output - *incrementally*. Thus the model must cache any long-term state that is - needed about the sequence, e.g., hidden states, convolutional states, etc. - - Compared to the standard :class:`FairseqDecoder` interface, the incremental - decoder interface allows :func:`forward` functions to take an extra keyword - argument (*incremental_state*) that can be used to cache state across - time-steps. - - The :class:`FairseqIncrementalDecoder` interface also defines the - :func:`reorder_incremental_state` method, which is used during beam search - to select and reorder the incremental state based on the selection of beams. 
- - To learn more about how incremental decoding works, refer to `this blog - `_. - """ - - def __init__(self, dictionary): - super().__init__(dictionary) - - def forward( - self, prev_output_tokens, encoder_out=None, incremental_state=None, **kwargs - ): - """ - Args: - prev_output_tokens (LongTensor): shifted output tokens of shape - `(batch, tgt_len)`, for teacher forcing - encoder_out (dict, optional): output from the encoder, used for - encoder-side attention - incremental_state (dict, optional): dictionary used for storing - state during :ref:`Incremental decoding` - - Returns: - tuple: - - the decoder's output of shape `(batch, tgt_len, vocab)` - - a dictionary with any model-specific outputs - """ - raise NotImplementedError - - def extract_features( - self, prev_output_tokens, encoder_out=None, incremental_state=None, **kwargs - ): - """ - Returns: - tuple: - - the decoder's features of shape `(batch, tgt_len, embed_dim)` - - a dictionary with any model-specific outputs - """ - raise NotImplementedError - - def reorder_incremental_state( - self, - incremental_state: Dict[str, Dict[str, Optional[Tensor]]], - new_order: Tensor, - ): - """Reorder incremental state. - - This will be called when the order of the input has changed from the - previous time step. A typical use case is beam search, where the input - order changes between time steps based on the selection of beams. - """ - pass - - def reorder_incremental_state_scripting( - self, - incremental_state: Dict[str, Dict[str, Optional[Tensor]]], - new_order: Tensor, - ): - """Main entry point for reordering the incremental state. - - Due to limitations in TorchScript, we call this function in - :class:`fairseq.sequence_generator.SequenceGenerator` instead of - calling :func:`reorder_incremental_state` directly. - """ - for module in self.modules(): - if hasattr(module, "reorder_incremental_state"): - result = module.reorder_incremental_state(incremental_state, new_order) - if result is not None: - incremental_state = result - - def set_beam_size(self, beam_size): - """Sets the beam size in the decoder and all children.""" - if getattr(self, "_beam_size", -1) != beam_size: - seen = set() - - def apply_set_beam_size(module): - if ( - module != self - and hasattr(module, "set_beam_size") - and module not in seen - ): - seen.add(module) - module.set_beam_size(beam_size) - - self.apply(apply_set_beam_size) - self._beam_size = beam_size diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/quantization/scalar/utils.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/quantization/scalar/utils.py deleted file mode 100644 index 2ec6af3fcb09ccaf853be15a84ed8181f9e2f546..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/quantization/scalar/utils.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
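# A toy sketch (not fairseq code) of the reorder contract FairseqIncrementalDecoder
# describes above: cached tensors are batch-major, so reordering for beam search is an
# index_select along dim 0 with the newly selected beam order. The cache layout and
# shapes here are illustrative assumptions.
import torch

cache = {"prev_key": torch.randn(4, 8, 16)}   # (batch * beam, heads, head_dim), toy sizes
new_order = torch.tensor([2, 2, 0, 1])        # beams kept at this decoding step
cache["prev_key"] = cache["prev_key"].index_select(0, new_order)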
- -import logging -from operator import attrgetter - -import torch.distributed as dist -import torch.nn as nn - -from ..pq.utils import attrsetter, get_layers -from .modules import ActivationQuantizer, IntConv2d, IntEmbedding, IntLinear - - -MAPPING = {nn.Linear: IntLinear, nn.Embedding: IntEmbedding, nn.Conv2d: IntConv2d} - - -def quantize_model_(model, p=0.2, bits=8, update_step=3000, method="histogram", remove_weights=False): - """ - Replaces all modules with their scalar quantized counterpart and - registers hooks to quantize the post-ativations of those modules. - - Args: - - model: a nn.Module - - p: amount of noise (0 for no noise, 1 to quantize all the weights/activations) - - bits: number of bits - - update_step: update quantization parameters every update_step steps - """ - # quantize all layers - # remove weights indicates whether the weights extension should be removed, in addition to - # weight_orig and weight extension on names - quantized_layers = get_layers(model, "(.*?)", remove_weights=remove_weights) - - for layer in quantized_layers: - - # book-keeping - is_master_process = (not dist.is_initialized()) or ( - dist.is_initialized() and dist.get_rank() == 0 - ) - - # recover module - module = attrgetter(layer)(model) - if is_master_process: - logging.info( - f"Quantizing layer {layer} with bits={bits} and QuantNoise={p}" - ) - - # quantization params - q_params = { - "p": p, - "update_step": update_step, - "bits": bits, - "method": method, - "counter": 0, - } - - # instantiate the quantized counterpart - if isinstance(module, tuple(MAPPING.keys())): - QuantizedModule = MAPPING[module.__class__] - quantized_module = QuantizedModule.__new__(QuantizedModule) - params = module.__dict__ - params.update(q_params) - quantized_module.__dict__.update(params) - - else: - if is_master_process: - logging.info(f"Module {module} not yet supported for quantization") - continue - - # activation quantization - a_q = ActivationQuantizer(quantized_module, p=0, bits=bits, method=method) - - # replace layer by its quantized counterpart - attrsetter(layer)(model, quantized_module) - - # return name of quantized layers - return quantized_layers diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/scripts/compare_namespaces.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/scripts/compare_namespaces.py deleted file mode 100644 index bc24db624f8db36f546c263ba3a806dae6d466bf..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/scripts/compare_namespaces.py +++ /dev/null @@ -1,46 +0,0 @@ -#!/usr/bin/env python -"""Helper script to compare two argparse.Namespace objects.""" - -from argparse import Namespace # noqa - - -def main(): - - ns1 = eval(input("Namespace 1: ")) - ns2 = eval(input("Namespace 2: ")) - - def keys(ns): - ks = set() - for k in dir(ns): - if not k.startswith("_"): - ks.add(k) - return ks - - k1 = keys(ns1) - k2 = keys(ns2) - - def print_keys(ks, ns1, ns2=None): - for k in ks: - if ns2 is None: - print("{}\t{}".format(k, getattr(ns1, k, None))) - else: - print( - "{}\t{}\t{}".format(k, getattr(ns1, k, None), getattr(ns2, k, None)) - ) - - print("Keys unique to namespace 1:") - print_keys(k1 - k2, ns1) - print() - - print("Keys unique to namespace 2:") - print_keys(k2 - k1, ns2) - print() - - print("Overlapping keys with different values:") - ks = [k for k in k1 & k2 if getattr(ns1, k, "None") != getattr(ns2, k, "None")] - print_keys(ks, ns1, ns2) - print() - - -if __name__ == "__main__": - main() diff --git 
a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/linformer/linformer_src/models/__init__.py b/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/linformer/linformer_src/models/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/checkpoint_activations.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/checkpoint_activations.py deleted file mode 100644 index 7489e09eb79b595aef674914556018d7f0a4efbf..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/modules/checkpoint_activations.py +++ /dev/null @@ -1,236 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import functools -from typing import Any, Dict, List, Tuple, Union - -import torch -import torch.utils.checkpoint as checkpoint -from fairseq import utils - - -def checkpoint_wrapper(m, offload_to_cpu=False): - """ - A friendlier wrapper for performing activation checkpointing. - - Compared to the PyTorch version, this version: - - wraps an nn.Module, so that all subsequent calls will use checkpointing - - handles keyword arguments in the forward - - handles non-Tensor outputs from the forward - - Usage:: - - checkpointed_module = checkpoint_wrapper(my_module, offload_to_cpu=True) - a, b = checkpointed_module(x, y=3, z=torch.Tensor([1])) - """ - # should I check whether original_forward has already been set? - assert not hasattr( - m, "precheckpoint_forward" - ), "checkpoint function has already been applied?" - m.precheckpoint_forward = m.forward - m.forward = functools.partial( - _checkpointed_forward, - m.precheckpoint_forward, # original_forward - offload_to_cpu, - ) - return m - - -def unwrap_checkpoint(m: torch.nn.Module): - """ - unwrap a module and its children from checkpoint_wrapper - """ - for module in m.modules(): - if hasattr(module, "precheckpoint_forward"): - module.forward = module.precheckpoint_forward - del module.precheckpoint_forward - return m - - -def _checkpointed_forward(original_forward, offload_to_cpu, *args, **kwargs): - # Autograd Functions in PyTorch work best with positional args, since - # the backward must return gradients (or None) for every input argument. - # We can flatten keyword arguments to make this easier. 
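    # parent_ctx_dict does double duty below: besides carrying the offload flag,
    # CheckpointFunction.forward uses it to hand non-Tensor outputs back by reference
    # ("packed_non_tensor_outputs"), since an autograd Function can only return Tensors.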
- kwarg_keys, flat_args = pack_kwargs(*args, **kwargs) - parent_ctx_dict = {"offload": offload_to_cpu} - output = CheckpointFunction.apply( - original_forward, parent_ctx_dict, kwarg_keys, *flat_args - ) - if isinstance(output, torch.Tensor): - return output - else: - packed_non_tensor_outputs = parent_ctx_dict["packed_non_tensor_outputs"] - if packed_non_tensor_outputs: - output = unpack_non_tensors(output, packed_non_tensor_outputs) - return output - - -def pack_kwargs(*args, **kwargs) -> Tuple[List[str], List[Any]]: - """ - Usage:: - - kwarg_keys, flat_args = pack_kwargs(1, 2, a=3, b=4) - args, kwargs = unpack_kwargs(kwarg_keys, flat_args) - assert args == [1, 2] - assert kwargs == {"a": 3, "b": 4} - """ - kwarg_keys = [] - flat_args = list(args) - for k, v in kwargs.items(): - kwarg_keys.append(k) - flat_args.append(v) - return kwarg_keys, flat_args - - -def unpack_kwargs( - kwarg_keys: List[str], flat_args: List[Any] -) -> Tuple[List[Any], Dict[str, Any]]: - if len(kwarg_keys) == 0: - return flat_args, {} - args = flat_args[: -len(kwarg_keys)] - kwargs = {k: v for k, v in zip(kwarg_keys, flat_args[-len(kwarg_keys) :])} - return args, kwargs - - -def split_non_tensors( - mixed: Union[torch.Tensor, Tuple[Any]] -) -> Tuple[Tuple[torch.Tensor], Dict[str, List[Any]]]: - """ - Usage:: - - x = torch.Tensor([1]) - y = torch.Tensor([2]) - tensors, packed_non_tensors = split_non_tensors((x, y, None, 3)) - recon = unpack_non_tensors(tensors, packed_non_tensors) - assert recon == (x, y, None, 3) - """ - if isinstance(mixed, torch.Tensor): - return (mixed,), None - tensors = [] - packed_non_tensors = {"is_tensor": [], "objects": []} - for o in mixed: - if isinstance(o, torch.Tensor): - packed_non_tensors["is_tensor"].append(True) - tensors.append(o) - else: - packed_non_tensors["is_tensor"].append(False) - packed_non_tensors["objects"].append(o) - return tuple(tensors), packed_non_tensors - - -def unpack_non_tensors( - tensors: Tuple[torch.Tensor], - packed_non_tensors: Dict[str, List[Any]], -) -> Tuple[Any]: - if packed_non_tensors is None: - return tensors - assert isinstance(packed_non_tensors, dict) - mixed = [] - is_tensor_list = packed_non_tensors["is_tensor"] - objects = packed_non_tensors["objects"] - assert len(tensors) + len(objects) == len(is_tensor_list) - obj_i = tnsr_i = 0 - for is_tensor in is_tensor_list: - if is_tensor: - mixed.append(tensors[tnsr_i]) - tnsr_i += 1 - else: - mixed.append(objects[obj_i]) - obj_i += 1 - return tuple(mixed) - - -class CheckpointFunction(torch.autograd.Function): - """Similar to the torch version, but support non-Tensor outputs. - - The caller is expected to provide a dict (*parent_ctx_dict*) that will hold - the non-Tensor outputs. These should be combined with the Tensor *outputs* - by calling ``unpack_non_tensors``. 
- """ - - @staticmethod - def forward(ctx, run_function, parent_ctx_dict, kwarg_keys, *args): - if torch.is_grad_enabled(): # grad may be disabled, e.g., during validation - checkpoint.check_backward_validity(args) - - ctx.run_function = run_function - ctx.kwarg_keys = kwarg_keys - ctx.fwd_rng_state = utils.get_rng_state() - - tensor_inputs, packed_non_tensor_inputs = split_non_tensors(args) - if parent_ctx_dict["offload"]: - ctx.fwd_device = tuple(x.device for x in tensor_inputs) - ctx.grad_requirements = tuple(x.requires_grad for x in tensor_inputs) - tensor_inputs = tuple(x.to(torch.device("cpu"), non_blocking=True) for x in tensor_inputs) - - else: - ctx.fwd_device, ctx.grad_requirements = None, None - - ctx.save_for_backward(*tensor_inputs) - ctx.packed_non_tensor_inputs = packed_non_tensor_inputs - - with torch.no_grad(): - unpacked_args, unpacked_kwargs = unpack_kwargs(kwarg_keys, args) - outputs = run_function(*unpacked_args, **unpacked_kwargs) - - if isinstance(outputs, torch.Tensor): - return outputs - else: - # Autograd Functions don't like non-Tensor outputs. We can split the - # non-Tensor and Tensor outputs, returning the former by reference - # through *parent_ctx_dict* and returning the latter directly. - outputs, packed_non_tensor_outputs = split_non_tensors(outputs) - parent_ctx_dict["packed_non_tensor_outputs"] = packed_non_tensor_outputs - return outputs - - @staticmethod - def backward(ctx, *args): - if not torch.autograd._is_checkpoint_valid(): - raise RuntimeError( - "Checkpointing is not compatible with .grad(), please use .backward() if possible" - ) - - tensor_inputs: Tuple = ctx.saved_tensors - tensor_inputs = checkpoint.detach_variable(tensor_inputs) - if ctx.fwd_device is not None: - tensor_inputs = [ - t.to(ctx.fwd_device[i], non_blocking=True) for i, t in enumerate(tensor_inputs) - ] - for i, need_grad in enumerate(ctx.grad_requirements): - tensor_inputs[i].requires_grad = need_grad - inputs = unpack_non_tensors(tensor_inputs, ctx.packed_non_tensor_inputs) - - # Store the current states. - bwd_rng_state = utils.get_rng_state() - - # Set the states to what it used to be before the forward pass. - utils.set_rng_state(ctx.fwd_rng_state) - - with torch.enable_grad(): - unpacked_args, unpacked_kwargs = unpack_kwargs(ctx.kwarg_keys, inputs) - outputs = ctx.run_function(*unpacked_args, **unpacked_kwargs) - tensor_outputs, _ = split_non_tensors(outputs) - # Set the states back to what it was at the start of this function. - utils.set_rng_state(bwd_rng_state) - - # Run backward() with only Tensors that require grad - outputs_with_grad = [] - args_with_grad = [] - for i in range(len(tensor_outputs)): - if tensor_outputs[i].requires_grad: - outputs_with_grad.append(tensor_outputs[i]) - args_with_grad.append(args[i]) - if len(outputs_with_grad) == 0: - raise RuntimeError( - "None of the outputs have requires_grad=True, " - "this checkpoint() is not necessary" - ) - - torch.autograd.backward(outputs_with_grad, args_with_grad) - - grads = tuple( - inp.grad if isinstance(inp, torch.Tensor) else None for inp in inputs - ) - return (None, None, None) + grads diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/optim/adadelta.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/optim/adadelta.py deleted file mode 100644 index f1a21549770f0904a6a40a42ff7eb52811f1bfbe..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/optim/adadelta.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch.optim - -from . import LegacyFairseqOptimizer, register_optimizer - - -@register_optimizer("adadelta") -class Adadelta(LegacyFairseqOptimizer): - def __init__(self, args, params): - super().__init__(args) - self._optimizer = torch.optim.Adadelta(params, **self.optimizer_config) - - @staticmethod - def add_args(parser): - """Add optimizer-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--adadelta-rho', type=float, default=0.9, metavar='RHO', - help='coefficient used for computing a running average of squared gradients') - parser.add_argument('--adadelta-eps', type=float, default=1e-6, metavar='EPS', - help='term added to the denominator to improve numerical stability') - parser.add_argument('--weight-decay', '--wd', default=0.0, type=float, metavar='WD', - help='weight decay') - parser.add_argument('--anneal-eps', action='store_true', help='flag to anneal eps') - # fmt: on - - @property - def optimizer_config(self): - """ - Return a kwarg dictionary that will be used to override optimizer - args stored in checkpoints. This allows us to load a checkpoint and - resume training using a different set of optimizer args, e.g., with a - different learning rate. - """ - return { - "lr": self.args.lr[0], - "rho": self.args.adadelta_rho, - "eps": self.args.adadelta_eps, - "weight_decay": self.args.weight_decay, - } - - @property - def supports_flat_params(self): - return True diff --git a/spaces/OFA-Sys/OFA-vqa/utils/cider/pyciderevalcap/__init__.py b/spaces/OFA-Sys/OFA-vqa/utils/cider/pyciderevalcap/__init__.py deleted file mode 100644 index 3f7d85bba884ea8f83fc6ab2a1e6ade80d98d4d9..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/utils/cider/pyciderevalcap/__init__.py +++ /dev/null @@ -1 +0,0 @@ -__author__ = 'tylin' diff --git a/spaces/Olivier-Truong/faster-whisper-webui-v2/tests/vad_test.py b/spaces/Olivier-Truong/faster-whisper-webui-v2/tests/vad_test.py deleted file mode 100644 index c72492b1e7f9183c7a452784facb2cdf6c1bf0e2..0000000000000000000000000000000000000000 --- a/spaces/Olivier-Truong/faster-whisper-webui-v2/tests/vad_test.py +++ /dev/null @@ -1,72 +0,0 @@ -import unittest -import numpy as np -import sys - -sys.path.append('../whisper-webui') -#print("Sys path: " + str(sys.path)) - -from src.whisper.abstractWhisperContainer import LambdaWhisperCallback -from src.vad import AbstractTranscription, TranscriptionConfig, VadSileroTranscription - -class TestVad(unittest.TestCase): - def __init__(self, *args, **kwargs): - super(TestVad, self).__init__(*args, **kwargs) - self.transcribe_calls = [] - - def test_transcript(self): - mock = MockVadTranscription(mock_audio_length=120) - config = TranscriptionConfig() - - self.transcribe_calls.clear() - result = mock.transcribe("mock", LambdaWhisperCallback(lambda segment, _1, _2, _3, _4: self.transcribe_segments(segment)), config) - - self.assertListEqual(self.transcribe_calls, [ - [30, 30], - [100, 100] - ]) - - self.assertListEqual(result['segments'], - [{'end': 50.0, 'start': 40.0, 'text': 'Hello world '}, - {'end': 120.0, 'start': 110.0, 'text': 'Hello world '}] - ) - - def transcribe_segments(self, segment): - self.transcribe_calls.append(segment.tolist()) - - # Dummy text - return { - 'text': "Hello world ", - 'segments': [ - { - "start": 10.0, - "end": 20.0, - "text": "Hello world " - } - ], - 'language': "" - } - -class 
MockVadTranscription(AbstractTranscription): - def __init__(self, mock_audio_length: float = 1000): - super().__init__() - self.mock_audio_length = mock_audio_length - - def get_audio_segment(self, str, start_time: str = None, duration: str = None): - start_time_seconds = float(start_time.removesuffix("s")) - duration_seconds = float(duration.removesuffix("s")) - - # For mocking, this just returns a simple numppy array - return np.array([start_time_seconds, duration_seconds], dtype=np.float64) - - def get_transcribe_timestamps(self, audio: str, config: TranscriptionConfig, start_time: float, duration: float): - result = [] - - result.append( { 'start': 30, 'end': 60 } ) - result.append( { 'start': 100, 'end': 200 } ) - return result - - def get_audio_duration(self, audio: str, config: TranscriptionConfig): - return self.mock_audio_length - -if __name__ == '__main__': - unittest.main() \ No newline at end of file diff --git a/spaces/Omnibus/game-test/README.md b/spaces/Omnibus/game-test/README.md deleted file mode 100644 index bfba0eb824b0ab8136d312f0bf7a6e089abe2d17..0000000000000000000000000000000000000000 --- a/spaces/Omnibus/game-test/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Game Test -emoji: 🐢 -colorFrom: purple -colorTo: purple -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: false - ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/OnabajoMonsurat/Medical_Diagnosis_Chatbot/README.md b/spaces/OnabajoMonsurat/Medical_Diagnosis_Chatbot/README.md deleted file mode 100644 index a8a71e850807875245015217f124559d7fb404bc..0000000000000000000000000000000000000000 --- a/spaces/OnabajoMonsurat/Medical_Diagnosis_Chatbot/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Medical Diagnosis Chatbot -emoji: 🏢 -colorFrom: purple -colorTo: green -sdk: gradio -sdk_version: 3.40.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/segm_lib/nn/modules/batchnorm.py b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/segm_lib/nn/modules/batchnorm.py deleted file mode 100644 index 18318965335b37cc671004a6aceda3229dc7b477..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/segm_lib/nn/modules/batchnorm.py +++ /dev/null @@ -1,329 +0,0 @@ -# -*- coding: utf-8 -*- -# File : batchnorm.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. 
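# The synchronized BatchNorm below reduces per-replica (sum, sum of squares, count) and
# recovers the mean and unbiased variance on the master as mean = sum / n and
# var = (ssum - sum * mean) / (n - 1). A quick standalone check of that identity on toy
# data (not part of this file):
import torch

x = torch.randn(1000)
n, s, ss = x.numel(), x.sum(), (x ** 2).sum()
mean = s / n
var = (ss - s * mean) / (n - 1)
assert torch.allclose(mean, x.mean(), atol=1e-5)
assert torch.allclose(var, x.var(), atol=1e-4)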
- -import collections - -import torch -import torch.nn.functional as F - -from torch.nn.modules.batchnorm import _BatchNorm -from torch.nn.parallel._functions import ReduceAddCoalesced, Broadcast - -from .comm import SyncMaster - -__all__ = ['SynchronizedBatchNorm1d', 'SynchronizedBatchNorm2d', 'SynchronizedBatchNorm3d'] - - -def _sum_ft(tensor): - """sum over the first and last dimention""" - return tensor.sum(dim=0).sum(dim=-1) - - -def _unsqueeze_ft(tensor): - """add new dementions at the front and the tail""" - return tensor.unsqueeze(0).unsqueeze(-1) - - -_ChildMessage = collections.namedtuple('_ChildMessage', ['sum', 'ssum', 'sum_size']) -_MasterMessage = collections.namedtuple('_MasterMessage', ['sum', 'inv_std']) - - -class _SynchronizedBatchNorm(_BatchNorm): - def __init__(self, num_features, eps=1e-5, momentum=0.001, affine=True): - super(_SynchronizedBatchNorm, self).__init__(num_features, eps=eps, momentum=momentum, affine=affine) - - self._sync_master = SyncMaster(self._data_parallel_master) - - self._is_parallel = False - self._parallel_id = None - self._slave_pipe = None - - # customed batch norm statistics - self._moving_average_fraction = 1. - momentum - self.register_buffer('_tmp_running_mean', torch.zeros(self.num_features)) - self.register_buffer('_tmp_running_var', torch.ones(self.num_features)) - self.register_buffer('_running_iter', torch.ones(1)) - self._tmp_running_mean = self.running_mean.clone() * self._running_iter - self._tmp_running_var = self.running_var.clone() * self._running_iter - - def forward(self, input): - # If it is not parallel computation or is in evaluation mode, use PyTorch's implementation. - if not (self._is_parallel and self.training): - return F.batch_norm( - input, self.running_mean, self.running_var, self.weight, self.bias, - self.training, self.momentum, self.eps) - - # Resize the input to (B, C, -1). - input_shape = input.size() - input = input.view(input.size(0), self.num_features, -1) - - # Compute the sum and square-sum. - sum_size = input.size(0) * input.size(2) - input_sum = _sum_ft(input) - input_ssum = _sum_ft(input ** 2) - - # Reduce-and-broadcast the statistics. - if self._parallel_id == 0: - mean, inv_std = self._sync_master.run_master(_ChildMessage(input_sum, input_ssum, sum_size)) - else: - mean, inv_std = self._slave_pipe.run_slave(_ChildMessage(input_sum, input_ssum, sum_size)) - - # Compute the output. - if self.affine: - # MJY:: Fuse the multiplication for speed. - output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std * self.weight) + _unsqueeze_ft(self.bias) - else: - output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std) - - # Reshape it. - return output.view(input_shape) - - def __data_parallel_replicate__(self, ctx, copy_id): - self._is_parallel = True - self._parallel_id = copy_id - - # parallel_id == 0 means master device. 
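        # Replica 0 keeps the SyncMaster and will run _data_parallel_master to gather every
        # replica's (sum, ssum, count); the other replicas register a slave pipe here and
        # later block in run_slave until the master broadcasts (mean, inv_std) back.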
- if self._parallel_id == 0: - ctx.sync_master = self._sync_master - else: - self._slave_pipe = ctx.sync_master.register_slave(copy_id) - - def _data_parallel_master(self, intermediates): - """Reduce the sum and square-sum, compute the statistics, and broadcast it.""" - intermediates = sorted(intermediates, key=lambda i: i[1].sum.get_device()) - - to_reduce = [i[1][:2] for i in intermediates] - to_reduce = [j for i in to_reduce for j in i] # flatten - target_gpus = [i[1].sum.get_device() for i in intermediates] - - sum_size = sum([i[1].sum_size for i in intermediates]) - sum_, ssum = ReduceAddCoalesced.apply(target_gpus[0], 2, *to_reduce) - - mean, inv_std = self._compute_mean_std(sum_, ssum, sum_size) - - broadcasted = Broadcast.apply(target_gpus, mean, inv_std) - - outputs = [] - for i, rec in enumerate(intermediates): - outputs.append((rec[0], _MasterMessage(*broadcasted[i*2:i*2+2]))) - - return outputs - - def _add_weighted(self, dest, delta, alpha=1, beta=1, bias=0): - """return *dest* by `dest := dest*alpha + delta*beta + bias`""" - return dest * alpha + delta * beta + bias - - def _compute_mean_std(self, sum_, ssum, size): - """Compute the mean and standard-deviation with sum and square-sum. This method - also maintains the moving average on the master device.""" - assert size > 1, 'BatchNorm computes unbiased standard-deviation, which requires size > 1.' - mean = sum_ / size - sumvar = ssum - sum_ * mean - unbias_var = sumvar / (size - 1) - bias_var = sumvar / size - - self._tmp_running_mean = self._add_weighted(self._tmp_running_mean, mean.data, alpha=self._moving_average_fraction) - self._tmp_running_var = self._add_weighted(self._tmp_running_var, unbias_var.data, alpha=self._moving_average_fraction) - self._running_iter = self._add_weighted(self._running_iter, 1, alpha=self._moving_average_fraction) - - self.running_mean = self._tmp_running_mean / self._running_iter - self.running_var = self._tmp_running_var / self._running_iter - - return mean, bias_var.clamp(self.eps) ** -0.5 - - -class SynchronizedBatchNorm1d(_SynchronizedBatchNorm): - r"""Applies Synchronized Batch Normalization over a 2d or 3d input that is seen as a - mini-batch. - - .. math:: - - y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta - - This module differs from the built-in PyTorch BatchNorm1d as the mean and - standard-deviation are reduced across all devices during training. - - For example, when one uses `nn.DataParallel` to wrap the network during - training, PyTorch's implementation normalize the tensor on each device using - the statistics only on that device, which accelerated the computation and - is also easy to implement, but the statistics might be inaccurate. - Instead, in this synchronized version, the statistics will be computed - over all training samples distributed on multiple devices. - - Note that, for one-GPU or CPU-only case, this module behaves exactly same - as the built-in PyTorch implementation. - - The mean and standard-deviation are calculated per-dimension over - the mini-batches and gamma and beta are learnable parameter vectors - of size C (where C is the input size). - - During training, this layer keeps a running estimate of its computed mean - and variance. The running sum is kept with a default momentum of 0.1. - - During evaluation, this running mean/variance is used for normalization. 
- - Because the BatchNorm is done over the `C` dimension, computing statistics - on `(N, L)` slices, it's common terminology to call this Temporal BatchNorm - - Args: - num_features: num_features from an expected input of size - `batch_size x num_features [x width]` - eps: a value added to the denominator for numerical stability. - Default: 1e-5 - momentum: the value used for the running_mean and running_var - computation. Default: 0.1 - affine: a boolean value that when set to ``True``, gives the layer learnable - affine parameters. Default: ``True`` - - Shape: - - Input: :math:`(N, C)` or :math:`(N, C, L)` - - Output: :math:`(N, C)` or :math:`(N, C, L)` (same shape as input) - - Examples: - >>> # With Learnable Parameters - >>> m = SynchronizedBatchNorm1d(100) - >>> # Without Learnable Parameters - >>> m = SynchronizedBatchNorm1d(100, affine=False) - >>> input = torch.autograd.Variable(torch.randn(20, 100)) - >>> output = m(input) - """ - - def _check_input_dim(self, input): - if input.dim() != 2 and input.dim() != 3: - raise ValueError('expected 2D or 3D input (got {}D input)' - .format(input.dim())) - super(SynchronizedBatchNorm1d, self)._check_input_dim(input) - - -class SynchronizedBatchNorm2d(_SynchronizedBatchNorm): - r"""Applies Batch Normalization over a 4d input that is seen as a mini-batch - of 3d inputs - - .. math:: - - y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta - - This module differs from the built-in PyTorch BatchNorm2d as the mean and - standard-deviation are reduced across all devices during training. - - For example, when one uses `nn.DataParallel` to wrap the network during - training, PyTorch's implementation normalize the tensor on each device using - the statistics only on that device, which accelerated the computation and - is also easy to implement, but the statistics might be inaccurate. - Instead, in this synchronized version, the statistics will be computed - over all training samples distributed on multiple devices. - - Note that, for one-GPU or CPU-only case, this module behaves exactly same - as the built-in PyTorch implementation. - - The mean and standard-deviation are calculated per-dimension over - the mini-batches and gamma and beta are learnable parameter vectors - of size C (where C is the input size). - - During training, this layer keeps a running estimate of its computed mean - and variance. The running sum is kept with a default momentum of 0.1. - - During evaluation, this running mean/variance is used for normalization. - - Because the BatchNorm is done over the `C` dimension, computing statistics - on `(N, H, W)` slices, it's common terminology to call this Spatial BatchNorm - - Args: - num_features: num_features from an expected input of - size batch_size x num_features x height x width - eps: a value added to the denominator for numerical stability. - Default: 1e-5 - momentum: the value used for the running_mean and running_var - computation. Default: 0.1 - affine: a boolean value that when set to ``True``, gives the layer learnable - affine parameters. 
Default: ``True`` - - Shape: - - Input: :math:`(N, C, H, W)` - - Output: :math:`(N, C, H, W)` (same shape as input) - - Examples: - >>> # With Learnable Parameters - >>> m = SynchronizedBatchNorm2d(100) - >>> # Without Learnable Parameters - >>> m = SynchronizedBatchNorm2d(100, affine=False) - >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45)) - >>> output = m(input) - """ - - def _check_input_dim(self, input): - if input.dim() != 4: - raise ValueError('expected 4D input (got {}D input)' - .format(input.dim())) - super(SynchronizedBatchNorm2d, self)._check_input_dim(input) - - -class SynchronizedBatchNorm3d(_SynchronizedBatchNorm): - r"""Applies Batch Normalization over a 5d input that is seen as a mini-batch - of 4d inputs - - .. math:: - - y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta - - This module differs from the built-in PyTorch BatchNorm3d as the mean and - standard-deviation are reduced across all devices during training. - - For example, when one uses `nn.DataParallel` to wrap the network during - training, PyTorch's implementation normalize the tensor on each device using - the statistics only on that device, which accelerated the computation and - is also easy to implement, but the statistics might be inaccurate. - Instead, in this synchronized version, the statistics will be computed - over all training samples distributed on multiple devices. - - Note that, for one-GPU or CPU-only case, this module behaves exactly same - as the built-in PyTorch implementation. - - The mean and standard-deviation are calculated per-dimension over - the mini-batches and gamma and beta are learnable parameter vectors - of size C (where C is the input size). - - During training, this layer keeps a running estimate of its computed mean - and variance. The running sum is kept with a default momentum of 0.1. - - During evaluation, this running mean/variance is used for normalization. - - Because the BatchNorm is done over the `C` dimension, computing statistics - on `(N, D, H, W)` slices, it's common terminology to call this Volumetric BatchNorm - or Spatio-temporal BatchNorm - - Args: - num_features: num_features from an expected input of - size batch_size x num_features x depth x height x width - eps: a value added to the denominator for numerical stability. - Default: 1e-5 - momentum: the value used for the running_mean and running_var - computation. Default: 0.1 - affine: a boolean value that when set to ``True``, gives the layer learnable - affine parameters. 
Default: ``True`` - - Shape: - - Input: :math:`(N, C, D, H, W)` - - Output: :math:`(N, C, D, H, W)` (same shape as input) - - Examples: - >>> # With Learnable Parameters - >>> m = SynchronizedBatchNorm3d(100) - >>> # Without Learnable Parameters - >>> m = SynchronizedBatchNorm3d(100, affine=False) - >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45, 10)) - >>> output = m(input) - """ - - def _check_input_dim(self, input): - if input.dim() != 5: - raise ValueError('expected 5D input (got {}D input)' - .format(input.dim())) - super(SynchronizedBatchNorm3d, self)._check_input_dim(input) diff --git a/spaces/PAIR/PAIR-Diffusion/ldm/modules/midas/midas/vit.py b/spaces/PAIR/PAIR-Diffusion/ldm/modules/midas/midas/vit.py deleted file mode 100644 index ea46b1be88b261b0dec04f3da0256f5f66f88a74..0000000000000000000000000000000000000000 --- a/spaces/PAIR/PAIR-Diffusion/ldm/modules/midas/midas/vit.py +++ /dev/null @@ -1,491 +0,0 @@ -import torch -import torch.nn as nn -import timm -import types -import math -import torch.nn.functional as F - - -class Slice(nn.Module): - def __init__(self, start_index=1): - super(Slice, self).__init__() - self.start_index = start_index - - def forward(self, x): - return x[:, self.start_index :] - - -class AddReadout(nn.Module): - def __init__(self, start_index=1): - super(AddReadout, self).__init__() - self.start_index = start_index - - def forward(self, x): - if self.start_index == 2: - readout = (x[:, 0] + x[:, 1]) / 2 - else: - readout = x[:, 0] - return x[:, self.start_index :] + readout.unsqueeze(1) - - -class ProjectReadout(nn.Module): - def __init__(self, in_features, start_index=1): - super(ProjectReadout, self).__init__() - self.start_index = start_index - - self.project = nn.Sequential(nn.Linear(2 * in_features, in_features), nn.GELU()) - - def forward(self, x): - readout = x[:, 0].unsqueeze(1).expand_as(x[:, self.start_index :]) - features = torch.cat((x[:, self.start_index :], readout), -1) - - return self.project(features) - - -class Transpose(nn.Module): - def __init__(self, dim0, dim1): - super(Transpose, self).__init__() - self.dim0 = dim0 - self.dim1 = dim1 - - def forward(self, x): - x = x.transpose(self.dim0, self.dim1) - return x - - -def forward_vit(pretrained, x): - b, c, h, w = x.shape - - glob = pretrained.model.forward_flex(x) - - layer_1 = pretrained.activations["1"] - layer_2 = pretrained.activations["2"] - layer_3 = pretrained.activations["3"] - layer_4 = pretrained.activations["4"] - - layer_1 = pretrained.act_postprocess1[0:2](layer_1) - layer_2 = pretrained.act_postprocess2[0:2](layer_2) - layer_3 = pretrained.act_postprocess3[0:2](layer_3) - layer_4 = pretrained.act_postprocess4[0:2](layer_4) - - unflatten = nn.Sequential( - nn.Unflatten( - 2, - torch.Size( - [ - h // pretrained.model.patch_size[1], - w // pretrained.model.patch_size[0], - ] - ), - ) - ) - - if layer_1.ndim == 3: - layer_1 = unflatten(layer_1) - if layer_2.ndim == 3: - layer_2 = unflatten(layer_2) - if layer_3.ndim == 3: - layer_3 = unflatten(layer_3) - if layer_4.ndim == 3: - layer_4 = unflatten(layer_4) - - layer_1 = pretrained.act_postprocess1[3 : len(pretrained.act_postprocess1)](layer_1) - layer_2 = pretrained.act_postprocess2[3 : len(pretrained.act_postprocess2)](layer_2) - layer_3 = pretrained.act_postprocess3[3 : len(pretrained.act_postprocess3)](layer_3) - layer_4 = pretrained.act_postprocess4[3 : len(pretrained.act_postprocess4)](layer_4) - - return layer_1, layer_2, layer_3, layer_4 - - -def _resize_pos_embed(self, posemb, gs_h, gs_w): - 
posemb_tok, posemb_grid = ( - posemb[:, : self.start_index], - posemb[0, self.start_index :], - ) - - gs_old = int(math.sqrt(len(posemb_grid))) - - posemb_grid = posemb_grid.reshape(1, gs_old, gs_old, -1).permute(0, 3, 1, 2) - posemb_grid = F.interpolate(posemb_grid, size=(gs_h, gs_w), mode="bilinear") - posemb_grid = posemb_grid.permute(0, 2, 3, 1).reshape(1, gs_h * gs_w, -1) - - posemb = torch.cat([posemb_tok, posemb_grid], dim=1) - - return posemb - - -def forward_flex(self, x): - b, c, h, w = x.shape - - pos_embed = self._resize_pos_embed( - self.pos_embed, h // self.patch_size[1], w // self.patch_size[0] - ) - - B = x.shape[0] - - if hasattr(self.patch_embed, "backbone"): - x = self.patch_embed.backbone(x) - if isinstance(x, (list, tuple)): - x = x[-1] # last feature if backbone outputs list/tuple of features - - x = self.patch_embed.proj(x).flatten(2).transpose(1, 2) - - if getattr(self, "dist_token", None) is not None: - cls_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole cls_tokens impl from Phil Wang, thanks - dist_token = self.dist_token.expand(B, -1, -1) - x = torch.cat((cls_tokens, dist_token, x), dim=1) - else: - cls_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole cls_tokens impl from Phil Wang, thanks - x = torch.cat((cls_tokens, x), dim=1) - - x = x + pos_embed - x = self.pos_drop(x) - - for blk in self.blocks: - x = blk(x) - - x = self.norm(x) - - return x - - -activations = {} - - -def get_activation(name): - def hook(model, input, output): - activations[name] = output - - return hook - - -def get_readout_oper(vit_features, features, use_readout, start_index=1): - if use_readout == "ignore": - readout_oper = [Slice(start_index)] * len(features) - elif use_readout == "add": - readout_oper = [AddReadout(start_index)] * len(features) - elif use_readout == "project": - readout_oper = [ - ProjectReadout(vit_features, start_index) for out_feat in features - ] - else: - assert ( - False - ), "wrong operation for readout token, use_readout can be 'ignore', 'add', or 'project'" - - return readout_oper - - -def _make_vit_b16_backbone( - model, - features=[96, 192, 384, 768], - size=[384, 384], - hooks=[2, 5, 8, 11], - vit_features=768, - use_readout="ignore", - start_index=1, -): - pretrained = nn.Module() - - pretrained.model = model - pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1")) - pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2")) - pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3")) - pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4")) - - pretrained.activations = activations - - readout_oper = get_readout_oper(vit_features, features, use_readout, start_index) - - # 32, 48, 136, 384 - pretrained.act_postprocess1 = nn.Sequential( - readout_oper[0], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[0], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[0], - out_channels=features[0], - kernel_size=4, - stride=4, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess2 = nn.Sequential( - readout_oper[1], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[1], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[1], - out_channels=features[1], - kernel_size=2, 
- stride=2, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess3 = nn.Sequential( - readout_oper[2], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[2], - kernel_size=1, - stride=1, - padding=0, - ), - ) - - pretrained.act_postprocess4 = nn.Sequential( - readout_oper[3], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[3], - kernel_size=1, - stride=1, - padding=0, - ), - nn.Conv2d( - in_channels=features[3], - out_channels=features[3], - kernel_size=3, - stride=2, - padding=1, - ), - ) - - pretrained.model.start_index = start_index - pretrained.model.patch_size = [16, 16] - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. - pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model) - pretrained.model._resize_pos_embed = types.MethodType( - _resize_pos_embed, pretrained.model - ) - - return pretrained - - -def _make_pretrained_vitl16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_large_patch16_384", pretrained=pretrained) - - hooks = [5, 11, 17, 23] if hooks == None else hooks - return _make_vit_b16_backbone( - model, - features=[256, 512, 1024, 1024], - hooks=hooks, - vit_features=1024, - use_readout=use_readout, - ) - - -def _make_pretrained_vitb16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_base_patch16_384", pretrained=pretrained) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout - ) - - -def _make_pretrained_deitb16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_deit_base_patch16_384", pretrained=pretrained) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout - ) - - -def _make_pretrained_deitb16_distil_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model( - "vit_deit_base_distilled_patch16_384", pretrained=pretrained - ) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, - features=[96, 192, 384, 768], - hooks=hooks, - use_readout=use_readout, - start_index=2, - ) - - -def _make_vit_b_rn50_backbone( - model, - features=[256, 512, 768, 768], - size=[384, 384], - hooks=[0, 1, 8, 11], - vit_features=768, - use_vit_only=False, - use_readout="ignore", - start_index=1, -): - pretrained = nn.Module() - - pretrained.model = model - - if use_vit_only == True: - pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1")) - pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2")) - else: - pretrained.model.patch_embed.backbone.stages[0].register_forward_hook( - get_activation("1") - ) - pretrained.model.patch_embed.backbone.stages[1].register_forward_hook( - get_activation("2") - ) - - pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3")) - pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4")) - - pretrained.activations = activations - - readout_oper = get_readout_oper(vit_features, features, use_readout, start_index) - - if use_vit_only == 
True: - pretrained.act_postprocess1 = nn.Sequential( - readout_oper[0], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[0], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[0], - out_channels=features[0], - kernel_size=4, - stride=4, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess2 = nn.Sequential( - readout_oper[1], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[1], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[1], - out_channels=features[1], - kernel_size=2, - stride=2, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - else: - pretrained.act_postprocess1 = nn.Sequential( - nn.Identity(), nn.Identity(), nn.Identity() - ) - pretrained.act_postprocess2 = nn.Sequential( - nn.Identity(), nn.Identity(), nn.Identity() - ) - - pretrained.act_postprocess3 = nn.Sequential( - readout_oper[2], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[2], - kernel_size=1, - stride=1, - padding=0, - ), - ) - - pretrained.act_postprocess4 = nn.Sequential( - readout_oper[3], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[3], - kernel_size=1, - stride=1, - padding=0, - ), - nn.Conv2d( - in_channels=features[3], - out_channels=features[3], - kernel_size=3, - stride=2, - padding=1, - ), - ) - - pretrained.model.start_index = start_index - pretrained.model.patch_size = [16, 16] - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. - pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model) - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. - pretrained.model._resize_pos_embed = types.MethodType( - _resize_pos_embed, pretrained.model - ) - - return pretrained - - -def _make_pretrained_vitb_rn50_384( - pretrained, use_readout="ignore", hooks=None, use_vit_only=False -): - model = timm.create_model("vit_base_resnet50_384", pretrained=pretrained) - - hooks = [0, 1, 8, 11] if hooks == None else hooks - return _make_vit_b_rn50_backbone( - model, - features=[256, 512, 768, 768], - size=[384, 384], - hooks=hooks, - use_vit_only=use_vit_only, - use_readout=use_readout, - ) diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/tin_shift.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/tin_shift.py deleted file mode 100644 index 472c9fcfe45a124e819b7ed5653e585f94a8811e..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/tin_shift.py +++ /dev/null @@ -1,68 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-# Code reference from "Temporal Interlacing Network" -# https://github.com/deepcs233/TIN/blob/master/cuda_shift/rtc_wrap.py -# Hao Shao, Shengju Qian, Yu Liu -# shaoh19@mails.tsinghua.edu.cn, sjqian@cse.cuhk.edu.hk, yuliu@ee.cuhk.edu.hk - -import torch -import torch.nn as nn -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', - ['tin_shift_forward', 'tin_shift_backward']) - - -class TINShiftFunction(Function): - - @staticmethod - def forward(ctx, input, shift): - C = input.size(2) - num_segments = shift.size(1) - if C // num_segments <= 0 or C % num_segments != 0: - raise ValueError('C should be a multiple of num_segments, ' - f'but got C={C} and num_segments={num_segments}.') - - ctx.save_for_backward(shift) - - out = torch.zeros_like(input) - ext_module.tin_shift_forward(input, shift, out) - - return out - - @staticmethod - def backward(ctx, grad_output): - - shift = ctx.saved_tensors[0] - data_grad_input = grad_output.new(*grad_output.size()).zero_() - shift_grad_input = shift.new(*shift.size()).zero_() - ext_module.tin_shift_backward(grad_output, shift, data_grad_input) - - return data_grad_input, shift_grad_input - - -tin_shift = TINShiftFunction.apply - - -class TINShift(nn.Module): - """Temporal Interlace Shift. - - Temporal Interlace shift is a differentiable temporal-wise frame shifting - operation proposed in "Temporal Interlacing Network". - - Please refer to https://arxiv.org/abs/2001.06499 for more details. - Code is modified from https://github.com/mit-han-lab/temporal-shift-module - """ - - def forward(self, input, shift): - """Perform temporal interlace shift. - - Args: - input (Tensor): Feature map with shape [N, num_segments, C, H * W]. - shift (Tensor): Shift tensor with shape [N, num_segments]. - - Returns: - Tensor: Feature map after temporal interlace shift.
- """ - return tin_shift(input, shift) diff --git a/spaces/PICOF/YusamiAlchemy/README.md b/spaces/PICOF/YusamiAlchemy/README.md deleted file mode 100644 index e18307d9454971564f57cc2eb691865a1d672c2f..0000000000000000000000000000000000000000 --- a/spaces/PICOF/YusamiAlchemy/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: YusamiAlchemy -emoji: 💩 -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 3.8 -app_file: app.py -pinned: false -license: gpl ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/scripts/display-commentary.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/scripts/display-commentary.go deleted file mode 100644 index 2718d7c472a4f263f306a14021faa4c491090c44..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/scripts/display-commentary.go and /dev/null differ diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-42.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-42.go deleted file mode 100644 index 256e5172b51b2e02daaa5a513045dae3cf0742c8..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-42.go and /dev/null differ diff --git a/spaces/PaulHilders/CLIPGroundingExplainability/clip_grounding/utils/log.py b/spaces/PaulHilders/CLIPGroundingExplainability/clip_grounding/utils/log.py deleted file mode 100644 index 70af379514fef1ea0b7ce3506c11366cab4b62a2..0000000000000000000000000000000000000000 --- a/spaces/PaulHilders/CLIPGroundingExplainability/clip_grounding/utils/log.py +++ /dev/null @@ -1,57 +0,0 @@ -"""Utilities for logging""" -import logging -from tqdm import tqdm -from termcolor import colored - - -def color(string: str, color_name: str = 'yellow') -> str: - """Returns colored string for output to terminal""" - return colored(string, color_name) - - -def print_update(message: str, width: int = 140, fillchar: str = ":", color="yellow") -> str: - """Prints an update message - - Args: - message (str): message - width (int): width of new update message - fillchar (str): character to be filled to L and R of message - - Returns: - str: print-ready update message - """ - message = message.center(len(message) + 2, " ") - print(colored(message.center(width, fillchar), color)) - - -def set_logger(log_path): - """Set the logger to log info in terminal and file `log_path`. 
- - Args: - log_path (str): path to the log file - """ - logger = logging.getLogger() - logger.setLevel(logging.INFO) - - if not logger.handlers: - # Logging to a file - file_handler = logging.FileHandler(log_path) - file_handler.setFormatter(logging.Formatter('%(asctime)s:%(levelname)s: %(message)s')) - logger.addHandler(file_handler) - - # Logging to console - stream_handler = logging.StreamHandler() - stream_handler.setFormatter(logging.Formatter('%(message)s')) - logger.addHandler(stream_handler) - - -def tqdm_iterator(items, desc=None, bar_format=None, **kwargs): - tqdm._instances.clear() - iterator = tqdm( - items, - desc=desc, - bar_format='{l_bar}{bar:10}{r_bar}{bar:-10b}', - **kwargs, - ) - - return iterator \ No newline at end of file diff --git a/spaces/Paulraj916/paulraj916/README.md b/spaces/Paulraj916/paulraj916/README.md deleted file mode 100644 index cefbe952fbe84602469507091abda49eba2829ae..0000000000000000000000000000000000000000 --- a/spaces/Paulraj916/paulraj916/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Paulraj916 -emoji: 🌖 -colorFrom: blue -colorTo: pink -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/PeepDaSlan9/SDXL-artists-browser/artists_and_tags.js b/spaces/PeepDaSlan9/SDXL-artists-browser/artists_and_tags.js deleted file mode 100644 index 0ecc2cf6cfa0ca29247f96d01580328cfd7341ed..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/SDXL-artists-browser/artists_and_tags.js +++ /dev/null @@ -1,815 +0,0 @@ -var artistsData = [ -["Alma-Tadema","Lawrence","romanticism|victorian|history|opulent|ancient|added-2023-08-08",false], -["Anatsui","El","abstract|sculpture|contemporary|African|recycled-materials|Ghanaian|textiles|added-2023-08-08",false], -["Andersen","Sarah","cartoon|comics|high-contrast|contemporary|collage|femininity|fashion|mixed-media|added-2023-08-08",false], -["Balfour","Ronald","art-deco|art-nouveau|watercolor|contemporary|vibrant|abstract|organic|added-2023-08-08",true], -["Basquiat","Jean-Michel","expressionism|messy|neo-expressionism|street-art|African-American|graffiti|punk|contemporary|added-2023-08-08",false], -["Beaux","Cecilia","impressionism|portraits|elegant|femininity|American|added-2023-08-08",false], -["Blanche","John","fantasy|science-fiction|portraits|elegant|French|added-2023-08-08",false], -["Bontecou","Lee","sculpture|abstract|contemporary|mixed-media|added-2023-08-08",false], -["Burgert","Jonas","contemporary|figurative|surrealism|allegory|large-scale|German|added-2023-08-08",true], -["Burlet","Richard","art-nouveau|impressionism|figurative|urban-life|characters|cityscapes|French|added-2023-08-08",false], -["Cassatt","Mary","impressionism|characters|portraits|pastel|added-2023-08-08",false], -["Cézanne","Paul","impressionism|cubism|romanticism|post-impressionism|still-life|landscapes|geometric|added-2023-08-08",false], -["Chicago","Judy","abstract|vibrant|psychedelic|feminism|sculpture|installation|activism|femininity|empowerment|added-2023-08-08",false], -["Ciurlionis","Mikalojus Konstantinas","dark|art-nouveau|symbolist|spirituality|Lithuanian|mysticism|added-2023-08-08",false], -["Clark","Alson Skinner","landscapes|impressionism|seascapes|atmospheric|added-2023-08-08",false], -["Cowper","Frank Cadogan","Victorian|history|romanticism|British|opulent|added-2023-08-08",false], 
-["Crewdson","Gregory","photography|surrealism|dark|eerie|suburbia|American|added-2023-08-08",false], -["Davis","Stuart","cubism|abstract|social-realism|American|rural-life|printmaking|added-2023-08-08",false], -["Dubbeldam","Ton","pointillism|landscapes|vibrant|contemporary|architecture|conceptual|geometric|Dutch|added-2023-08-08",false], -["Earles","Amy","watercolor|characters|whimsical|dark|abstract-expressionism|gestural|American|added-2023-08-08",false], -["Eliasson","Olafur","installation|contemporary|environmentalism|immersive|nature|added-2023-08-08",false], -["Evans","Walker","photography|monochromatic|documentary|American|great-depression|portraits|social-commentary|added-2023-08-08",false], -["Fahrenkrog","Ludwig","expressionism|symbolist|mysticism|eerie|German|added-2023-08-08",false], -["Flavin","Dan","installation|minimalism|light-art|sculpture|conceptual|contemporary|added-2023-08-08",false], -["Frankenthaler","Helen","abstract|expressionism|watercolor|abstract-expressionism|color-field|painting|feminism|printmaking|contemporary|added-2023-08-08",false], -["Gascar","Henri","impressionism|landscapes|French|atmospheric|added-2023-08-08",true], -["Goldberger","Sacha","photography|portraits|characters|contemporary|mixed-media|identity|immigrants|added-2023-08-08",false], -["Gonzalez-Torres","Felix","installation|conceptual|minimalism|LGBTQ|contemporary|added-2023-08-08",false], -["Haacke","Hans","installation|photography|sculpture|conceptual|environmentalism|politics|contemporary|added-2023-08-08",false], -["Hammons","David","installation|abstract|conceptual|African-American|social-commentary|contemporary|added-2023-08-08",false], -["Haring","Keith","graffiti|street-art|expressionism|flat-colors|high-contrast|vibrant|pop-art|activism|LGBTQ|added-2023-08-08",false], -["Hartley","Marsden","landscapes|portraits|primitivism|expressionism|modern|American|abstract|added-2023-08-08",false], -["Hassam","Childe","impressionism|cityscapes|American|landscapes|added-2023-08-08",false], -["Hatoum","Mona","installation|sculpture|photography|conceptual|displacement|body-art|contemporary|added-2023-08-08",false], -["Hawkes","Pam","figurativism|portraits|contemporary|ceramics|figurative|nature|organic|delicate|added-2023-08-08",false], -["Heizer","Michael","installation|landscapes|angular|land-art|earthworks|nature|large-scale|added-2023-08-08",false], -["Herrera","Carolina","photography|characters|fashion|minimalism|abstract|contemporary|added-2023-08-08",false], -["Holler","Carsten","contemporary|immersive|interactive|experiential|playful|added-2023-08-08",false], -["Huyghe","Pierre","conceptual|contemporary|multimedia|surrealism|added-2023-08-08",false], -["Irwin","Robert","installation|angular|minimalism|environmentalism|contemporary|added-2023-08-08",false], -["Judd","Donald","angular|installation|minimalism|sculpture|metalwork|contemporary|added-2023-08-08",false], -["Kahlo","Frida","surrealism|portraits|vibrant|Mexican|self-portraits|feminism|added-2023-08-08",false], -["Kelly","Ellsworth","abstract|flat-colors|minimalism|color-field|geometric|contemporary|added-2023-08-08",false], -["Kentridge","William","messy|monochromatic|drawing|animation|printmaking|African|politics|contemporary|added-2023-08-08",false], -["Koons","Jeff","sculpture|pop-art|vibrant|contemporary|kitsch|consumerism|post-modern|added-2023-08-08",false], -["Krasner","Lee","expressionism|abstract|high-contrast|abstract-expressionism|color-field|gestural|improvisation|feminism|added-2023-08-08",false], 
-["Kruger","Barbara","high-contrast|graphic-design|conceptual|feminism|text-based|montage|advertising|contemporary|added-2023-08-08",false], -["Kusama","Yayoi","vibrant|polka-dots|installation|fashion|pop-art|contemporary|infinity-rooms|feminism|added-2023-08-08",false], -["Lawrence","Jacob","cubism|angular|modern|African-American|social-realism|harlem-renaissance|contemporary|added-2023-08-08",false], -["Lawson","Ernest","impressionism|landscapes|everyday-life|American|added-2023-08-08",false], -["LeWitt","Sol","conceptual|minimalism|sculpture|geometric|abstract|wall-drawings|contemporary|serial-art|added-2023-08-08",false], -["Lin","Maya","installation|land-art|architecture|identity|environmentalism|contemporary|added-2023-08-08",false], -["List","Herbert","photography|monochromatic|high-contrast|surrealism|portraits|German|added-2023-08-08",false], -["Mapplethorpe","Robert","photography|figure-studies|BDSM|monochromatic|portraits|homo-eroticism|LGBTQ|added-2023-08-08",false], -["Martin","Agnes","minimalism|abstract-expressionism|grids|color-field|spirituality|contemporary|added-2023-08-08",false], -["Merian","Maria Sibylla","biological|nature|naturalist|botanical|insects|observational|added-2023-08-08",false], -["Metcalf","Willard","tonalism|landscapes|muted-colors|American|added-2023-08-08",false], -["Morimoto","Kōji","contemporary|surrealism|illustration|Japanese|monsters|cute|added-2023-08-08",false], -["Mostyn","Thomas Edwin","landscapes|still-life|portraits|romanticism|pre-raphaelite|dream-like|British|mysticism|added-2023-08-08",false], -["Murakami","Takashi","pop-art|manga-anime|flat-colors|Japanese|cute|contemporary|added-2023-08-08",false], -["Nan","Juliana","contemporary|multimedia|identity|African|added-2023-08-08",false], -["Nauman","Bruce","conceptual|sculpture|performance|neon|contemporary|added-2023-08-08",false], -["Neel","Alice","high-contrast|portraits|expressionism|figurative|social-realism|feminism|contemporary|added-2023-08-08",false], -["Neshat","Shirin","contemporary|video-art|photography|Iranian|feminism|identity|added-2023-08-08",false], -["Noguchi","Isamu","sculpture|landscape-architecture|organic|Japanese|added-2023-08-08",false], -["O'Keeffe","Georgia","figurativism|abstract|watercolor|modern|precisionism|American|flowers|southwest|landscapes|added-2023-08-08",false], -["Ofili","Chris","watercolor|expressionism|contemporary|figurative|painting|afro-futurism|mixed-media|post-colonialism|added-2023-08-08",false], -["Parreno","Philippe","installation|contemporary|multimedia|film|conceptual|post-modern|added-2023-08-08",false], -["Perry","Lilla Cabot","impressionism|interiors|gardens|American|added-2023-08-08",false], -["Ribemont-Dessaignes","Georges","Dadaism|avant-garde|French|added-2023-08-08",false], -["Ringgold","Faith","pop-art|abstract|expressionism|feminism|quilting|African-American|activism|contemporary|added-2023-08-08",false], -["Scully","Sean","abstract|angular|minimalism|grids|added-2023-08-08",false], -["Serra","Richard","sculpture|installation|minimalism|large-scale|contemporary|added-2023-08-08",false], -["Sherman","Cindy","photography|portraits|conceptual|self-portraits|feminism|post-modern|identity|contemporary|added-2023-08-08",false], -["Sims","David","contemporary|photography|fashion|British|added-2023-08-08",false], -["Singer","Andy","pop-art|consumerism|celebrity|American|added-2023-08-08",false], -["Smart","Jeffrey","surrealism|Scottish|dream-like|added-2023-08-08",false], 
-["Smith","Kiki","minimalism|feminism|sculpture|body-art|performance|contemporary|added-2023-08-08",true], -["Smithson","Robert","land-art|sculpture|conceptual|earthworks|environmentalism|post-minimalism|added-2023-08-08",false], -["Suddese","Kate Van","contemporary|abstract|mixed-media|organic|vibrant|added-2023-08-08",true], -["Sutherland","Graham","abstract|landscapes|expressionism|surrealism|portraits|distortion|British|battle-scenes|eerie|added-2023-08-08",false], -["Tanning","Dorothea","surrealism|dream-like|figure-studies|metamorphosis|eerie|added-2023-08-08",false], -["Tenniel","John","kids-book|fantasy|whimsical|drawing|added-2023-08-08",false], -["Thomson","Tom","expressionism|art-nouveau|impressionism|landscapes|Canadian|nature|wilderness|added-2023-08-08",false], -["Toth","Alex","cartoon|comics|high-contrast|figurative|animals|wildlife|bronze|added-2023-08-08",false], -["Turrell","James","light-art|vibrant|installation|sculpture|contemporary|architecture|minimalism|colorful|geometric|added-2023-08-08",false], -["Uhlig","Daniela","digital|portraits|characters|contemporary|landscapes|dream-like|ethereal|surrealism|German|added-2023-08-08",false], -["Valadon","Suzanne","post-impressionism|nudes|mysterious|added-2023-08-08",false], -["Valdi","Thiago","contemporary|urban-life|street-art|colorful|Brazilian|added-2023-08-08",false], -["Varo","Remedios","surrealism|low-contrast|magic-realism|Spanish|added-2023-08-08",false], -["Vonnoh","Robert","impressionism|bronze|sculpture|American|added-2023-08-08",false], -["Walker","Kara","silhouettes|African-American|identity|contemporary|added-2023-08-08",false], -["Warhol","Andy","pop-art|vibrant|portraits|celebrity|contemporary|added-2023-08-08",false], -["Weiwei","Ai","contemporary|installation|social-commentary|activism|politics|Chinese|added-2023-08-08",true], -["Wiley","Kehinde","photorealism|portraits|vibrant|colorful|contemporary|African-American|baroque|identity|added-2023-08-08",false], -["Wilson","Wes","contemporary|art-nouveau|psychedelic|added-2023-08-08",false], -["Woodman","Francesca","feminism|self-portraits|photography|surrealism|still-life|contemporary|added-2023-08-08",false], -["Wu","Bayard","fantasy|fashion|illustration|Chinese|LGBTQ|contemporary|added-2023-08-08",true], -["Wylie","Rose","figurative|portraits|painting|observational|contemporary|added-2023-08-08",false], -["Abts","Tomma","abstract|angular|geometric|modern|minimalism|contemporary|color-field|added-2023-08-08",false], -["Acconci","Vito","dark|installation|architecture|sculpture|performance|conceptual|added-2023-08-08",false], -["Adams","Ansel","monochromatic|high-contrast|nature|American|landscapes|photography|added-2023-08-08",false], -["Aoshima","Chiho","pop-art|colorful|whimsical|manga-anime|fantasy|vibrant|Japanese|digital|futuristic|added-2023-08-08",false], -["Araki","Hirohiko","manga-anime|Japanese|characters|pop-culture|illustration|graphic-novel|surrealism|added-2023-08-08",false], -["Bacon","Francis","expressionism|British|portraits|abstract|dark|figurative|distortion|surrealism|added-2023-08-08",false], -["Banksy","","street-art|graffiti|high-contrast|politics|social-commentary|anonymous|urban-life|added-2023-08-08",false], -["Barney","Matthew","photography|surrealism|sculpture|video-art|performance|multimedia|film|conceptual|added-2023-08-08",false], -["Bosch","Hieronymus","whimsical|renaissance|religion|mysticism|surrealism|allegory|fantasy|added-2023-08-08",false], 
-["Botticelli","Sandro","renaissance|Italian|figurative|mythology|religion|femininity|dream-like|added-2023-08-08",false], -["Chagall","Marc","fauvism|impressionism|surrealism|stained-glass|Russian|French|Jewish|colorful|folklore|romanticism|added-2023-08-08",false], -["Constable","John","landscapes|romanticism|dark|nature|British|oil-painting|skies|added-2023-08-08",false], -["Creed","Martin","installation|abstract|expressionism|minimalism|conceptual|British|playful|interactive|added-2023-08-08",false], -["Crumb","Robert","comics|characters|American|underground|satire|counter-culture|added-2023-08-08",false], -["Dalí","Salvador","surrealism|dark|Spanish|dream-like|oil-painting|dreams|illusion|metaphysics|added-2023-08-08",false], -["Degas","Edgar","impressionism|French|ballet|pastel|drawing|portraits|dancers|femininity|added-2023-08-08",false], -["Delacroix","Eugene","romanticism|French|history|oil-painting|sketching|orientalism|colorful|vibrant|added-2023-08-08",false], -["Doig","Peter","figurativism|landscapes|abstract|British|Canadian|large-scale|dream-like|nature|added-2023-08-08",false], -["Duchamp","Marcel","surrealism|cubism|fauvism|expressionism|impressionism|conceptual|dadaism|added-2023-08-08",false], -["Ernst","Max","surrealism|expressionism|German|Dadaism|collage|oil-painting|automatism|mythology|added-2023-08-08",false], -["Escher","M. C.","angular|high-contrast|surrealism|Dutch|lithography|woodblock|geometric|illusion|mathematics|added-2023-08-08",false], -["Freud","Lucian","portraits|expressionism|British|realism|oil-painting|figurative|flesh|added-2023-08-08",false], -["Gaudi","Antoni","architecture|angular|Spanish|organic|mosaic|art-nouveau|fantasy|added-2023-08-08",false], -["Gauguin","Paul","impressionism|primitivism|French|exoticism|oil-painting|colorful|tropics|spirituality|added-2023-08-08",false], -["Giacometti","Alberto","sculpture|expressionism|Swiss|bronze|figurative|portraits|emaciation|added-2023-08-08",false], -["Goya","Francisco","romanticism|portraits|Spanish|etching|social-commentary|oil-painting|dark|politics|satire|horror|added-2023-08-08",false], -["Hiroshige","","ukiyo-e|landscapes|Japanese|woodblock|nature|Edo-period|printmaking|added-2023-08-08",false], -["Hirst","Damien","conceptual|contemporary|installation|British|shock-art|mixed-media|sculpture|animals|death|added-2023-08-08",false], -["Hockney","David","pools|cubism|vibrant|colorful|British|pop-art|portraits|added-2023-08-08",false], -["Hokusai","Katsushika","ukiyo-e|high-contrast|Japanese|woodblock|nature|Edo-period|waves|japanese|added-2023-08-08",false], -["Hopper","Edward","impressionism|American|realism|architecture|landscapes|oil-painting|urban-life|solitude|loneliness|nostalgia|added-2023-08-08",false], -["Horn","Roni","conceptual|sculpture|photography|American|minimalism|installation|nature|environmentalism|added-2023-08-08",false], -["Kandinsky","Wassily","bauhaus|expressionism|abstract|vibrant|Russian|modern|spirituality|added-2023-08-08",false], -["Klee","Paul","bauhaus|expressionism|abstract|surrealism|German|drawing|playful|added-2023-08-08",false], -["Klein","William","photography|monochromatic|abstract|American|urban-life|fashion|minimalism|added-2023-08-08",false], -["Klein","Yves","abstract|monochromatic|expressionism|French|performance|modern|color-field|fashion|added-2023-08-08",false], -["Kleiner","Carl","abstract|surrealism|portraits|graphic-design|American|digital|collage|pop-art|added-2023-08-08",false], 
-["Klimt","Gustav","art-nouveau|Austrian|erotica|mosaic|portraits|golden|female-figures|added-2023-08-08",false], -["Larson","Gary","cartoon|American|newspaper|satire|pop-culture|comics|animals|slice-of-life|added-2023-08-08",false], -["Lichtenstein","Roy","flat-colors|comics|portraits|abstract|expressionism|American|pop-art|added-2023-08-08",false], -["Magritte","Rene","surrealism|cloudscapes|art-deco|cubism|impressionism|Belgian|illusion|added-2023-08-08",false], -["Manet","Édouard","impressionism|portraits|French|still-life|realism|femininity|modern-life|controversy|added-2023-08-08",false], -["Matisse","Henri","fauvism|impressionism|French|collage|sculpture|color-field|colorful|cut-outs|added-2023-08-08",false], -["Michelangelo","","renaissance|Italian|sculpture|frescoes|religion|figurative|ceiling-painting|added-2023-08-08",false], -["Miró","Joan","abstract|Spanish|surrealism|sculpture|drawing|color-field|colorful|outer-space|playful|added-2023-08-08",false], -["Miyazaki","Hayao","whimsical|manga-anime|kids-book|Japanese|animation|fantasy|adventure|added-2023-08-08",false], -["Modigliani","Amedeo","expressionism|fauvism|portraits|Italian|sculpture|modern|romanticism|added-2023-08-08",false], -["Mondrian","Piet","cubism|vibrant|angular|Dutch|abstract|geometric|primary-colors|added-2023-08-08",false], -["Monet","Claude","impressionism|landscapes|seascapes|French|plein-air|color-field|water-lilies|added-2023-08-08",false], -["Morisot","Berthe","impressionism|feminism|landscapes|portraits|French|still-life|domestic-scenes|fleeting-moments|added-2023-08-08",false], -["Moriyama","Daido","photography|Japanese|urban-life|monochromatic|post-war|documentary|grungy|added-2023-08-08",false], -["Mucha","Alphonse","art-nouveau|portraits|Czech|commercial-art|posters|femininity|stained-glass|added-2023-08-08",false], -["Munch","Edvard","expressionism|impressionism|Norwegian|anxiety|oil-painting|dark|melancholy|added-2023-08-08",false], -["Okamoto","Tarō","surrealism|gutai|Japanese|abstract|sculpture|avant-garde|performance|added-2023-08-08",false], -["Picasso","Pablo","cubism|surrealism|impressionism|Spanish|sculpture|modern|collage|added-2023-08-08",false], -["Pollock","Jackson","abstract|messy|expressionism|American|drip-painting|action-painting|added-2023-08-08",false], -["Potter","Beatrix","whimsical|watercolor|kids-book|British|animals|book-illustration|nature|added-2023-08-08",false], -["Renoir","Pierre-Auguste","impressionism|portraits|French|landscapes|plein-air|female-figures|pastel-colors|femininity|outdoor-scenes|added-2023-08-08",false], -["Richter","Gerhard","abstract|German|photorealism|oil-painting|contemporary|blurry|multimedia|added-2023-08-08",false], -["Rijn","Rembrandt van","baroque|portraits|Dutch|etching|self-portraits|history|religion|added-2023-08-08",false], -["Rothko","Mark","abstract|expressionism|American|color-field|large-scale|minimalism|spirituality|added-2023-08-08",false], -["Rubens","Peter Paul","baroque|renaissance|romanticism|Flemish|history|painting|oil-painting|mythology|nudes|added-2023-08-08",false], -["Schulz","Charles","comics|cartoon|American|characters|nostalgia|childhood|social-commentary|added-2023-08-08",false], -["Shimamoto","Shozo","performance|gutai|Japanese|abstract|mixed-media|post-war|action-painting|collaborative|added-2023-08-08",false], -["Spiegelman","Art","cartoon|comics|American|history|graphic-novel|autobiographical|Holocaust|animals|added-2023-08-08",false], 
-["Strand","Paul","photography|monochromatic|American|landscapes|portraits|abstract|minimalism|still-life|urban-life|added-2023-08-08",false], -["Sugimoto","Hiroshi","photography|monochromatic|Japanese|conceptual|seascapes|long-exposure|architecture|geometric|added-2023-08-08",false], -["Tezuka","Osamu","cartoon|manga-anime|Japanese|animation|characters|science-fiction|robots-cyborgs|added-2023-08-08",false], -["Titian","","renaissance|dark|Italian|portraits|religion|oil-painting|mythology|painting|colorful|added-2023-08-08",false], -["Toulouse-Lautrec","Henri de","impressionism|art-nouveau|French|posters|lithography|portraits|nightlife|cabaret|vibrant|added-2023-08-08",false], -["Turner","J.M.W.","romanticism|landscapes|seascapes|British|watercolor|atmospheric|added-2023-08-08",false], -["Utamaro","Kitagawa","ukiyo-e|Japanese|woodblock|Edo-period|female-figures|nature|portraits|fashion|genre-scenes|added-2023-08-08",false], -["Velázquez","Diego","baroque|Spanish|portraits|religion|oil-painting|realism|royalty|history|added-2023-08-08",false], -["Vermeer","Johannes","baroque|interiors|portraits|Dutch|genre-scenes|domestic-scenes|illusion|added-2023-08-08",false], -["Ware","Chris","cartoon|comics|American|graphic-novel|modern-life|characters|slice-of-life|added-2023-08-08",false], -["Watterson","Bill","friendship|American|characters|nostalgia|colorful|melancholy|loneliness|added-2023-08-08",false], -["Whistler","James Abbott McNeill","whimsical|low-contrast|American|tonalism|portraits|etching|interiors|added-2023-08-08",false], -["Woodring","Jim","surrealism|comics|American|fantasy|characters|pen-and-ink|psychedelic|dream-like|aliens|creatures|added-2023-08-08",false], -["Nielsen","Kay","Danish|American|illustration|Fantasy|kids-book|exoticism|fantasy|orientalism|elegant|whimsical|Painting|added-2023-08-08",false], -["Nesterov","Mikhail","Religion|Spirituality|religion|Figurative|Romanticism|Painting|added-2023-08-08",false], -["Bloch","Albert","Satire|Social-commentary|Impressionism|Realism|Painting|Engraving|added-2023-08-08",false], -["Kawase","Hasui","Plein-air|Slice-of-life|landscapes|ukiyo-e|Printmaking|added-2023-08-08",false], -["Fontana","Franco","Conceptual|Metamorphosis|abstract|Spatialism|Painting|Sculpture|added-2023-08-08",false], -["Stelfreeze","Brian","Activism|Social-realism|comics|Illustration|contemporary|digital|added-2023-08-08",false], -["Hughes","Nicholas","Surreal|Symbolist|Realism|figurativism|Painting|added-2023-08-08",true], -["Ditlev","Jan","Dreams|landscapes|Realism|Painting|Printmaking|added-2023-08-08",true], -["Szukalski","Stanisław","Metaphysics|Mysticism|Surrealism|primitivism|Sculpture|added-2023-08-08",false], -["Ancher","Helga","Observational|Slice-of-life|Realism|impressionism|Painting|added-2023-08-08",false], -["MacDonald","Frances","Allegory|Nostalgia|landscapes|impressionism|Painting|added-2023-08-08",false], -["Flint","Alex Russell","Social-commentary|Environmentalism|abstract|abstract-Expressionism|Painting|Illustration|added-2023-08-08",false], -["Pasquini","Alice","Documentary|Social-realism|Public-Art|contemporary|Street-art|Mural-painting|added-2023-08-08",false], -["Grimly","Gris","dark|comics|whimsical|fantasy|Surrealism|illustration|whimsical|kids-book|gothic|eerie|fantasy|added-2023-08-08",false], -["Smith","Samantha Keely","Dream-like|Loneliness|abstract|abstract-Expressionism|contemporary|Painting|added-2023-08-08",false], -["Semenov","Anton","Surreal|Symbolist|Surrealism|shock-art|digital|Painting|added-2023-08-08",false], 
-["Podolchak","Ihor","Metaphysics|Surrealism|underground|Film|Painting|added-2023-08-08",true], -["Rousse","Georges","Femininity|Mysticism|Impressionism|Neo-Impressionism|Post-Impressionism|Painting|added-2023-08-08",false], -["Vrubel","Mikhail","Symbolist|Religion|Painting|Sculpture|added-2023-08-08",false], -["Biddle","George","politics|Activism|Impressionism|contemporary|Painting|Illustration|added-2023-08-08",true], -["Pissarro","Camille","impressionism|Observational|Impressionism|Painting|Printmaking|added-2023-08-08",false], -["Selimoglu","Niyazi","Exoticism|Futurism|Geometric|Orientalism|Painting|Printmaking|added-2023-08-08",true], -["Sidibé","Malick","Documentary|Slice-of-life|Harlem-Renaissance|Photography|added-2023-08-08",false], -["the Elder","Lucas Cranach","Religion|Allegory|religion|Renaissance|Painting|added-2023-08-08",false], -["Manabe","Johji","Science-fiction|Metamorphosis|abstract|contemporary|Illustration|added-2023-08-08",false], -["Tarnowski","Artur","realism|3D-rendering|video-games|contemporary|added-2023-08-08",true], -["Garcin","Gilbert","Surreal|Conceptual|abstract|contemporary|Sculpture|Installation|added-2023-08-08",false], -["Smilde","Berndnaut","Surreal|Metamorphosis|installation|Installation|Photography|added-2023-08-08",false], -["Ladrönn","José","Fantasy|Science-fiction|comics|Illustration|added-2023-08-08",true], -["Shatseva","Tanya","Russian|Surrealism|eerie|contemporary|Painting|added-2023-08-08",false], -["Tessari","Vittorio","Satire|Social-commentary|abstract|Realism|Painting|added-2023-08-08",true], -["Cruz-Diez","Carlos","Conceptual|illusion|Kinetic|Light-art|added-2023-08-08",false], -["Bak","Karol","Conceptual|Metamorphosis|Impressionism|contemporary|Painting|added-2023-08-08",false], -["Robinson","Charles","Satire|politics|Realism|Painting|added-2023-08-08",false], -["Korovin","Konstantin","impressionism|Plein-air|Impressionism|Painting|added-2023-08-08",false], -["Rattner","Abraham","expressionism|Symbolist|Expressionism|Painting|Sculpture|added-2023-08-08",false], -["Hamilton","Richard","Pop-art|Consumerism|Pop-Art|Mixed-media|added-2023-08-08",false], -["Toraji","","Commercial-art|Sculpture|Installation|added-2023-08-08",true], -["Shinkai","Makoto","Slice-of-life|Fleeting-moments|manga-anime|contemporary|Film|added-2023-08-08",false], -["Aldridge","Miles","Femininity|Consumerism|Pop-Art|Pop-art|Illustration|added-2023-08-08",false], -["Rydingsvard","Ursula von","Metamorphosis|abstract|Minimalism|Sculpture|added-2023-08-08",false], -["Whitaker","William","Documentary|Social-realism|landscapes|contemporary|Painting|added-2023-08-08",false], -["Weissenbruch","Hendrik","Plein-air|Observational|landscapes|Painting|added-2023-08-08",false], -["Wilkes","Cathy","Activism|Social-commentary|Surrealism|contemporary|Photography|added-2023-08-08",false], -["Rocafort","Kenneth","illustration|Science-fiction|comics|Fantasy|contemporary|Illustration|Graphic-novel|added-2023-08-08",false], -["Knight","Nick","Fantasy|Adventure|Surrealism|Pop-art|Illustration|added-2023-08-08",false], -["Jensen","Georg","Symbolist|Plein-air|Realism|Painting|added-2023-08-08",false], -["Hobbema","Meindert","Observational|Plein-air|landscapes|Dutch-Golden-Age|Painting|added-2023-08-08",false], -["Khnopff","Fernand","Symbolist|metaphysics|Painting|Sculpture|added-2023-08-08",false], -["Carte","Anto","Dream-like|Fantasy|abstract|contemporary|Painting|added-2023-08-08",true], -["the Elder","Lorenzo Costa","Religion|Allegory|religion|Renaissance|Painting|added-2023-08-08",false], 
-["Broom","Lee","Activism|Social-commentary|abstract|Harlem-Renaissance|Painting|added-2023-08-08",false], -["the Elder","Jan van Kessel","Observational|Allegory|Still-Life|Nature|Baroque|Painting|added-2023-08-08",false], -["Mendoza","Eddie","Consumerism|Commercial-art|urban-life|underground|Painting|added-2023-08-08",true], -["Prendergast","Maurice","impressionism|Observational|Impressionism|Painting|added-2023-08-08",false], -["Ohman","Jack","Satire|politics|comics|Illustration|contemporary|Painting|added-2023-08-08",false], -["Killion","Tom","Plein-air|Observational|landscapes|contemporary|Printmaking|added-2023-08-08",false], -["Roybal","Antonio","Social-realism|Slice-of-life|Social-Realism|contemporary|Painting|added-2023-08-08",true], -["Solomon","Simeon","Symbolist|Metaphysics|abstract|contemporary|Painting|added-2023-08-08",false], -["Thomas","Mickalene","Femininity|identity|Collage|Portraits|contemporary|Painting|Photography|added-2023-08-08",false], -["Ozeri","Yigal","Observational|Slice-of-life|Realism|contemporary|Painting|added-2023-08-08",false], -["Picabia","Francis","Dadaism|Surreal|Surrealism|Painting|added-2023-08-08",false], -["Aagaard","Zacharias Martin","Observational|Slice-of-life|landscapes|Romanticism|Painting|added-2023-08-08",false], -["Tindle","David","Symbolist|Metaphysics|Surrealism|contemporary|Sculpture|added-2023-08-08",true], -["Dossena","Emilio Giuseppe","Conceptual|metaphysics|abstract|contemporary|Sculpture|added-2023-08-08",false], -["Ketner","Jeremiah","Activism|Social-commentary|abstract|contemporary|Painting|added-2023-08-08",false], -["Lagorio","Lev","Plein-air|Observational|landscapes|Realism|Painting|added-2023-08-08",false], -["Britenbucher","Renie","Fleeting-moments|Observational|Portraits|contemporary|Painting|added-2023-08-08",false], -["Holloway","Zena","Photography|British|underwater|animals|portraits|added-2023-08-08",false], -["Pinturicchio","","Painting|Renaissance|Religion|Allegory|added-2023-08-08",false], -["Cold","Chris","Activism|Social-commentary|Land-Art|contemporary|Painting|added-2023-08-08",true], -["Spriggs","Ian","Surreal|Symbolist|Illustration|contemporary|Painting|added-2023-08-08",true], -["Marcela-Froideval","François","Fantasy|Science-fiction|contemporary|Graphic-novel|added-2023-08-08",false], -["Caniglia","Jeremy","dark|Satire|Surrealism|contemporary|Painting|added-2023-08-08",true], -["Nagy","Tibor","Symbolist|metaphysics|abstract|contemporary|Sculpture|added-2023-08-08",false], -["Münter","Gabriele","expressionism|Symbolist|Expressionism|Painting|added-2023-08-08",false], -["Fouquet","Jean","Religion|Allegory|Renaissance|renaissance|Painting|added-2023-08-08",false], -["Gorky","Arshile","Surreal|Symbolist|abstract-Expressionism|Surrealism|Painting|Drawing|added-2023-08-08",false], -["Raphael","","Renaissance|Painting|added-2023-08-08",false], -["Ross","Bob","Commercial-art|Consumerism|landscapes|contemporary|Painting|added-2023-08-08",false], -["Mosina","Inna","Femininity|identity|Ballet|Photography|contemporary|Sculpture|added-2023-08-08",false], -["Disney","Walt","Fantasy|Adventure|Cartoon|contemporary|Animation|added-2023-08-08",false], -["Lasdun","Denys","Architecture|metaphysics|contemporary|added-2023-08-08",false], -["Ravesteyn","Jan van","Observational|Plein-air|Baroque|Architecture|Sculpture|added-2023-08-08",false], -["HUSH","","Street-art|Activism|Painting|added-2023-08-08",false], -["Heysen","Nora","Femininity|Consumerism|landscapes|contemporary|Painting|added-2023-08-08",false], 
-["Fumito","Ueda","Dream-like|Surreal|video-games|contemporary|Video-art|added-2023-08-08",true], -["Watts","James Thomas","Symbolist|Allegory|Victorian|Painting|added-2023-08-08",true], -["Saarinen","Eero","Architecture|metaphysics|modern|Modern|added-2023-08-08",false], -["Fautrier","Jean","Metaphysics|abstract-expressionism|Painting|Sculpture|added-2023-08-08",false], -["Davis","Jim","comics|Satire|Illustration|contemporary|added-2023-08-08",true], -["Taaffe","Philip","Surreal|Symbolist|abstract|contemporary|Painting|added-2023-08-08",false], -["Permeke","Constant","expressionism|Symbolist|Expressionism|Painting|Sculpture|added-2023-08-08",false], -["Qwek","Dom","Fantasy|Adventure|contemporary|Illustration|added-2023-08-08",true], -["Solomon","Barbara Stauffacher","Pop-art|Commercial-art|Graphic-Design|contemporary|Graphic-design|added-2023-08-08",false], -["Vivanco","Kelly","Femininity|Consumerism|Sculpture|contemporary|Photography|added-2023-08-08",false], -["Grasso","Laurent","Surreal|Conceptual|Surrealism|contemporary|Sculpture|added-2023-08-08",false], -["Francés","Victoria","expressionism|Metaphysics|abstract|contemporary|Painting|added-2023-08-08",true], -["Fegredo","Duncan","Fantasy|Adventure|comics|contemporary|Illustration|added-2023-08-08",true], -["Shwedoff","Yuri","Surreal|Fantasy|contemporary|Illustration|added-2023-08-08",false], -["Nicholson","William","Observational|Slice-of-life|abstract|Modern|Painting|added-2023-08-08",false], -["Cotton","Olive","Australian|Modern|photography|monochromatic|nature|added-2023-08-08",false], -["Clausen","George","Observational|Plein-air|Realism|Painting|added-2023-08-08",false], -["Howitt","Alex","Fleeting-moments|Slice-of-life|Illustration|contemporary|Painting|added-2023-08-08",false], -["Cormon","Fernand","impressionism|Observational|Realism|Painting|added-2023-08-08",false], -["Sueur","Eustache Le","impressionism|Fleeting-moments|portraits|Baroque|Painting|added-2023-08-08",false], -["Williams","Kyffin","Surreal|Symbolist|landscapes|contemporary|Painting|added-2023-08-08",false], -["Hegarty","Valerie","Social-commentary|metamorphosis|sculpture|Painting|added-2023-08-08",false], -["Telgemeier","Raina","autobiographical|Slice-of-life|comics|graphic-novel|contemporary|Graphic-novel|added-2023-08-08",false], -["Mashkov","Ilya","expressionism|Symbolist|russian|painting|added-2023-08-08",false], -["Steinlen","Théophile","Observational|Allegory|Art-Nouveau|Printmaking|added-2023-08-08",false], -["Bissell","Robert","impressionism|Plein-air|wildlife|contemporary|painting|animals|nature|whimsical|kids-book|fantasy|mysterious|added-2023-08-08",false], -["Lhote","André","Symbolist|impressionism|Cubism|Painting|added-2023-08-08",false], -["Morris","Sarah","Femininity|identity|abstract|contemporary|Painting|added-2023-08-08",false], -["Truitt","Anne","minimalism|Conceptual|Minimalism|Sculpture|added-2023-08-08",false], -["Launay","Melissa","Surreal|Symbolist|abstract|contemporary|Painting|added-2023-08-08",false], -["Roy","Pierre","impressionism|Observational|abstract|contemporary|Painting|added-2023-08-08",true], -["Jiaying","He","Femininity|identity|Realism|contemporary|Painting|added-2023-08-08",false], -["Achenbach","Andreas","Plein-air|Observational|landscapes|Romanticism|Painting|added-2023-08-08",false], -["Barnet","Will","Activism|Social-commentary|abstract|contemporary|Painting|added-2023-08-08",false], -["Bellotto","Bernardo","Observational|Plein-air|landscapes|Rococo|Painting|added-2023-08-08",false], -["Bernini","Gian 
Lorenzo","Religion|Allegory|Baroque|Sculpture|added-2023-08-08",false], -["Herriman","George","Satire|politics|comics|contemporary|Illustration|added-2023-08-08",false], -["Wooten","Ben","Femininity|identity|abstract|contemporary|Painting|added-2023-08-08",true], -["Sudworth","Anne","Femininity|Metaphysics|landscapes|contemporary|Illustration|added-2023-08-08",true], -["Belkina","Katerina","Femininity|identity|portraits|contemporary|Photography|added-2023-08-08",false], -["Parrish","Maxfield","Fantasy|Nostalgia|Art-Nouveau|Painting|added-2023-08-08",false], -["Fleischer","Max","comics|dark|Animation|contemporary|added-2023-08-08",false], -["Oshii","Mamoru","Science-fiction|Metaphysics|manga-anime|contemporary|Animation|added-2023-08-08",false], -["Etchells","Tim","Conceptual|metaphysics|conceptual|contemporary|Painting|added-2023-08-08",false], -["Mutu","Wangechi","Feminism|identity|Collage|contemporary|Mixed-media|added-2023-08-08",false], -["Chambers","Tom","Fleeting-moments|Observational|abstract|contemporary|Illustration|added-2023-08-08",false], -["Maillol","Aristide","Surreal|metaphysics|modern|Art-Nouveau|Sculpture|added-2023-08-08",false], -["the Younger","Hans Holbein","anthropomorphism|portraits|Renaissance|Painting|added-2023-08-08",false], -["Werkman","H.N.","activism|Typography|Printmaking|added-2023-08-08",true], -["Seliger","Mark","Anxiety|Metaphysics|Portraits|Photography|contemporary|added-2023-08-08",false], -["Loughridge","Lee","autobiographical|Conceptual|abstract|contemporary|Illustration|added-2023-08-08",true], -["Andreev","Alex","Death|Displacement|Surrealism|contemporary|Painting|added-2023-08-08",false], -["Zerbe","Karl","Documentary|Dreams|Surrealism|Expressionism|Painting|added-2023-08-08",true], -["Addams","Charles","Social-commentary|Cartoon|contemporary|Illustration|added-2023-08-08",false], -["Castelfranco","Giorgio Barbarelli da","Environmentalism|Fantasy|Rococo|Renaissance|Painting|added-2023-08-08",false], -["Fuke","Ryohei","Fleeting-moments|identity|landscapes|contemporary|Painting|added-2023-08-08",false], -["Gahō","Hashimoto","Kitsch|Politics|Printmaking|ukiyo-e|added-2023-08-08",false], -["Bergland","Don","Religion|Social-realism|landscapes|contemporary|Photography|added-2023-08-08",true], -["Manara","Milo","Controversy|Femininity|erotica|Comics|Illustration|added-2023-08-08",false], -["Guanzhong","Wu","Feminism|Homo-eroticism|landscapes|contemporary|Illustration|added-2023-08-08",false], -["Johns","Jasper","Dream-like|Mysticism|abstract-Expressionism|Painting|Printmaking|added-2023-08-08",false], -["Kelsner","Alfred","Metamorphosis|Surreal|abstract|contemporary|Painting|added-2023-08-08",false], -["Mulready","Augustus Edwin","Symbolist|Commercial-art|Realism|Romanticism|Painting|added-2023-08-08",false], -["Moonan","John","Nostalgia|Slice-of-life|abstract|contemporary|Painting|added-2023-08-08",true], -["Dauterman","Russell","Observational|Plein-air|comics|superheroes|contemporary|Illustration|added-2023-08-08",true], -["Vogelsang","Elke","abstract|contemporary|Painting|added-2023-08-08",false], -["Ledroit","Olivier","comics|Fantasy|fantasy|Illustration|added-2023-08-08",true], -["Casson","A. 
J.","Mathematics|Punk|landscapes|contemporary|Painting|added-2023-08-08",false], -["Gray","Eileen","Friendship|Loneliness|abstract|contemporary|Painting|added-2023-08-08",false], -["Olsen","Greg","outer-space|Spirituality|Wildlife|contemporary|Painting|added-2023-08-08",false], -["Jover","Loui","eerie|satire|Illustration|contemporary|added-2023-08-08",false], -["Veeber","Kuno","Science-fiction|Exoticism|abstract|contemporary|Painting|added-2023-08-08",true], -["Musgrove","Scott","Adventure|Advertising|landscapes|contemporary|Illustration|added-2023-08-08",false], -["Munnings","Alfred","horses|modern|Painting|added-2023-08-08",false], -["Abbott","Elenore","fantasy|watercolor|art-nouveau|dream-like|ethereal|romanticism|pastel-colors|femininity|mythology|added-2023-08-08",false], -["Anderson","Richard","digital|fantasy|dark|messy|surreal|gothic|horror|psychedelic|added-2023-08-08",false], -["Argyle","Steve","fantasy|characters|whimsical|colorful|cartoon|playful|added-2023-08-08",true], -["Bagshaw","Tom","characters|dark|fantasy|surreal|horror|eerie|melancholy|added-2023-08-08",false], -["Balaskas","Christopher","vibrant|digital|landscapes|science-fiction|futuristic|eerie|outer-space|added-2023-08-08",false], -["Bana","Benedick","characters|science-fiction|messy|monochromatic|3D-rendering|grungy|industrial|cyberpunk|dystopia|added-2023-08-08",false], -["Barker","Cicely Mary","fantasy|whimsical|characters|folklore|magic|nostalgia|added-2023-08-08",false], -["Barlowe","Wayne","science-fiction|fantasy|dark|alien-worlds|dystopia|mythology|creatures|eerie|added-2023-08-08",false], -["Bautista","Chiara","fantasy|dark|whimsical|dream-like|mysterious|surreal|magic|illusion|added-2023-08-08",false], -["Bean","Alan","science-fiction|outer-space|metaphysics|astronauts|painting|added-2023-08-08",false], -["Becket-Griffith","Jasmine","fantasy|portraits|whimsical|vibrant|gothic|fairies|magic|romanticism|added-2023-08-08",false], -["Bell","Julie","fantasy|nature|dragons|magic|mythology|wilderness|added-2023-08-08",false], -["Bergsma","Jody","watercolor|fantasy|whimsical|fairies|mythology|dream-like|magic-realism|ethereal|added-2023-08-08",false], -["Berkey","John","fantasy|science-fiction|eerie|outer-space|futuristic|added-2023-08-08",false], -["Bilal","Enki","science-fiction|comics|cyberpunk|urban-life|grungy|futuristic|dystopia|surreal|added-2023-08-08",false], -["Binkley","Ed","fantasy|mythology|dream-like|magic|ethereal|whimsical|added-2023-08-08",false], -["Bogle","Lee","portraits|fantasy|surreal|dream-like|eerie|ethereal|added-2023-08-08",false], -["Bonestell","Chesley","science-fiction|alien-worlds|outer-space|futuristic|added-2023-08-08",false], -["Bosma","Sam","cartoon|comics|fantasy|characters|playful|whimsical|colorful|animation|added-2023-08-08",false], -["Bosschart","Johfra","whimsical|fantasy|surreal|dream-like|magic|mythology|ethereal|added-2023-08-08",false], -["Boulet","Susan Seddon","fantasy|magic-realism|nature|whimsical|ethereal|magic|dream-like|femininity|added-2023-08-08",false], -["Bowater","Charlie","fantasy|digital|portraits|characters|dark|gothic|eerie|added-2023-08-08",false], -["Bradley","Noah","dark|landscapes|fantasy|eerie|added-2023-08-08",false], -["Briclot","Aleksi","fantasy|dark|grungy|dystopia|horror|gothic|added-2023-08-08",false], -["Brom","Gerald","dark|fantasy|horror|gothic|eerie|added-2023-08-08",false], -["Brooks","Mark","comics|fantasy|science-fiction|added-2023-08-08",false], -["Brown","Patrick","fantasy|comics|colorful|added-2023-08-08",true], 
-["Burdisio","Alejandro","digital|landscapes|fantasy|science-fiction|dark|atmospheric|eerie|magic|added-2023-08-08",false], -["Burns","Jim","science-fiction|characters|cyberpunk|grungy|urban-life|dark|futuristic|dystopia|noir|added-2023-08-08",false], -["Cai","Zhichao","fantasy|digital|whimsical|dream-like|magic|ethereal|surreal|added-2023-08-08",false], -["Caldwell","Clyde","fantasy|science-fiction|mythology|female-figures|pulp|added-2023-08-08",false], -["Callebaut","Vincent","architecture|science-fiction|3D-rendering|futuristic|surreal|fantasy|cyberpunk|utopia|dystopia|added-2023-08-08",false], -["Canete","Eric","fantasy|characters|comics|superheroes|added-2023-08-08",false], -["Carman","Bill","fantasy|pop-art|surrealism|whimsical|playful|psychedelic|added-2023-08-08",false], -["Chen","Bo","fantasy|magic|whimsical|dream-like|ethereal|illusion|added-2023-08-08",true], -["Christensen","James C.","whimsical|fantasy|mythology|ethereal|mysterious|magic|dream-like|American|illustration|kids-book|religion|magic|added-2023-08-08",false], -["Clark","Amanda","fantasy|landscapes|characters|watercolor|dream-like|magic|whimsical|ethereal|added-2023-08-08",false], -["Corben","Richard","science-fiction|horror|comics|dark|eerie|added-2023-08-08",false], -["Dean","Roger","landscapes|fantasy|science-fiction|magic|eerie|dream-like|ethereal|posters|added-2023-08-08",false], -["Deharme","Lise","fantasy|whimsical|dream-like|magic|ethereal|surreal|added-2023-08-08",true], -["Dell'otto","Gabriele","comics|fantasy|colorful|added-2023-08-08",false], -["Delort","Nicolas","monochromatic|fantasy|dark|gothic|horror|eerie|labyrinths|added-2023-08-08",false], -["Delville","Jean","fantasy|surrealism|dream-like|magic|metaphysics|added-2023-08-08",false], -["Demizu","Posuka","manga-anime|fantasy|whimsical|colorful|playful|adventure|contemporary|illustration|added-2023-08-08",false], -["Deschambault","Martin","digital|landscapes|science-fiction|eerie|minimalism|atmospheric|mysterious|futuristic|added-2023-08-08",false], -["Deschamps","Eric","fantasy|science-fiction|digital|surreal|added-2023-08-08",true], -["Detmold","Charles Maurice","fantasy|watercolor|art-nouveau|ethereal|mythology|magic|opulent|dream-like|added-2023-08-08",false], -["Detmold","Edward Julius","fantasy|watercolor|art-nouveau|ethereal|mythology|magic|opulent|dream-like|British|illustration|kids-book|Victorian|animals|nature|botanical|delicate|added-2023-08-08",false], -["DiTerlizzi","Tony","fantasy|whimsical|magic|creatures|playful|added-2023-08-08",false], -["Dittmann","Anna","portraits|fantasy|digital|ethereal|dream-like|mysterious|added-2023-08-08",false], -["Dorman","Dave","science-fiction|horror|fantasy|photorealism|dark|added-2023-08-08",false], -["Drysdale","TJ","photography|fantasy|landscapes|magic|dream-like|ethereal|eerie|added-2023-08-08",false], -["Earle","Eyvind","magic-realism|magic-realism|fantasy|high-contrast|dream-like|whimsical|surreal|colorful|added-2023-08-08",false], -["Easley","Jeff","fantasy|added-2023-08-08",false], -["Edlin","Tyler","fantasy|digital|landscapes|dream-like|ethereal|magic|whimsical|added-2023-08-08",true], -["Edmiston","Jason","fantasy|horror|characters|portraits|illustration|dark|eerie|monochromatic|ethereal|added-2023-08-08",false], -["Edwards","Les","science-fiction|horror|fantasy|illustration|outer-space|creatures|dark|added-2023-08-08",true], -["Eggleton","Bob","science-fiction|fantasy|horror|illustration|aliens|landscapes|colorful|dream-like|added-2023-08-08",true], 
-["Ejsing","Jesper","fantasy|illustration|characters|mythology|whimsical|magic|adventure|added-2023-08-08",false], -["Ellger","Christine","fantasy|magic-realism|illustration|dream-like|folklore|ethereal|surreal|added-2023-08-08",false], -["Ellis","Dean","science-fiction|vibrant|illustration|cyberpunk|futuristic|technology|neon|added-2023-08-08",true], -["Elmore","Larry","fantasy|illustration|superheroes|medieval|battle-scenes|added-2023-08-08",false], -["Elorza","Joseba","photography|surrealism|collage|science-fiction|outer-space|dream-like|abstract|added-2023-08-08",false], -["Elson","Peter","science-fiction|outer-space|illustration|space-ships|robots-cyborgs|futuristic|added-2023-08-08",false], -["Emshwiller","Ed","science-fiction|illustration|outer-space|aliens|pulp|colorful|added-2023-08-08",false], -["Eng","Kilian","digital|landscapes|science-fiction|fantasy|atmospheric|added-2023-08-08",false], -["Engle","Jason A.","fantasy|science-fiction|dark|illustration|creatures|added-2023-08-08",false], -["Fabry","Glenn","fantasy|science-fiction|comics|illustration|grungy|violence|added-2023-08-08",false], -["Fairhurst","Andy","science-fiction|fantasy|horror|digital|illustration|eerie|added-2023-08-08",false], -["Falero","Luis Ricardo","figurativism|fantasy|nudes|painting|dream-like|romanticism|erotica|added-2023-08-08",false], -["Fate","Vincent Di","science-fiction|fantasy|illustration|outer-space|eerie|futuristic|added-2023-08-08",true], -["Ferez","Andrew","surrealism|fantasy|illustration|dream-like|fragmentation|eerie|added-2023-08-08",false], -["Finch","David","comics|fantasy|illustration|superheroes|grungy|noir|added-2023-08-08",false], -["Finlay","Virgil","comics|science-fiction|fantasy|horror|dark|high-contrast|pulp|eerie|added-2023-08-08",false], -["Finnstark","Anato","digital|fantasy|illustration|whimsical|magic|colorful|playful|added-2023-08-08",false], -["Fitzgerald","John Anster","fantasy|whimsical|illustration|folklore|pastel|magic|added-2023-08-08",false], -["Foss","Chris","vibrant|science-fiction|outer-space|illustration|psychedelic|alien-worlds|added-2023-08-08",false], -["Frazetta","Frank","fantasy|dark|illustration|barbarians|muscles|added-2023-08-08",false], -["Freas","Kelly","science-fiction|fantasy|illustration|adventure|eerie|colorful|added-2023-08-08",false], -["Froud","Brian","fantasy|dark|whimsical|illustration|fairies|mythology|magic|added-2023-08-08",false], -["Froud","Wendy","fantasy|dark|whimsical|illustration|fairies|mythology|magic|added-2023-08-08",false], -["Gaughan","Jack","science-fiction|vibrant|illustration|aliens|outer-space|colorful|alien-worlds|added-2023-08-08",false], -["Gerard","Justin","fantasy|whimsical|illustration|dream-like|folklore|magic|added-2023-08-08",true], -["Giancola","Donato","fantasy|science-fiction|illustration|mythology|painting|added-2023-08-08",false], -["Giger","H.R.","science-fiction|dark|monochromatic|painting|surreal|robots-cyborgs|horror|added-2023-08-08",false], -["Giraud","Jean","comics|psychedelic|surrealism|fantasy|science-fiction|illustration|dream-like|added-2023-08-08",false], -["Gonzalez","Josan","science-fiction|illustration|cyberpunk|futuristic|technology|grungy|atmospheric|added-2023-08-08",false], -["Guay","Rebecca","watercolor|digital|fantasy|illustration|dream-like|ethereal|magic|added-2023-08-08",false], -["Guidice","Rick","science-fiction|illustration|space-ships|outer-space|adventure|added-2023-08-08",false], 
-["Gurney","James","fantasy|landscapes|illustration|realism|painting|atmospheric|magic|added-2023-08-08",true], -["Gustafson","Scott","magic-realism|whimsical|kids-book|fantasy|illustration|playful|colorful|added-2023-08-08",false], -["Hardy","David A.","landscapes|science-fiction|illustration|outer-space|added-2023-08-08",true], -["Harris","John","dark|science-fiction|outer-space|messy|illustration|dystopia|grungy|added-2023-08-08",false], -["Hase","Ryohei","magic-realism|surrealism|fantasy|digital|illustration|dream-like|ethereal|mysterious|added-2023-08-08",false], -["Hideyoshi","Lorenz","digital|science-fiction|illustration|cyberpunk|futuristic|dark|neon|dystopia|added-2023-08-08",false], -["Hildebrandt","Brothers","fantasy|vibrant|illustration|superheroes|painting|added-2023-08-08",false], -["Hong","Kuang","fantasy|digital|dark|illustration|mythology|eerie|added-2023-08-08",true], -["Horkey","Aaron","fantasy|comics|illustration|etching|added-2023-08-08",false], -["Horley","Alex","fantasy|dark|characters|illustration|grungy|horror|added-2023-08-08",false], -["Horsley","Ralph","science-fiction|fantasy|whimsical|vibrant|dark|high-contrast|colorful|monochromatic|geometric|angular|added-2023-08-08",false], -["Howe","John","fantasy|dark|eerie|portraits|landscapes|nature|characters|added-2023-08-08",false], -["Huang","Shilin","fantasy|characters|dream-like|mysterious|magic|mythology|added-2023-08-08",false], -["Hughes","Edward Robert","romanticism|characters|fantasy|impressionism|whimsical|dream-like|ethereal|nostalgia|added-2023-08-08",false], -["Hutter","Michael","surrealism|fantasy|science-fiction|dream-like|horror|surreal|eerie|added-2023-08-08",false], -["Jansson","Alexander","fantasy|whimsical|dark|dream-like|mythology|surreal|added-2023-08-08",false], -["Jean","James","fantasy|mythology|colorful|mysterious|added-2023-08-08",false], -["Jia","Ruan","digital|portraits|fantasy|dark|colorful|futuristic|surreal|added-2023-08-08",true], -["Jones","Jeffrey Catherine","fantasy|figurativism|realism|added-2023-08-08",false], -["Jones","Peter Andrew","science-fiction|fantasy|futuristic|eerie|alien-worlds|outer-space|added-2023-08-08",false], -["Jusko","Joe","comics|fantasy|added-2023-08-08",false], -["Kaluta","M.W.","fantasy|whimsical|romanticism|nostalgia|victorian|dream-like|ethereal|added-2023-08-08",false], -["Karcz","Michal","landscapes|fantasy|science-fiction|photography|futuristic|surreal|eerie|added-2023-08-08",false], -["Katsuya","Terada","manga-anime|fantasy|portraits|colorful|magic|added-2023-08-08",false], -["Kelly","Ken","fantasy|characters|mythology|vibrant|whimsical|added-2023-08-08",true], -["Kikuchi","Hideyuki","horror|fantasy|manga-anime|dark|eerie|added-2023-08-08",false], -["Kirby","Jack","comics|science-fiction|added-2023-08-08",false], -["Koike","Kazuo","manga-anime|comics|fantasy|added-2023-08-08",false], -["Kon","Satoshi","whimsical|high-contrast|fantasy|manga-anime|surreal|dream-like|added-2023-08-08",false], -["Kutsche","Michael K","characters|fantasy|dark|whimsical|dream-like|mysterious|mythology|added-2023-08-08",false], -["Kuvshinov","Ilya","vibrant|digital|fantasy|manga-anime|dream-like|romanticism|ethereal|surreal|added-2023-08-08",false], -["Lacoste","Raphael","fantasy|landscapes|dark|mysterious|atmospheric|eerie|dream-like|added-2023-08-08",false], -["Langley","Clint","comics|fantasy|digital|dark|grungy|urban-life|dystopia|noir|added-2023-08-08",true], -["Lecouffe-Deharme","Bastien","digital|dark|fantasy|characters|surreal|ethereal|magic|added-2023-08-08",false], 
-["Lee","Alan","fantasy|romanticism|dream-like|nostalgia|mythology|whimsical|ethereal|added-2023-08-08",false], -["Lehr","Paul","science-fiction|fantasy|vibrant|colorful|high-contrast|futuristic|eerie|surreal|added-2023-08-08",false], -["Lewandowski","Mariusz","fantasy|surrealism|dark|dream-like|mysterious|eerie|added-2023-08-08",true], -["Liefeld","Rob","comics|science-fiction|fantasy|added-2023-08-08",false], -["Madureira","Joe","comics|fantasy|added-2023-08-08",false], -["Maitz","Don","science-fiction|fantasy|eerie|futuristic|surreal|added-2023-08-08",false], -["Maleev","Alex","comics|fantasy|high-contrast|dark|noir|added-2023-08-08",false], -["Maniak","Slawomir","fantasy|dark|surreal|dream-like|eerie|mysterious|added-2023-08-08",true], -["Manzanedo","Antonio J.","fantasy|dark|characters|mysterious|added-2023-08-08",false], -["Mars","Chris","surrealism|dark|fantasy|dream-like|eerie|abstract|added-2023-08-08",true], -["Martinière","Stephan","science-fiction|fantasy|landscapes|dark|futuristic|surreal|atmospheric|added-2023-08-08",false], -["Matthews","Rodney","fantasy|science-fiction|futuristic|eerie|colorful|added-2023-08-08",false], -["Mattingly","David B.","science-fiction|fantasy|eerie|futuristic|surreal|vibrant|added-2023-08-08",true], -["Mayhew","Mike","comics|fantasy|portraits|added-2023-08-08",false], -["McCaffrey","Anne","dragons|fantasy|science-fiction|mythology|adventure|magic|added-2023-08-08",false], -["McCall","Robert","science-fiction|outer-space|futuristic|added-2023-08-08",false], -["McFarlane","Todd","comics|fantasy|dark|added-2023-08-08",false], -["McKie","Angus","vibrant|science-fiction|fantasy|futuristic|added-2023-08-08",false], -["McPharlin","Dan","science-fiction|vibrant|surrealism|abstract|dream-like|ethereal|magic|added-2023-08-08",false], -["McQuarrie","Ralph","science-fiction|landscapes|eerie|futuristic|added-2023-08-08",false], -["McQue","Ian","science-fiction|fantasy|messy|grungy|dark|surreal|added-2023-08-08",false], -["Mead","Syd","flat-colors|science-fiction|abstract|angular|futuristic|minimalism|technology|modern|added-2023-08-08",false], -["Minguez","Victor Adame","fantasy|characters|digital|colorful|whimsical|mysterious|added-2023-08-08",true], -["Moebius","","comics|psychedelic|surrealism|fantasy|science-fiction|dream-like|added-2023-08-08",false], -["Mohrbacher","Peter","surrealism|fantasy|dark|whimsical|dream-like|ethereal|mythology|added-2023-08-08",false], -["Monge","Jean-Baptiste","dark|fantasy|surreal|eerie|mysterious|added-2023-08-08",false], -["Moore","Alan","comics|graphic-novel|dark|science-fiction|horror|fantasy|dystopia|grungy|noir|added-2023-08-08",false], -["Moore","Chris","science-fiction|high-contrast|cyberpunk|dystopia|technology|futuristic|added-2023-08-08",true], -["Moore","Tony","comics|horror|science-fiction|magic|gothic|eerie|mythology|added-2023-08-08",true], -["Mullins","Craig","dark|fantasy|surrealism|mythology|dream-like|horror|added-2023-08-08",false], -["Mumford","Dan","digital|vibrant|fantasy|psychedelic|surrealism|colorful|dreams|added-2023-08-08",false], -["Nasmith","Ted","fantasy|landscapes|ethereal|magic|mythology|atmospheric|added-2023-08-08",false], -["Nauck","Todd","comics|characters|science-fiction|superheroes|adventure|added-2023-08-08",false], -["Nerdrum","Odd","dark|characters|fantasy|figurative|melancholy|added-2023-08-08",false], -["Nihei","Tsutomu","manga-anime|science-fiction|dark|monochromatic|cyberpunk|dystopia|industrial|alien-worlds|added-2023-08-08",false], 
-["Nirasawa","Yasushi","fantasy|characters|dark|creatures|mythology|monsters|added-2023-08-08",true], -["Nizovtsev","Victor","magic-realism|vibrant|whimsical|fantasy|magic|mysterious|dream-like|surreal|added-2023-08-08",false], -["Norem","Earl","fantasy|dark|battle-scenes|mythology|added-2023-08-08",false], -["Oakes","Terry","fantasy|science-fiction|magic|outer-space|colorful|adventure|added-2023-08-08",false], -["Ohrai","Noriyoshi","fantasy|science-fiction|futuristic|posters|vibrant|added-2023-08-08",false], -["Okon","Marek","digital|science-fiction|dark|surreal|robots-cyborgs|horror|magic|added-2023-08-08",true], -["Paick","James","digital|landscapes|fantasy|science-fiction|vibrant|ethereal|eerie|immersive|added-2023-08-08",true], -["Parkes","Michael","magic-realism|fantasy|art-nouveau|dream-like|spirituality|ethereal|added-2023-08-08",false], -["Parkinson","Keith","fantasy|mythology|whimsical|added-2023-08-08",true], -["Pennington","Bruce","science-fiction|fantasy|vibrant|landscapes|futuristic|outer-space|added-2023-08-08",false], -["Razell","Aliza","photography|fantasy|surrealism|dream-like|ethereal|eerie|conceptual|added-2023-08-08",false], -["Rebelka","Jakub","surrealism|fantasy|dream-like|illusion|added-2023-08-08",true], -["Rekunenko","Valentin","fantasy|surrealism|dream-like|whimsical|added-2023-08-08",false], -["Rigney","Brad","fantasy|characters|dark|mythology|surreal|added-2023-08-08",true], -["Rocha","Andreas","digital|landscapes|fantasy|dark|atmospheric|added-2023-08-08",false], -["Różalski","Jakub","landscapes|fantasy|science-fiction|battle-scenes|steampunk|futuristic|dystopia|added-2023-08-08",true], -["Ruas","Joao","dark|comics|characters|fantasy|gothic|noir|horror|added-2023-08-08",false], -["Rutkowski","Greg","digital|landscapes|fantasy|dark|atmospheric|surreal|eerie|added-2023-08-08",true], -["Shaw","Barclay","science-fiction|dark|angular|cyberpunk|futuristic|industrial|neon|added-2023-08-08",false], -["Shirow","Masamune","manga-anime|cartoon|comics|characters|fantasy|science-fiction|robots-cyborgs|added-2023-08-08",false], -["Simonetti","Marc","landscapes|digital|fantasy|dark|surreal|dream-like|added-2023-08-08",false], -["Smith","Adrian","dark|fantasy|digital|characters|grungy|added-2023-08-08",true], -["Sorayama","Hajime","characters|science-fiction|robots-cyborgs|futuristic|erotica|technology|added-2023-08-08",false], -["Sparth","","digital|fantasy|science-fiction|landscapes|futuristic|surreal|minimalism|abstract|added-2023-08-08",false], -["Stålenhag","Simon","landscapes|digital|science-fiction|nostalgia|rural-life|futurism|suburbia|eerie|added-2023-08-08",false], -["Staples","Greg","comics|fantasy|adventure|characters|colorful|added-2023-08-08",true], -["Stokes","Anne","fantasy|dark|characters|whimsical|mysterious|gothic|eerie|added-2023-08-08",false], -["Stout","William","dark|fantasy|mythology|gothic|added-2023-08-08",false], -["Struzan","Drew","portraits|fantasy|science-fiction|nostalgia|added-2023-08-08",false], -["Sum","Brian","science-fiction|digital|characters|cyberpunk|futuristic|added-2023-08-08",true], -["Suuronen","Matti","architecture|photography|science-fiction|futuristic|minimalism|modern|eerie|added-2023-08-08",true], -["Swanland","Raymond","fantasy|digital|dark|eerie|atmospheric|added-2023-08-08",false], -["Theurer","Heather","fantasy|romanticism|renaissance|ethereal|erotica|mythology|dream-like|baroque|added-2023-08-08",false], -["Thole","Karel","surrealism|dark|science-fiction|horror|dream-like|added-2023-08-08",true], 
-["Uno","Aquirax","surreal|metaphysics|contemporary|painting|fantasy|vibrant|portraits|dream-like|abstract|added-2023-08-08",true], -["Urschel","Jan","dark|digital|landscapes|science-fiction|atmospheric|dystopia|added-2023-08-08",true], -["Vacher","Christophe","cloudscapes|landscapes|fantasy|magic-realism|ethereal|dream-like|added-2023-08-08",false], -["Vess","Charles","fantasy|comics|magic|dream-like|mythology|whimsical|romanticism|added-2023-08-08",false], -["Walotsky","Ron","science-fiction|fantasy|surreal|futuristic|added-2023-08-08",true], -["Whelan","Michael","science-fiction|fantasy|dream-like|surreal|vibrant|eerie|outer-space|alien-worlds|added-2023-08-08",false], -["White","Tim","science-fiction|fantasy|landscapes|atmospheric|immersive|added-2023-08-08",false], -["Williams","Gilbert","fantasy|landscapes|whimsical|magic|nostalgia|added-2023-08-08",false], -["Williamson","Al","comics|science-fiction|fantasy|adventure|mythology|added-2023-08-08",false], -["Wong","Liam","photography|colorful|vibrant|science-fiction|futuristic|dystopia|urban-life|added-2023-08-08",false], -["Woodroffe","Patrick","science-fiction|surrealism|dream-like|illusion|eerie|added-2023-08-08",false], -["Zand","Amir","science-fiction|digital|vibrant|futuristic|robots-cyborgs|technology|added-2023-08-08",true], -["Moscoso","Victor","vibrant|psychedelic|art-nouveau|pop-art|typography|colorful|added-2023-08-10",false], -["Naismith","Scott","vibrant|seascapes|landscapes|abstract|impressionism|serenity|colorful|added-2023-08-10",false], -["Dmitriev","Dima","impressionism|landscapes|vibrant|figure-studies|nature|oil-painting|romanticism|high-contrast|added-2023-08-10",false], -["Rist","Pipilotti","vibrant|colorful|installation|video-art|immersive|dream-like|playful|added-2023-08-10",false], -["Ventrue","Eve","digital|dark|characters|illustration|femininity|gothic|fantasy|costumes|added-2023-08-10",false], -["Deforge","Michael","vibrant|cartoon|pop-art|surrealism|satire|whimsical|added-2023-08-10",false], -["Saryan","Martiros","vibrant|impressionism|landscapes|colorful|nature|wildlife|serenity|added-2023-08-10",false], -["Mosse","Richard","vibrant|colorful|photography|landscapes|surrealism|battle-scenes|documentary|added-2023-08-10",false], -["Adnan","Etel","abstract|vibrant|landscapes|colorful|nature|serenity|added-2023-08-10",false], -["Bocek","Anna","portraits|vibrant|figurativism|messy|colorful|added-2023-08-10",false], -["Bearden","Romare","cubism|vibrant|expressionism|collage|African-American|urban-life|history|added-2023-08-10",false], -["Erté","Romain de Tirtoff","art-deco|Russian|fashion|masks|theater|silhouettes|luxury|added-2023-08-10",false], -["Metzinger","Jean","cubism|abstract|vibrant|contemporary|modern|geometric|futuristic|added-2023-08-10",false], -["Grey","Alex","psychedelic|vibrant|contemporary|surrealism|abstract-expressionism|dream-like|colorful|added-2023-08-10",false], -["Luce","Maximilien","landscapes|impressionism|vibrant|nature|french|plein-air|oil-painting|romanticism",false], -["Turner","Pete","photography|vibrant|colorful|abstract|contemporary|impasto|ethereal|added-2023-08-10",false], -["LaChapelle","David","surrealism|pop-art|photography|vibrant|contemporary|conceptual|luxury|added-2023-08-10",false], -["Kaneko","Jun","abstract|sculpture|vibrant|contemporary|geometric|organic|added-2023-08-10",false], -["Gottlieb","Adolph","abstract|contemporary|abstract-expressionism|geometric|color-field|added-2023-08-10",false], -["Biggers","John 
T.","contemporary|modern|African-American|social-commentary|harlem-renaissance|mural-painting|added-2023-08-10",false], -["Nagai","Go","manga-anime|vibrant|portraits|childhood|added-2023-08-10",false], -["Scarry","Richard","kids-book|animals|anthropomorphism|vibrant|whimsical|contemporary|illustration|colorful|playful|added-2023-08-10",false], -["Ghailan","Atey","digital|manga-anime|fantasy|science-fiction|illustration|characters|dream-like|surrealism|added-2023-08-10",false], -["Armstrong","Rolf","characters|art-nouveau|art-deco|illustration|contemporary|fashion|added-2023-08-10",false], -["Blackman","Charles","vibrant|high-contrast|painting|portraits|colorful|added-2023-08-10",false], -["Fischinger","Oskar","abstract|vibrant|colorful|contemporary|avant-garde|spirituality|added-2023-08-10",false], -["Pesce","Gaetano","architecture|vibrant|contemporary|organic|futuristic|added-2023-08-10",false], -["Deakins","Roger","photography|vibrant|digital|contemporary|abstract|geometric|minimalism|added-2023-08-10",true], -["Groening","Matt","cartoon|vibrant|pop-culture|satire|colorful|whimsical|added-2023-08-10",false], -["Harper","Charley","vibrant|flat-colors|animals|nature|illustration|whimsical|playful|folk-art|added-2023-08-10",false], -["Mouly","Marcel","abstract|fauvism|vibrant|contemporary|modern|colorful|added-2023-08-10",false], -["Brooks","Troy","surrealism|portraits|vibrant|contemporary|oil-painting|dream-like|dark|impressionism|added-2023-08-10",false], -["Pechstein","Max","expressionism|vibrant|contemporary|modern|colorful|added-2023-08-10",false], -["Gangloff","Hope","high-contrast|vibrant|portraits|contemporary|expressionism|added-2023-08-10",false], -["Leger","Fernand","abstract|cubism|vibrant|contemporary|modern|geometric|futuristic|added-2023-08-10",false], -["Bonhomme","Olivier","surrealism|vibrant|colorful|contemporary|pop-art|whimsical|added-2023-08-10",true], -["Heilmann","Mary","abstract|vibrant|high-contrast|contemporary|minimalism|geometric|colorful|added-2023-08-10",false], -["Afremov","Leonid","vibrant|stained-glass|impressionism|nature|cityscapes|colorful|atmospheric|added-2023-08-10",false], -["Dyer","Chris","psychedelic|vibrant|colorful|contemporary|abstract|pop-art|surrealism|expressionism|added-2023-08-10",false], -["Ginner","Charles","vibrant|landscapes|cityscapes|urban-life|impressionism|added-2023-08-10",false], -["Hyde","Doug","whimsical|kids-book|vibrant|contemporary|illustration|colorful|playful|added-2023-08-10",false], -["Page","Michael","colorful|vibrant|pop-art|expressionism|contemporary|whimsical|playful|added-2023-08-10",false], -["Chihuly","Dale","abstract|sculpture|vibrant|contemporary|organic|added-2023-08-10",false], -["Delaunay","Sonia","art-deco|cubism|fauvism|abstract|French|modern|geometric|female-figures|fashion|added-2023-08-10",false], -["Azzopardi","Deborah","pop-art|cartoon|whimsical|femininity|fashion|comics|colorful|added-2023-08-10",false], -["Davenport","Ian","abstract|colorful|vibrant|contemporary|modern|geometric|added-2023-08-10",false], -["Icart","Louis","art-deco|impressionism|low-contrast|romanticism|femininity|dancers|urban-life|added-2023-08-10",false], -["Koch","Phil","landscapes|photography|vibrant|contemporary|nature|colorful|serenity|atmospheric|added-2023-08-10",false], -["Calleri","Fred","whimsical|portraits|vibrant|sculpture|expressionism|colorful|mixed-media|added-2023-08-10",false], -["Bomberg","David","cubism|vibrant|abstract|battle-scenes|expressionism|added-2023-08-10",false], 
-["Moureaux","Emmanuelle","installation|colorful|vibrant|abstract|contemporary|multimedia|sculpture|environmentalism|added-2023-08-10",false], -["Cappiello","Leonetto","graphic-design|vibrant|high-contrast|art-nouveau|color-field|mixed-media|posters|colorful|added-2023-08-10",false], -["Lalique","René","art-deco|art-nouveau|glasswork|jewelry|luxury|nature|French|sculpture|added-2023-08-10",false], -["Blanding","Don","art-deco|high-contrast|architecture|minimalism|added-2023-08-10",false], -["Mallett","Keith","figurativism|abstract|vibrant|sculpture|urban-life|modern|dark|minimalism|added-2023-08-10",false], -["Fink","Callie","psychedelic|vibrant|colorful|portraits|contemporary|pop-art|surrealism|expressionism|added-2023-08-10",false], -["Barbier","George","art-deco|art-nouveau|illustration|fashion|vibrant|costumes|theater|romanticism|added-2023-08-10",false], -["Billy","Butcher","graphic-design|pop-art|vibrant|comics|contemporary|colorful|characters|feminism|added-2023-08-10",false], -["Gacy","John Wayne","vibrant|portraits|dark|horror|clowns|death|added-2023-08-10",false], -["Blair","Mary","whimsical|high-contrast|vibrant|illustration|characters|childhood|nature|fantasy",false], -["Nay","Ernst Wilhelm","expressionism|abstract|vibrant|figurativism|colorful|modern|german|surrealism|added-2023-08-10",false], -["Phillips","Coles","art-deco|illustration|femininity|advertising|nostalgia|fashion|added-2023-08-10",false], -["Lempicka","Tamara de","cubism|art-deco|portraits|fashion|luxury|romanticism|added-2023-08-10",false], -["Yuumei","","digital|whimsical|characters|environmentalism|fantasy|dream-like|femininity|manga-anime|added-2023-08-10",false], -["Aitchison","Craigie","vibrant|primitivism|figurativism|expressionism|nature|added-2023-08-10",false], -["Stella","Frank","angular|abstract|expressionism|vibrant|cubism|colorful|geometric|modern|added-2023-08-10",false], -["Carlson","Larry","psychedelic|surrealism|digital|vibrant|colorful|abstract|nature|dream-like|added-2023-08-10",false], -["Wright","Frank Lloyd","architecture|art-deco|angular|organic|nature|environmentalism|added-2023-08-10",false], -["Ferriss","Hugh","cityscapes|architecture|art-deco|geometric|nightlife|urban-life|futuristic|added-2023-08-10",false], -["Foster","Jon","digital|portraits|abstract|minimalism|figurativism|contemporary|modern|added-2023-08-10",false], -["Sottsass","Ettore","colorful|art-deco|furniture|architecture|playful|sculpture|added-2023-08-10",false], -["Okubo","Naomi","vibrant|collage|identity|feminism|empowerment|politics|added-2023-08-10",false], -["Aarons","Slim","vibrant|photography|fashion|social-commentary|luxury|nostalgia|added-2023-08-10",false], -["Shiota","Chiharu","vibrant|installation|messy|low-contrast|environmentalism|conceptual|immersive|added-2023-08-10",false], -["Criswell","Debbie","vibrant|landscapes|whimsical|surrealism|playful|added-2023-08-10",false], -["Hironaka","Harumi","vibrant|portraits|manga-anime|watercolor|femininity|serenity|dream-like|added-2023-08-10",false], -["Allred","Mike","comics|vibrant|illustration|pop-art|colorful|whimsical|superheroes|added-2023-08-10",false], -["Agam","Yaacov","vibrant|colorful|abstract|angular|kinetic|illusion|interactive|added-2023-08-10",false], -["Frank","Lisa","whimsical|vibrant|colorful|illustration|childhood|playful|fantasy|added-2023-08-10",false], -["Ranson","Paul","abstract|vibrant|art-nouveau|nature|whimsical|fantasy|added-2023-08-10",false], 
-["Hanson","Erin","colorful|vibrant|impressionism|landscapes|nature|serenity|atmospheric|dream-like|added-2023-08-10",false], -["Scharf","Kenny","colorful|vibrant|pop-art|surrealism|psychedelic|whimsical|playful|added-2023-08-10",false], -["Hoyland","John","abstract|vibrant|contemporary|modern|geometric|color-field|added-2023-08-10",false], -["teamLab","","vibrant|colorful|installation|light-art|digital|immersive|interactive|technology|added-2023-08-10",false], -["Ngai","Victo","vibrant|kids-book|surrealism|illustration|dream-like|playful|added-2023-08-10",false], -["Asai","Miki","photography|vibrant|contemporary|nature|landscapes|abstract|minimalism|added-2023-08-10",false], -["Hamiti","Bess","landscapes|vibrant|magic-realism|contemporary|dream-like|surrealism|whimsical|impressionism|added-2023-08-10",false], -["Britto","Romero","colorful|vibrant|high-contrast|stained-glass|contemporary|pop-art|whimsical|playful|added-2023-08-10",false], -["Lijun","Fang","figurativism|vibrant|contemporary|portraits|realism|dutch|added-2023-08-10",false], -["Kurzgesagt","","vibrant|graphic-design|digital|minimalism|animation|outer-space|added-2023-08-10",false], -["Knight","Chad","vibrant|digital|surrealism|pop-art|collage|colorful|whimsical|playful|added-2023-08-10",false], -["Hewett","Ryan","vibrant|abstract|cubism|portraits|colorful|mysticism|added-2023-08-10",false], -["Agar","Eileen","vibrant|abstract|collage|surrealism|femininity|dream-like|nature|added-2023-08-10",false], -["Hughes","Jack","high-contrast|vibrant|portraits|flat-colors|contemporary|expressionism|added-2023-08-10",false], -["Boccioni","Umberto","cubism|colorful|vibrant|messy|contemporary|futurism|added-2023-08-10",false], -["Hodas","Filip","digital|3D-rendering|surrealism|conceptual|dream-like|dark|science-fiction|monochromatic",false], -["Ascher","Clemens","photography|vibrant|high-contrast|contemporary|minimalism|geometric|abstract|architecture|added-2023-08-10",false], -["Arkley","Howard","architecture|vibrant|colorful|contemporary|pop-art|whimsical|playful|futuristic|added-2023-08-10",false], -["Anderson","Wes","vibrant|whimsical|photography|film|nostalgia|surreal|colorful|added-2023-08-10",false], -["Jones","Lois Mailou","colorful|vibrant|contemporary|modern|geometric|abstract|identity|added-2023-08-10",true], -["Burch","Laurel","vibrant|high-contrast|illustration|femininity|nature|fantasy|whimsical|added-2023-08-10",false], -["Hundertwasser","Friedensreich","vibrant|colorful|messy|contemporary|expressionism|surrealism|abstract|organic|added-2023-08-10",false], -["Max","Peter","colorful|abstract|vibrant|contemporary|pop-art|surrealism|added-2023-08-10",false], -["Cooke","Darwyn","comics|cartoon|vibrant|contemporary|illustration|added-2023-08-10",false], -["Haygarth","Stuart","installation|vibrant|angular|colorful|contemporary|conceptual|added-2023-08-10",false], -["BurGerman","Jon","pop-art|colorful|vibrant|contemporary|illustration|playful|added-2023-08-10",false], -["Delaunay","Robert","abstract|cubism|vibrant|contemporary|modern|geometric|added-2023-08-10",false], -["Jones","Erik","vibrant|colorful|portraits|cubism|abstract|collage|added-2023-08-10",false], -["Fontana","Lucio","abstract|sculpture|conceptual|minimalism|modern|large-scale|installation|added-2023-08-10",false], -["Janson","Klaus","comics|high-contrast|vibrant|figurativism|pop-art|collage|graphic-novel|characters|added-2023-08-10",true], -["Jawlensky","Alexej 
von","expressionism|vibrant|portraits|colorful|modern|abstract|german|spirituality|added-2023-08-10",false], -["Schmidt-Rottluff","Karl","expressionism|vibrant|german|abstract|figurativism|colorful|japanese|woodblock|landscapes|added-2023-08-10",false], -["Cortright","Petra","expressionism|messy|vibrant|digital|abstract|nature|impressionism|added-2023-08-10",false], -["Wall","Josephine","psychedelic|colorful|vibrant|digital|pop-art|portraits|whimsical|femininity|added-2023-08-10",false], -["Gaffrey","Justin","sculpture|landscapes|vibrant|installation|minimalism|nature|large-scale|environmentalism|added-2023-08-10",false], -["RHADS","","digital|surrealism|landscapes|vibrant|mixed-media|magic-realism|added-2023-08-10",false], -["Bayer","Herbert","graphic-design|colorful|flat-colors|vibrant|bauhaus|typography|angular|contemporary|added-2023-08-10",false], -["Sienkiewicz","Bill","abstract|expressionism|grungy|comics|dark|figurativism|surrealism|pop-art|added-2023-08-10",false], -["Newland","Jane","vibrant|watercolor|nature|botanical|serenity|added-2023-08-10",false], -["Kngwarreye","Emily Kame","abstract|expressionism|vibrant|australian|Aboriginal|nature|landscapes|dream-like|added-2023-08-10",false], -["Eaton","Tristan","graphic-design|street-art|collage|vibrant|pop-art|characters|colorful|added-2023-08-10",false], -["Negley","Keith","vibrant|high-contrast|collage|illustration|mixed-media|pop-art|graphic-design|added-2023-08-10",false], -["Perceval","John","vibrant|messy|expressionism|abstract|added-2023-08-10",false], -["Marc","Franz","vibrant|expressionism|cubism|animals|colorful|spirituality|added-2023-08-10",false], -["Macke","August","expressionism|vibrant|contemporary|modern|colorful|abstract|impressionism|serenity|added-2023-08-10",false], -["Pelton","Agnes Lawrence","abstract|vibrant|contemporary|modern|ethereal|spirituality|serenity|color-field|added-2023-08-10",false], -["Indiana","Robert","flat-colors|graphic-design|vibrant|pop-art|contemporary|typography|added-2023-08-10",false], -["Beeple","","digital|3D-rendering|abstract|conceptual|science-fiction|cyberpunk|futuristic|added-2023-08-10",false], -["Loftis","Cory","digital|cartoon|whimsical|characters|childhood|nature|fantasy|added-2023-08-10",true], -["Corfield","Paul","vibrant|landscapes|cartoon|nature|whimsical|satire|playful|added-2023-08-10",false], -["Brood","Herman","pop-art|vibrant|childhood|added-2023-08-10",false], -["Birrell","George","cityscapes|vibrant|contemporary|urban-life|colorful|added-2023-08-10",false], -["Amaral","Tarsila do","surrealism|vibrant|cubism|contemporary|modern|abstract|added-2023-08-10",false], -["Gerstner","Karl","graphic-design|vibrant|abstract|colorful|contemporary|typography|geometric|added-2023-08-10",true], -["Kiuchi","Tatsuro","flat-colors|landscapes|digital|vibrant|flat-colors|whimsical|nature|urban-life|street-art|added-2023-08-10",false], -["Adamski","Josh","landscapes|contemporary|nature|photography|impressionism|atmospheric|serenity|added-2023-08-10",false], -["McGinley","Ryan","photography|vibrant|contemporary|childhood|portraits|dream-like|colorful|added-2023-08-10",false], -["Tartakovsky","Genndy","cartoon|vibrant|contemporary|animation|playful|whimsical|colorful|added-2023-08-10",false], -["Parc","Julio Le","vibrant|colorful|abstract|pop-art|graphic-design|playful|added-2023-08-10",false], -["Mahfood","Jim","comics|high-contrast|pop-art|graffiti|street-art|added-2023-08-10",false], 
-["Hodgkin","Howard","abstract|vibrant|contemporary|modern|color-field|nature|added-2023-08-10",false], -["Oiticica","Helio","abstract|vibrant|angular|installation|contemporary|multimedia|interactive|added-2023-08-10",false], -["Sage","Amanda","psychedelic|contemporary|surrealism|expressionism|whimsical|playful|added-2023-08-10",false], -["Schapiro","Miriam","abstract|vibrant|expressionism|contemporary|feminism|politics|added-2023-08-10",false], -["Fitzpatrick","Tony","collage|vibrant|contemporary|mixed-media|pop-art|colorful|whimsical|playful|added-2023-08-10",false], -["Murciano","Patrice","colorful|vibrant|portraits|contemporary|pop-art|surrealism|expressionism|added-2023-08-10",false], -["Buren","Daniel","high-contrast|vibrant|installation|sculpture|contemporary|conceptual|minimalism|added-2023-08-10",false], -["Sassen","Viviane","photography|vibrant|contemporary|abstract|surrealism|conceptual|geometric|added-2023-08-10",false], -["Caulfield","Patrick","colorful|vibrant|high-contrast|contemporary|pop-art|minimalism|geometric|added-2023-08-10",false], -["Aenami","Alena","vibrant|landscapes|digital|dream-like|surrealism|fantasy|serenity|atmospheric|added-2023-08-10",false], -["Young","Skottie","comics|cartoon|vibrant|contemporary|illustration|colorful|whimsical|playful|added-2023-08-10",false], -["Glaser","Milton","graphic-design|vibrant|colorful|contemporary|pop-art|whimsical|added-2023-08-10",false], -["Nagai","Hiroshi","landscapes|cityscapes|vibrant|high-contrast|japanese|minimalism|urban-life|photorealism|added-2023-08-10",false], -["Gilleard","James","flat-colors|digital|architecture|vibrant|landscapes|colorful|fantasy|futuristic|environmentalism|added-2023-08-10",false], -["Hagan","Robert","impressionism|landscapes|vibrant|nature|colorful|dream-like|romanticism|added-2023-08-10",false], -["Hammick","Tom","landscapes|figurativism|vibrant|multimedia|nature|dream-like|added-2023-08-10",false], -["Stella","Joseph","angular|abstract|expressionism|vibrant|cubism|geometric|modern|minimalism|added-2023-08-10",false], -["Skoglund","Sandy","installation|photography|surrealism|vibrant|contemporary|conceptual|whimsical|still-life|added-2023-08-10",false], -["Fruin","Tom","sculpture|stained-glass|architecture|installation|vibrant|contemporary|colorful|geometric|multimedia|added-2023-08-10",false], -["Fox","Toby","digital|cartoon|whimsical|childhood|fantasy|nature|animals|comics|added-2023-08-10",false], -["Prades","Simon","digital|surrealism|whimsical|conceptual|dream-like|contemporary|pop-art|magic-realism|added-2023-08-10",false], -["Saray","Rebeca","digital|photography|portraits|conceptual|contemporary|femininity|identity|added-2023-08-10",false], -["Cushart","Krenz","digital|manga-anime|characters|portraits|illustration|fantasy|whimsical|added-2023-08-10",false], -["Jones","Android","digital|psychedelic|conceptual|surrealism|dream-like|geometric|colorful|added-2023-08-10",false], -["Smith","Jeffrey","digital|surrealism|landscapes|magic-realism|cloudscapes|dark|dream-like|conceptual|added-2023-08-10",true], -["Apterus","Sabbas","digital|dark|abstract|conceptual|surrealism|dream-like|monochromatic|added-2023-08-10",false], -["Baarle","Lois van","digital|characters|illustration|fantasy|femininity|whimsical|dream-like|added-2023-08-10",false], -["Gillett","Leticia","digital|characters|3D-rendering|fantasy|science-fiction|whimsical|childhood|costumes|added-2023-08-10",true], 
-["Inceoglu","Ismail","digital|landscapes|architecture|urban-life|conceptual|minimalism|colorful|futuristic|added-2023-08-10",true], -["Lisowski","Michal","digital|dark|surrealism|conceptual|dream-like|fantasy|gothic|sculpture|added-2023-08-10",true], -["Martinakis","Adam","digital|3D-rendering|sculpture|conceptual|futuristic|dream-like|multimedia|virtual-reality|added-2023-08-10",false], -["Winkelmann","Mike","digital|conceptual|abstract|minimalism|geometric|color-field|contemporary|added-2023-08-10",false], -["Koresh","Omri","dark|digital|surrealism|conceptual|dream-like|gothic|monochromatic|added-2023-08-10",false], -["Viveros","Brian M.","digital|portraits|whimsical|femininity|surrealism|fantasy|contemporary|dream-like|added-2023-08-10",false], -["Tran","Ross","portraits|digital|realism|conceptual|minimalism|figurativism|manga-anime|femininity|added-2023-08-10",false], -["Crain","Clayton","comics|digital|fantasy|whimsical|illustration|science-fiction|characters|added-2023-08-10",false], -["Cheng","Yanjun","portraits|digital|femininity|whimsical|contemporary|romanticism|dream-like|illustration|added-2023-08-10",false], -["Campau","Mike","digital|3D-rendering|conceptual|contemporary|urban-life|landscapes|added-2023-08-10",false], -["Fadeev","Anton","landscapes|digital|abstract|impressionism|vibrant|colorful|dream-like|nature|added-2023-08-10",true], -["Kashin","Wadim","messy|dark|digital|surrealism|urban-life|street-art|expressionism|whimsical|added-2023-08-10",true], -["Shau","Natalie","surrealism|digital|characters|fantasy|whimsical|dream-like|femininity|mixed-media|added-2023-08-10",false], -["Cheng","Hsiao-Ron","portraits|digital|pop-art|femininity|fashion|colorful|minimalism|mixed-media|added-2023-08-10",false], -["WLOP","","characters|portraits|digital|fantasy|manga-anime|femininity|added-2023-08-10",false], -["Bleda","Elsa","photography|dark|digital|urban-life|environmentalism|social-commentary|added-2023-08-10",true], -["Rossier","Jessica","outer-space|landscapes|digital|dark|surrealism|conceptual|whimsical|spirituality|added-2023-08-10",false], -["Jewett","Ellen","sculpture|surrealism|digital|installation|abstract|expressionism|whimsical|nature|added-2023-08-10",false], -["Jung","Matthias","architecture|surrealism|digital|conceptual|minimalism|dream-like|futuristic|environmentalism|added-2023-08-10",false], -["Olschinsky","Atelier","cityscapes|abstract|digital|modern|minimalism|geometric|added-2023-08-10",false], -["Wolfers","Philippe","Belgian|art-nouveau|jewelry|sculpture|metalwork|flowers|ornate|added-2023-08-12",true], -["Tenniel","Sir John","British|illustration|kids-book|fantasy|whimsical|Victorian|added-2023-08-12",false], -["Crane","Walter","British|illustration|kids-book|folklore|nostalgia|engraving|added-2023-08-12",false], -["Caldecott","Randolph","British|illustration|kids-book|animals|nature|playful|added-2023-08-12",false], -["Greenaway","Kate","British|illustration|kids-book|fashion|Victorian|childhood|romanticism|added-2023-08-12",false], -["Pyle","Howard","American|illustration|kids-book|adventure|history|colorful|posters|added-2023-08-12",false], -["Willcox Smith","Jessie","American|illustration|kids-book|childhood|nostalgia|whimsical|folklore|added-2023-08-12",false], -["Rackham","Arthur","British|illustration|kids-book|fantasy|magic|creatures|added-2023-08-12",false], -["Shippen Green","Elizabeth","American|illustration|kids-book|fairies|dream-like|added-2023-08-12",false], -["Craft","Kinuko 
Y.","American|illustration|kids-book|fantasy|folklore|colorful|dream-like|royalty|added-2023-08-12",false], -["Bilibin","Ivan","Russian|illustration|kids-book|folklore|ornate|mythology|added-2023-08-12",false], -["Sowerby","Millicent","British|illustration|kids-book|botanical|nature|flowers|added-2023-08-12",false], -["Dulac","Edmund","French|orientalism|illustration|kids-book|folklore|romanticism|dream-like|magic|added-2023-08-12",false], -["Pogany","Willy","Hungarian|American|illustration|kids-book|whimsical|ornate|fantasy|added-2023-08-12",false], -["Wyeth","N.C.","American|illustration|kids-book|realism|rural-life|nature|nostalgia|added-2023-08-12",false], -["Tarrant","Margaret","British|illustration|kids-book|folklore|colorful|dream-like|whimsical|added-2023-08-12",false], -["Saint-Exupery","Antoine de","French|illustration|kids-book|adventure|spirituality|whimsical|added-2023-08-12",false], -["Wulfing","Sulamith","German|illustration|kids-book|dream-like|fantasy|whimsical|ethereal|spirituality|added-2023-08-12",false], -["Sendak","Maurice","American|illustration|kids-book|whimsical|fantasy|wilderness|added-2023-08-12",false], -["van Allsburg","Chris","American|illustration|kids-book|mysterious|adventure|psychedelic|added-2023-08-12",false], -["Barrett","Angela","kids-book|animals|playful|whimsical|fantasy|added-2023-08-12",false], -["Berenstain","Stan","kids-book|cartoon|family|animals|whimsical|playful|added-2023-08-12",false], -["Carle","Eric","kids-book|colorful|interactive|animals|playful|added-2023-08-12",false], -["Gammell","Stephen","dark|kids-book|high-contrast|horror|eerie|added-2023-08-12",false], -["Goble","Warwick","whimsical|art-nouveau|kids-book|folklore|nature|vibrant|added-2023-08-12",false], -["Gorey","Edward","high-contrast|monochromatic|dark|kids-book|gothic|mysterious|horror|eerie|added-2023-08-12",false], -["Grimm","Brothers","art-nouveau|kids-book|folklore|magic|characters|added-2023-08-12",false], -["Grimwood","Tracie","colorful|whimsical|kids-book|playful|fantasy|dream-like|added-2023-08-12",false], -["Harrison","Florence","art-nouveau|kids-book|romanticism|whimsical|delicate|dream-like|added-2023-08-12",false], -["Hatke","Ben","cartoon|kids-book|characters|adventure|playful|whimsical|added-2023-08-12",false], -["Jansson","Tove","cartoon|kids-book|playful|whimsical|adventure|added-2023-08-12",false], -["Jeffers","Oliver","cartoon|kids-book|whimsical|colorful|playful|added-2023-08-12",false], -["Keane","Glen","cartoon|kids-book|characters|adventure|whimsical|playful|added-2023-08-12",false], -["Klassen","Jon","watercolor|kids-book|animals|nature|playful|whimsical|dream-like|added-2023-08-12",false], -["Larson","Abigail","dark|whimsical|kids-book|fantasy|eerie|added-2023-08-12",false], -["Lathrop","Dorothy","art-nouveau|kids-book|whimsical|romanticism|delicate|dream-like|added-2023-08-12",false], -["McGuire","Richard","comics|kids-book|whimsical|colorful|conceptual|added-2023-08-12",false], -["Mortensen","John Kenn","kids-book|dark|horror|monochromatic|eerie|added-2023-08-12",false], -["Outhwaite","Ida Rentoul","whimsical|kids-book|art-nouveau|fantasy|femininity|folklore|nature|watercolor|dream-like|added-2023-08-12",false], -["Polacco","Patricia","kids-book|nostalgia|illustration|family|animals|colorful|added-2023-08-12",false], -["Riddell","Chris","cartoon|kids-book|watercolor|whimsical|fantasy|illustration|creatures|added-2023-08-12",false], -["Seuss","Dr.","cartoon|whimsical|kids-book|colorful|playful|characters|added-2023-08-12",false], -["Shepard","E. 
H.","whimsical|kids-book|watercolor|illustration|nostalgia|nature|animals|added-2023-08-12",false], -["Steig","William","kids-book|watercolor|playful|colorful|illustration|added-2023-08-12",false], -["Wain","Louis","psychedelic|kids-book|animals|fantasy|whimsical|colorful|playful|creatures|added-2023-08-12",false], -["Wiesner","David","cartoon|kids-book|whimsical|playful|added-2023-08-12",false], -["Yokai","Kozo","kids-book|Japanese|folklore|magic|monsters|illustration|colorful|playful|added-2023-08-12",false], -["Topor","Roland","eerie|horror|surreal|animation|dark|satire|added-2023-08-12",false], -["Svankmajer","Jan","animation|sculpture|surreal|puppets|dark|horror|added-2023-08-12",false], -["Plympton","Bill","animation|sketching|whimsical|cartoon|surreal|added-2023-08-12",false], -["Hertzfeldt","Don","animation|drawing|whimsical|surreal|dark|added-2023-08-12",false], -["Reiniger","Lotte","animation|silhouettes|German|folklore|puppets|nostalgia|added-2023-08-12",false], -["Yuasa","Masaaki","animation|Japanese|eerie|surreal|colorful|fantasy|added-2023-08-12",false], -["Peterson","Cleon","flat-colors|characters|graphic-design|childhood|modern|geometric|added-2023-08-12",false], -["Jullien","Jean","high-contrast|cartoon|flat-colors|playful|graphic-design|minimalism|added-2023-08-12",false], -["McNaught","Jon","cartoon|flat-colors|high-contrast|illustration|playful|added-2023-08-12",false], -["Arntz","Gerd","graphic-design|high-contrast|flat-colors|monochromatic|minimalism|geometric|added-2023-08-12",false], -["Bors","Matt","comics|flat-colors|satire|graphic-design|social-commentary|added-2023-08-12",false], -["Brosh","Allie","comics|high-contrast|flat-colors|autobiographical|whimsical|added-2023-08-12",false], -["Catherall","Paul","flat-colors|architecture|graphic-design|urban-life|minimalism|geometric|added-2023-08-12",false], -["Correll","Gemma","cartoon|flat-colors|high-contrast|whimsical|graphic-design|playful|added-2023-08-12",false], -["Gottardo","Alessandro","flat-colors|high-contrast|illustration|surreal|dream-like|whimsical|playful|characters|added-2023-08-12",false], -["Hume","Gary","abstract|flat-colors|geometric|minimalism|vibrant|painting|modern|added-2023-08-12",false], -["Fairey","Shepard","high-contrast|graphic-design|flat-colors|politics|street-art|social-commentary|added-2023-08-12",false], -["Daeni","Pino","illustration|pulp|erotica|romanticism|nostalgia|figurative|added-2023-08-12",false], -["Hall","H. 
Tom","illustration|pulp|erotica|romanticism|nostalgia|figurative|added-2023-08-12",true], -["McGinnis","Robert","illustration|pulp|erotica|romanticism|figurative|dream-like|added-2023-08-12",false], -["Stinkfish","","graffiti|Colombian|street-art|portraits|colorful|surreal|vibrant|urban-life|added-2023-08-12",false], -["Steadman","Ralph","high-contrast|messy|cartoon|surreal|illustration|whimsical|dark|satire|added-2023-08-12",false], -] - -// first category must be 'important' and last must be 'other' or things won't work -// tag names cannot be 'image-item' or 'hidden' because well, this isn't coded that well, lol -var tagCategories = [ - ['important'], - ['mediums',"3D-rendering","animation","architecture","assemblage","body-art","book-illustration","bronze","calligraphy","caricature","cartoon","ceiling-painting","ceramics","collage","comics","digital","drawing","earthworks","enamel","engraving","etching","experiential","film","frescoes","glasswork","graffiti","graphic-design","graphic-novel","illuminated-manuscripts","illustration","immersive","metalwork","infinity-rooms","installation","interactive","jewelry","kinetic","land-art","landscape-architecture","light-art","lithography","manga-anime","mixed-media","montage","mosaic","multimedia","mural-painting","newspaper","oil-painting","painting","pastel","pen-and-ink","performance","photography","posters","printmaking","public-art","puppets","quilting","recycled-materials","sculpture","sketching","stained-glass","street-art","tapestry","textiles","typography","video-art","video-games","virtual-reality","wall-drawings","watercolor","woodblock"], - ['styles',"abstract","action-painting","afro-futurism","angular","anthropomorphism","atmospheric","blurry","bohemian","bold-colors","color-field","colorful","cute","cyberpunk","dark","delicate","drip-painting","eerie","elegant","ethereal","figurative","flat-colors","folk-art","fragmentation","futuristic","geometric","gestural","golden","gothic","grids","grungy","high-contrast","illusion","impasto","improvisation","industrial","kids-book","large-scale","long-exposure","low-contrast","opulent","Maximalism","melancholy","messy","miniature","monochromatic","muted-colors","mysterious","naturalist","neon","noir","observational","organic","ornate","pastel-colors","photorealism","pin-up","playful","polka-dots","precisionism","primary-colors","propaganda","psychedelic","pulp","Rococo","steampunk","symbolist","text-based","vibrant","whimsical"], - ['themes',"activism","adventure","advertising","allegory","anxiety","autobiographical","childhood","commercial-art","conceptual","consumerism","controversy","death","displacement","distortion","documentary","dream-like","dreams","dystopia","empowerment","environmentalism","exoticism","family","fantasy","femininity","feminism","fleeting-moments","folklore","friendship","futurism","homo-eroticism","horror","identity","kitsch","loneliness","luxury","magic","mathematics","metamorphosis","metaphysics","mysticism","nightlife","nostalgia","observational","plein-air","politics","punk","religion","satire","science-fiction","serenity","slice-of-life","social-commentary","solitude","spirituality","surreal","utopia"], - 
['subjects',"astronauts","alien-worlds","aliens","animals","ballet","barbarians","battle-scenes","BDSM","biological","botanical","cabaret","celebrity","characters","cityscapes","cloudscapes","clowns","contemporary-life","costumes","counter-culture","creatures","dancers","dinosaurs","domestic-scenes","dragons","emaciation","erotica","everyday-life","fairies","fashion","female-figures","figure-studies","flesh","flowers","furniture","gardens","genre-scenes","great-depression","history","holocaust","horses","immigrants","insects","interiors","kabuki-yakusha-e","labyrinths","landscapes","masks","modern-life","monsters","muscles","mythology","nature","nudes","outdoor-scenes","outer-space","plein-air","pools","pop-culture","portraits","robots-cyborgs","royalty","rural-life","seascapes","self-portraits","silhouettes","skies","Southwest","space-ships","still-life","suburbia","superheroes","technology","theater","tropics","underwater","urban-life","violence","water-lilies","waves","wilderness","wildlife"], - ['movements',"abstract-expressionism","art-deco","art-Nouveau","automatism","avant-garde","baroque","bauhaus","collaborative","cubism","cut-outs","dadaism","Dutch-golden-age","earthworks","expressionism","fauvism","figurativism","gutai","harlem-renaissance","impressionism","magic-realism","minimalism","neo-expressionism","neo-impressionism","orientalism","pointillism","pop-art","post-colonialism","post-impressionism","post-minimalism","primitivism","realism","romanticism","serial-art","shock-art","social-realism","spatialism","surrealism","tonalism","underground"], - ['periods',"ancient","Ancient-Egyptian","Ancient-Greek","contemporary","Edo-period","medieval","modern","post-colonialism","post-modern","post-war","pre-raphaelite","renaissance","ukiyo-e","Victorian"], - ['identities',"Aboriginal","African","African-American","Albanian","Algerian","American","Angolan","anonymous","Argentinean","Armenian","Asian","Australian","Austrian","Azerbaijani","Bahraini","Bangladeshi","Barbadian","Belarusian","Belgian","Bengali","Bosnian","Brazilian","British","Bulgarian","Cameroonian","Canadian","Catalan","Chilean","Chinese","Colombian","CostaRican","Croatian","Cuban","Cypriot","Czech","Dane","Dominican","Danish","Dutch","Ecuadorian","Egyptian","Emirati","Estonian","Ethiopian","European","Filipino","Finnish","Flemish","French","Georgian","German","Ghanaian","Greek","Guatemalan","Guyanese","Hungarian","Icelandic","Indian","Indonesian","Iranian","Iraqi","Irish","Islamic","Israeli","Italian","Jamaican","Japanese","Jewish","Kenyan","Latvian","Lebanese","LGBTQ","Libyan","Lithuanian","Luxembourger","Macedonian","Mexican","Moldovan","Mongol","Montenegrin","Moroccan","Namibian","Native-American","New-Zealander","Nigerian","Norwegian","Palestinian","Peruvian","Polish","Portuguese","PuertoRican","Qatari","Romanian","Russian","Saudi","Scottish","Serbian","Slovak","Slovenian","SouthAfrican","SouthKorean","Spanish","Sudanese","Swedish","Swiss","Syrian","Thai","Tunisian","Turkish","Ukrainian","Uruguayan","Venezuelan","Vietnamese","Yemeni"], - ['other'], -]; - diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/utils/se_layer.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/utils/se_layer.py deleted file mode 100644 index 083bd7d1ccee909c900c7aed2cc928bf14727f3e..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/utils/se_layer.py +++ /dev/null @@ -1,57 +0,0 @@ -import annotator.uniformer.mmcv as mmcv -import 
torch.nn as nn -from annotator.uniformer.mmcv.cnn import ConvModule - -from .make_divisible import make_divisible - - -class SELayer(nn.Module): - """Squeeze-and-Excitation Module. - - Args: - channels (int): The input (and output) channels of the SE layer. - ratio (int): Squeeze ratio in SELayer, the intermediate channel will be - ``int(channels/ratio)``. Default: 16. - conv_cfg (None or dict): Config dict for convolution layer. - Default: None, which means using conv2d. - act_cfg (dict or Sequence[dict]): Config dict for activation layer. - If act_cfg is a dict, two activation layers will be configured - by this dict. If act_cfg is a sequence of dicts, the first - activation layer will be configured by the first dict and the - second activation layer will be configured by the second dict. - Default: (dict(type='ReLU'), dict(type='HSigmoid', bias=3.0, - divisor=6.0)). - """ - - def __init__(self, - channels, - ratio=16, - conv_cfg=None, - act_cfg=(dict(type='ReLU'), - dict(type='HSigmoid', bias=3.0, divisor=6.0))): - super(SELayer, self).__init__() - if isinstance(act_cfg, dict): - act_cfg = (act_cfg, act_cfg) - assert len(act_cfg) == 2 - assert mmcv.is_tuple_of(act_cfg, dict) - self.global_avgpool = nn.AdaptiveAvgPool2d(1) - self.conv1 = ConvModule( - in_channels=channels, - out_channels=make_divisible(channels // ratio, 8), - kernel_size=1, - stride=1, - conv_cfg=conv_cfg, - act_cfg=act_cfg[0]) - self.conv2 = ConvModule( - in_channels=make_divisible(channels // ratio, 8), - out_channels=channels, - kernel_size=1, - stride=1, - conv_cfg=conv_cfg, - act_cfg=act_cfg[1]) - - def forward(self, x): - out = self.global_avgpool(x) - out = self.conv1(out) - out = self.conv2(out) - return x * out diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/models/builders.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/models/builders.py deleted file mode 100644 index 038bf99c3d0fbbb86005683d5a2a1b4edcac4298..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/models/builders.py +++ /dev/null @@ -1,252 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -All the functions to build the relevant models and modules -from the Hydra config. -""" - -import typing as tp - -import audiocraft -import omegaconf -import torch - -from .encodec import CompressionModel, EncodecModel -from .lm import LMModel -from ..modules.codebooks_patterns import ( - CodebooksPatternProvider, - DelayedPatternProvider, - MusicLMPattern, - ParallelPatternProvider, - UnrolledPatternProvider, - VALLEPattern, -) -from ..modules.conditioners import ( - BaseConditioner, - ChromaStemConditioner, - CLAPEmbeddingConditioner, - ConditionFuser, - ConditioningProvider, - LUTConditioner, - T5Conditioner, -) -from .unet import DiffusionUnet -from .. 
import quantization as qt -from ..utils.utils import dict_from_config -from ..modules.diffusion_schedule import MultiBandProcessor, SampleProcessor - - -def get_quantizer(quantizer: str, cfg: omegaconf.DictConfig, dimension: int) -> qt.BaseQuantizer: - klass = { - 'no_quant': qt.DummyQuantizer, - 'rvq': qt.ResidualVectorQuantizer - }[quantizer] - kwargs = dict_from_config(getattr(cfg, quantizer)) - if quantizer != 'no_quant': - kwargs['dimension'] = dimension - return klass(**kwargs) - - -def get_encodec_autoencoder(encoder_name: str, cfg: omegaconf.DictConfig): - if encoder_name == 'seanet': - kwargs = dict_from_config(getattr(cfg, 'seanet')) - encoder_override_kwargs = kwargs.pop('encoder') - decoder_override_kwargs = kwargs.pop('decoder') - encoder_kwargs = {**kwargs, **encoder_override_kwargs} - decoder_kwargs = {**kwargs, **decoder_override_kwargs} - encoder = audiocraft.modules.SEANetEncoder(**encoder_kwargs) - decoder = audiocraft.modules.SEANetDecoder(**decoder_kwargs) - return encoder, decoder - else: - raise KeyError(f"Unexpected compression model {cfg.compression_model}") - - -def get_compression_model(cfg: omegaconf.DictConfig) -> CompressionModel: - """Instantiate a compression model.""" - if cfg.compression_model == 'encodec': - kwargs = dict_from_config(getattr(cfg, 'encodec')) - encoder_name = kwargs.pop('autoencoder') - quantizer_name = kwargs.pop('quantizer') - encoder, decoder = get_encodec_autoencoder(encoder_name, cfg) - quantizer = get_quantizer(quantizer_name, cfg, encoder.dimension) - frame_rate = kwargs['sample_rate'] // encoder.hop_length - renormalize = kwargs.pop('renormalize', False) - # deprecated params - kwargs.pop('renorm', None) - return EncodecModel(encoder, decoder, quantizer, - frame_rate=frame_rate, renormalize=renormalize, **kwargs).to(cfg.device) - else: - raise KeyError(f"Unexpected compression model {cfg.compression_model}") - - -def get_lm_model(cfg: omegaconf.DictConfig) -> LMModel: - """Instantiate a transformer LM.""" - if cfg.lm_model == 'transformer_lm': - kwargs = dict_from_config(getattr(cfg, 'transformer_lm')) - n_q = kwargs['n_q'] - q_modeling = kwargs.pop('q_modeling', None) - codebooks_pattern_cfg = getattr(cfg, 'codebooks_pattern') - attribute_dropout = dict_from_config(getattr(cfg, 'attribute_dropout')) - cls_free_guidance = dict_from_config(getattr(cfg, 'classifier_free_guidance')) - cfg_prob, cfg_coef = cls_free_guidance['training_dropout'], cls_free_guidance['inference_coef'] - fuser = get_condition_fuser(cfg) - condition_provider = get_conditioner_provider(kwargs["dim"], cfg).to(cfg.device) - if len(fuser.fuse2cond['cross']) > 0: # enforce cross-att programmatically - kwargs['cross_attention'] = True - if codebooks_pattern_cfg.modeling is None: - assert q_modeling is not None, \ - "LM model should either have a codebook pattern defined or transformer_lm.q_modeling" - codebooks_pattern_cfg = omegaconf.OmegaConf.create( - {'modeling': q_modeling, 'delay': {'delays': list(range(n_q))}} - ) - pattern_provider = get_codebooks_pattern_provider(n_q, codebooks_pattern_cfg) - return LMModel( - pattern_provider=pattern_provider, - condition_provider=condition_provider, - fuser=fuser, - cfg_dropout=cfg_prob, - cfg_coef=cfg_coef, - attribute_dropout=attribute_dropout, - dtype=getattr(torch, cfg.dtype), - device=cfg.device, - **kwargs - ).to(cfg.device) - else: - raise KeyError(f"Unexpected LM model {cfg.lm_model}") - - -def get_conditioner_provider(output_dim: int, cfg: omegaconf.DictConfig) -> ConditioningProvider: - """Instantiate a 
conditioning model.""" - device = cfg.device - duration = cfg.dataset.segment_duration - cfg = getattr(cfg, 'conditioners') - dict_cfg = {} if cfg is None else dict_from_config(cfg) - conditioners: tp.Dict[str, BaseConditioner] = {} - condition_provider_args = dict_cfg.pop('args', {}) - condition_provider_args.pop('merge_text_conditions_p', None) - condition_provider_args.pop('drop_desc_p', None) - - for cond, cond_cfg in dict_cfg.items(): - model_type = cond_cfg['model'] - model_args = cond_cfg[model_type] - if model_type == 't5': - conditioners[str(cond)] = T5Conditioner(output_dim=output_dim, device=device, **model_args) - elif model_type == 'lut': - conditioners[str(cond)] = LUTConditioner(output_dim=output_dim, **model_args) - elif model_type == 'chroma_stem': - conditioners[str(cond)] = ChromaStemConditioner( - output_dim=output_dim, - duration=duration, - device=device, - **model_args - ) - elif model_type == 'clap': - conditioners[str(cond)] = CLAPEmbeddingConditioner( - output_dim=output_dim, - device=device, - **model_args - ) - else: - raise ValueError(f"Unrecognized conditioning model: {model_type}") - conditioner = ConditioningProvider(conditioners, device=device, **condition_provider_args) - return conditioner - - -def get_condition_fuser(cfg: omegaconf.DictConfig) -> ConditionFuser: - """Instantiate a condition fuser object.""" - fuser_cfg = getattr(cfg, 'fuser') - fuser_methods = ['sum', 'cross', 'prepend', 'input_interpolate'] - fuse2cond = {k: fuser_cfg[k] for k in fuser_methods} - kwargs = {k: v for k, v in fuser_cfg.items() if k not in fuser_methods} - fuser = ConditionFuser(fuse2cond=fuse2cond, **kwargs) - return fuser - - -def get_codebooks_pattern_provider(n_q: int, cfg: omegaconf.DictConfig) -> CodebooksPatternProvider: - """Instantiate a codebooks pattern provider object.""" - pattern_providers = { - 'parallel': ParallelPatternProvider, - 'delay': DelayedPatternProvider, - 'unroll': UnrolledPatternProvider, - 'valle': VALLEPattern, - 'musiclm': MusicLMPattern, - } - name = cfg.modeling - kwargs = dict_from_config(cfg.get(name)) if hasattr(cfg, name) else {} - klass = pattern_providers[name] - return klass(n_q, **kwargs) - - -def get_debug_compression_model(device='cpu', sample_rate: int = 32000): - """Instantiate a debug compression model to be used for unit tests.""" - assert sample_rate in [16000, 32000], "unsupported sample rate for debug compression model" - model_ratios = { - 16000: [10, 8, 8], # 25 Hz at 16kHz - 32000: [10, 8, 16] # 25 Hz at 32kHz - } - ratios: tp.List[int] = model_ratios[sample_rate] - frame_rate = 25 - seanet_kwargs: dict = { - 'n_filters': 4, - 'n_residual_layers': 1, - 'dimension': 32, - 'ratios': ratios, - } - print(seanet_kwargs) - encoder = audiocraft.modules.SEANetEncoder(**seanet_kwargs) - decoder = audiocraft.modules.SEANetDecoder(**seanet_kwargs) - quantizer = qt.ResidualVectorQuantizer(dimension=32, bins=400, n_q=4) - init_x = torch.randn(8, 32, 128) - quantizer(init_x, 1) # initialize kmeans etc. 
- compression_model = EncodecModel( - encoder, decoder, quantizer, - frame_rate=frame_rate, sample_rate=sample_rate, channels=1).to(device) - return compression_model.eval() - - -def get_diffusion_model(cfg: omegaconf.DictConfig): - # TODO Find a way to infer the channels from dset - channels = cfg.channels - num_steps = cfg.schedule.num_steps - return DiffusionUnet( - chin=channels, num_steps=num_steps, **cfg.diffusion_unet) - - -def get_processor(cfg, sample_rate: int = 24000): - sample_processor = SampleProcessor() - if cfg.use: - kw = dict(cfg) - kw.pop('use') - kw.pop('name') - if cfg.name == "multi_band_processor": - sample_processor = MultiBandProcessor(sample_rate=sample_rate, **kw) - return sample_processor - - -def get_debug_lm_model(device='cpu'): - """Instantiate a debug LM to be used for unit tests.""" - pattern = DelayedPatternProvider(n_q=4) - dim = 16 - providers = { - 'description': LUTConditioner(n_bins=128, dim=dim, output_dim=dim, tokenizer="whitespace"), - } - condition_provider = ConditioningProvider(providers) - fuser = ConditionFuser( - {'cross': ['description'], 'prepend': [], - 'sum': [], 'input_interpolate': []}) - lm = LMModel( - pattern, condition_provider, fuser, - n_q=4, card=400, dim=dim, num_heads=4, custom=True, num_layers=2, - cross_attention=True, causal=True) - return lm.to(device).eval() - - -def get_wrapped_compression_model( - compression_model: CompressionModel, - cfg: omegaconf.DictConfig) -> CompressionModel: - # more to come. - return compression_model diff --git a/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/scripts/extract_depth.py b/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/scripts/extract_depth.py deleted file mode 100644 index d6aa0d80c63a3e580fa28e0f2c7af4e9ae003b64..0000000000000000000000000000000000000000 --- a/spaces/Purple11/Grounded-Diffusion/src/taming-transformers/scripts/extract_depth.py +++ /dev/null @@ -1,112 +0,0 @@ -import os -import torch -import numpy as np -from tqdm import trange -from PIL import Image - - -def get_state(gpu): - import torch - midas = torch.hub.load("intel-isl/MiDaS", "MiDaS") - if gpu: - midas.cuda() - midas.eval() - - midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms") - transform = midas_transforms.default_transform - - state = {"model": midas, - "transform": transform} - return state - - -def depth_to_rgba(x): - assert x.dtype == np.float32 - assert len(x.shape) == 2 - y = x.copy() - y.dtype = np.uint8 - y = y.reshape(x.shape+(4,)) - return np.ascontiguousarray(y) - - -def rgba_to_depth(x): - assert x.dtype == np.uint8 - assert len(x.shape) == 3 and x.shape[2] == 4 - y = x.copy() - y.dtype = np.float32 - y = y.reshape(x.shape[:2]) - return np.ascontiguousarray(y) - - -def run(x, state): - model = state["model"] - transform = state["transform"] - hw = x.shape[:2] - with torch.no_grad(): - prediction = model(transform((x + 1.0) * 127.5).cuda()) - prediction = torch.nn.functional.interpolate( - prediction.unsqueeze(1), - size=hw, - mode="bicubic", - align_corners=False, - ).squeeze() - output = prediction.cpu().numpy() - return output - - -def get_filename(relpath, level=-2): - # save class folder structure and filename: - fn = relpath.split(os.sep)[level:] - folder = fn[-2] - file = fn[-1].split('.')[0] - return folder, file - - -def save_depth(dataset, path, debug=False): - os.makedirs(path) - N = len(dset) - if debug: - N = 10 - state = get_state(gpu=True) - for idx in trange(N, desc="Data"): - ex = dataset[idx] - image, relpath = ex["image"], ex["relpath"] - 
folder, filename = get_filename(relpath) - # prepare - folderabspath = os.path.join(path, folder) - os.makedirs(folderabspath, exist_ok=True) - savepath = os.path.join(folderabspath, filename) - # run model - xout = run(image, state) - I = depth_to_rgba(xout) - Image.fromarray(I).save("{}.png".format(savepath)) - - -if __name__ == "__main__": - from taming.data.imagenet import ImageNetTrain, ImageNetValidation - out = "data/imagenet_depth" - if not os.path.exists(out): - print("Please create a folder or symlink '{}' to extract depth data ".format(out) + - "(be prepared that the output size will be larger than ImageNet itself).") - exit(1) - - # go - dset = ImageNetValidation() - abspath = os.path.join(out, "val") - if os.path.exists(abspath): - print("{} exists - not doing anything.".format(abspath)) - else: - print("preparing {}".format(abspath)) - save_depth(dset, abspath) - print("done with validation split") - - dset = ImageNetTrain() - abspath = os.path.join(out, "train") - if os.path.exists(abspath): - print("{} exists - not doing anything.".format(abspath)) - else: - print("preparing {}".format(abspath)) - save_depth(dset, abspath) - print("done with train split") - - print("done done.") diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/req/req_set.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/req/req_set.py deleted file mode 100644 index ec7a6e07a25acfa978030c65ae7c1d8609163249..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/req/req_set.py +++ /dev/null @@ -1,82 +0,0 @@ -import logging -from collections import OrderedDict -from typing import Dict, List - -from pip._vendor.packaging.utils import canonicalize_name - -from pip._internal.req.req_install import InstallRequirement - -logger = logging.getLogger(__name__) - - -class RequirementSet: - def __init__(self, check_supported_wheels: bool = True) -> None: - """Create a RequirementSet.""" - - self.requirements: Dict[str, InstallRequirement] = OrderedDict() - self.check_supported_wheels = check_supported_wheels - - self.unnamed_requirements: List[InstallRequirement] = [] - - def __str__(self) -> str: - requirements = sorted( - (req for req in self.requirements.values() if not req.comes_from), - key=lambda req: canonicalize_name(req.name or ""), - ) - return " ".join(str(req.req) for req in requirements) - - def __repr__(self) -> str: - requirements = sorted( - self.requirements.values(), - key=lambda req: canonicalize_name(req.name or ""), - ) - - format_string = "<{classname} object; {count} requirement(s): {reqs}>" - return format_string.format( - classname=self.__class__.__name__, - count=len(requirements), - reqs=", ".join(str(req.req) for req in requirements), - ) - - def add_unnamed_requirement(self, install_req: InstallRequirement) -> None: - assert not install_req.name - self.unnamed_requirements.append(install_req) - - def add_named_requirement(self, install_req: InstallRequirement) -> None: - assert install_req.name - - project_name = canonicalize_name(install_req.name) - self.requirements[project_name] = install_req - - def has_requirement(self, name: str) -> bool: - project_name = canonicalize_name(name) - - return ( - project_name in self.requirements - and not self.requirements[project_name].constraint - ) - - def get_requirement(self, name: str) -> InstallRequirement: - project_name = canonicalize_name(name) - - if project_name in self.requirements: - return 
self.requirements[project_name] - - raise KeyError(f"No project with the name {name!r}") - - @property - def all_requirements(self) -> List[InstallRequirement]: - return self.unnamed_requirements + list(self.requirements.values()) - - @property - def requirements_to_install(self) -> List[InstallRequirement]: - """Return the list of requirements that need to be installed. - - TODO remove this property together with the legacy resolver, since the new - resolver only returns requirements that need to be installed. - """ - return [ - install_req - for install_req in self.all_requirements - if not install_req.constraint and not install_req.satisfied_by - ] diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/packaging/__init__.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/packaging/__init__.py deleted file mode 100644 index 3c50c5dcfeeda2efed282200a5c5cc8c5f7542f7..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/packaging/__init__.py +++ /dev/null @@ -1,25 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. - -from .__about__ import ( - __author__, - __copyright__, - __email__, - __license__, - __summary__, - __title__, - __uri__, - __version__, -) - -__all__ = [ - "__title__", - "__summary__", - "__uri__", - "__version__", - "__author__", - "__email__", - "__license__", - "__copyright__", -] diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/command/config.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/command/config.py deleted file mode 100644 index 4492c89660c202acf882375258dffafff00a99ba..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/command/config.py +++ /dev/null @@ -1,377 +0,0 @@ -"""distutils.command.config - -Implements the Distutils 'config' command, a (mostly) empty command class -that exists mainly to be sub-classed by specific module distributions and -applications. The idea is that while every "config" command is different, -at least they're all named the same, and users always see "config" in the -list of standard commands. Also, this is a good place to put common -configure-like tasks: "try to compile this C code", or "figure out where -this header file lives". -""" - -import os -import re - -from distutils.core import Command -from distutils.errors import DistutilsExecError -from distutils.sysconfig import customize_compiler -from distutils import log - -LANG_EXT = {"c": ".c", "c++": ".cxx"} - - -class config(Command): - - description = "prepare to build" - - user_options = [ - ('compiler=', None, "specify the compiler type"), - ('cc=', None, "specify the compiler executable"), - ('include-dirs=', 'I', "list of directories to search for header files"), - ('define=', 'D', "C preprocessor macros to define"), - ('undef=', 'U', "C preprocessor macros to undefine"), - ('libraries=', 'l', "external C libraries to link with"), - ('library-dirs=', 'L', "directories to search for external C libraries"), - ('noisy', None, "show every action (compile, link, run, ...) 
taken"), - ( - 'dump-source', - None, - "dump generated source files before attempting to compile them", - ), - ] - - # The three standard command methods: since the "config" command - # does nothing by default, these are empty. - - def initialize_options(self): - self.compiler = None - self.cc = None - self.include_dirs = None - self.libraries = None - self.library_dirs = None - - # maximal output for now - self.noisy = 1 - self.dump_source = 1 - - # list of temporary files generated along-the-way that we have - # to clean at some point - self.temp_files = [] - - def finalize_options(self): - if self.include_dirs is None: - self.include_dirs = self.distribution.include_dirs or [] - elif isinstance(self.include_dirs, str): - self.include_dirs = self.include_dirs.split(os.pathsep) - - if self.libraries is None: - self.libraries = [] - elif isinstance(self.libraries, str): - self.libraries = [self.libraries] - - if self.library_dirs is None: - self.library_dirs = [] - elif isinstance(self.library_dirs, str): - self.library_dirs = self.library_dirs.split(os.pathsep) - - def run(self): - pass - - # Utility methods for actual "config" commands. The interfaces are - # loosely based on Autoconf macros of similar names. Sub-classes - # may use these freely. - - def _check_compiler(self): - """Check that 'self.compiler' really is a CCompiler object; - if not, make it one. - """ - # We do this late, and only on-demand, because this is an expensive - # import. - from distutils.ccompiler import CCompiler, new_compiler - - if not isinstance(self.compiler, CCompiler): - self.compiler = new_compiler( - compiler=self.compiler, dry_run=self.dry_run, force=1 - ) - customize_compiler(self.compiler) - if self.include_dirs: - self.compiler.set_include_dirs(self.include_dirs) - if self.libraries: - self.compiler.set_libraries(self.libraries) - if self.library_dirs: - self.compiler.set_library_dirs(self.library_dirs) - - def _gen_temp_sourcefile(self, body, headers, lang): - filename = "_configtest" + LANG_EXT[lang] - with open(filename, "w") as file: - if headers: - for header in headers: - file.write("#include <%s>\n" % header) - file.write("\n") - file.write(body) - if body[-1] != "\n": - file.write("\n") - return filename - - def _preprocess(self, body, headers, include_dirs, lang): - src = self._gen_temp_sourcefile(body, headers, lang) - out = "_configtest.i" - self.temp_files.extend([src, out]) - self.compiler.preprocess(src, out, include_dirs=include_dirs) - return (src, out) - - def _compile(self, body, headers, include_dirs, lang): - src = self._gen_temp_sourcefile(body, headers, lang) - if self.dump_source: - dump_file(src, "compiling '%s':" % src) - (obj,) = self.compiler.object_filenames([src]) - self.temp_files.extend([src, obj]) - self.compiler.compile([src], include_dirs=include_dirs) - return (src, obj) - - def _link(self, body, headers, include_dirs, libraries, library_dirs, lang): - (src, obj) = self._compile(body, headers, include_dirs, lang) - prog = os.path.splitext(os.path.basename(src))[0] - self.compiler.link_executable( - [obj], - prog, - libraries=libraries, - library_dirs=library_dirs, - target_lang=lang, - ) - - if self.compiler.exe_extension is not None: - prog = prog + self.compiler.exe_extension - self.temp_files.append(prog) - - return (src, obj, prog) - - def _clean(self, *filenames): - if not filenames: - filenames = self.temp_files - self.temp_files = [] - log.info("removing: %s", ' '.join(filenames)) - for filename in filenames: - try: - os.remove(filename) - except OSError: - 
pass - - # XXX these ignore the dry-run flag: what to do, what to do? even if - # you want a dry-run build, you still need some sort of configuration - # info. My inclination is to make it up to the real config command to - # consult 'dry_run', and assume a default (minimal) configuration if - # true. The problem with trying to do it here is that you'd have to - # return either true or false from all the 'try' methods, neither of - # which is correct. - - # XXX need access to the header search path and maybe default macros. - - def try_cpp(self, body=None, headers=None, include_dirs=None, lang="c"): - """Construct a source file from 'body' (a string containing lines - of C/C++ code) and 'headers' (a list of header files to include) - and run it through the preprocessor. Return true if the - preprocessor succeeded, false if there were any errors. - ('body' probably isn't of much use, but what the heck.) - """ - from distutils.ccompiler import CompileError - - self._check_compiler() - ok = True - try: - self._preprocess(body, headers, include_dirs, lang) - except CompileError: - ok = False - - self._clean() - return ok - - def search_cpp(self, pattern, body=None, headers=None, include_dirs=None, lang="c"): - """Construct a source file (just like 'try_cpp()'), run it through - the preprocessor, and return true if any line of the output matches - 'pattern'. 'pattern' should either be a compiled regex object or a - string containing a regex. If both 'body' and 'headers' are None, - preprocesses an empty file -- which can be useful to determine the - symbols the preprocessor and compiler set by default. - """ - self._check_compiler() - src, out = self._preprocess(body, headers, include_dirs, lang) - - if isinstance(pattern, str): - pattern = re.compile(pattern) - - with open(out) as file: - match = False - while True: - line = file.readline() - if line == '': - break - if pattern.search(line): - match = True - break - - self._clean() - return match - - def try_compile(self, body, headers=None, include_dirs=None, lang="c"): - """Try to compile a source file built from 'body' and 'headers'. - Return true on success, false otherwise. - """ - from distutils.ccompiler import CompileError - - self._check_compiler() - try: - self._compile(body, headers, include_dirs, lang) - ok = True - except CompileError: - ok = False - - log.info(ok and "success!" or "failure.") - self._clean() - return ok - - def try_link( - self, - body, - headers=None, - include_dirs=None, - libraries=None, - library_dirs=None, - lang="c", - ): - """Try to compile and link a source file, built from 'body' and - 'headers', to executable form. Return true on success, false - otherwise. - """ - from distutils.ccompiler import CompileError, LinkError - - self._check_compiler() - try: - self._link(body, headers, include_dirs, libraries, library_dirs, lang) - ok = True - except (CompileError, LinkError): - ok = False - - log.info(ok and "success!" or "failure.") - self._clean() - return ok - - def try_run( - self, - body, - headers=None, - include_dirs=None, - libraries=None, - library_dirs=None, - lang="c", - ): - """Try to compile, link to an executable, and run a program - built from 'body' and 'headers'. Return true on success, false - otherwise. 
- """ - from distutils.ccompiler import CompileError, LinkError - - self._check_compiler() - try: - src, obj, exe = self._link( - body, headers, include_dirs, libraries, library_dirs, lang - ) - self.spawn([exe]) - ok = True - except (CompileError, LinkError, DistutilsExecError): - ok = False - - log.info(ok and "success!" or "failure.") - self._clean() - return ok - - # -- High-level methods -------------------------------------------- - # (these are the ones that are actually likely to be useful - # when implementing a real-world config command!) - - def check_func( - self, - func, - headers=None, - include_dirs=None, - libraries=None, - library_dirs=None, - decl=0, - call=0, - ): - """Determine if function 'func' is available by constructing a - source file that refers to 'func', and compiles and links it. - If everything succeeds, returns true; otherwise returns false. - - The constructed source file starts out by including the header - files listed in 'headers'. If 'decl' is true, it then declares - 'func' (as "int func()"); you probably shouldn't supply 'headers' - and set 'decl' true in the same call, or you might get errors about - a conflicting declarations for 'func'. Finally, the constructed - 'main()' function either references 'func' or (if 'call' is true) - calls it. 'libraries' and 'library_dirs' are used when - linking. - """ - self._check_compiler() - body = [] - if decl: - body.append("int %s ();" % func) - body.append("int main () {") - if call: - body.append(" %s();" % func) - else: - body.append(" %s;" % func) - body.append("}") - body = "\n".join(body) + "\n" - - return self.try_link(body, headers, include_dirs, libraries, library_dirs) - - def check_lib( - self, - library, - library_dirs=None, - headers=None, - include_dirs=None, - other_libraries=[], - ): - """Determine if 'library' is available to be linked against, - without actually checking that any particular symbols are provided - by it. 'headers' will be used in constructing the source file to - be compiled, but the only effect of this is to check if all the - header files listed are available. Any libraries listed in - 'other_libraries' will be included in the link, in case 'library' - has symbols that depend on other libraries. - """ - self._check_compiler() - return self.try_link( - "int main (void) { }", - headers, - include_dirs, - [library] + other_libraries, - library_dirs, - ) - - def check_header(self, header, include_dirs=None, library_dirs=None, lang="c"): - """Determine if the system header file named by 'header_file' - exists and can be found by the preprocessor; return true if so, - false otherwise. - """ - return self.try_cpp( - body="/* No body */", headers=[header], include_dirs=include_dirs - ) - - -def dump_file(filename, head=None): - """Dumps a file content into log.info. - - If head is not None, will be dumped before the file content. 
- """ - if head is None: - log.info('%s', filename) - else: - log.info(head) - file = open(filename) - try: - log.info(file.read()) - finally: - file.close() diff --git a/spaces/Realcat/image-matching-webui/hloc/pipelines/Aachen_v1_1/__init__.py b/spaces/Realcat/image-matching-webui/hloc/pipelines/Aachen_v1_1/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/src/lightning/lightning_aspanformer.py b/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/src/lightning/lightning_aspanformer.py deleted file mode 100644 index 9b34b7b7485d4419390614e3fe0174ccc53ac7a9..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/ASpanFormer/src/lightning/lightning_aspanformer.py +++ /dev/null @@ -1,374 +0,0 @@ -from collections import defaultdict -import pprint -from loguru import logger -from pathlib import Path - -import torch -import numpy as np -import pytorch_lightning as pl -from matplotlib import pyplot as plt - -from src.ASpanFormer.aspanformer import ASpanFormer -from src.ASpanFormer.utils.supervision import ( - compute_supervision_coarse, - compute_supervision_fine, -) -from src.losses.aspan_loss import ASpanLoss -from src.optimizers import build_optimizer, build_scheduler -from src.utils.metrics import ( - compute_symmetrical_epipolar_errors, - compute_symmetrical_epipolar_errors_offset_bidirectional, - compute_pose_errors, - aggregate_metrics, -) -from src.utils.plotting import make_matching_figures, make_matching_figures_offset -from src.utils.comm import gather, all_gather -from src.utils.misc import lower_config, flattenList -from src.utils.profiler import PassThroughProfiler - - -class PL_ASpanFormer(pl.LightningModule): - def __init__(self, config, pretrained_ckpt=None, profiler=None, dump_dir=None): - """ - TODO: - - use the new version of PL logging API. 
- """ - super().__init__() - # Misc - self.config = config # full config - _config = lower_config(self.config) - self.loftr_cfg = lower_config(_config["aspan"]) - self.profiler = profiler or PassThroughProfiler() - self.n_vals_plot = max( - config.TRAINER.N_VAL_PAIRS_TO_PLOT // config.TRAINER.WORLD_SIZE, 1 - ) - - # Matcher: LoFTR - self.matcher = ASpanFormer(config=_config["aspan"]) - self.loss = ASpanLoss(_config) - - # Pretrained weights - print(pretrained_ckpt) - if pretrained_ckpt: - print("load") - state_dict = torch.load(pretrained_ckpt, map_location="cpu")["state_dict"] - msg = self.matcher.load_state_dict(state_dict, strict=False) - print(msg) - logger.info(f"Load '{pretrained_ckpt}' as pretrained checkpoint") - - # Testing - self.dump_dir = dump_dir - - def configure_optimizers(self): - # FIXME: The scheduler did not work properly when `--resume_from_checkpoint` - optimizer = build_optimizer(self, self.config) - scheduler = build_scheduler(self.config, optimizer) - return [optimizer], [scheduler] - - def optimizer_step( - self, - epoch, - batch_idx, - optimizer, - optimizer_idx, - optimizer_closure, - on_tpu, - using_native_amp, - using_lbfgs, - ): - # learning rate warm up - warmup_step = self.config.TRAINER.WARMUP_STEP - if self.trainer.global_step < warmup_step: - if self.config.TRAINER.WARMUP_TYPE == "linear": - base_lr = self.config.TRAINER.WARMUP_RATIO * self.config.TRAINER.TRUE_LR - lr = base_lr + ( - self.trainer.global_step / self.config.TRAINER.WARMUP_STEP - ) * abs(self.config.TRAINER.TRUE_LR - base_lr) - for pg in optimizer.param_groups: - pg["lr"] = lr - elif self.config.TRAINER.WARMUP_TYPE == "constant": - pass - else: - raise ValueError( - f"Unknown lr warm-up strategy: {self.config.TRAINER.WARMUP_TYPE}" - ) - - # update params - optimizer.step(closure=optimizer_closure) - optimizer.zero_grad() - - def _trainval_inference(self, batch): - with self.profiler.profile("Compute coarse supervision"): - compute_supervision_coarse(batch, self.config) - - with self.profiler.profile("LoFTR"): - self.matcher(batch) - - with self.profiler.profile("Compute fine supervision"): - compute_supervision_fine(batch, self.config) - - with self.profiler.profile("Compute losses"): - self.loss(batch) - - def _compute_metrics(self, batch): - with self.profiler.profile("Copmute metrics"): - compute_symmetrical_epipolar_errors( - batch - ) # compute epi_errs for each match - compute_symmetrical_epipolar_errors_offset_bidirectional( - batch - ) # compute epi_errs for offset match - compute_pose_errors( - batch, self.config - ) # compute R_errs, t_errs, pose_errs for each pair - - rel_pair_names = list(zip(*batch["pair_names"])) - bs = batch["image0"].size(0) - metrics = { - # to filter duplicate pairs caused by DistributedSampler - "identifiers": ["#".join(rel_pair_names[b]) for b in range(bs)], - "epi_errs": [ - batch["epi_errs"][batch["m_bids"] == b].cpu().numpy() - for b in range(bs) - ], - "epi_errs_offset": [ - batch["epi_errs_offset_left"][batch["offset_bids_left"] == b] - .cpu() - .numpy() - for b in range(bs) - ], # only consider left side - "R_errs": batch["R_errs"], - "t_errs": batch["t_errs"], - "inliers": batch["inliers"], - } - ret_dict = {"metrics": metrics} - return ret_dict, rel_pair_names - - def training_step(self, batch, batch_idx): - self._trainval_inference(batch) - - # logging - if ( - self.trainer.global_rank == 0 - and self.global_step % self.trainer.log_every_n_steps == 0 - ): - # scalars - for k, v in batch["loss_scalars"].items(): - if not k.startswith("loss_flow") 
and not k.startswith("conf_"): - self.logger.experiment.add_scalar(f"train/{k}", v, self.global_step) - - # log offset_loss and conf for each layer and level - layer_num = self.loftr_cfg["coarse"]["layer_num"] - for layer_index in range(layer_num): - log_title = "layer_" + str(layer_index) - self.logger.experiment.add_scalar( - log_title + "/offset_loss", - batch["loss_scalars"]["loss_flow_" + str(layer_index)], - self.global_step, - ) - self.logger.experiment.add_scalar( - log_title + "/conf_", - batch["loss_scalars"]["conf_" + str(layer_index)], - self.global_step, - ) - - # net-params - if self.config.ASPAN.MATCH_COARSE.MATCH_TYPE == "sinkhorn": - self.logger.experiment.add_scalar( - f"skh_bin_score", - self.matcher.coarse_matching.bin_score.clone().detach().cpu().data, - self.global_step, - ) - - # figures - if self.config.TRAINER.ENABLE_PLOTTING: - compute_symmetrical_epipolar_errors( - batch - ) # compute epi_errs for each match - figures = make_matching_figures( - batch, self.config, self.config.TRAINER.PLOT_MODE - ) - for k, v in figures.items(): - self.logger.experiment.add_figure( - f"train_match/{k}", v, self.global_step - ) - - # plot offset - if self.global_step % 200 == 0: - compute_symmetrical_epipolar_errors_offset_bidirectional(batch) - figures_left = make_matching_figures_offset( - batch, self.config, self.config.TRAINER.PLOT_MODE, side="_left" - ) - figures_right = make_matching_figures_offset( - batch, self.config, self.config.TRAINER.PLOT_MODE, side="_right" - ) - for k, v in figures_left.items(): - self.logger.experiment.add_figure( - f"train_offset/{k}" + "_left", v, self.global_step - ) - figures = make_matching_figures_offset( - batch, self.config, self.config.TRAINER.PLOT_MODE, side="_right" - ) - for k, v in figures_right.items(): - self.logger.experiment.add_figure( - f"train_offset/{k}" + "_right", v, self.global_step - ) - - return {"loss": batch["loss"]} - - def training_epoch_end(self, outputs): - avg_loss = torch.stack([x["loss"] for x in outputs]).mean() - if self.trainer.global_rank == 0: - self.logger.experiment.add_scalar( - "train/avg_loss_on_epoch", avg_loss, global_step=self.current_epoch - ) - - def validation_step(self, batch, batch_idx): - self._trainval_inference(batch) - - ret_dict, _ = self._compute_metrics( - batch - ) # this func also compute the epi_errors - - val_plot_interval = max(self.trainer.num_val_batches[0] // self.n_vals_plot, 1) - figures = {self.config.TRAINER.PLOT_MODE: []} - figures_offset = {self.config.TRAINER.PLOT_MODE: []} - if batch_idx % val_plot_interval == 0: - figures = make_matching_figures( - batch, self.config, mode=self.config.TRAINER.PLOT_MODE - ) - figures_offset = make_matching_figures_offset( - batch, self.config, self.config.TRAINER.PLOT_MODE, "_left" - ) - return { - **ret_dict, - "loss_scalars": batch["loss_scalars"], - "figures": figures, - "figures_offset_left": figures_offset, - } - - def validation_epoch_end(self, outputs): - # handle multiple validation sets - multi_outputs = ( - [outputs] if not isinstance(outputs[0], (list, tuple)) else outputs - ) - multi_val_metrics = defaultdict(list) - - for valset_idx, outputs in enumerate(multi_outputs): - # since pl performs sanity_check at the very begining of the training - cur_epoch = self.trainer.current_epoch - if ( - not self.trainer.resume_from_checkpoint - and self.trainer.running_sanity_check - ): - cur_epoch = -1 - - # 1. 
loss_scalars: dict of list, on cpu - _loss_scalars = [o["loss_scalars"] for o in outputs] - loss_scalars = { - k: flattenList(all_gather([_ls[k] for _ls in _loss_scalars])) - for k in _loss_scalars[0] - } - - # 2. val metrics: dict of list, numpy - _metrics = [o["metrics"] for o in outputs] - metrics = { - k: flattenList(all_gather(flattenList([_me[k] for _me in _metrics]))) - for k in _metrics[0] - } - # NOTE: all ranks need to `aggregate_merics`, but only log at rank-0 - val_metrics_4tb = aggregate_metrics( - metrics, self.config.TRAINER.EPI_ERR_THR - ) - for thr in [5, 10, 20]: - multi_val_metrics[f"auc@{thr}"].append(val_metrics_4tb[f"auc@{thr}"]) - - # 3. figures - _figures = [o["figures"] for o in outputs] - figures = { - k: flattenList(gather(flattenList([_me[k] for _me in _figures]))) - for k in _figures[0] - } - - # tensorboard records only on rank 0 - if self.trainer.global_rank == 0: - for k, v in loss_scalars.items(): - mean_v = torch.stack(v).mean() - self.logger.experiment.add_scalar( - f"val_{valset_idx}/avg_{k}", mean_v, global_step=cur_epoch - ) - - for k, v in val_metrics_4tb.items(): - self.logger.experiment.add_scalar( - f"metrics_{valset_idx}/{k}", v, global_step=cur_epoch - ) - - for k, v in figures.items(): - if self.trainer.global_rank == 0: - for plot_idx, fig in enumerate(v): - self.logger.experiment.add_figure( - f"val_match_{valset_idx}/{k}/pair-{plot_idx}", - fig, - cur_epoch, - close=True, - ) - plt.close("all") - - for thr in [5, 10, 20]: - # log on all ranks for ModelCheckpoint callback to work properly - self.log( - f"auc@{thr}", torch.tensor(np.mean(multi_val_metrics[f"auc@{thr}"])) - ) # ckpt monitors on this - - def test_step(self, batch, batch_idx): - with self.profiler.profile("LoFTR"): - self.matcher(batch) - - ret_dict, rel_pair_names = self._compute_metrics(batch) - - with self.profiler.profile("dump_results"): - if self.dump_dir is not None: - # dump results for further analysis - keys_to_save = {"mkpts0_f", "mkpts1_f", "mconf", "epi_errs"} - pair_names = list(zip(*batch["pair_names"])) - bs = batch["image0"].shape[0] - dumps = [] - for b_id in range(bs): - item = {} - mask = batch["m_bids"] == b_id - item["pair_names"] = pair_names[b_id] - item["identifier"] = "#".join(rel_pair_names[b_id]) - for key in keys_to_save: - item[key] = batch[key][mask].cpu().numpy() - for key in ["R_errs", "t_errs", "inliers"]: - item[key] = batch[key][b_id] - dumps.append(item) - ret_dict["dumps"] = dumps - - return ret_dict - - def test_epoch_end(self, outputs): - # metrics: dict of list, numpy - _metrics = [o["metrics"] for o in outputs] - metrics = { - k: flattenList(gather(flattenList([_me[k] for _me in _metrics]))) - for k in _metrics[0] - } - - # [{key: [{...}, *#bs]}, *#batch] - if self.dump_dir is not None: - Path(self.dump_dir).mkdir(parents=True, exist_ok=True) - _dumps = flattenList([o["dumps"] for o in outputs]) # [{...}, #bs*#batch] - dumps = flattenList(gather(_dumps)) # [{...}, #proc*#bs*#batch] - logger.info( - f"Prediction and evaluation results will be saved to: {self.dump_dir}" - ) - - if self.trainer.global_rank == 0: - print(self.profiler.summary()) - val_metrics_4tb = aggregate_metrics( - metrics, self.config.TRAINER.EPI_ERR_THR - ) - logger.info("\n" + pprint.pformat(val_metrics_4tb)) - if self.dump_dir is not None: - np.save(Path(self.dump_dir) / "LoFTR_pred_eval", dumps) diff --git a/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/config/project_config.py 
b/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/config/project_config.py deleted file mode 100644 index 6846b4451e038b1c517043ea6db08f3029b79852..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/config/project_config.py +++ /dev/null @@ -1,46 +0,0 @@ -""" -Project configurations. -""" -import os - - -class Config(object): - """Datasets and experiments folders for the whole project.""" - - ##################### - ## Dataset setting ## - ##################### - DATASET_ROOT = os.getenv( - "DATASET_ROOT", "./datasets/" - ) # TODO: path to your datasets folder - if not os.path.exists(DATASET_ROOT): - os.makedirs(DATASET_ROOT) - - # Synthetic shape dataset - synthetic_dataroot = os.path.join(DATASET_ROOT, "synthetic_shapes") - synthetic_cache_path = os.path.join(DATASET_ROOT, "synthetic_shapes") - if not os.path.exists(synthetic_dataroot): - os.makedirs(synthetic_dataroot) - - # Exported predictions dataset - export_dataroot = os.path.join(DATASET_ROOT, "export_datasets") - export_cache_path = os.path.join(DATASET_ROOT, "export_datasets") - if not os.path.exists(export_dataroot): - os.makedirs(export_dataroot) - - # Wireframe dataset - wireframe_dataroot = os.path.join(DATASET_ROOT, "wireframe") - wireframe_cache_path = os.path.join(DATASET_ROOT, "wireframe") - - # Holicity dataset - holicity_dataroot = os.path.join(DATASET_ROOT, "Holicity") - holicity_cache_path = os.path.join(DATASET_ROOT, "Holicity") - - ######################## - ## Experiment Setting ## - ######################## - EXP_PATH = os.getenv( - "EXP_PATH", "./experiments/" - ) # TODO: path to your experiments folder - if not os.path.exists(EXP_PATH): - os.makedirs(EXP_PATH) diff --git a/spaces/RedValis/Music-Helix/model.py b/spaces/RedValis/Music-Helix/model.py deleted file mode 100644 index c3ab5205e8f2f5ed8c5baeff4cf81ecd35e6d628..0000000000000000000000000000000000000000 --- a/spaces/RedValis/Music-Helix/model.py +++ /dev/null @@ -1,520 +0,0 @@ -import pandas as pd -import spotipy -from spotipy.oauth2 import SpotifyOAuth, SpotifyClientCredentials -import yaml -import re -from sklearn.feature_extraction.text import TfidfVectorizer -from sklearn.metrics.pairwise import cosine_similarity -from sklearn.preprocessing import MinMaxScaler -import pickle -import streamlit as st -import os - -def playlist_model(url, model, max_gen=3, same_art=5): - log = [] - Fresult = [] - try: - log.append('Start logging') - uri = url.split('/')[-1].split('?')[0] - try: - log.append('spotify local method') - stream = open("Spotify/Spotify.yaml") - spotify_details = yaml.safe_load(stream) - auth_manager = SpotifyClientCredentials(client_id=spotify_details['Client_id'], client_secret=spotify_details['client_secret']) - except: - log.append('spotify .streamlit method') - try: - Client_id=st.secrets["Client_ID"] - client_secret=st.secrets["Client_secret"] - auth_manager = SpotifyClientCredentials(client_id=Client_id, client_secret=client_secret) - except: - log.append('spotify hug method') - Client_id=os.environ['Client_ID'] - client_secret=os.environ['Client_secret'] - auth_manager = SpotifyClientCredentials(client_id=Client_id, client_secret=client_secret) - sp = spotipy.client.Spotify(auth_manager=auth_manager) - - if model == 'Spotify Model': - def get_IDs(user, playlist_id): - try: - log.append('start playlist extraction') - track_ids = [] - playlist = sp.user_playlist(user, playlist_id) - for item in playlist['tracks']['items']: - track = item['track'] - 
track_ids.append(track['id']) - return track_ids - except Exception as e: - log.append('Failed to load the playlist') - log.append(e) - - track_ids = get_IDs('Ruby', uri) - track_ids_uni = list(set(track_ids)) - log.append('Starting Spotify Model') - Spotifyresult = pd.DataFrame() - for i in range(len(track_ids_uni)-5): - if len(Spotifyresult) >= 50: - break - try: - ff = sp.recommendations(seed_tracks=list(track_ids_uni[i:i+5]), limit=5) - except Exception as e: - log.append(e) - continue - for z in range(5): - result = pd.DataFrame([z+(5*i)+1]) - result['uri'] = ff['tracks'][z]['id'] - Spotifyresult = pd.concat([Spotifyresult, result], axis=0) - Spotifyresult.drop_duplicates(subset=['uri'], inplace=True,keep='first') - Fresult = Spotifyresult.uri[:50] - - log.append('Model run successfully') - return Fresult, log - - lendf=len(pd.read_csv('Data/streamlit.csv',usecols=['track_uri'])) - dtypes = {'track_uri': 'object', 'artist_uri': 'object', 'album_uri': 'object', 'danceability': 'float16', 'energy': 'float16', 'key': 'float16', - 'loudness': 'float16', 'mode': 'float16', 'speechiness': 'float16', 'acousticness': 'float16', 'instrumentalness': 'float16', - 'liveness': 'float16', 'valence': 'float16', 'tempo': 'float16', 'duration_ms': 'float32', 'time_signature': 'float16', - 'Track_release_date': 'int8', 'Track_pop': 'int8', 'Artist_pop': 'int8', 'Artist_genres': 'object'} - col_name= ['track_uri', 'artist_uri', 'album_uri', 'danceability', 'energy', 'key', - 'loudness', 'mode', 'speechiness', 'acousticness', 'instrumentalness', - 'liveness', 'valence', 'tempo', 'duration_ms', 'time_signature', - 'Track_release_date', 'Track_pop', 'Artist_pop', 'Artist_genres'] - - try: - def get_IDs(user, playlist_id): - log.append('start playlist extraction') - track_ids = [] - artist_id = [] - playlist = sp.user_playlist(user, playlist_id) - for item in playlist['tracks']['items']: - track = item['track'] - track_ids.append(track['id']) - artist = item['track']['artists'] - artist_id.append(artist[0]['id']) - return track_ids, artist_id - except Exception as e: - log.append('Failed to load the playlist') - log.append(e) - - track_ids, artist_id = get_IDs('Ruby', uri) - log.append("Number of Track : {}".format(len(track_ids))) - - artist_id_uni = list(set(artist_id)) - track_ids_uni = list(set(track_ids)) - log.append("Number of unique Artists : {}".format(len(artist_id_uni))) - log.append("Number of unique Tracks : {}".format(len(track_ids_uni))) - - def extract(track_ids_uni, artist_id_uni): - err = [] - err.append('Start audio features extraction') - audio_features = pd.DataFrame() - for i in range(0, len(track_ids_uni), 25): - try: - track_feature = sp.audio_features(track_ids_uni[i:i+25]) - track_df = pd.DataFrame(track_feature) - audio_features = pd.concat([audio_features, track_df], axis=0) - except Exception as e: - err.append(e) - continue - err.append('Start track features extraction') - track_ = pd.DataFrame() - for i in range(0, len(track_ids_uni), 25): - try: - track_features = sp.tracks(track_ids_uni[i:i+25]) - for x in range(25): - track_pop = pd.DataFrame([track_ids_uni[i+x]], columns=['Track_uri']) - track_pop['Track_release_date'] = track_features['tracks'][x]['album']['release_date'] - track_pop['Track_pop'] = track_features['tracks'][x]["popularity"] - track_pop['Artist_uri'] = track_features['tracks'][x]['artists'][0]['id'] - track_pop['Album_uri'] = track_features['tracks'][x]['album']['id'] - track_ = pd.concat([track_, track_pop], axis=0) - except Exception as e: - err.append(e) 
- continue - err.append('Start artist features extraction') - artist_ = pd.DataFrame() - for i in range(0, len(artist_id_uni), 25): - try: - artist_features = sp.artists(artist_id_uni[i:i+25]) - for x in range(25): - artist_df = pd.DataFrame([artist_id_uni[i+x]], columns=['Artist_uri']) - artist_pop = artist_features['artists'][x]["popularity"] - artist_genres = artist_features['artists'][x]["genres"] - artist_df["Artist_pop"] = artist_pop - if artist_genres: - artist_df["genres"] = " ".join([re.sub(' ', '_', i) for i in artist_genres]) - else: - artist_df["genres"] = "unknown" - artist_ = pd.concat([artist_, artist_df], axis=0) - except Exception as e: - err.append(e) - continue - try: - test = pd.DataFrame( - track_, columns=['Track_uri', 'Artist_uri', 'Album_uri']) - - test.rename(columns={'Track_uri': 'track_uri', - 'Artist_uri': 'artist_uri', 'Album_uri': 'album_uri'}, inplace=True) - - audio_features.drop( - columns=['type', 'uri', 'track_href', 'analysis_url'], axis=1, inplace=True) - - test = pd.merge(test, audio_features, - left_on="track_uri", right_on="id", how='outer') - test = pd.merge(test, track_, left_on="track_uri", - right_on="Track_uri", how='outer') - test = pd.merge(test, artist_, left_on="artist_uri", - right_on="Artist_uri", how='outer') - - test.rename(columns={'genres': 'Artist_genres'}, inplace=True) - - test.drop(columns=['Track_uri', 'Artist_uri_x', - 'Artist_uri_y', 'Album_uri', 'id'], axis=1, inplace=True) - - test.dropna(axis=0, inplace=True) - test['Track_pop'] = test['Track_pop'].apply(lambda x: int(x/5)) - test['Artist_pop'] = test['Artist_pop'].apply(lambda x: int(x/5)) - test['Track_release_date'] = test['Track_release_date'].apply(lambda x: x.split('-')[0]) - test['Track_release_date'] = test['Track_release_date'].astype('int16') - test['Track_release_date'] = test['Track_release_date'].apply(lambda x: int(x/50)) - - test[['danceability', 'energy', 'key', 'loudness', 'mode', 'speechiness', 'acousticness', 'instrumentalness', 'liveness', 'valence', 'tempo', 'time_signature']] = test[[ - 'danceability', 'energy', 'key', 'loudness', 'mode', 'speechiness', 'acousticness', 'instrumentalness', 'liveness', 'valence', 'tempo', 'time_signature']].astype('float16') - test[['duration_ms']] = test[['duration_ms']].astype('float32') - test[['Track_release_date', 'Track_pop', 'Artist_pop']] = test[[ - 'Track_release_date', 'Track_pop', 'Artist_pop']].astype('int8') - except Exception as e: - err.append(e) - err.append('Finish extraction') - return test, err - test, err = extract(track_ids_uni, artist_id_uni) - - for i in err: - log.append(i) - del err - grow = test.copy() - test['Artist_genres'] = test['Artist_genres'].apply(lambda x: x.split(" ")) - tfidf = TfidfVectorizer(max_features=max_gen) - tfidf_matrix = tfidf.fit_transform(test['Artist_genres'].apply(lambda x: " ".join(x))) - genre_df = pd.DataFrame(tfidf_matrix.toarray()) - genre_df.columns = ['genre' + "|" +i for i in tfidf.get_feature_names_out()] - genre_df = genre_df.astype('float16') - test.drop(columns=['Artist_genres'], axis=1, inplace=True) - test = pd.concat([test.reset_index(drop=True),genre_df.reset_index(drop=True)], axis=1) - Fresult = pd.DataFrame() - x = 1 - for i in range(int(lendf/2), lendf+1, int(lendf/2)): - try: - df = pd.read_csv('Data/streamlit.csv',names= col_name,dtype=dtypes,skiprows=x,nrows=i) - log.append('reading data frame chunks from {} to {}'.format(x,i)) - except Exception as e: - log.append('Failed to load grow') - log.append(e) - grow = 
grow[~grow['track_uri'].isin(df['track_uri'].values)] - df = df[~df['track_uri'].isin(test['track_uri'].values)] - df['Artist_genres'] = df['Artist_genres'].apply(lambda x: x.split(" ")) - tfidf_matrix = tfidf.transform(df['Artist_genres'].apply(lambda x: " ".join(x))) - genre_df = pd.DataFrame(tfidf_matrix.toarray()) - genre_df.columns = ['genre' + "|" +i for i in tfidf.get_feature_names_out()] - genre_df = genre_df.astype('float16') - df.drop(columns=['Artist_genres'], axis=1, inplace=True) - df = pd.concat([df.reset_index(drop=True), - genre_df.reset_index(drop=True)], axis=1) - del genre_df - try: - df.drop(columns=['genre|unknown'], axis=1, inplace=True) - test.drop(columns=['genre|unknown'], axis=1, inplace=True) - except: - log.append('genre|unknown not found') - log.append('Scaling the data .....') - if x == 1: - sc = pickle.load(open('Data/sc.sav','rb')) - df.iloc[:, 3:19] = sc.transform(df.iloc[:, 3:19]) - test.iloc[:, 3:19] = sc.transform(test.iloc[:, 3:19]) - log.append("Creating playlist vector") - playvec = pd.DataFrame(test.sum(axis=0)).T - else: - df.iloc[:, 3:19] = sc.transform(df.iloc[:, 3:19]) - x = i - if model == 'Model 1': - df['sim']=cosine_similarity(df.drop(['track_uri', 'artist_uri', 'album_uri'], axis = 1),playvec.drop(['track_uri', 'artist_uri', 'album_uri'], axis = 1)) - df['sim2']=cosine_similarity(df.iloc[:,16:-1],playvec.iloc[:,16:]) - df['sim3']=cosine_similarity(df.iloc[:,19:-2],playvec.iloc[:,19:]) - df = df.sort_values(['sim3','sim2','sim'],ascending = False,kind='stable').groupby('artist_uri').head(same_art).head(50) - Fresult = pd.concat([Fresult, df], axis=0) - Fresult = Fresult.sort_values(['sim3', 'sim2', 'sim'],ascending=False,kind='stable') - Fresult.drop_duplicates(subset=['track_uri'], inplace=True,keep='first') - Fresult = Fresult.groupby('artist_uri').head(same_art).head(50) - elif model == 'Model 2': - df['sim'] = cosine_similarity(df.iloc[:, 3:16], playvec.iloc[:, 3:16]) - df['sim2'] = cosine_similarity(df.loc[:, df.columns.str.startswith('T') | df.columns.str.startswith('A')], playvec.loc[:, playvec.columns.str.startswith('T') | playvec.columns.str.startswith('A')]) - df['sim3'] = cosine_similarity(df.loc[:, df.columns.str.startswith('genre')], playvec.loc[:, playvec.columns.str.startswith('genre')]) - df['sim4'] = (df['sim']+df['sim2']+df['sim3'])/3 - df = df.sort_values(['sim4'], ascending=False,kind='stable').groupby('artist_uri').head(same_art).head(50) - Fresult = pd.concat([Fresult, df], axis=0) - Fresult = Fresult.sort_values(['sim4'], ascending=False,kind='stable') - Fresult.drop_duplicates(subset=['track_uri'], inplace=True,keep='first') - Fresult = Fresult.groupby('artist_uri').head(same_art).head(50) - del test - try: - del df - log.append('Getting Result') - except: - log.append('Getting Result') - if model == 'Model 1': - Fresult = Fresult.sort_values(['sim3', 'sim2', 'sim'],ascending=False,kind='stable') - Fresult.drop_duplicates(subset=['track_uri'], inplace=True,keep='first') - Fresult = Fresult.groupby('artist_uri').head(same_art).track_uri.head(50) - elif model == 'Model 2': - Fresult = Fresult.sort_values(['sim4'], ascending=False,kind='stable') - Fresult.drop_duplicates(subset=['track_uri'], inplace=True,keep='first') - Fresult = Fresult.groupby('artist_uri').head(same_art).track_uri.head(50) - log.append('{} New Tracks Found'.format(len(grow))) - if(len(grow)>=1): - try: - new=pd.read_csv('Data/new_tracks.csv',dtype=dtypes) - new=pd.concat([new, grow], axis=0) - new=new[new.Track_pop >0] - 
new.drop_duplicates(subset=['track_uri'], inplace=True,keep='last') - new.to_csv('Data/new_tracks.csv',index=False) - except: - grow.to_csv('Data/new_tracks.csv', index=False) - log.append('Model run successfully') - except Exception as e: - log.append("Model Failed") - log.append(e) - return Fresult, log - - - -def top_tracks(url,region): - log = [] - Fresult = [] - uri = url.split('/')[-1].split('?')[0] - try: - log.append('spotify local method') - stream = open("Spotify/Spotify.yaml") - spotify_details = yaml.safe_load(stream) - auth_manager = SpotifyClientCredentials(client_id=spotify_details['Client_id'], client_secret=spotify_details['client_secret']) - except: - log.append('spotify .streamlit method') - try: - Client_id=st.secrets["Client_ID"] - client_secret=st.secrets["Client_secret"] - auth_manager = SpotifyClientCredentials(client_id=Client_id, client_secret=client_secret) - except: - log.append('spotify hug method') - Client_id=os.environ['Client_ID'] - client_secret=os.environ['Client_secret'] - auth_manager = SpotifyClientCredentials(client_id=Client_id, client_secret=client_secret) - sp = spotipy.client.Spotify(auth_manager=auth_manager) - try: - log.append('Starting Spotify Model') - top=sp.artist_top_tracks(uri,country=region) - for i in range(10) : - Fresult.append(top['tracks'][i]['id']) - log.append('Model run successfully') - except Exception as e: - log.append("Model Failed") - log.append(e) - return Fresult,log - -def song_model(url, model, max_gen=3, same_art=5): - log = [] - Fresult = [] - try: - log.append('Start logging') - uri = url.split('/')[-1].split('?')[0] - try: - log.append('spotify local method') - stream = open("Spotify/Spotify.yaml") - spotify_details = yaml.safe_load(stream) - auth_manager = SpotifyClientCredentials(client_id=spotify_details['Client_id'], client_secret=spotify_details['client_secret']) - except: - log.append('spotify .streamlit method') - try: - Client_id=st.secrets["Client_ID"] - client_secret=st.secrets["Client_secret"] - auth_manager = SpotifyClientCredentials(client_id=Client_id, client_secret=client_secret) - except: - log.append('spotify hug method') - Client_id=os.environ['Client_ID'] - client_secret=os.environ['Client_secret'] - auth_manager = SpotifyClientCredentials(client_id=Client_id, client_secret=client_secret) - sp = spotipy.client.Spotify(auth_manager=auth_manager) - - if model == 'Spotify Model': - log.append('Starting Spotify Model') - aa=sp.recommendations(seed_tracks=[uri], limit=25) - for i in range(25): - Fresult.append(aa['tracks'][i]['id']) - log.append('Model run successfully') - return Fresult, log - lendf=len(pd.read_csv('Data/streamlit.csv',usecols=['track_uri'])) - dtypes = {'track_uri': 'object', 'artist_uri': 'object', 'album_uri': 'object', 'danceability': 'float16', 'energy': 'float16', 'key': 'float16', - 'loudness': 'float16', 'mode': 'float16', 'speechiness': 'float16', 'acousticness': 'float16', 'instrumentalness': 'float16', - 'liveness': 'float16', 'valence': 'float16', 'tempo': 'float16', 'duration_ms': 'float32', 'time_signature': 'float16', - 'Track_release_date': 'int8', 'Track_pop': 'int8', 'Artist_pop': 'int8', 'Artist_genres': 'object'} - col_name= ['track_uri', 'artist_uri', 'album_uri', 'danceability', 'energy', 'key', - 'loudness', 'mode', 'speechiness', 'acousticness', 'instrumentalness', - 'liveness', 'valence', 'tempo', 'duration_ms', 'time_signature', - 'Track_release_date', 'Track_pop', 'Artist_pop', 'Artist_genres'] - log.append('Start audio features extraction') - audio_features 
= pd.DataFrame(sp.audio_features([uri])) - log.append('Start track features extraction') - track_ = pd.DataFrame() - track_features = sp.tracks([uri]) - track_pop = pd.DataFrame([uri], columns=['Track_uri']) - track_pop['Track_release_date'] = track_features['tracks'][0]['album']['release_date'] - track_pop['Track_pop'] = track_features['tracks'][0]["popularity"] - track_pop['Artist_uri'] = track_features['tracks'][0]['artists'][0]['id'] - track_pop['Album_uri'] = track_features['tracks'][0]['album']['id'] - track_ = pd.concat([track_, track_pop], axis=0) - log.append('Start artist features extraction') - artist_id_uni=list(track_['Artist_uri']) - artist_ = pd.DataFrame() - artist_features = sp.artists(artist_id_uni) - artist_df = pd.DataFrame(artist_id_uni, columns=['Artist_uri']) - artist_pop = artist_features['artists'][0]["popularity"] - artist_genres = artist_features['artists'][0]["genres"] - artist_df["Artist_pop"] = artist_pop - if artist_genres: - artist_df["genres"] = " ".join([re.sub(' ', '_', i) for i in artist_genres]) - else: - artist_df["genres"] = "unknown" - artist_ = pd.concat([artist_, artist_df], axis=0) - try: - test = pd.DataFrame(track_, columns=['Track_uri', 'Artist_uri', 'Album_uri']) - test.rename(columns={'Track_uri': 'track_uri','Artist_uri': 'artist_uri', 'Album_uri': 'album_uri'}, inplace=True) - audio_features.drop(columns=['type', 'uri', 'track_href', 'analysis_url'], axis=1, inplace=True) - test = pd.merge(test, audio_features,left_on="track_uri", right_on="id", how='outer') - test = pd.merge(test, track_, left_on="track_uri",right_on="Track_uri", how='outer') - test = pd.merge(test, artist_, left_on="artist_uri",right_on="Artist_uri", how='outer') - test.rename(columns={'genres': 'Artist_genres'}, inplace=True) - test.drop(columns=['Track_uri', 'Artist_uri_x','Artist_uri_y', 'Album_uri', 'id'], axis=1, inplace=True) - test.dropna(axis=0, inplace=True) - test['Track_pop'] = test['Track_pop'].apply(lambda x: int(x/5)) - test['Artist_pop'] = test['Artist_pop'].apply(lambda x: int(x/5)) - test['Track_release_date'] = test['Track_release_date'].apply(lambda x: x.split('-')[0]) - test['Track_release_date'] = test['Track_release_date'].astype('int16') - test['Track_release_date'] = test['Track_release_date'].apply(lambda x: int(x/50)) - test[['danceability', 'energy', 'key', 'loudness', 'mode', 'speechiness', 'acousticness', 'instrumentalness', 'liveness', 'valence', 'tempo', 'time_signature']] = test[['danceability', 'energy', 'key', 'loudness', 'mode', 'speechiness', 'acousticness', 'instrumentalness', 'liveness', 'valence', 'tempo', 'time_signature']].astype('float16') - test[['duration_ms']] = test[['duration_ms']].astype('float32') - test[['Track_release_date', 'Track_pop', 'Artist_pop']] = test[['Track_release_date', 'Track_pop', 'Artist_pop']].astype('int8') - except Exception as e: - log.append(e) - log.append('Finish extraction') - grow = test.copy() - test['Artist_genres'] = test['Artist_genres'].apply(lambda x: x.split(" ")) - tfidf = TfidfVectorizer(max_features=max_gen) - tfidf_matrix = tfidf.fit_transform(test['Artist_genres'].apply(lambda x: " ".join(x))) - genre_df = pd.DataFrame(tfidf_matrix.toarray()) - genre_df.columns = ['genre' + "|" +i for i in tfidf.get_feature_names_out()] - genre_df = genre_df.astype('float16') - test.drop(columns=['Artist_genres'], axis=1, inplace=True) - test = pd.concat([test.reset_index(drop=True),genre_df.reset_index(drop=True)], axis=1) - Fresult = pd.DataFrame() - x = 1 - for i in range(int(lendf/2), lendf+1, 
int(lendf/2)): - try: - df = pd.read_csv('Data/streamlit.csv',names= col_name,dtype=dtypes,skiprows=x,nrows=i) - log.append('reading data frame chunks from {} to {}'.format(x,i)) - except Exception as e: - log.append('Failed to load grow') - log.append(e) - grow = grow[~grow['track_uri'].isin(df['track_uri'].values)] - df = df[~df['track_uri'].isin(test['track_uri'].values)] - df['Artist_genres'] = df['Artist_genres'].apply(lambda x: x.split(" ")) - tfidf_matrix = tfidf.transform(df['Artist_genres'].apply(lambda x: " ".join(x))) - genre_df = pd.DataFrame(tfidf_matrix.toarray()) - genre_df.columns = ['genre' + "|" +i for i in tfidf.get_feature_names_out()] - genre_df = genre_df.astype('float16') - df.drop(columns=['Artist_genres'], axis=1, inplace=True) - df = pd.concat([df.reset_index(drop=True), - genre_df.reset_index(drop=True)], axis=1) - del genre_df - try: - df.drop(columns=['genre|unknown'], axis=1, inplace=True) - test.drop(columns=['genre|unknown'], axis=1, inplace=True) - except: - log.append('genre|unknown not found') - log.append('Scaling the data .....') - if x == 1: - sc = pickle.load(open('Data/sc.sav','rb')) - df.iloc[:, 3:19] = sc.transform(df.iloc[:, 3:19]) - test.iloc[:, 3:19] = sc.transform(test.iloc[:, 3:19]) - log.append("Creating playlist vector") - playvec = pd.DataFrame(test.sum(axis=0)).T - else: - df.iloc[:, 3:19] = sc.transform(df.iloc[:, 3:19]) - x = i - if model == 'Model 1': - df['sim']=cosine_similarity(df.drop(['track_uri', 'artist_uri', 'album_uri'], axis = 1),playvec.drop(['track_uri', 'artist_uri', 'album_uri'], axis = 1)) - df['sim2']=cosine_similarity(df.iloc[:,16:-1],playvec.iloc[:,16:]) - df['sim3']=cosine_similarity(df.iloc[:,19:-2],playvec.iloc[:,19:]) - df = df.sort_values(['sim3','sim2','sim'],ascending = False,kind='stable').groupby('artist_uri').head(same_art).head(50) - Fresult = pd.concat([Fresult, df], axis=0) - Fresult = Fresult.sort_values(['sim3', 'sim2', 'sim'],ascending=False,kind='stable') - Fresult.drop_duplicates(subset=['track_uri'], inplace=True,keep='first') - Fresult = Fresult.groupby('artist_uri').head(same_art).head(50) - elif model == 'Model 2': - df['sim'] = cosine_similarity(df.iloc[:, 3:16], playvec.iloc[:, 3:16]) - df['sim2'] = cosine_similarity(df.loc[:, df.columns.str.startswith('T') | df.columns.str.startswith('A')], playvec.loc[:, playvec.columns.str.startswith('T') | playvec.columns.str.startswith('A')]) - df['sim3'] = cosine_similarity(df.loc[:, df.columns.str.startswith('genre')], playvec.loc[:, playvec.columns.str.startswith('genre')]) - df['sim4'] = (df['sim']+df['sim2']+df['sim3'])/3 - df = df.sort_values(['sim4'], ascending=False,kind='stable').groupby('artist_uri').head(same_art).head(50) - Fresult = pd.concat([Fresult, df], axis=0) - Fresult = Fresult.sort_values(['sim4'], ascending=False,kind='stable') - Fresult.drop_duplicates(subset=['track_uri'], inplace=True,keep='first') - Fresult = Fresult.groupby('artist_uri').head(same_art).head(50) - del test - try: - del df - log.append('Getting Result') - except: - log.append('Getting Result') - if model == 'Model 1': - Fresult = Fresult.sort_values(['sim3', 'sim2', 'sim'],ascending=False,kind='stable') - Fresult.drop_duplicates(subset=['track_uri'], inplace=True,keep='first') - Fresult = Fresult.groupby('artist_uri').head(same_art).track_uri.head(50) - elif model == 'Model 2': - Fresult = Fresult.sort_values(['sim4'], ascending=False,kind='stable') - Fresult.drop_duplicates(subset=['track_uri'], inplace=True,keep='first') - Fresult = 
Fresult.groupby('artist_uri').head(same_art).track_uri.head(50) - log.append('{} New Tracks Found'.format(len(grow))) - if(len(grow)>=1): - try: - new=pd.read_csv('Data/new_tracks.csv',dtype=dtypes) - new=pd.concat([new, grow], axis=0) - new=new[new.Track_pop >0] - new.drop_duplicates(subset=['track_uri'], inplace=True,keep='last') - new.to_csv('Data/new_tracks.csv',index=False) - except: - grow.to_csv('Data/new_tracks.csv', index=False) - log.append('Model run successfully') - except Exception as e: - log.append("Model Failed") - log.append(e) - return Fresult, log - -def update_dataset(): - col_name= ['track_uri', 'artist_uri', 'album_uri', 'danceability', 'energy', 'key', - 'loudness', 'mode', 'speechiness', 'acousticness', 'instrumentalness', - 'liveness', 'valence', 'tempo', 'duration_ms', 'time_signature', - 'Track_release_date', 'Track_pop', 'Artist_pop', 'Artist_genres'] - dtypes = {'track_uri': 'object', 'artist_uri': 'object', 'album_uri': 'object', 'danceability': 'float16', 'energy': 'float16', 'key': 'float16', - 'loudness': 'float16', 'mode': 'float16', 'speechiness': 'float16', 'acousticness': 'float16', 'instrumentalness': 'float16', - 'liveness': 'float16', 'valence': 'float16', 'tempo': 'float16', 'duration_ms': 'float32', 'time_signature': 'float16', - 'Track_release_date': 'int8', 'Track_pop': 'int8', 'Artist_pop': 'int8', 'Artist_genres': 'object'} - df = pd.read_csv('Data/streamlit.csv',dtype=dtypes) - grow = pd.read_csv('Data/new_tracks.csv',dtype=dtypes) - cur = len(df) - df=pd.concat([df,grow],axis=0) - grow=pd.DataFrame(columns=col_name) - grow.to_csv('Data/new_tracks.csv',index=False) - df=df[df.Track_pop >0] - df.drop_duplicates(subset=['track_uri'],inplace=True,keep='last') - df.dropna(axis=0,inplace=True) - df.to_csv('Data/streamlit.csv',index=False) - return (len(df)-cur) - diff --git a/spaces/Redgon/bingo/src/components/chat-attachments.tsx b/spaces/Redgon/bingo/src/components/chat-attachments.tsx deleted file mode 100644 index ef43d4e262935d263b6099138c56f7daade5299d..0000000000000000000000000000000000000000 --- a/spaces/Redgon/bingo/src/components/chat-attachments.tsx +++ /dev/null @@ -1,37 +0,0 @@ -import Image from 'next/image' -import ClearIcon from '@/assets/images/clear.svg' -import RefreshIcon from '@/assets/images/refresh.svg' -import { FileItem } from '@/lib/bots/bing/types' -import { cn } from '@/lib/utils' -import { useBing } from '@/lib/hooks/use-bing' - -type ChatAttachmentsProps = Pick, 'attachmentList' | 'setAttachmentList' | 'uploadImage'> - -export function ChatAttachments({ attachmentList = [], setAttachmentList, uploadImage }: ChatAttachmentsProps) { - return attachmentList.length ? ( -
- {attachmentList.map(file => ( -
- {file.status === 'loading' && ( -
-
-
) - } - {file.status !== 'error' && ( -
- -
) - } - {file.status === 'error' && ( -
- refresh uploadImage(file.url)} /> -
- )} - -
- ))} -
- ) : null -} diff --git a/spaces/Rimi98/InsectRecognizer/app.py b/spaces/Rimi98/InsectRecognizer/app.py deleted file mode 100644 index ed6b20c0c3fc0a15d541df2074aac9785ed3dc1e..0000000000000000000000000000000000000000 --- a/spaces/Rimi98/InsectRecognizer/app.py +++ /dev/null @@ -1,33 +0,0 @@ -import gradio as gr -from fastai.vision.all import load_learner -from fastai import * -import torch -import os -from PIL import Image - - -model_path = 'model6-90%.pkl' - -model = load_learner(model_path) - -def result(path): - - pred,_,probability = model.predict(path) - pred = str(pred) - pred = pred.upper() - - return {pred: float(probability.max())} - -path = 'test_images/' - -image_path = [] - -for i in os.listdir(path): - image_path.append(path+i) - - -image = gr.inputs.Image(shape =(128,128)) -label = gr.outputs.Label() - -iface = gr.Interface(fn=result, inputs=image, outputs=label, examples = image_path) -iface.launch(inline=False) \ No newline at end of file diff --git a/spaces/RitaParadaRamos/SmallCapDemo/src/utils.py b/spaces/RitaParadaRamos/SmallCapDemo/src/utils.py deleted file mode 100644 index dc2f3b70261ef1e4346c8c7990ceed6441020b57..0000000000000000000000000000000000000000 --- a/spaces/RitaParadaRamos/SmallCapDemo/src/utils.py +++ /dev/null @@ -1,131 +0,0 @@ -from torch.utils.data import Dataset -from PIL import Image -import torch -import json -import h5py -import bisect - -CAPTION_LENGTH = 25 -SIMPLE_PREFIX = "This image shows " - -def prep_strings(text, tokenizer, template=None, retrieved_caps=None, k=None, is_test=False, max_length=None): - - if is_test: - padding = False - truncation = False - else: - padding = True - truncation = True - - if retrieved_caps is not None: - infix = '\n\n'.join(retrieved_caps[:k]) + '.' - prefix = template.replace('||', infix) - else: - prefix = SIMPLE_PREFIX - - prefix_ids = tokenizer.encode(prefix) - len_prefix = len(prefix_ids) - - text_ids = tokenizer.encode(text, add_special_tokens=False) - if truncation: - text_ids = text_ids[:CAPTION_LENGTH] - input_ids = prefix_ids + text_ids if not is_test else prefix_ids - - # we ignore the prefix (minus one as the first subtoken in the prefix is not predicted) - label_ids = [-100] * (len_prefix - 1) + text_ids + [tokenizer.eos_token_id] - if padding: - input_ids += [tokenizer.pad_token_id] * (max_length - len(input_ids)) - label_ids += [-100] * (max_length - len(label_ids)) - - if is_test: - return input_ids - else: - return input_ids, label_ids - -def postprocess_preds(pred, tokenizer): - pred = pred.split(SIMPLE_PREFIX)[-1] - pred = pred.replace(tokenizer.pad_token, '') - if pred.startswith(tokenizer.bos_token): - pred = pred[len(tokenizer.bos_token):] - if pred.endswith(tokenizer.eos_token): - pred = pred[:-len(tokenizer.eos_token)] - return pred - -class TrainDataset(Dataset): - def __init__(self, df, features_path, tokenizer, rag=False, template_path=None, k=None, max_caption_length=25): - self.df = df - self.tokenizer = tokenizer - self.features = h5py.File(features_path, 'r') - - if rag: - self.template = open(template_path).read().strip() + ' ' - self.max_target_length = (max_caption_length # target caption - + max_caption_length * k # retrieved captions - + len(tokenizer.encode(self.template)) # template - + len(tokenizer.encode('\n\n')) * (k-1) # separator between captions - ) - assert k is not None - self.k = k - self.rag = rag - - def __len__(self): - return len(self.df) - - def __getitem__(self, idx): - text = self.df['text'][idx] - if self.rag: - caps = self.df['caps'][idx] - 
decoder_input_ids, labels = prep_strings(text, self.tokenizer, template=self.template, - retrieved_caps=caps, k=self.k, max_length=self.max_target_length) - else: - decoder_input_ids, labels = prep_strings(text, self.tokenizer, max_length=self.max_target_length) - # load precomputed features - encoder_outputs = self.features[self.df['cocoid'][idx]][()] - encoding = {"encoder_outputs": torch.tensor(encoder_outputs), - "decoder_input_ids": torch.tensor(decoder_input_ids), - "labels": torch.tensor(labels)} - - return encoding - - -def load_data_for_training(annot_path, caps_path=None): - annotations = json.load(open(annot_path))['images'] - if caps_path is not None: - retrieved_caps = json.load(open(caps_path)) - data = {'train': [], 'val': []} - - for item in annotations: - file_name = item['filename'].split('_')[-1] - if caps_path is not None: - caps = retrieved_caps[str(item['cocoid'])] - else: - caps = None - samples = [] - for sentence in item['sentences']: - samples.append({'file_name': file_name, 'cocoid': str(item['cocoid']), 'caps': caps, 'text': ' '.join(sentence['tokens'])}) - if item['split'] == 'train' or item['split'] == 'restval': - data['train'] += samples - elif item['split'] == 'val': - data['val'] += samples - return data - -def load_data_for_inference(annot_path, caps_path=None): - annotations = json.load(open(annot_path))['images'] - if caps_path is not None: - retrieved_caps = json.load(open(caps_path)) - data = {'test': [], 'val': []} - - for item in annotations: - file_name = item['filename'].split('_')[-1] - if caps_path is not None: - caps = retrieved_caps[str(item['cocoid'])] - else: - caps = None - image = {'file_name': file_name, 'caps': caps, 'image_id': str(item['cocoid'])} - if item['split'] == 'test': - data['test'].append(image) - elif item['split'] == 'val': - data['val'].append(image) - - return data - diff --git a/spaces/RohithMidigudla/Comment_Toxicity_Detection/README.md b/spaces/RohithMidigudla/Comment_Toxicity_Detection/README.md deleted file mode 100644 index 455547a9b972b44869bc60993fc450b6c75e5ae6..0000000000000000000000000000000000000000 --- a/spaces/RohithMidigudla/Comment_Toxicity_Detection/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Comment Toxicity Detection -emoji: 💻 -colorFrom: indigo -colorTo: blue -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Sakil/sakil_text_summarization_app/README.md b/spaces/Sakil/sakil_text_summarization_app/README.md deleted file mode 100644 index b48c54c2b1292a9b34d3928e004efaa7b412a337..0000000000000000000000000000000000000000 --- a/spaces/Sakil/sakil_text_summarization_app/README.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: Sakil_text_summarization_app -emoji: ⚡ -colorFrom: green -colorTo: gray -sdk: gradio -app_file: app.py -pinned: false -license: apache-2.0 ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. 
- -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`models`: _List[string]_ -HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. -Will be parsed automatically from your code if not specified here. - -`datasets`: _List[string]_ -HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. -Will be parsed automatically from your code if not specified here. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/Salesforce/EDICT/my_half_diffusers/pipelines/ddim/__init__.py b/spaces/Salesforce/EDICT/my_half_diffusers/pipelines/ddim/__init__.py deleted file mode 100644 index 8fd31868a88ac0d9ec7118574f21a9d8a1d4069b..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/EDICT/my_half_diffusers/pipelines/ddim/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# flake8: noqa -from .pipeline_ddim import DDIMPipeline diff --git a/spaces/Salesforce/EDICT/my_half_diffusers/schedulers/__init__.py b/spaces/Salesforce/EDICT/my_half_diffusers/schedulers/__init__.py deleted file mode 100644 index 20c25f35183faeeef2cd7b5095f80a70a9edac01..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/EDICT/my_half_diffusers/schedulers/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -from ..utils import is_scipy_available -from .scheduling_ddim import DDIMScheduler -from .scheduling_ddpm import DDPMScheduler -from .scheduling_karras_ve import KarrasVeScheduler -from .scheduling_pndm import PNDMScheduler -from .scheduling_sde_ve import ScoreSdeVeScheduler -from .scheduling_sde_vp import ScoreSdeVpScheduler -from .scheduling_utils import SchedulerMixin - - -if is_scipy_available(): - from .scheduling_lms_discrete import LMSDiscreteScheduler -else: - from ..utils.dummy_scipy_objects import * # noqa F403 diff --git a/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/text/sanskrit.py b/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/text/sanskrit.py deleted file mode 100644 index 0223aaac384a2f850f5bc20651fc18eb964607d0..0000000000000000000000000000000000000000 --- a/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/text/sanskrit.py +++ /dev/null @@ -1,62 +0,0 @@ -import re -from indic_transliteration import sanscript - - -# List of (iast, ipa) pairs: -_iast_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('a', 'ə'), - ('ā', 'aː'), - ('ī', 'iː'), - ('ū', 'uː'), - ('ṛ', 'ɹ`'), - ('ṝ', 'ɹ`ː'), - ('ḷ', 'l`'), - ('ḹ', 'l`ː'), - ('e', 'eː'), - ('o', 'oː'), - ('k', 'k⁼'), - ('k⁼h', 'kʰ'), - ('g', 'g⁼'), - ('g⁼h', 'gʰ'), - ('ṅ', 'ŋ'), - ('c', 'ʧ⁼'), - ('ʧ⁼h', 'ʧʰ'), - ('j', 'ʥ⁼'), - ('ʥ⁼h', 'ʥʰ'), - ('ñ', 'n^'), - ('ṭ', 't`⁼'), - ('t`⁼h', 't`ʰ'), - ('ḍ', 'd`⁼'), - ('d`⁼h', 'd`ʰ'), - ('ṇ', 'n`'), - ('t', 't⁼'), - ('t⁼h', 'tʰ'), - ('d', 'd⁼'), - ('d⁼h', 'dʰ'), - ('p', 'p⁼'), - ('p⁼h', 'pʰ'), - ('b', 'b⁼'), - ('b⁼h', 'bʰ'), - ('y', 'j'), - ('ś', 'ʃ'), - ('ṣ', 's`'), - ('r', 'ɾ'), - ('l̤', 'l`'), - ('h', 'ɦ'), - ("'", ''), - ('~', '^'), - ('ṃ', '^') -]] - - -def devanagari_to_ipa(text): - text = text.replace('ॐ', 'ओम्') - text = re.sub(r'\s*।\s*$', '.', text) - text = re.sub(r'\s*।\s*', ', ', text) - text = re.sub(r'\s*॥', '.', text) - text = sanscript.transliterate(text, sanscript.DEVANAGARI, sanscript.IAST) - for regex, replacement in _iast_to_ipa: - text = re.sub(regex, replacement, text) - text = re.sub('(.)[`ː]*ḥ', lambda x: x.group(0) - [:-1]+'h'+x.group(1)+'*', text) - return text diff --git a/spaces/SeViLA/SeViLA/lavis/datasets/builders/classification_builder.py b/spaces/SeViLA/SeViLA/lavis/datasets/builders/classification_builder.py deleted file mode 100644 index 1fa4787bea4eae08114f12112ada29f7105ec686..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/lavis/datasets/builders/classification_builder.py +++ /dev/null @@ -1,27 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. 
- SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -from lavis.common.registry import registry -from lavis.datasets.builders.base_dataset_builder import BaseDatasetBuilder -from lavis.datasets.datasets.nlvr_datasets import NLVRDataset, NLVREvalDataset -from lavis.datasets.datasets.snli_ve_datasets import SNLIVisualEntialmentDataset - - -@registry.register_builder("nlvr") -class NLVRBuilder(BaseDatasetBuilder): - train_dataset_cls = NLVRDataset - eval_dataset_cls = NLVREvalDataset - - DATASET_CONFIG_DICT = {"default": "configs/datasets/nlvr/defaults.yaml"} - - -@registry.register_builder("snli_ve") -class SNLIVisualEntailmentBuilder(BaseDatasetBuilder): - train_dataset_cls = SNLIVisualEntialmentDataset - eval_dataset_cls = SNLIVisualEntialmentDataset - - DATASET_CONFIG_DICT = {"default": "configs/datasets/snli_ve/defaults.yaml"} diff --git a/spaces/SeyedAli/Multilingual-Text-Similarity/README.md b/spaces/SeyedAli/Multilingual-Text-Similarity/README.md deleted file mode 100644 index 0677f894d25f906823dafeb4c12884ffab182c1c..0000000000000000000000000000000000000000 --- a/spaces/SeyedAli/Multilingual-Text-Similarity/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Multilingual Text Similarity -emoji: 📝🆚📝 -colorFrom: purple -colorTo: red -sdk: gradio -sdk_version: 3.44.4 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Silentlin/DiffSinger/inference/svs/opencpop/map.py b/spaces/Silentlin/DiffSinger/inference/svs/opencpop/map.py deleted file mode 100644 index 37d5d0b8a43f88293c73362d75c591e51ec82aee..0000000000000000000000000000000000000000 --- a/spaces/Silentlin/DiffSinger/inference/svs/opencpop/map.py +++ /dev/null @@ -1,8 +0,0 @@ -def cpop_pinyin2ph_func(): - # In the README file of opencpop dataset, they defined a "pinyin to phoneme mapping table" - pinyin2phs = {'AP': 'AP', 'SP': 'SP'} - with open('inference/svs/opencpop/cpop_pinyin2ph.txt') as rf: - for line in rf.readlines(): - elements = [x.strip() for x in line.split('|') if x.strip() != ''] - pinyin2phs[elements[0]] = elements[1] - return pinyin2phs \ No newline at end of file diff --git a/spaces/SouthCity/ShuruiXu/crazy_functions/test_project/cpp/cppipc/prod_cons.h b/spaces/SouthCity/ShuruiXu/crazy_functions/test_project/cpp/cppipc/prod_cons.h deleted file mode 100644 index c9004bb8043a12e32814436baa6262a00c8ef68e..0000000000000000000000000000000000000000 --- a/spaces/SouthCity/ShuruiXu/crazy_functions/test_project/cpp/cppipc/prod_cons.h +++ /dev/null @@ -1,433 +0,0 @@ -#pragma once - -#include -#include -#include -#include -#include - -#include "libipc/def.h" - -#include "libipc/platform/detail.h" -#include "libipc/circ/elem_def.h" -#include "libipc/utility/log.h" -#include "libipc/utility/utility.h" - -namespace ipc { - -//////////////////////////////////////////////////////////////// -/// producer-consumer implementation -//////////////////////////////////////////////////////////////// - -template -struct prod_cons_impl; - -template <> -struct prod_cons_impl> { - - template - struct elem_t { - std::aligned_storage_t data_ {}; - }; - - alignas(cache_line_size) std::atomic rd_; // read index - alignas(cache_line_size) std::atomic wt_; // write index - - constexpr circ::u2_t cursor() const noexcept { - return 0; - } - - template - bool push(W* /*wrapper*/, F&& f, E* elems) { - auto cur_wt = 
circ::index_of(wt_.load(std::memory_order_relaxed)); - if (cur_wt == circ::index_of(rd_.load(std::memory_order_acquire) - 1)) { - return false; // full - } - std::forward(f)(&(elems[cur_wt].data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - /** - * In single-single-unicast, 'force_push' means 'no reader' or 'the only one reader is dead'. - * So we could just disconnect all connections of receiver, and return false. - */ - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(~static_cast(0u)); - return false; - } - - template - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - auto cur_rd = circ::index_of(rd_.load(std::memory_order_relaxed)); - if (cur_rd == circ::index_of(wt_.load(std::memory_order_acquire))) { - return false; // empty - } - std::forward(f)(&(elems[cur_rd].data_)); - std::forward(out)(true); - rd_.fetch_add(1, std::memory_order_release); - return true; - } -}; - -template <> -struct prod_cons_impl> - : prod_cons_impl> { - - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(1); - return false; - } - - template class E, std::size_t DS, std::size_t AS> - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - byte_t buff[DS]; - for (unsigned k = 0;;) { - auto cur_rd = rd_.load(std::memory_order_relaxed); - if (circ::index_of(cur_rd) == - circ::index_of(wt_.load(std::memory_order_acquire))) { - return false; // empty - } - std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff)); - if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) { - std::forward(f)(buff); - std::forward(out)(true); - return true; - } - ipc::yield(k); - } - } -}; - -template <> -struct prod_cons_impl> - : prod_cons_impl> { - - using flag_t = std::uint64_t; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic f_ct_ { 0 }; // commit flag - }; - - alignas(cache_line_size) std::atomic ct_; // commit index - - template - bool push(W* /*wrapper*/, F&& f, E* elems) { - circ::u2_t cur_ct, nxt_ct; - for (unsigned k = 0;;) { - cur_ct = ct_.load(std::memory_order_relaxed); - if (circ::index_of(nxt_ct = cur_ct + 1) == - circ::index_of(rd_.load(std::memory_order_acquire))) { - return false; // full - } - if (ct_.compare_exchange_weak(cur_ct, nxt_ct, std::memory_order_acq_rel)) { - break; - } - ipc::yield(k); - } - auto* el = elems + circ::index_of(cur_ct); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - while (1) { - auto cac_ct = el->f_ct_.load(std::memory_order_acquire); - if (cur_ct != wt_.load(std::memory_order_relaxed)) { - return true; - } - if ((~cac_ct) != cur_ct) { - return true; - } - if (!el->f_ct_.compare_exchange_strong(cac_ct, 0, std::memory_order_relaxed)) { - return true; - } - wt_.store(nxt_ct, std::memory_order_release); - cur_ct = nxt_ct; - nxt_ct = cur_ct + 1; - el = elems + circ::index_of(cur_ct); - } - return true; - } - - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(1); - return false; - } - - template class E, std::size_t DS, std::size_t AS> - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - byte_t buff[DS]; - for (unsigned k = 0;;) { - auto cur_rd = rd_.load(std::memory_order_relaxed); - auto cur_wt = wt_.load(std::memory_order_acquire); - auto id_rd = circ::index_of(cur_rd); - auto id_wt = circ::index_of(cur_wt); - if (id_rd 
== id_wt) { - auto* el = elems + id_wt; - auto cac_ct = el->f_ct_.load(std::memory_order_acquire); - if ((~cac_ct) != cur_wt) { - return false; // empty - } - if (el->f_ct_.compare_exchange_weak(cac_ct, 0, std::memory_order_relaxed)) { - wt_.store(cur_wt + 1, std::memory_order_release); - } - k = 0; - } - else { - std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff)); - if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) { - std::forward(f)(buff); - std::forward(out)(true); - return true; - } - ipc::yield(k); - } - } - } -}; - -template <> -struct prod_cons_impl> { - - using rc_t = std::uint64_t; - - enum : rc_t { - ep_mask = 0x00000000ffffffffull, - ep_incr = 0x0000000100000000ull - }; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic rc_ { 0 }; // read-counter - }; - - alignas(cache_line_size) std::atomic wt_; // write index - alignas(cache_line_size) rc_t epoch_ { 0 }; // only one writer - - circ::u2_t cursor() const noexcept { - return wt_.load(std::memory_order_acquire); - } - - template - bool push(W* wrapper, F&& f, E* elems) { - E* el; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(wt_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & ep_mask; - if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch_)) { - return false; // has not finished yet - } - // consider rem_cc to be 0 here - if (el->rc_.compare_exchange_weak( - cur_rc, epoch_ | static_cast(cc), std::memory_order_release)) { - break; - } - ipc::yield(k); - } - std::forward(f)(&(el->data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - template - bool force_push(W* wrapper, F&& f, E* elems) { - E* el; - epoch_ += ep_incr; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(wt_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & ep_mask; - if (cc & rem_cc) { - ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc); - cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers - if (cc == 0) return false; // no reader - } - // just compare & exchange - if (el->rc_.compare_exchange_weak( - cur_rc, epoch_ | static_cast(cc), std::memory_order_release)) { - break; - } - ipc::yield(k); - } - std::forward(f)(&(el->data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - template - bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E* elems) { - if (cur == cursor()) return false; // acquire - auto* el = elems + circ::index_of(cur++); - std::forward(f)(&(el->data_)); - for (unsigned k = 0;;) { - auto cur_rc = el->rc_.load(std::memory_order_acquire); - if ((cur_rc & ep_mask) == 0) { - std::forward(out)(true); - return true; - } - auto nxt_rc = cur_rc & ~static_cast(wrapper->connected_id()); - if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) { - std::forward(out)((nxt_rc & ep_mask) == 0); - return true; - } - ipc::yield(k); - } - } -}; - -template <> -struct prod_cons_impl> { - - using rc_t = std::uint64_t; - using flag_t = std::uint64_t; - - enum 
: rc_t { - rc_mask = 0x00000000ffffffffull, - ep_mask = 0x00ffffffffffffffull, - ep_incr = 0x0100000000000000ull, - ic_mask = 0xff000000ffffffffull, - ic_incr = 0x0000000100000000ull - }; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic rc_ { 0 }; // read-counter - std::atomic f_ct_ { 0 }; // commit flag - }; - - alignas(cache_line_size) std::atomic ct_; // commit index - alignas(cache_line_size) std::atomic epoch_ { 0 }; - - circ::u2_t cursor() const noexcept { - return ct_.load(std::memory_order_acquire); - } - - constexpr static rc_t inc_rc(rc_t rc) noexcept { - return (rc & ic_mask) | ((rc + ic_incr) & ~ic_mask); - } - - constexpr static rc_t inc_mask(rc_t rc) noexcept { - return inc_rc(rc) & ~rc_mask; - } - - template - bool push(W* wrapper, F&& f, E* elems) { - E* el; - circ::u2_t cur_ct; - rc_t epoch = epoch_.load(std::memory_order_acquire); - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_relaxed); - circ::cc_t rem_cc = cur_rc & rc_mask; - if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch)) { - return false; // has not finished yet - } - else if (!rem_cc) { - auto cur_fl = el->f_ct_.load(std::memory_order_acquire); - if ((cur_fl != cur_ct) && cur_fl) { - return false; // full - } - } - // consider rem_cc to be 0 here - if (el->rc_.compare_exchange_weak( - cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast(cc), std::memory_order_relaxed) && - epoch_.compare_exchange_weak(epoch, epoch, std::memory_order_acq_rel)) { - break; - } - ipc::yield(k); - } - // only one thread/process would touch here at one time - ct_.store(cur_ct + 1, std::memory_order_release); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - return true; - } - - template - bool force_push(W* wrapper, F&& f, E* elems) { - E* el; - circ::u2_t cur_ct; - rc_t epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & rc_mask; - if (cc & rem_cc) { - ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc); - cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers - if (cc == 0) return false; // no reader - } - // just compare & exchange - if (el->rc_.compare_exchange_weak( - cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast(cc), std::memory_order_relaxed)) { - if (epoch == epoch_.load(std::memory_order_acquire)) { - break; - } - else if (push(wrapper, std::forward(f), elems)) { - return true; - } - epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr; - } - ipc::yield(k); - } - // only one thread/process would touch here at one time - ct_.store(cur_ct + 1, std::memory_order_release); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - return true; - } - - template - bool pop(W* wrapper, circ::u2_t& cur, 
F&& f, R&& out, E(& elems)[N]) { - auto* el = elems + circ::index_of(cur); - auto cur_fl = el->f_ct_.load(std::memory_order_acquire); - if (cur_fl != ~static_cast(cur)) { - return false; // empty - } - ++cur; - std::forward(f)(&(el->data_)); - for (unsigned k = 0;;) { - auto cur_rc = el->rc_.load(std::memory_order_acquire); - if ((cur_rc & rc_mask) == 0) { - std::forward(out)(true); - el->f_ct_.store(cur + N - 1, std::memory_order_release); - return true; - } - auto nxt_rc = inc_rc(cur_rc) & ~static_cast(wrapper->connected_id()); - bool last_one = false; - if ((last_one = (nxt_rc & rc_mask) == 0)) { - el->f_ct_.store(cur + N - 1, std::memory_order_release); - } - if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) { - std::forward(out)(last_one); - return true; - } - ipc::yield(k); - } - } -}; - -} // namespace ipc diff --git a/spaces/Spark808/rvc-demo/config.py b/spaces/Spark808/rvc-demo/config.py deleted file mode 100644 index c0c16e0017efbcaf250cb539a1d0edb4e83575e4..0000000000000000000000000000000000000000 --- a/spaces/Spark808/rvc-demo/config.py +++ /dev/null @@ -1,88 +0,0 @@ -########################硬件参数######################## - -# 填写cuda:x, cpu 或 mps, x指代第几张卡,只支持 N卡 / Apple Silicon 加速 -device = "cuda:0" - -# 9-10-20-30-40系显卡无脑True,不影响质量,>=20显卡开启有加速 -is_half = True - -# 默认0用上所有线程,写数字限制CPU资源使用 -n_cpu = 0 - -########################硬件参数######################## - - -##################下为参数处理逻辑,勿动################## - -########################命令行参数######################## -import argparse - -parser = argparse.ArgumentParser() -parser.add_argument("--port", type=int, default=7865, help="Listen port") -parser.add_argument("--pycmd", type=str, default="python", help="Python command") -parser.add_argument("--colab", action="store_true", help="Launch in colab") -parser.add_argument( - "--noparallel", action="store_true", help="Disable parallel processing" -) -parser.add_argument( - "--noautoopen", action="store_true", help="Do not open in browser automatically" -) -cmd_opts, unknown = parser.parse_known_args() - -python_cmd = cmd_opts.pycmd -listen_port = cmd_opts.port -iscolab = cmd_opts.colab -noparallel = cmd_opts.noparallel -noautoopen = cmd_opts.noautoopen -########################命令行参数######################## - -import sys -import torch - - -# has_mps is only available in nightly pytorch (for now) and MasOS 12.3+. 
-# check `getattr` and try it for compatibility -def has_mps() -> bool: - if sys.platform != "darwin": - return False - else: - if not getattr(torch, "has_mps", False): - return False - try: - torch.zeros(1).to(torch.device("mps")) - return True - except Exception: - return False - - -if not torch.cuda.is_available(): - if has_mps(): - print("没有发现支持的N卡, 使用MPS进行推理") - device = "mps" - else: - print("没有发现支持的N卡, 使用CPU进行推理") - device = "cpu" - is_half = False - -if device not in ["cpu", "mps"]: - gpu_name = torch.cuda.get_device_name(int(device.split(":")[-1])) - if "16" in gpu_name or "MX" in gpu_name: - print("16系显卡/MX系显卡强制单精度") - is_half = False - -from multiprocessing import cpu_count - -if n_cpu == 0: - n_cpu = cpu_count() -if is_half: - # 6G显存配置 - x_pad = 3 - x_query = 10 - x_center = 60 - x_max = 65 -else: - # 5G显存配置 - x_pad = 1 - x_query = 6 - x_center = 38 - x_max = 41 diff --git a/spaces/Starcodium/README/README.md b/spaces/Starcodium/README/README.md deleted file mode 100644 index da773cdd3a0a1cd462747db1f16b6235db4f6efb..0000000000000000000000000000000000000000 --- a/spaces/Starcodium/README/README.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -title: README -emoji: 💻 -colorFrom: pink -thumbnail: https://imgur.com/a/AxhpUlp -colorTo: yellow -sdk: static -pinned: false ---- - - -
- Starcodium -
- - -## Welcome to Starcodium Community - -Welcome to the vibrant community of Starcodium! Here, we share exciting updates on newly developed and trained AI models. Our current star is the remarkable AI companion, "Vergil," who actively engages with our community on our Discord server. - -## Discover What Awaits - -At Starcodium, we house a collection of diverse AI models, each trained with a unique approach. As of now, we specialize in text-generation models that are fine-tuned on conversational datasets which bring delightful interactions. With your invaluable support and feedback, we aim to expand our repertoire by developing and training models with different functionalities to keep you thoroughly entertained. - -## Meet Our Team - -Organization Leader -- Username: AP -- Discord: Aubrie#0727 -- YouTube: [Aubrie's YouTube Channel](https://www.youtube.com/@aubriep) -- Github: [Aubrie's Github](https://github.com/AubrienPippin) -- HuggingFace: [Aubrie on HuggingFace](https://huggingface.co/aubrie) - -AI Researcher/Software Developer -- Username: SpeedStar101 -- Discord: SpeedStar101#0101 -- YouTube: [SpeedStar101's YouTube Channel](https://www.youtube.com/@SpeedStar101) -- Github: [SpeedStar101's Github](https://github.com/SpeedStar1O1) -- HuggingFace: [SpeedStar101 on HuggingFace](https://huggingface.co/SpeedStar101) - -App Testers -- Username: GhostFace4606 -- Discord: Ghost4606#8895 - -- Username: Tekio -- Discord: ʞɔıɟɹǝl⋊#1878 - -## Join the Starcodium Community - -Welcome to our thriving Discord server! If you haven't joined us yet, we invite you to become part of our engaging community. Click the link below to join: - -[Starcodium Server](https://discord.com/invite/qauyxubB7a) - -## Become an App Tester - -At Starcodium, we value the contributions of our dedicated community members. By becoming an App Tester, you'll gain exclusive privileges and play a crucial role in shaping our future endeavors. - -As an esteemed App Tester, you will be granted the prestigious app tester role in our main server. Moreover, you'll have the exciting opportunity to join our Testing Ground Server—a dedicated space where we put our cutting-edge Discord Bots and AI Models through rigorous testing and refinement. - -Join us today, and unlock a world of possibilities. Elevate your involvement, collaborate with like-minded individuals, and help us shape the future of AI innovation. Let's embark on this exciting journey together! - -## Thank you - -We're thrilled to have you join our community at Starcodium, and we look forward to creating captivating AI experiences together. Explore, engage, and let's embark on an incredible journey into the realm of AI innovation! 
\ No newline at end of file diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydev_bundle/pydev_monkey.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydev_bundle/pydev_monkey.py deleted file mode 100644 index d5fe01d2a2dab89ef79c9de152556fa113ce205d..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydev_bundle/pydev_monkey.py +++ /dev/null @@ -1,1246 +0,0 @@ -# License: EPL -import os -import re -import sys -from _pydev_bundle._pydev_saved_modules import threading -from _pydevd_bundle.pydevd_constants import get_global_debugger, IS_WINDOWS, IS_JYTHON, get_current_thread_id, \ - sorted_dict_repr, set_global_debugger, DebugInfoHolder -from _pydev_bundle import pydev_log -from contextlib import contextmanager -from _pydevd_bundle import pydevd_constants, pydevd_defaults -from _pydevd_bundle.pydevd_defaults import PydevdCustomization -import ast - -try: - from pathlib import Path -except ImportError: - Path = None - -#=============================================================================== -# Things that are dependent on having the pydevd debugger -#=============================================================================== - -pydev_src_dir = os.path.dirname(os.path.dirname(__file__)) - -_arg_patch = threading.local() - - -@contextmanager -def skip_subprocess_arg_patch(): - _arg_patch.apply_arg_patching = False - try: - yield - finally: - _arg_patch.apply_arg_patching = True - - -def _get_apply_arg_patching(): - return getattr(_arg_patch, 'apply_arg_patching', True) - - -def _get_setup_updated_with_protocol_and_ppid(setup, is_exec=False): - if setup is None: - setup = {} - setup = setup.copy() - # Discard anything related to the protocol (we'll set the the protocol based on the one - # currently set). - setup.pop(pydevd_constants.ARGUMENT_HTTP_JSON_PROTOCOL, None) - setup.pop(pydevd_constants.ARGUMENT_JSON_PROTOCOL, None) - setup.pop(pydevd_constants.ARGUMENT_QUOTED_LINE_PROTOCOL, None) - - if not is_exec: - # i.e.: The ppid for the subprocess is the current pid. - # If it's an exec, keep it what it was. 
- setup[pydevd_constants.ARGUMENT_PPID] = os.getpid() - - protocol = pydevd_constants.get_protocol() - if protocol == pydevd_constants.HTTP_JSON_PROTOCOL: - setup[pydevd_constants.ARGUMENT_HTTP_JSON_PROTOCOL] = True - - elif protocol == pydevd_constants.JSON_PROTOCOL: - setup[pydevd_constants.ARGUMENT_JSON_PROTOCOL] = True - - elif protocol == pydevd_constants.QUOTED_LINE_PROTOCOL: - setup[pydevd_constants.ARGUMENT_QUOTED_LINE_PROTOCOL] = True - - elif protocol == pydevd_constants.HTTP_PROTOCOL: - setup[pydevd_constants.ARGUMENT_HTTP_PROTOCOL] = True - - else: - pydev_log.debug('Unexpected protocol: %s', protocol) - - mode = pydevd_defaults.PydevdCustomization.DEBUG_MODE - if mode: - setup['debug-mode'] = mode - - preimport = pydevd_defaults.PydevdCustomization.PREIMPORT - if preimport: - setup['preimport'] = preimport - - if DebugInfoHolder.PYDEVD_DEBUG_FILE: - setup['log-file'] = DebugInfoHolder.PYDEVD_DEBUG_FILE - - if DebugInfoHolder.DEBUG_TRACE_LEVEL: - setup['log-level'] = DebugInfoHolder.DEBUG_TRACE_LEVEL - - return setup - - -class _LastFutureImportFinder(ast.NodeVisitor): - - def __init__(self): - self.last_future_import_found = None - - def visit_ImportFrom(self, node): - if node.module == '__future__': - self.last_future_import_found = node - - -def _get_offset_from_line_col(code, line, col): - offset = 0 - for i, line_contents in enumerate(code.splitlines(True)): - if i == line: - offset += col - return offset - else: - offset += len(line_contents) - - return -1 - - -def _separate_future_imports(code): - ''' - :param code: - The code from where we want to get the __future__ imports (note that it's possible that - there's no such entry). - - :return tuple(str, str): - The return is a tuple(future_import, code). - - If the future import is not available a return such as ('', code) is given, otherwise, the - future import will end with a ';' (so that it can be put right before the pydevd attach - code). - ''' - try: - node = ast.parse(code, '', 'exec') - visitor = _LastFutureImportFinder() - visitor.visit(node) - - if visitor.last_future_import_found is None: - return '', code - - node = visitor.last_future_import_found - offset = -1 - if hasattr(node, 'end_lineno') and hasattr(node, 'end_col_offset'): - # Python 3.8 onwards has these (so, use when possible). - line, col = node.end_lineno, node.end_col_offset - offset = _get_offset_from_line_col(code, line - 1, col) # ast lines are 1-based, make it 0-based. - - else: - # end line/col not available, let's just find the offset and then search - # for the alias from there. - line, col = node.lineno, node.col_offset - offset = _get_offset_from_line_col(code, line - 1, col) # ast lines are 1-based, make it 0-based. - if offset >= 0 and node.names: - from_future_import_name = node.names[-1].name - i = code.find(from_future_import_name, offset) - if i < 0: - offset = -1 - else: - offset = i + len(from_future_import_name) - - if offset >= 0: - for i in range(offset, len(code)): - if code[i] in (' ', '\t', ';', ')', '\n'): - offset += 1 - else: - break - - future_import = code[:offset] - code_remainder = code[offset:] - - # Now, put '\n' lines back into the code remainder (we had to search for - # `\n)`, but in case we just got the `\n`, it should be at the remainder, - # not at the future import. 
- while future_import.endswith('\n'): - future_import = future_import[:-1] - code_remainder = '\n' + code_remainder - - if not future_import.endswith(';'): - future_import += ';' - return future_import, code_remainder - - # This shouldn't happen... - pydev_log.info('Unable to find line %s in code:\n%r', line, code) - return '', code - - except: - pydev_log.exception('Error getting from __future__ imports from: %r', code) - return '', code - - -def _get_python_c_args(host, port, code, args, setup): - setup = _get_setup_updated_with_protocol_and_ppid(setup) - - # i.e.: We want to make the repr sorted so that it works in tests. - setup_repr = setup if setup is None else (sorted_dict_repr(setup)) - - future_imports = '' - if '__future__' in code: - # If the code has a __future__ import, we need to be able to strip the __future__ - # imports from the code and add them to the start of our code snippet. - future_imports, code = _separate_future_imports(code) - - return ("%simport sys; sys.path.insert(0, r'%s'); import pydevd; pydevd.config(%r, %r); " - "pydevd.settrace(host=%r, port=%s, suspend=False, trace_only_current_thread=False, patch_multiprocessing=True, access_token=%r, client_access_token=%r, __setup_holder__=%s); " - "%s" - ) % ( - future_imports, - pydev_src_dir, - pydevd_constants.get_protocol(), - PydevdCustomization.DEBUG_MODE, - host, - port, - setup.get('access-token'), - setup.get('client-access-token'), - setup_repr, - code) - - -def _get_host_port(): - import pydevd - host, port = pydevd.dispatch() - return host, port - - -def _is_managed_arg(arg): - pydevd_py = _get_str_type_compatible(arg, 'pydevd.py') - if arg.endswith(pydevd_py): - return True - return False - - -def _on_forked_process(setup_tracing=True): - pydevd_constants.after_fork() - pydev_log.initialize_debug_stream(reinitialize=True) - - if setup_tracing: - pydev_log.debug('pydevd on forked process: %s', os.getpid()) - - import pydevd - pydevd.threadingCurrentThread().__pydevd_main_thread = True - pydevd.settrace_forked(setup_tracing=setup_tracing) - - -def _on_set_trace_for_new_thread(global_debugger): - if global_debugger is not None: - global_debugger.enable_tracing() - - -def _get_str_type_compatible(s, args): - ''' - This method converts `args` to byte/unicode based on the `s' type. 
- ''' - if isinstance(args, (list, tuple)): - ret = [] - for arg in args: - if type(s) == type(arg): - ret.append(arg) - else: - if isinstance(s, bytes): - ret.append(arg.encode('utf-8')) - else: - ret.append(arg.decode('utf-8')) - return ret - else: - if type(s) == type(args): - return args - else: - if isinstance(s, bytes): - return args.encode('utf-8') - else: - return args.decode('utf-8') - - -#=============================================================================== -# Things related to monkey-patching -#=============================================================================== -def is_python(path): - single_quote, double_quote = _get_str_type_compatible(path, ["'", '"']) - - if path.endswith(single_quote) or path.endswith(double_quote): - path = path[1:len(path) - 1] - filename = os.path.basename(path).lower() - for name in _get_str_type_compatible(filename, ['python', 'jython', 'pypy']): - if filename.find(name) != -1: - return True - - return False - - -class InvalidTypeInArgsException(Exception): - pass - - -def remove_quotes_from_args(args): - if sys.platform == "win32": - new_args = [] - - for x in args: - if Path is not None and isinstance(x, Path): - x = str(x) - else: - if not isinstance(x, (bytes, str)): - raise InvalidTypeInArgsException(str(type(x))) - - double_quote, two_double_quotes = _get_str_type_compatible(x, ['"', '""']) - - if x != two_double_quotes: - if len(x) > 1 and x.startswith(double_quote) and x.endswith(double_quote): - x = x[1:-1] - - new_args.append(x) - return new_args - else: - new_args = [] - for x in args: - if Path is not None and isinstance(x, Path): - x = x.as_posix() - else: - if not isinstance(x, (bytes, str)): - raise InvalidTypeInArgsException(str(type(x))) - new_args.append(x) - - return new_args - - -def quote_arg_win32(arg): - fix_type = lambda x: _get_str_type_compatible(arg, x) - - # See if we need to quote at all - empty strings need quoting, as do strings - # with whitespace or quotes in them. Backslashes do not need quoting. - if arg and not set(arg).intersection(fix_type(' "\t\n\v')): - return arg - - # Per https://docs.microsoft.com/en-us/windows/desktop/api/shellapi/nf-shellapi-commandlinetoargvw, - # the standard way to interpret arguments in double quotes is as follows: - # - # 2N backslashes followed by a quotation mark produce N backslashes followed by - # begin/end quote. This does not become part of the parsed argument, but toggles - # the "in quotes" mode. - # - # 2N+1 backslashes followed by a quotation mark again produce N backslashes followed - # by a quotation mark literal ("). This does not toggle the "in quotes" mode. - # - # N backslashes not followed by a quotation mark simply produce N backslashes. - # - # This code needs to do the reverse transformation, thus: - # - # N backslashes followed by " produce 2N+1 backslashes followed by " - # - # N backslashes at the end (i.e. where the closing " goes) produce 2N backslashes. - # - # N backslashes in any other position remain as is. - - arg = re.sub(fix_type(r'(\\*)\"'), fix_type(r'\1\1\\"'), arg) - arg = re.sub(fix_type(r'(\\*)$'), fix_type(r'\1\1'), arg) - return fix_type('"') + arg + fix_type('"') - - -def quote_args(args): - if sys.platform == "win32": - return list(map(quote_arg_win32, args)) - else: - return args - - -def patch_args(args, is_exec=False): - ''' - :param list args: - Arguments to patch. - - :param bool is_exec: - If it's an exec, the current process will be replaced (this means we have - to keep the same ppid). 
- ''' - try: - pydev_log.debug("Patching args: %s", args) - original_args = args - try: - unquoted_args = remove_quotes_from_args(args) - except InvalidTypeInArgsException as e: - pydev_log.info('Unable to monkey-patch subprocess arguments because a type found in the args is invalid: %s', e) - return original_args - - # Internally we should reference original_args (if we want to return them) or unquoted_args - # to add to the list which will be then quoted in the end. - del args - - from pydevd import SetupHolder - if not unquoted_args: - return original_args - - if not is_python(unquoted_args[0]): - pydev_log.debug("Process is not python, returning.") - return original_args - - # Note: we create a copy as string to help with analyzing the arguments, but - # the final list should have items from the unquoted_args as they were initially. - args_as_str = _get_str_type_compatible('', unquoted_args) - - params_with_value_in_separate_arg = ( - '--check-hash-based-pycs', - '--jit' # pypy option - ) - - # All short switches may be combined together. The ones below require a value and the - # value itself may be embedded in the arg. - # - # i.e.: Python accepts things as: - # - # python -OQold -qmtest - # - # Which is the same as: - # - # python -O -Q old -q -m test - # - # or even: - # - # python -OQold "-vcimport sys;print(sys)" - # - # Which is the same as: - # - # python -O -Q old -v -c "import sys;print(sys)" - - params_with_combinable_arg = set(('W', 'X', 'Q', 'c', 'm')) - - module_name = None - before_module_flag = '' - module_name_i_start = -1 - module_name_i_end = -1 - - code = None - code_i = -1 - code_i_end = -1 - code_flag = '' - - filename = None - filename_i = -1 - - ignore_next = True # start ignoring the first (the first entry is the python executable) - for i, arg_as_str in enumerate(args_as_str): - if ignore_next: - ignore_next = False - continue - - if arg_as_str.startswith('-'): - if arg_as_str == '-': - # Contents will be read from the stdin. This is not currently handled. - pydev_log.debug('Unable to fix arguments to attach debugger on subprocess when reading from stdin ("python ... -").') - return original_args - - if arg_as_str.startswith(params_with_value_in_separate_arg): - if arg_as_str in params_with_value_in_separate_arg: - ignore_next = True - continue - - break_out = False - for j, c in enumerate(arg_as_str): - - # i.e.: Python supports -X faulthandler as well as -Xfaulthandler - # (in one case we have to ignore the next and in the other we don't - # have to ignore it). 
- if c in params_with_combinable_arg: - remainder = arg_as_str[j + 1:] - if not remainder: - ignore_next = True - - if c == 'm': - # i.e.: Something as - # python -qm test - # python -m test - # python -qmtest - before_module_flag = arg_as_str[:j] # before_module_flag would then be "-q" - if before_module_flag == '-': - before_module_flag = '' - module_name_i_start = i - if not remainder: - module_name = unquoted_args[i + 1] - module_name_i_end = i + 1 - else: - # i.e.: python -qmtest should provide 'test' as the module_name - module_name = unquoted_args[i][j + 1:] - module_name_i_end = module_name_i_start - break_out = True - break - - elif c == 'c': - # i.e.: Something as - # python -qc "import sys" - # python -c "import sys" - # python "-qcimport sys" - code_flag = arg_as_str[:j + 1] # code_flag would then be "-qc" - - if not remainder: - # arg_as_str is something as "-qc", "import sys" - code = unquoted_args[i + 1] - code_i_end = i + 2 - else: - # if arg_as_str is something as "-qcimport sys" - code = remainder # code would be "import sys" - code_i_end = i + 1 - code_i = i - break_out = True - break - - else: - break - - if break_out: - break - - else: - # It doesn't start with '-' and we didn't ignore this entry: - # this means that this is the file to be executed. - filename = unquoted_args[i] - - # Note that the filename is not validated here. - # There are cases where even a .exe is valid (xonsh.exe): - # https://github.com/microsoft/debugpy/issues/945 - # So, we should support whatever runpy.run_path - # supports in this case. - - filename_i = i - - if _is_managed_arg(filename): # no need to add pydevd twice - pydev_log.debug('Skipped monkey-patching as pydevd.py is in args already.') - return original_args - - break - else: - # We didn't find the filename (something is unexpected). - pydev_log.debug('Unable to fix arguments to attach debugger on subprocess (filename not found).') - return original_args - - if code_i != -1: - host, port = _get_host_port() - - if port is not None: - new_args = [] - new_args.extend(unquoted_args[:code_i]) - new_args.append(code_flag) - new_args.append(_get_python_c_args(host, port, code, unquoted_args, SetupHolder.setup)) - new_args.extend(unquoted_args[code_i_end:]) - - return quote_args(new_args) - - first_non_vm_index = max(filename_i, module_name_i_start) - if first_non_vm_index == -1: - pydev_log.debug('Unable to fix arguments to attach debugger on subprocess (could not resolve filename nor module name).') - return original_args - - # Original args should be something as: - # ['X:\\pysrc\\pydevd.py', '--multiprocess', '--print-in-debugger-startup', - # '--vm_type', 'python', '--client', '127.0.0.1', '--port', '56352', '--file', 'x:\\snippet1.py'] - from _pydevd_bundle.pydevd_command_line_handling import setup_to_argv - new_args = [] - new_args.extend(unquoted_args[:first_non_vm_index]) - if before_module_flag: - new_args.append(before_module_flag) - - add_module_at = len(new_args) + 1 - - new_args.extend(setup_to_argv( - _get_setup_updated_with_protocol_and_ppid(SetupHolder.setup, is_exec=is_exec), - skip_names=set(('module', 'cmd-line')) - )) - new_args.append('--file') - - if module_name is not None: - assert module_name_i_start != -1 - assert module_name_i_end != -1 - # Always after 'pydevd' (i.e.: pydevd "--module" --multiprocess ...) 
- new_args.insert(add_module_at, '--module') - new_args.append(module_name) - new_args.extend(unquoted_args[module_name_i_end + 1:]) - - elif filename is not None: - assert filename_i != -1 - new_args.append(filename) - new_args.extend(unquoted_args[filename_i + 1:]) - - else: - raise AssertionError('Internal error (unexpected condition)') - - return quote_args(new_args) - except: - pydev_log.exception('Error patching args (debugger not attached to subprocess).') - return original_args - - -def str_to_args_windows(args): - # See https://docs.microsoft.com/en-us/cpp/c-language/parsing-c-command-line-arguments. - # - # Implemetation ported from DebugPlugin.parseArgumentsWindows: - # https://github.com/eclipse/eclipse.platform.debug/blob/master/org.eclipse.debug.core/core/org/eclipse/debug/core/DebugPlugin.java - - result = [] - - DEFAULT = 0 - ARG = 1 - IN_DOUBLE_QUOTE = 2 - - state = DEFAULT - backslashes = 0 - buf = '' - - args_len = len(args) - for i in range(args_len): - ch = args[i] - if (ch == '\\'): - backslashes += 1 - continue - elif (backslashes != 0): - if ch == '"': - while backslashes >= 2: - backslashes -= 2 - buf += '\\' - if (backslashes == 1): - if (state == DEFAULT): - state = ARG - - buf += '"' - backslashes = 0 - continue - # else fall through to switch - else: - # false alarm, treat passed backslashes literally... - if (state == DEFAULT): - state = ARG - - while backslashes > 0: - backslashes -= 1 - buf += '\\' - # fall through to switch - if ch in (' ', '\t'): - if (state == DEFAULT): - # skip - continue - elif (state == ARG): - state = DEFAULT - result.append(buf) - buf = '' - continue - - if state in (DEFAULT, ARG): - if ch == '"': - state = IN_DOUBLE_QUOTE - else: - state = ARG - buf += ch - - elif state == IN_DOUBLE_QUOTE: - if ch == '"': - if (i + 1 < args_len and args[i + 1] == '"'): - # Undocumented feature in Windows: - # Two consecutive double quotes inside a double-quoted argument are interpreted as - # a single double quote. - buf += '"' - i += 1 - else: - state = ARG - else: - buf += ch - - else: - raise RuntimeError('Illegal condition') - - if len(buf) > 0 or state != DEFAULT: - result.append(buf) - - return result - - -def patch_arg_str_win(arg_str): - args = str_to_args_windows(arg_str) - # Fix https://youtrack.jetbrains.com/issue/PY-9767 (args may be empty) - if not args or not is_python(args[0]): - return arg_str - arg_str = ' '.join(patch_args(args)) - pydev_log.debug("New args: %s", arg_str) - return arg_str - - -def monkey_patch_module(module, funcname, create_func): - if hasattr(module, funcname): - original_name = 'original_' + funcname - if not hasattr(module, original_name): - setattr(module, original_name, getattr(module, funcname)) - setattr(module, funcname, create_func(original_name)) - - -def monkey_patch_os(funcname, create_func): - monkey_patch_module(os, funcname, create_func) - - -def warn_multiproc(): - pass # TODO: Provide logging as messages to the IDE. - # pydev_log.error_once( - # "pydev debugger: New process is launching (breakpoints won't work in the new process).\n" - # "pydev debugger: To debug that process please enable 'Attach to subprocess automatically while debugging?' option in the debugger settings.\n") - # - - -def create_warn_multiproc(original_name): - - def new_warn_multiproc(*args, **kwargs): - import os - - warn_multiproc() - - return getattr(os, original_name)(*args, **kwargs) - - return new_warn_multiproc - - -def create_execl(original_name): - - def new_execl(path, *args): - """ - os.execl(path, arg0, arg1, ...) 
- os.execle(path, arg0, arg1, ..., env) - os.execlp(file, arg0, arg1, ...) - os.execlpe(file, arg0, arg1, ..., env) - """ - if _get_apply_arg_patching(): - args = patch_args(args, is_exec=True) - send_process_created_message() - send_process_about_to_be_replaced() - - return getattr(os, original_name)(path, *args) - - return new_execl - - -def create_execv(original_name): - - def new_execv(path, args): - """ - os.execv(path, args) - os.execvp(file, args) - """ - if _get_apply_arg_patching(): - args = patch_args(args, is_exec=True) - send_process_created_message() - send_process_about_to_be_replaced() - - return getattr(os, original_name)(path, args) - - return new_execv - - -def create_execve(original_name): - """ - os.execve(path, args, env) - os.execvpe(file, args, env) - """ - - def new_execve(path, args, env): - if _get_apply_arg_patching(): - args = patch_args(args, is_exec=True) - send_process_created_message() - send_process_about_to_be_replaced() - - return getattr(os, original_name)(path, args, env) - - return new_execve - - -def create_spawnl(original_name): - - def new_spawnl(mode, path, *args): - """ - os.spawnl(mode, path, arg0, arg1, ...) - os.spawnlp(mode, file, arg0, arg1, ...) - """ - if _get_apply_arg_patching(): - args = patch_args(args) - send_process_created_message() - - return getattr(os, original_name)(mode, path, *args) - - return new_spawnl - - -def create_spawnv(original_name): - - def new_spawnv(mode, path, args): - """ - os.spawnv(mode, path, args) - os.spawnvp(mode, file, args) - """ - if _get_apply_arg_patching(): - args = patch_args(args) - send_process_created_message() - - return getattr(os, original_name)(mode, path, args) - - return new_spawnv - - -def create_spawnve(original_name): - """ - os.spawnve(mode, path, args, env) - os.spawnvpe(mode, file, args, env) - """ - - def new_spawnve(mode, path, args, env): - if _get_apply_arg_patching(): - args = patch_args(args) - send_process_created_message() - - return getattr(os, original_name)(mode, path, args, env) - - return new_spawnve - - -def create_posix_spawn(original_name): - """ - os.posix_spawn(executable, args, env, **kwargs) - """ - - def new_posix_spawn(executable, args, env, **kwargs): - if _get_apply_arg_patching(): - args = patch_args(args) - send_process_created_message() - - return getattr(os, original_name)(executable, args, env, **kwargs) - - return new_posix_spawn - - -def create_fork_exec(original_name): - """ - _posixsubprocess.fork_exec(args, executable_list, close_fds, ... (13 more)) - """ - - def new_fork_exec(args, *other_args): - import _posixsubprocess # @UnresolvedImport - if _get_apply_arg_patching(): - args = patch_args(args) - send_process_created_message() - - return getattr(_posixsubprocess, original_name)(args, *other_args) - - return new_fork_exec - - -def create_warn_fork_exec(original_name): - """ - _posixsubprocess.fork_exec(args, executable_list, close_fds, ... (13 more)) - """ - - def new_warn_fork_exec(*args): - try: - import _posixsubprocess - warn_multiproc() - return getattr(_posixsubprocess, original_name)(*args) - except: - pass - - return new_warn_fork_exec - - -def create_subprocess_fork_exec(original_name): - """ - subprocess._fork_exec(args, executable_list, close_fds, ... 
(13 more)) - """ - - def new_fork_exec(args, *other_args): - import subprocess - if _get_apply_arg_patching(): - args = patch_args(args) - send_process_created_message() - - return getattr(subprocess, original_name)(args, *other_args) - - return new_fork_exec - - -def create_subprocess_warn_fork_exec(original_name): - """ - subprocess._fork_exec(args, executable_list, close_fds, ... (13 more)) - """ - - def new_warn_fork_exec(*args): - try: - import subprocess - warn_multiproc() - return getattr(subprocess, original_name)(*args) - except: - pass - - return new_warn_fork_exec - - -def create_CreateProcess(original_name): - """ - CreateProcess(*args, **kwargs) - """ - - def new_CreateProcess(app_name, cmd_line, *args): - try: - import _subprocess - except ImportError: - import _winapi as _subprocess - - if _get_apply_arg_patching(): - cmd_line = patch_arg_str_win(cmd_line) - send_process_created_message() - - return getattr(_subprocess, original_name)(app_name, cmd_line, *args) - - return new_CreateProcess - - -def create_CreateProcessWarnMultiproc(original_name): - """ - CreateProcess(*args, **kwargs) - """ - - def new_CreateProcess(*args): - try: - import _subprocess - except ImportError: - import _winapi as _subprocess - warn_multiproc() - return getattr(_subprocess, original_name)(*args) - - return new_CreateProcess - - -def create_fork(original_name): - - def new_fork(): - # A simple fork will result in a new python process - is_new_python_process = True - frame = sys._getframe() - - apply_arg_patch = _get_apply_arg_patching() - - is_subprocess_fork = False - while frame is not None: - if frame.f_code.co_name == '_execute_child' and 'subprocess' in frame.f_code.co_filename: - is_subprocess_fork = True - # If we're actually in subprocess.Popen creating a child, it may - # result in something which is not a Python process, (so, we - # don't want to connect with it in the forked version). - executable = frame.f_locals.get('executable') - if executable is not None: - is_new_python_process = False - if is_python(executable): - is_new_python_process = True - break - - frame = frame.f_back - frame = None # Just make sure we don't hold on to it. - - protocol = pydevd_constants.get_protocol() - debug_mode = PydevdCustomization.DEBUG_MODE - - child_process = getattr(os, original_name)() # fork - if not child_process: - if is_new_python_process: - PydevdCustomization.DEFAULT_PROTOCOL = protocol - PydevdCustomization.DEBUG_MODE = debug_mode - _on_forked_process(setup_tracing=apply_arg_patch and not is_subprocess_fork) - else: - set_global_debugger(None) - else: - if is_new_python_process: - send_process_created_message() - return child_process - - return new_fork - - -def send_process_created_message(): - py_db = get_global_debugger() - if py_db is not None: - py_db.send_process_created_message() - - -def send_process_about_to_be_replaced(): - py_db = get_global_debugger() - if py_db is not None: - py_db.send_process_about_to_be_replaced() - - -def patch_new_process_functions(): - # os.execl(path, arg0, arg1, ...) - # os.execle(path, arg0, arg1, ..., env) - # os.execlp(file, arg0, arg1, ...) 
- # os.execlpe(file, arg0, arg1, ..., env) - # os.execv(path, args) - # os.execve(path, args, env) - # os.execvp(file, args) - # os.execvpe(file, args, env) - monkey_patch_os('execl', create_execl) - monkey_patch_os('execle', create_execl) - monkey_patch_os('execlp', create_execl) - monkey_patch_os('execlpe', create_execl) - monkey_patch_os('execv', create_execv) - monkey_patch_os('execve', create_execve) - monkey_patch_os('execvp', create_execv) - monkey_patch_os('execvpe', create_execve) - - # os.spawnl(mode, path, ...) - # os.spawnle(mode, path, ..., env) - # os.spawnlp(mode, file, ...) - # os.spawnlpe(mode, file, ..., env) - # os.spawnv(mode, path, args) - # os.spawnve(mode, path, args, env) - # os.spawnvp(mode, file, args) - # os.spawnvpe(mode, file, args, env) - - monkey_patch_os('spawnl', create_spawnl) - monkey_patch_os('spawnle', create_spawnl) - monkey_patch_os('spawnlp', create_spawnl) - monkey_patch_os('spawnlpe', create_spawnl) - monkey_patch_os('spawnv', create_spawnv) - monkey_patch_os('spawnve', create_spawnve) - monkey_patch_os('spawnvp', create_spawnv) - monkey_patch_os('spawnvpe', create_spawnve) - monkey_patch_os('posix_spawn', create_posix_spawn) - - if not IS_JYTHON: - if not IS_WINDOWS: - monkey_patch_os('fork', create_fork) - try: - import _posixsubprocess - monkey_patch_module(_posixsubprocess, 'fork_exec', create_fork_exec) - except ImportError: - pass - - try: - import subprocess - monkey_patch_module(subprocess, '_fork_exec', create_subprocess_fork_exec) - except AttributeError: - pass - else: - # Windows - try: - import _subprocess - except ImportError: - import _winapi as _subprocess - monkey_patch_module(_subprocess, 'CreateProcess', create_CreateProcess) - - -def patch_new_process_functions_with_warning(): - monkey_patch_os('execl', create_warn_multiproc) - monkey_patch_os('execle', create_warn_multiproc) - monkey_patch_os('execlp', create_warn_multiproc) - monkey_patch_os('execlpe', create_warn_multiproc) - monkey_patch_os('execv', create_warn_multiproc) - monkey_patch_os('execve', create_warn_multiproc) - monkey_patch_os('execvp', create_warn_multiproc) - monkey_patch_os('execvpe', create_warn_multiproc) - monkey_patch_os('spawnl', create_warn_multiproc) - monkey_patch_os('spawnle', create_warn_multiproc) - monkey_patch_os('spawnlp', create_warn_multiproc) - monkey_patch_os('spawnlpe', create_warn_multiproc) - monkey_patch_os('spawnv', create_warn_multiproc) - monkey_patch_os('spawnve', create_warn_multiproc) - monkey_patch_os('spawnvp', create_warn_multiproc) - monkey_patch_os('spawnvpe', create_warn_multiproc) - monkey_patch_os('posix_spawn', create_warn_multiproc) - - if not IS_JYTHON: - if not IS_WINDOWS: - monkey_patch_os('fork', create_warn_multiproc) - try: - import _posixsubprocess - monkey_patch_module(_posixsubprocess, 'fork_exec', create_warn_fork_exec) - except ImportError: - pass - - try: - import subprocess - monkey_patch_module(subprocess, '_fork_exec', create_subprocess_warn_fork_exec) - except AttributeError: - pass - - else: - # Windows - try: - import _subprocess - except ImportError: - import _winapi as _subprocess - monkey_patch_module(_subprocess, 'CreateProcess', create_CreateProcessWarnMultiproc) - - -class _NewThreadStartupWithTrace: - - def __init__(self, original_func, args, kwargs): - self.original_func = original_func - self.args = args - self.kwargs = kwargs - - def __call__(self): - # We monkey-patch the thread creation so that this function is called in the new thread. 
At this point - # we notify of its creation and start tracing it. - py_db = get_global_debugger() - - thread_id = None - if py_db is not None: - # Note: if this is a thread from threading.py, we're too early in the boostrap process (because we mocked - # the start_new_thread internal machinery and thread._bootstrap has not finished), so, the code below needs - # to make sure that we use the current thread bound to the original function and not use - # threading.current_thread() unless we're sure it's a dummy thread. - t = getattr(self.original_func, '__self__', getattr(self.original_func, 'im_self', None)) - if not isinstance(t, threading.Thread): - # This is not a threading.Thread but a Dummy thread (so, get it as a dummy thread using - # currentThread). - t = threading.current_thread() - - if not getattr(t, 'is_pydev_daemon_thread', False): - thread_id = get_current_thread_id(t) - py_db.notify_thread_created(thread_id, t) - _on_set_trace_for_new_thread(py_db) - - if getattr(py_db, 'thread_analyser', None) is not None: - try: - from _pydevd_bundle.pydevd_concurrency_analyser.pydevd_concurrency_logger import log_new_thread - log_new_thread(py_db, t) - except: - sys.stderr.write("Failed to detect new thread for visualization") - try: - ret = self.original_func(*self.args, **self.kwargs) - finally: - if thread_id is not None: - if py_db is not None: - # At thread shutdown we only have pydevd-related code running (which shouldn't - # be tracked). - py_db.disable_tracing() - py_db.notify_thread_not_alive(thread_id) - - return ret - - -class _NewThreadStartupWithoutTrace: - - def __init__(self, original_func, args, kwargs): - self.original_func = original_func - self.args = args - self.kwargs = kwargs - - def __call__(self): - return self.original_func(*self.args, **self.kwargs) - - -_UseNewThreadStartup = _NewThreadStartupWithTrace - - -def _get_threading_modules_to_patch(): - threading_modules_to_patch = [] - - try: - import thread as _thread - except: - import _thread - threading_modules_to_patch.append(_thread) - threading_modules_to_patch.append(threading) - - return threading_modules_to_patch - - -threading_modules_to_patch = _get_threading_modules_to_patch() - - -def patch_thread_module(thread_module): - - if getattr(thread_module, '_original_start_new_thread', None) is None: - if thread_module is threading: - if not hasattr(thread_module, '_start_new_thread'): - return # Jython doesn't have it. - _original_start_new_thread = thread_module._original_start_new_thread = thread_module._start_new_thread - else: - _original_start_new_thread = thread_module._original_start_new_thread = thread_module.start_new_thread - else: - _original_start_new_thread = thread_module._original_start_new_thread - - class ClassWithPydevStartNewThread: - - def pydev_start_new_thread(self, function, args=(), kwargs={}): - ''' - We need to replace the original thread_module.start_new_thread with this function so that threads started - through it and not through the threading module are properly traced. 
- ''' - return _original_start_new_thread(_UseNewThreadStartup(function, args, kwargs), ()) - - # This is a hack for the situation where the thread_module.start_new_thread is declared inside a class, such as the one below - # class F(object): - # start_new_thread = thread_module.start_new_thread - # - # def start_it(self): - # self.start_new_thread(self.function, args, kwargs) - # So, if it's an already bound method, calling self.start_new_thread won't really receive a different 'self' -- it - # does work in the default case because in builtins self isn't passed either. - pydev_start_new_thread = ClassWithPydevStartNewThread().pydev_start_new_thread - - try: - # We need to replace the original thread_module.start_new_thread with this function so that threads started through - # it and not through the threading module are properly traced. - if thread_module is threading: - thread_module._start_new_thread = pydev_start_new_thread - else: - thread_module.start_new_thread = pydev_start_new_thread - thread_module.start_new = pydev_start_new_thread - except: - pass - - -def patch_thread_modules(): - for t in threading_modules_to_patch: - patch_thread_module(t) - - -def undo_patch_thread_modules(): - for t in threading_modules_to_patch: - try: - t.start_new_thread = t._original_start_new_thread - except: - pass - - try: - t.start_new = t._original_start_new_thread - except: - pass - - try: - t._start_new_thread = t._original_start_new_thread - except: - pass - - -def disable_trace_thread_modules(): - ''' - Can be used to temporarily stop tracing threads created with thread.start_new_thread. - ''' - global _UseNewThreadStartup - _UseNewThreadStartup = _NewThreadStartupWithoutTrace - - -def enable_trace_thread_modules(): - ''' - Can be used to start tracing threads created with thread.start_new_thread again. - ''' - global _UseNewThreadStartup - _UseNewThreadStartup = _NewThreadStartupWithTrace - - -def get_original_start_new_thread(threading_module): - try: - return threading_module._original_start_new_thread - except: - return threading_module.start_new_thread diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_dont_trace_files.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_dont_trace_files.py deleted file mode 100644 index d37b1fc53c28d4dd7373fd30f0aa1128345ade7c..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_dont_trace_files.py +++ /dev/null @@ -1,153 +0,0 @@ -# Important: Autogenerated file. - -# DO NOT edit manually! -# DO NOT edit manually! 
- -LIB_FILE = 1 -PYDEV_FILE = 2 - -DONT_TRACE_DIRS = { - '_pydev_bundle': PYDEV_FILE, - '_pydev_runfiles': PYDEV_FILE, - '_pydevd_bundle': PYDEV_FILE, - '_pydevd_frame_eval': PYDEV_FILE, - 'pydev_ipython': PYDEV_FILE, - 'pydev_sitecustomize': PYDEV_FILE, - 'pydevd_attach_to_process': PYDEV_FILE, - 'pydevd_concurrency_analyser': PYDEV_FILE, - 'pydevd_plugins': PYDEV_FILE, - 'test_pydevd_reload': PYDEV_FILE, -} - -DONT_TRACE = { - # commonly used things from the stdlib that we don't want to trace - 'Queue.py':LIB_FILE, - 'queue.py':LIB_FILE, - 'socket.py':LIB_FILE, - 'weakref.py':LIB_FILE, - '_weakrefset.py':LIB_FILE, - 'linecache.py':LIB_FILE, - 'threading.py':LIB_FILE, - 'dis.py':LIB_FILE, - - # things from pydev that we don't want to trace - '__main__pydevd_gen_debug_adapter_protocol.py': PYDEV_FILE, - '_pydev_calltip_util.py': PYDEV_FILE, - '_pydev_completer.py': PYDEV_FILE, - '_pydev_execfile.py': PYDEV_FILE, - '_pydev_filesystem_encoding.py': PYDEV_FILE, - '_pydev_getopt.py': PYDEV_FILE, - '_pydev_imports_tipper.py': PYDEV_FILE, - '_pydev_jy_imports_tipper.py': PYDEV_FILE, - '_pydev_log.py': PYDEV_FILE, - '_pydev_saved_modules.py': PYDEV_FILE, - '_pydev_sys_patch.py': PYDEV_FILE, - '_pydev_tipper_common.py': PYDEV_FILE, - 'django_debug.py': PYDEV_FILE, - 'jinja2_debug.py': PYDEV_FILE, - 'pycompletionserver.py': PYDEV_FILE, - 'pydev_app_engine_debug_startup.py': PYDEV_FILE, - 'pydev_console_utils.py': PYDEV_FILE, - 'pydev_import_hook.py': PYDEV_FILE, - 'pydev_imports.py': PYDEV_FILE, - 'pydev_ipython_console.py': PYDEV_FILE, - 'pydev_ipython_console_011.py': PYDEV_FILE, - 'pydev_is_thread_alive.py': PYDEV_FILE, - 'pydev_localhost.py': PYDEV_FILE, - 'pydev_log.py': PYDEV_FILE, - 'pydev_monkey.py': PYDEV_FILE, - 'pydev_monkey_qt.py': PYDEV_FILE, - 'pydev_override.py': PYDEV_FILE, - 'pydev_run_in_console.py': PYDEV_FILE, - 'pydev_runfiles.py': PYDEV_FILE, - 'pydev_runfiles_coverage.py': PYDEV_FILE, - 'pydev_runfiles_nose.py': PYDEV_FILE, - 'pydev_runfiles_parallel.py': PYDEV_FILE, - 'pydev_runfiles_parallel_client.py': PYDEV_FILE, - 'pydev_runfiles_pytest2.py': PYDEV_FILE, - 'pydev_runfiles_unittest.py': PYDEV_FILE, - 'pydev_runfiles_xml_rpc.py': PYDEV_FILE, - 'pydev_umd.py': PYDEV_FILE, - 'pydev_versioncheck.py': PYDEV_FILE, - 'pydevconsole.py': PYDEV_FILE, - 'pydevconsole_code.py': PYDEV_FILE, - 'pydevd.py': PYDEV_FILE, - 'pydevd_additional_thread_info.py': PYDEV_FILE, - 'pydevd_additional_thread_info_regular.py': PYDEV_FILE, - 'pydevd_api.py': PYDEV_FILE, - 'pydevd_base_schema.py': PYDEV_FILE, - 'pydevd_breakpoints.py': PYDEV_FILE, - 'pydevd_bytecode_utils.py': PYDEV_FILE, - 'pydevd_code_to_source.py': PYDEV_FILE, - 'pydevd_collect_bytecode_info.py': PYDEV_FILE, - 'pydevd_comm.py': PYDEV_FILE, - 'pydevd_comm_constants.py': PYDEV_FILE, - 'pydevd_command_line_handling.py': PYDEV_FILE, - 'pydevd_concurrency_logger.py': PYDEV_FILE, - 'pydevd_console.py': PYDEV_FILE, - 'pydevd_constants.py': PYDEV_FILE, - 'pydevd_custom_frames.py': PYDEV_FILE, - 'pydevd_cython_wrapper.py': PYDEV_FILE, - 'pydevd_daemon_thread.py': PYDEV_FILE, - 'pydevd_defaults.py': PYDEV_FILE, - 'pydevd_dont_trace.py': PYDEV_FILE, - 'pydevd_dont_trace_files.py': PYDEV_FILE, - 'pydevd_exec2.py': PYDEV_FILE, - 'pydevd_extension_api.py': PYDEV_FILE, - 'pydevd_extension_utils.py': PYDEV_FILE, - 'pydevd_file_utils.py': PYDEV_FILE, - 'pydevd_filtering.py': PYDEV_FILE, - 'pydevd_frame.py': PYDEV_FILE, - 'pydevd_frame_eval_cython_wrapper.py': PYDEV_FILE, - 'pydevd_frame_eval_main.py': PYDEV_FILE, - 'pydevd_frame_tracing.py': 
PYDEV_FILE, - 'pydevd_frame_utils.py': PYDEV_FILE, - 'pydevd_gevent_integration.py': PYDEV_FILE, - 'pydevd_helpers.py': PYDEV_FILE, - 'pydevd_import_class.py': PYDEV_FILE, - 'pydevd_io.py': PYDEV_FILE, - 'pydevd_json_debug_options.py': PYDEV_FILE, - 'pydevd_line_validation.py': PYDEV_FILE, - 'pydevd_modify_bytecode.py': PYDEV_FILE, - 'pydevd_net_command.py': PYDEV_FILE, - 'pydevd_net_command_factory_json.py': PYDEV_FILE, - 'pydevd_net_command_factory_xml.py': PYDEV_FILE, - 'pydevd_plugin_numpy_types.py': PYDEV_FILE, - 'pydevd_plugin_pandas_types.py': PYDEV_FILE, - 'pydevd_plugin_utils.py': PYDEV_FILE, - 'pydevd_plugins_django_form_str.py': PYDEV_FILE, - 'pydevd_process_net_command.py': PYDEV_FILE, - 'pydevd_process_net_command_json.py': PYDEV_FILE, - 'pydevd_referrers.py': PYDEV_FILE, - 'pydevd_reload.py': PYDEV_FILE, - 'pydevd_resolver.py': PYDEV_FILE, - 'pydevd_runpy.py': PYDEV_FILE, - 'pydevd_safe_repr.py': PYDEV_FILE, - 'pydevd_save_locals.py': PYDEV_FILE, - 'pydevd_schema.py': PYDEV_FILE, - 'pydevd_schema_log.py': PYDEV_FILE, - 'pydevd_signature.py': PYDEV_FILE, - 'pydevd_source_mapping.py': PYDEV_FILE, - 'pydevd_stackless.py': PYDEV_FILE, - 'pydevd_suspended_frames.py': PYDEV_FILE, - 'pydevd_thread_lifecycle.py': PYDEV_FILE, - 'pydevd_thread_wrappers.py': PYDEV_FILE, - 'pydevd_timeout.py': PYDEV_FILE, - 'pydevd_trace_api.py': PYDEV_FILE, - 'pydevd_trace_dispatch.py': PYDEV_FILE, - 'pydevd_trace_dispatch_regular.py': PYDEV_FILE, - 'pydevd_traceproperty.py': PYDEV_FILE, - 'pydevd_tracing.py': PYDEV_FILE, - 'pydevd_utils.py': PYDEV_FILE, - 'pydevd_vars.py': PYDEV_FILE, - 'pydevd_vm_type.py': PYDEV_FILE, - 'pydevd_xml.py': PYDEV_FILE, -} - -# if we try to trace io.py it seems it can get halted (see http://bugs.python.org/issue4716) -DONT_TRACE['io.py'] = LIB_FILE - -# Don't trace common encodings too -DONT_TRACE['cp1252.py'] = LIB_FILE -DONT_TRACE['utf_8.py'] = LIB_FILE -DONT_TRACE['codecs.py'] = LIB_FILE diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/backbones/uniformer.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/backbones/uniformer.py deleted file mode 100644 index 0c4bb88e4c928540cca9ab609988b916520f5b7a..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/backbones/uniformer.py +++ /dev/null @@ -1,422 +0,0 @@ -# -------------------------------------------------------- -# UniFormer -# Copyright (c) 2022 SenseTime X-Lab -# Licensed under The MIT License [see LICENSE for details] -# Written by Kunchang Li -# -------------------------------------------------------- - -from collections import OrderedDict -import math - -from functools import partial -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -import numpy as np -from timm.models.layers import DropPath, to_2tuple, trunc_normal_ - -from annotator.uniformer.mmcv_custom import load_checkpoint -from annotator.uniformer.mmseg.utils import get_root_logger -from ..builder import BACKBONES - - -class Mlp(nn.Module): - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = 
self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -class CMlp(nn.Module): - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Conv2d(in_features, hidden_features, 1) - self.act = act_layer() - self.fc2 = nn.Conv2d(hidden_features, out_features, 1) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -class CBlock(nn.Module): - def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim) - self.norm1 = nn.BatchNorm2d(dim) - self.conv1 = nn.Conv2d(dim, dim, 1) - self.conv2 = nn.Conv2d(dim, dim, 1) - self.attn = nn.Conv2d(dim, dim, 5, padding=2, groups=dim) - # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.norm2 = nn.BatchNorm2d(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = CMlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - def forward(self, x): - x = x + self.pos_embed(x) - x = x + self.drop_path(self.conv2(self.attn(self.conv1(self.norm1(x))))) - x = x + self.drop_path(self.mlp(self.norm2(x))) - return x - - -class Attention(nn.Module): - def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0.): - super().__init__() - self.num_heads = num_heads - head_dim = dim // num_heads - # NOTE scale factor was wrong in my original version, can set manually to be compat with prev weights - self.scale = qk_scale or head_dim ** -0.5 - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - def forward(self, x): - B, N, C = x.shape - qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - attn = (q @ k.transpose(-2, -1)) * self.scale - attn = attn.softmax(dim=-1) - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class SABlock(nn.Module): - def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim) - self.norm1 = norm_layer(dim) - self.attn = Attention( - dim, - num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale, - attn_drop=attn_drop, proj_drop=drop) - # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here - self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - def forward(self, x): - x = x + self.pos_embed(x) - B, N, H, W = x.shape - x = x.flatten(2).transpose(1, 2) - x = x + self.drop_path(self.attn(self.norm1(x))) - x = x + self.drop_path(self.mlp(self.norm2(x))) - x = x.transpose(1, 2).reshape(B, N, H, W) - return x - - -def window_partition(x, window_size): - """ - Args: - x: (B, H, W, C) - window_size (int): window size - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - - -def window_reverse(windows, window_size, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - window_size (int): Window size - H (int): Height of image - W (int): Width of image - Returns: - x: (B, H, W, C) - """ - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class SABlock_Windows(nn.Module): - def __init__(self, dim, num_heads, window_size=14, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.window_size=window_size - self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim) - self.norm1 = norm_layer(dim) - self.attn = Attention( - dim, - num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale, - attn_drop=attn_drop, proj_drop=drop) - # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here - self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - def forward(self, x): - x = x + self.pos_embed(x) - x = x.permute(0, 2, 3, 1) - B, H, W, C = x.shape - shortcut = x - x = self.norm1(x) - - pad_l = pad_t = 0 - pad_r = (self.window_size - W % self.window_size) % self.window_size - pad_b = (self.window_size - H % self.window_size) % self.window_size - x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b)) - _, Hp, Wp, _ = x.shape - - x_windows = window_partition(x, self.window_size) # nW*B, window_size, window_size, C - x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA - attn_windows = self.attn(x_windows) # nW*B, window_size*window_size, C - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C - - # reverse cyclic shift - if pad_r > 0 or pad_b > 0: - x = x[:, :H, :W, :].contiguous() - - x = shortcut + self.drop_path(x) - x = x + self.drop_path(self.mlp(self.norm2(x))) - x = x.permute(0, 3, 1, 2).reshape(B, C, H, W) - return x - - -class PatchEmbed(nn.Module): - """ Image to Patch Embedding - """ - def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768): - super().__init__() - img_size = to_2tuple(img_size) - patch_size = to_2tuple(patch_size) - num_patches = (img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0]) - self.img_size = img_size - self.patch_size = patch_size - self.num_patches = num_patches - self.norm = nn.LayerNorm(embed_dim) - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) - - def forward(self, x): - B, _, H, W = x.shape - x = self.proj(x) - B, _, H, W = x.shape - x = x.flatten(2).transpose(1, 2) - x = self.norm(x) - x = x.reshape(B, H, W, -1).permute(0, 3, 1, 2).contiguous() - return x - - -@BACKBONES.register_module() -class UniFormer(nn.Module): - """ Vision Transformer - A PyTorch impl of : `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale` - - https://arxiv.org/abs/2010.11929 - """ - def __init__(self, layers=[3, 4, 8, 3], img_size=224, in_chans=3, num_classes=80, embed_dim=[64, 128, 320, 512], - head_dim=64, mlp_ratio=4., qkv_bias=True, qk_scale=None, representation_size=None, - drop_rate=0., attn_drop_rate=0., drop_path_rate=0., norm_layer=partial(nn.LayerNorm, eps=1e-6), - pretrained_path=None, use_checkpoint=False, checkpoint_num=[0, 0, 0, 0], - windows=False, hybrid=False, window_size=14): - """ - Args: - layer (list): number of block in each layer - img_size (int, tuple): input image size - in_chans (int): number of input channels - num_classes (int): number of classes for classification head - embed_dim (int): embedding dimension - head_dim (int): dimension of attention heads - mlp_ratio (int): ratio of mlp hidden dim to embedding dim - qkv_bias (bool): enable bias for qkv if True - qk_scale (float): override default qk scale of head_dim ** -0.5 if set - representation_size (Optional[int]): enable and set representation layer (pre-logits) to this value if set - drop_rate (float): dropout rate - attn_drop_rate (float): attention dropout rate - drop_path_rate (float): stochastic depth rate - norm_layer (nn.Module): normalization layer - pretrained_path (str): path of pretrained model - use_checkpoint (bool): whether use checkpoint - checkpoint_num (list): 
index for using checkpoint in every stage - windows (bool): whether use window MHRA - hybrid (bool): whether use hybrid MHRA - window_size (int): size of window (>14) - """ - super().__init__() - self.num_classes = num_classes - self.use_checkpoint = use_checkpoint - self.checkpoint_num = checkpoint_num - self.windows = windows - print(f'Use Checkpoint: {self.use_checkpoint}') - print(f'Checkpoint Number: {self.checkpoint_num}') - self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models - norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6) - - self.patch_embed1 = PatchEmbed( - img_size=img_size, patch_size=4, in_chans=in_chans, embed_dim=embed_dim[0]) - self.patch_embed2 = PatchEmbed( - img_size=img_size // 4, patch_size=2, in_chans=embed_dim[0], embed_dim=embed_dim[1]) - self.patch_embed3 = PatchEmbed( - img_size=img_size // 8, patch_size=2, in_chans=embed_dim[1], embed_dim=embed_dim[2]) - self.patch_embed4 = PatchEmbed( - img_size=img_size // 16, patch_size=2, in_chans=embed_dim[2], embed_dim=embed_dim[3]) - - self.pos_drop = nn.Dropout(p=drop_rate) - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(layers))] # stochastic depth decay rule - num_heads = [dim // head_dim for dim in embed_dim] - self.blocks1 = nn.ModuleList([ - CBlock( - dim=embed_dim[0], num_heads=num_heads[0], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer) - for i in range(layers[0])]) - self.norm1=norm_layer(embed_dim[0]) - self.blocks2 = nn.ModuleList([ - CBlock( - dim=embed_dim[1], num_heads=num_heads[1], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]], norm_layer=norm_layer) - for i in range(layers[1])]) - self.norm2 = norm_layer(embed_dim[1]) - if self.windows: - print('Use local window for all blocks in stage3') - self.blocks3 = nn.ModuleList([ - SABlock_Windows( - dim=embed_dim[2], num_heads=num_heads[2], window_size=window_size, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer) - for i in range(layers[2])]) - elif hybrid: - print('Use hybrid window for blocks in stage3') - block3 = [] - for i in range(layers[2]): - if (i + 1) % 4 == 0: - block3.append(SABlock( - dim=embed_dim[2], num_heads=num_heads[2], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer)) - else: - block3.append(SABlock_Windows( - dim=embed_dim[2], num_heads=num_heads[2], window_size=window_size, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer)) - self.blocks3 = nn.ModuleList(block3) - else: - print('Use global window for all blocks in stage3') - self.blocks3 = nn.ModuleList([ - SABlock( - dim=embed_dim[2], num_heads=num_heads[2], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer) - for i in range(layers[2])]) - self.norm3 = norm_layer(embed_dim[2]) - self.blocks4 = nn.ModuleList([ - SABlock( - dim=embed_dim[3], num_heads=num_heads[3], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, 
drop_path=dpr[i+layers[0]+layers[1]+layers[2]], norm_layer=norm_layer) - for i in range(layers[3])]) - self.norm4 = norm_layer(embed_dim[3]) - - # Representation layer - if representation_size: - self.num_features = representation_size - self.pre_logits = nn.Sequential(OrderedDict([ - ('fc', nn.Linear(embed_dim, representation_size)), - ('act', nn.Tanh()) - ])) - else: - self.pre_logits = nn.Identity() - - self.apply(self._init_weights) - self.init_weights(pretrained=pretrained_path) - - def init_weights(self, pretrained): - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, map_location='cpu', strict=False, logger=logger) - print(f'Load pretrained model from {pretrained}') - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - @torch.jit.ignore - def no_weight_decay(self): - return {'pos_embed', 'cls_token'} - - def get_classifier(self): - return self.head - - def reset_classifier(self, num_classes, global_pool=''): - self.num_classes = num_classes - self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity() - - def forward_features(self, x): - out = [] - x = self.patch_embed1(x) - x = self.pos_drop(x) - for i, blk in enumerate(self.blocks1): - if self.use_checkpoint and i < self.checkpoint_num[0]: - x = checkpoint.checkpoint(blk, x) - else: - x = blk(x) - x_out = self.norm1(x.permute(0, 2, 3, 1)) - out.append(x_out.permute(0, 3, 1, 2).contiguous()) - x = self.patch_embed2(x) - for i, blk in enumerate(self.blocks2): - if self.use_checkpoint and i < self.checkpoint_num[1]: - x = checkpoint.checkpoint(blk, x) - else: - x = blk(x) - x_out = self.norm2(x.permute(0, 2, 3, 1)) - out.append(x_out.permute(0, 3, 1, 2).contiguous()) - x = self.patch_embed3(x) - for i, blk in enumerate(self.blocks3): - if self.use_checkpoint and i < self.checkpoint_num[2]: - x = checkpoint.checkpoint(blk, x) - else: - x = blk(x) - x_out = self.norm3(x.permute(0, 2, 3, 1)) - out.append(x_out.permute(0, 3, 1, 2).contiguous()) - x = self.patch_embed4(x) - for i, blk in enumerate(self.blocks4): - if self.use_checkpoint and i < self.checkpoint_num[3]: - x = checkpoint.checkpoint(blk, x) - else: - x = blk(x) - x_out = self.norm4(x.permute(0, 2, 3, 1)) - out.append(x_out.permute(0, 3, 1, 2).contiguous()) - return tuple(out) - - def forward(self, x): - x = self.forward_features(x) - return x diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/utils/drop.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/utils/drop.py deleted file mode 100644 index 4520b0ff407d2a95a864086bdbca0065f222aa63..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/utils/drop.py +++ /dev/null @@ -1,31 +0,0 @@ -"""Modified from https://github.com/rwightman/pytorch-image- -models/blob/master/timm/models/layers/drop.py.""" - -import torch -from torch import nn - - -class DropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of - residual blocks). - - Args: - drop_prob (float): Drop rate for paths of model. Dropout rate has - to be between 0 and 1. Default: 0. 
- """ - - def __init__(self, drop_prob=0.): - super(DropPath, self).__init__() - self.drop_prob = drop_prob - self.keep_prob = 1 - drop_prob - - def forward(self, x): - if self.drop_prob == 0. or not self.training: - return x - shape = (x.shape[0], ) + (1, ) * ( - x.ndim - 1) # work with diff dim tensors, not just 2D ConvNets - random_tensor = self.keep_prob + torch.rand( - shape, dtype=x.dtype, device=x.device) - random_tensor.floor_() # binarize - output = x.div(self.keep_prob) * random_tensor - return output diff --git a/spaces/TEnngal/bingo/src/components/chat-list.tsx b/spaces/TEnngal/bingo/src/components/chat-list.tsx deleted file mode 100644 index 624a78ef0d7be0f1192cf02a81e2e9cf214cb193..0000000000000000000000000000000000000000 --- a/spaces/TEnngal/bingo/src/components/chat-list.tsx +++ /dev/null @@ -1,28 +0,0 @@ -import React from 'react' - -import { Separator } from '@/components/ui/separator' -import { ChatMessage } from '@/components/chat-message' -import { ChatMessageModel } from '@/lib/bots/bing/types' - -export interface ChatList { - messages: ChatMessageModel[] -} - -export function ChatList({ messages }: ChatList) { - if (!messages.length) { - return null - } - - return ( -
- <div> - {messages.map((message, index) => ( - <React.Fragment key={index}> - <ChatMessage message={message} /> - {index < messages.length - 1 && ( - <Separator /> - )} - </React.Fragment> - ))} - </div>
- ) -} diff --git a/spaces/TH5314/newbing/postcss.config.js b/spaces/TH5314/newbing/postcss.config.js deleted file mode 100644 index 33ad091d26d8a9dc95ebdf616e217d985ec215b8..0000000000000000000000000000000000000000 --- a/spaces/TH5314/newbing/postcss.config.js +++ /dev/null @@ -1,6 +0,0 @@ -module.exports = { - plugins: { - tailwindcss: {}, - autoprefixer: {}, - }, -} diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/data/test_transforms.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/data/test_transforms.py deleted file mode 100644 index 382048e533708dec3fabf89528564ebc2ad4c83f..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/data/test_transforms.py +++ /dev/null @@ -1,268 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. - -import logging -import numpy as np -import unittest -from unittest import mock -import torch -from PIL import Image, ImageOps -from torch.nn import functional as F - -from detectron2.config import get_cfg -from detectron2.data import detection_utils -from detectron2.data import transforms as T -from detectron2.utils.logger import setup_logger - -logger = logging.getLogger(__name__) - - -def polygon_allclose(poly1, poly2): - """ - Test whether two polygons are the same. - Both arguments are nx2 numpy arrays. - """ - # ABCD and CDAB are the same polygon. So it's important to check after rolling - for k in range(len(poly1)): - rolled_poly1 = np.roll(poly1, k, axis=0) - if np.allclose(rolled_poly1, poly2): - return True - return False - - -class TestTransforms(unittest.TestCase): - def setUp(self): - setup_logger() - - def test_apply_rotated_boxes(self): - np.random.seed(125) - cfg = get_cfg() - is_train = True - augs = detection_utils.build_augmentation(cfg, is_train) - image = np.random.rand(200, 300) - image, transforms = T.apply_augmentations(augs, image) - image_shape = image.shape[:2] # h, w - assert image_shape == (800, 1200) - annotation = {"bbox": [179, 97, 62, 40, -56]} - - boxes = np.array([annotation["bbox"]], dtype=np.float64) # boxes.shape = (1, 5) - transformed_bbox = transforms.apply_rotated_box(boxes)[0] - - expected_bbox = np.array([484, 388, 248, 160, 56], dtype=np.float64) - err_msg = "transformed_bbox = {}, expected {}".format(transformed_bbox, expected_bbox) - assert np.allclose(transformed_bbox, expected_bbox), err_msg - - def test_resize_and_crop(self): - np.random.seed(125) - min_scale = 0.2 - max_scale = 2.0 - target_height = 1100 - target_width = 1000 - resize_aug = T.ResizeScale(min_scale, max_scale, target_height, target_width) - fixed_size_crop_aug = T.FixedSizeCrop((target_height, target_width)) - hflip_aug = T.RandomFlip() - augs = [resize_aug, fixed_size_crop_aug, hflip_aug] - original_image = np.random.rand(900, 800) - image, transforms = T.apply_augmentations(augs, original_image) - image_shape = image.shape[:2] # h, w - self.assertEqual((1100, 1000), image_shape) - - boxes = np.array( - [[91, 46, 144, 111], [523, 251, 614, 295]], - dtype=np.float64, - ) - transformed_bboxs = transforms.apply_box(boxes) - expected_bboxs = np.array( - [ - [895.42, 33.42666667, 933.91125, 80.66], - [554.0825, 182.39333333, 620.17125, 214.36666667], - ], - dtype=np.float64, - ) - err_msg = "transformed_bbox = {}, expected {}".format(transformed_bboxs, expected_bboxs) - self.assertTrue(np.allclose(transformed_bboxs, expected_bboxs), err_msg) - - polygon = np.array([[91, 46], [144, 46], [144, 111], [91, 
111]]) - transformed_polygons = transforms.apply_polygons([polygon]) - expected_polygon = np.array([[934.0, 33.0], [934.0, 80.0], [896.0, 80.0], [896.0, 33.0]]) - self.assertEqual(1, len(transformed_polygons)) - err_msg = "transformed_polygon = {}, expected {}".format( - transformed_polygons[0], expected_polygon - ) - self.assertTrue(polygon_allclose(transformed_polygons[0], expected_polygon), err_msg) - - def test_apply_rotated_boxes_unequal_scaling_factor(self): - np.random.seed(125) - h, w = 400, 200 - newh, neww = 800, 800 - image = np.random.rand(h, w) - augs = [] - augs.append(T.Resize(shape=(newh, neww))) - image, transforms = T.apply_augmentations(augs, image) - image_shape = image.shape[:2] # h, w - assert image_shape == (newh, neww) - - boxes = np.array( - [ - [150, 100, 40, 20, 0], - [150, 100, 40, 20, 30], - [150, 100, 40, 20, 90], - [150, 100, 40, 20, -90], - ], - dtype=np.float64, - ) - transformed_boxes = transforms.apply_rotated_box(boxes) - - expected_bboxes = np.array( - [ - [600, 200, 160, 40, 0], - [600, 200, 144.22205102, 52.91502622, 49.10660535], - [600, 200, 80, 80, 90], - [600, 200, 80, 80, -90], - ], - dtype=np.float64, - ) - err_msg = "transformed_boxes = {}, expected {}".format(transformed_boxes, expected_bboxes) - assert np.allclose(transformed_boxes, expected_bboxes), err_msg - - def test_print_augmentation(self): - t = T.RandomCrop("relative", (100, 100)) - self.assertEqual(str(t), "RandomCrop(crop_type='relative', crop_size=(100, 100))") - - t0 = T.RandomFlip(prob=0.5) - self.assertEqual(str(t0), "RandomFlip(prob=0.5)") - - t1 = T.RandomFlip() - self.assertEqual(str(t1), "RandomFlip()") - - t = T.AugmentationList([t0, t1]) - self.assertEqual(str(t), f"AugmentationList[{t0}, {t1}]") - - def test_random_apply_prob_out_of_range_check(self): - test_probabilities = {0.0: True, 0.5: True, 1.0: True, -0.01: False, 1.01: False} - - for given_probability, is_valid in test_probabilities.items(): - if not is_valid: - self.assertRaises(AssertionError, T.RandomApply, None, prob=given_probability) - else: - T.RandomApply(T.NoOpTransform(), prob=given_probability) - - def test_random_apply_wrapping_aug_probability_occured_evaluation(self): - transform_mock = mock.MagicMock(name="MockTransform", spec=T.Augmentation) - image_mock = mock.MagicMock(name="MockImage") - random_apply = T.RandomApply(transform_mock, prob=0.001) - - with mock.patch.object(random_apply, "_rand_range", return_value=0.0001): - transform = random_apply.get_transform(image_mock) - transform_mock.get_transform.assert_called_once_with(image_mock) - self.assertIsNot(transform, transform_mock) - - def test_random_apply_wrapping_std_transform_probability_occured_evaluation(self): - transform_mock = mock.MagicMock(name="MockTransform", spec=T.Transform) - image_mock = mock.MagicMock(name="MockImage") - random_apply = T.RandomApply(transform_mock, prob=0.001) - - with mock.patch.object(random_apply, "_rand_range", return_value=0.0001): - transform = random_apply.get_transform(image_mock) - self.assertIs(transform, transform_mock) - - def test_random_apply_probability_not_occured_evaluation(self): - transform_mock = mock.MagicMock(name="MockTransform", spec=T.Augmentation) - image_mock = mock.MagicMock(name="MockImage") - random_apply = T.RandomApply(transform_mock, prob=0.001) - - with mock.patch.object(random_apply, "_rand_range", return_value=0.9): - transform = random_apply.get_transform(image_mock) - transform_mock.get_transform.assert_not_called() - self.assertIsInstance(transform, T.NoOpTransform) - - 
def test_augmentation_input_args(self): - input_shape = (100, 100) - output_shape = (50, 50) - - # define two augmentations with different args - class TG1(T.Augmentation): - def get_transform(self, image, sem_seg): - return T.ResizeTransform( - input_shape[0], input_shape[1], output_shape[0], output_shape[1] - ) - - class TG2(T.Augmentation): - def get_transform(self, image): - assert image.shape[:2] == output_shape # check that TG1 is applied - return T.HFlipTransform(output_shape[1]) - - image = np.random.rand(*input_shape).astype("float32") - sem_seg = (np.random.rand(*input_shape) < 0.5).astype("uint8") - inputs = T.AugInput(image, sem_seg=sem_seg) # provide two args - tfms = inputs.apply_augmentations([TG1(), TG2()]) - self.assertIsInstance(tfms[0], T.ResizeTransform) - self.assertIsInstance(tfms[1], T.HFlipTransform) - self.assertTrue(inputs.image.shape[:2] == output_shape) - self.assertTrue(inputs.sem_seg.shape[:2] == output_shape) - - class TG3(T.Augmentation): - def get_transform(self, image, nonexist): - pass - - with self.assertRaises(AttributeError): - inputs.apply_augmentations([TG3()]) - - def test_augmentation_list(self): - input_shape = (100, 100) - image = np.random.rand(*input_shape).astype("float32") - sem_seg = (np.random.rand(*input_shape) < 0.5).astype("uint8") - inputs = T.AugInput(image, sem_seg=sem_seg) # provide two args - - augs = T.AugmentationList([T.RandomFlip(), T.Resize(20)]) - _ = T.AugmentationList([augs, T.Resize(30)])(inputs) - # 3 in latest fvcore (flattened transformlist), 2 in older - # self.assertEqual(len(tfms), 3) - - def test_color_transforms(self): - rand_img = np.random.random((100, 100, 3)) * 255 - rand_img = rand_img.astype("uint8") - - # Test no-op - noop_transform = T.ColorTransform(lambda img: img) - self.assertTrue(np.array_equal(rand_img, noop_transform.apply_image(rand_img))) - - # Test a ImageOps operation - magnitude = np.random.randint(0, 256) - solarize_transform = T.PILColorTransform(lambda img: ImageOps.solarize(img, magnitude)) - expected_img = ImageOps.solarize(Image.fromarray(rand_img), magnitude) - self.assertTrue(np.array_equal(expected_img, solarize_transform.apply_image(rand_img))) - - def test_resize_transform(self): - input_shapes = [(100, 100), (100, 100, 1), (100, 100, 3)] - output_shapes = [(200, 200), (200, 200, 1), (200, 200, 3)] - for in_shape, out_shape in zip(input_shapes, output_shapes): - in_img = np.random.randint(0, 255, size=in_shape, dtype=np.uint8) - tfm = T.ResizeTransform(in_shape[0], in_shape[1], out_shape[0], out_shape[1]) - out_img = tfm.apply_image(in_img) - self.assertEqual(out_img.shape, out_shape) - - def test_resize_shorted_edge_scriptable(self): - def f(image): - newh, neww = T.ResizeShortestEdge.get_output_shape( - image.shape[-2], image.shape[-1], 80, 133 - ) - return F.interpolate(image.unsqueeze(0), size=(newh, neww)) - - input = torch.randn(3, 10, 10) - script_f = torch.jit.script(f) - self.assertTrue(torch.allclose(f(input), script_f(input))) - - # generalize to new shapes - input = torch.randn(3, 8, 100) - self.assertTrue(torch.allclose(f(input), script_f(input))) - - def test_extent_transform(self): - input_shapes = [(100, 100), (100, 100, 1), (100, 100, 3)] - src_rect = (20, 20, 80, 80) - output_shapes = [(200, 200), (200, 200, 1), (200, 200, 3)] - for in_shape, out_shape in zip(input_shapes, output_shapes): - in_img = np.random.randint(0, 255, size=in_shape, dtype=np.uint8) - tfm = T.ExtentTransform(src_rect, out_shape[:2]) - out_img = tfm.apply_image(in_img) - 
self.assertTrue(out_img.shape == out_shape) diff --git a/spaces/ThirdEyeData/image_bluriness_prediction/app.py b/spaces/ThirdEyeData/image_bluriness_prediction/app.py deleted file mode 100644 index ab30b8aea1375196e9e76b0cd3e1f507b217ecaf..0000000000000000000000000000000000000000 --- a/spaces/ThirdEyeData/image_bluriness_prediction/app.py +++ /dev/null @@ -1,109 +0,0 @@ -import streamlit as st -from tensorflow.keras.models import load_model -import numpy as np # linear algebra -import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) -from sklearn.metrics import roc_curve,auc,classification_report,confusion_matrix -import matplotlib.pyplot as plt -import matplotlib.image as mpimg -from PIL import Image -import cv2 -import keras -from keras.utils import np_utils -from tensorflow.keras.models import Sequential -from tensorflow.keras.layers import Conv2D,MaxPooling2D,Dense,Flatten,Dropout -from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint,ReduceLROnPlateau -from tensorflow.keras.models import Model -from tensorflow.keras.optimizers import Adam,SGD,RMSprop,Adamax -from tensorflow.keras.models import Model, Sequential -from tensorflow.keras.callbacks import ReduceLROnPlateau -from sklearn.model_selection import StratifiedKFold -from tensorflow.keras.applications import MobileNetV2 -from random import shuffle -from tqdm import tqdm -import scipy -import skimage -from skimage.transform import resize -import random -import os -import io -from io import BytesIO,StringIO -from pathlib import Path -import h5py - -model_file_path = "mobile_net_occ.h5" - - -#page_names = ["Blurred or Not Blurred Prediction","Occluded or Not Occluded Prediction"] -#page = st.sidebar.radio('Navigation',page_names) -#st.write("Welcome to the Project") - - -st.title(""" - Prediction of Image Blurriness - """) -#st.subheader("Prediction of Blur or NotBlur Image") -st.write("""Blurring refers to the distortion of the definition of objects in an image, resulting in poor spatial resolution. -Image blur is very common in natural photos, arising from different factors such as object motion, camera lens out-of-focus, and camera shake. -To detect if an image is blurred or not, the variance of Laplacian is used. The Laplacian of an image identifies edges, -and the variance of the same shows how smooth or hard the edge is. Smooth edges mean blurred images, hence sharp images tend to have -large positive and negative Laplacian. We can use this model for filtering blurred images in all kinds of computer vision projects. - """) -images = ["blur1.png","bird1.jpeg","blurimg3.png","images_11.jpeg"] -with st.sidebar: - st.write("choose an image") - st.image(images) -#model_file_path = "mobile_net_occ.h5" - - ##Blurriness Features - -plt. 
figure(figsize=(10,9)) -def variance_of_laplacian(image): - return cv2.Laplacian(image, cv2.CV_64F).var() - -def threshold(value, thresh): - if value > thresh: - return "Not Blur" - else: - return "Blur" -def blurr_predict(img_iter): - def make_prediction(img_content): - pil_image = Image.open(img_content) - imgplot = plt.imshow(pil_image) - #st.image(pil_image) - plt.show() - gray_cvimage = cv2.cvtColor(np.array(pil_image), cv2.COLOR_RGB2GRAY) - #print(gray_cvimage) - variance_laplacian = variance_of_laplacian(gray_cvimage) - #print(variance_laplacian) - return variance_laplacian - - variance_score = make_prediction(img_iter) - thresh = 2000 - variance_score = variance_score/thresh - predicted_label = threshold(variance_score, 1) - return predicted_label,variance_score - - #image_path = "images_11.jpeg" -file = st.file_uploader('Upload an Image',type=(["jpeg","jpg","png"])) - -if file is None: - st.write("Please upload an image file") -else: - image= Image.open(file) - st.image(image,use_column_width = True) - predicted_label,variance_score = blurr_predict(file) - #st.header(predicted_label) - #st.header(str(round(variance_score,2))) - string = "The image is," + str(predicted_label) + " with the score value of " + str(round(variance_score,2)) - st.subheader(string) - -st.write(""" -For a detailed description please look through our Documentation -""") - -url = 'https://huggingface.co/spaces/ThirdEyeData/image_bluriness_prediction/blob/main/README.md' - -st.markdown(f''' - -''', -unsafe_allow_html=True) \ No newline at end of file diff --git a/spaces/VIPLab/Caption-Anything/caption_anything/segmenter/readme.md b/spaces/VIPLab/Caption-Anything/caption_anything/segmenter/readme.md deleted file mode 100644 index ede6e7556bbd3c968165a331446df20398a572fa..0000000000000000000000000000000000000000 --- a/spaces/VIPLab/Caption-Anything/caption_anything/segmenter/readme.md +++ /dev/null @@ -1,68 +0,0 @@ -### Prepare SAM -``` -pip install git+https://github.com/facebookresearch/segment-anything.git -``` -or -``` -git clone git@github.com:facebookresearch/segment-anything.git -cd segment-anything; pip install -e . 
-``` - -``` -pip install opencv-python pycocotools matplotlib onnxruntime onnx -``` -### Download the checkpoint: - -https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth - -### Inference - -The prompts are in json format: - -``` -prompts = [ - { - "prompt_type":["click"], - "input_point":[[500, 375]], - "input_label":[1], - "multimask_output":"True", - }, - { - "prompt_type":["click"], - "input_point":[[500, 375], [1125, 625]], - "input_label":[1, 0], - }, - { - "prompt_type":["click", "box"], - "input_box":[425, 600, 700, 875], - "input_point":[[575, 750]], - "input_label": [0] - }, - { - "prompt_type":["box"], - "input_boxes": [ - [75, 275, 1725, 850], - [425, 600, 700, 875], - [1375, 550, 1650, 800], - [1240, 675, 1400, 750], - ] - }, - { - "prompt_type":["everything"] - }, - ] -``` - -In `base_segmenter.py`: -``` -segmenter = BaseSegmenter( - device='cuda', - checkpoint='sam_vit_h_4b8939.pth', - model_type='vit_h' - ) - -for i, prompt in enumerate(prompts): - masks = segmenter.inference(image_path, prompt) -``` - -Outputs are masks (True and False numpy Matrix), shape: (num of masks, height, weight) \ No newline at end of file diff --git a/spaces/Wrathless/Dkrotzer-MusicalMagic/audiocraft/models/loaders.py b/spaces/Wrathless/Dkrotzer-MusicalMagic/audiocraft/models/loaders.py deleted file mode 100644 index eb7ae50f34dd94e08d16951cbe75c9fb282a7868..0000000000000000000000000000000000000000 --- a/spaces/Wrathless/Dkrotzer-MusicalMagic/audiocraft/models/loaders.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Utility functions to load from the checkpoints. -Each checkpoint is a torch.saved dict with the following keys: -- 'xp.cfg': the hydra config as dumped during training. This should be used - to rebuild the object using the audiocraft.models.builders functions, -- 'model_best_state': a readily loadable best state for the model, including - the conditioner. The model obtained from `xp.cfg` should be compatible - with this state dict. In the case of a LM, the encodec model would not be - bundled along but instead provided separately. - -Those functions also support loading from a remote location with the Torch Hub API. -They also support overriding some parameters, in particular the device and dtype -of the returned model. -""" - -from pathlib import Path -from huggingface_hub import hf_hub_download -import typing as tp -import os - -from omegaconf import OmegaConf -import torch - -from . 
import builders - - -HF_MODEL_CHECKPOINTS_MAP = { - "small": "facebook/musicgen-small", - "medium": "facebook/musicgen-medium", - "large": "facebook/musicgen-large", - "melody": "facebook/musicgen-melody", -} - - -def _get_state_dict( - file_or_url_or_id: tp.Union[Path, str], - filename: tp.Optional[str] = None, - device='cpu', - cache_dir: tp.Optional[str] = None, -): - # Return the state dict either from a file or url - file_or_url_or_id = str(file_or_url_or_id) - assert isinstance(file_or_url_or_id, str) - - if os.path.isfile(file_or_url_or_id): - return torch.load(file_or_url_or_id, map_location=device) - - elif file_or_url_or_id.startswith('https://'): - return torch.hub.load_state_dict_from_url(file_or_url_or_id, map_location=device, check_hash=True) - - elif file_or_url_or_id in HF_MODEL_CHECKPOINTS_MAP: - assert filename is not None, "filename needs to be defined if using HF checkpoints" - - repo_id = HF_MODEL_CHECKPOINTS_MAP[file_or_url_or_id] - file = hf_hub_download(repo_id=repo_id, filename=filename, cache_dir=cache_dir) - return torch.load(file, map_location=device) - - else: - raise ValueError(f"{file_or_url_or_id} is not a valid name, path or link that can be loaded.") - - -def load_compression_model(file_or_url_or_id: tp.Union[Path, str], device='cpu', cache_dir: tp.Optional[str] = None): - pkg = _get_state_dict(file_or_url_or_id, filename="compression_state_dict.bin", cache_dir=cache_dir) - cfg = OmegaConf.create(pkg['xp.cfg']) - cfg.device = str(device) - model = builders.get_compression_model(cfg) - model.load_state_dict(pkg['best_state']) - model.eval() - return model - - -def load_lm_model(file_or_url_or_id: tp.Union[Path, str], device='cpu', cache_dir: tp.Optional[str] = None): - pkg = _get_state_dict(file_or_url_or_id, filename="state_dict.bin", cache_dir=cache_dir) - cfg = OmegaConf.create(pkg['xp.cfg']) - cfg.device = str(device) - if cfg.device == 'cpu': - cfg.transformer_lm.memory_efficient = False - cfg.transformer_lm.custom = True - cfg.dtype = 'float32' - else: - cfg.dtype = 'float16' - model = builders.get_lm_model(cfg) - model.load_state_dict(pkg['best_state']) - model.eval() - model.cfg = cfg - return model diff --git a/spaces/Xenova/semantic-image-search/src/app/utils.js b/spaces/Xenova/semantic-image-search/src/app/utils.js deleted file mode 100644 index f0401723a8079fda923d524eabe7ab23fe3a166f..0000000000000000000000000000000000000000 --- a/spaces/Xenova/semantic-image-search/src/app/utils.js +++ /dev/null @@ -1,52 +0,0 @@ - -import { decode } from "blurhash" - -const SIZE = 32; - -export function blurHashToDataURL(hash) { - if (!hash) return undefined - - const pixels = decode(hash, SIZE, SIZE) - - const canvas = document.createElement("canvas"); - canvas.width = SIZE; - canvas.height = SIZE; - - const ctx = canvas.getContext("2d"); - const imageData = ctx.createImageData(SIZE, SIZE); - imageData.data.set(pixels); - ctx.putImageData(imageData, 0, 0); - - return canvas.toDataURL(); -} - -function downloadData(url, filename) { - - // Create an anchor element with the data URL as the href attribute - const downloadLink = document.createElement('a'); - downloadLink.href = url; - - // Set the download attribute to specify the desired filename for the downloaded image - downloadLink.download = filename; - - // Trigger the download - downloadLink.click(); - - // Clean up: remove the anchor element from the DOM - downloadLink.remove(); -} - -export function downloadImage(url, filename) { - fetch(url, { - headers: new Headers({ - Origin: location.origin, - }), - 
mode: 'cors', - }) - .then((response) => response.blob()) - .then((blob) => { - let blobUrl = window.URL.createObjectURL(blob) - downloadData(blobUrl, filename) - }) - .catch((e) => console.error(e)) -} diff --git a/spaces/Xenova/whisper-web/README.md b/spaces/Xenova/whisper-web/README.md deleted file mode 100644 index 50f15ac9d750516606cb22d9a99365f33c308b54..0000000000000000000000000000000000000000 --- a/spaces/Xenova/whisper-web/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Whisper Web -emoji: 🎤 -colorFrom: indigo -colorTo: indigo -sdk: static -pinned: true ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/XzJosh/Carol-Bert-VITS2/modules.py b/spaces/XzJosh/Carol-Bert-VITS2/modules.py deleted file mode 100644 index 92e0f32a51c472bfd1659a50a95a95d195281d2b..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Carol-Bert-VITS2/modules.py +++ /dev/null @@ -1,452 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform -from attentions import Encoder - -LRELU_SLOPE = 0.1 - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) 
- - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = 
torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x -class TransformerCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - n_layers, - n_heads, - p_dropout=0, - filter_channels=0, - mean_only=False, - wn_sharing_parameter=None, - gin_channels = 0 - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = Encoder(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = gin_channels) if wn_sharing_parameter is None else wn_sharing_parameter - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/XzJosh/Nana7mi-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md b/spaces/XzJosh/Nana7mi-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md deleted file mode 100644 index 7bce039b7f81ee328fdf8efe3f14409200aacbef..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Nana7mi-Bert-VITS2/bert/chinese-roberta-wwm-ext-large/README.md +++ /dev/null @@ -1,57 +0,0 @@ ---- -language: -- zh -tags: -- bert -license: "apache-2.0" ---- - -# Please use 'Bert' related functions to load this model! - -## Chinese BERT with Whole Word Masking -For further accelerating Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**. 
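In practice, "use 'Bert' related functions" means loading this checkpoint through the BertTokenizer/BertModel classes rather than the RoBERTa ones. A minimal sketch, assuming the standard Hugging Face `transformers` API; `hfl/chinese-roberta-wwm-ext-large` is the public hub id for this checkpoint, and passing a local directory path (such as `./bert/chinese-roberta-wwm-ext-large` inside this Space) works the same way:

```
# Minimal sketch: the model card asks for Bert classes, not RoBERTa classes.
# "hfl/chinese-roberta-wwm-ext-large" is the public hub id; a local directory
# path such as "./bert/chinese-roberta-wwm-ext-large" can be used instead.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext-large")
model = BertModel.from_pretrained("hfl/chinese-roberta-wwm-ext-large")
model.eval()

inputs = tokenizer("使用整词掩码的中文预训练模型", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, 1024) for the large model
```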
- -**[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)** -Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu - -This repository is developed based on:https://github.com/google-research/bert - -You may also interested in, -- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm -- Chinese MacBERT: https://github.com/ymcui/MacBERT -- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA -- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet -- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer - -More resources by HFL: https://github.com/ymcui/HFL-Anthology - -## Citation -If you find the technical report or resource is useful, please cite the following technical report in your paper. -- Primary: https://arxiv.org/abs/2004.13922 -``` -@inproceedings{cui-etal-2020-revisiting, - title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", - author = "Cui, Yiming and - Che, Wanxiang and - Liu, Ting and - Qin, Bing and - Wang, Shijin and - Hu, Guoping", - booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", - month = nov, - year = "2020", - address = "Online", - publisher = "Association for Computational Linguistics", - url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", - pages = "657--668", -} -``` -- Secondary: https://arxiv.org/abs/1906.08101 -``` -@article{chinese-bert-wwm, - title={Pre-Training with Whole Word Masking for Chinese BERT}, - author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping}, - journal={arXiv preprint arXiv:1906.08101}, - year={2019} - } -``` \ No newline at end of file diff --git a/spaces/YazawaSunrise/so-vits-svc-LoveLive/mel_processing.py b/spaces/YazawaSunrise/so-vits-svc-LoveLive/mel_processing.py deleted file mode 100644 index 99c5b35beb83f3b288af0fac5b49ebf2c69f062c..0000000000000000000000000000000000000000 --- a/spaces/YazawaSunrise/so-vits-svc-LoveLive/mel_processing.py +++ /dev/null @@ -1,112 +0,0 @@ -import math -import os -import random -import torch -from torch import nn -import torch.nn.functional as F -import torch.utils.data -import numpy as np -import librosa -import librosa.util as librosa_util -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - 
hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/YouLiXiya/Mobile-SAM/segment_anything/linter.sh b/spaces/YouLiXiya/Mobile-SAM/segment_anything/linter.sh deleted file mode 100644 index df2e17436d30e89ff1728109301599f425f1ad6b..0000000000000000000000000000000000000000 --- a/spaces/YouLiXiya/Mobile-SAM/segment_anything/linter.sh +++ /dev/null @@ -1,32 +0,0 @@ -#!/bin/bash -e -# Copyright (c) Facebook, Inc. and its affiliates. - -{ - black --version | grep -E "23\." > /dev/null -} || { - echo "Linter requires 'black==23.*' !" - exit 1 -} - -ISORT_VERSION=$(isort --version-number) -if [[ "$ISORT_VERSION" != 5.12* ]]; then - echo "Linter requires isort==5.12.0 !" - exit 1 -fi - -echo "Running isort ..." -isort . --atomic - -echo "Running black ..." -black -l 100 . - -echo "Running flake8 ..." -if [ -x "$(command -v flake8)" ]; then - flake8 . -else - python3 -m flake8 . -fi - -echo "Running mypy..." - -mypy --exclude 'setup.py|notebooks' . 
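The helpers in mel_processing.py above are typically driven through `mel_spectrogram_torch`. A minimal usage sketch, assuming the script runs from the repository root (so `mel_processing` is importable) and using illustrative STFT/mel hyperparameters rather than values taken from this Space's config:

```
# Illustrative hyperparameters only; real values come from the project's config.
import torch
import librosa
from mel_processing import mel_spectrogram_torch

wav, sr = librosa.load("example.wav", sr=44100)   # hypothetical input file
y = torch.from_numpy(wav).float().unsqueeze(0)    # shape (1, num_samples), values in [-1, 1]

mel = mel_spectrogram_torch(
    y,
    n_fft=2048, num_mels=80, sampling_rate=sr,
    hop_size=512, win_size=2048, fmin=0, fmax=None,
)
print(mel.shape)  # (1, 80, num_frames)
```

Because the module caches the Hann window and the mel filterbank in module-level dicts keyed by dtype and device, repeated calls with the same settings reuse them instead of rebuilding them on every invocation.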
diff --git a/spaces/Yuliang/ECON/lib/common/__init__.py b/spaces/Yuliang/ECON/lib/common/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/cnn/bricks/hswish.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/cnn/bricks/hswish.py deleted file mode 100644 index 7e0c090ff037c99ee6c5c84c4592e87beae02208..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/cnn/bricks/hswish.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn - -from .registry import ACTIVATION_LAYERS - - -@ACTIVATION_LAYERS.register_module() -class HSwish(nn.Module): - """Hard Swish Module. - - This module applies the hard swish function: - - .. math:: - Hswish(x) = x * ReLU6(x + 3) / 6 - - Args: - inplace (bool): can optionally do the operation in-place. - Default: False. - - Returns: - Tensor: The output tensor. - """ - - def __init__(self, inplace=False): - super(HSwish, self).__init__() - self.act = nn.ReLU6(inplace) - - def forward(self, x): - return x * self.act(x + 3) / 6 diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/dense_heads/anchor_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/dense_heads/anchor_head.py deleted file mode 100644 index eea73520572725f547216ab639c1ebbdfb50834c..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/dense_heads/anchor_head.py +++ /dev/null @@ -1,751 +0,0 @@ -import torch -import torch.nn as nn -from mmcv.cnn import normal_init -from mmcv.runner import force_fp32 - -from mmdet.core import (anchor_inside_flags, build_anchor_generator, - build_assigner, build_bbox_coder, build_sampler, - images_to_levels, multi_apply, multiclass_nms, unmap) -from ..builder import HEADS, build_loss -from .base_dense_head import BaseDenseHead -from .dense_test_mixins import BBoxTestMixin - - -@HEADS.register_module() -class AnchorHead(BaseDenseHead, BBoxTestMixin): - """Anchor-based head (RPN, RetinaNet, SSD, etc.). - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - feat_channels (int): Number of hidden channels. Used in child classes. - anchor_generator (dict): Config dict for anchor generator - bbox_coder (dict): Config of bounding box coder. - reg_decoded_bbox (bool): If true, the regression loss would be - applied directly on decoded bounding boxes, converting both - the predicted boxes and regression targets to absolute - coordinates format. Default False. It should be `True` when - using `IoULoss`, `GIoULoss`, or `DIoULoss` in the bbox head. - loss_cls (dict): Config of classification loss. - loss_bbox (dict): Config of localization loss. - train_cfg (dict): Training config of anchor head. - test_cfg (dict): Testing config of anchor head. 
- """ # noqa: W605 - - def __init__(self, - num_classes, - in_channels, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8, 16, 32], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - clip_border=True, - target_means=(.0, .0, .0, .0), - target_stds=(1.0, 1.0, 1.0, 1.0)), - reg_decoded_bbox=False, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - loss_bbox=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - train_cfg=None, - test_cfg=None): - super(AnchorHead, self).__init__() - self.in_channels = in_channels - self.num_classes = num_classes - self.feat_channels = feat_channels - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - # TODO better way to determine whether sample or not - self.sampling = loss_cls['type'] not in [ - 'FocalLoss', 'GHMC', 'QualityFocalLoss' - ] - if self.use_sigmoid_cls: - self.cls_out_channels = num_classes - else: - self.cls_out_channels = num_classes + 1 - - if self.cls_out_channels <= 0: - raise ValueError(f'num_classes={num_classes} is too small') - self.reg_decoded_bbox = reg_decoded_bbox - - self.bbox_coder = build_bbox_coder(bbox_coder) - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - # use PseudoSampler when sampling is False - if self.sampling and hasattr(self.train_cfg, 'sampler'): - sampler_cfg = self.train_cfg.sampler - else: - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.fp16_enabled = False - - self.anchor_generator = build_anchor_generator(anchor_generator) - # usually the numbers of anchors for each level are the same - # except SSD detectors - self.num_anchors = self.anchor_generator.num_base_anchors[0] - self._init_layers() - - def _init_layers(self): - """Initialize layers of the head.""" - self.conv_cls = nn.Conv2d(self.in_channels, - self.num_anchors * self.cls_out_channels, 1) - self.conv_reg = nn.Conv2d(self.in_channels, self.num_anchors * 4, 1) - - def init_weights(self): - """Initialize weights of the head.""" - normal_init(self.conv_cls, std=0.01) - normal_init(self.conv_reg, std=0.01) - - def forward_single(self, x): - """Forward feature of a single scale level. - - Args: - x (Tensor): Features of a single scale level. - - Returns: - tuple: - cls_score (Tensor): Cls scores for a single scale level \ - the channels number is num_anchors * num_classes. - bbox_pred (Tensor): Box energies / deltas for a single scale \ - level, the channels number is num_anchors * 4. - """ - cls_score = self.conv_cls(x) - bbox_pred = self.conv_reg(x) - return cls_score, bbox_pred - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: A tuple of classification scores and bbox prediction. - - - cls_scores (list[Tensor]): Classification scores for all \ - scale levels, each is a 4D-tensor, the channels number \ - is num_anchors * num_classes. - - bbox_preds (list[Tensor]): Box energies / deltas for all \ - scale levels, each is a 4D-tensor, the channels number \ - is num_anchors * 4. - """ - return multi_apply(self.forward_single, feats) - - def get_anchors(self, featmap_sizes, img_metas, device='cuda'): - """Get anchors according to feature map sizes. 
- - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - img_metas (list[dict]): Image meta info. - device (torch.device | str): Device for returned tensors - - Returns: - tuple: - anchor_list (list[Tensor]): Anchors of each image. - valid_flag_list (list[Tensor]): Valid flags of each image. - """ - num_imgs = len(img_metas) - - # since feature map sizes of all images are the same, we only compute - # anchors for one time - multi_level_anchors = self.anchor_generator.grid_anchors( - featmap_sizes, device) - anchor_list = [multi_level_anchors for _ in range(num_imgs)] - - # for each image, we compute valid flags of multi level anchors - valid_flag_list = [] - for img_id, img_meta in enumerate(img_metas): - multi_level_flags = self.anchor_generator.valid_flags( - featmap_sizes, img_meta['pad_shape'], device) - valid_flag_list.append(multi_level_flags) - - return anchor_list, valid_flag_list - - def _get_targets_single(self, - flat_anchors, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True): - """Compute regression and classification targets for anchors in a - single image. - - Args: - flat_anchors (Tensor): Multi-level anchors of the image, which are - concatenated into a single tensor of shape (num_anchors ,4) - valid_flags (Tensor): Multi level valid flags of the image, - which are concatenated into a single tensor of - shape (num_anchors,). - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - img_meta (dict): Meta info of the image. - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. 
- - Returns: - tuple: - labels_list (list[Tensor]): Labels of each level - label_weights_list (list[Tensor]): Label weights of each level - bbox_targets_list (list[Tensor]): BBox targets of each level - bbox_weights_list (list[Tensor]): BBox weights of each level - num_total_pos (int): Number of positive samples in all images - num_total_neg (int): Number of negative samples in all images - """ - inside_flags = anchor_inside_flags(flat_anchors, valid_flags, - img_meta['img_shape'][:2], - self.train_cfg.allowed_border) - if not inside_flags.any(): - return (None, ) * 7 - # assign gt and sample anchors - anchors = flat_anchors[inside_flags, :] - - assign_result = self.assigner.assign( - anchors, gt_bboxes, gt_bboxes_ignore, - None if self.sampling else gt_labels) - sampling_result = self.sampler.sample(assign_result, anchors, - gt_bboxes) - - num_valid_anchors = anchors.shape[0] - bbox_targets = torch.zeros_like(anchors) - bbox_weights = torch.zeros_like(anchors) - labels = anchors.new_full((num_valid_anchors, ), - self.num_classes, - dtype=torch.long) - label_weights = anchors.new_zeros(num_valid_anchors, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - if not self.reg_decoded_bbox: - pos_bbox_targets = self.bbox_coder.encode( - sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes) - else: - pos_bbox_targets = sampling_result.pos_gt_bboxes - bbox_targets[pos_inds, :] = pos_bbox_targets - bbox_weights[pos_inds, :] = 1.0 - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class since v2.5.0 - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_anchors.size(0) - labels = unmap( - labels, num_total_anchors, inside_flags, - fill=self.num_classes) # fill bg label - label_weights = unmap(label_weights, num_total_anchors, - inside_flags) - bbox_targets = unmap(bbox_targets, num_total_anchors, inside_flags) - bbox_weights = unmap(bbox_weights, num_total_anchors, inside_flags) - - return (labels, label_weights, bbox_targets, bbox_weights, pos_inds, - neg_inds, sampling_result) - - def get_targets(self, - anchor_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - unmap_outputs=True, - return_sampling_results=False): - """Compute regression and classification targets for anchors in - multiple images. - - Args: - anchor_list (list[list[Tensor]]): Multi level anchors of each - image. The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_anchors, 4). - valid_flag_list (list[list[Tensor]]): Multi level valid flags of - each image. The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_anchors, ) - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): Ground truth bboxes to be - ignored. - gt_labels_list (list[Tensor]): Ground truth labels of each box. - label_channels (int): Channel of label. 
- unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: Usually returns a tuple containing learning targets. - - - labels_list (list[Tensor]): Labels of each level. - - label_weights_list (list[Tensor]): Label weights of each \ - level. - - bbox_targets_list (list[Tensor]): BBox targets of each level. - - bbox_weights_list (list[Tensor]): BBox weights of each level. - - num_total_pos (int): Number of positive samples in all \ - images. - - num_total_neg (int): Number of negative samples in all \ - images. - additional_returns: This function enables user-defined returns from - `self._get_targets_single`. These returns are currently refined - to properties at each feature map (i.e. having HxW dimension). - The results will be concatenated after the end - """ - num_imgs = len(img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - # concat all level anchors to a single tensor - concat_anchor_list = [] - concat_valid_flag_list = [] - for i in range(num_imgs): - assert len(anchor_list[i]) == len(valid_flag_list[i]) - concat_anchor_list.append(torch.cat(anchor_list[i])) - concat_valid_flag_list.append(torch.cat(valid_flag_list[i])) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - results = multi_apply( - self._get_targets_single, - concat_anchor_list, - concat_valid_flag_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - unmap_outputs=unmap_outputs) - (all_labels, all_label_weights, all_bbox_targets, all_bbox_weights, - pos_inds_list, neg_inds_list, sampling_results_list) = results[:7] - rest_results = list(results[7:]) # user-added return values - # no valid anchors - if any([labels is None for labels in all_labels]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - # split targets to a list w.r.t. multiple levels - labels_list = images_to_levels(all_labels, num_level_anchors) - label_weights_list = images_to_levels(all_label_weights, - num_level_anchors) - bbox_targets_list = images_to_levels(all_bbox_targets, - num_level_anchors) - bbox_weights_list = images_to_levels(all_bbox_weights, - num_level_anchors) - res = (labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) - if return_sampling_results: - res = res + (sampling_results_list, ) - for i, r in enumerate(rest_results): # user-added return values - rest_results[i] = images_to_levels(r, num_level_anchors) - - return res + tuple(rest_results) - - def loss_single(self, cls_score, bbox_pred, anchors, labels, label_weights, - bbox_targets, bbox_weights, num_total_samples): - """Compute loss of a single scale level. - - Args: - cls_score (Tensor): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W). - bbox_pred (Tensor): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W). - anchors (Tensor): Box reference for each scale level with shape - (N, num_total_anchors, 4). - labels (Tensor): Labels of each anchors with shape - (N, num_total_anchors). 
- label_weights (Tensor): Label weights of each anchor with shape - (N, num_total_anchors) - bbox_targets (Tensor): BBox regression targets of each anchor wight - shape (N, num_total_anchors, 4). - bbox_weights (Tensor): BBox regression loss weights of each anchor - with shape (N, num_total_anchors, 4). - num_total_samples (int): If sampling, num total samples equal to - the number of total anchors; Otherwise, it is the number of - positive anchors. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - # classification loss - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - cls_score = cls_score.permute(0, 2, 3, - 1).reshape(-1, self.cls_out_channels) - loss_cls = self.loss_cls( - cls_score, labels, label_weights, avg_factor=num_total_samples) - # regression loss - bbox_targets = bbox_targets.reshape(-1, 4) - bbox_weights = bbox_weights.reshape(-1, 4) - bbox_pred = bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4) - if self.reg_decoded_bbox: - # When the regression loss (e.g. `IouLoss`, `GIouLoss`) - # is applied directly on the decoded bounding boxes, it - # decodes the already encoded coordinates to absolute format. - anchors = anchors.reshape(-1, 4) - bbox_pred = self.bbox_coder.decode(anchors, bbox_pred) - loss_bbox = self.loss_bbox( - bbox_pred, - bbox_targets, - bbox_weights, - avg_factor=num_total_samples) - return loss_cls, loss_bbox - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. Default: None - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.anchor_generator.num_levels - - device = cls_scores[0].device - - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg) = cls_reg_targets - num_total_samples = ( - num_total_pos + num_total_neg if self.sampling else num_total_pos) - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - # concat all level anchors and flags to a single tensor - concat_anchor_list = [] - for i in range(len(anchor_list)): - concat_anchor_list.append(torch.cat(anchor_list[i])) - all_anchor_list = images_to_levels(concat_anchor_list, - num_level_anchors) - - losses_cls, losses_bbox = multi_apply( - self.loss_single, - cls_scores, - bbox_preds, - all_anchor_list, - labels_list, - label_weights_list, - bbox_targets_list, - bbox_weights_list, - num_total_samples=num_total_samples) - return dict(loss_cls=losses_cls, loss_bbox=losses_bbox) - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def get_bboxes(self, - cls_scores, - bbox_preds, - img_metas, - cfg=None, - rescale=False, - with_nms=True): - """Transform network output for a batch into bbox predictions. - - Args: - cls_scores (list[Tensor]): Box scores for each level in the - feature pyramid, has shape - (N, num_anchors * num_classes, H, W). - bbox_preds (list[Tensor]): Box energies / deltas for each - level in the feature pyramid, has shape - (N, num_anchors * 4, H, W). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - cfg (mmcv.Config | None): Test / postprocessing configuration, - if None, test_cfg would be used - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is an (n, 5) tensor, where 5 represent - (tl_x, tl_y, br_x, br_y, score) and the score between 0 and 1. - The shape of the second tensor in the tuple is (n,), and - each element represents the class label of the corresponding - box. 
- - Example: - >>> import mmcv - >>> self = AnchorHead( - >>> num_classes=9, - >>> in_channels=1, - >>> anchor_generator=dict( - >>> type='AnchorGenerator', - >>> scales=[8], - >>> ratios=[0.5, 1.0, 2.0], - >>> strides=[4,])) - >>> img_metas = [{'img_shape': (32, 32, 3), 'scale_factor': 1}] - >>> cfg = mmcv.Config(dict( - >>> score_thr=0.00, - >>> nms=dict(type='nms', iou_thr=1.0), - >>> max_per_img=10)) - >>> feat = torch.rand(1, 1, 3, 3) - >>> cls_score, bbox_pred = self.forward_single(feat) - >>> # note the input lists are over different levels, not images - >>> cls_scores, bbox_preds = [cls_score], [bbox_pred] - >>> result_list = self.get_bboxes(cls_scores, bbox_preds, - >>> img_metas, cfg) - >>> det_bboxes, det_labels = result_list[0] - >>> assert len(result_list) == 1 - >>> assert det_bboxes.shape[1] == 5 - >>> assert len(det_bboxes) == len(det_labels) == cfg.max_per_img - """ - assert len(cls_scores) == len(bbox_preds) - num_levels = len(cls_scores) - - device = cls_scores[0].device - featmap_sizes = [cls_scores[i].shape[-2:] for i in range(num_levels)] - mlvl_anchors = self.anchor_generator.grid_anchors( - featmap_sizes, device=device) - - mlvl_cls_scores = [cls_scores[i].detach() for i in range(num_levels)] - mlvl_bbox_preds = [bbox_preds[i].detach() for i in range(num_levels)] - - if torch.onnx.is_in_onnx_export(): - assert len( - img_metas - ) == 1, 'Only support one input image while in exporting to ONNX' - img_shapes = img_metas[0]['img_shape_for_onnx'] - else: - img_shapes = [ - img_metas[i]['img_shape'] - for i in range(cls_scores[0].shape[0]) - ] - scale_factors = [ - img_metas[i]['scale_factor'] for i in range(cls_scores[0].shape[0]) - ] - - if with_nms: - # some heads don't support with_nms argument - result_list = self._get_bboxes(mlvl_cls_scores, mlvl_bbox_preds, - mlvl_anchors, img_shapes, - scale_factors, cfg, rescale) - else: - result_list = self._get_bboxes(mlvl_cls_scores, mlvl_bbox_preds, - mlvl_anchors, img_shapes, - scale_factors, cfg, rescale, - with_nms) - return result_list - - def _get_bboxes(self, - mlvl_cls_scores, - mlvl_bbox_preds, - mlvl_anchors, - img_shapes, - scale_factors, - cfg, - rescale=False, - with_nms=True): - """Transform outputs for a batch item into bbox predictions. - - Args: - mlvl_cls_scores (list[Tensor]): Each element in the list is - the scores of bboxes of single level in the feature pyramid, - has shape (N, num_anchors * num_classes, H, W). - mlvl_bbox_preds (list[Tensor]): Each element in the list is the - bboxes predictions of single level in the feature pyramid, - has shape (N, num_anchors * 4, H, W). - mlvl_anchors (list[Tensor]): Each element in the list is - the anchors of single level in feature pyramid, has shape - (num_anchors, 4). - img_shapes (list[tuple[int]]): Each tuple in the list represent - the shape(height, width, 3) of single image in the batch. - scale_factors (list[ndarray]): Scale factor of the batch - image arange as list[(w_scale, h_scale, w_scale, h_scale)]. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is an (n, 5) tensor, where 5 represent - (tl_x, tl_y, br_x, br_y, score) and the score between 0 and 1. 
- The shape of the second tensor in the tuple is (n,), and - each element represents the class label of the corresponding - box. - """ - cfg = self.test_cfg if cfg is None else cfg - assert len(mlvl_cls_scores) == len(mlvl_bbox_preds) == len( - mlvl_anchors) - batch_size = mlvl_cls_scores[0].shape[0] - # convert to tensor to keep tracing - nms_pre_tensor = torch.tensor( - cfg.get('nms_pre', -1), - device=mlvl_cls_scores[0].device, - dtype=torch.long) - - mlvl_bboxes = [] - mlvl_scores = [] - for cls_score, bbox_pred, anchors in zip(mlvl_cls_scores, - mlvl_bbox_preds, - mlvl_anchors): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - cls_score = cls_score.permute(0, 2, 3, - 1).reshape(batch_size, -1, - self.cls_out_channels) - if self.use_sigmoid_cls: - scores = cls_score.sigmoid() - else: - scores = cls_score.softmax(-1) - bbox_pred = bbox_pred.permute(0, 2, 3, - 1).reshape(batch_size, -1, 4) - anchors = anchors.expand_as(bbox_pred) - # Always keep topk op for dynamic input in onnx - if nms_pre_tensor > 0 and (torch.onnx.is_in_onnx_export() - or scores.shape[-2] > nms_pre_tensor): - from torch import _shape_as_tensor - # keep shape as tensor and get k - num_anchor = _shape_as_tensor(scores)[-2].to( - nms_pre_tensor.device) - nms_pre = torch.where(nms_pre_tensor < num_anchor, - nms_pre_tensor, num_anchor) - - # Get maximum scores for foreground classes. - if self.use_sigmoid_cls: - max_scores, _ = scores.max(-1) - else: - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - max_scores, _ = scores[..., :-1].max(-1) - - _, topk_inds = max_scores.topk(nms_pre) - batch_inds = torch.arange(batch_size).view( - -1, 1).expand_as(topk_inds) - anchors = anchors[batch_inds, topk_inds, :] - bbox_pred = bbox_pred[batch_inds, topk_inds, :] - scores = scores[batch_inds, topk_inds, :] - - bboxes = self.bbox_coder.decode( - anchors, bbox_pred, max_shape=img_shapes) - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - - batch_mlvl_bboxes = torch.cat(mlvl_bboxes, dim=1) - if rescale: - batch_mlvl_bboxes /= batch_mlvl_bboxes.new_tensor( - scale_factors).unsqueeze(1) - batch_mlvl_scores = torch.cat(mlvl_scores, dim=1) - - # Set max number of box to be feed into nms in deployment - deploy_nms_pre = cfg.get('deploy_nms_pre', -1) - if deploy_nms_pre > 0 and torch.onnx.is_in_onnx_export(): - # Get maximum scores for foreground classes. 
- if self.use_sigmoid_cls: - max_scores, _ = batch_mlvl_scores.max(-1) - else: - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - max_scores, _ = batch_mlvl_scores[..., :-1].max(-1) - _, topk_inds = max_scores.topk(deploy_nms_pre) - batch_inds = torch.arange(batch_size).view(-1, - 1).expand_as(topk_inds) - batch_mlvl_scores = batch_mlvl_scores[batch_inds, topk_inds] - batch_mlvl_bboxes = batch_mlvl_bboxes[batch_inds, topk_inds] - if self.use_sigmoid_cls: - # Add a dummy background class to the backend when using sigmoid - # remind that we set FG labels to [0, num_class-1] since mmdet v2.0 - # BG cat_id: num_class - padding = batch_mlvl_scores.new_zeros(batch_size, - batch_mlvl_scores.shape[1], - 1) - batch_mlvl_scores = torch.cat([batch_mlvl_scores, padding], dim=-1) - - if with_nms: - det_results = [] - for (mlvl_bboxes, mlvl_scores) in zip(batch_mlvl_bboxes, - batch_mlvl_scores): - det_bbox, det_label = multiclass_nms(mlvl_bboxes, mlvl_scores, - cfg.score_thr, cfg.nms, - cfg.max_per_img) - det_results.append(tuple([det_bbox, det_label])) - else: - det_results = [ - tuple(mlvl_bs) - for mlvl_bs in zip(batch_mlvl_bboxes, batch_mlvl_scores) - ] - return det_results - - def aug_test(self, feats, img_metas, rescale=False): - """Test function with test time augmentation. - - Args: - feats (list[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains features for all images in the batch. - img_metas (list[list[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. each dict has image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[ndarray]: bbox results of each class - """ - return self.aug_test_bboxes(feats, img_metas, rescale=rescale) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/pisa_roi_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/pisa_roi_head.py deleted file mode 100644 index e01113629837eb9c065ba40cd4025899b7bd0172..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/pisa_roi_head.py +++ /dev/null @@ -1,159 +0,0 @@ -from mmdet.core import bbox2roi -from ..builder import HEADS -from ..losses.pisa_loss import carl_loss, isr_p -from .standard_roi_head import StandardRoIHead - - -@HEADS.register_module() -class PISARoIHead(StandardRoIHead): - r"""The RoI head for `Prime Sample Attention in Object Detection - `_.""" - - def forward_train(self, - x, - img_metas, - proposal_list, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None): - """Forward function for training. - - Args: - x (list[Tensor]): List of multi-level img features. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - proposals (list[Tensors]): List of region proposals. - gt_bboxes (list[Tensor]): Each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): Class indices corresponding to each box - gt_bboxes_ignore (list[Tensor], optional): Specify which bounding - boxes can be ignored when computing the loss. 
- gt_masks (None | Tensor) : True segmentation masks for each box - used if the architecture supports a segmentation task. - - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - # assign gts and sample proposals - if self.with_bbox or self.with_mask: - num_imgs = len(img_metas) - if gt_bboxes_ignore is None: - gt_bboxes_ignore = [None for _ in range(num_imgs)] - sampling_results = [] - neg_label_weights = [] - for i in range(num_imgs): - assign_result = self.bbox_assigner.assign( - proposal_list[i], gt_bboxes[i], gt_bboxes_ignore[i], - gt_labels[i]) - sampling_result = self.bbox_sampler.sample( - assign_result, - proposal_list[i], - gt_bboxes[i], - gt_labels[i], - feats=[lvl_feat[i][None] for lvl_feat in x]) - # neg label weight is obtained by sampling when using ISR-N - neg_label_weight = None - if isinstance(sampling_result, tuple): - sampling_result, neg_label_weight = sampling_result - sampling_results.append(sampling_result) - neg_label_weights.append(neg_label_weight) - - losses = dict() - # bbox head forward and loss - if self.with_bbox: - bbox_results = self._bbox_forward_train( - x, - sampling_results, - gt_bboxes, - gt_labels, - img_metas, - neg_label_weights=neg_label_weights) - losses.update(bbox_results['loss_bbox']) - - # mask head forward and loss - if self.with_mask: - mask_results = self._mask_forward_train(x, sampling_results, - bbox_results['bbox_feats'], - gt_masks, img_metas) - losses.update(mask_results['loss_mask']) - - return losses - - def _bbox_forward(self, x, rois): - """Box forward function used in both training and testing.""" - # TODO: a more flexible way to decide which feature maps to use - bbox_feats = self.bbox_roi_extractor( - x[:self.bbox_roi_extractor.num_inputs], rois) - if self.with_shared_head: - bbox_feats = self.shared_head(bbox_feats) - cls_score, bbox_pred = self.bbox_head(bbox_feats) - - bbox_results = dict( - cls_score=cls_score, bbox_pred=bbox_pred, bbox_feats=bbox_feats) - return bbox_results - - def _bbox_forward_train(self, - x, - sampling_results, - gt_bboxes, - gt_labels, - img_metas, - neg_label_weights=None): - """Run forward function and calculate loss for box head in training.""" - rois = bbox2roi([res.bboxes for res in sampling_results]) - - bbox_results = self._bbox_forward(x, rois) - - bbox_targets = self.bbox_head.get_targets(sampling_results, gt_bboxes, - gt_labels, self.train_cfg) - - # neg_label_weights obtained by sampler is image-wise, mapping back to - # the corresponding location in label weights - if neg_label_weights[0] is not None: - label_weights = bbox_targets[1] - cur_num_rois = 0 - for i in range(len(sampling_results)): - num_pos = sampling_results[i].pos_inds.size(0) - num_neg = sampling_results[i].neg_inds.size(0) - label_weights[cur_num_rois + num_pos:cur_num_rois + num_pos + - num_neg] = neg_label_weights[i] - cur_num_rois += num_pos + num_neg - - cls_score = bbox_results['cls_score'] - bbox_pred = bbox_results['bbox_pred'] - - # Apply ISR-P - isr_cfg = self.train_cfg.get('isr', None) - if isr_cfg is not None: - bbox_targets = isr_p( - cls_score, - bbox_pred, - bbox_targets, - rois, - sampling_results, - self.bbox_head.loss_cls, - self.bbox_head.bbox_coder, - **isr_cfg, - num_class=self.bbox_head.num_classes) - loss_bbox = self.bbox_head.loss(cls_score, bbox_pred, rois, - *bbox_targets) - - # Add CARL Loss - carl_cfg = self.train_cfg.get('carl', None) - if carl_cfg is not None: - loss_carl = carl_loss( - cls_score, - bbox_targets[0], - bbox_pred, - bbox_targets[2], - self.bbox_head.loss_bbox, 
- **carl_cfg, - num_class=self.bbox_head.num_classes) - loss_bbox.update(loss_carl) - - bbox_results.update(loss_bbox=loss_bbox) - return bbox_results diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/losses/balanced_l1_loss.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/losses/balanced_l1_loss.py deleted file mode 100644 index 7bcd13ff26dbdc9f6eff8d7c7b5bde742a8d7d1d..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/losses/balanced_l1_loss.py +++ /dev/null @@ -1,120 +0,0 @@ -import mmcv -import numpy as np -import torch -import torch.nn as nn - -from ..builder import LOSSES -from .utils import weighted_loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def balanced_l1_loss(pred, - target, - beta=1.0, - alpha=0.5, - gamma=1.5, - reduction='mean'): - """Calculate balanced L1 loss. - - Please see the `Libra R-CNN `_ - - Args: - pred (torch.Tensor): The prediction with shape (N, 4). - target (torch.Tensor): The learning target of the prediction with - shape (N, 4). - beta (float): The loss is a piecewise function of prediction and target - and ``beta`` serves as a threshold for the difference between the - prediction and target. Defaults to 1.0. - alpha (float): The denominator ``alpha`` in the balanced L1 loss. - Defaults to 0.5. - gamma (float): The ``gamma`` in the balanced L1 loss. - Defaults to 1.5. - reduction (str, optional): The method that reduces the loss to a - scalar. Options are "none", "mean" and "sum". - - Returns: - torch.Tensor: The calculated loss - """ - assert beta > 0 - assert pred.size() == target.size() and target.numel() > 0 - - diff = torch.abs(pred - target) - b = np.e**(gamma / alpha) - 1 - loss = torch.where( - diff < beta, alpha / b * - (b * diff + 1) * torch.log(b * diff / beta + 1) - alpha * diff, - gamma * diff + gamma / b - alpha * beta) - - return loss - - -@LOSSES.register_module() -class BalancedL1Loss(nn.Module): - """Balanced L1 Loss. - - arXiv: https://arxiv.org/pdf/1904.02701.pdf (CVPR 2019) - - Args: - alpha (float): The denominator ``alpha`` in the balanced L1 loss. - Defaults to 0.5. - gamma (float): The ``gamma`` in the balanced L1 loss. Defaults to 1.5. - beta (float, optional): The loss is a piecewise function of prediction - and target. ``beta`` serves as a threshold for the difference - between the prediction and target. Defaults to 1.0. - reduction (str, optional): The method that reduces the loss to a - scalar. Options are "none", "mean" and "sum". - loss_weight (float, optional): The weight of the loss. Defaults to 1.0 - """ - - def __init__(self, - alpha=0.5, - gamma=1.5, - beta=1.0, - reduction='mean', - loss_weight=1.0): - super(BalancedL1Loss, self).__init__() - self.alpha = alpha - self.gamma = gamma - self.beta = beta - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - """Forward function of loss. - - Args: - pred (torch.Tensor): The prediction with shape (N, 4). - target (torch.Tensor): The learning target of the prediction with - shape (N, 4). - weight (torch.Tensor, optional): Sample-wise loss weight with - shape (N, ). - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. 
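As a sanity check on the piecewise form used in `balanced_l1_loss`, the following standalone NumPy sketch (illustrative only, not the mmdet API; the defaults `alpha=0.5`, `gamma=1.5`, `beta=1.0` are taken from the docstring above) evaluates both branches and shows that they agree at `diff == beta`, which is what makes the loss continuous at the switch point:

import numpy as np

def balanced_l1(diff, alpha=0.5, gamma=1.5, beta=1.0):
    # same piecewise definition as balanced_l1_loss above, element-wise on diff
    b = np.e ** (gamma / alpha) - 1
    small = alpha / b * (b * diff + 1) * np.log(b * diff / beta + 1) - alpha * diff
    large = gamma * diff + gamma / b - alpha * beta
    return np.where(diff < beta, small, large)

diffs = np.array([0.5, 1.0 - 1e-6, 1.0, 2.0])
print(balanced_l1(diffs))   # the values just below and at beta agree (~1.0786)

With the default parameters both branches evaluate to roughly 1.0786 at `diff == 1.0`, so the gradient-friendly inner branch hands over smoothly to the linear outer branch.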
- Options are "none", "mean" and "sum". - - Returns: - torch.Tensor: The calculated loss - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss_bbox = self.loss_weight * balanced_l1_loss( - pred, - target, - weight, - alpha=self.alpha, - gamma=self.gamma, - beta=self.beta, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss_bbox diff --git a/spaces/abrar-lohia/text-2-character-anim/VQTrans/utils/quaternion.py b/spaces/abrar-lohia/text-2-character-anim/VQTrans/utils/quaternion.py deleted file mode 100644 index e2daa00aef1df60e43775864d1dd3d551f89ded8..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/VQTrans/utils/quaternion.py +++ /dev/null @@ -1,423 +0,0 @@ -# Copyright (c) 2018-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# - -import torch -import numpy as np - -_EPS4 = np.finfo(float).eps * 4.0 - -_FLOAT_EPS = np.finfo(np.float).eps - -# PyTorch-backed implementations -def qinv(q): - assert q.shape[-1] == 4, 'q must be a tensor of shape (*, 4)' - mask = torch.ones_like(q) - mask[..., 1:] = -mask[..., 1:] - return q * mask - - -def qinv_np(q): - assert q.shape[-1] == 4, 'q must be a tensor of shape (*, 4)' - return qinv(torch.from_numpy(q).float()).numpy() - - -def qnormalize(q): - assert q.shape[-1] == 4, 'q must be a tensor of shape (*, 4)' - return q / torch.norm(q, dim=-1, keepdim=True) - - -def qmul(q, r): - """ - Multiply quaternion(s) q with quaternion(s) r. - Expects two equally-sized tensors of shape (*, 4), where * denotes any number of dimensions. - Returns q*r as a tensor of shape (*, 4). - """ - assert q.shape[-1] == 4 - assert r.shape[-1] == 4 - - original_shape = q.shape - - # Compute outer product - terms = torch.bmm(r.view(-1, 4, 1), q.view(-1, 1, 4)) - - w = terms[:, 0, 0] - terms[:, 1, 1] - terms[:, 2, 2] - terms[:, 3, 3] - x = terms[:, 0, 1] + terms[:, 1, 0] - terms[:, 2, 3] + terms[:, 3, 2] - y = terms[:, 0, 2] + terms[:, 1, 3] + terms[:, 2, 0] - terms[:, 3, 1] - z = terms[:, 0, 3] - terms[:, 1, 2] + terms[:, 2, 1] + terms[:, 3, 0] - return torch.stack((w, x, y, z), dim=1).view(original_shape) - - -def qrot(q, v): - """ - Rotate vector(s) v about the rotation described by quaternion(s) q. - Expects a tensor of shape (*, 4) for q and a tensor of shape (*, 3) for v, - where * denotes any number of dimensions. - Returns a tensor of shape (*, 3). - """ - assert q.shape[-1] == 4 - assert v.shape[-1] == 3 - assert q.shape[:-1] == v.shape[:-1] - - original_shape = list(v.shape) - # print(q.shape) - q = q.contiguous().view(-1, 4) - v = v.contiguous().view(-1, 3) - - qvec = q[:, 1:] - uv = torch.cross(qvec, v, dim=1) - uuv = torch.cross(qvec, uv, dim=1) - return (v + 2 * (q[:, :1] * uv + uuv)).view(original_shape) - - -def qeuler(q, order, epsilon=0, deg=True): - """ - Convert quaternion(s) q to Euler angles. - Expects a tensor of shape (*, 4), where * denotes any number of dimensions. - Returns a tensor of shape (*, 3). 
- """ - assert q.shape[-1] == 4 - - original_shape = list(q.shape) - original_shape[-1] = 3 - q = q.view(-1, 4) - - q0 = q[:, 0] - q1 = q[:, 1] - q2 = q[:, 2] - q3 = q[:, 3] - - if order == 'xyz': - x = torch.atan2(2 * (q0 * q1 - q2 * q3), 1 - 2 * (q1 * q1 + q2 * q2)) - y = torch.asin(torch.clamp(2 * (q1 * q3 + q0 * q2), -1 + epsilon, 1 - epsilon)) - z = torch.atan2(2 * (q0 * q3 - q1 * q2), 1 - 2 * (q2 * q2 + q3 * q3)) - elif order == 'yzx': - x = torch.atan2(2 * (q0 * q1 - q2 * q3), 1 - 2 * (q1 * q1 + q3 * q3)) - y = torch.atan2(2 * (q0 * q2 - q1 * q3), 1 - 2 * (q2 * q2 + q3 * q3)) - z = torch.asin(torch.clamp(2 * (q1 * q2 + q0 * q3), -1 + epsilon, 1 - epsilon)) - elif order == 'zxy': - x = torch.asin(torch.clamp(2 * (q0 * q1 + q2 * q3), -1 + epsilon, 1 - epsilon)) - y = torch.atan2(2 * (q0 * q2 - q1 * q3), 1 - 2 * (q1 * q1 + q2 * q2)) - z = torch.atan2(2 * (q0 * q3 - q1 * q2), 1 - 2 * (q1 * q1 + q3 * q3)) - elif order == 'xzy': - x = torch.atan2(2 * (q0 * q1 + q2 * q3), 1 - 2 * (q1 * q1 + q3 * q3)) - y = torch.atan2(2 * (q0 * q2 + q1 * q3), 1 - 2 * (q2 * q2 + q3 * q3)) - z = torch.asin(torch.clamp(2 * (q0 * q3 - q1 * q2), -1 + epsilon, 1 - epsilon)) - elif order == 'yxz': - x = torch.asin(torch.clamp(2 * (q0 * q1 - q2 * q3), -1 + epsilon, 1 - epsilon)) - y = torch.atan2(2 * (q1 * q3 + q0 * q2), 1 - 2 * (q1 * q1 + q2 * q2)) - z = torch.atan2(2 * (q1 * q2 + q0 * q3), 1 - 2 * (q1 * q1 + q3 * q3)) - elif order == 'zyx': - x = torch.atan2(2 * (q0 * q1 + q2 * q3), 1 - 2 * (q1 * q1 + q2 * q2)) - y = torch.asin(torch.clamp(2 * (q0 * q2 - q1 * q3), -1 + epsilon, 1 - epsilon)) - z = torch.atan2(2 * (q0 * q3 + q1 * q2), 1 - 2 * (q2 * q2 + q3 * q3)) - else: - raise - - if deg: - return torch.stack((x, y, z), dim=1).view(original_shape) * 180 / np.pi - else: - return torch.stack((x, y, z), dim=1).view(original_shape) - - -# Numpy-backed implementations - -def qmul_np(q, r): - q = torch.from_numpy(q).contiguous().float() - r = torch.from_numpy(r).contiguous().float() - return qmul(q, r).numpy() - - -def qrot_np(q, v): - q = torch.from_numpy(q).contiguous().float() - v = torch.from_numpy(v).contiguous().float() - return qrot(q, v).numpy() - - -def qeuler_np(q, order, epsilon=0, use_gpu=False): - if use_gpu: - q = torch.from_numpy(q).cuda().float() - return qeuler(q, order, epsilon).cpu().numpy() - else: - q = torch.from_numpy(q).contiguous().float() - return qeuler(q, order, epsilon).numpy() - - -def qfix(q): - """ - Enforce quaternion continuity across the time dimension by selecting - the representation (q or -q) with minimal distance (or, equivalently, maximal dot product) - between two consecutive frames. - - Expects a tensor of shape (L, J, 4), where L is the sequence length and J is the number of joints. - Returns a tensor of the same shape. - """ - assert len(q.shape) == 3 - assert q.shape[-1] == 4 - - result = q.copy() - dot_products = np.sum(q[1:] * q[:-1], axis=2) - mask = dot_products < 0 - mask = (np.cumsum(mask, axis=0) % 2).astype(bool) - result[1:][mask] *= -1 - return result - - -def euler2quat(e, order, deg=True): - """ - Convert Euler angles to quaternions. - """ - assert e.shape[-1] == 3 - - original_shape = list(e.shape) - original_shape[-1] = 4 - - e = e.view(-1, 3) - - ## if euler angles in degrees - if deg: - e = e * np.pi / 180. 
- - x = e[:, 0] - y = e[:, 1] - z = e[:, 2] - - rx = torch.stack((torch.cos(x / 2), torch.sin(x / 2), torch.zeros_like(x), torch.zeros_like(x)), dim=1) - ry = torch.stack((torch.cos(y / 2), torch.zeros_like(y), torch.sin(y / 2), torch.zeros_like(y)), dim=1) - rz = torch.stack((torch.cos(z / 2), torch.zeros_like(z), torch.zeros_like(z), torch.sin(z / 2)), dim=1) - - result = None - for coord in order: - if coord == 'x': - r = rx - elif coord == 'y': - r = ry - elif coord == 'z': - r = rz - else: - raise - if result is None: - result = r - else: - result = qmul(result, r) - - # Reverse antipodal representation to have a non-negative "w" - if order in ['xyz', 'yzx', 'zxy']: - result *= -1 - - return result.view(original_shape) - - -def expmap_to_quaternion(e): - """ - Convert axis-angle rotations (aka exponential maps) to quaternions. - Stable formula from "Practical Parameterization of Rotations Using the Exponential Map". - Expects a tensor of shape (*, 3), where * denotes any number of dimensions. - Returns a tensor of shape (*, 4). - """ - assert e.shape[-1] == 3 - - original_shape = list(e.shape) - original_shape[-1] = 4 - e = e.reshape(-1, 3) - - theta = np.linalg.norm(e, axis=1).reshape(-1, 1) - w = np.cos(0.5 * theta).reshape(-1, 1) - xyz = 0.5 * np.sinc(0.5 * theta / np.pi) * e - return np.concatenate((w, xyz), axis=1).reshape(original_shape) - - -def euler_to_quaternion(e, order): - """ - Convert Euler angles to quaternions. - """ - assert e.shape[-1] == 3 - - original_shape = list(e.shape) - original_shape[-1] = 4 - - e = e.reshape(-1, 3) - - x = e[:, 0] - y = e[:, 1] - z = e[:, 2] - - rx = np.stack((np.cos(x / 2), np.sin(x / 2), np.zeros_like(x), np.zeros_like(x)), axis=1) - ry = np.stack((np.cos(y / 2), np.zeros_like(y), np.sin(y / 2), np.zeros_like(y)), axis=1) - rz = np.stack((np.cos(z / 2), np.zeros_like(z), np.zeros_like(z), np.sin(z / 2)), axis=1) - - result = None - for coord in order: - if coord == 'x': - r = rx - elif coord == 'y': - r = ry - elif coord == 'z': - r = rz - else: - raise - if result is None: - result = r - else: - result = qmul_np(result, r) - - # Reverse antipodal representation to have a non-negative "w" - if order in ['xyz', 'yzx', 'zxy']: - result *= -1 - - return result.reshape(original_shape) - - -def quaternion_to_matrix(quaternions): - """ - Convert rotations given as quaternions to rotation matrices. - Args: - quaternions: quaternions with real part first, - as tensor of shape (..., 4). - Returns: - Rotation matrices as tensor of shape (..., 3, 3). 
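The `np.sinc` term in `expmap_to_quaternion` is what keeps the axis-angle conversion stable near zero rotation, since it avoids dividing by a possibly tiny angle. A minimal NumPy check (the input value is chosen only to exercise the small-angle case):

import numpy as np

e = np.array([1e-12, 0.0, 0.0])                 # near-zero axis-angle vector
theta = np.linalg.norm(e)
w = np.cos(0.5 * theta)
xyz = 0.5 * np.sinc(0.5 * theta / np.pi) * e    # np.sinc(x) = sin(pi*x) / (pi*x)
print(np.concatenate(([w], xyz)))               # ~[1.0, 5e-13, 0.0, 0.0], no NaN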
- """ - r, i, j, k = torch.unbind(quaternions, -1) - two_s = 2.0 / (quaternions * quaternions).sum(-1) - - o = torch.stack( - ( - 1 - two_s * (j * j + k * k), - two_s * (i * j - k * r), - two_s * (i * k + j * r), - two_s * (i * j + k * r), - 1 - two_s * (i * i + k * k), - two_s * (j * k - i * r), - two_s * (i * k - j * r), - two_s * (j * k + i * r), - 1 - two_s * (i * i + j * j), - ), - -1, - ) - return o.reshape(quaternions.shape[:-1] + (3, 3)) - - -def quaternion_to_matrix_np(quaternions): - q = torch.from_numpy(quaternions).contiguous().float() - return quaternion_to_matrix(q).numpy() - - -def quaternion_to_cont6d_np(quaternions): - rotation_mat = quaternion_to_matrix_np(quaternions) - cont_6d = np.concatenate([rotation_mat[..., 0], rotation_mat[..., 1]], axis=-1) - return cont_6d - - -def quaternion_to_cont6d(quaternions): - rotation_mat = quaternion_to_matrix(quaternions) - cont_6d = torch.cat([rotation_mat[..., 0], rotation_mat[..., 1]], dim=-1) - return cont_6d - - -def cont6d_to_matrix(cont6d): - assert cont6d.shape[-1] == 6, "The last dimension must be 6" - x_raw = cont6d[..., 0:3] - y_raw = cont6d[..., 3:6] - - x = x_raw / torch.norm(x_raw, dim=-1, keepdim=True) - z = torch.cross(x, y_raw, dim=-1) - z = z / torch.norm(z, dim=-1, keepdim=True) - - y = torch.cross(z, x, dim=-1) - - x = x[..., None] - y = y[..., None] - z = z[..., None] - - mat = torch.cat([x, y, z], dim=-1) - return mat - - -def cont6d_to_matrix_np(cont6d): - q = torch.from_numpy(cont6d).contiguous().float() - return cont6d_to_matrix(q).numpy() - - -def qpow(q0, t, dtype=torch.float): - ''' q0 : tensor of quaternions - t: tensor of powers - ''' - q0 = qnormalize(q0) - theta0 = torch.acos(q0[..., 0]) - - ## if theta0 is close to zero, add epsilon to avoid NaNs - mask = (theta0 <= 10e-10) * (theta0 >= -10e-10) - theta0 = (1 - mask) * theta0 + mask * 10e-10 - v0 = q0[..., 1:] / torch.sin(theta0).view(-1, 1) - - if isinstance(t, torch.Tensor): - q = torch.zeros(t.shape + q0.shape) - theta = t.view(-1, 1) * theta0.view(1, -1) - else: ## if t is a number - q = torch.zeros(q0.shape) - theta = t * theta0 - - q[..., 0] = torch.cos(theta) - q[..., 1:] = v0 * torch.sin(theta).unsqueeze(-1) - - return q.to(dtype) - - -def qslerp(q0, q1, t): - ''' - q0: starting quaternion - q1: ending quaternion - t: array of points along the way - - Returns: - Tensor of Slerps: t.shape + q0.shape - ''' - - q0 = qnormalize(q0) - q1 = qnormalize(q1) - q_ = qpow(qmul(q1, qinv(q0)), t) - - return qmul(q_, - q0.contiguous().view(torch.Size([1] * len(t.shape)) + q0.shape).expand(t.shape + q0.shape).contiguous()) - - -def qbetween(v0, v1): - ''' - find the quaternion used to rotate v0 to v1 - ''' - assert v0.shape[-1] == 3, 'v0 must be of the shape (*, 3)' - assert v1.shape[-1] == 3, 'v1 must be of the shape (*, 3)' - - v = torch.cross(v0, v1) - w = torch.sqrt((v0 ** 2).sum(dim=-1, keepdim=True) * (v1 ** 2).sum(dim=-1, keepdim=True)) + (v0 * v1).sum(dim=-1, - keepdim=True) - return qnormalize(torch.cat([w, v], dim=-1)) - - -def qbetween_np(v0, v1): - ''' - find the quaternion used to rotate v0 to v1 - ''' - assert v0.shape[-1] == 3, 'v0 must be of the shape (*, 3)' - assert v1.shape[-1] == 3, 'v1 must be of the shape (*, 3)' - - v0 = torch.from_numpy(v0).float() - v1 = torch.from_numpy(v1).float() - return qbetween(v0, v1).numpy() - - -def lerp(p0, p1, t): - if not isinstance(t, torch.Tensor): - t = torch.Tensor([t]) - - new_shape = t.shape + p0.shape - new_view_t = t.shape + torch.Size([1] * len(p0.shape)) - new_view_p = torch.Size([1] * 
len(t.shape)) + p0.shape - p0 = p0.view(new_view_p).expand(new_shape) - p1 = p1.view(new_view_p).expand(new_shape) - t = t.view(new_view_t).expand(new_shape) - - return p0 + t * (p1 - p0) diff --git a/spaces/ahnafsamin/GroTTS-FastSpeech2/app.py b/spaces/ahnafsamin/GroTTS-FastSpeech2/app.py deleted file mode 100644 index e009177d830aa4740f5e4be6678df062b3470c44..0000000000000000000000000000000000000000 --- a/spaces/ahnafsamin/GroTTS-FastSpeech2/app.py +++ /dev/null @@ -1,74 +0,0 @@ -import os - - - -os.environ["CURL_CA_BUNDLE"]="" - -import gradio as gr -import time -import urllib.request -from pathlib import Path -import os -import torch -import scipy.io.wavfile -from espnet2.bin.tts_inference import Text2Speech -from espnet2.utils.types import str_or_none -from parallel_wavegan.utils import download_pretrained_model - - -gos_text2speech = Text2Speech.from_pretrained( - model_tag="https://huggingface.co/ahnafsamin/FastSpeech2-gronings/resolve/main/tts_train_fastspeech2_raw_char_tacotron_train.loss.ave.zip", - vocoder_tag="parallel_wavegan/ljspeech_parallel_wavegan.v3" -) - -hoogelandsters_text2speech = Text2Speech.from_pretrained( - model_tag="https://huggingface.co/ahnafsamin/FastSpeech2-gronings-hoogelandsters/resolve/main/tts_train_fastspeech2_raw_char_tacotron_train.loss.ave.zip", - vocoder_tag="parallel_wavegan/ljspeech_parallel_wavegan.v3" -) - -westerkwartiers_text2speech = Text2Speech.from_pretrained( - model_tag="https://huggingface.co/ahnafsamin/FastSpeech2-gronings-westerkwartiers/resolve/main/tts_train_fastspeech2_raw_char_tacotron_train.loss.ave.zip", - vocoder_tag="parallel_wavegan/ljspeech_parallel_wavegan.v3" -) - -oldambster_text2speech = Text2Speech.from_pretrained( - model_tag="https://huggingface.co/ahnafsamin/FastSpeech2-gronings-oldambster/resolve/main/tts_train_fastspeech2_raw_char_tacotron_train.loss.ave.zip", - vocoder_tag="parallel_wavegan/ljspeech_parallel_wavegan.v3" -) - -def inference(text,lang): - with torch.no_grad(): - if lang == "gronings": - wav = gos_text2speech(text)["wav"] - scipy.io.wavfile.write("out.wav", gos_text2speech.fs , wav.view(-1).cpu().numpy()) - elif lang == "gronings hoogelandsters": - wav = hoogelandsters_text2speech(text)["wav"] - scipy.io.wavfile.write("out.wav", hoogelandsters_text2speech.fs , wav.view(-1).cpu().numpy()) - elif lang == "gronings westerkwartiers": - wav = westerkwartiers_text2speech(text)["wav"] - wav = wav * 15 - scipy.io.wavfile.write("out.wav", westerkwartiers_text2speech.fs , wav.view(-1).cpu().numpy()) - #data, sr = librosa.load("output.wav") - #factor = 2.0 - #data *= factor - #sf.write("out.wav", data, sr) - - elif lang == "gronings oldambster": - wav = oldambster_text2speech(text)["wav"] - scipy.io.wavfile.write("out.wav", oldambster_text2speech.fs , wav.view(-1).cpu().numpy()) - - return "out.wav", "out.wav" - -title = "GroTTS" -examples = [ - ['Ze gingen mit klas noar waddendiek, over en deur bragel lopen.', 'gronings'] -] - - -gr.Interface( - inference, - [gr.inputs.Textbox(label="input text", lines=3), gr.inputs.Radio(choices=["gronings", "gronings hoogelandsters", "gronings westerkwartiers", "gronings oldambster"], type="value", default="gronings", label="language")], - [gr.outputs.Audio(type="file", label="Output"), gr.outputs.File()], - title=title, - examples=examples - ).launch(enable_queue=True) diff --git a/spaces/aifartist/sdzoom-Latent-Consistency-Model/gradio-app.py b/spaces/aifartist/sdzoom-Latent-Consistency-Model/gradio-app.py deleted file mode 100644 index 
d6ee6a5c07c3e4b911c52e52d9a830870a2e8c7e..0000000000000000000000000000000000000000 --- a/spaces/aifartist/sdzoom-Latent-Consistency-Model/gradio-app.py +++ /dev/null @@ -1,140 +0,0 @@ -import gradio as gr -from PIL import Image -import torch -from diffusers import DiffusionPipeline, AutoencoderTiny -import os - -SAFETY_CHECKER = os.environ.get("SAFETY_CHECKER", None) -TORCH_COMPILE = os.environ.get("TORCH_COMPILE", None) - -if SAFETY_CHECKER: - pipe = DiffusionPipeline.from_pretrained( - "SimianLuo/LCM_Dreamshaper_v7", - custom_pipeline="lcm_txt2img", - scheduler=None, - ) -else: - pipe = DiffusionPipeline.from_pretrained( - "SimianLuo/LCM_Dreamshaper_v7", - custom_pipeline="lcm_txt2img", - scheduler=None, - safety_checker=None, - ) -pipe.to(device="cuda", dtype=torch.float16) -pipe.vae = AutoencoderTiny.from_pretrained( - "madebyollin/taesd", device="cuda", torch_dtype=torch.float16 -) -pipe.vae = pipe.vae.cuda() -pipe.unet.to(memory_format=torch.channels_last) -pipe.set_progress_bar_config(disable=True) - -if TORCH_COMPILE: - pipe.text_encoder = torch.compile(pipe.text_encoder, mode="max-autotune") - pipe.tokenizer = torch.compile(pipe.tokenizer, mode="max-autotune") - pipe.unet = torch.compile(pipe.unet, mode="max-autotune") - pipe.vae = torch.compile(pipe.vae, mode="max-autotune") - - -def predict(prompt1, prompt2, merge_ratio, guidance, steps, sharpness, seed=1231231): - torch.manual_seed(seed) - results = pipe( - prompt1=prompt1, - prompt2=prompt2, - sv=merge_ratio, - sharpness=sharpness, - width=512, - height=512, - num_inference_steps=steps, - guidance_scale=guidance, - lcm_origin_steps=50, - output_type="pil", - # return_dict=False, - ) - nsfw_content_detected = ( - results.nsfw_content_detected[0] - if "nsfw_content_detected" in results - else False - ) - if nsfw_content_detected: - raise gr.Error("NSFW content detected. Please try another prompt.") - return results.images[0] - - -css = """ -#container{ - margin: 0 auto; - max-width: 80rem; -} -#intro{ - max-width: 32rem; - text-align: center; - margin: 0 auto; -} -""" -with gr.Blocks(css=css) as demo: - with gr.Column(elem_id="container"): - gr.Markdown( - """# SDZoom - - Welcome to sdzoom, a testbed application designed for optimizing and experimenting with various - configurations to achieve the fastest Stable Diffusion (SD) pipelines. - RTSD leverages the expertise provided by Latent Consistency Models (LCM). For more information about LCM, - visit their website at [Latent Consistency Models](https://latent-consistency-models.github.io/). 
- - """, - elem_id="intro", - ) - with gr.Row(): - with gr.Column(): - image = gr.Image(type="pil") - with gr.Column(): - merge_ratio = gr.Slider( - value=50, minimum=1, maximum=100, step=1, label="Merge Ratio" - ) - guidance = gr.Slider( - label="Guidance", minimum=1, maximum=50, value=10.0, step=0.01 - ) - steps = gr.Slider(label="Steps", value=4, minimum=2, maximum=20, step=1) - sharpness = gr.Slider( - value=1.0, minimum=0, maximum=1, step=0.001, label="Sharpness" - ) - seed = gr.Slider( - randomize=True, minimum=0, maximum=12013012031030, label="Seed" - ) - prompt1 = gr.Textbox(label="Prompt 1") - prompt2 = gr.Textbox(label="Prompt 2") - generate_bt = gr.Button("Generate") - - inputs = [prompt1, prompt2, merge_ratio, guidance, steps, sharpness, seed] - gr.Examples( - examples=[ - ["Elon Musk", "Mark Zuckerberg", 50, 10.0, 4, 1.0, 1231231], - ["Elon Musk", "Bill Gates", 50, 10.0, 4, 1.0, 53453], - [ - "Asian women, intricate jewlery in her hair, 8k", - "Tom Cruise, intricate jewlery in her hair, 8k", - 50, - 10.0, - 4, - 1.0, - 542343, - ], - ], - fn=predict, - inputs=inputs, - outputs=image, - ) - generate_bt.click(fn=predict, inputs=inputs, outputs=image, show_progress=False) - seed.change(fn=predict, inputs=inputs, outputs=image, show_progress=False) - merge_ratio.change( - fn=predict, inputs=inputs, outputs=image, show_progress=False - ) - guidance.change(fn=predict, inputs=inputs, outputs=image, show_progress=False) - steps.change(fn=predict, inputs=inputs, outputs=image, show_progress=False) - sharpness.change(fn=predict, inputs=inputs, outputs=image, show_progress=False) - prompt1.change(fn=predict, inputs=inputs, outputs=image, show_progress=False) - prompt2.change(fn=predict, inputs=inputs, outputs=image, show_progress=False) - -demo.queue() -if __name__ == "__main__": - demo.launch() diff --git a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/jsut/voc1/path.sh b/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/jsut/voc1/path.sh deleted file mode 100644 index b0ca27c615f70aa29e240222ec370f8ad4e7b45a..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/jsut/voc1/path.sh +++ /dev/null @@ -1,33 +0,0 @@ -# cuda related -export CUDA_HOME=/usr/local/cuda-10.0 -export LD_LIBRARY_PATH="${CUDA_HOME}/lib64:${LD_LIBRARY_PATH}" - -# path related -export PRJ_ROOT="${PWD}/../../.." -if [ -e "${PRJ_ROOT}/tools/venv/bin/activate" ]; then - # shellcheck disable=SC1090 - . "${PRJ_ROOT}/tools/venv/bin/activate" -fi - -# python related -export OMP_NUM_THREADS=1 -export PYTHONIOENCODING=UTF-8 -export MPL_BACKEND=Agg - -# check installation -if ! command -v parallel-wavegan-train > /dev/null; then - echo "Error: It seems setup is not finished." >&2 - echo "Error: Please setup your environment by following README.md" >&2 - return 1 -fi -if ! command -v jq > /dev/null; then - echo "Error: It seems jq is not installed." >&2 - echo "Error: Please install via \`sudo apt-get install jq\`." >&2 - echo "Error: If you do not have sudo, please download from https://stedolan.github.io/jq/download/." >&2 - return 1 -fi -if ! command -v yq > /dev/null; then - echo "Error: It seems yq is not installed." >&2 - echo "Error: Please install via \`pip install yq\`." 
>&2 - return 1 -fi diff --git a/spaces/akhaliq/stylegan3_clip/metrics/perceptual_path_length.py b/spaces/akhaliq/stylegan3_clip/metrics/perceptual_path_length.py deleted file mode 100644 index 7fb74396475181c3a80feb6321d3b0f45eda7000..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/stylegan3_clip/metrics/perceptual_path_length.py +++ /dev/null @@ -1,125 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Perceptual Path Length (PPL) from the paper "A Style-Based Generator -Architecture for Generative Adversarial Networks". Matches the original -implementation by Karras et al. at -https://github.com/NVlabs/stylegan/blob/master/metrics/perceptual_path_length.py""" - -import copy -import numpy as np -import torch -from . import metric_utils - -#---------------------------------------------------------------------------- - -# Spherical interpolation of a batch of vectors. -def slerp(a, b, t): - a = a / a.norm(dim=-1, keepdim=True) - b = b / b.norm(dim=-1, keepdim=True) - d = (a * b).sum(dim=-1, keepdim=True) - p = t * torch.acos(d) - c = b - d * a - c = c / c.norm(dim=-1, keepdim=True) - d = a * torch.cos(p) + c * torch.sin(p) - d = d / d.norm(dim=-1, keepdim=True) - return d - -#---------------------------------------------------------------------------- - -class PPLSampler(torch.nn.Module): - def __init__(self, G, G_kwargs, epsilon, space, sampling, crop, vgg16): - assert space in ['z', 'w'] - assert sampling in ['full', 'end'] - super().__init__() - self.G = copy.deepcopy(G) - self.G_kwargs = G_kwargs - self.epsilon = epsilon - self.space = space - self.sampling = sampling - self.crop = crop - self.vgg16 = copy.deepcopy(vgg16) - - def forward(self, c): - # Generate random latents and interpolation t-values. - t = torch.rand([c.shape[0]], device=c.device) * (1 if self.sampling == 'full' else 0) - z0, z1 = torch.randn([c.shape[0] * 2, self.G.z_dim], device=c.device).chunk(2) - - # Interpolate in W or Z. - if self.space == 'w': - w0, w1 = self.G.mapping(z=torch.cat([z0,z1]), c=torch.cat([c,c])).chunk(2) - wt0 = w0.lerp(w1, t.unsqueeze(1).unsqueeze(2)) - wt1 = w0.lerp(w1, t.unsqueeze(1).unsqueeze(2) + self.epsilon) - else: # space == 'z' - zt0 = slerp(z0, z1, t.unsqueeze(1)) - zt1 = slerp(z0, z1, t.unsqueeze(1) + self.epsilon) - wt0, wt1 = self.G.mapping(z=torch.cat([zt0,zt1]), c=torch.cat([c,c])).chunk(2) - - # Randomize noise buffers. - for name, buf in self.G.named_buffers(): - if name.endswith('.noise_const'): - buf.copy_(torch.randn_like(buf)) - - # Generate images. - img = self.G.synthesis(ws=torch.cat([wt0,wt1]), noise_mode='const', force_fp32=True, **self.G_kwargs) - - # Center crop. - if self.crop: - assert img.shape[2] == img.shape[3] - c = img.shape[2] // 8 - img = img[:, :, c*3 : c*7, c*2 : c*6] - - # Downsample to 256x256. - factor = self.G.img_resolution // 256 - if factor > 1: - img = img.reshape([-1, img.shape[1], img.shape[2] // factor, factor, img.shape[3] // factor, factor]).mean([3, 5]) - - # Scale dynamic range from [-1,1] to [0,255]. - img = (img + 1) * (255 / 2) - if self.G.img_channels == 1: - img = img.repeat([1, 3, 1, 1]) - - # Evaluate differential LPIPS. 
- lpips_t0, lpips_t1 = self.vgg16(img, resize_images=False, return_lpips=True).chunk(2) - dist = (lpips_t0 - lpips_t1).square().sum(1) / self.epsilon ** 2 - return dist - -#---------------------------------------------------------------------------- - -def compute_ppl(opts, num_samples, epsilon, space, sampling, crop, batch_size): - vgg16_url = 'https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/metrics/vgg16.pkl' - vgg16 = metric_utils.get_feature_detector(vgg16_url, num_gpus=opts.num_gpus, rank=opts.rank, verbose=opts.progress.verbose) - - # Setup sampler and labels. - sampler = PPLSampler(G=opts.G, G_kwargs=opts.G_kwargs, epsilon=epsilon, space=space, sampling=sampling, crop=crop, vgg16=vgg16) - sampler.eval().requires_grad_(False).to(opts.device) - c_iter = metric_utils.iterate_random_labels(opts=opts, batch_size=batch_size) - - # Sampling loop. - dist = [] - progress = opts.progress.sub(tag='ppl sampling', num_items=num_samples) - for batch_start in range(0, num_samples, batch_size * opts.num_gpus): - progress.update(batch_start) - x = sampler(next(c_iter)) - for src in range(opts.num_gpus): - y = x.clone() - if opts.num_gpus > 1: - torch.distributed.broadcast(y, src=src) - dist.append(y) - progress.update(num_samples) - - # Compute PPL. - if opts.rank != 0: - return float('nan') - dist = torch.cat(dist)[:num_samples].cpu().numpy() - lo = np.percentile(dist, 1, interpolation='lower') - hi = np.percentile(dist, 99, interpolation='higher') - ppl = np.extract(np.logical_and(dist >= lo, dist <= hi), dist).mean() - return float(ppl) - -#---------------------------------------------------------------------------- diff --git a/spaces/akhaliq/yolov7/utils/aws/__init__.py b/spaces/akhaliq/yolov7/utils/aws/__init__.py deleted file mode 100644 index e9691f241edc06ad981b36ca27f7eff9e46686ed..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/yolov7/utils/aws/__init__.py +++ /dev/null @@ -1 +0,0 @@ -#init \ No newline at end of file diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/models/wheel.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/models/wheel.py deleted file mode 100644 index aaf218d1a00ce41795ec9dc63e75df034ffb65d1..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/models/wheel.py +++ /dev/null @@ -1,89 +0,0 @@ -"""Represents a wheel file and provides access to the various parts of the -name that have meaning. -""" -import re -from typing import Dict, Iterable, List - -from pip._vendor.packaging.tags import Tag - -from pip._internal.exceptions import InvalidWheelFilename - - -class Wheel: - """A wheel file""" - - wheel_file_re = re.compile( - r"""^(?P(?P[^\s-]+?)-(?P[^\s-]*?)) - ((-(?P\d[^-]*?))?-(?P[^\s-]+?)-(?P[^\s-]+?)-(?P[^\s-]+?) 
- \.whl|\.dist-info)$""", - re.VERBOSE, - ) - - def __init__(self, filename: str) -> None: - """ - :raises InvalidWheelFilename: when the filename is invalid for a wheel - """ - wheel_info = self.wheel_file_re.match(filename) - if not wheel_info: - raise InvalidWheelFilename(f"{filename} is not a valid wheel filename.") - self.filename = filename - self.name = wheel_info.group("name").replace("_", "-") - # we'll assume "_" means "-" due to wheel naming scheme - # (https://github.com/pypa/pip/issues/1150) - self.version = wheel_info.group("ver").replace("_", "-") - self.build_tag = wheel_info.group("build") - self.pyversions = wheel_info.group("pyver").split(".") - self.abis = wheel_info.group("abi").split(".") - self.plats = wheel_info.group("plat").split(".") - - # All the tag combinations from this file - self.file_tags = { - Tag(x, y, z) for x in self.pyversions for y in self.abis for z in self.plats - } - - def get_formatted_file_tags(self) -> List[str]: - """Return the wheel's tags as a sorted list of strings.""" - return sorted(str(tag) for tag in self.file_tags) - - def support_index_min(self, tags: List[Tag]) -> int: - """Return the lowest index that one of the wheel's file_tag combinations - achieves in the given list of supported tags. - - For example, if there are 8 supported tags and one of the file tags - is first in the list, then return 0. - - :param tags: the PEP 425 tags to check the wheel against, in order - with most preferred first. - - :raises ValueError: If none of the wheel's file tags match one of - the supported tags. - """ - return min(tags.index(tag) for tag in self.file_tags if tag in tags) - - def find_most_preferred_tag( - self, tags: List[Tag], tag_to_priority: Dict[Tag, int] - ) -> int: - """Return the priority of the most preferred tag that one of the wheel's file - tag combinations achieves in the given list of supported tags using the given - tag_to_priority mapping, where lower priorities are more-preferred. - - This is used in place of support_index_min in some cases in order to avoid - an expensive linear scan of a large list of tags. - - :param tags: the PEP 425 tags to check the wheel against. - :param tag_to_priority: a mapping from tag to priority of that tag, where - lower is more preferred. - - :raises ValueError: If none of the wheel's file tags match one of - the supported tags. - """ - return min( - tag_to_priority[tag] for tag in self.file_tags if tag in tag_to_priority - ) - - def supported(self, tags: Iterable[Tag]) -> bool: - """Return whether the wheel is compatible with one of the given tags. - - :param tags: the PEP 425 tags to check the wheel against. - """ - return not self.file_tags.isdisjoint(tags) diff --git a/spaces/ali-ghamdan/gfp-Gans/CODE_OF_CONDUCT.md b/spaces/ali-ghamdan/gfp-Gans/CODE_OF_CONDUCT.md deleted file mode 100644 index e8cc4daa4345590464314889b187d6a2d7a8e20f..0000000000000000000000000000000000000000 --- a/spaces/ali-ghamdan/gfp-Gans/CODE_OF_CONDUCT.md +++ /dev/null @@ -1,128 +0,0 @@ -# Contributor Covenant Code of Conduct - -## Our Pledge - -We as members, contributors, and leaders pledge to make participation in our -community a harassment-free experience for everyone, regardless of age, body -size, visible or invisible disability, ethnicity, sex characteristics, gender -identity and expression, level of experience, education, socio-economic status, -nationality, personal appearance, race, religion, or sexual identity -and orientation. 
- -We pledge to act and interact in ways that contribute to an open, welcoming, -diverse, inclusive, and healthy community. - -## Our Standards - -Examples of behavior that contributes to a positive environment for our -community include: - -* Demonstrating empathy and kindness toward other people -* Being respectful of differing opinions, viewpoints, and experiences -* Giving and gracefully accepting constructive feedback -* Accepting responsibility and apologizing to those affected by our mistakes, - and learning from the experience -* Focusing on what is best not just for us as individuals, but for the - overall community - -Examples of unacceptable behavior include: - -* The use of sexualized language or imagery, and sexual attention or - advances of any kind -* Trolling, insulting or derogatory comments, and personal or political attacks -* Public or private harassment -* Publishing others' private information, such as a physical or email - address, without their explicit permission -* Other conduct which could reasonably be considered inappropriate in a - professional setting - -## Enforcement Responsibilities - -Community leaders are responsible for clarifying and enforcing our standards of -acceptable behavior and will take appropriate and fair corrective action in -response to any behavior that they deem inappropriate, threatening, offensive, -or harmful. - -Community leaders have the right and responsibility to remove, edit, or reject -comments, commits, code, wiki edits, issues, and other contributions that are -not aligned to this Code of Conduct, and will communicate reasons for moderation -decisions when appropriate. - -## Scope - -This Code of Conduct applies within all community spaces, and also applies when -an individual is officially representing the community in public spaces. -Examples of representing our community include using an official e-mail address, -posting via an official social media account, or acting as an appointed -representative at an online or offline event. - -## Enforcement - -Instances of abusive, harassing, or otherwise unacceptable behavior may be -reported to the community leaders responsible for enforcement at -xintao.wang@outlook.com or xintaowang@tencent.com. -All complaints will be reviewed and investigated promptly and fairly. - -All community leaders are obligated to respect the privacy and security of the -reporter of any incident. - -## Enforcement Guidelines - -Community leaders will follow these Community Impact Guidelines in determining -the consequences for any action they deem in violation of this Code of Conduct: - -### 1. Correction - -**Community Impact**: Use of inappropriate language or other behavior deemed -unprofessional or unwelcome in the community. - -**Consequence**: A private, written warning from community leaders, providing -clarity around the nature of the violation and an explanation of why the -behavior was inappropriate. A public apology may be requested. - -### 2. Warning - -**Community Impact**: A violation through a single incident or series -of actions. - -**Consequence**: A warning with consequences for continued behavior. No -interaction with the people involved, including unsolicited interaction with -those enforcing the Code of Conduct, for a specified period of time. This -includes avoiding interactions in community spaces as well as external channels -like social media. Violating these terms may lead to a temporary or -permanent ban. - -### 3. 
Temporary Ban - -**Community Impact**: A serious violation of community standards, including -sustained inappropriate behavior. - -**Consequence**: A temporary ban from any sort of interaction or public -communication with the community for a specified period of time. No public or -private interaction with the people involved, including unsolicited interaction -with those enforcing the Code of Conduct, is allowed during this period. -Violating these terms may lead to a permanent ban. - -### 4. Permanent Ban - -**Community Impact**: Demonstrating a pattern of violation of community -standards, including sustained inappropriate behavior, harassment of an -individual, or aggression toward or disparagement of classes of individuals. - -**Consequence**: A permanent ban from any sort of public interaction within -the community. - -## Attribution - -This Code of Conduct is adapted from the [Contributor Covenant][homepage], -version 2.0, available at -https://www.contributor-covenant.org/version/2/0/code_of_conduct.html. - -Community Impact Guidelines were inspired by [Mozilla's code of conduct -enforcement ladder](https://github.com/mozilla/diversity). - -[homepage]: https://www.contributor-covenant.org - -For answers to common questions about this code of conduct, see the FAQ at -https://www.contributor-covenant.org/faq. Translations are available at -https://www.contributor-covenant.org/translations. diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/bindings/java/c/src/com_portaudio_BlockingStream.h b/spaces/amarchheda/ChordDuplicate/portaudio/bindings/java/c/src/com_portaudio_BlockingStream.h deleted file mode 100644 index e405faef03d23b2578ec381c6f71f0d374163695..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/bindings/java/c/src/com_portaudio_BlockingStream.h +++ /dev/null @@ -1,130 +0,0 @@ -/* DO NOT EDIT THIS FILE - it is machine generated */ -#if defined(__APPLE__) -#include -#else -#include -#endif - -/* Header for class com_portaudio_BlockingStream */ - -#ifndef _Included_com_portaudio_BlockingStream -#define _Included_com_portaudio_BlockingStream -#ifdef __cplusplus -extern "C" { -#endif -/* - * Class: com_portaudio_BlockingStream - * Method: getReadAvailable - * Signature: ()I - */ -JNIEXPORT jint JNICALL Java_com_portaudio_BlockingStream_getReadAvailable - (JNIEnv *, jobject); - -/* - * Class: com_portaudio_BlockingStream - * Method: getWriteAvailable - * Signature: ()I - */ -JNIEXPORT jint JNICALL Java_com_portaudio_BlockingStream_getWriteAvailable - (JNIEnv *, jobject); - -/* - * Class: com_portaudio_BlockingStream - * Method: readFloats - * Signature: ([FI)Z - */ -JNIEXPORT jboolean JNICALL Java_com_portaudio_BlockingStream_readFloats - (JNIEnv *, jobject, jfloatArray, jint); - -/* - * Class: com_portaudio_BlockingStream - * Method: writeFloats - * Signature: ([FI)Z - */ -JNIEXPORT jboolean JNICALL Java_com_portaudio_BlockingStream_writeFloats - (JNIEnv *, jobject, jfloatArray, jint); - -/* - * Class: com_portaudio_BlockingStream - * Method: readShorts - * Signature: ([SI)Z - */ -JNIEXPORT jboolean JNICALL Java_com_portaudio_BlockingStream_readShorts - (JNIEnv *, jobject, jshortArray, jint); - -/* - * Class: com_portaudio_BlockingStream - * Method: writeShorts - * Signature: ([SI)Z - */ -JNIEXPORT jboolean JNICALL Java_com_portaudio_BlockingStream_writeShorts - (JNIEnv *, jobject, jshortArray, jint); - -/* - * Class: com_portaudio_BlockingStream - * Method: start - * Signature: ()V - */ -JNIEXPORT void JNICALL 
Java_com_portaudio_BlockingStream_start - (JNIEnv *, jobject); - -/* - * Class: com_portaudio_BlockingStream - * Method: stop - * Signature: ()V - */ -JNIEXPORT void JNICALL Java_com_portaudio_BlockingStream_stop - (JNIEnv *, jobject); - -/* - * Class: com_portaudio_BlockingStream - * Method: abort - * Signature: ()V - */ -JNIEXPORT void JNICALL Java_com_portaudio_BlockingStream_abort - (JNIEnv *, jobject); - -/* - * Class: com_portaudio_BlockingStream - * Method: close - * Signature: ()V - */ -JNIEXPORT void JNICALL Java_com_portaudio_BlockingStream_close - (JNIEnv *, jobject); - -/* - * Class: com_portaudio_BlockingStream - * Method: isStopped - * Signature: ()Z - */ -JNIEXPORT jboolean JNICALL Java_com_portaudio_BlockingStream_isStopped - (JNIEnv *, jobject); - -/* - * Class: com_portaudio_BlockingStream - * Method: isActive - * Signature: ()Z - */ -JNIEXPORT jboolean JNICALL Java_com_portaudio_BlockingStream_isActive - (JNIEnv *, jobject); - -/* - * Class: com_portaudio_BlockingStream - * Method: getTime - * Signature: ()D - */ -JNIEXPORT jdouble JNICALL Java_com_portaudio_BlockingStream_getTime - (JNIEnv *, jobject); - -/* - * Class: com_portaudio_BlockingStream - * Method: getInfo - * Signature: (Lcom/portaudio/StreamInfo;)V - */ -JNIEXPORT void JNICALL Java_com_portaudio_BlockingStream_getInfo - (JNIEnv *, jobject, jobject); - -#ifdef __cplusplus -} -#endif -#endif diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/src/hostapi/coreaudio/pa_mac_core_utilities.c b/spaces/amarchheda/ChordDuplicate/portaudio/src/hostapi/coreaudio/pa_mac_core_utilities.c deleted file mode 100644 index 0d3b18312a713faada2b20f1483c0dba6821829f..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/src/hostapi/coreaudio/pa_mac_core_utilities.c +++ /dev/null @@ -1,814 +0,0 @@ -/* - * Helper and utility functions for pa_mac_core.c (Apple AUHAL implementation) - * - * PortAudio Portable Real-Time Audio Library - * Latest Version at: http://www.portaudio.com - * - * Written by Bjorn Roche of XO Audio LLC, from PA skeleton code. - * Portions copied from code by Dominic Mazzoni (who wrote a HAL implementation) - * - * Dominic's code was based on code by Phil Burk, Darren Gibbs, - * Gord Peters, Stephane Letz, and Greg Pfiel. - * - * The following people also deserve acknowledgements: - * - * Olivier Tristan for feedback and testing - * Glenn Zelniker and Z-Systems engineering for sponsoring the Blocking I/O - * interface. - * - * - * Based on the Open Source API proposed by Ross Bencina - * Copyright (c) 1999-2002 Ross Bencina, Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -/** - @file - @ingroup hostapi_src -*/ - -#include "pa_mac_core_utilities.h" -#include "pa_mac_core_internal.h" -#include -#include -#include -#include - -OSStatus PaMacCore_AudioHardwareGetProperty( - AudioHardwarePropertyID inPropertyID, - UInt32* ioPropertyDataSize, - void* outPropertyData ) -{ - AudioObjectPropertyAddress address = { inPropertyID, kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMaster }; - return AudioObjectGetPropertyData(kAudioObjectSystemObject, &address, 0, NULL, ioPropertyDataSize, outPropertyData); -} - -OSStatus PaMacCore_AudioHardwareGetPropertySize( - AudioHardwarePropertyID inPropertyID, - UInt32* outSize ) -{ - AudioObjectPropertyAddress address = { inPropertyID, kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMaster }; - return AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &address, 0, NULL, outSize); -} - -OSStatus PaMacCore_AudioDeviceGetProperty( - AudioDeviceID inDevice, - UInt32 inChannel, - Boolean isInput, - AudioDevicePropertyID inPropertyID, - UInt32* ioPropertyDataSize, - void* outPropertyData ) -{ - AudioObjectPropertyScope scope = isInput ? kAudioDevicePropertyScopeInput : kAudioDevicePropertyScopeOutput; - AudioObjectPropertyAddress address = { inPropertyID, scope, inChannel }; - return AudioObjectGetPropertyData(inDevice, &address, 0, NULL, ioPropertyDataSize, outPropertyData); -} - -OSStatus PaMacCore_AudioDeviceSetProperty( - AudioDeviceID inDevice, - const AudioTimeStamp* inWhen, - UInt32 inChannel, - Boolean isInput, - AudioDevicePropertyID inPropertyID, - UInt32 inPropertyDataSize, - const void* inPropertyData ) -{ - AudioObjectPropertyScope scope = isInput ? kAudioDevicePropertyScopeInput : kAudioDevicePropertyScopeOutput; - AudioObjectPropertyAddress address = { inPropertyID, scope, inChannel }; - return AudioObjectSetPropertyData(inDevice, &address, 0, NULL, inPropertyDataSize, inPropertyData); -} - -OSStatus PaMacCore_AudioDeviceGetPropertySize( - AudioDeviceID inDevice, - UInt32 inChannel, - Boolean isInput, - AudioDevicePropertyID inPropertyID, - UInt32* outSize ) -{ - AudioObjectPropertyScope scope = isInput ? kAudioDevicePropertyScopeInput : kAudioDevicePropertyScopeOutput; - AudioObjectPropertyAddress address = { inPropertyID, scope, inChannel }; - return AudioObjectGetPropertyDataSize(inDevice, &address, 0, NULL, outSize); -} - -OSStatus PaMacCore_AudioDeviceAddPropertyListener( - AudioDeviceID inDevice, - UInt32 inChannel, - Boolean isInput, - AudioDevicePropertyID inPropertyID, - AudioObjectPropertyListenerProc inProc, - void* inClientData ) -{ - AudioObjectPropertyScope scope = isInput ? 
kAudioDevicePropertyScopeInput : kAudioDevicePropertyScopeOutput; - AudioObjectPropertyAddress address = { inPropertyID, scope, inChannel }; - return AudioObjectAddPropertyListener(inDevice, &address, inProc, inClientData); -} - -OSStatus PaMacCore_AudioDeviceRemovePropertyListener( - AudioDeviceID inDevice, - UInt32 inChannel, - Boolean isInput, - AudioDevicePropertyID inPropertyID, - AudioObjectPropertyListenerProc inProc, - void* inClientData ) -{ - AudioObjectPropertyScope scope = isInput ? kAudioDevicePropertyScopeInput : kAudioDevicePropertyScopeOutput; - AudioObjectPropertyAddress address = { inPropertyID, scope, inChannel }; - return AudioObjectRemovePropertyListener(inDevice, &address, inProc, inClientData); -} - -OSStatus PaMacCore_AudioStreamGetProperty( - AudioStreamID inStream, - UInt32 inChannel, - AudioDevicePropertyID inPropertyID, - UInt32* ioPropertyDataSize, - void* outPropertyData ) -{ - AudioObjectPropertyAddress address = { inPropertyID, kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMaster }; - return AudioObjectGetPropertyData(inStream, &address, 0, NULL, ioPropertyDataSize, outPropertyData); -} - -PaError PaMacCore_SetUnixError( int err, int line ) -{ - PaError ret; - const char *errorText; - - if( err == 0 ) - { - return paNoError; - } - - ret = paNoError; - errorText = strerror( err ); - - /** Map Unix error to PaError. Pretty much the only one that maps - is ENOMEM. */ - if( err == ENOMEM ) - ret = paInsufficientMemory; - else - ret = paInternalError; - - DBUG(("%d on line %d: msg='%s'\n", err, line, errorText)); - PaUtil_SetLastHostErrorInfo( paCoreAudio, err, errorText ); - - return ret; -} - -/* - * Translates MacOS generated errors into PaErrors - */ -PaError PaMacCore_SetError(OSStatus error, int line, int isError) -{ - /*FIXME: still need to handle possible ComponentResult values.*/ - /* unfortunately, they don't seem to be documented anywhere.*/ - PaError result; - const char *errorType; - const char *errorText; - - switch (error) { - case kAudioHardwareNoError: - return paNoError; - case kAudioHardwareNotRunningError: - errorText = "Audio Hardware Not Running"; - result = paInternalError; - break; - case kAudioHardwareUnspecifiedError: - errorText = "Unspecified Audio Hardware Error"; - result = paInternalError; - break; - case kAudioHardwareUnknownPropertyError: - errorText = "Audio Hardware: Unknown Property"; - result = paInternalError; - break; - case kAudioHardwareBadPropertySizeError: - errorText = "Audio Hardware: Bad Property Size"; - result = paInternalError; - break; - case kAudioHardwareIllegalOperationError: - errorText = "Audio Hardware: Illegal Operation"; - result = paInternalError; - break; - case kAudioHardwareBadDeviceError: - errorText = "Audio Hardware: Bad Device"; - result = paInvalidDevice; - break; - case kAudioHardwareBadStreamError: - errorText = "Audio Hardware: BadStream"; - result = paBadStreamPtr; - break; - case kAudioHardwareUnsupportedOperationError: - errorText = "Audio Hardware: Unsupported Operation"; - result = paInternalError; - break; - case kAudioDeviceUnsupportedFormatError: - errorText = "Audio Device: Unsupported Format"; - result = paSampleFormatNotSupported; - break; - case kAudioDevicePermissionsError: - errorText = "Audio Device: Permissions Error"; - result = paDeviceUnavailable; - break; - /* Audio Unit Errors: http://developer.apple.com/documentation/MusicAudio/Reference/CoreAudio/audio_units/chapter_5_section_3.html */ - case kAudioUnitErr_InvalidProperty: - errorText = "Audio Unit: Invalid 
Property"; - result = paInternalError; - break; - case kAudioUnitErr_InvalidParameter: - errorText = "Audio Unit: Invalid Parameter"; - result = paInternalError; - break; - case kAudioUnitErr_NoConnection: - errorText = "Audio Unit: No Connection"; - result = paInternalError; - break; - case kAudioUnitErr_FailedInitialization: - errorText = "Audio Unit: Initialization Failed"; - result = paInternalError; - break; - case kAudioUnitErr_TooManyFramesToProcess: - errorText = "Audio Unit: Too Many Frames"; - result = paInternalError; - break; - case kAudioUnitErr_InvalidFile: - errorText = "Audio Unit: Invalid File"; - result = paInternalError; - break; - case kAudioUnitErr_UnknownFileType: - errorText = "Audio Unit: Unknown File Type"; - result = paInternalError; - break; - case kAudioUnitErr_FileNotSpecified: - errorText = "Audio Unit: File Not Specified"; - result = paInternalError; - break; - case kAudioUnitErr_FormatNotSupported: - errorText = "Audio Unit: Format Not Supported"; - result = paInternalError; - break; - case kAudioUnitErr_Uninitialized: - errorText = "Audio Unit: Uninitialized"; - result = paInternalError; - break; - case kAudioUnitErr_InvalidScope: - errorText = "Audio Unit: Invalid Scope"; - result = paInternalError; - break; - case kAudioUnitErr_PropertyNotWritable: - errorText = "Audio Unit: PropertyNotWritable"; - result = paInternalError; - break; - case kAudioUnitErr_InvalidPropertyValue: - errorText = "Audio Unit: Invalid Property Value"; - result = paInternalError; - break; - case kAudioUnitErr_PropertyNotInUse: - errorText = "Audio Unit: Property Not In Use"; - result = paInternalError; - break; - case kAudioUnitErr_Initialized: - errorText = "Audio Unit: Initialized"; - result = paInternalError; - break; - case kAudioUnitErr_InvalidOfflineRender: - errorText = "Audio Unit: Invalid Offline Render"; - result = paInternalError; - break; - case kAudioUnitErr_Unauthorized: - errorText = "Audio Unit: Unauthorized"; - result = paInternalError; - break; - case kAudioUnitErr_CannotDoInCurrentContext: - errorText = "Audio Unit: cannot do in current context"; - result = paInternalError; - break; - default: - errorText = "Unknown Error"; - result = paInternalError; - } - - if (isError) - errorType = "Error"; - else - errorType = "Warning"; - - char str[20]; - // see if it appears to be a 4-char-code - *(UInt32 *)(str + 1) = CFSwapInt32HostToBig(error); - if (isprint(str[1]) && isprint(str[2]) && isprint(str[3]) && isprint(str[4])) - { - str[0] = str[5] = '\''; - str[6] = '\0'; - } else { - // no, format it as an integer - sprintf(str, "%d", (int)error); - } - - DBUG(("%s on line %d: err='%s', msg=%s\n", errorType, line, str, errorText)); - - PaUtil_SetLastHostErrorInfo( paCoreAudio, error, errorText ); - - return result; -} - -/* - * This function computes an appropriate ring buffer size given - * a requested latency (in seconds), sample rate and framesPerBuffer. - * - * The returned ringBufferSize is computed using the following - * constraints: - * - it must be at least 4. - * - it must be at least 3x framesPerBuffer. - * - it must be at least 2x the suggestedLatency. - * - it must be a power of 2. - * This function attempts to compute the minimum such size. - * - * FEEDBACK: too liberal/conservative/another way? 
- */ -long computeRingBufferSize( const PaStreamParameters *inputParameters, - const PaStreamParameters *outputParameters, - long inputFramesPerBuffer, - long outputFramesPerBuffer, - double sampleRate ) -{ - long ringSize; - int index; - int i; - double latency ; - long framesPerBuffer ; - - VVDBUG(( "computeRingBufferSize()\n" )); - - assert( inputParameters || outputParameters ); - - if( outputParameters && inputParameters ) - { - latency = MAX( inputParameters->suggestedLatency, outputParameters->suggestedLatency ); - framesPerBuffer = MAX( inputFramesPerBuffer, outputFramesPerBuffer ); - } - else if( outputParameters ) - { - latency = outputParameters->suggestedLatency; - framesPerBuffer = outputFramesPerBuffer ; - } - else /* we have inputParameters */ - { - latency = inputParameters->suggestedLatency; - framesPerBuffer = inputFramesPerBuffer ; - } - - ringSize = (long) ( latency * sampleRate * 2 + .5); - VDBUG( ( "suggested latency : %d\n", (int) (latency*sampleRate) ) ); - if( ringSize < framesPerBuffer * 3 ) - ringSize = framesPerBuffer * 3 ; - VDBUG(("framesPerBuffer:%d\n",(int)framesPerBuffer)); - VDBUG(("Ringbuffer size (1): %d\n", (int)ringSize )); - - /* make sure it's at least 4 */ - ringSize = MAX( ringSize, 4 ); - - /* round up to the next power of 2 */ - index = -1; - for( i=0; i> i & 0x01 ) - index = i; - assert( index > 0 ); - if( ringSize <= ( 0x01 << index ) ) - ringSize = 0x01 << index ; - else - ringSize = 0x01 << ( index + 1 ); - - VDBUG(( "Final Ringbuffer size (2): %d\n", (int)ringSize )); - return ringSize; -} - - -/* - * During testing of core audio, I found that serious crashes could occur - * if properties such as sample rate were changed multiple times in rapid - * succession. The function below could be used to with a condition variable. - * to prevent propertychanges from happening until the last property - * change is acknowledged. Instead, I implemented a busy-wait, which is simpler - * to implement b/c in second round of testing (nov '09) property changes occurred - * quickly and so there was no real way to test the condition variable implementation. - * therefore, this function is not used, but it is aluded to in commented code below, - * since it represents a theoretically better implementation. - */ - -OSStatus propertyProc( - AudioObjectID inObjectID, - UInt32 inNumberAddresses, - const AudioObjectPropertyAddress* inAddresses, - void* inClientData ) -{ - // this is where we would set the condition variable - return noErr; -} - -/* sets the value of the given property and waits for the change to - be acknowledged, and returns the final value, which is not guaranteed - by this function to be the same as the desired value. Obviously, this - function can only be used for data whose input and output are the - same size and format, and their size and format are known in advance. - whether or not the call succeeds, if the data is successfully read, - it is returned in outPropertyData. If it is not read successfully, - outPropertyData is zeroed, which may or may not be useful in - determining if the property was read. */ -PaError AudioDeviceSetPropertyNowAndWaitForChange( - AudioDeviceID inDevice, - UInt32 inChannel, - Boolean isInput, - AudioDevicePropertyID inPropertyID, - UInt32 inPropertyDataSize, - const void *inPropertyData, - void *outPropertyData ) -{ - OSStatus macErr; - UInt32 outPropertyDataSize = inPropertyDataSize; - - /* First, see if it already has that value. If so, return. 
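Restated outside of C, the sizing rule in `computeRingBufferSize` is: take twice the suggested latency in frames, never less than three buffers or four frames, then round up to a power of two. A short Python sketch (sample values are arbitrary) reproduces the same arithmetic:

def ring_buffer_size(latency_s, sample_rate, frames_per_buffer):
    size = int(latency_s * sample_rate * 2 + 0.5)
    size = max(size, frames_per_buffer * 3, 4)
    # round up to a power of two; exact powers map to themselves
    return 1 << (size - 1).bit_length()

print(ring_buffer_size(0.050, 44100, 512))   # -> 8192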
*/ - macErr = PaMacCore_AudioDeviceGetProperty( inDevice, inChannel, - isInput, inPropertyID, - &outPropertyDataSize, outPropertyData ); - if( macErr ) { - memset( outPropertyData, 0, inPropertyDataSize ); - goto failMac; - } - if( inPropertyDataSize!=outPropertyDataSize ) - return paInternalError; - if( 0==memcmp( outPropertyData, inPropertyData, outPropertyDataSize ) ) - return paNoError; - - /* Ideally, we'd use a condition variable to determine changes. - we could set that up here. */ - - /* If we were using a cond variable, we'd do something useful here, - but for now, this is just to make 10.6 happy. */ - macErr = PaMacCore_AudioDeviceAddPropertyListener( inDevice, inChannel, isInput, - inPropertyID, propertyProc, - NULL ); - if( macErr ) - /* we couldn't add a listener. */ - goto failMac; - - /* set property */ - macErr = PaMacCore_AudioDeviceSetProperty( inDevice, NULL, inChannel, - isInput, inPropertyID, - inPropertyDataSize, inPropertyData ); - if( macErr ) - goto failMac; - - /* busy-wait up to 30 seconds for the property to change */ - /* busy-wait is justified here only because the correct alternative (condition variable) - was hard to test, since most of the waiting ended up being for setting rather than - getting in OS X 10.5. This was not the case in earlier OS versions. */ - struct timeval tv1, tv2; - gettimeofday( &tv1, NULL ); - memcpy( &tv2, &tv1, sizeof( struct timeval ) ); - while( tv2.tv_sec - tv1.tv_sec < 30 ) { - /* now read the property back out */ - macErr = PaMacCore_AudioDeviceGetProperty( inDevice, inChannel, - isInput, inPropertyID, - &outPropertyDataSize, outPropertyData ); - if( macErr ) { - memset( outPropertyData, 0, inPropertyDataSize ); - goto failMac; - } - /* and compare... */ - if( 0==memcmp( outPropertyData, inPropertyData, outPropertyDataSize ) ) { - PaMacCore_AudioDeviceRemovePropertyListener( inDevice, inChannel, isInput, inPropertyID, propertyProc, NULL); - return paNoError; - } - /* No match yet, so let's sleep and try again. */ - Pa_Sleep( 100 ); - gettimeofday( &tv2, NULL ); - } - DBUG( ("Timeout waiting for device setting.\n" ) ); - - PaMacCore_AudioDeviceRemovePropertyListener( inDevice, inChannel, isInput, inPropertyID, propertyProc, NULL ); - return paNoError; - -failMac: - PaMacCore_AudioDeviceRemovePropertyListener( inDevice, inChannel, isInput, inPropertyID, propertyProc, NULL ); - return ERR( macErr ); -} - -/* - * Sets the sample rate the HAL device. - * if requireExact: set the sample rate or fail. - * - * otherwise : set the exact sample rate. - * If that fails, check for available sample rates, and choose one - * higher than the requested rate. If there isn't a higher one, - * just use the highest available. - */ -PaError setBestSampleRateForDevice( const AudioDeviceID device, - const bool isOutput, - const bool requireExact, - const Float64 desiredSrate ) -{ - const bool isInput = isOutput ? 
0 : 1; - Float64 srate; - UInt32 propsize = sizeof( Float64 ); - OSErr err; - AudioValueRange *ranges; - int i=0; - Float64 max = -1; /*the maximum rate available*/ - Float64 best = -1; /*the lowest sample rate still greater than desired rate*/ - VDBUG(("Setting sample rate for device %ld to %g.\n",(long)device,(float)desiredSrate)); - - /* -- try setting the sample rate -- */ - srate = 0; - err = AudioDeviceSetPropertyNowAndWaitForChange( - device, 0, isInput, - kAudioDevicePropertyNominalSampleRate, - propsize, &desiredSrate, &srate ); - - /* -- if the rate agrees, and was changed, we are done -- */ - if( srate != 0 && srate == desiredSrate ) - return paNoError; - /* -- if the rate agrees, and we got no errors, we are done -- */ - if( !err && srate == desiredSrate ) - return paNoError; - /* -- we've failed if the rates disagree and we are setting input -- */ - if( requireExact ) - return paInvalidSampleRate; - - /* -- generate a list of available sample rates -- */ - err = PaMacCore_AudioDeviceGetPropertySize( device, 0, isInput, - kAudioDevicePropertyAvailableNominalSampleRates, - &propsize ); - if( err ) - return ERR( err ); - ranges = (AudioValueRange *)calloc( 1, propsize ); - if( !ranges ) - return paInsufficientMemory; - err = PaMacCore_AudioDeviceGetProperty( device, 0, isInput, - kAudioDevicePropertyAvailableNominalSampleRates, - &propsize, ranges ); - if( err ) - { - free( ranges ); - return ERR( err ); - } - VDBUG(("Requested sample rate of %g was not available.\n", (float)desiredSrate)); - VDBUG(("%lu Available Sample Rates are:\n",propsize/sizeof(AudioValueRange))); -#ifdef MAC_CORE_VERBOSE_DEBUG - for( i=0; i max ) max = ranges[i].mMaximum; - if( ranges[i].mMinimum > desiredSrate ) { - if( best < 0 ) - best = ranges[i].mMinimum; - else if( ranges[i].mMinimum < best ) - best = ranges[i].mMinimum; - } - } - if( best < 0 ) - best = max; - VDBUG( ("Maximum Rate %g. best is %g.\n", max, best ) ); - free( ranges ); - - /* -- set the sample rate -- */ - propsize = sizeof( best ); - srate = 0; - err = AudioDeviceSetPropertyNowAndWaitForChange( - device, 0, isInput, - kAudioDevicePropertyNominalSampleRate, - propsize, &best, &srate ); - - /* -- if the set rate matches, we are done -- */ - if( srate != 0 && srate == best ) - return paNoError; - - if( err ) - return ERR( err ); - - /* -- otherwise, something weird happened: we didn't set the rate, and we got no errors. Just bail. */ - return paInternalError; -} - - -/* - Attempts to set the requestedFramesPerBuffer. If it can't set the exact - value, it settles for something smaller if available. If nothing smaller - is available, it uses the smallest available size. - actualFramesPerBuffer will be set to the actual value on successful return. - OK to pass NULL to actualFramesPerBuffer. - The logic is very similar too setBestSampleRate only failure here is - not usually catastrophic. 
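The fallback in setBestSampleRateForDevice above reduces to: take the lowest available rate that is still higher than the request, otherwise the highest rate the device offers. A rough sketch, assuming the available rates arrive as (min, max) ranges (illustration only):

def pick_fallback_rate(desired, available_ranges):
    maximum = max(hi for _, hi in available_ranges)
    best = None
    for lo, _ in available_ranges:
        if lo > desired and (best is None or lo < best):
            best = lo                  # lowest rate still above the request
    return best if best is not None else maximum

print(pick_fallback_rate(44100.0, [(8000.0, 8000.0), (48000.0, 48000.0), (96000.0, 96000.0)]))  # 48000.0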
-*/ -PaError setBestFramesPerBuffer( const AudioDeviceID device, - const bool isOutput, - UInt32 requestedFramesPerBuffer, - UInt32 *actualFramesPerBuffer ) -{ - UInt32 afpb; - const bool isInput = !isOutput; - UInt32 propsize = sizeof(UInt32); - OSErr err; - AudioValueRange range; - - if( actualFramesPerBuffer == NULL ) - { - actualFramesPerBuffer = &afpb; - } - - /* -- try and set exact FPB -- */ - err = PaMacCore_AudioDeviceSetProperty( device, NULL, 0, isInput, - kAudioDevicePropertyBufferFrameSize, - propsize, &requestedFramesPerBuffer); - err = PaMacCore_AudioDeviceGetProperty( device, 0, isInput, - kAudioDevicePropertyBufferFrameSize, - &propsize, actualFramesPerBuffer); - if( err ) - { - return ERR( err ); - } - // Did we get the size we asked for? - if( *actualFramesPerBuffer == requestedFramesPerBuffer ) - { - return paNoError; /* we are done */ - } - - // Clip requested value against legal range for the device. - propsize = sizeof(AudioValueRange); - err = PaMacCore_AudioDeviceGetProperty( device, 0, isInput, - kAudioDevicePropertyBufferFrameSizeRange, - &propsize, &range ); - if( err ) - { - return ERR( err ); - } - if( requestedFramesPerBuffer < range.mMinimum ) - { - requestedFramesPerBuffer = range.mMinimum; - } - else if( requestedFramesPerBuffer > range.mMaximum ) - { - requestedFramesPerBuffer = range.mMaximum; - } - - /* --- set the buffer size (ignore errors) -- */ - propsize = sizeof( UInt32 ); - err = PaMacCore_AudioDeviceSetProperty( device, NULL, 0, isInput, - kAudioDevicePropertyBufferFrameSize, - propsize, &requestedFramesPerBuffer ); - /* --- read the property to check that it was set -- */ - err = PaMacCore_AudioDeviceGetProperty( device, 0, isInput, - kAudioDevicePropertyBufferFrameSize, - &propsize, actualFramesPerBuffer ); - - if( err ) - return ERR( err ); - - return paNoError; -} - -/********************** - * - * XRun stuff - * - **********************/ - -struct PaMacXRunListNode_s { - PaMacCoreStream *stream; - struct PaMacXRunListNode_s *next; -} ; - -typedef struct PaMacXRunListNode_s PaMacXRunListNode; - -/** Always empty, so that it can always be the one returned by - addToXRunListenerList. note that it's not a pointer. 
*/ -static PaMacXRunListNode firstXRunListNode; -static int xRunListSize; -static pthread_mutex_t xrunMutex; - -OSStatus xrunCallback( - AudioObjectID inDevice, - UInt32 inNumberAddresses, - const AudioObjectPropertyAddress* inAddresses, - void * inClientData) -{ - PaMacXRunListNode *node = (PaMacXRunListNode *) inClientData; - bool isInput = inAddresses->mScope == kAudioDevicePropertyScopeInput; - - int ret = pthread_mutex_trylock( &xrunMutex ) ; - - if( ret == 0 ) { - - node = node->next ; //skip the first node - - for( ; node; node=node->next ) { - PaMacCoreStream *stream = node->stream; - - if( stream->state != ACTIVE ) - continue; //if the stream isn't active, we don't care if the device is dropping - - if( isInput ) { - if( stream->inputDevice == inDevice ) - OSAtomicOr32( paInputOverflow, &stream->xrunFlags ); - } else { - if( stream->outputDevice == inDevice ) - OSAtomicOr32( paOutputUnderflow, &stream->xrunFlags ); - } - } - - pthread_mutex_unlock( &xrunMutex ); - } - - return 0; -} - -int initializeXRunListenerList( void ) -{ - xRunListSize = 0; - bzero( (void *) &firstXRunListNode, sizeof(firstXRunListNode) ); - return pthread_mutex_init( &xrunMutex, NULL ); -} -int destroyXRunListenerList( void ) -{ - PaMacXRunListNode *node; - node = firstXRunListNode.next; - while( node ) { - PaMacXRunListNode *tmp = node; - node = node->next; - free( tmp ); - } - xRunListSize = 0; - return pthread_mutex_destroy( &xrunMutex ); -} - -void *addToXRunListenerList( void *stream ) -{ - pthread_mutex_lock( &xrunMutex ); - PaMacXRunListNode *newNode; - // setup new node: - newNode = (PaMacXRunListNode *) malloc( sizeof( PaMacXRunListNode ) ); - newNode->stream = (PaMacCoreStream *) stream; - newNode->next = firstXRunListNode.next; - // insert: - firstXRunListNode.next = newNode; - pthread_mutex_unlock( &xrunMutex ); - - return &firstXRunListNode; -} - -int removeFromXRunListenerList( void *stream ) -{ - pthread_mutex_lock( &xrunMutex ); - PaMacXRunListNode *node, *prev; - prev = &firstXRunListNode; - node = firstXRunListNode.next; - while( node ) { - if( node->stream == stream ) { - //found it: - --xRunListSize; - prev->next = node->next; - free( node ); - pthread_mutex_unlock( &xrunMutex ); - return xRunListSize; - } - prev = prev->next; - node = node->next; - } - - pthread_mutex_unlock( &xrunMutex ); - // failure - return xRunListSize; -} diff --git a/spaces/anakin87/who-killed-laura-palmer/app_utils/backend_utils.py b/spaces/anakin87/who-killed-laura-palmer/app_utils/backend_utils.py deleted file mode 100644 index 562e411ef10b65b0578b6f8b1db704e8767d42db..0000000000000000000000000000000000000000 --- a/spaces/anakin87/who-killed-laura-palmer/app_utils/backend_utils.py +++ /dev/null @@ -1,56 +0,0 @@ -import shutil -from haystack.document_stores import FAISSDocumentStore -from haystack.nodes import EmbeddingRetriever -from haystack.pipelines import ExtractiveQAPipeline -from haystack.nodes import FARMReader -import streamlit as st - -from app_utils.config import (INDEX_DIR, RETRIEVER_MODEL, RETRIEVER_MODEL_FORMAT, - READER_MODEL, READER_CONFIG_THRESHOLD, QUESTIONS_PATH) - -# cached to make index and models load only at start -@st.cache(hash_funcs={"builtins.SwigPyObject": lambda _: None}, - allow_output_mutation=True) -def start_haystack(): - """ - load document store, retriever, reader and create pipeline - """ - shutil.copy(f'{INDEX_DIR}/faiss_document_store.db', '.') - document_store = FAISSDocumentStore( - faiss_index_path=f'{INDEX_DIR}/my_faiss_index.faiss', - 
faiss_config_path=f'{INDEX_DIR}/my_faiss_index.json') - print(f'Index size: {document_store.get_document_count()}') - - retriever = EmbeddingRetriever( - document_store=document_store, - embedding_model=RETRIEVER_MODEL, - model_format=RETRIEVER_MODEL_FORMAT - ) - - reader = FARMReader(model_name_or_path=READER_MODEL, - use_gpu=False, - confidence_threshold=READER_CONFIG_THRESHOLD) - - pipe = ExtractiveQAPipeline(reader, retriever) - return pipe - -pipe = start_haystack() -# the pipeline is not included as parameter of the following function, -# because it is difficult to cache -@st.cache(persist=True, allow_output_mutation=True) -def query(question: str, retriever_top_k: int = 10, reader_top_k: int = 5): - """Run query and get answers""" - params = {"Retriever": {"top_k": retriever_top_k}, - "Reader": {"top_k": reader_top_k}} - results = pipe.run(question, params=params) - return results - -@st.cache() -def load_questions(): - """Load selected questions from file""" - with open(QUESTIONS_PATH) as fin: - questions = [line.strip() for line in fin.readlines() - if not line.startswith('#')] - return questions - - \ No newline at end of file diff --git a/spaces/anonymous-demo/Anonymous-TranSVAE-Demo/README.md b/spaces/anonymous-demo/Anonymous-TranSVAE-Demo/README.md deleted file mode 100644 index b89f0fd3bc9233d67f4d10a291ee67a95912067c..0000000000000000000000000000000000000000 --- a/spaces/anonymous-demo/Anonymous-TranSVAE-Demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Anonymous TranSVAE Demo -emoji: 👦🏻👽 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.1.4 -app_file: app.py -pinned: false -license: cc-by-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/aodianyun/stable-diffusion-webui/scripts/postprocessing_codeformer.py b/spaces/aodianyun/stable-diffusion-webui/scripts/postprocessing_codeformer.py deleted file mode 100644 index 7e337ec41ffffe11fd88fced0ff4f6338d959571..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/scripts/postprocessing_codeformer.py +++ /dev/null @@ -1,36 +0,0 @@ -from PIL import Image -import numpy as np - -from modules import scripts_postprocessing, codeformer_model -import gradio as gr - -from modules.ui_components import FormRow - - -class ScriptPostprocessingCodeFormer(scripts_postprocessing.ScriptPostprocessing): - name = "CodeFormer" - order = 3000 - - def ui(self): - with FormRow(): - codeformer_visibility = gr.Slider(minimum=0.0, maximum=1.0, step=0.001, label="CodeFormer visibility", value=0, elem_id="extras_codeformer_visibility") - codeformer_weight = gr.Slider(minimum=0.0, maximum=1.0, step=0.001, label="CodeFormer weight (0 = maximum effect, 1 = minimum effect)", value=0, elem_id="extras_codeformer_weight") - - return { - "codeformer_visibility": codeformer_visibility, - "codeformer_weight": codeformer_weight, - } - - def process(self, pp: scripts_postprocessing.PostprocessedImage, codeformer_visibility, codeformer_weight): - if codeformer_visibility == 0: - return - - restored_img = codeformer_model.codeformer.restore(np.array(pp.image, dtype=np.uint8), w=codeformer_weight) - res = Image.fromarray(restored_img) - - if codeformer_visibility < 1.0: - res = Image.blend(pp.image, res, codeformer_visibility) - - pp.image = res - pp.info["CodeFormer visibility"] = round(codeformer_visibility, 3) - pp.info["CodeFormer weight"] = round(codeformer_weight, 3) diff --git a/spaces/aphenx/bingo/src/components/chat-suggestions.tsx 
b/spaces/aphenx/bingo/src/components/chat-suggestions.tsx deleted file mode 100644 index 00c2fee295c9e010946046eb71705a5e131f7a5a..0000000000000000000000000000000000000000 --- a/spaces/aphenx/bingo/src/components/chat-suggestions.tsx +++ /dev/null @@ -1,45 +0,0 @@ -import React, { useMemo } from 'react' -import Image from 'next/image' -import HelpIcon from '@/assets/images/help.svg' -import { SuggestedResponse } from '@/lib/bots/bing/types' -import { useBing } from '@/lib/hooks/use-bing' -import { atom, useAtom } from 'jotai' - -type Suggestions = SuggestedResponse[] -const helpSuggestions = ['为什么不回应某些主题', '告诉我更多关于必应的资迅', '必应如何使用 AI?'].map((text) => ({ text })) -const suggestionsAtom = atom([]) - -type ChatSuggestionsProps = React.ComponentProps<'div'> & Pick, 'setInput'> & { suggestions?: Suggestions } - -export function ChatSuggestions({ setInput, suggestions = [] }: ChatSuggestionsProps) { - const [currentSuggestions, setSuggestions] = useAtom(suggestionsAtom) - const toggleSuggestions = (() => { - if (currentSuggestions === helpSuggestions) { - setSuggestions(suggestions) - } else { - setSuggestions(helpSuggestions) - } - }) - - useMemo(() => { - setSuggestions(suggestions) - window.scrollBy(0, 2000) - }, [suggestions.length]) - - return currentSuggestions?.length ? ( -
-
- - { - currentSuggestions.map(suggestion => ( - - )) - } -
-
- ) : null -} diff --git a/spaces/arnavkartikeya/SCRIPture-final/data/flickr30k_dataset.py b/spaces/arnavkartikeya/SCRIPture-final/data/flickr30k_dataset.py deleted file mode 100644 index 018ab387014ddaf554c4d3184cfc0e2ba8b2d487..0000000000000000000000000000000000000000 --- a/spaces/arnavkartikeya/SCRIPture-final/data/flickr30k_dataset.py +++ /dev/null @@ -1,93 +0,0 @@ -import os -import json - -from torch.utils.data import Dataset -from torchvision.datasets.utils import download_url - -from PIL import Image - -from data.utils import pre_caption - -class flickr30k_train(Dataset): - def __init__(self, transform, image_root, ann_root, max_words=30, prompt=''): - ''' - image_root (string): Root directory of images (e.g. flickr30k/) - ann_root (string): directory to store the annotation file - ''' - url = 'https://storage.googleapis.com/sfr-vision-language-research/datasets/flickr30k_train.json' - filename = 'flickr30k_train.json' - - download_url(url,ann_root) - - self.annotation = json.load(open(os.path.join(ann_root,filename),'r')) - self.transform = transform - self.image_root = image_root - self.max_words = max_words - self.prompt = prompt - - self.img_ids = {} - n = 0 - for ann in self.annotation: - img_id = ann['image_id'] - if img_id not in self.img_ids.keys(): - self.img_ids[img_id] = n - n += 1 - - def __len__(self): - return len(self.annotation) - - def __getitem__(self, index): - - ann = self.annotation[index] - - image_path = os.path.join(self.image_root,ann['image']) - image = Image.open(image_path).convert('RGB') - image = self.transform(image) - - caption = self.prompt+pre_caption(ann['caption'], self.max_words) - - return image, caption, self.img_ids[ann['image_id']] - - -class flickr30k_retrieval_eval(Dataset): - def __init__(self, transform, image_root, ann_root, split, max_words=30): - ''' - image_root (string): Root directory of images (e.g. 
flickr30k/) - ann_root (string): directory to store the annotation file - split (string): val or test - ''' - urls = {'val':'https://storage.googleapis.com/sfr-vision-language-research/datasets/flickr30k_val.json', - 'test':'https://storage.googleapis.com/sfr-vision-language-research/datasets/flickr30k_test.json'} - filenames = {'val':'flickr30k_val.json','test':'flickr30k_test.json'} - - download_url(urls[split],ann_root) - - self.annotation = json.load(open(os.path.join(ann_root,filenames[split]),'r')) - self.transform = transform - self.image_root = image_root - - self.text = [] - self.image = [] - self.txt2img = {} - self.img2txt = {} - - txt_id = 0 - for img_id, ann in enumerate(self.annotation): - self.image.append(ann['image']) - self.img2txt[img_id] = [] - for i, caption in enumerate(ann['caption']): - self.text.append(pre_caption(caption,max_words)) - self.img2txt[img_id].append(txt_id) - self.txt2img[txt_id] = img_id - txt_id += 1 - - def __len__(self): - return len(self.annotation) - - def __getitem__(self, index): - - image_path = os.path.join(self.image_root, self.annotation[index]['image']) - image = Image.open(image_path).convert('RGB') - image = self.transform(image) - - return image, index \ No newline at end of file diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/audio/speech_to_speech_dataset.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/audio/speech_to_speech_dataset.py deleted file mode 100644 index 4b7f8b6824dec8082733284f92050048fcc743e6..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/audio/speech_to_speech_dataset.py +++ /dev/null @@ -1,428 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
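For reference, the img2txt / txt2img maps built by flickr30k_retrieval_eval above pair every image with the ids of its captions and every caption id with its image id; a toy illustration (made-up annotations):

annotation = [
    {"image": "img0.jpg", "caption": ["a dog", "a brown dog"]},
    {"image": "img1.jpg", "caption": ["a cat"]},
]
img2txt, txt2img, text = {}, {}, []
txt_id = 0
for img_id, ann in enumerate(annotation):
    img2txt[img_id] = []
    for caption in ann["caption"]:
        text.append(caption)
        img2txt[img_id].append(txt_id)
        txt2img[txt_id] = img_id
        txt_id += 1
print(img2txt)  # {0: [0, 1], 1: [2]}
print(txt2img)  # {0: 0, 1: 0, 2: 1}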
- -import logging -from dataclasses import dataclass -from pathlib import Path -from typing import Dict, List, Optional, Tuple - -import torch - -from fairseq.data import ConcatDataset, Dictionary -from fairseq.data import data_utils as fairseq_data_utils -from fairseq.data.audio.data_cfg import S2SDataConfig -from fairseq.data.audio.audio_utils import get_features_or_waveform -from fairseq.data.audio.speech_to_text_dataset import ( - SpeechToTextDataset, - SpeechToTextDatasetCreator, - _collate_frames, -) - -logger = logging.getLogger(__name__) - - -@dataclass -class SpeechToSpeechDatasetItem(object): - index: int - source: torch.Tensor - target: Optional[torch.Tensor] = None - target_speaker: Optional[torch.Tensor] = None - tgt_lang_tag: Optional[int] = None - - -class SpeechToSpeechDataset(SpeechToTextDataset): - def __init__( - self, - split: str, - is_train_split: bool, - data_cfg: S2SDataConfig, - src_audio_paths: List[str], - src_n_frames: List[int], - tgt_audio_paths: List[str], - tgt_n_frames: List[int], - src_langs: Optional[List[str]] = None, - tgt_langs: Optional[List[str]] = None, - ids: Optional[List[str]] = None, - target_is_code: bool = False, - tgt_dict: Dictionary = None, - n_frames_per_step: int = 1, - ): - tgt_texts = tgt_audio_paths if target_is_code else None - super().__init__( - split, - is_train_split, - data_cfg, - src_audio_paths, - src_n_frames, - ids=ids, - tgt_dict=tgt_dict, - tgt_texts=tgt_texts, - src_langs=src_langs, - tgt_langs=tgt_langs, - n_frames_per_step=n_frames_per_step, - ) - - self.tgt_audio_paths = tgt_audio_paths - self.tgt_lens = [t // self.n_frames_per_step for t in tgt_n_frames] - - assert not target_is_code or tgt_dict is not None - self.target_is_code = target_is_code - - assert len(tgt_audio_paths) == self.n_samples - assert len(tgt_n_frames) == self.n_samples - - self.tgt_speakers = None - if self.cfg.target_speaker_embed: - samples = SpeechToTextDatasetCreator._load_samples_from_tsv( - self.cfg.target_speaker_embed, split - ) - spk_emb_dict = {s["id"]: s["speaker_embed"] for s in samples} - self.tgt_speakers = [spk_emb_dict[id] for id in self.ids] - assert len(self.tgt_speakers) == self.n_samples - - logger.info(self.__repr__()) - - def pack_units(self, input: torch.Tensor) -> torch.Tensor: - if self.n_frames_per_step <= 1: - return input - - offset = 4 - vocab_size = ( - len(self.tgt_dict) - offset - ) # remove offset from , , , , which is specific to fairseq dictionary - - assert input.dim() == 1 - stacked_input = ( - input[:-1].view(-1, self.n_frames_per_step) - offset - ) # remove - scale = [ - pow(vocab_size, self.n_frames_per_step - 1 - i) - for i in range(self.n_frames_per_step) - ] - scale = torch.LongTensor(scale).squeeze(0) - res = input.new((len(input) - 1) // self.n_frames_per_step + 1).fill_(input[-1]) - res[:-1] = (stacked_input * scale).sum(dim=1) + offset - - return res - - def __getitem__(self, index: int) -> SpeechToSpeechDatasetItem: - source = self._get_source_audio(index) - - tgt_lang_tag = None - if self.cfg.prepend_tgt_lang_tag_as_bos: - # prepend_tgt_lang_tag_as_bos: put tgt_lang_tag as bos of target - tgt_lang_tag = self.get_lang_tag_idx(self.tgt_langs[index], self.tgt_dict) - - if not self.target_is_code: - target = get_features_or_waveform(self.tgt_audio_paths[index]) - target = torch.from_numpy(target).float() - target = self.pack_frames(target) - else: - target = self.tgt_dict.encode_line( - self.tgt_audio_paths[index], - add_if_not_exist=False, - append_eos=True, - ).long() - if self.n_frames_per_step > 1: - 
n_tgt_frame = target.size(0) - 1 # exclude - keep_n_tgt_frame = n_tgt_frame - n_tgt_frame % self.n_frames_per_step - target = torch.cat( - ( - target[:keep_n_tgt_frame], - target.new_full((1,), self.tgt_dict.eos()), - ), - dim=0, - ) - - if self.tgt_speakers: - tgt_spk = get_features_or_waveform(self.tgt_speakers[index]) - tgt_spk = torch.from_numpy(tgt_spk).float() - else: - tgt_spk = torch.FloatTensor([]) - - return SpeechToSpeechDatasetItem( - index=index, - source=source, - target=target, - target_speaker=tgt_spk, - tgt_lang_tag=tgt_lang_tag, - ) - - def _collate_target(self, samples: List[SpeechToSpeechDatasetItem]) -> torch.Tensor: - if self.target_is_code: - target = fairseq_data_utils.collate_tokens( - [x.target for x in samples], - self.tgt_dict.pad(), - self.tgt_dict.eos(), - left_pad=False, - move_eos_to_beginning=False, - ) - # convert stacked units to a single id - pack_targets = [self.pack_units(x.target) for x in samples] - prev_output_tokens = fairseq_data_utils.collate_tokens( - pack_targets, - self.tgt_dict.pad(), - self.tgt_dict.eos(), - left_pad=False, - move_eos_to_beginning=True, - ) - target_lengths = torch.tensor( - [x.size(0) for x in pack_targets], dtype=torch.long - ) - else: - target = _collate_frames([x.target for x in samples], is_audio_input=False) - bsz, _, d = target.size() - prev_output_tokens = torch.cat( - (target.new_full((bsz, 1, d), 0.0), target[:, :-1, :]), dim=1 - ) - target_lengths = torch.tensor( - [x.target.size(0) for x in samples], dtype=torch.long - ) - - return target, prev_output_tokens, target_lengths - - def collater( - self, samples: List[SpeechToSpeechDatasetItem], return_order: bool = False - ) -> Dict: - if len(samples) == 0: - return {} - indices = torch.tensor([x.index for x in samples], dtype=torch.long) - frames = _collate_frames([x.source for x in samples], self.cfg.use_audio_input) - # sort samples by descending number of frames - n_frames = torch.tensor([x.source.size(0) for x in samples], dtype=torch.long) - n_frames, order = n_frames.sort(descending=True) - indices = indices.index_select(0, order) - frames = frames.index_select(0, order) - - target, prev_output_tokens, target_lengths = self._collate_target(samples) - target = target.index_select(0, order) - target_lengths = target_lengths.index_select(0, order) - prev_output_tokens = prev_output_tokens.index_select(0, order) - ntokens = sum(x.target.size(0) for x in samples) - - tgt_speakers = None - if self.cfg.target_speaker_embed: - tgt_speakers = _collate_frames( - [x.target_speaker for x in samples], is_audio_input=True - ).index_select(0, order) - - net_input = { - "src_tokens": frames, - "src_lengths": n_frames, - "prev_output_tokens": prev_output_tokens, - "tgt_speaker": tgt_speakers, # TODO: unify "speaker" and "tgt_speaker" - } - if self.tgt_texts is not None and samples[0].tgt_lang_tag is not None: - for i in range(len(samples)): - net_input["prev_output_tokens"][i][0] = samples[order[i]].tgt_lang_tag - out = { - "id": indices, - "net_input": net_input, - "speaker": tgt_speakers, # to support Tacotron2 loss for speech-to-spectrogram model - "target": target, - "target_lengths": target_lengths, - "ntokens": ntokens, - "nsentences": len(samples), - } - if return_order: - out["order"] = order - return out - - -class TextTargetMultitaskData(object): - # mandatory columns - KEY_ID, KEY_TEXT = "id", "tgt_text" - - def __init__(self, args, split, tgt_dict): - samples = SpeechToTextDatasetCreator._load_samples_from_tsv(args.data, split) - self.data = {s[self.KEY_ID]: 
s[self.KEY_TEXT] for s in samples} - self.dict = tgt_dict - self.append_eos = args.decoder_type != "ctc" - - def get(self, sample_id): - if sample_id in self.data: - return self.dict.encode_line( - self.data[sample_id], - add_if_not_exist=False, - append_eos=self.append_eos, - ) - else: - logger.warning(f"no target for {sample_id}") - return torch.IntTensor([]) - - def collater(self, samples: List[torch.Tensor]) -> torch.Tensor: - out = fairseq_data_utils.collate_tokens( - samples, - self.dict.pad(), - self.dict.eos(), - left_pad=False, - move_eos_to_beginning=False, - ).long() - - prev_out = fairseq_data_utils.collate_tokens( - samples, - self.dict.pad(), - self.dict.eos(), - left_pad=False, - move_eos_to_beginning=True, - ).long() - - target_lengths = torch.tensor([t.size(0) for t in samples], dtype=torch.long) - ntokens = sum(t.size(0) for t in samples) - - output = { - "prev_output_tokens": prev_out, - "target": out, - "target_lengths": target_lengths, - "ntokens": ntokens, - } - - return output - - -class SpeechToSpeechMultitaskDataset(SpeechToSpeechDataset): - def __init__(self, *argv): - super().__init__(*argv) - self.multitask_data = {} - - def add_multitask_dataset(self, task_name, task_data): - self.multitask_data[task_name] = task_data - - def __getitem__( - self, index: int - ) -> Tuple[SpeechToSpeechDatasetItem, Dict[str, torch.Tensor]]: - s2s_data = super().__getitem__(index) - - multitask_target = {} - sample_id = self.ids[index] - for task_name, task_dataset in self.multitask_data.items(): - multitask_target[task_name] = task_dataset.get(sample_id) - - return s2s_data, multitask_target - - def collater( - self, samples: List[Tuple[SpeechToSpeechDatasetItem, Dict[str, torch.Tensor]]] - ) -> Dict: - if len(samples) == 0: - return {} - - out = super().collater([s for s, _ in samples], return_order=True) - order = out["order"] - del out["order"] - - for task_name, task_dataset in self.multitask_data.items(): - if "multitask" not in out: - out["multitask"] = {} - d = [s[task_name] for _, s in samples] - task_target = task_dataset.collater(d) - out["multitask"][task_name] = { - "target": task_target["target"].index_select(0, order), - "target_lengths": task_target["target_lengths"].index_select(0, order), - "ntokens": task_target["ntokens"], - } - out["multitask"][task_name]["net_input"] = { - "prev_output_tokens": task_target["prev_output_tokens"].index_select( - 0, order - ), - } - - return out - - -class SpeechToSpeechDatasetCreator(object): - # mandatory columns - KEY_ID, KEY_SRC_AUDIO, KEY_SRC_N_FRAMES = "id", "src_audio", "src_n_frames" - KEY_TGT_AUDIO, KEY_TGT_N_FRAMES = "tgt_audio", "tgt_n_frames" - # optional columns - KEY_SRC_LANG, KEY_TGT_LANG = "src_lang", "tgt_lang" - # default values - DEFAULT_LANG = "" - - @classmethod - def _from_list( - cls, - split_name: str, - is_train_split, - samples: List[Dict], - data_cfg: S2SDataConfig, - target_is_code: bool = False, - target_dictionary: Dictionary = None, - n_frames_per_step: int = 1, - multitask: Optional[Dict] = None, - ) -> SpeechToSpeechDataset: - audio_root = Path(data_cfg.audio_root) - ids = [s[cls.KEY_ID] for s in samples] - src_audio_paths = [ - (audio_root / s[cls.KEY_SRC_AUDIO]).as_posix() for s in samples - ] - tgt_audio_paths = [ - s[cls.KEY_TGT_AUDIO] - if target_is_code - else (audio_root / s[cls.KEY_TGT_AUDIO]).as_posix() - for s in samples - ] - src_n_frames = [int(s[cls.KEY_SRC_N_FRAMES]) for s in samples] - tgt_n_frames = [int(s[cls.KEY_TGT_N_FRAMES]) for s in samples] - src_langs = 
[s.get(cls.KEY_SRC_LANG, cls.DEFAULT_LANG) for s in samples] - tgt_langs = [s.get(cls.KEY_TGT_LANG, cls.DEFAULT_LANG) for s in samples] - - has_multitask = len(multitask) > 0 - dataset_cls = ( - SpeechToSpeechMultitaskDataset if has_multitask else SpeechToSpeechDataset - ) - - ds = dataset_cls( - split_name, - is_train_split, - data_cfg, - src_audio_paths, - src_n_frames, - tgt_audio_paths, - tgt_n_frames, - src_langs, - tgt_langs, - ids, - target_is_code, - target_dictionary, - n_frames_per_step, - ) - - if has_multitask: - for task_name, task_obj in multitask.items(): - task_data = TextTargetMultitaskData( - task_obj.args, split_name, task_obj.target_dictionary - ) - ds.add_multitask_dataset(task_name, task_data) - return ds - - @classmethod - def from_tsv( - cls, - root: str, - data_cfg: S2SDataConfig, - splits: str, - is_train_split: bool, - epoch: int, - seed: int, - target_is_code: bool = False, - target_dictionary: Dictionary = None, - n_frames_per_step: int = 1, - multitask: Optional[Dict] = None, - ) -> SpeechToSpeechDataset: - datasets = [] - for split in splits.split(","): - samples = SpeechToTextDatasetCreator._load_samples_from_tsv(root, split) - ds = cls._from_list( - split, - is_train_split, - samples, - data_cfg, - target_is_code, - target_dictionary, - n_frames_per_step, - multitask, - ) - datasets.append(ds) - return ConcatDataset(datasets) if len(datasets) > 1 else datasets[0] diff --git a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Tanh Hong.html b/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Tanh Hong.html deleted file mode 100644 index c8cb91a59464f8589819ad40f4b29bf12e7af822..0000000000000000000000000000000000000000 --- a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Tanh Hong.html +++ /dev/null @@ -1,132 +0,0 @@ - - - - Tanh Hong - - - - -
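For context, the TSV manifests consumed by SpeechToSpeechDatasetCreator above use the column names declared at the top of the class; a hypothetical row (all values made up for illustration):

example_row = {
    "id": "utt_0001",                 # KEY_ID
    "src_audio": "src/utt_0001.wav",  # KEY_SRC_AUDIO, resolved against audio_root
    "src_n_frames": "48000",          # KEY_SRC_N_FRAMES
    "tgt_audio": "tgt/utt_0001.wav",  # KEY_TGT_AUDIO (a unit sequence when target_is_code)
    "tgt_n_frames": "52000",          # KEY_TGT_N_FRAMES
    "src_lang": "en",                 # optional KEY_SRC_LANG
    "tgt_lang": "es",                 # optional KEY_TGT_LANG
}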
-

Tanh Hong

- -
-
1- How did you hear about SM? What motivated you to become a mentor with SM?
- A friend was a mentor with SM. Wants to help new people get into DS and promote the field to others.

2- Do you have any previous mentorship experience, formal or informal?
- During PhD - mentored undergrad and master's students, helped them finish their theses, and was a TA. Designed a course in advanced ML for biomedical applications. Helped students reach their goals.
- On the job - managed a team of more than 10 people, encouraging and motivating team members.

3- What's your DS career journey been like? 
- Completed a PhD in ML & computer vision in 4 years.
- Senior Data Scientist at Usee (FPT Americas)
- Currently working in the automotive industry (a top-4 company worldwide), exploring relationships in big data, generating insights, and applying learning techniques.

4- What are some of the challenges that beginners face when landing a DS-related role? How can you help them with this?
- There is a big gap between learning DS and tackling real issues in industry. Schools provide data sets that are disconnected from industry problems; preparation is done on ready-to-use data sets, with less focus on the techniques.

Can help mentees with realistic projects so they learn by doing - e.g. building 3D models.

5- Do you have any questions regarding SM?
- A friend mentored with SM a year ago - has anything changed in the past year?
- How many mentors and mentees are with the platform?
- Are mentees new to the field, or do they have some experience?
- Do mentees have to be in Canada?
- How do we know if a mentee gets a job?
- Typical length of mentorship?
- Do students/Mentees have a good background?
- Do we get a chance to interview the mentee?
- Are there any fees to use the platform?
-
- -
- - - \ No newline at end of file diff --git a/spaces/auto-academic/auto-draft/latex_templates/ICLR2022/template.tex b/spaces/auto-academic/auto-draft/latex_templates/ICLR2022/template.tex deleted file mode 100644 index 45b8f35308d03b2514f744589ac3e601bf4775d5..0000000000000000000000000000000000000000 --- a/spaces/auto-academic/auto-draft/latex_templates/ICLR2022/template.tex +++ /dev/null @@ -1,35 +0,0 @@ -\documentclass{article} % For LaTeX2e -\UseRawInputEncoding -\usepackage{graphicx} -\usepackage{booktabs} -\usepackage{iclr2022_conference, times} -\input{math_commands.tex} -\usepackage{hyperref} -\usepackage{url} -\usepackage{algorithm} -\usepackage{algpseudocode} - -\title{TITLE} -\author{GPT-4} - -\newcommand{\fix}{\marginpar{FIX}} -\newcommand{\new}{\marginpar{NEW}} - -\begin{document} -\maketitle -\input{abstract.tex} -\input{introduction.tex} -\input{related works.tex} -\input{backgrounds.tex} -\input{methodology.tex} -\input{experiments.tex} -\input{conclusion.tex} - -\bibliography{ref} -\bibliographystyle{iclr2022_conference} - -%\appendix -%\section{Appendix} -%You may include other additional sections here. - -\end{document} diff --git a/spaces/badayvedat/AudioSep/models/CLAP/open_clip/__init__.py b/spaces/badayvedat/AudioSep/models/CLAP/open_clip/__init__.py deleted file mode 100644 index e9f728f2f273be5d5fdbec6c6cc41d737176a8c0..0000000000000000000000000000000000000000 --- a/spaces/badayvedat/AudioSep/models/CLAP/open_clip/__init__.py +++ /dev/null @@ -1,25 +0,0 @@ -from .factory import ( - list_models, - create_model, - create_model_and_transforms, - add_model_config, -) -from .loss import ClipLoss, gather_features, LPLoss, lp_gather_features, LPMetrics -from .model import ( - CLAP, - CLAPTextCfg, - CLAPVisionCfg, - CLAPAudioCfp, - convert_weights_to_fp16, - trace_model, -) -from .openai import load_openai_model, list_openai_models -from .pretrained import ( - list_pretrained, - list_pretrained_tag_models, - list_pretrained_model_tags, - get_pretrained_url, - download_pretrained, -) -from .tokenizer import SimpleTokenizer, tokenize -from .transform import image_transform diff --git a/spaces/betterme/mestreamlit/pages/997_streamlit_aggrid.py b/spaces/betterme/mestreamlit/pages/997_streamlit_aggrid.py deleted file mode 100644 index 057be818ff8e5694735cdcb687e4c6d40fc798ba..0000000000000000000000000000000000000000 --- a/spaces/betterme/mestreamlit/pages/997_streamlit_aggrid.py +++ /dev/null @@ -1,16 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -# @Project : Python. 
-# @File : 997_streamlit_aggrid -# @Time : 2022/10/17 下午1:14 -# @Author : yuanjie -# @WeChat : meutils -# @Software : PyCharm -# @Description : - - -from st_aggrid import AgGrid -import pandas as pd - -df = pd.read_csv('./data/airline-safety.csv') -AgGrid(df) \ No newline at end of file diff --git a/spaces/bhasker412/IDD-YOLO-Tracking/utils/general.py b/spaces/bhasker412/IDD-YOLO-Tracking/utils/general.py deleted file mode 100644 index 6b7edb3e013683b2ee38af9ce9e103616aaaa3ff..0000000000000000000000000000000000000000 --- a/spaces/bhasker412/IDD-YOLO-Tracking/utils/general.py +++ /dev/null @@ -1,892 +0,0 @@ -# YOLOR general utils - -import glob -import logging -import math -import os -import platform -import random -import re -import subprocess -import time -from pathlib import Path - -import cv2 -import numpy as np -import pandas as pd -import torch -import torchvision -import yaml - -from utils.google_utils import gsutil_getsize -from utils.metrics import fitness -from utils.torch_utils import init_torch_seeds - -# Settings -torch.set_printoptions(linewidth=320, precision=5, profile='long') -np.set_printoptions(linewidth=320, formatter={'float_kind': '{:11.5g}'.format}) # format short g, %precision=5 -pd.options.display.max_columns = 10 -cv2.setNumThreads(0) # prevent OpenCV from multithreading (incompatible with PyTorch DataLoader) -os.environ['NUMEXPR_MAX_THREADS'] = str(min(os.cpu_count(), 8)) # NumExpr max threads - - -def set_logging(rank=-1): - logging.basicConfig( - format="%(message)s", - level=logging.INFO if rank in [-1, 0] else logging.WARN) - - -def init_seeds(seed=0): - # Initialize random number generator (RNG) seeds - random.seed(seed) - np.random.seed(seed) - init_torch_seeds(seed) - - -def get_latest_run(search_dir='.'): - # Return path to most recent 'last.pt' in /runs (i.e. to --resume from) - last_list = glob.glob(f'{search_dir}/**/last*.pt', recursive=True) - return max(last_list, key=os.path.getctime) if last_list else '' - - -def isdocker(): - # Is environment a Docker container - return Path('/workspace').exists() # or Path('/.dockerenv').exists() - - -def emojis(str=''): - # Return platform-dependent emoji-safe version of string - return str.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else str - - -def check_online(): - # Check internet connectivity - import socket - try: - socket.create_connection(("1.1.1.1", 443), 5) # check host accesability - return True - except OSError: - return False - - -def check_git_status(): - # Recommend 'git pull' if code is out of date - print(colorstr('github: '), end='') - try: - assert Path('.git').exists(), 'skipping check (not a git repository)' - assert not isdocker(), 'skipping check (Docker image)' - assert check_online(), 'skipping check (offline)' - - cmd = 'git fetch && git config --get remote.origin.url' - url = subprocess.check_output(cmd, shell=True).decode().strip().rstrip('.git') # github repo url - branch = subprocess.check_output('git rev-parse --abbrev-ref HEAD', shell=True).decode().strip() # checked out - n = int(subprocess.check_output(f'git rev-list {branch}..origin/master --count', shell=True)) # commits behind - if n > 0: - s = f"⚠️ WARNING: code is out of date by {n} commit{'s' * (n > 1)}. " \ - f"Use 'git pull' to update or 'git clone {url}' to download latest." 
- else: - s = f'up to date with {url} ✅' - print(emojis(s)) # emoji-safe - except Exception as e: - print(e) - - -def check_requirements(requirements='requirements.txt', exclude=()): - # Check installed dependencies meet requirements (pass *.txt file or list of packages) - import pkg_resources as pkg - prefix = colorstr('red', 'bold', 'requirements:') - if isinstance(requirements, (str, Path)): # requirements.txt file - file = Path(requirements) - if not file.exists(): - print(f"{prefix} {file.resolve()} not found, check failed.") - return - requirements = [f'{x.name}{x.specifier}' for x in pkg.parse_requirements(file.open()) if x.name not in exclude] - else: # list or tuple of packages - requirements = [x for x in requirements if x not in exclude] - - n = 0 # number of packages updates - for r in requirements: - try: - pkg.require(r) - except Exception as e: # DistributionNotFound or VersionConflict if requirements not met - n += 1 - print(f"{prefix} {e.req} not found and is required by YOLOR, attempting auto-update...") - print(subprocess.check_output(f"pip install '{e.req}'", shell=True).decode()) - - if n: # if packages updated - source = file.resolve() if 'file' in locals() else requirements - s = f"{prefix} {n} package{'s' * (n > 1)} updated per {source}\n" \ - f"{prefix} ⚠️ {colorstr('bold', 'Restart runtime or rerun command for updates to take effect')}\n" - print(emojis(s)) # emoji-safe - - -def check_img_size(img_size, s=32): - # Verify img_size is a multiple of stride s - new_size = make_divisible(img_size, int(s)) # ceil gs-multiple - if new_size != img_size: - print('WARNING: --img-size %g must be multiple of max stride %g, updating to %g' % (img_size, s, new_size)) - return new_size - - -def check_imshow(): - # Check if environment supports image displays - try: - assert not isdocker(), 'cv2.imshow() is disabled in Docker environments' - cv2.imshow('test', np.zeros((1, 1, 3))) - cv2.waitKey(1) - cv2.destroyAllWindows() - cv2.waitKey(1) - return True - except Exception as e: - print(f'WARNING: Environment does not support cv2.imshow() or PIL Image.show() image displays\n{e}') - return False - - -def check_file(file): - # Search for file if not found - if Path(file).is_file() or file == '': - return file - else: - files = glob.glob('./**/' + file, recursive=True) # find file - assert len(files), f'File Not Found: {file}' # assert file was found - assert len(files) == 1, f"Multiple files match '{file}', specify exact path: {files}" # assert unique - return files[0] # return file - - -def check_dataset(dict): - # Download dataset if not found locally - val, s = dict.get('val'), dict.get('download') - if val and len(val): - val = [Path(x).resolve() for x in (val if isinstance(val, list) else [val])] # val path - if not all(x.exists() for x in val): - print('\nWARNING: Dataset not found, nonexistent paths: %s' % [str(x) for x in val if not x.exists()]) - if s and len(s): # download script - print('Downloading %s ...' 
% s) - if s.startswith('http') and s.endswith('.zip'): # URL - f = Path(s).name # filename - torch.hub.download_url_to_file(s, f) - r = os.system('unzip -q %s -d ../ && rm %s' % (f, f)) # unzip - else: # bash script - r = os.system(s) - print('Dataset autodownload %s\n' % ('success' if r == 0 else 'failure')) # analyze return value - else: - raise Exception('Dataset not found.') - - -def make_divisible(x, divisor): - # Returns x evenly divisible by divisor - return math.ceil(x / divisor) * divisor - - -def clean_str(s): - # Cleans a string by replacing special characters with underscore _ - return re.sub(pattern="[|@#!¡·$€%&()=?¿^*;:,¨´><+]", repl="_", string=s) - - -def one_cycle(y1=0.0, y2=1.0, steps=100): - # lambda function for sinusoidal ramp from y1 to y2 - return lambda x: ((1 - math.cos(x * math.pi / steps)) / 2) * (y2 - y1) + y1 - - -def colorstr(*input): - # Colors a string https://en.wikipedia.org/wiki/ANSI_escape_code, i.e. colorstr('blue', 'hello world') - *args, string = input if len(input) > 1 else ('blue', 'bold', input[0]) # color arguments, string - colors = {'black': '\033[30m', # basic colors - 'red': '\033[31m', - 'green': '\033[32m', - 'yellow': '\033[33m', - 'blue': '\033[34m', - 'magenta': '\033[35m', - 'cyan': '\033[36m', - 'white': '\033[37m', - 'bright_black': '\033[90m', # bright colors - 'bright_red': '\033[91m', - 'bright_green': '\033[92m', - 'bright_yellow': '\033[93m', - 'bright_blue': '\033[94m', - 'bright_magenta': '\033[95m', - 'bright_cyan': '\033[96m', - 'bright_white': '\033[97m', - 'end': '\033[0m', # misc - 'bold': '\033[1m', - 'underline': '\033[4m'} - return ''.join(colors[x] for x in args) + f'{string}' + colors['end'] - - -def labels_to_class_weights(labels, nc=80): - # Get class weights (inverse frequency) from training labels - if labels[0] is None: # no labels loaded - return torch.Tensor() - - labels = np.concatenate(labels, 0) # labels.shape = (866643, 5) for COCO - classes = labels[:, 0].astype(np.int) # labels = [class xywh] - weights = np.bincount(classes, minlength=nc) # occurrences per class - - # Prepend gridpoint count (for uCE training) - # gpi = ((320 / 32 * np.array([1, 2, 4])) ** 2 * 3).sum() # gridpoints per image - # weights = np.hstack([gpi * len(labels) - weights.sum() * 9, weights * 9]) ** 0.5 # prepend gridpoints to start - - weights[weights == 0] = 1 # replace empty bins with 1 - weights = 1 / weights # number of targets per class - weights /= weights.sum() # normalize - return torch.from_numpy(weights) - - -def labels_to_image_weights(labels, nc=80, class_weights=np.ones(80)): - # Produces image weights based on class_weights and image contents - class_counts = np.array([np.bincount(x[:, 0].astype(np.int), minlength=nc) for x in labels]) - image_weights = (class_weights.reshape(1, nc) * class_counts).sum(1) - # index = random.choices(range(n), weights=image_weights, k=1) # weight image sample - return image_weights - - -def coco80_to_coco91_class(): # converts 80-index (val2014) to 91-index (paper) - # https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/ - # a = np.loadtxt('data/coco.names', dtype='str', delimiter='\n') - # b = np.loadtxt('data/coco_paper.names', dtype='str', delimiter='\n') - # x1 = [list(a[i] == b).index(True) + 1 for i in range(80)] # darknet to coco - # x2 = [list(b[i] == a).index(True) if any(b[i] == a) else None for i in range(91)] # coco to darknet - x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 28, 31, 32, 33, 34, 
- 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, - 64, 65, 67, 70, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90] - return x - - -def xyxy2xywh(x): - # Convert nx4 boxes from [x1, y1, x2, y2] to [x, y, w, h] where xy1=top-left, xy2=bottom-right - y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[:, 0] = (x[:, 0] + x[:, 2]) / 2 # x center - y[:, 1] = (x[:, 1] + x[:, 3]) / 2 # y center - y[:, 2] = x[:, 2] - x[:, 0] # width - y[:, 3] = x[:, 3] - x[:, 1] # height - return y - - -def xywh2xyxy(x): - # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right - y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[:, 0] = x[:, 0] - x[:, 2] / 2 # top left x - y[:, 1] = x[:, 1] - x[:, 3] / 2 # top left y - y[:, 2] = x[:, 0] + x[:, 2] / 2 # bottom right x - y[:, 3] = x[:, 1] + x[:, 3] / 2 # bottom right y - return y - - -def xywhn2xyxy(x, w=640, h=640, padw=0, padh=0): - # Convert nx4 boxes from [x, y, w, h] normalized to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right - y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[:, 0] = w * (x[:, 0] - x[:, 2] / 2) + padw # top left x - y[:, 1] = h * (x[:, 1] - x[:, 3] / 2) + padh # top left y - y[:, 2] = w * (x[:, 0] + x[:, 2] / 2) + padw # bottom right x - y[:, 3] = h * (x[:, 1] + x[:, 3] / 2) + padh # bottom right y - return y - - -def xyn2xy(x, w=640, h=640, padw=0, padh=0): - # Convert normalized segments into pixel segments, shape (n,2) - y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[:, 0] = w * x[:, 0] + padw # top left x - y[:, 1] = h * x[:, 1] + padh # top left y - return y - - -def segment2box(segment, width=640, height=640): - # Convert 1 segment label to 1 box label, applying inside-image constraint, i.e. (xy1, xy2, ...) to (xyxy) - x, y = segment.T # segment xy - inside = (x >= 0) & (y >= 0) & (x <= width) & (y <= height) - x, y, = x[inside], y[inside] - return np.array([x.min(), y.min(), x.max(), y.max()]) if any(x) else np.zeros((1, 4)) # xyxy - - -def segments2boxes(segments): - # Convert segment labels to box labels, i.e. (cls, xy1, xy2, ...) 
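A quick numeric check of the box-format conversions defined above (illustration only, using the module's own xyxy2xywh / xywh2xyxy):

import numpy as np

box_xywh = np.array([[50.0, 40.0, 20.0, 10.0]])   # centre (50, 40), 20 x 10
box_xyxy = xywh2xyxy(box_xywh)                     # -> [[40., 35., 60., 45.]]
back = xyxy2xywh(box_xyxy)                         # -> [[50., 40., 20., 10.]]
print(box_xyxy, back)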
to (cls, xywh) - boxes = [] - for s in segments: - x, y = s.T # segment xy - boxes.append([x.min(), y.min(), x.max(), y.max()]) # cls, xyxy - return xyxy2xywh(np.array(boxes)) # cls, xywh - - -def resample_segments(segments, n=1000): - # Up-sample an (n,2) segment - for i, s in enumerate(segments): - s = np.concatenate((s, s[0:1, :]), axis=0) - x = np.linspace(0, len(s) - 1, n) - xp = np.arange(len(s)) - segments[i] = np.concatenate([np.interp(x, xp, s[:, i]) for i in range(2)]).reshape(2, -1).T # segment xy - return segments - - -def scale_coords(img1_shape, coords, img0_shape, ratio_pad=None): - # Rescale coords (xyxy) from img1_shape to img0_shape - if ratio_pad is None: # calculate from img0_shape - gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1]) # gain = old / new - pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2 # wh padding - else: - gain = ratio_pad[0][0] - pad = ratio_pad[1] - - coords[:, [0, 2]] -= pad[0] # x padding - coords[:, [1, 3]] -= pad[1] # y padding - coords[:, :4] /= gain - clip_coords(coords, img0_shape) - return coords - - -def clip_coords(boxes, img_shape): - # Clip bounding xyxy bounding boxes to image shape (height, width) - boxes[:, 0].clamp_(0, img_shape[1]) # x1 - boxes[:, 1].clamp_(0, img_shape[0]) # y1 - boxes[:, 2].clamp_(0, img_shape[1]) # x2 - boxes[:, 3].clamp_(0, img_shape[0]) # y2 - - -def bbox_iou(box1, box2, x1y1x2y2=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-7): - # Returns the IoU of box1 to box2. box1 is 4, box2 is nx4 - box2 = box2.T - - # Get the coordinates of bounding boxes - if x1y1x2y2: # x1, y1, x2, y2 = box1 - b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3] - b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3] - else: # transform from xywh to xyxy - b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2 - b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2 - b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2 - b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2 - - # Intersection area - inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \ - (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0) - - # Union Area - w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps - w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps - union = w1 * h1 + w2 * h2 - inter + eps - - iou = inter / union - - if GIoU or DIoU or CIoU: - cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width - ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height - if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1 - c2 = cw ** 2 + ch ** 2 + eps # convex diagonal squared - rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 + - (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4 # center distance squared - if DIoU: - return iou - rho2 / c2 # DIoU - elif CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47 - v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / (h2 + eps)) - torch.atan(w1 / (h1 + eps)), 2) - with torch.no_grad(): - alpha = v / (v - iou + (1 + eps)) - return iou - (rho2 / c2 + v * alpha) # CIoU - else: # GIoU https://arxiv.org/pdf/1902.09630.pdf - c_area = cw * ch + eps # convex area - return iou - (c_area - union) / c_area # GIoU - else: - return iou # IoU - - - - -def bbox_alpha_iou(box1, box2, x1y1x2y2=False, GIoU=False, DIoU=False, CIoU=False, alpha=2, eps=1e-9): - # Returns tsqrt_he IoU of box1 to box2. 
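A small worked example for bbox_iou above (illustration only): boxes (0, 0, 10, 10) and (5, 5, 15, 15) overlap in a 5 x 5 region, so IoU = 25 / (100 + 100 - 25) ≈ 0.143.

import torch

a = torch.tensor([0.0, 0.0, 10.0, 10.0])
b = torch.tensor([[5.0, 5.0, 15.0, 15.0]])
print(bbox_iou(a, b, x1y1x2y2=True))  # tensor([0.1429])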
box1 is 4, box2 is nx4 - box2 = box2.T - - # Get the coordinates of bounding boxes - if x1y1x2y2: # x1, y1, x2, y2 = box1 - b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3] - b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3] - else: # transform from xywh to xyxy - b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2 - b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2 - b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2 - b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2 - - # Intersection area - inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \ - (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0) - - # Union Area - w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps - w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps - union = w1 * h1 + w2 * h2 - inter + eps - - # change iou into pow(iou+eps) - # iou = inter / union - iou = torch.pow(inter/union + eps, alpha) - # beta = 2 * alpha - if GIoU or DIoU or CIoU: - cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1) # convex (smallest enclosing box) width - ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1) # convex height - if CIoU or DIoU: # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1 - c2 = (cw ** 2 + ch ** 2) ** alpha + eps # convex diagonal - rho_x = torch.abs(b2_x1 + b2_x2 - b1_x1 - b1_x2) - rho_y = torch.abs(b2_y1 + b2_y2 - b1_y1 - b1_y2) - rho2 = ((rho_x ** 2 + rho_y ** 2) / 4) ** alpha # center distance - if DIoU: - return iou - rho2 / c2 # DIoU - elif CIoU: # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47 - v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2) - with torch.no_grad(): - alpha_ciou = v / ((1 + eps) - inter / union + v) - # return iou - (rho2 / c2 + v * alpha_ciou) # CIoU - return iou - (rho2 / c2 + torch.pow(v * alpha_ciou + eps, alpha)) # CIoU - else: # GIoU https://arxiv.org/pdf/1902.09630.pdf - # c_area = cw * ch + eps # convex area - # return iou - (c_area - union) / c_area # GIoU - c_area = torch.max(cw * ch + eps, union) # convex area - return iou - torch.pow((c_area - union) / c_area + eps, alpha) # GIoU - else: - return iou # torch.log(iou+eps) or iou - - -def box_iou(box1, box2): - # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py - """ - Return intersection-over-union (Jaccard index) of boxes. - Both sets of boxes are expected to be in (x1, y1, x2, y2) format. - Arguments: - box1 (Tensor[N, 4]) - box2 (Tensor[M, 4]) - Returns: - iou (Tensor[N, M]): the NxM matrix containing the pairwise - IoU values for every element in boxes1 and boxes2 - """ - - def box_area(box): - # box = 4xn - return (box[2] - box[0]) * (box[3] - box[1]) - - area1 = box_area(box1.T) - area2 = box_area(box2.T) - - # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2) - inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2) - return inter / (area1[:, None] + area2 - inter) # iou = inter / (area1 + area2 - inter) - - -def wh_iou(wh1, wh2): - # Returns the nxm IoU matrix. wh1 is nx2, wh2 is mx2 - wh1 = wh1[:, None] # [N,1,2] - wh2 = wh2[None] # [1,M,2] - inter = torch.min(wh1, wh2).prod(2) # [N,M] - return inter / (wh1.prod(2) + wh2.prod(2) - inter) # iou = inter / (area1 + area2 - inter) - - -def box_giou(box1, box2): - """ - Return generalized intersection-over-union (Jaccard index) between two sets of boxes. 
- Both sets of boxes are expected to be in ``(x1, y1, x2, y2)`` format with - ``0 <= x1 < x2`` and ``0 <= y1 < y2``. - Args: - boxes1 (Tensor[N, 4]): first set of boxes - boxes2 (Tensor[M, 4]): second set of boxes - Returns: - Tensor[N, M]: the NxM matrix containing the pairwise generalized IoU values - for every element in boxes1 and boxes2 - """ - - def box_area(box): - # box = 4xn - return (box[2] - box[0]) * (box[3] - box[1]) - - area1 = box_area(box1.T) - area2 = box_area(box2.T) - - inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2) - union = (area1[:, None] + area2 - inter) - - iou = inter / union - - lti = torch.min(box1[:, None, :2], box2[:, :2]) - rbi = torch.max(box1[:, None, 2:], box2[:, 2:]) - - whi = (rbi - lti).clamp(min=0) # [N,M,2] - areai = whi[:, :, 0] * whi[:, :, 1] - - return iou - (areai - union) / areai - - -def box_ciou(box1, box2, eps: float = 1e-7): - """ - Return complete intersection-over-union (Jaccard index) between two sets of boxes. - Both sets of boxes are expected to be in ``(x1, y1, x2, y2)`` format with - ``0 <= x1 < x2`` and ``0 <= y1 < y2``. - Args: - boxes1 (Tensor[N, 4]): first set of boxes - boxes2 (Tensor[M, 4]): second set of boxes - eps (float, optional): small number to prevent division by zero. Default: 1e-7 - Returns: - Tensor[N, M]: the NxM matrix containing the pairwise complete IoU values - for every element in boxes1 and boxes2 - """ - - def box_area(box): - # box = 4xn - return (box[2] - box[0]) * (box[3] - box[1]) - - area1 = box_area(box1.T) - area2 = box_area(box2.T) - - inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2) - union = (area1[:, None] + area2 - inter) - - iou = inter / union - - lti = torch.min(box1[:, None, :2], box2[:, :2]) - rbi = torch.max(box1[:, None, 2:], box2[:, 2:]) - - whi = (rbi - lti).clamp(min=0) # [N,M,2] - diagonal_distance_squared = (whi[:, :, 0] ** 2) + (whi[:, :, 1] ** 2) + eps - - # centers of boxes - x_p = (box1[:, None, 0] + box1[:, None, 2]) / 2 - y_p = (box1[:, None, 1] + box1[:, None, 3]) / 2 - x_g = (box2[:, 0] + box2[:, 2]) / 2 - y_g = (box2[:, 1] + box2[:, 3]) / 2 - # The distance between boxes' centers squared. - centers_distance_squared = (x_p - x_g) ** 2 + (y_p - y_g) ** 2 - - w_pred = box1[:, None, 2] - box1[:, None, 0] - h_pred = box1[:, None, 3] - box1[:, None, 1] - - w_gt = box2[:, 2] - box2[:, 0] - h_gt = box2[:, 3] - box2[:, 1] - - v = (4 / (torch.pi ** 2)) * torch.pow((torch.atan(w_gt / h_gt) - torch.atan(w_pred / h_pred)), 2) - with torch.no_grad(): - alpha = v / (1 - iou + v + eps) - return iou - (centers_distance_squared / diagonal_distance_squared) - alpha * v - - -def box_diou(box1, box2, eps: float = 1e-7): - """ - Return distance intersection-over-union (Jaccard index) between two sets of boxes. - Both sets of boxes are expected to be in ``(x1, y1, x2, y2)`` format with - ``0 <= x1 < x2`` and ``0 <= y1 < y2``. - Args: - boxes1 (Tensor[N, 4]): first set of boxes - boxes2 (Tensor[M, 4]): second set of boxes - eps (float, optional): small number to prevent division by zero. 
Default: 1e-7 - Returns: - Tensor[N, M]: the NxM matrix containing the pairwise distance IoU values - for every element in boxes1 and boxes2 - """ - - def box_area(box): - # box = 4xn - return (box[2] - box[0]) * (box[3] - box[1]) - - area1 = box_area(box1.T) - area2 = box_area(box2.T) - - inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2) - union = (area1[:, None] + area2 - inter) - - iou = inter / union - - lti = torch.min(box1[:, None, :2], box2[:, :2]) - rbi = torch.max(box1[:, None, 2:], box2[:, 2:]) - - whi = (rbi - lti).clamp(min=0) # [N,M,2] - diagonal_distance_squared = (whi[:, :, 0] ** 2) + (whi[:, :, 1] ** 2) + eps - - # centers of boxes - x_p = (box1[:, None, 0] + box1[:, None, 2]) / 2 - y_p = (box1[:, None, 1] + box1[:, None, 3]) / 2 - x_g = (box2[:, 0] + box2[:, 2]) / 2 - y_g = (box2[:, 1] + box2[:, 3]) / 2 - # The distance between boxes' centers squared. - centers_distance_squared = (x_p - x_g) ** 2 + (y_p - y_g) ** 2 - - # The distance IoU is the IoU penalized by a normalized - # distance between boxes' centers squared. - return iou - (centers_distance_squared / diagonal_distance_squared) - - -def non_max_suppression(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, multi_label=False, - labels=()): - """Runs Non-Maximum Suppression (NMS) on inference results - - Returns: - list of detections, on (n,6) tensor per image [xyxy, conf, cls] - """ - - nc = prediction.shape[2] - 5 # number of classes - xc = prediction[..., 4] > conf_thres # candidates - - # Settings - min_wh, max_wh = 2, 4096 # (pixels) minimum and maximum box width and height - max_det = 300 # maximum number of detections per image - max_nms = 30000 # maximum number of boxes into torchvision.ops.nms() - time_limit = 10.0 # seconds to quit after - redundant = True # require redundant detections - multi_label &= nc > 1 # multiple labels per box (adds 0.5ms/img) - merge = False # use merge-NMS - - t = time.time() - output = [torch.zeros((0, 6), device=prediction.device)] * prediction.shape[0] - for xi, x in enumerate(prediction): # image index, image inference - # Apply constraints - # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height - x = x[xc[xi]] # confidence - - # Cat apriori labels if autolabelling - if labels and len(labels[xi]): - l = labels[xi] - v = torch.zeros((len(l), nc + 5), device=x.device) - v[:, :4] = l[:, 1:5] # box - v[:, 4] = 1.0 # conf - v[range(len(l)), l[:, 0].long() + 5] = 1.0 # cls - x = torch.cat((x, v), 0) - - # If none remain process next image - if not x.shape[0]: - continue - - # Compute conf - if nc == 1: - x[:, 5:] = x[:, 4:5] # for models with one class, cls_loss is 0 and cls_conf is always 0.5, - # so there is no need to multiplicate. 
- else: - x[:, 5:] *= x[:, 4:5] # conf = obj_conf * cls_conf - - # Box (center x, center y, width, height) to (x1, y1, x2, y2) - box = xywh2xyxy(x[:, :4]) - - # Detections matrix nx6 (xyxy, conf, cls) - if multi_label: - i, j = (x[:, 5:] > conf_thres).nonzero(as_tuple=False).T - x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1) - else: # best class only - conf, j = x[:, 5:].max(1, keepdim=True) - x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres] - - # Filter by class - if classes is not None: - x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)] - - # Apply finite constraint - # if not torch.isfinite(x).all(): - # x = x[torch.isfinite(x).all(1)] - - # Check shape - n = x.shape[0] # number of boxes - if not n: # no boxes - continue - elif n > max_nms: # excess boxes - x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence - - # Batched NMS - c = x[:, 5:6] * (0 if agnostic else max_wh) # classes - boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores - i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS - if i.shape[0] > max_det: # limit detections - i = i[:max_det] - if merge and (1 < n < 3E3): # Merge NMS (boxes merged using weighted mean) - # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4) - iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix - weights = iou * scores[None] # box weights - x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes - if redundant: - i = i[iou.sum(1) > 1] # require redundancy - - output[xi] = x[i] - if (time.time() - t) > time_limit: - print(f'WARNING: NMS time limit {time_limit}s exceeded') - break # time limit exceeded - - return output - - -def non_max_suppression_kpt(prediction, conf_thres=0.25, iou_thres=0.45, classes=None, agnostic=False, multi_label=False, - labels=(), kpt_label=False, nc=None, nkpt=None): - """Runs Non-Maximum Suppression (NMS) on inference results - - Returns: - list of detections, on (n,6) tensor per image [xyxy, conf, cls] - """ - if nc is None: - nc = prediction.shape[2] - 5 if not kpt_label else prediction.shape[2] - 56 # number of classes - xc = prediction[..., 4] > conf_thres # candidates - - # Settings - min_wh, max_wh = 2, 4096 # (pixels) minimum and maximum box width and height - max_det = 300 # maximum number of detections per image - max_nms = 30000 # maximum number of boxes into torchvision.ops.nms() - time_limit = 10.0 # seconds to quit after - redundant = True # require redundant detections - multi_label &= nc > 1 # multiple labels per box (adds 0.5ms/img) - merge = False # use merge-NMS - - t = time.time() - output = [torch.zeros((0,6), device=prediction.device)] * prediction.shape[0] - for xi, x in enumerate(prediction): # image index, image inference - # Apply constraints - # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0 # width-height - x = x[xc[xi]] # confidence - - # Cat apriori labels if autolabelling - if labels and len(labels[xi]): - l = labels[xi] - v = torch.zeros((len(l), nc + 5), device=x.device) - v[:, :4] = l[:, 1:5] # box - v[:, 4] = 1.0 # conf - v[range(len(l)), l[:, 0].long() + 5] = 1.0 # cls - x = torch.cat((x, v), 0) - - # If none remain process next image - if not x.shape[0]: - continue - - # Compute conf - x[:, 5:5+nc] *= x[:, 4:5] # conf = obj_conf * cls_conf - - # Box (center x, center y, width, height) to (x1, y1, x2, y2) - box = xywh2xyxy(x[:, :4]) - - # Detections matrix nx6 (xyxy, conf, cls) - if multi_label: - i, j = (x[:, 5:] > 
conf_thres).nonzero(as_tuple=False).T - x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1) - else: # best class only - if not kpt_label: - conf, j = x[:, 5:].max(1, keepdim=True) - x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres] - else: - kpts = x[:, 6:] - conf, j = x[:, 5:6].max(1, keepdim=True) - x = torch.cat((box, conf, j.float(), kpts), 1)[conf.view(-1) > conf_thres] - - - # Filter by class - if classes is not None: - x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)] - - # Apply finite constraint - # if not torch.isfinite(x).all(): - # x = x[torch.isfinite(x).all(1)] - - # Check shape - n = x.shape[0] # number of boxes - if not n: # no boxes - continue - elif n > max_nms: # excess boxes - x = x[x[:, 4].argsort(descending=True)[:max_nms]] # sort by confidence - - # Batched NMS - c = x[:, 5:6] * (0 if agnostic else max_wh) # classes - boxes, scores = x[:, :4] + c, x[:, 4] # boxes (offset by class), scores - i = torchvision.ops.nms(boxes, scores, iou_thres) # NMS - if i.shape[0] > max_det: # limit detections - i = i[:max_det] - if merge and (1 < n < 3E3): # Merge NMS (boxes merged using weighted mean) - # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4) - iou = box_iou(boxes[i], boxes) > iou_thres # iou matrix - weights = iou * scores[None] # box weights - x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True) # merged boxes - if redundant: - i = i[iou.sum(1) > 1] # require redundancy - - output[xi] = x[i] - if (time.time() - t) > time_limit: - print(f'WARNING: NMS time limit {time_limit}s exceeded') - break # time limit exceeded - - return output - - -def strip_optimizer(f='best.pt', s=''): # from utils.general import *; strip_optimizer() - # Strip optimizer from 'f' to finalize training, optionally save as 's' - x = torch.load(f, map_location=torch.device('cpu')) - if x.get('ema'): - x['model'] = x['ema'] # replace model with ema - for k in 'optimizer', 'training_results', 'wandb_id', 'ema', 'updates': # keys - x[k] = None - x['epoch'] = -1 - x['model'].half() # to FP16 - for p in x['model'].parameters(): - p.requires_grad = False - torch.save(x, s or f) - mb = os.path.getsize(s or f) / 1E6 # filesize - print(f"Optimizer stripped from {f},{(' saved as %s,' % s) if s else ''} {mb:.1f}MB") - - -def print_mutation(hyp, results, yaml_file='hyp_evolved.yaml', bucket=''): - # Print mutation results to evolve.txt (for use with train.py --evolve) - a = '%10s' * len(hyp) % tuple(hyp.keys()) # hyperparam keys - b = '%10.3g' * len(hyp) % tuple(hyp.values()) # hyperparam values - c = '%10.4g' * len(results) % results # results (P, R, mAP@0.5, mAP@0.5:0.95, val_losses x 3) - print('\n%s\n%s\nEvolved fitness: %s\n' % (a, b, c)) - - if bucket: - url = 'gs://%s/evolve.txt' % bucket - if gsutil_getsize(url) > (os.path.getsize('evolve.txt') if os.path.exists('evolve.txt') else 0): - os.system('gsutil cp %s .' 
% url) # download evolve.txt if larger than local - - with open('evolve.txt', 'a') as f: # append result - f.write(c + b + '\n') - x = np.unique(np.loadtxt('evolve.txt', ndmin=2), axis=0) # load unique rows - x = x[np.argsort(-fitness(x))] # sort - np.savetxt('evolve.txt', x, '%10.3g') # save sort by fitness - - # Save yaml - for i, k in enumerate(hyp.keys()): - hyp[k] = float(x[0, i + 7]) - with open(yaml_file, 'w') as f: - results = tuple(x[0, :7]) - c = '%10.4g' * len(results) % results # results (P, R, mAP@0.5, mAP@0.5:0.95, val_losses x 3) - f.write('# Hyperparameter Evolution Results\n# Generations: %g\n# Metrics: ' % len(x) + c + '\n\n') - yaml.dump(hyp, f, sort_keys=False) - - if bucket: - os.system('gsutil cp evolve.txt %s gs://%s' % (yaml_file, bucket)) # upload - - -def apply_classifier(x, model, img, im0): - # applies a second stage classifier to yolo outputs - im0 = [im0] if isinstance(im0, np.ndarray) else im0 - for i, d in enumerate(x): # per image - if d is not None and len(d): - d = d.clone() - - # Reshape and pad cutouts - b = xyxy2xywh(d[:, :4]) # boxes - b[:, 2:] = b[:, 2:].max(1)[0].unsqueeze(1) # rectangle to square - b[:, 2:] = b[:, 2:] * 1.3 + 30 # pad - d[:, :4] = xywh2xyxy(b).long() - - # Rescale boxes from img_size to im0 size - scale_coords(img.shape[2:], d[:, :4], im0[i].shape) - - # Classes - pred_cls1 = d[:, 5].long() - ims = [] - for j, a in enumerate(d): # per item - cutout = im0[i][int(a[1]):int(a[3]), int(a[0]):int(a[2])] - im = cv2.resize(cutout, (224, 224)) # BGR - # cv2.imwrite('test%i.jpg' % j, cutout) - - im = im[:, :, ::-1].transpose(2, 0, 1) # BGR to RGB, to 3x416x416 - im = np.ascontiguousarray(im, dtype=np.float32) # uint8 to float32 - im /= 255.0 # 0 - 255 to 0.0 - 1.0 - ims.append(im) - - pred_cls2 = model(torch.Tensor(ims).to(d.device)).argmax(1) # classifier prediction - x[i] = x[i][pred_cls1 == pred_cls2] # retain matching class detections - - return x - - -def increment_path(path, exist_ok=True, sep=''): - # Increment path, i.e. runs/exp --> runs/exp{sep}0, runs/exp{sep}1 etc. - path = Path(path) # os-agnostic - if (path.exists() and exist_ok) or (not path.exists()): - return str(path) - else: - dirs = glob.glob(f"{path}{sep}*") # similar paths - matches = [re.search(rf"%s{sep}(\d+)" % path.stem, d) for d in dirs] - i = [int(m.groups()[0]) for m in matches if m] # indices - n = max(i) + 1 if i else 2 # increment number - return f"{path}{sep}{n}" # update path diff --git a/spaces/bioriAsaeru/text-to-voice/Astute Graphics Plugins Bundle 1.2.2 Crack.md b/spaces/bioriAsaeru/text-to-voice/Astute Graphics Plugins Bundle 1.2.2 Crack.md deleted file mode 100644 index 05076bdedefff7e2c38042676d975d4ed06766e1..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Astute Graphics Plugins Bundle 1.2.2 Crack.md +++ /dev/null @@ -1,92 +0,0 @@ - -

Astute Graphics Plugins Bundle 1.2.2 Crack: How to Unlock the Full Potential of Adobe Illustrator

-

Adobe Illustrator is one of the most popular and powerful vector design software in the world. However, sometimes it can be frustrating and time-consuming to create and edit vector artwork, especially if you need to perform complex or repetitive tasks.

-

That's why many vector artists and designers use Astute Graphics Plugins Bundle 1.2.2 Crack, a collection of plug-ins that seamlessly integrate into Illustrator and enhance its functionality and performance. Astute Graphics Plugins Bundle 1.2.2 Crack offers a variety of tools that can help you draw more naturally and intuitively, edit and manipulate vectors more easily and accurately, apply and adjust effects more dynamically and creatively, and save time and effort in your workflow.

-

Astute Graphics Plugins Bundle 1.2.2 Crack


Download Ziphttps://urloso.com/2uyObX



-

In this article, we will review some of the features and benefits of Astute Graphics Plugins Bundle 1.2.2 Crack, and show you how to use it to unlock the full potential of Adobe Illustrator.

-

What is Astute Graphics Plugins Bundle 1.2.2 Crack?

-

Astute Graphics Plugins Bundle 1.2.2 Crack is a collection of plug-ins that extend the capabilities of Adobe Illustrator. It includes 18 plug-ins that cover various aspects of vector design, such as drawing, editing, coloring, styling, transforming, aligning, filling, texturing, mirroring, saving, printing, and more.

-

Some of the plug-ins included in Astute Graphics Plugins Bundle 1.2.2 Crack are:

-
    -
  • DynamicSketch: A tool that allows you to sketch with a live preview of your path, adjusting the width and smoothness of your strokes based on the pressure of your tablet or the speed of your mouse.
  • -
  • VectorScribe: A tool that allows you to edit and create vectors faster and smarter, with features such as dynamic shapes, smart removal, path extension, ghost handles, pathscribe, cornerscribe, and more.
  • -
  • Phantasm: A tool that allows you to apply and change effects directly in Illustrator, such as halftones, duotones, color adjustments, curves, levels, hue/saturation, brightness/contrast, etc.
  • -
  • Texturino: A tool that allows you to add textures to your vector artwork with ease and flexibility, using texture brushes and opacity masks.
  • -
  • MirrorMe: A tool that allows you to create symmetrical artwork with live functionality, using axes or grids to mirror your shapes.
  • -
  • InkScribe: A tool that allows you to draw precisely and efficiently in vector, with features such as smart guides, annotations, ghost handles, rubber band mode, etc.
  • -
  • ColliderScribe: A tool that allows you to align shapes accurately and quickly with collision detection and space fill features.
  • -
  • Stipplism: A tool that allows you to explore dot and shape patterns faster and easier than ever before.
  • -
  • And many more!
  • -
-

How to use Astute Graphics Plugins Bundle 1.2.2 Crack?

-

To use Astute Graphics Plugins Bundle 1.2.2 Crack, you need to have Adobe Illustrator installed on your computer. You also need to download the crack file from a reliable source and follow the instructions to install it on your system.

-

Once you have installed Astute Graphics Plugins Bundle 1.2.2 Crack, you can access the plug-ins from the Illustrator menu bar or from the tools panel. Each plug-in has its own interface and settings that you can customize according to your preferences and needs.

-

You can use Astute Graphics Plugins Bundle 1.2.2 Crack for various purposes and projects in Illustrator. For example:

-

-
    -
  • You can use DynamicSketch to draw natural and organic shapes with variable width strokes.
  • -
  • You can use VectorScribe to edit and manipulate vectors with ease and precision.
  • -
  • You can use Phantasm to apply and adjust effects directly in Illustrator without switching to Photoshop.
  • -
  • You can use Texturino to add textures to your vector artwork for more depth and realism.
  • -
  • You can use MirrorMe to create symmetrical artwork with live functionality.
  • -
  • You can use InkScribe to draw precisely and efficiently in vector.
  • -
  • You can use ColliderScribe to align shapes accurately and quickly with collision detection and space fill features.
  • -
  • You can use Stipplism to explore dot and shape patterns faster and easier than ever before.
  • -
-

What are the benefits of using Astute Graphics Plugins Bundle 1.2.2 Crack?

-

Using Astute Graphics Plugins Bundle 1.2.2 Crack can bring many benefits to your vector design workflow in Illustrator. Some of them are:

-
    -
  • You can save time and effort by performing complex or repetitive tasks more quickly and easily.
  • -
  • You can improve your creativity and productivity by exploring new possibilities and techniques in vector design.
  • -
  • You can enhance your quality and accuracy by working with vectors more dynamically and intelligently.
  • -
  • You can simplify your workflow by using intuitive and integrated tools that work seamlessly with Illustrator's native features.
  • -
-

Conclusion

-

Astute Graphics Plugins Bundle 1.2.2 Crack is a collection of plug-ins that enhance the functionality and performance of Adobe Illustrator. It offers a variety of tools that can help you draw more naturally and intuitively, edit and manipulate vectors more easily and accurately, apply and adjust effects more dynamically and creatively, and save time and effort in your workflow.

-

If you are looking for a way to unlock the full potential of Adobe Illustrator, you might want to check out Astute Graphics Plugins Bundle 1.2.2 Crack. It is a must-have for vector artists and designers who want to work faster, smarter, and better in vector.

-

How to learn and master Astute Graphics Plugins Bundle 1.2.2 Crack?

-

If you want to learn and master Astute Graphics Plugins Bundle 1.2.2 Crack, you need to practice and experiment with the plug-ins on your own vector projects. You can also use some resources and tutorials that are available online to help you get started and improve your skills.

-

Some of the resources and tutorials that you can use are:

-
    -
  • The official website of Astute Graphics, where you can find detailed information and documentation about each plug-in, as well as tips and tricks, FAQs, and support.
  • -
  • The official YouTube channel of Astute Graphics, where you can watch video tutorials and demos of the plug-ins, as well as interviews and webinars with vector experts and artists.
  • -
  • The official blog of Astute Graphics, where you can read articles and case studies about the plug-ins, as well as news and updates.
  • -
  • The official forum of Astute Graphics, where you can interact with other users of the plug-ins, ask questions, share feedback, and showcase your work.
  • -
  • The online courses and workshops offered by Astute Graphics, where you can learn from experienced instructors and get certified in using the plug-ins.
  • -
-

How to get the best results with Astute Graphics Plugins Bundle 1.2.2 Crack?

-

To get the best results with Astute Graphics Plugins Bundle 1.2.2 Crack, you need to use the plug-ins wisely and creatively. You should not rely on the plug-ins alone to create your vector artwork, but rather use them as tools that complement and enhance your own vision and style.

-

Some of the tips that you can follow to get the best results with Astute Graphics Plugins Bundle 1.2.2 Crack are:

-
    -
  • Use the plug-ins that suit your needs and preferences. You don't have to use all of them at once or for every project. Choose the ones that help you achieve your goals and solve your problems.
  • -
  • Customize the settings and options of the plug-ins according to your project requirements and personal taste. You can adjust the parameters, presets, modes, colors, brushes, etc. of each plug-in to fit your needs.
  • -
  • Combine and integrate the plug-ins with each other and with Illustrator's native features. You can use multiple plug-ins together to create complex and unique effects and transformations. You can also use the plug-ins with Illustrator's tools, panels, layers, masks, etc. to create a seamless workflow.
  • -
  • Experiment and explore with the plug-ins. You can try different combinations and variations of the plug-ins to discover new possibilities and techniques in vector design. You can also use the plug-ins for purposes other than their intended ones to create unexpected and original results.
  • -
-

Conclusion

-

Astute Graphics Plugins Bundle 1.2.2 Crack is a collection of plug-ins that enhance the functionality and performance of Adobe Illustrator. It offers a variety of tools that can help you draw more naturally and intuitively, edit and manipulate vectors more easily and accurately, apply and adjust effects more dynamically and creatively, and save time and effort in your workflow.

-

If you are looking for a way to unlock the full potential of Adobe Illustrator, you might want to check out Astute Graphics Plugins Bundle 1.2.2 Crack. It is a must-have for vector artists and designers who want to work faster, smarter, and better in vector.

-

However, you should also consider the drawbacks of using Astute Graphics Plugins Bundle 1.2.2 Crack, such as violating the intellectual property rights of Astute Graphics, not receiving any technical support or updates from Astute Graphics, and exposing your computer or data to malware or viruses.

-

You should always respect the rights and wishes of the original developer and use the plug-ins ethically and responsibly. You should also be careful and do some research when looking for a reliable and safe source for Astute Graphics Plugins Bundle 1.2.2 Crack.

-

You should also practice and experiment with the plug-ins on your own vector projects, use the resources and tutorials available online to help you learn and master them, use them wisely and creatively according to your needs and preferences, combine them with each other and with Illustrator's native features for a seamless workflow, and explore them for new possibilities and techniques in vector design to get the best results.

-

Conclusion

-

In this article, we have discussed what Astute Graphics Plugins Bundle 1.2.2 Crack is, how to use it, where to find it, its benefits and drawbacks, how to learn and master it, and how to get the best results with it. We hope that this article has been informative and helpful for you.

3cee63e6c2
-
-
\ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Direccionamiento Ip Subredes Ejercicios Resueltos los mejores consejos y trucos para manejar scheart boliviano ra en el mbito de las redes.md b/spaces/bioriAsaeru/text-to-voice/Direccionamiento Ip Subredes Ejercicios Resueltos los mejores consejos y trucos para manejar scheart boliviano ra en el mbito de las redes.md deleted file mode 100644 index f7231236af1ef179fbec2d0e1e343b8498e45332..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Direccionamiento Ip Subredes Ejercicios Resueltos los mejores consejos y trucos para manejar scheart boliviano ra en el mbito de las redes.md +++ /dev/null @@ -1,6 +0,0 @@ -

Direccionamiento Ip Subredes Ejercicios Resueltos scheart boliviano ra


DOWNLOAD ····· https://urloso.com/2uyRJP



- - aaccfb2cb3
-
-
-

diff --git a/spaces/bioriAsaeru/text-to-voice/Kuch Kuch Hota Hai HD 720p Download Everything You Need to Know About the Movie.md b/spaces/bioriAsaeru/text-to-voice/Kuch Kuch Hota Hai HD 720p Download Everything You Need to Know About the Movie.md deleted file mode 100644 index 5a2ec50b2d62d490460833248b4847d5f7b9a338..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Kuch Kuch Hota Hai HD 720p Download Everything You Need to Know About the Movie.md +++ /dev/null @@ -1,6 +0,0 @@ -

KuchKuchHotaHaihd720pdownload


Download File ✯✯✯ https://urloso.com/2uyOFN



-
- aaccfb2cb3
-
-
-

diff --git a/spaces/birsardar/stable-diffusion-mat-outpainting-primer/metrics/inception_discriminative_score.py b/spaces/birsardar/stable-diffusion-mat-outpainting-primer/metrics/inception_discriminative_score.py deleted file mode 100644 index 38e867f05c429d7ce97c0ec737492f4342a20b81..0000000000000000000000000000000000000000 --- a/spaces/birsardar/stable-diffusion-mat-outpainting-primer/metrics/inception_discriminative_score.py +++ /dev/null @@ -1,37 +0,0 @@ - -import numpy as np -import scipy.linalg -from . import metric_utils -import sklearn.svm - -#---------------------------------------------------------------------------- - -def compute_ids(opts, max_real, num_gen): - # Direct TorchScript translation of http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz - detector_url = 'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/inception-2015-12-05.pt' - detector_kwargs = dict(return_features=True) # Return raw features before the softmax layer. - - real_activations = metric_utils.compute_feature_stats_for_dataset( - opts=opts, detector_url=detector_url, detector_kwargs=detector_kwargs, - rel_lo=0, rel_hi=0, capture_all=True, max_items=max_real).get_all() - - fake_activations = metric_utils.compute_feature_stats_for_generator( - opts=opts, detector_url=detector_url, detector_kwargs=detector_kwargs, - rel_lo=0, rel_hi=1, capture_all=True, max_items=num_gen).get_all() - - if opts.rank != 0: - return float('nan') - - svm = sklearn.svm.LinearSVC(dual=False) - svm_inputs = np.concatenate([real_activations, fake_activations]) - svm_targets = np.array([1] * real_activations.shape[0] + [0] * fake_activations.shape[0]) - print('Fitting ...') - svm.fit(svm_inputs, svm_targets) - u_ids = 1 - svm.score(svm_inputs, svm_targets) - real_outputs = svm.decision_function(real_activations) - fake_outputs = svm.decision_function(fake_activations) - p_ids = np.mean(fake_outputs > real_outputs) - - return float(u_ids), float(p_ids) - -#---------------------------------------------------------------------------- diff --git a/spaces/birsardar/stable-diffusion-mat-outpainting-primer/networks/basic_module.py b/spaces/birsardar/stable-diffusion-mat-outpainting-primer/networks/basic_module.py deleted file mode 100644 index 12d71999f66cf9891950e0979fe697a392e7fc21..0000000000000000000000000000000000000000 --- a/spaces/birsardar/stable-diffusion-mat-outpainting-primer/networks/basic_module.py +++ /dev/null @@ -1,583 +0,0 @@ -import sys -sys.path.insert(0, '../') -from collections import OrderedDict -import numpy as np - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch_utils import misc -from torch_utils import persistence -from torch_utils.ops import conv2d_resample -from torch_utils.ops import upfirdn2d -from torch_utils.ops import bias_act - -#---------------------------------------------------------------------------- - -@misc.profiled_function -def normalize_2nd_moment(x, dim=1, eps=1e-8): - return x * (x.square().mean(dim=dim, keepdim=True) + eps).rsqrt() - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class FullyConnectedLayer(nn.Module): - def __init__(self, - in_features, # Number of input features. - out_features, # Number of output features. - bias = True, # Apply additive bias before the activation function? - activation = 'linear', # Activation function: 'relu', 'lrelu', etc. - lr_multiplier = 1, # Learning rate multiplier. 
- bias_init = 0, # Initial value for the additive bias. - ): - super().__init__() - self.weight = torch.nn.Parameter(torch.randn([out_features, in_features]) / lr_multiplier) - self.bias = torch.nn.Parameter(torch.full([out_features], np.float32(bias_init))) if bias else None - self.activation = activation - - self.weight_gain = lr_multiplier / np.sqrt(in_features) - self.bias_gain = lr_multiplier - - def forward(self, x): - w = self.weight * self.weight_gain - b = self.bias - if b is not None and self.bias_gain != 1: - b = b * self.bias_gain - - if self.activation == 'linear' and b is not None: - # out = torch.addmm(b.unsqueeze(0), x, w.t()) - x = x.matmul(w.t()) - out = x + b.reshape([-1 if i == x.ndim-1 else 1 for i in range(x.ndim)]) - else: - x = x.matmul(w.t()) - out = bias_act.bias_act(x, b, act=self.activation, dim=x.ndim-1) - return out - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class Conv2dLayer(nn.Module): - def __init__(self, - in_channels, # Number of input channels. - out_channels, # Number of output channels. - kernel_size, # Width and height of the convolution kernel. - bias = True, # Apply additive bias before the activation function? - activation = 'linear', # Activation function: 'relu', 'lrelu', etc. - up = 1, # Integer upsampling factor. - down = 1, # Integer downsampling factor. - resample_filter = [1,3,3,1], # Low-pass filter to apply when resampling activations. - conv_clamp = None, # Clamp the output to +-X, None = disable clamping. - trainable = True, # Update the weights of this layer during training? - ): - super().__init__() - self.activation = activation - self.up = up - self.down = down - self.register_buffer('resample_filter', upfirdn2d.setup_filter(resample_filter)) - self.conv_clamp = conv_clamp - self.padding = kernel_size // 2 - self.weight_gain = 1 / np.sqrt(in_channels * (kernel_size ** 2)) - self.act_gain = bias_act.activation_funcs[activation].def_gain - - weight = torch.randn([out_channels, in_channels, kernel_size, kernel_size]) - bias = torch.zeros([out_channels]) if bias else None - if trainable: - self.weight = torch.nn.Parameter(weight) - self.bias = torch.nn.Parameter(bias) if bias is not None else None - else: - self.register_buffer('weight', weight) - if bias is not None: - self.register_buffer('bias', bias) - else: - self.bias = None - - def forward(self, x, gain=1): - w = self.weight * self.weight_gain - x = conv2d_resample.conv2d_resample(x=x, w=w, f=self.resample_filter, up=self.up, down=self.down, - padding=self.padding) - - act_gain = self.act_gain * gain - act_clamp = self.conv_clamp * gain if self.conv_clamp is not None else None - out = bias_act.bias_act(x, self.bias, act=self.activation, gain=act_gain, clamp=act_clamp) - return out - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class ModulatedConv2d(nn.Module): - def __init__(self, - in_channels, # Number of input channels. - out_channels, # Number of output channels. - kernel_size, # Width and height of the convolution kernel. - style_dim, # dimension of the style code - demodulate=True, # perfrom demodulation - up=1, # Integer upsampling factor. - down=1, # Integer downsampling factor. - resample_filter=[1,3,3,1], # Low-pass filter to apply when resampling activations. - conv_clamp=None, # Clamp the output to +-X, None = disable clamping. 
- ): - super().__init__() - self.demodulate = demodulate - - self.weight = torch.nn.Parameter(torch.randn([1, out_channels, in_channels, kernel_size, kernel_size])) - self.out_channels = out_channels - self.kernel_size = kernel_size - self.weight_gain = 1 / np.sqrt(in_channels * (kernel_size ** 2)) - self.padding = self.kernel_size // 2 - self.up = up - self.down = down - self.register_buffer('resample_filter', upfirdn2d.setup_filter(resample_filter)) - self.conv_clamp = conv_clamp - - self.affine = FullyConnectedLayer(style_dim, in_channels, bias_init=1) - - def forward(self, x, style): - batch, in_channels, height, width = x.shape - style = self.affine(style).view(batch, 1, in_channels, 1, 1) - weight = self.weight * self.weight_gain * style - - if self.demodulate: - decoefs = (weight.pow(2).sum(dim=[2, 3, 4]) + 1e-8).rsqrt() - weight = weight * decoefs.view(batch, self.out_channels, 1, 1, 1) - - weight = weight.view(batch * self.out_channels, in_channels, self.kernel_size, self.kernel_size) - x = x.view(1, batch * in_channels, height, width) - x = conv2d_resample.conv2d_resample(x=x, w=weight, f=self.resample_filter, up=self.up, down=self.down, - padding=self.padding, groups=batch) - out = x.view(batch, self.out_channels, *x.shape[2:]) - - return out - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class StyleConv(torch.nn.Module): - def __init__(self, - in_channels, # Number of input channels. - out_channels, # Number of output channels. - style_dim, # Intermediate latent (W) dimensionality. - resolution, # Resolution of this layer. - kernel_size = 3, # Convolution kernel size. - up = 1, # Integer upsampling factor. - use_noise = True, # Enable noise input? - activation = 'lrelu', # Activation function: 'relu', 'lrelu', etc. - resample_filter = [1,3,3,1], # Low-pass filter to apply when resampling activations. - conv_clamp = None, # Clamp the output of convolution layers to +-X, None = disable clamping. 
- demodulate = True, # perform demodulation - ): - super().__init__() - - self.conv = ModulatedConv2d(in_channels=in_channels, - out_channels=out_channels, - kernel_size=kernel_size, - style_dim=style_dim, - demodulate=demodulate, - up=up, - resample_filter=resample_filter, - conv_clamp=conv_clamp) - - self.use_noise = use_noise - self.resolution = resolution - if use_noise: - self.register_buffer('noise_const', torch.randn([resolution, resolution])) - self.noise_strength = torch.nn.Parameter(torch.zeros([])) - - self.bias = torch.nn.Parameter(torch.zeros([out_channels])) - self.activation = activation - self.act_gain = bias_act.activation_funcs[activation].def_gain - self.conv_clamp = conv_clamp - - def forward(self, x, style, noise_mode='random', gain=1): - x = self.conv(x, style) - - assert noise_mode in ['random', 'const', 'none'] - - if self.use_noise: - if noise_mode == 'random': - xh, xw = x.size()[-2:] - noise = torch.randn([x.shape[0], 1, xh, xw], device=x.device) \ - * self.noise_strength - if noise_mode == 'const': - noise = self.noise_const * self.noise_strength - x = x + noise - - act_gain = self.act_gain * gain - act_clamp = self.conv_clamp * gain if self.conv_clamp is not None else None - out = bias_act.bias_act(x, self.bias, act=self.activation, gain=act_gain, clamp=act_clamp) - - return out - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class ToRGB(torch.nn.Module): - def __init__(self, - in_channels, - out_channels, - style_dim, - kernel_size=1, - resample_filter=[1,3,3,1], - conv_clamp=None, - demodulate=False): - super().__init__() - - self.conv = ModulatedConv2d(in_channels=in_channels, - out_channels=out_channels, - kernel_size=kernel_size, - style_dim=style_dim, - demodulate=demodulate, - resample_filter=resample_filter, - conv_clamp=conv_clamp) - self.bias = torch.nn.Parameter(torch.zeros([out_channels])) - self.register_buffer('resample_filter', upfirdn2d.setup_filter(resample_filter)) - self.conv_clamp = conv_clamp - - def forward(self, x, style, skip=None): - x = self.conv(x, style) - out = bias_act.bias_act(x, self.bias, clamp=self.conv_clamp) - - if skip is not None: - if skip.shape != out.shape: - skip = upfirdn2d.upsample2d(skip, self.resample_filter) - out = out + skip - - return out - -#---------------------------------------------------------------------------- - -@misc.profiled_function -def get_style_code(a, b): - return torch.cat([a, b], dim=1) - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class DecBlockFirst(nn.Module): - def __init__(self, in_channels, out_channels, activation, style_dim, use_noise, demodulate, img_channels): - super().__init__() - self.fc = FullyConnectedLayer(in_features=in_channels*2, - out_features=in_channels*4**2, - activation=activation) - self.conv = StyleConv(in_channels=in_channels, - out_channels=out_channels, - style_dim=style_dim, - resolution=4, - kernel_size=3, - use_noise=use_noise, - activation=activation, - demodulate=demodulate, - ) - self.toRGB = ToRGB(in_channels=out_channels, - out_channels=img_channels, - style_dim=style_dim, - kernel_size=1, - demodulate=False, - ) - - def forward(self, x, ws, gs, E_features, noise_mode='random'): - x = self.fc(x).view(x.shape[0], -1, 4, 4) - x = x + E_features[2] - style = get_style_code(ws[:, 0], gs) - x = self.conv(x, style, noise_mode=noise_mode) - style = get_style_code(ws[:, 1], gs) - img = self.toRGB(x, style, skip=None) - - return x, 
img - - -@persistence.persistent_class -class DecBlockFirstV2(nn.Module): - def __init__(self, in_channels, out_channels, activation, style_dim, use_noise, demodulate, img_channels): - super().__init__() - self.conv0 = Conv2dLayer(in_channels=in_channels, - out_channels=in_channels, - kernel_size=3, - activation=activation, - ) - self.conv1 = StyleConv(in_channels=in_channels, - out_channels=out_channels, - style_dim=style_dim, - resolution=4, - kernel_size=3, - use_noise=use_noise, - activation=activation, - demodulate=demodulate, - ) - self.toRGB = ToRGB(in_channels=out_channels, - out_channels=img_channels, - style_dim=style_dim, - kernel_size=1, - demodulate=False, - ) - - def forward(self, x, ws, gs, E_features, noise_mode='random'): - # x = self.fc(x).view(x.shape[0], -1, 4, 4) - x = self.conv0(x) - x = x + E_features[2] - style = get_style_code(ws[:, 0], gs) - x = self.conv1(x, style, noise_mode=noise_mode) - style = get_style_code(ws[:, 1], gs) - img = self.toRGB(x, style, skip=None) - - return x, img - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class DecBlock(nn.Module): - def __init__(self, res, in_channels, out_channels, activation, style_dim, use_noise, demodulate, img_channels): # res = 2, ..., resolution_log2 - super().__init__() - self.res = res - - self.conv0 = StyleConv(in_channels=in_channels, - out_channels=out_channels, - style_dim=style_dim, - resolution=2**res, - kernel_size=3, - up=2, - use_noise=use_noise, - activation=activation, - demodulate=demodulate, - ) - self.conv1 = StyleConv(in_channels=out_channels, - out_channels=out_channels, - style_dim=style_dim, - resolution=2**res, - kernel_size=3, - use_noise=use_noise, - activation=activation, - demodulate=demodulate, - ) - self.toRGB = ToRGB(in_channels=out_channels, - out_channels=img_channels, - style_dim=style_dim, - kernel_size=1, - demodulate=False, - ) - - def forward(self, x, img, ws, gs, E_features, noise_mode='random'): - style = get_style_code(ws[:, self.res * 2 - 5], gs) - x = self.conv0(x, style, noise_mode=noise_mode) - x = x + E_features[self.res] - style = get_style_code(ws[:, self.res * 2 - 4], gs) - x = self.conv1(x, style, noise_mode=noise_mode) - style = get_style_code(ws[:, self.res * 2 - 3], gs) - img = self.toRGB(x, style, skip=img) - - return x, img - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class MappingNet(torch.nn.Module): - def __init__(self, - z_dim, # Input latent (Z) dimensionality, 0 = no latent. - c_dim, # Conditioning label (C) dimensionality, 0 = no label. - w_dim, # Intermediate latent (W) dimensionality. - num_ws, # Number of intermediate latents to output, None = do not broadcast. - num_layers = 8, # Number of mapping layers. - embed_features = None, # Label embedding dimensionality, None = same as w_dim. - layer_features = None, # Number of intermediate features in the mapping layers, None = same as w_dim. - activation = 'lrelu', # Activation function: 'relu', 'lrelu', etc. - lr_multiplier = 0.01, # Learning rate multiplier for the mapping layers. - w_avg_beta = 0.995, # Decay for tracking the moving average of W during training, None = do not track. 
- ): - super().__init__() - self.z_dim = z_dim - self.c_dim = c_dim - self.w_dim = w_dim - self.num_ws = num_ws - self.num_layers = num_layers - self.w_avg_beta = w_avg_beta - - if embed_features is None: - embed_features = w_dim - if c_dim == 0: - embed_features = 0 - if layer_features is None: - layer_features = w_dim - features_list = [z_dim + embed_features] + [layer_features] * (num_layers - 1) + [w_dim] - - if c_dim > 0: - self.embed = FullyConnectedLayer(c_dim, embed_features) - for idx in range(num_layers): - in_features = features_list[idx] - out_features = features_list[idx + 1] - layer = FullyConnectedLayer(in_features, out_features, activation=activation, lr_multiplier=lr_multiplier) - setattr(self, f'fc{idx}', layer) - - if num_ws is not None and w_avg_beta is not None: - self.register_buffer('w_avg', torch.zeros([w_dim])) - - def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, skip_w_avg_update=False): - # Embed, normalize, and concat inputs. - x = None - with torch.autograd.profiler.record_function('input'): - if self.z_dim > 0: - x = normalize_2nd_moment(z.to(torch.float32)) - if self.c_dim > 0: - y = normalize_2nd_moment(self.embed(c.to(torch.float32))) - x = torch.cat([x, y], dim=1) if x is not None else y - - # Main layers. - for idx in range(self.num_layers): - layer = getattr(self, f'fc{idx}') - x = layer(x) - - # Update moving average of W. - if self.w_avg_beta is not None and self.training and not skip_w_avg_update: - with torch.autograd.profiler.record_function('update_w_avg'): - self.w_avg.copy_(x.detach().mean(dim=0).lerp(self.w_avg, self.w_avg_beta)) - - # Broadcast. - if self.num_ws is not None: - with torch.autograd.profiler.record_function('broadcast'): - x = x.unsqueeze(1).repeat([1, self.num_ws, 1]) - - # Apply truncation. 
- if truncation_psi != 1: - with torch.autograd.profiler.record_function('truncate'): - assert self.w_avg_beta is not None - if self.num_ws is None or truncation_cutoff is None: - x = self.w_avg.lerp(x, truncation_psi) - else: - x[:, :truncation_cutoff] = self.w_avg.lerp(x[:, :truncation_cutoff], truncation_psi) - - return x - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class DisFromRGB(nn.Module): - def __init__(self, in_channels, out_channels, activation): # res = 2, ..., resolution_log2 - super().__init__() - self.conv = Conv2dLayer(in_channels=in_channels, - out_channels=out_channels, - kernel_size=1, - activation=activation, - ) - - def forward(self, x): - return self.conv(x) - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class DisBlock(nn.Module): - def __init__(self, in_channels, out_channels, activation): # res = 2, ..., resolution_log2 - super().__init__() - self.conv0 = Conv2dLayer(in_channels=in_channels, - out_channels=in_channels, - kernel_size=3, - activation=activation, - ) - self.conv1 = Conv2dLayer(in_channels=in_channels, - out_channels=out_channels, - kernel_size=3, - down=2, - activation=activation, - ) - self.skip = Conv2dLayer(in_channels=in_channels, - out_channels=out_channels, - kernel_size=1, - down=2, - bias=False, - ) - - def forward(self, x): - skip = self.skip(x, gain=np.sqrt(0.5)) - x = self.conv0(x) - x = self.conv1(x, gain=np.sqrt(0.5)) - out = skip + x - - return out - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class MinibatchStdLayer(torch.nn.Module): - def __init__(self, group_size, num_channels=1): - super().__init__() - self.group_size = group_size - self.num_channels = num_channels - - def forward(self, x): - N, C, H, W = x.shape - with misc.suppress_tracer_warnings(): # as_tensor results are registered as constants - G = torch.min(torch.as_tensor(self.group_size), - torch.as_tensor(N)) if self.group_size is not None else N - F = self.num_channels - c = C // F - - y = x.reshape(G, -1, F, c, H, - W) # [GnFcHW] Split minibatch N into n groups of size G, and channels C into F groups of size c. - y = y - y.mean(dim=0) # [GnFcHW] Subtract mean over group. - y = y.square().mean(dim=0) # [nFcHW] Calc variance over group. - y = (y + 1e-8).sqrt() # [nFcHW] Calc stddev over group. - y = y.mean(dim=[2, 3, 4]) # [nF] Take average over channels and pixels. - y = y.reshape(-1, F, 1, 1) # [nF11] Add missing dimensions. - y = y.repeat(G, 1, H, W) # [NFHW] Replicate over group and pixels. - x = torch.cat([x, y], dim=1) # [NCHW] Append to input as new channels. - return x - -#---------------------------------------------------------------------------- - -@persistence.persistent_class -class Discriminator(torch.nn.Module): - def __init__(self, - c_dim, # Conditioning label (C) dimensionality. - img_resolution, # Input resolution. - img_channels, # Number of input color channels. - channel_base = 32768, # Overall multiplier for the number of channels. - channel_max = 512, # Maximum number of channels in any layer. - channel_decay = 1, - cmap_dim = None, # Dimensionality of mapped conditioning label, None = default. - activation = 'lrelu', - mbstd_group_size = 4, # Group size for the minibatch standard deviation layer, None = entire minibatch. - mbstd_num_channels = 1, # Number of features for the minibatch standard deviation layer, 0 = disable. 
- ): - super().__init__() - self.c_dim = c_dim - self.img_resolution = img_resolution - self.img_channels = img_channels - - resolution_log2 = int(np.log2(img_resolution)) - assert img_resolution == 2 ** resolution_log2 and img_resolution >= 4 - self.resolution_log2 = resolution_log2 - - def nf(stage): - return np.clip(int(channel_base / 2 ** (stage * channel_decay)), 1, channel_max) - - if cmap_dim == None: - cmap_dim = nf(2) - if c_dim == 0: - cmap_dim = 0 - self.cmap_dim = cmap_dim - - if c_dim > 0: - self.mapping = MappingNet(z_dim=0, c_dim=c_dim, w_dim=cmap_dim, num_ws=None, w_avg_beta=None) - - Dis = [DisFromRGB(img_channels+1, nf(resolution_log2), activation)] - for res in range(resolution_log2, 2, -1): - Dis.append(DisBlock(nf(res), nf(res-1), activation)) - - if mbstd_num_channels > 0: - Dis.append(MinibatchStdLayer(group_size=mbstd_group_size, num_channels=mbstd_num_channels)) - Dis.append(Conv2dLayer(nf(2) + mbstd_num_channels, nf(2), kernel_size=3, activation=activation)) - self.Dis = nn.Sequential(*Dis) - - self.fc0 = FullyConnectedLayer(nf(2)*4**2, nf(2), activation=activation) - self.fc1 = FullyConnectedLayer(nf(2), 1 if cmap_dim == 0 else cmap_dim) - - def forward(self, images_in, masks_in, c): - x = torch.cat([masks_in - 0.5, images_in], dim=1) - x = self.Dis(x) - x = self.fc1(self.fc0(x.flatten(start_dim=1))) - - if self.c_dim > 0: - cmap = self.mapping(None, c) - - if self.cmap_dim > 0: - x = (x * cmap).sum(dim=1, keepdim=True) * (1 / np.sqrt(self.cmap_dim)) - - return x diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/.github/ISSUE_TEMPLATE.md b/spaces/brjathu/HMR2.0/vendor/detectron2/.github/ISSUE_TEMPLATE.md deleted file mode 100644 index 5e8aaa2d3722e7e73a3d94b2b7dfc4f751d7a240..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/.github/ISSUE_TEMPLATE.md +++ /dev/null @@ -1,5 +0,0 @@ - -Please select an issue template from -https://github.com/facebookresearch/detectron2/issues/new/choose . - -Otherwise your issue will be closed. 
diff --git a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5-flask-master/static/style.css b/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5-flask-master/static/style.css deleted file mode 100644 index db5294a85f262c2e0836e4071715e7c360b1ce2c..0000000000000000000000000000000000000000 --- a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5-flask-master/static/style.css +++ /dev/null @@ -1,29 +0,0 @@ -html, -body { - height: 100%; -} - -body { - display: -ms-flexbox; - display: flex; - -ms-flex-align: center; - align-items: center; - padding-top: 40px; - padding-bottom: 40px; - background-color: #f5f5f5; -} - -.form-signin { - width: 100%; - max-width: 330px; - padding: 15px; - margin: auto; -} - -.form-signin .form-control { - position: relative; - box-sizing: border-box; - height: auto; - padding: 10px; - font-size: 16px; -} diff --git a/spaces/camenduru-com/VITS-Umamusume-voice-synthesizer/text/cantonese.py b/spaces/camenduru-com/VITS-Umamusume-voice-synthesizer/text/cantonese.py deleted file mode 100644 index b66d12138b81b70b86f18217d24a08fce76305c0..0000000000000000000000000000000000000000 --- a/spaces/camenduru-com/VITS-Umamusume-voice-synthesizer/text/cantonese.py +++ /dev/null @@ -1,59 +0,0 @@ -import re -import cn2an -import opencc - - -converter = opencc.OpenCC('jyutjyu') - -# List of (Latin alphabet, ipa) pairs: -_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('A', 'ei˥'), - ('B', 'biː˥'), - ('C', 'siː˥'), - ('D', 'tiː˥'), - ('E', 'iː˥'), - ('F', 'e˥fuː˨˩'), - ('G', 'tsiː˥'), - ('H', 'ɪk̚˥tsʰyː˨˩'), - ('I', 'ɐi˥'), - ('J', 'tsei˥'), - ('K', 'kʰei˥'), - ('L', 'e˥llou˨˩'), - ('M', 'ɛːm˥'), - ('N', 'ɛːn˥'), - ('O', 'ou˥'), - ('P', 'pʰiː˥'), - ('Q', 'kʰiːu˥'), - ('R', 'aː˥lou˨˩'), - ('S', 'ɛː˥siː˨˩'), - ('T', 'tʰiː˥'), - ('U', 'juː˥'), - ('V', 'wiː˥'), - ('W', 'tʊk̚˥piː˥juː˥'), - ('X', 'ɪk̚˥siː˨˩'), - ('Y', 'waːi˥'), - ('Z', 'iː˨sɛːt̚˥') -]] - - -def number_to_cantonese(text): - return re.sub(r'\d+(?:\.?\d+)?', lambda x: cn2an.an2cn(x.group()), text) - - -def latin_to_ipa(text): - for regex, replacement in _latin_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def cantonese_to_ipa(text): - text = number_to_cantonese(text.upper()) - text = converter.convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text) - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! ', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/candlend/vits-hoshimi/vits/text/english.py b/spaces/candlend/vits-hoshimi/vits/text/english.py deleted file mode 100644 index f4634388a201db42c7e69895dd4a09ccc681c5bc..0000000000000000000000000000000000000000 --- a/spaces/candlend/vits-hoshimi/vits/text/english.py +++ /dev/null @@ -1,188 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. 
"basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -''' - - -# Regular expression matching whitespace: - - -import re -import inflect -from unidecode import unidecode -import eng_to_ipa as ipa -_inflect = inflect.engine() -_comma_number_re = re.compile(r'([0-9][0-9\,]+[0-9])') -_decimal_number_re = re.compile(r'([0-9]+\.[0-9]+)') -_pounds_re = re.compile(r'£([0-9\,]*[0-9]+)') -_dollars_re = re.compile(r'\$([0-9\.\,]*[0-9]+)') -_ordinal_re = re.compile(r'[0-9]+(st|nd|rd|th)') -_number_re = re.compile(r'[0-9]+') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - - -# List of (ipa, lazy ipa) pairs: -_lazy_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('æ', 'e'), - ('ɑ', 'a'), - ('ɔ', 'o'), - ('ð', 'z'), - ('θ', 's'), - ('ɛ', 'e'), - ('ɪ', 'i'), - ('ʊ', 'u'), - ('ʒ', 'ʥ'), - ('ʤ', 'ʥ'), - ('ˈ', '↓'), -]] - -# List of (ipa, lazy ipa2) pairs: -_lazy_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('ð', 'z'), - ('θ', 's'), - ('ʒ', 'ʑ'), - ('ʤ', 'dʑ'), - ('ˈ', '↓'), -]] - -# List of (ipa, ipa2) pairs -_ipa_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('ʤ', 'dʒ'), - ('ʧ', 'tʃ') -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def collapse_whitespace(text): - return re.sub(r'\s+', ' ', text) - - -def _remove_commas(m): - return m.group(1).replace(',', '') - - -def _expand_decimal_point(m): - return m.group(1).replace('.', ' point ') - - -def _expand_dollars(m): - match = m.group(1) - parts = match.split('.') - if len(parts) > 2: - return match + ' dollars' # Unexpected format - dollars = int(parts[0]) if parts[0] else 0 - cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0 - if dollars and cents: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s, %s %s' % (dollars, dollar_unit, cents, cent_unit) - elif dollars: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - return '%s %s' % (dollars, dollar_unit) - elif cents: - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s' % (cents, cent_unit) - else: - return 'zero dollars' - - -def _expand_ordinal(m): - return _inflect.number_to_words(m.group(0)) - - -def _expand_number(m): - num = int(m.group(0)) - if num > 1000 and num < 3000: - if num == 2000: - return 'two thousand' - elif num > 2000 and num < 2010: - return 'two thousand ' + _inflect.number_to_words(num % 100) - elif num % 100 == 0: - return _inflect.number_to_words(num // 100) + ' hundred' - else: - return _inflect.number_to_words(num, andword='', zero='oh', group=2).replace(', ', ' ') - else: - return _inflect.number_to_words(num, andword='') - - -def normalize_numbers(text): - text = re.sub(_comma_number_re, _remove_commas, text) - text = re.sub(_pounds_re, r'\1 pounds', text) - text = re.sub(_dollars_re, _expand_dollars, text) - text = re.sub(_decimal_number_re, 
_expand_decimal_point, text) - text = re.sub(_ordinal_re, _expand_ordinal, text) - text = re.sub(_number_re, _expand_number, text) - return text - - -def mark_dark_l(text): - return re.sub(r'l([^aeiouæɑɔəɛɪʊ ]*(?: |$))', lambda x: 'ɫ'+x.group(1), text) - - -def english_to_ipa(text): - text = unidecode(text).lower() - text = expand_abbreviations(text) - text = normalize_numbers(text) - phonemes = ipa.convert(text) - phonemes = collapse_whitespace(phonemes) - return phonemes - - -def english_to_lazy_ipa(text): - text = english_to_ipa(text) - for regex, replacement in _lazy_ipa: - text = re.sub(regex, replacement, text) - return text - - -def english_to_ipa2(text): - text = english_to_ipa(text) - text = mark_dark_l(text) - for regex, replacement in _ipa_to_ipa2: - text = re.sub(regex, replacement, text) - return text.replace('...', '…') - - -def english_to_lazy_ipa2(text): - text = english_to_ipa(text) - for regex, replacement in _lazy_ipa2: - text = re.sub(regex, replacement, text) - return text \ No newline at end of file diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/modeling/test_backbone.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/modeling/test_backbone.py deleted file mode 100644 index 3bb100f9bd5b4939e4646821c5a60d51c8ea65fd..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/modeling/test_backbone.py +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -import unittest -import torch - -import detectron2.export.torchscript # apply patch # noqa -from detectron2 import model_zoo -from detectron2.config import get_cfg -from detectron2.layers import ShapeSpec -from detectron2.modeling.backbone import build_resnet_backbone -from detectron2.modeling.backbone.fpn import build_resnet_fpn_backbone - - -class TestBackBone(unittest.TestCase): - def test_resnet_scriptability(self): - cfg = get_cfg() - resnet = build_resnet_backbone(cfg, ShapeSpec(channels=3)) - - scripted_resnet = torch.jit.script(resnet) - - inp = torch.rand(2, 3, 100, 100) - out1 = resnet(inp)["res4"] - out2 = scripted_resnet(inp)["res4"] - self.assertTrue(torch.allclose(out1, out2)) - - def test_fpn_scriptability(self): - cfg = model_zoo.get_config("Misc/scratch_mask_rcnn_R_50_FPN_3x_gn.yaml") - bb = build_resnet_fpn_backbone(cfg, ShapeSpec(channels=3)) - bb_s = torch.jit.script(bb) - - inp = torch.rand(2, 3, 128, 128) - out1 = bb(inp)["p5"] - out2 = bb_s(inp)["p5"] - self.assertTrue(torch.allclose(out1, out2)) diff --git a/spaces/cccc-c/web-ui-pub/_next/static/chunks/698-f6bc8e9278737c93.js b/spaces/cccc-c/web-ui-pub/_next/static/chunks/698-f6bc8e9278737c93.js deleted file mode 100644 index f8219f8c6d7cf299958256ed0d71b1f484a43b92..0000000000000000000000000000000000000000 --- a/spaces/cccc-c/web-ui-pub/_next/static/chunks/698-f6bc8e9278737c93.js +++ /dev/null @@ -1,25 +0,0 @@ -(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[698],{93644:function(){"trimStart"in String.prototype||(String.prototype.trimStart=String.prototype.trimLeft),"trimEnd"in String.prototype||(String.prototype.trimEnd=String.prototype.trimRight),"description"in Symbol.prototype||Object.defineProperty(Symbol.prototype,"description",{configurable:!0,get:function(){var e=/\((.*)\)/.exec(this.toString());return e?e[1]:void 0}}),Array.prototype.flat||(Array.prototype.flat=function(e,t){return 
t=this.concat.apply([],this),e>1&&t.some(Array.isArray)?t.flat(e-1):t},Array.prototype.flatMap=function(e,t){return this.map(e,t).flat()}),Promise.prototype.finally||(Promise.prototype.finally=function(e){if("function"!=typeof e)return this.then(e,e);var t=this.constructor||Promise;return this.then(function(r){return t.resolve(e()).then(function(){return r})},function(r){return t.resolve(e()).then(function(){throw r})})}),Object.fromEntries||(Object.fromEntries=function(e){return Array.from(e).reduce(function(e,t){return e[t[0]]=t[1],e},{})})},12409:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"addBasePath",{enumerable:!0,get:function(){return o}});let n=r(60150),u=r(75588);function o(e,t){return(0,u.normalizePathTrailingSlash)((0,n.addPathPrefix)(e,""))}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},30930:function(e,t){"use strict";function r(e){var t,r;t=self.__next_s,r=()=>{e()},t&&t.length?t.reduce((e,t)=>{let[r,n]=t;return e.then(()=>new Promise((e,t)=>{let u=document.createElement("script");if(n)for(let e in n)"children"!==e&&u.setAttribute(e,n[e]);r?(u.src=r,u.onload=()=>e(),u.onerror=t):n&&(u.innerHTML=n.children,setTimeout(e)),document.head.appendChild(u)}))},Promise.resolve()).catch(e=>{console.error(e)}).then(()=>{r()}):r()}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"appBootstrap",{enumerable:!0,get:function(){return r}}),window.next={version:"13.4.9",appDir:!0},("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},303:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"callServer",{enumerable:!0,get:function(){return u}});let n=r(2353);async function u(e,t){let r=(0,n.getServerActionDispatcher)();if(!r)throw Error("Invariant: missing action dispatcher.");return new Promise((n,u)=>{r({actionId:e,actionArgs:t,resolve:n,reject:u})})}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},13426:function(e,t,r){"use strict";let n,u;Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"hydrate",{enumerable:!0,get:function(){return N}});let o=r(26927),l=r(25909);r(93644);let a=o._(r(93194)),i=l._(r(86006)),c=r(35456),s=r(27268);r(15456);let f=o._(r(59214)),d=r(303),p=r(45080),h=window.console.error;window.console.error=function(){for(var e=arguments.length,t=Array(e),r=0;r{if((0,p.isNextRouterError)(e.error)){e.preventDefault();return}});let _=e=>t=>e(t)+"",y=r.u,b={};r.u=_(e=>encodeURI(b[e]||y(e)));let v=r.k;r.k=_(v);let m=r.miniCssF;r.miniCssF=_(m),self.__next_require__=r,self.__next_chunk_load__=e=>{if(!e)return Promise.resolve();let[t,n]=e.split(":");return b[t]=n,r.e(t)};let g=document,O=()=>{let{pathname:e,search:t}=location;return e+t},P=new TextEncoder,E=!1,R=!1;function j(e){if(0===e[0])n=[];else{if(!n)throw Error("Unexpected server data: missing bootstrap script.");u?u.enqueue(P.encode(e[1])):n.push(e[1])}}let S=function(){u&&!R&&(u.close(),R=!0,n=void 
0),E=!0};"loading"===document.readyState?document.addEventListener("DOMContentLoaded",S,!1):S();let T=self.__next_f=self.__next_f||[];T.forEach(j),T.push=j;let M=new Map;function w(e){let{cacheKey:t}=e;i.default.useEffect(()=>{M.delete(t)});let r=function(e){let t=M.get(e);if(t)return t;let r=new ReadableStream({start(e){n&&(n.forEach(t=>{e.enqueue(P.encode(t))}),E&&!R&&(e.close(),R=!0,n=void 0)),u=e}}),o=(0,c.createFromReadableStream)(r,{callServer:d.callServer});return M.set(e,o),o}(t),o=(0,i.use)(r);return o}let C=i.default.Fragment;function x(e){let{children:t}=e;return t}function A(e){return i.default.createElement(w,{...e,cacheKey:O()})}function N(){let e=i.default.createElement(C,null,i.default.createElement(s.HeadManagerContext.Provider,{value:{appDir:!0}},i.default.createElement(x,null,i.default.createElement(A,null)))),t={onRecoverableError:f.default},r="__next_error__"===document.documentElement.id;r?a.default.createRoot(g,t).render(e):i.default.startTransition(()=>a.default.hydrateRoot(g,e,t))}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},53333:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0});let n=r(30930);(0,n.appBootstrap)(()=>{r(2353),r(49180);let{hydrate:e}=r(13426);e()}),("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},71002:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"AppRouterAnnouncer",{enumerable:!0,get:function(){return l}});let n=r(86006),u=r(8431),o="next-route-announcer";function l(e){let{tree:t}=e,[r,l]=(0,n.useState)(null);(0,n.useEffect)(()=>{let e=function(){var e;let t=document.getElementsByName(o)[0];if(null==t?void 0:null==(e=t.shadowRoot)?void 0:e.childNodes[0])return t.shadowRoot.childNodes[0];{let e=document.createElement(o);e.style.cssText="position:absolute";let t=document.createElement("div");t.ariaLive="assertive",t.id="__next-route-announcer__",t.role="alert",t.style.cssText="position:absolute;border:0;height:1px;margin:-1px;padding:0;width:1px;clip:rect(0 0 0 0);overflow:hidden;white-space:nowrap;word-wrap:normal";let r=e.attachShadow({mode:"open"});return r.appendChild(t),document.body.appendChild(e),t}}();return l(e),()=>{let e=document.getElementsByTagName(o)[0];(null==e?void 0:e.isConnected)&&document.body.removeChild(e)}},[]);let[a,i]=(0,n.useState)(""),c=(0,n.useRef)();return(0,n.useEffect)(()=>{let e="";if(document.title)e=document.title;else{let t=document.querySelector("h1");t&&(e=t.innerText||t.textContent||"")}void 0!==c.current&&i(e),c.current=e},[t]),r?(0,u.createPortal)(a,r):null}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},34852:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{RSC:function(){return r},ACTION:function(){return n},NEXT_ROUTER_STATE_TREE:function(){return u},NEXT_ROUTER_PREFETCH:function(){return o},NEXT_URL:function(){return l},FETCH_CACHE_HEADER:function(){return a},RSC_CONTENT_TYPE_HEADER:function(){return 
i},RSC_VARY_HEADER:function(){return c},FLIGHT_PARAMETERS:function(){return s},NEXT_RSC_UNION_QUERY:function(){return f}});let r="RSC",n="Next-Action",u="Next-Router-State-Tree",o="Next-Router-Prefetch",l="Next-Url",a="x-vercel-sc-headers",i="text/x-component",c=r+", "+u+", "+o,s=[[r],[u],[o]],f="_rsc";("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},2353:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{getServerActionDispatcher:function(){return E},urlToUrlWithoutFlightMarker:function(){return R},default:function(){return w}});let n=r(25909),u=n._(r(86006)),o=r(15456),l=r(85426),a=r(74741),i=r(8744),c=r(76173),s=r(18688),f=r(47330),d=r(89343),p=r(30753),h=r(12409),_=r(71002),y=r(22418),b=r(62484),v=r(68792),m=r(75238),g=r(34852),O=new Map,P=null;function E(){return P}function R(e){let t=new URL(e,location.origin);return t.searchParams.delete(g.NEXT_RSC_UNION_QUERY),t.pathname.endsWith("/index.txt")?t.pathname=t.pathname.slice(0,-10):t.pathname=t.pathname.slice(0,-4),t}function j(e){return e.origin!==window.location.origin}function S(e){let{tree:t,pushRef:r,canonicalUrl:n,sync:o}=e;return(0,u.useInsertionEffect)(()=>{let e={__NA:!0,tree:t};r.pendingPush&&(0,i.createHrefFromUrl)(new URL(window.location.href))!==n?(r.pendingPush=!1,window.history.pushState(e,"",n)):window.history.replaceState(e,"",n),o()},[t,r,n,o]),null}let T=()=>({status:o.CacheStates.LAZY_INITIALIZED,data:null,subTreeData:null,parallelRoutes:new Map});function M(e){let{buildId:t,initialHead:r,initialTree:n,initialCanonicalUrl:i,children:f,assetPrefix:g,notFound:E,notFoundStyles:R,asNotFound:M}=e,w=(0,u.useMemo)(()=>(0,d.createInitialRouterState)({buildId:t,children:f,initialCanonicalUrl:i,initialTree:n,initialParallelRoutes:O,isServer:!1,location:window.location,initialHead:r}),[t,f,i,n,r]),[{tree:C,cache:x,prefetchCache:A,pushRef:N,focusAndScrollRef:I,canonicalUrl:D,nextUrl:k},F,U]=(0,s.useReducerWithReduxDevtools)(l.reducer,w);(0,u.useEffect)(()=>{O=null},[]);let{searchParams:L,pathname:H}=(0,u.useMemo)(()=>{let e=new URL(D,window.location.href);return{searchParams:e.searchParams,pathname:e.pathname}},[D]),$=(0,u.useCallback)((e,t,r)=>{(0,u.startTransition)(()=>{F({type:a.ACTION_SERVER_PATCH,flightData:t,previousTree:e,overrideCanonicalUrl:r,cache:T(),mutable:{}})})},[F]),W=(0,u.useCallback)((e,t,r,n)=>{let u=new URL((0,h.addBasePath)(e),location.href);return F({type:a.ACTION_NAVIGATE,url:u,isExternalUrl:j(u),locationSearch:location.search,forceOptimisticNavigation:r,shouldScroll:null==n||n,navigateType:t,cache:T(),mutable:{}})},[F]);!function(e,t,r){let n=(0,u.useCallback)(n=>{(0,u.startTransition)(()=>{t({...n,type:a.ACTION_SERVER_ACTION,mutable:{},navigate:r,changeByServerResponse:e})})},[e,t,r]);P=n}($,F,W);let B=(0,u.useMemo)(()=>{let e={back:()=>window.history.back(),forward:()=>window.history.forward(),prefetch:(e,t)=>{if((0,p.isBot)(window.navigator.userAgent))return;let r=new URL((0,h.addBasePath)(e),location.href);j(r)||(0,u.startTransition)(()=>{var e;F({type:a.ACTION_PREFETCH,url:r,kind:null!=(e=null==t?void 0:t.kind)?e:a.PrefetchKind.FULL})})},replace:(e,t)=>{void 0===t&&(t={}),(0,u.startTransition)(()=>{var r;W(e,"replace",!!t.forceOptimisticNavigation,null==(r=t.scroll)||r)})},push:(e,t)=>{void 
0===t&&(t={}),(0,u.startTransition)(()=>{var r;W(e,"push",!!t.forceOptimisticNavigation,null==(r=t.scroll)||r)})},refresh:()=>{(0,u.startTransition)(()=>{F({type:a.ACTION_REFRESH,cache:T(),mutable:{},origin:window.location.origin})})},fastRefresh:()=>{throw Error("fastRefresh can only be used in development mode. Please use refresh instead.")}};return e},[F,W]);if((0,u.useEffect)(()=>{window.next&&(window.next.router=B)},[B]),N.mpaNavigation){let e=window.location;N.pendingPush?e.assign(D):e.replace(D),(0,u.use)((0,m.createInfinitePromise)())}let Y=(0,u.useCallback)(e=>{let{state:t}=e;if(t){if(!t.__NA){window.location.reload();return}(0,u.startTransition)(()=>{F({type:a.ACTION_RESTORE,url:new URL(window.location.href),tree:t.tree})})}},[F]);(0,u.useEffect)(()=>(window.addEventListener("popstate",Y),()=>{window.removeEventListener("popstate",Y)}),[Y]);let V=(0,u.useMemo)(()=>(0,v.findHeadInCache)(x,C[1]),[x,C]),G=u.default.createElement(y.RedirectBoundary,null,V,x.subTreeData,u.default.createElement(_.AppRouterAnnouncer,{tree:C}));return u.default.createElement(u.default.Fragment,null,u.default.createElement(S,{tree:C,pushRef:N,canonicalUrl:D,sync:U}),u.default.createElement(c.PathnameContext.Provider,{value:H},u.default.createElement(c.SearchParamsContext.Provider,{value:L},u.default.createElement(o.GlobalLayoutRouterContext.Provider,{value:{buildId:t,changeByServerResponse:$,tree:C,focusAndScrollRef:I,nextUrl:k}},u.default.createElement(o.AppRouterContext.Provider,{value:B},u.default.createElement(o.LayoutRouterContext.Provider,{value:{childNodes:x.parallelRoutes,tree:C,url:D}},u.default.createElement(b.NotFoundBoundary,{notFound:E,notFoundStyles:R,asNotFound:M},G)))))))}function w(e){let{globalErrorComponent:t,...r}=e;return u.default.createElement(f.ErrorBoundary,{errorComponent:t},u.default.createElement(M,r))}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},90259:function(e,t,r){"use strict";function n(e){}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"clientHookInServerComponentError",{enumerable:!0,get:function(){return n}}),r(26927),r(86006),("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},47330:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{ErrorBoundaryHandler:function(){return a},default:function(){return i},ErrorBoundary:function(){return c}});let n=r(26927),u=n._(r(86006)),o=r(4e3),l={error:{fontFamily:'system-ui,"Segoe UI",Roboto,Helvetica,Arial,sans-serif,"Apple Color Emoji","Segoe UI Emoji"',height:"100vh",textAlign:"center",display:"flex",flexDirection:"column",alignItems:"center",justifyContent:"center"},text:{fontSize:"14px",fontWeight:400,lineHeight:"28px",margin:"0 8px"}};class a extends u.default.Component{static getDerivedStateFromError(e){return{error:e}}static getDerivedStateFromProps(e,t){return e.pathname!==t.previousPathname&&t.error?{error:null,previousPathname:e.pathname}:{error:t.error,previousPathname:e.pathname}}render(){return 
this.state.error?u.default.createElement(u.default.Fragment,null,this.props.errorStyles,u.default.createElement(this.props.errorComponent,{error:this.state.error,reset:this.reset})):this.props.children}constructor(e){super(e),this.reset=()=>{this.setState({error:null})},this.state={error:null,previousPathname:this.props.pathname}}}function i(e){let{error:t}=e,r=null==t?void 0:t.digest;return u.default.createElement("html",null,u.default.createElement("head",null),u.default.createElement("body",null,u.default.createElement("div",{style:l.error},u.default.createElement("div",null,u.default.createElement("h2",{style:l.text},"Application error: a "+(r?"server":"client")+"-side exception has occurred (see the "+(r?"server logs":"browser console")+" for more information)."),r?u.default.createElement("p",{style:l.text},"Digest: "+r):null))))}function c(e){let{errorComponent:t,errorStyles:r,children:n}=e,l=(0,o.usePathname)();return t?u.default.createElement(a,{pathname:l,errorComponent:t,errorStyles:r},n):u.default.createElement(u.default.Fragment,null,n)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},47308:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{DYNAMIC_ERROR_CODE:function(){return r},DynamicServerError:function(){return n}});let r="DYNAMIC_SERVER_USAGE";class n extends Error{constructor(e){super("Dynamic server usage: "+e),this.digest=r}}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},75238:function(e,t){"use strict";let r;function n(){return r||(r=new Promise(()=>{})),r}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"createInfinitePromise",{enumerable:!0,get:function(){return n}}),("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},45080:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"isNextRouterError",{enumerable:!0,get:function(){return o}});let n=r(62951),u=r(14024);function o(e){return e&&e.digest&&((0,u.isRedirectError)(e)||(0,n.isNotFoundError)(e))}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},49180:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"default",{enumerable:!0,get:function(){return E}});let n=r(26927),u=r(25909),o=u._(r(86006)),l=n._(r(8431)),a=r(15456),i=r(52368),c=r(75238),s=r(47330),f=r(50655),d=r(92998),p=r(22418),h=r(62484),_=r(65143),y=r(49101),b=["bottom","height","left","right","top","width","x","y"];function v(e,t){let r=e.getBoundingClientRect();return r.top>=0&&r.top<=t}class m extends o.default.Component{componentDidMount(){this.handlePotentialScroll()}componentDidUpdate(){this.props.focusAndScrollRef.apply&&this.handlePotentialScroll()}render(){return 
this.props.children}constructor(...e){super(...e),this.handlePotentialScroll=()=>{let{focusAndScrollRef:e,segmentPath:t}=this.props;if(e.apply){var r;if(0!==e.segmentPaths.length&&!e.segmentPaths.some(e=>t.every((t,r)=>(0,f.matchSegment)(t,e[r]))))return;let n=null,u=e.hashFragment;if(u&&(n="top"===u?document.body:null!=(r=document.getElementById(u))?r:document.getElementsByName(u)[0]),n||(n=l.default.findDOMNode(this)),!(n instanceof Element))return;for(;!(n instanceof HTMLElement)||function(e){let t=e.getBoundingClientRect();return b.every(e=>0===t[e])}(n);){if(null===n.nextElementSibling)return;n=n.nextElementSibling}e.apply=!1,e.hashFragment=null,e.segmentPaths=[],(0,d.handleSmoothScroll)(()=>{if(u){n.scrollIntoView();return}let e=document.documentElement,t=e.clientHeight;!v(n,t)&&(e.scrollTop=0,v(n,t)||n.scrollIntoView())},{dontForceLayout:!0}),n.focus()}}}}function g(e){let{segmentPath:t,children:r}=e,n=(0,o.useContext)(a.GlobalLayoutRouterContext);if(!n)throw Error("invariant global layout router not mounted");return o.default.createElement(m,{segmentPath:t,focusAndScrollRef:n.focusAndScrollRef},r)}function O(e){let{parallelRouterKey:t,url:r,childNodes:n,childProp:u,segmentPath:l,tree:s,cacheKey:d}=e,p=(0,o.useContext)(a.GlobalLayoutRouterContext);if(!p)throw Error("invariant global layout router not mounted");let{buildId:h,changeByServerResponse:_,tree:y}=p,b=n.get(d);if(u&&null!==u.current&&(b?b.status===a.CacheStates.LAZY_INITIALIZED&&(b.status=a.CacheStates.READY,b.subTreeData=u.current):(b={status:a.CacheStates.READY,data:null,subTreeData:u.current,parallelRoutes:new Map},n.set(d,b))),!b||b.status===a.CacheStates.LAZY_INITIALIZED){let e=function e(t,r){if(t){let[n,u]=t,o=2===t.length;if((0,f.matchSegment)(r[0],n)&&r[1].hasOwnProperty(u)){if(o){let t=e(void 0,r[1][u]);return[r[0],{...r[1],[u]:[t[0],t[1],t[2],"refetch"]}]}return[r[0],{...r[1],[u]:e(t.slice(2),r[1][u])}]}}return r}(["",...l],y);b={status:a.CacheStates.DATA_FETCH,data:(0,i.fetchServerResponse)(new URL(r,location.origin),e,p.nextUrl,h),subTreeData:null,head:b&&b.status===a.CacheStates.LAZY_INITIALIZED?b.head:void 0,parallelRoutes:b&&b.status===a.CacheStates.LAZY_INITIALIZED?b.parallelRoutes:new Map},n.set(d,b)}if(!b)throw Error("Child node should always exist");if(b.subTreeData&&b.data)throw Error("Child node should not have both subTreeData and data");if(b.data){let[e,t]=(0,o.use)(b.data);b.data=null,setTimeout(()=>{(0,o.startTransition)(()=>{_(y,e,t)})}),(0,o.use)((0,c.createInfinitePromise)())}b.subTreeData||(0,o.use)((0,c.createInfinitePromise)());let v=o.default.createElement(a.LayoutRouterContext.Provider,{value:{tree:s[1][t],childNodes:b.parallelRoutes,url:r}},b.subTreeData);return v}function P(e){let{children:t,loading:r,loadingStyles:n,hasLoading:u}=e;return u?o.default.createElement(o.Suspense,{fallback:o.default.createElement(o.default.Fragment,null,n,r)},t):o.default.createElement(o.default.Fragment,null,t)}function E(e){let{parallelRouterKey:t,segmentPath:r,childProp:n,error:u,errorStyles:l,templateStyles:i,loading:c,loadingStyles:d,hasLoading:b,template:v,notFound:m,notFoundStyles:E,asNotFound:R,styles:j}=e,S=(0,o.useContext)(a.LayoutRouterContext);if(!S)throw Error("invariant expected layout router to be mounted");let{childNodes:T,tree:M,url:w}=S,C=T.get(t);C||(C=new Map,T.set(t,C));let x=M[1][t][0],A=n.segment,N=(0,_.getSegmentValue)(x),I=[x];return o.default.createElement(o.default.Fragment,null,j,I.map(e=>{let 
j=(0,f.matchSegment)(e,A),S=(0,_.getSegmentValue)(e),T=(0,y.createRouterCacheKey)(e);return o.default.createElement(a.TemplateContext.Provider,{key:(0,y.createRouterCacheKey)(e,!0),value:o.default.createElement(g,{segmentPath:r},o.default.createElement(s.ErrorBoundary,{errorComponent:u,errorStyles:l},o.default.createElement(P,{hasLoading:b,loading:c,loadingStyles:d},o.default.createElement(h.NotFoundBoundary,{notFound:m,notFoundStyles:E,asNotFound:R},o.default.createElement(p.RedirectBoundary,null,o.default.createElement(O,{parallelRouterKey:t,url:w,tree:M,childNodes:C,childProp:j?n:null,segmentPath:r,cacheKey:T,isActive:N===S}))))))},i,v)}))}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},50655:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{matchSegment:function(){return u},canSegmentBeOverridden:function(){return o}});let n=r(24778),u=(e,t)=>"string"==typeof e?"string"==typeof t&&e===t:"string"!=typeof t&&e[0]===t[0]&&e[1]===t[1],o=(e,t)=>{var r;return!Array.isArray(e)&&!!Array.isArray(t)&&(null==(r=(0,n.getSegmentParam)(e))?void 0:r.param)===t[0]};("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},4e3:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{ReadonlyURLSearchParams:function(){return p},useSearchParams:function(){return h},usePathname:function(){return _},ServerInsertedHTMLContext:function(){return i.ServerInsertedHTMLContext},useServerInsertedHTML:function(){return i.useServerInsertedHTML},useRouter:function(){return y},useParams:function(){return b},useSelectedLayoutSegments:function(){return v},useSelectedLayoutSegment:function(){return m},redirect:function(){return c.redirect},notFound:function(){return s.notFound}});let n=r(86006),u=r(15456),o=r(76173),l=r(90259),a=r(65143),i=r(73476),c=r(14024),s=r(62951),f=Symbol("internal for urlsearchparams readonly");function d(){return Error("ReadonlyURLSearchParams cannot be modified")}class p{[Symbol.iterator](){return this[f][Symbol.iterator]()}append(){throw d()}delete(){throw d()}set(){throw d()}sort(){throw d()}constructor(e){this[f]=e,this.entries=e.entries.bind(e),this.forEach=e.forEach.bind(e),this.get=e.get.bind(e),this.getAll=e.getAll.bind(e),this.has=e.has.bind(e),this.keys=e.keys.bind(e),this.values=e.values.bind(e),this.toString=e.toString.bind(e)}}function h(){(0,l.clientHookInServerComponentError)("useSearchParams");let e=(0,n.useContext)(o.SearchParamsContext),t=(0,n.useMemo)(()=>e?new p(e):null,[e]);return t}function _(){return(0,l.clientHookInServerComponentError)("usePathname"),(0,n.useContext)(o.PathnameContext)}function y(){(0,l.clientHookInServerComponentError)("useRouter");let e=(0,n.useContext)(u.AppRouterContext);if(null===e)throw Error("invariant expected app router to be mounted");return e}function b(){(0,l.clientHookInServerComponentError)("useParams");let e=(0,n.useContext)(u.GlobalLayoutRouterContext);return e?function e(t,r){void 0===r&&(r={});let n=t[1];for(let t of Object.values(n)){let 
n=t[0],u=Array.isArray(n),o=u?n[1]:n;!o||o.startsWith("__PAGE__")||(u&&(r[n[0]]=n[1]),r=e(t,r))}return r}(e.tree):null}function v(e){void 0===e&&(e="children"),(0,l.clientHookInServerComponentError)("useSelectedLayoutSegments");let{tree:t}=(0,n.useContext)(u.LayoutRouterContext);return function e(t,r,n,u){let o;if(void 0===n&&(n=!0),void 0===u&&(u=[]),n)o=t[1][r];else{var l;let e=t[1];o=null!=(l=e.children)?l:Object.values(e)[0]}if(!o)return u;let i=o[0],c=(0,a.getSegmentValue)(i);return!c||c.startsWith("__PAGE__")?u:(u.push(c),e(o,r,!1,u))}(t,e)}function m(e){void 0===e&&(e="children"),(0,l.clientHookInServerComponentError)("useSelectedLayoutSegment");let t=v(e);return 0===t.length?null:t[0]}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},62484:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"NotFoundBoundary",{enumerable:!0,get:function(){return a}});let n=r(26927),u=n._(r(86006)),o=r(4e3);class l extends u.default.Component{static getDerivedStateFromError(e){if((null==e?void 0:e.digest)==="NEXT_NOT_FOUND")return{notFoundTriggered:!0};throw e}static getDerivedStateFromProps(e,t){return e.pathname!==t.previousPathname&&t.notFoundTriggered?{notFoundTriggered:!1,previousPathname:e.pathname}:{notFoundTriggered:t.notFoundTriggered,previousPathname:e.pathname}}render(){return this.state.notFoundTriggered?u.default.createElement(u.default.Fragment,null,u.default.createElement("meta",{name:"robots",content:"noindex"}),this.props.notFoundStyles,this.props.notFound):this.props.children}constructor(e){super(e),this.state={notFoundTriggered:!!e.asNotFound,previousPathname:e.pathname}}}function a(e){let{notFound:t,notFoundStyles:r,asNotFound:n,children:a}=e,i=(0,o.usePathname)();return t?u.default.createElement(l,{pathname:i,notFound:t,notFoundStyles:r,asNotFound:n},a):u.default.createElement(u.default.Fragment,null,a)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},62951:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{notFound:function(){return n},isNotFoundError:function(){return u}});let r="NEXT_NOT_FOUND";function n(){let e=Error(r);throw e.digest=r,e}function u(e){return(null==e?void 0:e.digest)===r}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},22418:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{RedirectErrorBoundary:function(){return i},RedirectBoundary:function(){return c}});let n=r(25909),u=n._(r(86006)),o=r(4e3),l=r(14024);function a(e){let{redirect:t,reset:r,redirectType:n}=e,a=(0,o.useRouter)();return(0,u.useEffect)(()=>{u.default.startTransition(()=>{n===l.RedirectType.push?a.push(t,{}):a.replace(t,{}),r()})},[t,n,r,a]),null}class i extends u.default.Component{static getDerivedStateFromError(e){if((0,l.isRedirectError)(e)){let 
t=(0,l.getURLFromRedirectError)(e),r=(0,l.getRedirectTypeFromError)(e);return{redirect:t,redirectType:r}}throw e}render(){let{redirect:e,redirectType:t}=this.state;return null!==e&&null!==t?u.default.createElement(a,{redirect:e,redirectType:t,reset:()=>this.setState({redirect:null})}):this.props.children}constructor(e){super(e),this.state={redirect:null,redirectType:null}}}function c(e){let{children:t}=e,r=(0,o.useRouter)();return u.default.createElement(i,{router:r},t)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},14024:function(e,t,r){"use strict";var n,u;Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{RedirectType:function(){return n},getRedirectError:function(){return a},redirect:function(){return i},isRedirectError:function(){return c},getURLFromRedirectError:function(){return s},getRedirectTypeFromError:function(){return f}});let o=r(24437),l="NEXT_REDIRECT";function a(e,t){let r=Error(l);r.digest=l+";"+t+";"+e;let n=o.requestAsyncStorage.getStore();return n&&(r.mutableCookies=n.mutableCookies),r}function i(e,t){throw void 0===t&&(t="replace"),a(e,t)}function c(e){if("string"!=typeof(null==e?void 0:e.digest))return!1;let[t,r,n]=e.digest.split(";",3);return t===l&&("replace"===r||"push"===r)&&"string"==typeof n}function s(e){return c(e)?e.digest.split(";",3)[2]:null}function f(e){if(!c(e))throw Error("Not a redirect error");return e.digest.split(";",3)[1]}(u=n||(n={})).push="push",u.replace="replace",("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},92306:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"default",{enumerable:!0,get:function(){return l}});let n=r(25909),u=n._(r(86006)),o=r(15456);function l(){let e=(0,u.useContext)(o.TemplateContext);return u.default.createElement(u.default.Fragment,null,e)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},68654:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"applyFlightData",{enumerable:!0,get:function(){return l}});let n=r(15456),u=r(90743),o=r(23033);function l(e,t,r,l){void 0===l&&(l=!1);let[a,i,c]=r.slice(-3);return null!==i&&(3===r.length?(t.status=n.CacheStates.READY,t.subTreeData=i,(0,u.fillLazyItemsTillLeafWithHead)(t,e,a,c,l)):(t.status=n.CacheStates.READY,t.subTreeData=e.subTreeData,t.parallelRoutes=new Map(e.parallelRoutes),(0,o.fillCacheWithNewSubTreeData)(t,e,r,l)),!0)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},76031:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"applyRouterStatePatchToTree",{enumerable:!0,get:function(){return function e(t,r,o){let l;let[a,i,,,c]=r;if(1===t.length){let e=u(r,o);return e}let[s,f]=t;if(!(0,n.matchSegment)(s,a))return null;let d=2===t.length;if(d)l=u(i[f],o);else 
if(null===(l=e(t.slice(2),i[f],o)))return null;let p=[t[0],{...i,[f]:l}];return c&&(p[4]=!0),p}}});let n=r(50655);function u(e,t){let[r,o]=e,[l,a]=t;if("__DEFAULT__"===l&&"__DEFAULT__"!==r)return e;if((0,n.matchSegment)(r,l)){let t={};for(let e in o){let r=void 0!==a[e];r?t[e]=u(o[e],a[e]):t[e]=o[e]}for(let e in a)t[e]||(t[e]=a[e]);let n=[r,t];return e[2]&&(n[2]=e[2]),e[3]&&(n[3]=e[3]),e[4]&&(n[4]=e[4]),n}return t}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},41781:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{extractPathFromFlightRouterState:function(){return a},computeChangedPath:function(){return i}});let n=r(47399),u=r(50655),o=e=>"string"==typeof e?e:e[1];function l(e){return e.split("/").reduce((e,t)=>""===t||t.startsWith("(")&&t.endsWith(")")?e:e+"/"+t,"")||"/"}function a(e){var t;let r=Array.isArray(e[0])?e[0][1]:e[0];if("__DEFAULT__"===r||n.INTERCEPTION_ROUTE_MARKERS.some(e=>r.startsWith(e)))return;if(r.startsWith("__PAGE__"))return"";let u=[r],o=null!=(t=e[1])?t:{},i=o.children?a(o.children):void 0;if(void 0!==i)u.push(i);else for(let[e,t]of Object.entries(o)){if("children"===e)continue;let r=a(t);void 0!==r&&u.push(r)}return l(u.join("/"))}function i(e,t){let r=function e(t,r){let[l,i]=t,[c,s]=r,f=o(l),d=o(c);if(n.INTERCEPTION_ROUTE_MARKERS.some(e=>f.startsWith(e)||d.startsWith(e)))return"";if(!(0,u.matchSegment)(l,c)){var p;return null!=(p=a(r))?p:""}for(let t in i)if(s[t]){let r=e(i[t],s[t]);if(null!==r)return o(c)+"/"+r}return null}(e,t);return null==r||"/"===r?r:l(r)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},8744:function(e,t){"use strict";function r(e,t){return void 0===t&&(t=!0),e.pathname+e.search+(t?e.hash:"")}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"createHrefFromUrl",{enumerable:!0,get:function(){return r}}),("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},89343:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"createInitialRouterState",{enumerable:!0,get:function(){return a}});let n=r(15456),u=r(8744),o=r(90743),l=r(41781);function a(e){var t;let{buildId:r,initialTree:a,children:i,initialCanonicalUrl:c,initialParallelRoutes:s,isServer:f,location:d,initialHead:p}=e,h={status:n.CacheStates.READY,data:null,subTreeData:i,parallelRoutes:f?new Map:s};return(null===s||0===s.size)&&(0,o.fillLazyItemsTillLeafWithHead)(h,void 0,a,p),{buildId:r,tree:a,cache:h,prefetchCache:new Map,pushRef:{pendingPush:!1,mpaNavigation:!1},focusAndScrollRef:{apply:!1,hashFragment:null,segmentPaths:[]},canonicalUrl:d?(0,u.createHrefFromUrl)(d):c,nextUrl:null!=(t=(0,l.extractPathFromFlightRouterState)(a)||(null==d?void 0:d.pathname))?t:null}}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},76486:function(e,t,r){"use 
strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"createOptimisticTree",{enumerable:!0,get:function(){return function e(t,r,u){let o;let[l,a,i,c,s]=r||[null,{}],f=t[0],d=1===t.length,p=null!==l&&(0,n.matchSegment)(l,f),h=Object.keys(a).length>1,_=!r||!p||h,y={};if(null!==l&&p&&(y=a),!d&&!h){let r=e(t.slice(1),y?y.children:null,u||_);o=r}let b=[f,{...y,...o?{children:o}:{}}];return i&&(b[2]=i),!u&&_?b[3]="refetch":p&&c&&(b[3]=c),p&&s&&(b[4]=s),b}}});let n=r(50655);("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},7718:function(e,t){"use strict";function r(e){return e.status="pending",e.then(t=>{"pending"===e.status&&(e.status="fulfilled",e.value=t)},t=>{"pending"===e.status&&(e.status="rejected",e.value=t)}),e}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"createRecordFromThenable",{enumerable:!0,get:function(){return r}}),("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},49101:function(e,t){"use strict";function r(e,t){return void 0===t&&(t=!1),Array.isArray(e)?e[0]+"|"+e[1]+"|"+e[2]:t&&e.startsWith("__PAGE__")?"__PAGE__":e}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"createRouterCacheKey",{enumerable:!0,get:function(){return r}}),("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},52368:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"fetchServerResponse",{enumerable:!0,get:function(){return s}});let n=r(35456),u=r(34852),o=r(2353),l=r(303),a=r(74741),i=r(77279);function c(e){return[(0,o.urlToUrlWithoutFlightMarker)(e).toString(),void 0]}async function s(e,t,r,s,f){let d={[u.RSC]:"1",[u.NEXT_ROUTER_STATE_TREE]:encodeURIComponent(JSON.stringify(t))};f===a.PrefetchKind.AUTO&&(d[u.NEXT_ROUTER_PREFETCH]="1"),r&&(d[u.NEXT_URL]=r);let p=(0,i.hexHash)([d[u.NEXT_ROUTER_PREFETCH]||"0",d[u.NEXT_ROUTER_STATE_TREE]].join(","));try{let t=new URL(e);t.pathname.endsWith("/")?t.pathname+="index.txt":t.pathname+=".txt",t.searchParams.set(u.NEXT_RSC_UNION_QUERY,p);let r=await fetch(t,{credentials:"same-origin",headers:d}),a=(0,o.urlToUrlWithoutFlightMarker)(r.url),i=r.redirected?a:void 0,f=r.headers.get("content-type")||"",h=f===u.RSC_CONTENT_TYPE_HEADER;if(h||(h=f.startsWith("text/plain")),!h||!r.ok)return c(a.toString());let[_,y]=await (0,n.createFromFetch)(Promise.resolve(r),{callServer:l.callServer});if(s!==_)return c(r.url);return[y,i]}catch(t){return console.error("Failed to fetch RSC payload. 
Falling back to browser navigation.",t),[e.toString(),void 0]}}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},70155:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"fillCacheWithDataProperty",{enumerable:!0,get:function(){return function e(t,r,o,l,a){void 0===a&&(a=!1);let i=o.length<=2,[c,s]=o,f=(0,u.createRouterCacheKey)(s),d=r.parallelRoutes.get(c);if(!d||a&&r.parallelRoutes.size>1)return{bailOptimistic:!0};let p=t.parallelRoutes.get(c);p&&p!==d||(p=new Map(d),t.parallelRoutes.set(c,p));let h=d.get(f),_=p.get(f);if(i){_&&_.data&&_!==h||p.set(f,{status:n.CacheStates.DATA_FETCH,data:l(),subTreeData:null,parallelRoutes:new Map});return}if(!_||!h){_||p.set(f,{status:n.CacheStates.DATA_FETCH,data:l(),subTreeData:null,parallelRoutes:new Map});return}return _===h&&(_={status:_.status,data:_.data,subTreeData:_.subTreeData,parallelRoutes:new Map(_.parallelRoutes)},p.set(f,_)),e(_,h,o.slice(2),l)}}});let n=r(15456),u=r(49101);("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},23033:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"fillCacheWithNewSubTreeData",{enumerable:!0,get:function(){return function e(t,r,a,i){let c=a.length<=5,[s,f]=a,d=(0,l.createRouterCacheKey)(f),p=r.parallelRoutes.get(s);if(!p)return;let h=t.parallelRoutes.get(s);h&&h!==p||(h=new Map(p),t.parallelRoutes.set(s,h));let _=p.get(d),y=h.get(d);if(c){y&&y.data&&y!==_||(y={status:n.CacheStates.READY,data:null,subTreeData:a[3],parallelRoutes:_?new Map(_.parallelRoutes):new Map},_&&(0,u.invalidateCacheByRouterState)(y,_,a[2]),(0,o.fillLazyItemsTillLeafWithHead)(y,_,a[2],a[4],i),h.set(d,y));return}y&&_&&(y===_&&(y={status:y.status,data:y.data,subTreeData:y.subTreeData,parallelRoutes:new Map(y.parallelRoutes)},h.set(d,y)),e(y,_,a.slice(2),i))}}});let n=r(15456),u=r(18179),o=r(90743),l=r(49101);("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},90743:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"fillLazyItemsTillLeafWithHead",{enumerable:!0,get:function(){return function e(t,r,o,l,a){let i=0===Object.keys(o[1]).length;if(i){t.head=l;return}for(let i in o[1]){let c=o[1][i],s=c[0],f=(0,u.createRouterCacheKey)(s);if(r){let u=r.parallelRoutes.get(i);if(u){let r=new Map(u),o=r.get(f),s=a&&o?{status:o.status,data:o.data,subTreeData:o.subTreeData,parallelRoutes:new Map(o.parallelRoutes)}:{status:n.CacheStates.LAZY_INITIALIZED,data:null,subTreeData:null,parallelRoutes:new Map(null==o?void 0:o.parallelRoutes)};r.set(f,s),e(s,o,c,l,a),t.parallelRoutes.set(i,r);continue}}let d={status:n.CacheStates.LAZY_INITIALIZED,data:null,subTreeData:null,parallelRoutes:new Map},p=t.parallelRoutes.get(i);p?p.set(f,d):t.parallelRoutes.set(i,new Map([[f,d]])),e(d,void 0,c,l,a)}}}});let n=r(15456),u=r(49101);("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 
0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},29231:function(e,t){"use strict";var r,n;function u(e){let{kind:t,prefetchTime:r,lastUsedTime:n}=e;return Date.now()<(null!=n?n:r)+3e4?n?"reusable":"fresh":"auto"===t&&Date.now()["children",e]).flat(),p=(0,c.fillCacheWithDataProperty)(f,e.cache,d,()=>(t||(t=(0,o.createRecordFromThenable)((0,u.fetchServerResponse)(r,i,e.nextUrl,e.buildId))),t),!0);if(!(null==p?void 0:p.bailOptimistic))return R.previousTree=e.tree,R.patchedTree=i,R.pendingPush=C,R.hashFragment=M,R.shouldScroll=S,R.scrollableSegments=[],R.cache=f,R.canonicalUrl=w,e.prefetchCache.set((0,a.createHrefFromUrl)(r,!1),{data:Promise.resolve(t),kind:h.PrefetchKind.TEMPORARY,prefetchTime:Date.now(),treeAtTimeOfPrefetch:e.tree,lastUsedTime:Date.now()}),(0,_.handleMutable)(e,R)}if(!A){let t=(0,o.createRecordFromThenable)((0,u.fetchServerResponse)(r,e.tree,e.nextUrl,e.buildId,void 0)),n={data:Promise.resolve(t),kind:h.PrefetchKind.TEMPORARY,prefetchTime:Date.now(),treeAtTimeOfPrefetch:e.tree,lastUsedTime:null};e.prefetchCache.set((0,a.createHrefFromUrl)(r,!1),n),A=n}let N=(0,b.getPrefetchEntryCacheStatus)(A),{treeAtTimeOfPrefetch:I,data:D}=A,[k,F]=(0,l.readRecordValue)(D);if(A.lastUsedTime=Date.now(),"string"==typeof k)return m(e,R,k,C);let U=e.tree,L=e.cache,H=[];for(let t of k){let o=t.slice(0,-4),l=t.slice(-3)[0],a=["",...o],s=(0,f.applyRouterStatePatchToTree)(a,U,l);if(null===s&&(s=(0,f.applyRouterStatePatchToTree)(a,I,l)),null!==s){if((0,p.isNavigatingToNewRootLayout)(U,s))return m(e,R,w,C);let f=(0,y.applyFlightData)(L,E,t,"auto"===A.kind&&N===b.PrefetchCacheEntryStatus.reusable);f||N!==b.PrefetchCacheEntryStatus.stale||(f=function(e,t,r,u,o){let l=!1;e.status=n.CacheStates.READY,e.subTreeData=t.subTreeData,e.parallelRoutes=new Map(t.parallelRoutes);let a=g(u).map(e=>[...r,...e]);for(let r of a){let n=(0,c.fillCacheWithDataProperty)(e,t,r,o);(null==n?void 0:n.bailOptimistic)||(l=!0)}return l}(E,L,o,l,()=>(0,u.fetchServerResponse)(r,U,e.nextUrl,e.buildId)));let h=(0,d.shouldHardNavigate)(a,U);for(let e of(h?(E.status=n.CacheStates.READY,E.subTreeData=L.subTreeData,(0,i.invalidateCacheBelowFlightSegmentPath)(E,L,o),R.cache=E):f&&(R.cache=E),L=E,U=s,g(l))){let t=[...o,...e];"__DEFAULT__"!==t[t.length-1]&&H.push(t)}}}return R.previousTree=e.tree,R.patchedTree=U,R.canonicalUrl=F?(0,a.createHrefFromUrl)(F):w,R.pendingPush=C,R.scrollableSegments=H,R.hashFragment=M,R.shouldScroll=S,(0,_.handleMutable)(e,R)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},72763:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"prefetchReducer",{enumerable:!0,get:function(){return c}});let n=r(8744),u=r(52368),o=r(74741),l=r(7718),a=r(62268),i=r(34852);function c(e,t){(0,a.prunePrefetchCache)(e.prefetchCache);let{url:r}=t;r.searchParams.delete(i.NEXT_RSC_UNION_QUERY);let c=(0,n.createHrefFromUrl)(r,!1),s=e.prefetchCache.get(c);if(s&&(s.kind===o.PrefetchKind.TEMPORARY&&e.prefetchCache.set(c,{...s,kind:t.kind}),!(s.kind===o.PrefetchKind.AUTO&&t.kind===o.PrefetchKind.FULL)))return e;let f=(0,l.createRecordFromThenable)((0,u.fetchServerResponse)(r,e.tree,e.nextUrl,e.buildId,t.kind));return 
e.prefetchCache.set(c,{treeAtTimeOfPrefetch:e.tree,data:f,kind:t.kind,prefetchTime:Date.now(),lastUsedTime:null}),e}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},62268:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"prunePrefetchCache",{enumerable:!0,get:function(){return u}});let n=r(29231);function u(e){for(let[t,r]of e)(0,n.getPrefetchEntryCacheStatus)(r)===n.PrefetchCacheEntryStatus.expired&&e.delete(t)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},49901:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"refreshReducer",{enumerable:!0,get:function(){return p}});let n=r(52368),u=r(7718),o=r(90168),l=r(8744),a=r(76031),i=r(58999),c=r(86664),s=r(14129),f=r(15456),d=r(90743);function p(e,t){let{cache:r,mutable:p,origin:h}=t,_=e.canonicalUrl,y=e.tree,b=JSON.stringify(p.previousTree)===JSON.stringify(y);if(b)return(0,s.handleMutable)(e,p);r.data||(r.data=(0,u.createRecordFromThenable)((0,n.fetchServerResponse)(new URL(_,h),[y[0],y[1],y[2],"refetch"],e.nextUrl,e.buildId)));let[v,m]=(0,o.readRecordValue)(r.data);if("string"==typeof v)return(0,c.handleExternalUrl)(e,p,v,e.pushRef.pendingPush);for(let t of(r.data=null,v)){if(3!==t.length)return console.log("REFRESH FAILED"),e;let[n]=t,u=(0,a.applyRouterStatePatchToTree)([""],y,n);if(null===u)throw Error("SEGMENT MISMATCH");if((0,i.isNavigatingToNewRootLayout)(y,u))return(0,c.handleExternalUrl)(e,p,_,e.pushRef.pendingPush);let o=m?(0,l.createHrefFromUrl)(m):void 0;m&&(p.canonicalUrl=o);let[s,h]=t.slice(-2);null!==s&&(r.status=f.CacheStates.READY,r.subTreeData=s,(0,d.fillLazyItemsTillLeafWithHead)(r,void 0,n,h),p.cache=r,p.prefetchCache=new Map),p.previousTree=y,p.patchedTree=u,p.canonicalUrl=_,y=u}return(0,s.handleMutable)(e,p)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},34520:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"restoreReducer",{enumerable:!0,get:function(){return u}});let n=r(8744);function u(e,t){let{url:r,tree:u}=t,o=(0,n.createHrefFromUrl)(r);return{buildId:e.buildId,canonicalUrl:o,pushRef:e.pushRef,focusAndScrollRef:e.focusAndScrollRef,cache:e.cache,prefetchCache:e.prefetchCache,tree:u,nextUrl:r.pathname}}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},87366:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"serverActionReducer",{enumerable:!0,get:function(){return p}});let n=r(303),u=r(34852),o=r(7718),l=r(90168),a=r(35456),i=r(74741),c=r(12409),s=r(8744),f=r(14024);async function d(e,t){let r,{actionId:o,actionArgs:l}=t,i=await (0,a.encodeReply)(l),s=await 
fetch("",{method:"POST",headers:{Accept:u.RSC_CONTENT_TYPE_HEADER,"Next-Action":o,[u.NEXT_ROUTER_STATE_TREE]:JSON.stringify(e.tree),...e.nextUrl?{[u.NEXT_URL]:e.nextUrl}:{}},body:i}),f=s.headers.get("x-action-redirect");try{let e=JSON.parse(s.headers.get("x-action-revalidated")||"[[],0,0]");r={paths:e[0]||[],tag:!!e[1],cookie:e[2]}}catch(e){r={paths:[],tag:!1,cookie:!1}}let d=f?new URL((0,c.addBasePath)(f),window.location.origin):void 0;if(s.headers.get("content-type")===u.RSC_CONTENT_TYPE_HEADER){let e=await (0,a.createFromFetch)(Promise.resolve(s),{callServer:n.callServer});if(f){let[,t]=e;return{actionFlightData:null==t?void 0:t[1],redirectLocation:d,revalidatedParts:r}}{let[t,[,n]]=null!=e?e:[];return{actionResult:t,actionFlightData:n,redirectLocation:d,revalidatedParts:r}}}return{redirectLocation:d,revalidatedParts:r}}function p(e,t){if(t.mutable.serverActionApplied)return e;t.mutable.inFlightServerAction||(t.mutable.previousTree=e.tree,t.mutable.previousUrl=e.canonicalUrl,t.mutable.inFlightServerAction=(0,o.createRecordFromThenable)(d(e,t)));try{var r,n;let{actionResult:u,actionFlightData:a,redirectLocation:c,revalidatedParts:d}=(0,l.readRecordValue)(t.mutable.inFlightServerAction);if(d.tag||d.cookie?e.prefetchCache.clear():d.paths.length>0&&e.prefetchCache.clear(),c){if(a){let n=(0,s.createHrefFromUrl)(c,!1),u=e.prefetchCache.get(n);e.prefetchCache.set(n,{data:(0,o.createRecordFromThenable)(Promise.resolve([a,void 0])),kind:null!=(r=null==u?void 0:u.kind)?r:i.PrefetchKind.TEMPORARY,prefetchTime:Date.now(),treeAtTimeOfPrefetch:t.mutable.previousTree,lastUsedTime:null})}t.reject((0,f.getRedirectError)(c.toString(),f.RedirectType.push))}else{if(a){let r=(0,s.createHrefFromUrl)(new URL(t.mutable.previousUrl,window.location.origin),!1),u=e.prefetchCache.get(r);e.prefetchCache.set((0,s.createHrefFromUrl)(new URL(t.mutable.previousUrl,window.location.origin),!1),{data:(0,o.createRecordFromThenable)(Promise.resolve([a,void 0])),kind:null!=(n=null==u?void 0:u.kind)?n:i.PrefetchKind.TEMPORARY,prefetchTime:Date.now(),treeAtTimeOfPrefetch:t.mutable.previousTree,lastUsedTime:null}),setTimeout(()=>{t.changeByServerResponse(t.mutable.previousTree,a,void 0)})}t.resolve(u)}}catch(e){if("rejected"===e.status)t.reject(e.value);else throw e}return t.mutable.serverActionApplied=!0,e}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},77519:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"serverPatchReducer",{enumerable:!0,get:function(){return c}});let n=r(8744),u=r(76031),o=r(58999),l=r(86664),a=r(68654),i=r(14129);function c(e,t){let{flightData:r,previousTree:c,overrideCanonicalUrl:s,cache:f,mutable:d}=t,p=JSON.stringify(c)===JSON.stringify(e.tree);if(!p)return console.log("TREE MISMATCH"),e;if(d.previousTree)return(0,i.handleMutable)(e,d);if("string"==typeof r)return(0,l.handleExternalUrl)(e,d,r,e.pushRef.pendingPush);let h=e.tree,_=e.cache;for(let t of r){let r=t.slice(0,-4),[i]=t.slice(-3,-2),c=(0,u.applyRouterStatePatchToTree)(["",...r],h,i);if(null===c)throw Error("SEGMENT MISMATCH");if((0,o.isNavigatingToNewRootLayout)(h,c))return(0,l.handleExternalUrl)(e,d,e.canonicalUrl,e.pushRef.pendingPush);let p=s?(0,n.createHrefFromUrl)(s):void 
0;p&&(d.canonicalUrl=p),(0,a.applyFlightData)(_,f,t),d.previousTree=h,d.patchedTree=c,d.cache=f,_=f,h=c}return(0,i.handleMutable)(e,d)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},74741:function(e,t){"use strict";var r,n;Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{PrefetchKind:function(){return r},ACTION_REFRESH:function(){return u},ACTION_NAVIGATE:function(){return o},ACTION_RESTORE:function(){return l},ACTION_SERVER_PATCH:function(){return a},ACTION_PREFETCH:function(){return i},ACTION_FAST_REFRESH:function(){return c},ACTION_SERVER_ACTION:function(){return s}});let u="refresh",o="navigate",l="restore",a="server-patch",i="prefetch",c="fast-refresh",s="server-action";(n=r||(r={})).AUTO="auto",n.FULL="full",n.TEMPORARY="temporary",("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},85426:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"reducer",{enumerable:!0,get:function(){return f}});let n=r(74741),u=r(86664),o=r(77519),l=r(34520),a=r(49901),i=r(72763),c=r(73800),s=r(87366),f=function(e,t){switch(t.type){case n.ACTION_NAVIGATE:return(0,u.navigateReducer)(e,t);case n.ACTION_SERVER_PATCH:return(0,o.serverPatchReducer)(e,t);case n.ACTION_RESTORE:return(0,l.restoreReducer)(e,t);case n.ACTION_REFRESH:return(0,a.refreshReducer)(e,t);case n.ACTION_FAST_REFRESH:return(0,c.fastRefreshReducer)(e,t);case n.ACTION_PREFETCH:return(0,i.prefetchReducer)(e,t);case n.ACTION_SERVER_ACTION:return(0,s.serverActionReducer)(e,t);default:throw Error("Unknown action")}};("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},34712:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"shouldHardNavigate",{enumerable:!0,get:function(){return function e(t,r){let[u,o]=r,[l,a]=t;if(!(0,n.matchSegment)(l,u))return!!Array.isArray(l);let i=t.length<=2;return!i&&e(t.slice(2),o[a])}}});let n=r(50655);("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},98323:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"createSearchParamsBailoutProxy",{enumerable:!0,get:function(){return u}});let n=r(62620);function u(){return new Proxy({},{get(e,t){"string"==typeof t&&(0,n.staticGenerationBailout)("searchParams."+t)}})}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},62620:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"staticGenerationBailout",{enumerable:!0,get:function(){return l}});let n=r(47308),u=r(30094);class o extends Error{constructor(...e){super(...e),this.code="NEXT_STATIC_GEN_BAILOUT"}}let l=(e,t)=>{let 
r=u.staticGenerationAsyncStorage.getStore();if(null==r?void 0:r.forceStatic)return!0;if(null==r?void 0:r.dynamicShouldError){let{dynamic:r="error",link:n}=t||{};throw new o('Page with `dynamic = "'+r+"\"` couldn't be rendered statically because it used `"+e+"`."+(n?" See more info here: "+n:""))}if(r&&(r.revalidate=0),null==r?void 0:r.isStaticGeneration){let t=new n.DynamicServerError(e);throw r.dynamicUsageDescription=e,r.dynamicUsageStack=t.stack,t}return!1};("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},58531:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"default",{enumerable:!0,get:function(){return l}});let n=r(26927),u=n._(r(86006)),o=r(98323);function l(e){let{Component:t,propsForComponent:r}=e,n=(0,o.createSearchParamsBailoutProxy)();return u.default.createElement(t,{searchParams:n,...r})}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},18688:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"useReducerWithReduxDevtools",{enumerable:!0,get:function(){return o}});let n=r(86006);function u(e){if(e instanceof Map){let t={};for(let[r,n]of e.entries()){if("function"==typeof n){t[r]="fn()";continue}if("object"==typeof n&&null!==n){if(n.$$typeof){t[r]=n.$$typeof.toString();continue}if(n._bundlerConfig){t[r]="FlightData";continue}}t[r]=u(n)}return t}if("object"==typeof e&&null!==e){let t={};for(let r in e){let n=e[r];if("function"==typeof n){t[r]="fn()";continue}if("object"==typeof n&&null!==n){if(n.$$typeof){t[r]=n.$$typeof.toString();continue}if(n.hasOwnProperty("_bundlerConfig")){t[r]="FlightData";continue}}t[r]=u(n)}return t}return Array.isArray(e)?e.map(u):e}let o=function(e,t){let r=(0,n.useRef)(),o=(0,n.useRef)();(0,n.useEffect)(()=>{if(!r.current&&!1!==o.current){if(void 0===o.current&&void 0===window.__REDUX_DEVTOOLS_EXTENSION__){o.current=!1;return}return r.current=window.__REDUX_DEVTOOLS_EXTENSION__.connect({instanceId:8e3,name:"next-router"}),r.current&&r.current.init(u(t)),()=>{r.current=void 0}}},[t]);let[l,a]=(0,n.useReducer)((t,n)=>{let o=e(t,n);return r.current&&r.current.send(n,u(o)),o},t),i=(0,n.useCallback)(()=>{r.current&&r.current.send({type:"RENDER_SYNC"},u(l))},[l]);return[l,a,i]};("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},75588:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"normalizePathTrailingSlash",{enumerable:!0,get:function(){return o}});let n=r(61402),u=r(74035),o=e=>{if(!e.startsWith("/"))return e;let{pathname:t,query:r,hash:o}=(0,u.parsePath)(e);return""+(0,n.removeTrailingSlash)(t)+r+o};("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},59214:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"default",{enumerable:!0,get:function(){return u}});let n=r(98687);function u(e){let 
t="function"==typeof reportError?reportError:e=>{window.console.error(e)};e.digest!==n.NEXT_DYNAMIC_NO_SSR_CODE&&t(e)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},15456:function(e,t,r){"use strict";var n,u;Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{CacheStates:function(){return n},AppRouterContext:function(){return a},LayoutRouterContext:function(){return i},GlobalLayoutRouterContext:function(){return c},TemplateContext:function(){return s}});let o=r(26927),l=o._(r(86006));(u=n||(n={})).LAZY_INITIALIZED="LAZYINITIALIZED",u.DATA_FETCH="DATAFETCH",u.READY="READY";let a=l.default.createContext(null),i=l.default.createContext(null),c=l.default.createContext(null),s=l.default.createContext(null)},77279:function(e,t){"use strict";function r(e){let t=5381;for(let r=0;r!t||"("===t[0]&&t.endsWith(")")||"@"===t[0]||("page"===t||"route"===t)&&r===n.length-1?e:e+"/"+t,""))}function o(e,t){return t?e.replace(/\.rsc($|\?)/,"$1"):e}},92998:function(e,t){"use strict";function r(e,t){void 0===t&&(t={});let r=document.documentElement,n=r.style.scrollBehavior;r.style.scrollBehavior="auto",t.dontForceLayout||r.getClientRects(),e(),r.style.scrollBehavior=n}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"handleSmoothScroll",{enumerable:!0,get:function(){return r}})},30753:function(e,t){"use strict";function r(e){return/Googlebot|Mediapartners-Google|AdsBot-Google|googleweblight|Storebot-Google|Google-PageRenderer|Bingbot|BingPreview|Slurp|DuckDuckBot|baiduspider|yandex|sogou|LinkedInBot|bitlybot|tumblr|vkShare|quora link preview|facebookexternalhit|facebookcatalog|Twitterbot|applebot|redditbot|Slackbot|Discordbot|WhatsApp|SkypeUriPreview|ia_archiver/i.test(e)}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"isBot",{enumerable:!0,get:function(){return r}})},74035:function(e,t){"use strict";function r(e){let t=e.indexOf("#"),r=e.indexOf("?"),n=r>-1&&(t<0||r-1?{pathname:e.substring(0,n?r:t),query:n?e.substring(r,t>-1?t:void 0):"",hash:t>-1?e.slice(t):""}:{pathname:e,query:"",hash:""}}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"parsePath",{enumerable:!0,get:function(){return r}})},61402:function(e,t){"use strict";function r(e){return e.replace(/\/$/,"")||"/"}Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"removeTrailingSlash",{enumerable:!0,get:function(){return r}})},73476:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{ServerInsertedHTMLContext:function(){return o},useServerInsertedHTML:function(){return l}});let n=r(25909),u=n._(r(86006)),o=u.default.createContext(null);function l(e){let t=(0,u.useContext)(o);t&&t(e)}},75862:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"createAsyncLocalStorage",{enumerable:!0,get:function(){return o}});let r=Error("Invariant: AsyncLocalStorage accessed in runtime where it is not available");class n{disable(){throw r}getStore(){}run(){throw r}exit(){throw r}enterWith(){throw r}}let u=globalThis.AsyncLocalStorage;function o(){return u?new u:new n}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 
0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},24437:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"requestAsyncStorage",{enumerable:!0,get:function(){return u}});let n=r(75862),u=(0,n.createAsyncLocalStorage)();("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},30094:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"staticGenerationAsyncStorage",{enumerable:!0,get:function(){return u}});let n=r(75862),u=(0,n.createAsyncLocalStorage)();("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},93194:function(e,t,r){"use strict";var n=r(8431);t.createRoot=n.createRoot,t.hydrateRoot=n.hydrateRoot},8431:function(e,t,r){"use strict";!function e(){if("undefined"!=typeof __REACT_DEVTOOLS_GLOBAL_HOOK__&&"function"==typeof __REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE)try{__REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE(e)}catch(e){console.error(e)}}(),e.exports=r(42614)},82672:function(e,t,r){"use strict";/** - * @license React - * react-server-dom-webpack-client.browser.production.min.js - * - * Copyright (c) Meta Platforms, Inc. and affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */var n=r(8431),u=r(86006),o={stream:!0},l=new Map;function a(e){var t=globalThis.__next_require__(e);return"function"!=typeof t.then||"fulfilled"===t.status?null:(t.then(function(e){t.status="fulfilled",t.value=e},function(e){t.status="rejected",t.reason=e}),t)}function i(){}var c=n.__SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED.Dispatcher,s=Symbol.for("react.element"),f=Symbol.for("react.lazy"),d=Symbol.for("react.default_value"),p=Symbol.iterator,h=Array.isArray,_=new WeakMap,y=u.__SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED.ContextRegistry;function b(e,t,r,n){this.status=e,this.value=t,this.reason=r,this._response=n}function v(e){switch(e.status){case"resolved_model":j(e);break;case"resolved_module":S(e)}switch(e.status){case"fulfilled":return e.value;case"pending":case"blocked":throw e;default:throw e.reason}}function m(e,t){for(var r=0;rd?(h=d,d=3,f++):(h=0,d=3);continue;case 2:44===(v=s[f++])?d=4:_=_<<4|(96s.length&&(v=-1)}var m=s.byteOffset+f;if(-1>>1,u=e[n];if(0>>1;no(i,r))co(s,i)?(e[n]=s,e[c]=r,n=c):(e[n]=i,e[a]=r,n=a);else if(co(s,r))e[n]=s,e[c]=r,n=c;else break}}return t}function o(e,t){var r=e.sortIndex-t.sortIndex;return 0!==r?r:e.id-t.id}if(t.unstable_now=void 0,"object"==typeof performance&&"function"==typeof performance.now){var l,a=performance;t.unstable_now=function(){return a.now()}}else{var i=Date,c=i.now();t.unstable_now=function(){return i.now()-c}}var s=[],f=[],d=1,p=null,h=3,_=!1,y=!1,b=!1,v="function"==typeof setTimeout?setTimeout:null,m="function"==typeof clearTimeout?clearTimeout:null,g="undefined"!=typeof setImmediate?setImmediate:null;function O(e){for(var t=n(f);null!==t;){if(null===t.callback)u(f);else if(t.startTime<=e)u(f),t.sortIndex=t.expirationTime,r(s,t);else break;t=n(f)}}function P(e){if(b=!1,O(e),!y){if(null!==n(s))y=!0,N(E);else{var t=n(f);null!==t&&I(P,t.startTime-e)}}}function 
E(e,r){y=!1,b&&(b=!1,m(S),S=-1),_=!0;var o=h;try{e:{for(O(r),p=n(s);null!==p&&(!(p.expirationTime>r)||e&&!w());){var l=p.callback;if("function"==typeof l){p.callback=null,h=p.priorityLevel;var a=l(p.expirationTime<=r);if(r=t.unstable_now(),"function"==typeof a){p.callback=a,O(r);var i=!0;break e}p===n(s)&&u(s),O(r)}else u(s);p=n(s)}if(null!==p)i=!0;else{var c=n(f);null!==c&&I(P,c.startTime-r),i=!1}}return i}finally{p=null,h=o,_=!1}}"undefined"!=typeof navigator&&void 0!==navigator.scheduling&&void 0!==navigator.scheduling.isInputPending&&navigator.scheduling.isInputPending.bind(navigator.scheduling);var R=!1,j=null,S=-1,T=5,M=-1;function w(){return!(t.unstable_now()-Me||125l?(e.sortIndex=o,r(f,e),null===n(s)&&e===n(f)&&(b?(m(S),S=-1):b=!0,I(P,o-l))):(e.sortIndex=a,r(s,e),y||_||(y=!0,N(E))),e},t.unstable_shouldYield=w,t.unstable_wrapCallback=function(e){var t=h;return function(){var r=h;h=t;try{return e.apply(this,arguments)}finally{h=r}}}},26183:function(e,t,r){"use strict";e.exports=r(24248)},24778:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"getSegmentParam",{enumerable:!0,get:function(){return u}});let n=r(47399);function u(e){let t=n.INTERCEPTION_ROUTE_MARKERS.find(t=>e.startsWith(t));return(t&&(e=e.slice(t.length)),e.startsWith("[[...")&&e.endsWith("]]"))?{type:"optional-catchall",param:e.slice(5,-2)}:e.startsWith("[...")&&e.endsWith("]")?{type:"catchall",param:e.slice(4,-1)}:e.startsWith("[")&&e.endsWith("]")?{type:"dynamic",param:e.slice(1,-1)}:null}},47399:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),function(e,t){for(var r in t)Object.defineProperty(e,r,{enumerable:!0,get:t[r]})}(t,{INTERCEPTION_ROUTE_MARKERS:function(){return u},isInterceptionRouteAppPath:function(){return o},extractInterceptionRouteInformation:function(){return l}});let n=r(24241),u=["(..)(..)","(.)","(..)","(...)"];function o(e){return void 0!==e.split("/").find(e=>u.find(t=>e.startsWith(t)))}function l(e){let t,r,o;for(let n of e.split("/"))if(r=u.find(e=>n.startsWith(e))){[t,o]=e.split(r,2);break}if(!t||!r||!o)throw Error(`Invalid interception route: ${e}. Must be in the format //(..|...|..)(..)/`);switch(t=(0,n.normalizeAppPath)(t),r){case"(.)":o="/"===t?`/${o}`:t+"/"+o;break;case"(..)":if("/"===t)throw Error(`Invalid interception route: ${e}. Cannot use (..) marker at the root level, use (.) instead.`);o=t.split("/").slice(0,-1).concat(o).join("/");break;case"(...)":o="/"+o;break;case"(..)(..)":let l=t.split("/");if(l.length<=2)throw Error(`Invalid interception route: ${e}. Cannot use (..)(..) 
marker at the root level or one level up.`);o=l.slice(0,-2).concat(o).join("/");break;default:throw Error("Invariant: unexpected marker")}return{interceptingRoute:t,interceptedRoute:o}}},26927:function(e,t,r){"use strict";function n(e){return e&&e.__esModule?e:{default:e}}r.r(t),r.d(t,{_:function(){return n},_interop_require_default:function(){return n}})},25909:function(e,t,r){"use strict";function n(e){if("function"!=typeof WeakMap)return null;var t=new WeakMap,r=new WeakMap;return(n=function(e){return e?r:t})(e)}function u(e,t){if(!t&&e&&e.__esModule)return e;if(null===e||"object"!=typeof e&&"function"!=typeof e)return{default:e};var r=n(t);if(r&&r.has(e))return r.get(e);var u={},o=Object.defineProperty&&Object.getOwnPropertyDescriptor;for(var l in e)if("default"!==l&&Object.prototype.hasOwnProperty.call(e,l)){var a=o?Object.getOwnPropertyDescriptor(e,l):null;a&&(a.get||a.set)?Object.defineProperty(u,l,a):u[l]=e[l]}return u.default=e,r&&r.set(e,u),u}r.r(t),r.d(t,{_:function(){return u},_interop_require_wildcard:function(){return u}})}}]); \ No newline at end of file diff --git a/spaces/ccolas/EmotionPlaylist/app_utils.py b/spaces/ccolas/EmotionPlaylist/app_utils.py deleted file mode 100644 index 625695f475098419a835c69660b09ab37ecba1a2..0000000000000000000000000000000000000000 --- a/spaces/ccolas/EmotionPlaylist/app_utils.py +++ /dev/null @@ -1,235 +0,0 @@ -import streamlit as st -import numpy as np -import os -import pickle -import spotipy -import spotipy.util as sp_util - -dir_path = os.path.dirname(os.path.realpath(__file__)) - -# current mess: https://github.com/plamere/spotipy/issues/632 -def centered_button(func, text, n_columns=7, disabled=False, args=None): - columns = st.columns(np.ones(n_columns)) - with columns[n_columns//2]: - if 'button' in str(func): - return func(text, disabled=disabled) - else: - return func(text) - -# get credentials -def setup_credentials(): - if 'client_id' in os.environ.keys() and 'client_secret' in os.environ.keys(): - client_info = dict(client_id=os.environ['client_id'], - client_secret=os.environ['client_secret']) - else: - with open(dir_path + "/ids.pk", 'rb') as f: - client_info = pickle.load(f) - - os.environ['SPOTIPY_CLIENT_ID'] = client_info['client_id'] - os.environ['SPOTIPY_CLIENT_SECRET'] = client_info['client_secret'] - os.environ['SPOTIPY_REDIRECT_URI'] = 'https://huggingface.co/spaces/ccolas/EmotionPlaylist/' - return client_info - -relevant_audio_features = ["danceability", "energy", "loudness", "mode", "valence", "tempo"] - - -def get_client(): - scope = "playlist-modify-public" - token = sp_util.prompt_for_user_token(scope=scope) - sp = spotipy.Spotify(auth=token) - user_id = sp.me()['id'] - return sp, user_id - -def add_button(url, text): - st.write(f''' -
- - -
- ''', - unsafe_allow_html=True - ) - -def new_get_client(session): - scope = "playlist-modify-public" - - cache_handler = StreamlitCacheHandler(session) - auth_manager = spotipy.oauth2.SpotifyOAuth(scope=scope, - cache_handler=cache_handler, - show_dialog=True) - sp, user_id = None, None - - if not auth_manager.validate_token(cache_handler.get_cached_token()): - # Step 1. Display sign in link when no token - auth_url = auth_manager.get_authorize_url() - if 'code' not in st.experimental_get_query_params(): - add_button(auth_url, 'Log in') - - # st.markdown(f'Click here to log in', unsafe_allow_html=True) - # Step 2. Being redirected from Spotify auth page - if 'code' in st.experimental_get_query_params(): - auth_manager.get_access_token(st.experimental_get_query_params()['code']) - sp = spotipy.Spotify(auth_manager=auth_manager) - user_id = sp.me()['id'] - - return sp, user_id, auth_manager - - -def extract_uris_from_links(links, url_type): - assert url_type in ['playlist', 'artist', 'user'] - urls = links.split('\n') - uris = [] - for url in urls: - if 'playlist' in url: - uri = url.split(f'{url_type}/')[-1].split('?')[0] - elif 'user' in url: - uri = url.split(f'{url_type}/')[-1].split('?')[0] - else: - uri = url.split('?')[0] - uris.append(uri) - return uris - -def wall_of_checkboxes(labels, max_width=10): - n_labels = len(labels) - n_rows = int(np.ceil(n_labels/max_width)) - checkboxes = [] - for i in range(n_rows): - columns = st.columns(np.ones(max_width)) - row_length = n_labels % max_width if i == n_rows - 1 else max_width - for j in range(row_length): - with columns[j]: - checkboxes.append(st.empty()) - return checkboxes - -def find_legit_genre(glabel, legit_genres, verbose=False): - legit_genres_formatted = [lg.replace('-', '').replace(' ', '') for lg in legit_genres] - glabel_formatted = glabel.replace(' ', '').replace('-', '') - if verbose: print('\n', glabel) - best_match = None - best_match_score = 0 - for legit_glabel, legit_glabel_formatted in zip(legit_genres, legit_genres_formatted): - if 'jazz' in glabel_formatted: - best_match = 'jazz' - if verbose: print('\t', 'pop') - break - if 'ukpop' in glabel_formatted: - best_match = 'pop' - if verbose: print('\t', 'pop') - break - if legit_glabel_formatted == glabel_formatted: - if verbose: print('\t', legit_glabel_formatted) - best_match = legit_glabel - break - elif glabel_formatted in legit_glabel_formatted: - if verbose: print('\t', legit_glabel_formatted) - if len(glabel_formatted) > best_match_score: - best_match = legit_glabel - best_match_score = len(glabel_formatted) - elif legit_glabel_formatted in glabel_formatted: - if verbose: print('\t', legit_glabel_formatted) - if len(legit_glabel_formatted) > best_match_score: - best_match = legit_glabel - best_match_score = len(legit_glabel_formatted) - - if best_match is None: - return "unknown" - else: - return best_match - - -# def aggregate_genres(genres, legit_genres, verbose=False): -# genres_output = dict() -# legit_genres_formatted = [lg.replace('-', '').replace(' ', '') for lg in legit_genres] -# for glabel in genres.keys(): -# if verbose: print('\n', glabel) -# glabel_formatted = glabel.replace(' ', '').replace('-', '') -# best_match = None -# best_match_score = 0 -# for legit_glabel, legit_glabel_formatted in zip(legit_genres, legit_genres_formatted): -# if 'jazz' in glabel_formatted: -# best_match = 'jazz' -# if verbose: print('\t', 'pop') -# break -# if 'ukpop' in glabel_formatted: -# best_match = 'pop' -# if verbose: print('\t', 'pop') -# break -# if 
legit_glabel_formatted == glabel_formatted: -# if verbose: print('\t', legit_glabel_formatted) -# best_match = legit_glabel -# break -# elif glabel_formatted in legit_glabel_formatted: -# if verbose: print('\t', legit_glabel_formatted) -# if len(glabel_formatted) > best_match_score: -# best_match = legit_glabel -# best_match_score = len(glabel_formatted) -# elif legit_glabel_formatted in glabel_formatted: -# if verbose: print('\t', legit_glabel_formatted) -# if len(legit_glabel_formatted) > best_match_score: -# best_match = legit_glabel -# best_match_score = len(legit_glabel_formatted) -# -# if best_match is not None: -# if verbose: print('\t', '-->', best_match) -# if best_match in genres_output.keys(): -# genres_output[best_match] += genres[glabel] -# else: -# genres_output[best_match] = genres[glabel] -# else: -# if "unknown" in genres_output.keys(): -# genres_output["unknown"] += genres[glabel] -# else: -# genres_output["unknown"] = genres[glabel] -# for k in genres_output.keys(): -# genres_output[k] = sorted(set(genres_output[k])) -# return genres_output - -def get_all_playlists_uris_from_users(sp, user_ids): - all_uris = [] - all_names = [] - for user_id in user_ids: - print(user_id) - offset = 0 - done = False - while not done: - playlist_list = sp.user_playlists(user_id, offset=offset, limit=50) - these_names = [p['name'] for p in playlist_list['items']] - these_uris = [p['uri'] for p in playlist_list['items']] - for name, uri in zip(these_names, these_uris): - if uri not in all_uris: - all_uris.append(uri) - all_names.append(user_id + '/' + name) - if len(playlist_list['items']) < offset: - done = True - else: - offset += 50 - return all_uris, all_names - - - - -class StreamlitCacheHandler(spotipy.cache_handler.CacheHandler): - """ - A cache handler that stores the token info in the session framework - provided by streamlit. - """ - - def __init__(self, session): - self.session = session - - def get_cached_token(self): - token_info = None - try: - token_info = self.session["token_info"] - except KeyError: - print("Token not found in the session") - - return token_info - - def save_token_to_cache(self, token_info): - try: - self.session["token_info"] = token_info - except Exception as e: - print("Error saving token to cache: " + str(e)) diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/vqgan-clip/utils.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/vqgan-clip/utils.py deleted file mode 100644 index 7db45fcbb52b0fa3f82226194ff7c824fd873184..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/vqgan-clip/utils.py +++ /dev/null @@ -1,35 +0,0 @@ -from datetime import datetime - -import matplotlib.pyplot as plt -import torch - - -def freeze_module(module): - for param in module.parameters(): - param.requires_grad = False - - -def get_device(): - device = "cuda" if torch.cuda.is_available() else "cpu" - if torch.backends.mps.is_available() and torch.backends.mps.is_built(): - device = "mps" - if device == "mps": - print( - "WARNING: MPS currently doesn't seem to work, and messes up backpropagation without any visible torch" - " errors. I recommend using CUDA on a colab notebook or CPU instead if you're facing inexplicable issues" - " with generations." 
- ) - return device - - -def show_pil(img): - fig = plt.imshow(img) - fig.axes.get_xaxis().set_visible(False) - fig.axes.get_yaxis().set_visible(False) - plt.show() - - -def get_timestamp(): - current_time = datetime.now() - timestamp = current_time.strftime("%H:%M:%S") - return timestamp diff --git a/spaces/chendl/compositional_test/transformers/src/transformers/models/altclip/configuration_altclip.py b/spaces/chendl/compositional_test/transformers/src/transformers/models/altclip/configuration_altclip.py deleted file mode 100644 index 4ddbb5ec81606ac23b1851aa3d8a0984139ff65c..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/src/transformers/models/altclip/configuration_altclip.py +++ /dev/null @@ -1,405 +0,0 @@ -# coding=utf-8 -# Copyright 2022 WenXiang ZhongzhiCheng LedellWu LiuGuang BoWenZhang and The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" AltCLIP model configuration""" -import copy -import os -from typing import Union - -from ...configuration_utils import PretrainedConfig -from ...utils import logging - - -logger = logging.get_logger(__name__) - -ALTCLIP_PRETRAINED_CONFIG_ARCHIVE_MAP = { - "BAAI/AltCLIP": "https://huggingface.co/BAAI/AltCLIP/resolve/main/config.json", - # See all AltCLIP models at https://huggingface.co/models?filter=altclip -} - - -class AltCLIPTextConfig(PretrainedConfig): - r""" - This is the configuration class to store the configuration of a [`AltCLIPTextModel`]. It is used to instantiate a - AltCLIP text model according to the specified arguments, defining the model architecture. Instantiating a - configuration with the defaults will yield a similar configuration to that of the AltCLIP - [BAAI/AltCLIP](https://huggingface.co/BAAI/AltCLIP) architecture. - - Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the - documentation from [`PretrainedConfig`] for more information. - - - Args: - vocab_size (`int`, *optional*, defaults to 250002): - Vocabulary size of the AltCLIP model. Defines the number of different tokens that can be represented by the - `inputs_ids` passed when calling [`AltCLIPTextModel`]. - hidden_size (`int`, *optional*, defaults to 1024): - Dimensionality of the encoder layers and the pooler layer. - num_hidden_layers (`int`, *optional*, defaults to 24): - Number of hidden layers in the Transformer encoder. - num_attention_heads (`int`, *optional*, defaults to 16): - Number of attention heads for each attention layer in the Transformer encoder. - intermediate_size (`int`, *optional*, defaults to 4096): - Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder. - hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`): - The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, - `"relu"`, `"silu"` and `"gelu_new"` are supported. 
- hidden_dropout_prob (`float`, *optional*, defaults to 0.1): - The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. - attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): - The dropout ratio for the attention probabilities. - max_position_embeddings (`int`, *optional*, defaults to 514): - The maximum sequence length that this model might ever be used with. Typically set this to something large - just in case (e.g., 512 or 1024 or 2048). - type_vocab_size (`int`, *optional*, defaults to 2): - The vocabulary size of the `token_type_ids` passed when calling [`AltCLIPTextModel`] - initializer_range (`float`, *optional*, defaults to 0.02): - The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - layer_norm_eps (`float`, *optional*, defaults to 1e-5): - The epsilon used by the layer normalization layers. - position_embedding_type (`str`, *optional*, defaults to `"absolute"`): - Type of position embedding. Choose one of `"absolute"`, `"relative_key"`, `"relative_key_query"`. For - positional embeddings use `"absolute"`. For more information on `"relative_key"`, please refer to - [Self-Attention with Relative Position Representations (Shaw et al.)](https://arxiv.org/abs/1803.02155). - For more information on `"relative_key_query"`, please refer to *Method 4* in [Improve Transformer Models - with Better Relative Position Embeddings (Huang et al.)](https://arxiv.org/abs/2009.13658). - use_cache (`bool`, *optional*, defaults to `True`): - Whether or not the model should return the last key/values attentions (not used by all models). Only - relevant if `config.is_decoder=True`. - project_dim (`int`, *optional*, defaults to 768): - The dimensions of the teacher model before the mapping layer. 
- - Examples: - - ```python - >>> from transformers import AltCLIPTextModel, AltCLIPTextConfig - - >>> # Initializing a AltCLIPTextConfig with BAAI/AltCLIP style configuration - >>> configuration = AltCLIPTextConfig() - - >>> # Initializing a AltCLIPTextModel (with random weights) from the BAAI/AltCLIP style configuration - >>> model = AltCLIPTextModel(configuration) - - >>> # Accessing the model configuration - >>> configuration = model.config - ```""" - model_type = "altclip_text_model" - - def __init__( - self, - vocab_size=250002, - hidden_size=1024, - num_hidden_layers=24, - num_attention_heads=16, - intermediate_size=4096, - hidden_act="gelu", - hidden_dropout_prob=0.1, - attention_probs_dropout_prob=0.1, - max_position_embeddings=514, - type_vocab_size=1, - initializer_range=0.02, - initializer_factor=0.02, - layer_norm_eps=1e-05, - pad_token_id=1, - bos_token_id=0, - eos_token_id=2, - position_embedding_type="absolute", - use_cache=True, - project_dim=768, - **kwargs, - ): - super().__init__(pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs) - - self.vocab_size = vocab_size - self.hidden_size = hidden_size - self.num_hidden_layers = num_hidden_layers - self.num_attention_heads = num_attention_heads - self.hidden_act = hidden_act - self.intermediate_size = intermediate_size - self.hidden_dropout_prob = hidden_dropout_prob - self.attention_probs_dropout_prob = attention_probs_dropout_prob - self.max_position_embeddings = max_position_embeddings - self.type_vocab_size = type_vocab_size - self.initializer_range = initializer_range - self.initializer_factor = initializer_factor - self.layer_norm_eps = layer_norm_eps - self.position_embedding_type = position_embedding_type - self.use_cache = use_cache - self.project_dim = project_dim - - -class AltCLIPVisionConfig(PretrainedConfig): - r""" - This is the configuration class to store the configuration of a [`AltCLIPModel`]. It is used to instantiate an - AltCLIP model according to the specified arguments, defining the model architecture. Instantiating a configuration - with the defaults will yield a similar configuration to that of the AltCLIP - [BAAI/AltCLIP](https://huggingface.co/BAAI/AltCLIP) architecture. - - Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the - documentation from [`PretrainedConfig`] for more information. - - - Args: - hidden_size (`int`, *optional*, defaults to 768): - Dimensionality of the encoder layers and the pooler layer. - intermediate_size (`int`, *optional*, defaults to 3072): - Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. - num_hidden_layers (`int`, *optional*, defaults to 12): - Number of hidden layers in the Transformer encoder. - num_attention_heads (`int`, *optional*, defaults to 12): - Number of attention heads for each attention layer in the Transformer encoder. - image_size (`int`, *optional*, defaults to 224): - The size (resolution) of each image. - patch_size (`int`, *optional*, defaults to 32): - The size (resolution) of each patch. - hidden_act (`str` or `function`, *optional*, defaults to `"quick_gelu"`): - The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, - `"relu"`, `"selu"` and `"gelu_new"` ``"quick_gelu"` are supported. - layer_norm_eps (`float`, *optional*, defaults to 1e-5): - The epsilon used by the layer normalization layers. 
- attention_dropout (`float`, *optional*, defaults to 0.0): - The dropout ratio for the attention probabilities. - initializer_range (`float`, *optional*, defaults to 0.02): - The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - initializer_factor (`float``, *optional*, defaults to 1): - A factor for initializing all weight matrices (should be kept to 1, used internally for initialization - testing). - - Example: - - ```python - >>> from transformers import AltCLIPVisionConfig, AltCLIPVisionModel - - >>> # Initializing a AltCLIPVisionConfig with BAAI/AltCLIP style configuration - >>> configuration = AltCLIPVisionConfig() - - >>> # Initializing a AltCLIPVisionModel (with random weights) from the BAAI/AltCLIP style configuration - >>> model = AltCLIPVisionModel(configuration) - - >>> # Accessing the model configuration - >>> configuration = model.config - ```""" - - model_type = "altclip_vision_model" - - def __init__( - self, - hidden_size=768, - intermediate_size=3072, - projection_dim=512, - num_hidden_layers=12, - num_attention_heads=12, - num_channels=3, - image_size=224, - patch_size=32, - hidden_act="quick_gelu", - layer_norm_eps=1e-5, - attention_dropout=0.0, - initializer_range=0.02, - initializer_factor=1.0, - **kwargs, - ): - super().__init__(**kwargs) - - self.hidden_size = hidden_size - self.intermediate_size = intermediate_size - self.projection_dim = projection_dim - self.num_hidden_layers = num_hidden_layers - self.num_attention_heads = num_attention_heads - self.num_channels = num_channels - self.patch_size = patch_size - self.image_size = image_size - self.initializer_range = initializer_range - self.initializer_factor = initializer_factor - self.attention_dropout = attention_dropout - self.layer_norm_eps = layer_norm_eps - self.hidden_act = hidden_act - - @classmethod - def from_pretrained(cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs) -> "PretrainedConfig": - config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs) - - # get the vision config dict if we are loading from AltCLIPConfig - if config_dict.get("model_type") == "altclip": - config_dict = config_dict["vision_config"] - - if "model_type" in config_dict and hasattr(cls, "model_type") and config_dict["model_type"] != cls.model_type: - logger.warning( - f"You are using a model of type {config_dict['model_type']} to instantiate a model of type " - f"{cls.model_type}. This is not supported for all configurations of models and can yield errors." - ) - - return cls.from_dict(config_dict, **kwargs) - - -class AltCLIPConfig(PretrainedConfig): - r""" - This is the configuration class to store the configuration of a [`AltCLIPModel`]. It is used to instantiate an - AltCLIP model according to the specified arguments, defining the model architecture. Instantiating a configuration - with the defaults will yield a similar configuration to that of the AltCLIP - [BAAI/AltCLIP](https://huggingface.co/BAAI/AltCLIP) architecture. - - Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the - documentation from [`PretrainedConfig`] for more information. - - Args: - text_config (`dict`, *optional*): - Dictionary of configuration options used to initialize [`AltCLIPTextConfig`]. - vision_config (`dict`, *optional*): - Dictionary of configuration options used to initialize [`AltCLIPVisionConfig`]. 
- projection_dim (`int`, *optional*, defaults to 512): - Dimentionality of text and vision projection layers. - logit_scale_init_value (`float`, *optional*, defaults to 2.6592): - The inital value of the *logit_scale* paramter. Default is used as per the original CLIP implementation. - kwargs (*optional*): - Dictionary of keyword arguments. - - Example: - - ```python - >>> from transformers import AltCLIPConfig, AltCLIPModel - - >>> # Initializing a AltCLIPConfig with BAAI/AltCLIP style configuration - >>> configuration = AltCLIPConfig() - - >>> # Initializing a AltCLIPModel (with random weights) from the BAAI/AltCLIP style configuration - >>> model = AltCLIPModel(configuration) - - >>> # Accessing the model configuration - >>> configuration = model.config - - >>> # We can also initialize a AltCLIPConfig from a AltCLIPTextConfig and a AltCLIPVisionConfig - - >>> # Initializing a AltCLIPText and AltCLIPVision configuration - >>> config_text = AltCLIPTextConfig() - >>> config_vision = AltCLIPVisionConfig() - - >>> config = AltCLIPConfig.from_text_vision_configs(config_text, config_vision) - ```""" - - model_type = "altclip" - is_composition = True - - def __init__( - self, text_config=None, vision_config=None, projection_dim=768, logit_scale_init_value=2.6592, **kwargs - ): - # If `_config_dict` exist, we use them for the backward compatibility. - # We pop out these 2 attributes before calling `super().__init__` to avoid them being saved (which causes a lot - # of confusion!). - text_config_dict = kwargs.pop("text_config_dict", None) - vision_config_dict = kwargs.pop("vision_config_dict", None) - - super().__init__(**kwargs) - - # Instead of simply assigning `[text|vision]_config_dict` to `[text|vision]_config`, we use the values in - # `[text|vision]_config_dict` to update the values in `[text|vision]_config`. The values should be same in most - # cases, but we don't want to break anything regarding `_config_dict` that existed before commit `8827e1b2`. - if text_config_dict is not None: - if text_config is None: - text_config = {} - - # This is the complete result when using `text_config_dict`. - _text_config_dict = AltCLIPTextConfig(**text_config_dict).to_dict() - - # Give a warning if the values exist in both `_text_config_dict` and `text_config` but being different. - for key, value in _text_config_dict.items(): - if key in text_config and value != text_config[key] and key not in ["transformers_version"]: - # If specified in `text_config_dict` - if key in text_config_dict: - message = ( - f"`{key}` is found in both `text_config_dict` and `text_config` but with different values. " - f'The value `text_config_dict["{key}"]` will be used instead.' - ) - # If inferred from default argument values (just to be super careful) - else: - message = ( - f"`text_config_dict` is provided which will be used to initialize `AltCLIPTextConfig`. The " - f'value `text_config["{key}"]` will be overriden.' - ) - logger.warning(message) - - # Update all values in `text_config` with the ones in `_text_config_dict`. - text_config.update(_text_config_dict) - - if vision_config_dict is not None: - if vision_config is None: - vision_config = {} - - # This is the complete result when using `vision_config_dict`. 
- _vision_config_dict = AltCLIPVisionConfig(**vision_config_dict).to_dict() - # convert keys to string instead of integer - if "id2label" in _vision_config_dict: - _vision_config_dict["id2label"] = { - str(key): value for key, value in _vision_config_dict["id2label"].items() - } - - # Give a warning if the values exist in both `_vision_config_dict` and `vision_config` but being different. - for key, value in _vision_config_dict.items(): - if key in vision_config and value != vision_config[key] and key not in ["transformers_version"]: - # If specified in `vision_config_dict` - if key in vision_config_dict: - message = ( - f"`{key}` is found in both `vision_config_dict` and `vision_config` but with different " - f'values. The value `vision_config_dict["{key}"]` will be used instead.' - ) - # If inferred from default argument values (just to be super careful) - else: - message = ( - f"`vision_config_dict` is provided which will be used to initialize `AltCLIPVisionConfig`. " - f'The value `vision_config["{key}"]` will be overriden.' - ) - logger.warning(message) - - # Update all values in `vision_config` with the ones in `_vision_config_dict`. - vision_config.update(_vision_config_dict) - - if text_config is None: - text_config = {} - logger.info("`text_config` is `None`. Initializing the `AltCLIPTextConfig` with default values.") - - if vision_config is None: - vision_config = {} - logger.info("`vision_config` is `None`. initializing the `AltCLIPVisionConfig` with default values.") - - self.text_config = AltCLIPTextConfig(**text_config) - self.vision_config = AltCLIPVisionConfig(**vision_config) - - self.projection_dim = projection_dim - self.logit_scale_init_value = logit_scale_init_value - self.initializer_factor = 1.0 - - @classmethod - def from_text_vision_configs(cls, text_config: AltCLIPTextConfig, vision_config: AltCLIPVisionConfig, **kwargs): - r""" - Instantiate a [`AltCLIPConfig`] (or a derived class) from altclip text model configuration and altclip vision - model configuration. - - Returns: - [`AltCLIPConfig`]: An instance of a configuration object - """ - - return cls(text_config=text_config.to_dict(), vision_config=vision_config.to_dict(), **kwargs) - - def to_dict(self): - """ - Serializes this instance to a Python dictionary. Override the default [`~PretrainedConfig.to_dict`]. 
- - Returns: - `Dict[str, any]`: Dictionary of all the attributes that make up this configuration instance, - """ - output = copy.deepcopy(self.__dict__) - output["text_config"] = self.text_config.to_dict() - output["vision_config"] = self.vision_config.to_dict() - output["model_type"] = self.__class__.model_type - return output diff --git a/spaces/chenxx/ChuanhuChatGPT/chat_func.py b/spaces/chenxx/ChuanhuChatGPT/chat_func.py deleted file mode 100644 index 676259bd4d394240cf0f41f0bcdcb480121c9c98..0000000000000000000000000000000000000000 --- a/spaces/chenxx/ChuanhuChatGPT/chat_func.py +++ /dev/null @@ -1,456 +0,0 @@ -# -*- coding:utf-8 -*- -from __future__ import annotations -from typing import TYPE_CHECKING, List - -import logging -import json -import os -import requests -import urllib3 - -from tqdm import tqdm -import colorama -from duckduckgo_search import ddg -import asyncio -import aiohttp - -from presets import * -from llama_func import * -from utils import * - -# logging.basicConfig(level=logging.INFO, format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s") - -if TYPE_CHECKING: - from typing import TypedDict - - class DataframeData(TypedDict): - headers: List[str] - data: List[List[str | int | bool]] - - -initial_prompt = "You are a helpful assistant." -API_URL = "https://api.openai.com/v1/chat/completions" -HISTORY_DIR = "history" -TEMPLATES_DIR = "templates" - -def get_response( - openai_api_key, system_prompt, history, temperature, top_p, stream, selected_model -): - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {openai_api_key}", - } - - history = [construct_system(system_prompt), *history] - - payload = { - "model": selected_model, - "messages": history, # [{"role": "user", "content": f"{inputs}"}], - "temperature": temperature, # 1.0, - "top_p": top_p, # 1.0, - "n": 1, - "stream": stream, - "presence_penalty": 0, - "frequency_penalty": 0, - } - if stream: - timeout = timeout_streaming - else: - timeout = timeout_all - - # 获取环境变量中的代理设置 - http_proxy = os.environ.get("HTTP_PROXY") or os.environ.get("http_proxy") - https_proxy = os.environ.get("HTTPS_PROXY") or os.environ.get("https_proxy") - - # 如果存在代理设置,使用它们 - proxies = {} - if http_proxy: - logging.info(f"Using HTTP proxy: {http_proxy}") - proxies["http"] = http_proxy - if https_proxy: - logging.info(f"Using HTTPS proxy: {https_proxy}") - proxies["https"] = https_proxy - - # 如果有代理,使用代理发送请求,否则使用默认设置发送请求 - if proxies: - response = requests.post( - API_URL, - headers=headers, - json=payload, - stream=True, - timeout=timeout, - proxies=proxies, - ) - else: - response = requests.post( - API_URL, - headers=headers, - json=payload, - stream=True, - timeout=timeout, - ) - return response - - -def stream_predict( - openai_api_key, - system_prompt, - history, - inputs, - chatbot, - all_token_counts, - top_p, - temperature, - selected_model, - fake_input=None, - display_append="" -): - def get_return_value(): - return chatbot, history, status_text, all_token_counts - - logging.info("实时回答模式") - partial_words = "" - counter = 0 - status_text = "开始实时传输回答……" - history.append(construct_user(inputs)) - history.append(construct_assistant("")) - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - user_token_count = 0 - if len(all_token_counts) == 0: - system_prompt_token_count = count_token(construct_system(system_prompt)) - user_token_count = ( - count_token(construct_user(inputs)) + system_prompt_token_count - ) - else: - user_token_count = 
count_token(construct_user(inputs)) - all_token_counts.append(user_token_count) - logging.info(f"输入token计数: {user_token_count}") - yield get_return_value() - try: - response = get_response( - openai_api_key, - system_prompt, - history, - temperature, - top_p, - True, - selected_model, - ) - except requests.exceptions.ConnectTimeout: - status_text = ( - standard_error_msg + connection_timeout_prompt + error_retrieve_prompt - ) - yield get_return_value() - return - except requests.exceptions.ReadTimeout: - status_text = standard_error_msg + read_timeout_prompt + error_retrieve_prompt - yield get_return_value() - return - - yield get_return_value() - error_json_str = "" - - for chunk in response.iter_lines(): - if counter == 0: - counter += 1 - continue - counter += 1 - # check whether each line is non-empty - if chunk: - chunk = chunk.decode() - chunklength = len(chunk) - try: - chunk = json.loads(chunk[6:]) - except json.JSONDecodeError: - logging.info(chunk) - error_json_str += chunk - status_text = f"JSON解析错误。请重置对话。收到的内容: {error_json_str}" - yield get_return_value() - continue - # decode each line as response data is in bytes - if chunklength > 6 and "delta" in chunk["choices"][0]: - finish_reason = chunk["choices"][0]["finish_reason"] - status_text = construct_token_message( - sum(all_token_counts), stream=True - ) - if finish_reason == "stop": - yield get_return_value() - break - try: - partial_words = ( - partial_words + chunk["choices"][0]["delta"]["content"] - ) - except KeyError: - status_text = ( - standard_error_msg - + "API回复中找不到内容。很可能是Token计数达到上限了。请重置对话。当前Token计数: " - + str(sum(all_token_counts)) - ) - yield get_return_value() - break - history[-1] = construct_assistant(partial_words) - chatbot[-1] = (chatbot[-1][0], partial_words+display_append) - all_token_counts[-1] += 1 - yield get_return_value() - - -def predict_all( - openai_api_key, - system_prompt, - history, - inputs, - chatbot, - all_token_counts, - top_p, - temperature, - selected_model, - fake_input=None, - display_append="" -): - logging.info("一次性回答模式") - history.append(construct_user(inputs)) - history.append(construct_assistant("")) - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - all_token_counts.append(count_token(construct_user(inputs))) - try: - response = get_response( - openai_api_key, - system_prompt, - history, - temperature, - top_p, - False, - selected_model, - ) - except requests.exceptions.ConnectTimeout: - status_text = ( - standard_error_msg + connection_timeout_prompt + error_retrieve_prompt - ) - return chatbot, history, status_text, all_token_counts - except requests.exceptions.ProxyError: - status_text = standard_error_msg + proxy_error_prompt + error_retrieve_prompt - return chatbot, history, status_text, all_token_counts - except requests.exceptions.SSLError: - status_text = standard_error_msg + ssl_error_prompt + error_retrieve_prompt - return chatbot, history, status_text, all_token_counts - response = json.loads(response.text) - content = response["choices"][0]["message"]["content"] - history[-1] = construct_assistant(content) - chatbot[-1] = (chatbot[-1][0], content+display_append) - total_token_count = response["usage"]["total_tokens"] - all_token_counts[-1] = total_token_count - sum(all_token_counts) - status_text = construct_token_message(total_token_count) - return chatbot, history, status_text, all_token_counts - - -def predict( - openai_api_key, - system_prompt, - history, - inputs, - chatbot, - all_token_counts, - top_p, - temperature, - 
stream=False, - selected_model=MODELS[0], - use_websearch=False, - files = None, - should_check_token_count=True, -): # repetition_penalty, top_k - logging.info("输入为:" + colorama.Fore.BLUE + f"{inputs}" + colorama.Style.RESET_ALL) - if files: - msg = "构建索引中……(这可能需要比较久的时间)" - logging.info(msg) - yield chatbot, history, msg, all_token_counts - index = construct_index(openai_api_key, file_src=files) - msg = "索引构建完成,获取回答中……" - yield chatbot, history, msg, all_token_counts - history, chatbot, status_text = chat_ai(openai_api_key, index, inputs, history, chatbot) - yield chatbot, history, status_text, all_token_counts - return - - old_inputs = "" - link_references = [] - if use_websearch: - search_results = ddg(inputs, max_results=5) - old_inputs = inputs - web_results = [] - for idx, result in enumerate(search_results): - logging.info(f"搜索结果{idx + 1}:{result}") - domain_name = urllib3.util.parse_url(result["href"]).host - web_results.append(f'[{idx+1}]"{result["body"]}"\nURL: {result["href"]}') - link_references.append(f"{idx+1}. [{domain_name}]({result['href']})\n") - link_references = "\n\n" + "".join(link_references) - inputs = ( - replace_today(WEBSEARCH_PTOMPT_TEMPLATE) - .replace("{query}", inputs) - .replace("{web_results}", "\n\n".join(web_results)) - ) - else: - link_references = "" - - if len(openai_api_key) != 51: - status_text = standard_error_msg + no_apikey_msg - logging.info(status_text) - chatbot.append((inputs, "")) - if len(history) == 0: - history.append(construct_user(inputs)) - history.append("") - all_token_counts.append(0) - else: - history[-2] = construct_user(inputs) - yield chatbot, history, status_text, all_token_counts - return - - yield chatbot, history, "开始生成回答……", all_token_counts - - if stream: - logging.info("使用流式传输") - iter = stream_predict( - openai_api_key, - system_prompt, - history, - inputs, - chatbot, - all_token_counts, - top_p, - temperature, - selected_model, - fake_input=old_inputs, - display_append=link_references - ) - for chatbot, history, status_text, all_token_counts in iter: - yield chatbot, history, status_text, all_token_counts - else: - logging.info("不使用流式传输") - chatbot, history, status_text, all_token_counts = predict_all( - openai_api_key, - system_prompt, - history, - inputs, - chatbot, - all_token_counts, - top_p, - temperature, - selected_model, - fake_input=old_inputs, - display_append=link_references - ) - yield chatbot, history, status_text, all_token_counts - - logging.info(f"传输完毕。当前token计数为{all_token_counts}") - if len(history) > 1 and history[-1]["content"] != inputs: - logging.info( - "回答为:" - + colorama.Fore.BLUE - + f"{history[-1]['content']}" - + colorama.Style.RESET_ALL - ) - - if stream: - max_token = max_token_streaming - else: - max_token = max_token_all - - if sum(all_token_counts) > max_token and should_check_token_count: - status_text = f"精简token中{all_token_counts}/{max_token}" - logging.info(status_text) - yield chatbot, history, status_text, all_token_counts - iter = reduce_token_size( - openai_api_key, - system_prompt, - history, - chatbot, - all_token_counts, - top_p, - temperature, - max_token//2, - selected_model=selected_model, - ) - for chatbot, history, status_text, all_token_counts in iter: - status_text = f"Token 达到上限,已自动降低Token计数至 {status_text}" - yield chatbot, history, status_text, all_token_counts - - -def retry( - openai_api_key, - system_prompt, - history, - chatbot, - token_count, - top_p, - temperature, - stream=False, - selected_model=MODELS[0], -): - logging.info("重试中……") - if len(history) == 0: - 
yield chatbot, history, f"{standard_error_msg}上下文是空的", token_count - return - history.pop() - inputs = history.pop()["content"] - token_count.pop() - iter = predict( - openai_api_key, - system_prompt, - history, - inputs, - chatbot, - token_count, - top_p, - temperature, - stream=stream, - selected_model=selected_model, - ) - logging.info("重试中……") - for x in iter: - yield x - logging.info("重试完毕") - - -def reduce_token_size( - openai_api_key, - system_prompt, - history, - chatbot, - token_count, - top_p, - temperature, - max_token_count, - selected_model=MODELS[0], -): - logging.info("开始减少token数量……") - iter = predict( - openai_api_key, - system_prompt, - history, - summarize_prompt, - chatbot, - token_count, - top_p, - temperature, - selected_model=selected_model, - should_check_token_count=False, - ) - logging.info(f"chatbot: {chatbot}") - flag = False - for chatbot, history, status_text, previous_token_count in iter: - num_chat = find_n(previous_token_count, max_token_count) - if flag: - chatbot = chatbot[:-1] - flag = True - history = history[-2*num_chat:] if num_chat > 0 else [] - token_count = previous_token_count[-num_chat:] if num_chat > 0 else [] - msg = f"保留了最近{num_chat}轮对话" - yield chatbot, history, msg + "," + construct_token_message( - sum(token_count) if len(token_count) > 0 else 0, - ), token_count - logging.info(msg) - logging.info("减少token数量完毕") \ No newline at end of file diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/clear_button.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/clear_button.py deleted file mode 100644 index 56652e731ae430e16ea3e7da432d06d6bd5e2a91..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/clear_button.py +++ /dev/null @@ -1,70 +0,0 @@ -""" Predefined buttons with bound events that can be included in a gr.Blocks for convenience. """ - -from __future__ import annotations - -import json -from typing import Literal - -from gradio_client.documentation import document, set_documentation_group - -from gradio.components import Button, Component - -set_documentation_group("component") - - -@document("add") -class ClearButton(Button): - """ - Button that clears the value of a component or a list of components when clicked. It is instantiated with the list of components to clear. - Preprocessing: passes the button value as a {str} into the function - Postprocessing: expects a {str} to be returned from a function, which is set as the label of the button - """ - - is_template = True - - def __init__( - self, - components: None | list[Component] | Component = None, - *, - value: str = "Clear", - variant: Literal["primary", "secondary", "stop"] = "secondary", - size: Literal["sm", "lg"] | None = None, - visible: bool = True, - interactive: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - scale: int | None = None, - min_width: int | None = None, - **kwargs, - ): - super().__init__( - value, - variant=variant, - size=size, - visible=visible, - interactive=interactive, - elem_id=elem_id, - elem_classes=elem_classes, - scale=scale, - min_width=min_width, - **kwargs, - ) - self.add(components) - - def add(self, components: None | Component | list[Component]) -> ClearButton: - """ - Adds a component or list of components to the list of components that will be cleared when the button is clicked. 
- """ - if not components: - # This needs to be here because when the ClearButton is created in an gr.Interface, we don't - # want to create dependencies for it before we have created the dependencies for the submit function. - # We generally assume that the submit function dependency is the first thing created in an gr.Interface. - return self - - if isinstance(components, Component): - components = [components] - clear_values = json.dumps( - [component.postprocess(None) for component in components] - ) - self.click(None, [], components, _js=f"() => {clear_values}") - return self diff --git a/spaces/cihyFjudo/fairness-paper-search/Cold War Kids Discography (albums EPs bootlegs and b-sides) MP The Ultimate Collection of the Cold War Kids Songs.md b/spaces/cihyFjudo/fairness-paper-search/Cold War Kids Discography (albums EPs bootlegs and b-sides) MP The Ultimate Collection of the Cold War Kids Songs.md deleted file mode 100644 index dcdc93734336d868be89f6ef9962c308f094fa43..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Cold War Kids Discography (albums EPs bootlegs and b-sides) MP The Ultimate Collection of the Cold War Kids Songs.md +++ /dev/null @@ -1,6 +0,0 @@ -

Cold War Kids Discography (albums, EPs, bootlegs and b-sides) MP


Download 🔗 https://tinurli.com/2uwjYh



- - aaccfb2cb3
-
-
-

diff --git a/spaces/cihyFjudo/fairness-paper-search/How to Get Going Medieval Crack Only for Free and Enjoy the Medieval Simulation.md b/spaces/cihyFjudo/fairness-paper-search/How to Get Going Medieval Crack Only for Free and Enjoy the Medieval Simulation.md deleted file mode 100644 index b4bd23b5788d17d005f9ad8021dff1dfd464d6f8..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/How to Get Going Medieval Crack Only for Free and Enjoy the Medieval Simulation.md +++ /dev/null @@ -1,12 +0,0 @@ - -

The latest Going Medieval update, Terraforming & Cats, is now available on Steam, Epic Games Store, and GOG. Releasing only a month after the previous update, Terraforming & Cats adds terraforming, custom difficulty, new animals and behavior, and other tweaks and improvements. However, before you play the patch, you should disable any mods you might have installed. If you don't, your game might crash or refuse to start altogether, which would definitely prevent you from going medieval.

-

Going Medieval Crack Only


DOWNLOAD ——— https://tinurli.com/2uwhzR



-

Over the years, rates of crack use among blacks have only been slightly higher than among whites, but since whites are the majority of the population, most crack users are white. For example, in 2017, 4.5% of blacks and 3.9% of whites reported ever using crack in their lives, according to the federal drug use survey.

-

Sometimes, it can feel like the list of home improvements, DIY jobs and general sprucing up tasks that need to be completed in your home is only getting longer. Things break over time or from overuse and certain objects or appliances may need upgrading or replacing. But when you spy a crack in a wall or ceiling, you may instantly panic. Luckily, most cracks are completely normal in all sorts of houses, even new builds, and are simply a sign that the house is settling. Other causes of cracks include change in temperature or humidity levels and vibrations from traffic if you live near a busy or fast road.

-

Most of the time you will need to apply more than one dip coat to fully cover the nail (around 2-3 dips). However, if you apply your dip colors too quickly, the color will not dry or set properly and this will cause the powder to crack. Some dip powders are quick dry, like Fairy Glamor, but some are not. If you're using a quick-dry dip powder, you should only need to wait around 5 seconds before dipping again. But if you are not using a quick-dry brand, you'll need to wait a minute or two before applying another coat.

-

-

This is the most common fix for cracked dip nails. If the crack happened beneath your top coat, you're going to need to buff the surface away so that you can reach the crack. You can either use a nail file or a drill for this. Once you've removed the top layer you can apply your base coat over the crack and dip your finger in the same color again. The layer will become uneven--don't worry about this. Apply activator and let the layer dry before buffing it smooth. Then apply a thin layer top coat over the entire nail. Tada! Good as new.

-

It was the ancient Romans, however, who contributed the notion that a broken mirror would bring seven years of bad luck, since it was believed that only poor health would cause a mirror to crack, and the number seven was seen by the Romans as the number of years required to complete a full life-cycle of sickness and renewal. As a result, a broken mirror meant you were headed toward a death-spiral that might take seven years to pull yourself out of! But, then, those same Romans felt you could prevent that horrible outcome by gathering the broken pieces of the mirror and burying them by moonlight, so should we really trust them about all the bad luck stuff?

-

The most commonly shouted phrase after the crack is "Oh, my back!" but the trope itself isn't confined solely to spinal-lumbar complaints and can happen with any body part, bone or joint. Occasionally, this will even happen with younger characters if they move in very awkward positions.

aaccfb2cb3
-
-
\ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Udemy Ultimate Low Poly Game Assets In Blender 2.8 And Unity How To Make Amazing Low Poly Games In No Time.md b/spaces/cihyFjudo/fairness-paper-search/Udemy Ultimate Low Poly Game Assets In Blender 2.8 And Unity How To Make Amazing Low Poly Games In No Time.md deleted file mode 100644 index 590886adf09fec9cc3deda10dc3a8d95a8098a7a..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Udemy Ultimate Low Poly Game Assets In Blender 2.8 And Unity How To Make Amazing Low Poly Games In No Time.md +++ /dev/null @@ -1,5 +0,0 @@ - -

In this course I will teach you how to model low poly bucket game assets inside Blender 3.1, which is the newly released version. This tutorial is around 2 hours long, and if you like it, be sure to check out our other courses.

-

Udemy Ultimate Low Poly Game Assets In Blender 2.8 And Unity


Download Filehttps://tinurli.com/2uwioW



aaccfb2cb3
-
-
\ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Xxx Kakek Vs Cucu Ngesex.3gp.md b/spaces/cihyFjudo/fairness-paper-search/Xxx Kakek Vs Cucu Ngesex.3gp.md deleted file mode 100644 index 1d7659ace0a8edb458a34cff5a3ef02f0730d61d..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Xxx Kakek Vs Cucu Ngesex.3gp.md +++ /dev/null @@ -1,6 +0,0 @@ -

xxx kakek vs cucu ngesex.3gp


Download > https://tinurli.com/2uwjLP



-
- aaccfb2cb3
-
-
-

diff --git a/spaces/cllatMTK/TransformerAnalyzer/render_util.py b/spaces/cllatMTK/TransformerAnalyzer/render_util.py deleted file mode 100644 index b21f08b259ca0fa554c4952f3044a90728d41e40..0000000000000000000000000000000000000000 --- a/spaces/cllatMTK/TransformerAnalyzer/render_util.py +++ /dev/null @@ -1,22 +0,0 @@ -import streamlit as st - -def create_table(df): - # Table header based on df columns - header = "| " + " | ".join(df.columns) + " |" - # Number of columns in df to set table divider accordingly - divider = "|:---|" + "|-----:|" * len(df.columns[:-1]) - rows = [header, divider] - - for _, row in df.iterrows(): - rows.append("| " + " | ".join(row.astype(str)) + " |") - - return "\n".join(rows) - -def header3(text): - st.markdown(f"### {text}") - -def header4(text): - st.markdown(f"#### {text}") - -def header5(text): - st.markdown(f"##### {text}") \ No newline at end of file diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/_s_b_i_x.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/_s_b_i_x.py deleted file mode 100644 index 29b82c3e43e8bd199a841c577774885d92499aba..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/_s_b_i_x.py +++ /dev/null @@ -1,119 +0,0 @@ -from fontTools.misc import sstruct -from fontTools.misc.textTools import safeEval, num2binary, binary2num -from . import DefaultTable -from .sbixStrike import Strike - - -sbixHeaderFormat = """ - > - version: H # Version number (set to 1) - flags: H # The only two bits used in the flags field are bits 0 - # and 1. For historical reasons, bit 0 must always be 1. - # Bit 1 is a sbixDrawOutlines flag and is interpreted as - # follows: - # 0: Draw only 'sbix' bitmaps - # 1: Draw both 'sbix' bitmaps and outlines, in that - # order - numStrikes: L # Number of bitmap strikes to follow -""" -sbixHeaderFormatSize = sstruct.calcsize(sbixHeaderFormat) - - -sbixStrikeOffsetFormat = """ - > - strikeOffset: L # Offset from begining of table to data for the - # individual strike -""" -sbixStrikeOffsetFormatSize = sstruct.calcsize(sbixStrikeOffsetFormat) - - -class table__s_b_i_x(DefaultTable.DefaultTable): - def __init__(self, tag=None): - DefaultTable.DefaultTable.__init__(self, tag) - self.version = 1 - self.flags = 1 - self.numStrikes = 0 - self.strikes = {} - self.strikeOffsets = [] - - def decompile(self, data, ttFont): - # read table header - sstruct.unpack(sbixHeaderFormat, data[:sbixHeaderFormatSize], self) - # collect offsets to individual strikes in self.strikeOffsets - for i in range(self.numStrikes): - current_offset = sbixHeaderFormatSize + i * sbixStrikeOffsetFormatSize - offset_entry = sbixStrikeOffset() - sstruct.unpack( - sbixStrikeOffsetFormat, - data[current_offset : current_offset + sbixStrikeOffsetFormatSize], - offset_entry, - ) - self.strikeOffsets.append(offset_entry.strikeOffset) - - # decompile Strikes - for i in range(self.numStrikes - 1, -1, -1): - current_strike = Strike(rawdata=data[self.strikeOffsets[i] :]) - data = data[: self.strikeOffsets[i]] - current_strike.decompile(ttFont) - # print " Strike length: %xh" % len(bitmapSetData) - # print "Number of Glyph entries:", len(current_strike.glyphs) - if current_strike.ppem in self.strikes: - from fontTools import ttLib - - raise ttLib.TTLibError("Pixel 'ppem' must be unique for each Strike") - self.strikes[current_strike.ppem] = current_strike - - # after the glyph data 
records have been extracted, we don't need the offsets anymore - del self.strikeOffsets - del self.numStrikes - - def compile(self, ttFont): - sbixData = b"" - self.numStrikes = len(self.strikes) - sbixHeader = sstruct.pack(sbixHeaderFormat, self) - - # calculate offset to start of first strike - setOffset = sbixHeaderFormatSize + sbixStrikeOffsetFormatSize * self.numStrikes - - for si in sorted(self.strikes.keys()): - current_strike = self.strikes[si] - current_strike.compile(ttFont) - # append offset to this strike to table header - current_strike.strikeOffset = setOffset - sbixHeader += sstruct.pack(sbixStrikeOffsetFormat, current_strike) - setOffset += len(current_strike.data) - sbixData += current_strike.data - - return sbixHeader + sbixData - - def toXML(self, xmlWriter, ttFont): - xmlWriter.simpletag("version", value=self.version) - xmlWriter.newline() - xmlWriter.simpletag("flags", value=num2binary(self.flags, 16)) - xmlWriter.newline() - for i in sorted(self.strikes.keys()): - self.strikes[i].toXML(xmlWriter, ttFont) - - def fromXML(self, name, attrs, content, ttFont): - if name == "version": - setattr(self, name, safeEval(attrs["value"])) - elif name == "flags": - setattr(self, name, binary2num(attrs["value"])) - elif name == "strike": - current_strike = Strike() - for element in content: - if isinstance(element, tuple): - name, attrs, content = element - current_strike.fromXML(name, attrs, content, ttFont) - self.strikes[current_strike.ppem] = current_strike - else: - from fontTools import ttLib - - raise ttLib.TTLibError("can't handle '%s' element" % name) - - -# Helper classes - - -class sbixStrikeOffset(object): - pass diff --git a/spaces/cmudrc/wecnet/app.py b/spaces/cmudrc/wecnet/app.py deleted file mode 100644 index bd11ff26c462389c872b14b705c79eade9c2c700..0000000000000000000000000000000000000000 --- a/spaces/cmudrc/wecnet/app.py +++ /dev/null @@ -1,1414 +0,0 @@ -import keras -import numpy -import gradio -import pandas -import glob -import os -import shutil -import math -import platform -import scipy.spatial -import plotly.graph_objects as go -import random -from huggingface_hub import from_pretrained_keras - -def load_data(): - - from datasets import load_dataset - - S = 5 - N = 1000 - D = 3 - F = 64 - G = 32 - - data = load_dataset("cmudrc/wave-energy", data_files=["data.zip"], split='train') - geometry = numpy.reshape(data['geometry'], (S*N, G*G*G)) - curves = numpy.reshape(data['curves'], (S*N, D*F)) - return None, None, S, N, D, F, G, curves, geometry - -# Disable eager execution because its bad -from tensorflow.python.framework.ops import disable_eager_execution -disable_eager_execution() - -class Mesh: - def __init__(self): - # Define blank values - self.np = 0 - self.nf = 0 - self.X = [] - self.Y = [] - self.Z = [] - self.P = [] - - def combine_meshes(self, ob1, ob2): - # Check for largest mesh - if ob1.nf < ob2.nf: - coin_test = ob1.make_coin() - coin_target = ob2.make_coin() - else: - coin_test = ob2.make_coin() - coin_target = ob1.make_coin() - # Check for duplicate panels - deletion_list = [] - for iF in range(numpy.size(coin_test[1, 1, :])): - panel_test = coin_test[:, :, iF] - for iFF in range(numpy.size(coin_target[1, 1, :])): - panel_target = coin_target[:, :, iFF] - if numpy.sum(panel_test == panel_target) == 12: - coin_target = numpy.delete(coin_target, iFF, 2) - deletion_list.append(iF) - coin_test = numpy.delete(coin_test, deletion_list, 2) - - # Concatenate unique meshes - coin = numpy.concatenate((coin_test, coin_target), axis=2) - self.np = 
numpy.size(coin[1, 1, :]) * 4 - self.nf = numpy.size(coin[1, 1, :]) - self.X = numpy.zeros(numpy.size(coin[1, 1, :]) * 4) - self.Y = numpy.zeros(numpy.size(coin[1, 1, :]) * 4) - self.Z = numpy.zeros(numpy.size(coin[1, 1, :]) * 4) - self.P = numpy.zeros((numpy.size(coin[1, 1, :]), 4), dtype=int) - - iP = 0 - for iF in range(numpy.size(coin[1, 1, :])): - for iC in range(4): - self.X[iP] = coin[0, iC, iF] - self.Y[iP] = coin[1, iC, iF] - self.Z[iP] = coin[2, iC, iF] - iP += 1 - self.P[iF, 0] = 1 + iF * 4 - self.P[iF, 1] = 2 + iF * 4 - self.P[iF, 2] = 3 + iF * 4 - self.P[iF, 3] = 4 + iF * 4 - - def make_coin(self): - coin = numpy.zeros((3, 4, self.nf)) - for iF in range(self.nf): - for iC in range(4): - coin[0, iC, iF] = self.X[self.P[iF, iC] - 1] - coin[1, iC, iF] = self.Y[self.P[iF, iC] - 1] - coin[2, iC, iF] = self.Z[self.P[iF, iC] - 1] - return coin - - def delete_horizontal_panels(self): - coin = self.make_coin() - apex = numpy.min(self.Z) - zLoc = numpy.zeros(4) - deletion_list = [] - - # Check every panel for horizontality and higher position than lowest point - for iP in range(self.nf): - for iC in range(4): - zLoc[iC] = coin[2, iC, iP] - if numpy.abs(numpy.mean(zLoc) - zLoc[0]) < 0.001 and numpy.mean(zLoc) > apex: - deletion_list.append(iP) - - # Delete selected panels - coin = numpy.delete(coin, deletion_list, 2) - - # Remake mesh - self.np = numpy.size(coin[1, 1, :]) * 4 - self.nf = numpy.size(coin[1, 1, :]) - self.X = numpy.zeros(numpy.size(coin[1, 1, :]) * 4) - self.Y = numpy.zeros(numpy.size(coin[1, 1, :]) * 4) - self.Z = numpy.zeros(numpy.size(coin[1, 1, :]) * 4) - self.P = numpy.zeros((numpy.size(coin[1, 1, :]), 4), dtype=int) - - iP = 0 - for iF in range(numpy.size(coin[1, 1, :])): - for iC in range(4): - self.X[iP] = coin[0, iC, iF] - self.Y[iP] = coin[1, iC, iF] - self.Z[iP] = coin[2, iC, iF] - iP += 1 - self.P[iF, 0] = 1 + (iF) * 4 - self.P[iF, 1] = 2 + (iF) * 4 - self.P[iF, 2] = 3 + (iF) * 4 - self.P[iF, 3] = 4 + (iF) * 4 - - - - -def writeMesh(msh, filename): - with open(filename, 'w') as f: - f.write('{:d}\n'.format(msh.np)) - f.write('{:d}\n'.format(msh.nf)) - for iP in range(msh.np): - f.write(' {:.7f} {:.7f} {:.7f}\n'.format(msh.X[iP], msh.Y[iP], msh.Z[iP])) - for iF in range(msh.nf): - f.write(' {:d} {:d} {:d} {:d}\n'.format(msh.P[iF, 0], msh.P[iF, 1], msh.P[iF, 2], msh.P[iF, 3])) - return None - - - -class box: - def __init__(self, length, width, height, cCor): - self.length = length - self.width = width - self.height = height - self.xC = cCor[0] - self.yC = cCor[1] - self.zC = cCor[2] - self.name = 'box' - self.panelize() - self.translate(self.xC, self.yC, self.zC) - - def panelize(self): - self.nf = 6 - self.np = 8 - self.X = numpy.array( - [-self.length / 2.0, self.length / 2.0, -self.length / 2.0, self.length / 2.0, -self.length / 2.0, - self.length / 2.0, -self.length / 2.0, self.length / 2.0]) - self.Y = numpy.array([self.width / 2.0, self.width / 2.0, self.width / 2.0, self.width / 2.0, -self.width / 2.0, - -self.width / 2.0, -self.width / 2.0, -self.width / 2.0]) - self.Z = numpy.array( - [-self.height / 2.0, -self.height / 2.0, self.height / 2.0, self.height / 2.0, -self.height / 2.0, - -self.height / 2.0, self.height / 2.0, self.height / 2.0]) - self.P = numpy.zeros([6, 4], dtype=int) - self.P[0, :] = numpy.array([3, 4, 2, 1]) - self.P[1, :] = numpy.array([4, 8, 6, 2]) - self.P[2, :] = numpy.array([8, 7, 5, 6]) - self.P[3, :] = numpy.array([7, 3, 1, 5]) - self.P[4, :] = numpy.array([2, 6, 5, 1]) - self.P[5, :] = numpy.array([8, 4, 3, 7]) - # Define 
triangles for plotting - self.trii = numpy.zeros([2 * self.nf, 3], dtype=int) - iT = 0 - for iTr in range(self.nf): - self.trii[iT, :] = [self.P[iTr, 0] - 1, self.P[iTr, 1] - 1, self.P[iTr, 2] - 1] - self.trii[iT + 1, :] = [self.P[iTr, 0] - 1, self.P[iTr, 2] - 1, self.P[iTr, 3] - 1] - iT += 2 - - def translate(self, xT, yT, zT): - self.X += xT - self.Y += yT - self.Z += zT - - def rotate(self, a1, a2, theta): - R = numpy.zeros([3, 3]) - # Normal vector through origin - u = a2[0] - a1[0] - v = a2[1] - a1[1] - w = a2[2] - a1[2] - u = u / numpy.sqrt(u ** 2 + v ** 2 + w ** 2) - v = v / numpy.sqrt(u ** 2 + v ** 2 + w ** 2) - w = w / numpy.sqrt(u ** 2 + v ** 2 + w ** 2) - # Translate mesh so that rotation axis starts from the origin - self.X -= a1[0] - self.Y -= a1[1] - self.Z -= a1[2] - - # Rotation matrix - R[0, 0] = u ** 2 + numpy.cos(theta) * (1 - u ** 2) - R[0, 1] = u * v * (1 - numpy.cos(theta)) - w * numpy.sin(theta) - R[0, 2] = u * w * (1 - numpy.cos(theta)) + v * numpy.sin(theta) - R[1, 0] = u * v * (1 - numpy.cos(theta)) + w * numpy.sin(theta) - R[1, 1] = v ** 2 + numpy.cos(theta) * (1 - v ** 2) - R[1, 2] = v * w * (1 - numpy.cos(theta)) - u * numpy.sin(theta) - R[2, 0] = w * u * (1 - numpy.cos(theta)) - v * numpy.sin(theta) - R[2, 1] = w * v * (1 - numpy.cos(theta)) + u * numpy.sin(theta) - R[2, 2] = w ** 2 + numpy.cos(theta) * (1 - w ** 2) - - for iP in range(self.np): - p1 = numpy.array([self.X[iP], self.Y[iP], self.Z[iP]]) - p2 = numpy.dot(R, p1) - self.X[iP] = p2[0] - self.Y[iP] = p2[1] - self.Z[iP] = p2[2] - - # Translate back to original position - - self.X += a1[0] - self.Y += a1[1] - self.Z += a1[2] - - def makeCoin(self): - coin = numpy.zeros((3, 4, self.nf)) - for iF in range(self.nf): - for iC in range(4): - coin[0, iC, iF] = self.X[self.P[iF, iC] - 1] - coin[1, iC, iF] = self.Y[self.P[iF, iC] - 1] - coin[2, iC, iF] = self.Z[self.P[iF, iC] - 1] - return coin - - - - -class cone: - def __init__(self, diameter, height, cCor): - self.diameter = diameter - self.height = height - self.xC = cCor[0] - self.yC = cCor[1] - self.zC = cCor[2] - self.name = 'cone' - self.panelize() - self.translate(self.xC, self.yC, self.zC) - - def panelize(self): - Ntheta = 18 - Nz = 3 - theta = [xx * 2 * numpy.pi / (Ntheta - 1) for xx in range(Ntheta)] - self.nf = 0 - self.np = 0 - r = [0, self.diameter / 2.0, 0] - z = [0, 0, -self.height] - self.X = [] - self.Y = [] - self.Z = [] - self.P = numpy.zeros([(len(r) - 1) * (Ntheta - 1), 4], dtype=int) - n = len(r) - - for iT in range(Ntheta): - for iN in range(n): - self.X.append(r[iN] * numpy.cos(theta[iT])) - self.Y.append(r[iN] * numpy.sin(theta[iT])) - self.Z.append(z[iN]) - self.np += 1 - - iP = 0 - for iN in range(1, n): - for iT in range(1, Ntheta): - self.P[iP, 0] = iN + n * (iT - 1) - self.P[iP, 1] = iN + 1 + n * (iT - 1) - self.P[iP, 2] = iN + 1 + n * iT - self.P[iP, 3] = iN + n * iT - self.nf += 1 - iP += 1 - - self.X = numpy.array(self.X) - self.Y = numpy.array(self.Y) - self.Z = numpy.array(self.Z) - # Define triangles for plotting - self.trii = numpy.zeros([2 * self.nf, 3], dtype=int) - iT = 0 - for iTr in range(self.nf): - self.trii[iT, :] = [self.P[iTr, 0] - 1, self.P[iTr, 1] - 1, self.P[iTr, 2] - 1] - self.trii[iT + 1, :] = [self.P[iTr, 0] - 1, self.P[iTr, 2] - 1, self.P[iTr, 3] - 1] - iT += 2 - - def translate(self, xT, yT, zT): - self.X += xT - self.Y += yT - self.Z += zT - - def rotate(self, a1, a2, theta): - R = numpy.zeros([3, 3]) - # Normal vector through origin - u = a2[0] - a1[0] - v = a2[1] - a1[1] - w = a2[2] - a1[2] - u = u / 
numpy.sqrt(u ** 2 + v ** 2 + w ** 2) - v = v / numpy.sqrt(u ** 2 + v ** 2 + w ** 2) - w = w / numpy.sqrt(u ** 2 + v ** 2 + w ** 2) - # Translate mesh so that rotation axis starts from the origin - self.X -= a1[0] - self.Y -= a1[1] - self.Z -= a1[2] - - # Rotation matrix - R[0, 0] = u ** 2 + numpy.cos(theta) * (1 - u ** 2) - R[0, 1] = u * v * (1 - numpy.cos(theta)) - w * numpy.sin(theta) - R[0, 2] = u * w * (1 - numpy.cos(theta)) + v * numpy.sin(theta) - R[1, 0] = u * v * (1 - numpy.cos(theta)) + w * numpy.sin(theta) - R[1, 1] = v ** 2 + numpy.cos(theta) * (1 - v ** 2) - R[1, 2] = v * w * (1 - numpy.cos(theta)) - u * numpy.sin(theta) - R[2, 0] = w * u * (1 - numpy.cos(theta)) - v * numpy.sin(theta) - R[2, 1] = w * v * (1 - numpy.cos(theta)) + u * numpy.sin(theta) - R[2, 2] = w ** 2 + numpy.cos(theta) * (1 - w ** 2) - - for iP in range(self.np): - p1 = numpy.array([self.X[iP], self.Y[iP], self.Z[iP]]) - p2 = numpy.dot(R, p1) - self.X[iP] = p2[0] - self.Y[iP] = p2[1] - self.Z[iP] = p2[2] - - # Translate back to original position - - self.X += a1[0] - self.Y += a1[1] - self.Z += a1[2] - - def makeCoin(self): - coin = numpy.zeros((3, 4, self.nf)) - for iF in range(self.nf): - for iC in range(4): - coin[0, iC, iF] = self.X[self.P[iF, iC] - 1] - coin[1, iC, iF] = self.Y[self.P[iF, iC] - 1] - coin[2, iC, iF] = self.Z[self.P[iF, iC] - 1] - return coin - - - -class cylinder: - def __init__(self, diameter, height, cCor): - self.diameter = diameter - self.height = height - self.xC = cCor[0] - self.yC = cCor[1] - self.zC = cCor[2] - self.name = 'cylinder' - self.panelize() - self.translate(self.xC, self.yC, self.zC) - - def panelize(self): - Ntheta = 18 - Nz = 3 - theta = [xx * 2 * numpy.pi / (Ntheta - 1) for xx in range(Ntheta)] - self.nf = 0 - self.np = 0 - r = [0, self.diameter / 2.0, self.diameter / 2.0, 0] - z = [0, 0, -self.height, -self.height] - self.X = [] - self.Y = [] - self.Z = [] - self.P = numpy.zeros([(len(r) - 1) * (Ntheta - 1), 4], dtype=int) - n = len(r) - - for iT in range(Ntheta): - for iN in range(n): - self.X.append(r[iN] * numpy.cos(theta[iT])) - self.Y.append(r[iN] * numpy.sin(theta[iT])) - self.Z.append(z[iN]) - self.np += 1 - - iP = 0 - for iN in range(1, n): - for iT in range(1, Ntheta): - self.P[iP, 0] = iN + n * (iT - 1) - self.P[iP, 1] = iN + 1 + n * (iT - 1) - self.P[iP, 2] = iN + 1 + n * iT - self.P[iP, 3] = iN + n * iT - self.nf += 1 - iP += 1 - - self.X = numpy.array(self.X) - self.Y = numpy.array(self.Y) - self.Z = numpy.array(self.Z) - # Define triangles for plotting - self.trii = numpy.zeros([2 * self.nf, 3], dtype=int) - iT = 0 - for iTr in range(self.nf): - self.trii[iT, :] = [self.P[iTr, 0] - 1, self.P[iTr, 1] - 1, self.P[iTr, 2] - 1] - self.trii[iT + 1, :] = [self.P[iTr, 0] - 1, self.P[iTr, 2] - 1, self.P[iTr, 3] - 1] - iT += 2 - - def translate(self, xT, yT, zT): - self.X += xT - self.Y += yT - self.Z += zT - - def rotate(self, a1, a2, theta): - R = numpy.zeros([3, 3]) - # Normal vector through origin - u = a2[0] - a1[0] - v = a2[1] - a1[1] - w = a2[2] - a1[2] - u = u / numpy.sqrt(u ** 2 + v ** 2 + w ** 2) - v = v / numpy.sqrt(u ** 2 + v ** 2 + w ** 2) - w = w / numpy.sqrt(u ** 2 + v ** 2 + w ** 2) - # Translate mesh so that rotation axis starts from the origin - self.X -= a1[0] - self.Y -= a1[1] - self.Z -= a1[2] - - # Rotation matrix - R[0, 0] = u ** 2 + numpy.cos(theta) * (1 - u ** 2) - R[0, 1] = u * v * (1 - numpy.cos(theta)) - w * numpy.sin(theta) - R[0, 2] = u * w * (1 - numpy.cos(theta)) + v * numpy.sin(theta) - R[1, 0] = u * v * (1 - numpy.cos(theta)) + 
w * numpy.sin(theta) - R[1, 1] = v ** 2 + numpy.cos(theta) * (1 - v ** 2) - R[1, 2] = v * w * (1 - numpy.cos(theta)) - u * numpy.sin(theta) - R[2, 0] = w * u * (1 - numpy.cos(theta)) - v * numpy.sin(theta) - R[2, 1] = w * v * (1 - numpy.cos(theta)) + u * numpy.sin(theta) - R[2, 2] = w ** 2 + numpy.cos(theta) * (1 - w ** 2) - - for iP in range(self.np): - p1 = numpy.array([self.X[iP], self.Y[iP], self.Z[iP]]) - p2 = numpy.dot(R, p1) - self.X[iP] = p2[0] - self.Y[iP] = p2[1] - self.Z[iP] = p2[2] - - # Translate back to original position - - self.X += a1[0] - self.Y += a1[1] - self.Z += a1[2] - - def makeCoin(self): - coin = numpy.zeros((3, 4, self.nf)) - for iF in range(self.nf): - for iC in range(4): - coin[0, iC, iF] = self.X[self.P[iF, iC] - 1] - coin[1, iC, iF] = self.Y[self.P[iF, iC] - 1] - coin[2, iC, iF] = self.Z[self.P[iF, iC] - 1] - return coin - - - - -class hemicylinder: - def __init__(self, diameter, height, cCor): - self.diameter = diameter - self.height = height - self.xC = cCor[0] - self.yC = cCor[1] - self.zC = cCor[2] - self.name = 'hemicylinder' - self.panelize() - self.translate(self.xC, self.yC, self.zC) - - def panelize(self): - Ntheta = 18 - Nz = 3 - theta = [xx * numpy.pi / (Ntheta - 1) - numpy.pi / 2.0 for xx in range(Ntheta)] - self.nf = 0 - self.np = 0 - r = [0, self.diameter / 2.0, self.diameter / 2.0, 0] - z = [self.height / 2.0, self.height / 2.0, -self.height / 2.0, -self.height / 2.0] - self.X = [] - self.Y = [] - self.Z = [] - self.P = numpy.zeros([(len(r) - 1) * (Ntheta - 1), 4], dtype=int) - n = len(r) - - for iT in range(Ntheta): - for iN in range(n): - self.Z.append(-r[iN] * numpy.cos(theta[iT])) - self.X.append(r[iN] * numpy.sin(theta[iT])) - self.Y.append(z[iN]) - self.np += 1 - - iP = 0 - for iN in range(1, n): - for iT in range(1, Ntheta): - self.P[iP, 3] = iN + n * (iT - 1) - self.P[iP, 2] = iN + 1 + n * (iT - 1) - self.P[iP, 1] = iN + 1 + n * iT - self.P[iP, 0] = iN + n * iT - self.nf += 1 - iP += 1 - - self.X = numpy.array(self.X) - self.Y = numpy.array(self.Y) - self.Z = numpy.array(self.Z) - # Define triangles for plotting - self.trii = numpy.zeros([2 * self.nf, 3], dtype=int) - iT = 0 - for iTr in range(self.nf): - self.trii[iT, :] = [self.P[iTr, 0] - 1, self.P[iTr, 1] - 1, self.P[iTr, 2] - 1] - self.trii[iT + 1, :] = [self.P[iTr, 0] - 1, self.P[iTr, 2] - 1, self.P[iTr, 3] - 1] - iT += 2 - - def translate(self, xT, yT, zT): - self.X += xT - self.Y += yT - self.Z += zT - - def rotate(self, a1, a2, theta): - R = numpy.zeros([3, 3]) - # Normal vector through origin - u = a2[0] - a1[0] - v = a2[1] - a1[1] - w = a2[2] - a1[2] - u = u / numpy.sqrt(u ** 2 + v ** 2 + w ** 2) - v = v / numpy.sqrt(u ** 2 + v ** 2 + w ** 2) - w = w / numpy.sqrt(u ** 2 + v ** 2 + w ** 2) - # Translate mesh so that rotation axis starts from the origin - self.X -= a1[0] - self.Y -= a1[1] - self.Z -= a1[2] - - # Rotation matrix - R[0, 0] = u ** 2 + numpy.cos(theta) * (1 - u ** 2) - R[0, 1] = u * v * (1 - numpy.cos(theta)) - w * numpy.sin(theta) - R[0, 2] = u * w * (1 - numpy.cos(theta)) + v * numpy.sin(theta) - R[1, 0] = u * v * (1 - numpy.cos(theta)) + w * numpy.sin(theta) - R[1, 1] = v ** 2 + numpy.cos(theta) * (1 - v ** 2) - R[1, 2] = v * w * (1 - numpy.cos(theta)) - u * numpy.sin(theta) - R[2, 0] = w * u * (1 - numpy.cos(theta)) - v * numpy.sin(theta) - R[2, 1] = w * v * (1 - numpy.cos(theta)) + u * numpy.sin(theta) - R[2, 2] = w ** 2 + numpy.cos(theta) * (1 - w ** 2) - - for iP in range(self.np): - p1 = numpy.array([self.X[iP], self.Y[iP], self.Z[iP]]) - p2 = numpy.dot(R, 
p1) - self.X[iP] = p2[0] - self.Y[iP] = p2[1] - self.Z[iP] = p2[2] - - # Translate back to original position - - self.X += a1[0] - self.Y += a1[1] - self.Z += a1[2] - - def makeCoin(self): - coin = numpy.zeros((3, 4, self.nf)) - for iF in range(self.nf): - for iC in range(4): - coin[0, iC, iF] = self.X[self.P[iF, iC] - 1] - coin[1, iC, iF] = self.Y[self.P[iF, iC] - 1] - coin[2, iC, iF] = self.Z[self.P[iF, iC] - 1] - return coin - - -class sphere: - def __init__(self, diameter, cCor): - self.diameter = diameter - self.xC = cCor[0] - self.yC = cCor[1] - self.zC = cCor[2] - self.name = 'sphere' - self.panelize() - self.translate(self.xC, self.yC, self.zC) - - def panelize(self): - Ntheta = 18 - Nthetad2 = int(Ntheta / 2) - Nz = 3 - theta = [xx * 2 * numpy.pi / (Ntheta - 1) for xx in range(Ntheta)] - phi = [xx * numpy.pi / (Ntheta / 2 - 1) for xx in range(Nthetad2)] - self.nf = 0 - self.np = 0 - r = self.diameter / 2.0 - self.X = [] - self.Y = [] - self.Z = [] - self.P = numpy.zeros([(Ntheta - 1) * (Nthetad2 - 1), 4], dtype=int) - - for iT in range(Nthetad2): - for iTT in range(Ntheta): - self.X.append(r * numpy.cos(theta[iTT]) * numpy.sin(phi[iT])) - self.Y.append(r * numpy.sin(theta[iTT]) * numpy.sin(phi[iT])) - self.Z.append(r * numpy.cos(phi[iT])) - self.np += 1 - - iP = 0 - for iN in range(1, Ntheta): - for iT in range(1, Nthetad2): - self.P[iP, 3] = iN + Ntheta * (iT - 1) - self.P[iP, 2] = iN + 1 + Ntheta * (iT - 1) - self.P[iP, 1] = iN + 1 + Ntheta * iT - self.P[iP, 0] = iN + Ntheta * iT - self.nf += 1 - iP += 1 - self.X = numpy.array(self.X) - self.Y = numpy.array(self.Y) - self.Z = numpy.array(self.Z) - # Define triangles for plotting - self.trii = numpy.zeros([2 * self.nf, 3], dtype=int) - iT = 0 - for iTr in range(self.nf): - self.trii[iT, :] = [self.P[iTr, 0] - 1, self.P[iTr, 1] - 1, self.P[iTr, 2] - 1] - self.trii[iT + 1, :] = [self.P[iTr, 0] - 1, self.P[iTr, 2] - 1, self.P[iTr, 3] - 1] - iT += 2 - - def translate(self, xT, yT, zT): - self.X += xT - self.Y += yT - self.Z += zT - - def rotate(self, a1, a2, theta): - R = numpy.zeros([3, 3]) - # Normal vector through origin - u = a2[0] - a1[0] - v = a2[1] - a1[1] - w = a2[2] - a1[2] - u = u / numpy.sqrt(u ** 2 + v ** 2 + w ** 2) - v = v / numpy.sqrt(u ** 2 + v ** 2 + w ** 2) - w = w / numpy.sqrt(u ** 2 + v ** 2 + w ** 2) - # Translate mesh so that rotation axis starts from the origin - self.X -= a1[0] - self.Y -= a1[1] - self.Z -= a1[2] - - # Rotation matrix - R[0, 0] = u ** 2 + numpy.cos(theta) * (1 - u ** 2) - R[0, 1] = u * v * (1 - numpy.cos(theta)) - w * numpy.sin(theta) - R[0, 2] = u * w * (1 - numpy.cos(theta)) + v * numpy.sin(theta) - R[1, 0] = u * v * (1 - numpy.cos(theta)) + w * numpy.sin(theta) - R[1, 1] = v ** 2 + numpy.cos(theta) * (1 - v ** 2) - R[1, 2] = v * w * (1 - numpy.cos(theta)) - u * numpy.sin(theta) - R[2, 0] = w * u * (1 - numpy.cos(theta)) - v * numpy.sin(theta) - R[2, 1] = w * v * (1 - numpy.cos(theta)) + u * numpy.sin(theta) - R[2, 2] = w ** 2 + numpy.cos(theta) * (1 - w ** 2) - - for iP in range(self.np): - p1 = numpy.array([self.X[iP], self.Y[iP], self.Z[iP]]) - p2 = numpy.dot(R, p1) - self.X[iP] = p2[0] - self.Y[iP] = p2[1] - self.Z[iP] = p2[2] - - # Translate back to original position - - self.X += a1[0] - self.Y += a1[1] - self.Z += a1[2] - - def makeCoin(self): - coin = numpy.zeros((3, 4, self.nf)) - for iF in range(self.nf): - for iC in range(4): - coin[0, iC, iF] = self.X[self.P[iF, iC] - 1] - coin[1, iC, iF] = self.Y[self.P[iF, iC] - 1] - coin[2, iC, iF] = self.Z[self.P[iF, iC] - 1] - return coin - 
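# A minimal, hypothetical usage sketch of the primitive classes defined above; the
# shape, dimensions, and output filename are illustrative assumptions, not values
# taken from the app. writeMesh() only needs an object exposing np, nf, X, Y, Z
# and P, which every primitive in this module provides, e.g.:
#
#   example_hull = sphere(6.5, [0.0, 0.0, -3.0])             # 6.5 m sphere, centre 3 m below the surface
#   example_hull.translate(1.0, 0.0, 0.0)                    # shift 1 m along x
#   example_hull.rotate([0, 0, 0], [0, 0, 1], numpy.pi / 8)  # rotate slightly about the z axis
#   writeMesh(example_hull, 'example_hull.dat')              # write the vertex and quad-panel lists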
- - - - -class hemisphere: - def __init__(self, diameter, cCor): - self.diameter = diameter - self.xC = cCor[0] - self.yC = cCor[1] - self.zC = cCor[2] - self.name = 'hemisphere' - self.panelize() - self.translate(self.xC, self.yC, self.zC) - - def panelize(self): - Ntheta = 18 - theta = [xx * 2 * numpy.pi / (Ntheta - 1) for xx in range(Ntheta)] - phi = [xx * numpy.pi / 2.0 / (Ntheta / 2 - 1) for xx in range(Ntheta / 2)] - self.nf = 0 - self.np = 0 - r = self.diameter / 2.0 - self.X = [] - self.Y = [] - self.Z = [] - self.P = numpy.zeros([(Ntheta - 1) * (Ntheta / 2 - 1), 4], dtype=int) - - for iT in range(Ntheta / 2): - for iTT in range(Ntheta): - self.X.append(r * numpy.cos(theta[iTT]) * numpy.sin(phi[iT])) - self.Y.append(r * numpy.sin(theta[iTT]) * numpy.sin(phi[iT])) - self.Z.append(-r * numpy.cos(phi[iT])) - self.np += 1 - - iP = 0 - for iN in range(1, Ntheta): - for iT in range(1, Ntheta / 2): - self.P[iP, 0] = iN + Ntheta * (iT - 1) - self.P[iP, 1] = iN + 1 + Ntheta * (iT - 1) - self.P[iP, 2] = iN + 1 + Ntheta * iT - self.P[iP, 3] = iN + Ntheta * iT - self.nf += 1 - iP += 1 - - self.X = numpy.array(self.X) - self.Y = numpy.array(self.Y) - self.Z = numpy.array(self.Z) - # Define triangles for plotting - self.trii = numpy.zeros([2 * self.nf, 3], dtype=int) - iT = 0 - for iTr in range(self.nf): - self.trii[iT, :] = [self.P[iTr, 0] - 1, self.P[iTr, 1] - 1, self.P[iTr, 2] - 1] - self.trii[iT + 1, :] = [self.P[iTr, 0] - 1, self.P[iTr, 2] - 1, self.P[iTr, 3] - 1] - iT += 2 - - def translate(self, xT, yT, zT): - self.X += xT - self.Y += yT - self.Z += zT - - def rotate(self, a1, a2, theta): - R = numpy.zeros([3, 3]) - # Normal vector through origin - u = a2[0] - a1[0] - v = a2[1] - a1[1] - w = a2[2] - a1[2] - u = u / numpy.sqrt(u ** 2 + v ** 2 + w ** 2) - v = v / numpy.sqrt(u ** 2 + v ** 2 + w ** 2) - w = w / numpy.sqrt(u ** 2 + v ** 2 + w ** 2) - # Translate mesh so that rotation axis starts from the origin - self.X -= a1[0] - self.Y -= a1[1] - self.Z -= a1[2] - - # Rotation matrix - R[0, 0] = u ** 2 + numpy.cos(theta) * (1 - u ** 2) - R[0, 1] = u * v * (1 - numpy.cos(theta)) - w * numpy.sin(theta) - R[0, 2] = u * w * (1 - numpy.cos(theta)) + v * numpy.sin(theta) - R[1, 0] = u * v * (1 - numpy.cos(theta)) + w * numpy.sin(theta) - R[1, 1] = v ** 2 + numpy.cos(theta) * (1 - v ** 2) - R[1, 2] = v * w * (1 - numpy.cos(theta)) - u * numpy.sin(theta) - R[2, 0] = w * u * (1 - numpy.cos(theta)) - v * numpy.sin(theta) - R[2, 1] = w * v * (1 - numpy.cos(theta)) + u * numpy.sin(theta) - R[2, 2] = w ** 2 + numpy.cos(theta) * (1 - w ** 2) - - for iP in range(self.np): - p1 = numpy.array([self.X[iP], self.Y[iP], self.Z[iP]]) - p2 = numpy.dot(R, p1) - self.X[iP] = p2[0] - self.Y[iP] = p2[1] - self.Z[iP] = p2[2] - - # Translate back to original position - - self.X += a1[0] - self.Y += a1[1] - self.Z += a1[2] - - def makeCoin(self): - coin = numpy.zeros((3, 4, self.nf)) - for iF in range(self.nf): - for iC in range(4): - coin[0, iC, iF] = self.X[self.P[iF, iC] - 1] - coin[1, iC, iF] = self.Y[self.P[iF, iC] - 1] - coin[2, iC, iF] = self.Z[self.P[iF, iC] - 1] - return coin - - - - -class pyramid: - def __init__(self, length, width, height, cCor): - self.length = length - self.width = width - self.height = height - self.xC = cCor[0] - self.yC = cCor[1] - self.zC = cCor[2] - self.name = 'pyramid' - self.panelize() - self.translate(self.xC, self.yC, self.zC) - - def panelize(self): - self.nf = 6 - self.np = 8 - self.X = numpy.array( - [0.0, 0.0, -self.length / 2.0, self.length / 2.0, 0.0, 0.0, -self.length / 
2.0, self.length / 2.0]) - self.Y = numpy.array( - [0.0, 0.0, self.width / 2.0, self.width / 2.0, 0.0, 0.0, -self.width / 2.0, -self.width / 2.0]) - self.Z = numpy.array([-self.height, -self.height, 0.0, 0.0, -self.height, -self.height, 0.0, 0.0]) - self.P = numpy.zeros([6, 4], dtype=int) - self.P[0, :] = numpy.array([3, 4, 2, 1]) - self.P[1, :] = numpy.array([4, 8, 6, 2]) - self.P[2, :] = numpy.array([8, 7, 5, 6]) - self.P[3, :] = numpy.array([7, 3, 1, 5]) - self.P[4, :] = numpy.array([5, 6, 5, 1]) - self.P[5, :] = numpy.array([8, 4, 3, 7]) - # Define triangles for plotting - self.trii = numpy.zeros([2 * self.nf, 3], dtype=int) - iT = 0 - for iTr in range(self.nf): - self.trii[iT, :] = [self.P[iTr, 0] - 1, self.P[iTr, 1] - 1, self.P[iTr, 2] - 1] - self.trii[iT + 1, :] = [self.P[iTr, 0] - 1, self.P[iTr, 2] - 1, self.P[iTr, 3] - 1] - iT += 2 - - def translate(self, xT, yT, zT): - self.X += xT - self.Y += yT - self.Z += zT - - def rotate(self, a1, a2, theta): - R = numpy.zeros([3, 3]) - # Normal vector through origin - u = a2[0] - a1[0] - v = a2[1] - a1[1] - w = a2[2] - a1[2] - u = u / numpy.sqrt(u ** 2 + v ** 2 + w ** 2) - v = v / numpy.sqrt(u ** 2 + v ** 2 + w ** 2) - w = w / numpy.sqrt(u ** 2 + v ** 2 + w ** 2) - # Translate mesh so that rotation axis starts from the origin - self.X -= a1[0] - self.Y -= a1[1] - self.Z -= a1[2] - - # Rotation matrix - R[0, 0] = u ** 2 + numpy.cos(theta) * (1 - u ** 2) - R[0, 1] = u * v * (1 - numpy.cos(theta)) - w * numpy.sin(theta) - R[0, 2] = u * w * (1 - numpy.cos(theta)) + v * numpy.sin(theta) - R[1, 0] = u * v * (1 - numpy.cos(theta)) + w * numpy.sin(theta) - R[1, 1] = v ** 2 + numpy.cos(theta) * (1 - v ** 2) - R[1, 2] = v * w * (1 - numpy.cos(theta)) - u * numpy.sin(theta) - R[2, 0] = w * u * (1 - numpy.cos(theta)) - v * numpy.sin(theta) - R[2, 1] = w * v * (1 - numpy.cos(theta)) + u * numpy.sin(theta) - R[2, 2] = w ** 2 + numpy.cos(theta) * (1 - w ** 2) - - for iP in range(self.np): - p1 = numpy.array([self.X[iP], self.Y[iP], self.Z[iP]]) - p2 = numpy.dot(R, p1) - self.X[iP] = p2[0] - self.Y[iP] = p2[1] - self.Z[iP] = p2[2] - - # Translate back to original position - - self.X += a1[0] - self.Y += a1[1] - self.Z += a1[2] - - def makeCoin(self): - coin = numpy.zeros((3, 4, self.nf)) - for iF in range(self.nf): - for iC in range(4): - coin[0, iC, iF] = self.X[self.P[iF, iC] - 1] - coin[1, iC, iF] = self.Y[self.P[iF, iC] - 1] - coin[2, iC, iF] = self.Z[self.P[iF, iC] - 1] - return coin - - - - - -class wedge: - def __init__(self, length, width, height, cCor): - self.length = length - self.width = width - self.height = height - self.xC = cCor[0] - self.yC = cCor[1] - self.zC = cCor[2] - self.name = 'wedge' - self.panelize() - self.translate(self.xC, self.yC, self.zC) - - def panelize(self): - self.nf = 6 - self.np = 8 - self.X = numpy.array( - [0.0, 0.0, -self.length / 2.0, self.length / 2.0, 0.0, 0.0, -self.length / 2.0, self.length / 2.0]) - self.Y = numpy.array([self.width / 2.0, self.width / 2.0, self.width / 2.0, self.width / 2.0, -self.width / 2.0, - -self.width / 2.0, -self.width / 2.0, -self.width / 2.0]) - self.Z = numpy.array([-self.height, -self.height, 0.0, 0.0, -self.height, -self.height, 0.0, 0.0]) - self.P = numpy.zeros([6, 4], dtype=int) - self.P[0, :] = numpy.array([3, 4, 2, 1]) - self.P[1, :] = numpy.array([4, 8, 6, 2]) - self.P[2, :] = numpy.array([8, 7, 5, 6]) - self.P[3, :] = numpy.array([7, 3, 1, 5]) - self.P[4, :] = numpy.array([2, 6, 5, 1]) - self.P[5, :] = numpy.array([8, 4, 3, 7]) - # Define triangles for plotting - self.trii = 
numpy.zeros([2 * self.nf, 3], dtype=int) - iT = 0 - for iTr in range(self.nf): - self.trii[iT, :] = [self.P[iTr, 0] - 1, self.P[iTr, 1] - 1, self.P[iTr, 2] - 1] - self.trii[iT + 1, :] = [self.P[iTr, 0] - 1, self.P[iTr, 2] - 1, self.P[iTr, 3] - 1] - iT += 2 - - def translate(self, xT, yT, zT): - self.X += xT - self.Y += yT - self.Z += zT - - def rotate(self, a1, a2, theta): - R = numpy.zeros([3, 3]) - # Normal vector through origin - u = a2[0] - a1[0] - v = a2[1] - a1[1] - w = a2[2] - a1[2] - u = u / numpy.sqrt(u ** 2 + v ** 2 + w ** 2) - v = v / numpy.sqrt(u ** 2 + v ** 2 + w ** 2) - w = w / numpy.sqrt(u ** 2 + v ** 2 + w ** 2) - # Translate mesh so that rotation axis starts from the origin - self.X -= a1[0] - self.Y -= a1[1] - self.Z -= a1[2] - - # Rotation matrix - R[0, 0] = u ** 2 + numpy.cos(theta) * (1 - u ** 2) - R[0, 1] = u * v * (1 - numpy.cos(theta)) - w * numpy.sin(theta) - R[0, 2] = u * w * (1 - numpy.cos(theta)) + v * numpy.sin(theta) - R[1, 0] = u * v * (1 - numpy.cos(theta)) + w * numpy.sin(theta) - R[1, 1] = v ** 2 + numpy.cos(theta) * (1 - v ** 2) - R[1, 2] = v * w * (1 - numpy.cos(theta)) - u * numpy.sin(theta) - R[2, 0] = w * u * (1 - numpy.cos(theta)) - v * numpy.sin(theta) - R[2, 1] = w * v * (1 - numpy.cos(theta)) + u * numpy.sin(theta) - R[2, 2] = w ** 2 + numpy.cos(theta) * (1 - w ** 2) - - for iP in range(self.np): - p1 = numpy.array([self.X[iP], self.Y[iP], self.Z[iP]]) - p2 = numpy.dot(R, p1) - self.X[iP] = p2[0] - self.Y[iP] = p2[1] - self.Z[iP] = p2[2] - - # Translate back to original position - - self.X += a1[0] - self.Y += a1[1] - self.Z += a1[2] - - def makeCoin(self): - coin = numpy.zeros((3, 4, self.nf)) - for iF in range(self.nf): - for iC in range(4): - coin[0, iC, iF] = self.X[self.P[iF, iC] - 1] - coin[1, iC, iF] = self.Y[self.P[iF, iC] - 1] - coin[2, iC, iF] = self.Z[self.P[iF, iC] - 1] - return coin - - - - - -class torus: - def __init__(self, diamOut, diamIn, cCor): - self.diamOut = diamOut - self.diamIn = diamIn - self.xC = cCor[0] - self.yC = cCor[1] - self.zC = cCor[2] - self.name = 'torus' - self.panelize() - self.translate(self.xC, self.yC, self.zC) - - def panelize(self): - Ntheta = 18 - Nphi = 18 - theta = [xx * 2 * numpy.pi / (Ntheta - 1) for xx in range(Ntheta)] - phi = [xx * 2 * numpy.pi / (Nphi - 1) for xx in range(Nphi)] - self.nf = 0 - self.np = 0 - self.X = [] - self.Y = [] - self.Z = [] - R = self.diamOut / 2.0 - r = self.diamIn / 2.0 - - for iT in range(Ntheta): - for iP in range(Nphi): - self.X.append((R + r * numpy.cos(theta[iT])) * numpy.cos(phi[iP])) - self.Y.append((R + r * numpy.cos(theta[iT])) * numpy.sin(phi[iP])) - self.Z.append(r * numpy.sin(theta[iT])) - self.np += 1 - - self.nf = (Ntheta - 1) * (Nphi - 1) - self.P = numpy.zeros([self.nf, 4], dtype=int) - iPan = 0 - for iT in range(Ntheta - 1): - for iP in range(Nphi - 1): - self.P[iPan, 0] = iP + iT * Nphi + 1 - self.P[iPan, 1] = iP + 1 + iT * Nphi + 1 - self.P[iPan, 2] = iP + 1 + Ntheta + iT * Nphi + 1 - self.P[iPan, 3] = iP + Ntheta + iT * Nphi + 1 - iPan += 1 - - self.X = numpy.array(self.X) - self.Y = numpy.array(self.Y) - self.Z = numpy.array(self.Z) - # Define triangles for plotting - self.trii = numpy.zeros([2 * self.nf, 3], dtype=int) - iT = 0 - for iTr in range(self.nf): - self.trii[iT, :] = [self.P[iTr, 0] - 1, self.P[iTr, 1] - 1, self.P[iTr, 2] - 1] - self.trii[iT + 1, :] = [self.P[iTr, 0] - 1, self.P[iTr, 2] - 1, self.P[iTr, 3] - 1] - iT += 2 - - def translate(self, xT, yT, zT): - self.X += xT - self.Y += yT - self.Z += zT - - def rotate(self, a1, a2, theta): - 
R = numpy.zeros([3, 3]) - # Normal vector through origin - u = a2[0] - a1[0] - v = a2[1] - a1[1] - w = a2[2] - a1[2] - u = u / numpy.sqrt(u ** 2 + v ** 2 + w ** 2) - v = v / numpy.sqrt(u ** 2 + v ** 2 + w ** 2) - w = w / numpy.sqrt(u ** 2 + v ** 2 + w ** 2) - # Translate mesh so that rotation axis starts from the origin - self.X -= a1[0] - self.Y -= a1[1] - self.Z -= a1[2] - - # Rotation matrix - R[0, 0] = u ** 2 + numpy.cos(theta) * (1 - u ** 2) - R[0, 1] = u * v * (1 - numpy.cos(theta)) - w * numpy.sin(theta) - R[0, 2] = u * w * (1 - numpy.cos(theta)) + v * numpy.sin(theta) - R[1, 0] = u * v * (1 - numpy.cos(theta)) + w * numpy.sin(theta) - R[1, 1] = v ** 2 + numpy.cos(theta) * (1 - v ** 2) - R[1, 2] = v * w * (1 - numpy.cos(theta)) - u * numpy.sin(theta) - R[2, 0] = w * u * (1 - numpy.cos(theta)) - v * numpy.sin(theta) - R[2, 1] = w * v * (1 - numpy.cos(theta)) + u * numpy.sin(theta) - R[2, 2] = w ** 2 + numpy.cos(theta) * (1 - w ** 2) - - for iP in range(self.np): - p1 = numpy.array([self.X[iP], self.Y[iP], self.Z[iP]]) - p2 = numpy.dot(R, p1) - self.X[iP] = p2[0] - self.Y[iP] = p2[1] - self.Z[iP] = p2[2] - - # Translate back to original position - - self.X += a1[0] - self.Y += a1[1] - self.Z += a1[2] - - def makeCoin(self): - coin = numpy.zeros((3, 4, self.nf)) - for iF in range(self.nf): - for iC in range(4): - coin[0, iC, iF] = self.X[self.P[iF, iC] - 1] - coin[1, iC, iF] = self.Y[self.P[iF, iC] - 1] - coin[2, iC, iF] = self.Z[self.P[iF, iC] - 1] - return coin - -def make_voxels_without_figure(shape, length, height, width, diameter): - pos = [0, 0, 0] - if shape == "box": - mesh = box(length, width, height, pos) - elif shape == "cone": - mesh = cone(diameter, height, pos) - elif shape == "cylinder": - mesh = cylinder(diameter, height, pos) - elif shape == "sphere": - mesh = sphere(diameter, pos) - elif shape == "wedge": - mesh = wedge(length, width, height, pos) - - hull_points = numpy.array([mesh.X.tolist(), mesh.Y.tolist(), mesh.Z.tolist()]).T - - # Set up test points - G = 32 - ex = 5 - 5 / G - x, y, z = numpy.meshgrid(numpy.linspace(-ex, ex, G), - numpy.linspace(-ex, ex, G), - numpy.linspace(-(9.5 - 5 / G), 0.5 - 5 / G, G)) - test_points = numpy.vstack((x.ravel(), y.ravel(), z.ravel())).T - - hull = scipy.spatial.Delaunay(hull_points) - within = hull.find_simplex(test_points) >= 0 - - return within*1.0 - - -def make_voxels(shape, length, height, width, diameter): - return plotly_fig(make_voxels_without_figure(shape, length, height, width, diameter)) - -# This function loads a fuckton of data -# def load_data(): -# # Open all the files we downloaded at the beginning and take out hte good bits -# curves = numpy.load('data_curves.npz')['curves'] -# geometry = numpy.load('data_geometry.npz')['geometry'] -# constants = numpy.load('constants.npz') -# S = constants['S'] -# N = constants['N'] -# D = constants['D'] -# F = constants['F'] -# G = constants['G'] - -# # Some of the good bits need additional processining -# new_curves = numpy.zeros((S*N, D * F)) -# for i, curveset in enumerate(curves): -# new_curves[i, :] = curveset.T.flatten() / 1000000 - -# new_geometry = numpy.zeros((S*N, G * G * G)) -# for i, geometryset in enumerate(geometry): -# new_geometry[i, :] = geometryset.T.flatten() - -# # Return good bits to user -# return curves, geometry, S, N, D, F, G, new_curves, new_geometry - -curves, geometry, S, N, D, F, G, new_curves, new_geometry = load_data() - -class Network(object): - - def __init__(self, type): - # Instantiate variables - # self.curves = curves - # self.new_curves = 
new_curves - # self.geometry = geometry - # self.new_geometry = new_geometry - # self.S = S - # self.N = N - # self.D = D - # self.F = F - # self.G = G - - # Load network - # with open(structure, 'r') as file: - # self.network = keras.models.model_from_json(file.read()) - # self.network.load_weights(weights) - self.network = from_pretrained_keras("cmudrc/wave-energy-analysis") if type == "forward" else from_pretrained_keras("cmudrc/wave-energy-synthesis") - - def analysis(self, idx=None): - print(idx) - - if idx is None: - idx = numpy.random.randint(1, S * N) - else: - idx = int(idx) - - # Get the input - data_input = new_geometry[idx:(idx+1), :] - other_data_input = data_input.reshape((G, G, G), order='F') - - # Get the outputs - print(data_input.shape) - predicted_output = self.network.predict(data_input) - true_output = new_curves[idx].reshape((3, F)) - predicted_output = predicted_output.reshape((3, F)) - - f = numpy.linspace(0.05, 2.0, 64) - fd = pandas.DataFrame(f).rename(columns={0: "Frequency"}) - df_pred = pandas.DataFrame(predicted_output.transpose()).rename(columns={0: "Surge", 1: "Heave", 2: "Pitch"}) - df_true = pandas.DataFrame(true_output.transpose()).rename(columns={0: "Surge", 1: "Heave", 2: "Pitch"}) - - # return idx, other_data_input, true_output, predicted_output - return pandas.concat([fd, df_pred], axis=1), pandas.concat([fd, df_true], axis=1) - - - def analysis_from_geometry(self, geometry): - # Get the outputs - predicted_output = self.network.predict(numpy.array([geometry.flatten().tolist()])) - predicted_output = predicted_output.reshape((3, F)) - - f = numpy.linspace(0.05, 2.0, 64) - fd = pandas.DataFrame(f).rename(columns={0: "Frequency"}) - df_pred = pandas.DataFrame(predicted_output.transpose()).rename(columns={0: "Surge", 1: "Heave", 2: "Pitch"}) - good_frame = pandas.concat([fd, df_pred], axis=1) - - return good_frame, good_frame - - def synthesis(self, idx=None): - print(idx) - - if idx is None: - idx = numpy.random.randint(1, S * N) - else: - idx = int(idx) - - # Get the input - data_input = new_curves[idx:(idx+1), :] - other_data_input = data_input.reshape((3, F)) - - # Get the outputs - predicted_output = self.network.predict(data_input) - true_output = new_geometry[idx].reshape((G, G, G), order='F') - predicted_output = predicted_output.reshape((G, G, G), order='F') - - # return idx, other_data_input, true_output, predicted_output - return predicted_output, true_output - - - def synthesis_from_spectrum(self, other_data_input): - # Get the input - data_input = other_data_input.reshape((1, 3*F)) - - # Get the outputs - predicted_output = self.network.predict(data_input) - predicted_output = predicted_output.reshape((G, G, G), order='F') - - # return idx, other_data_input, true_output, predicted_output - return predicted_output - - def get_geometry(self, idx=None): - - if idx is None: - idx = numpy.random.randint(1, S * N) - else: - idx = int(idx) - - idx = int(idx) - - # Get the input - data_input = new_geometry[idx:(idx+1), :] - other_data_input = data_input.reshape((G, G, G), order='F') - - # return idx, other_data_input, true_output, predicted_output - return other_data_input - - - def get_performance(self, idx=None): - - if idx is None: - idx = numpy.random.randint(1, S *N) - else: - idx = int(idx) - - idx = int(idx) - - # Get the input - data_input = new_curves[idx:(idx+1), :] - other_data_input = data_input.reshape((3, F)) - - f = numpy.linspace(0.05, 2.0, 64) - fd = pandas.DataFrame(f).rename(columns={0: "Frequency"}) - df_pred = 
pandas.DataFrame(other_data_input.transpose()).rename(columns={0: "Surge", 1: "Heave", 2: "Pitch"}) - table = pandas.concat([fd, df_pred], axis=1) - - return table - - -def plotly_fig(values): - X, Y, Z = numpy.mgrid[0:1:32j, 0:1:32j, 0:1:32j] - fig = go.Figure(data=go.Volume( - x=X.flatten(), - y=Y.flatten(), - z=Z.flatten(), - value=values.flatten(), - isomin=0.0, - isomax=1.0, - opacity=0.1, # needs to be small to see through all surfaces - surface_count=21, # needs to be a large number for good volume rendering - colorscale='haline' - )) - return fig - - -value_net = Network("forward") - -def performance(index): - return value_net.get_performance(index) - -def geometry(index): - values = value_net.get_geometry(index) - return plotly_fig(values) - -def simple_analysis(index, choice, shape, length, width, height, diameter): - forward_net = Network("forward") - # forward_net = Network("16forward_structure.json", "16forward_weights.h5") - if choice == "Construct Shape from Parameters": - return forward_net.analysis_from_geometry(make_voxels_without_figure(shape, length, height, width, diameter)) - elif choice == "Pick Shape from Dataset": - return forward_net.analysis(index) - - -def simple_synthesis(index): - inverse_net = Network("inverse") - # inverse_net = Network("16inverse_structure.json", "16inverse_weights.h5") - pred, true = inverse_net.synthesis(index) - return plotly_fig(pred), plotly_fig(true) - -def synthesis_from_spectrum(df): - inverse_net = Network("inverse") - # inverse_net = Network("16inverse_structure.json", "16inverse_weights.h5") - pred = inverse_net.synthesis_from_spectrum(df.to_numpy()[:, 1:]) - return plotly_fig(pred) - - - -def change_textbox(choice, length, height, width, diameter): - fig = make_voxels(choice, length, height, width, diameter) - if choice == "cylinder": - return [gradio.Slider.update(visible=True), gradio.Slider.update(visible=False), gradio.Slider.update(visible=True), gradio.Slider.update(visible=False), gradio.Plot.update(fig)] - elif choice == "sphere": - return [gradio.Slider.update(visible=False), gradio.Slider.update(visible=False), gradio.Slider.update(visible=True), gradio.Slider.update(visible=False), gradio.Plot.update(fig)] - elif choice == "box": - return [gradio.Slider.update(visible=True), gradio.Slider.update(visible=True), gradio.Slider.update(visible=False), gradio.Slider.update(visible=True), gradio.Plot.update(fig)] - elif choice == "wedge": - return [gradio.Slider.update(visible=True), gradio.Slider.update(visible=True), gradio.Slider.update(visible=False), gradio.Slider.update(visible=True), gradio.Plot.update(fig)] - elif choice == "cone": - return [gradio.Slider.update(visible=True), gradio.Slider.update(visible=False), gradio.Slider.update(visible=True), gradio.Slider.update(visible=False), gradio.Plot.update(fig)] - - - -def randomize_analysis(choice): - if choice == "Construct Shape from Parameters": - length = random.uniform(3.0, 10.0) - height = random.uniform(3.0, 10.0) - width = random.uniform(3.0, 10.0) - diameter = random.uniform(3.0, 10.0) - choice2 = random.choice(["box", "cone", "sphere", "wedge", "cone"]) - if choice2 == "box" or choice2 == "wedge": - return [gradio.Radio.update(choice2), gradio.Slider.update(length), gradio.Slider.update(height), gradio.Slider.update(width), gradio.Slider.update(), gradio.Number.update(), gradio.Plot.update(make_voxels(choice2, length, height, width, diameter))] - elif choice2 == "cone" or choice2 == "cylinder": - return [gradio.Radio.update(choice2), gradio.Slider.update(), 
gradio.Slider.update(height), gradio.Slider.update(), gradio.Slider.update(diameter), gradio.Number.update(), gradio.Plot.update(make_voxels(choice2, length, height, width, diameter))] - elif choice2 == "sphere": - return [gradio.Radio.update(choice2), gradio.Slider.update(), gradio.Slider.update(), gradio.Slider.update(), gradio.Slider.update(diameter), gradio.Number.update(), gradio.Plot.update(make_voxels(choice2, length, height, width, diameter))] - elif choice == "Pick Shape from Dataset": - num = random.randint(1, 4999) - return [gradio.Radio.update(), gradio.Slider.update(), gradio.Slider.update(), gradio.Slider.update(), gradio.Slider.update(), gradio.Number.update(num), gradio.Plot.update(geometry(num))] - - - -def geometry_change(choice, choice2, num, length, width, height, diameter): - if choice == "Construct Shape from Parameters": - [slider1, slider2, slider3, slider4, plot] = change_textbox(choice2, length, height, width, diameter) - return [gradio.Radio.update(visible=True), slider1, slider2, slider3, slider4, gradio.Number.update(visible=False), gradio.Timeseries.update(visible=False), gradio.Plot.update(make_voxels(choice2, length, height, width, diameter))] - elif choice == "Pick Shape from Dataset": - return [gradio.Radio.update(visible=False), gradio.Slider.update(visible=False), gradio.Slider.update(visible=False), gradio.Slider.update(visible=False), gradio.Slider.update(visible=False), gradio.Number.update(visible=True), gradio.Timeseries.update(visible=True), gradio.Plot.update(geometry(num))] - -with gradio.Blocks() as demo: - with gradio.Accordion("✨ Read about the underlying ML model here! ✨", open=False): - with gradio.Row(): - with gradio.Column(): - gradio.Markdown("# Toward the Rapid Design of Engineered Systems Through Deep Neural Networks") - gradio.HTML("Christopher McComb, Carnegie Mellon University") - gradio.Markdown("__Abstract__: The design of a system commits a significant portion of the final cost of that system. Many computational approaches have been developed to assist designers in the analysis (e.g., computational fluid dynamics) and synthesis (e.g., topology optimization) of engineered systems. However, many of these approaches are computationally intensive, taking significant time to complete an analysis and even longer to iteratively synthesize a solution. The current work proposes a methodology for rapidly evaluating and synthesizing engineered systems through the use of deep neural networks. The proposed methodology is applied to the analysis and synthesis of offshore structures such as oil platforms. These structures are constructed in a marine environment and are typically designed to achieve specific dynamics in response to a known spectrum of ocean waves. Results show that deep learning can be used to accurately and rapidly synthesize and analyze offshore structure.") - with gradio.Column(): - download = gradio.HTML("") - - gradio.Markdown("When designing offshore structure, like [wave energy converters](https://www.nrel.gov/news/program/2021/how-wave-energy-could-go-big-by-getting-smaller.html), it's important to know what forces will be placed on the structure as waves come at different speeds. Likewise, if we have some idea of how we want the structure to respond to different waves, we can use that to guide the design of the shape of the structure. We call the first process _Analysis_, and the second process _Synthesis_. 
This demo has ML models that do both, very quickly.") - - with gradio.Tab("Analysis"): - - with gradio.Row(): - with gradio.Column(): - whence_commeth_geometry = gradio.Radio( - ["Construct Shape from Parameters", "Pick Shape from Dataset"], label="How would you like to generate the shape of the offshore structure for analysis?", value="Construct Shape from Parameters" - ) - radio = gradio.Radio( - ["box", "cone", "cylinder", "sphere", "wedge"], label="What kind of shape would you like to generate?", value="sphere" - ) - height = gradio.Slider(label="Height", interactive=True, minimum=3.0, maximum=10.0, value=6.5, visible=False) - width = gradio.Slider(label="Width", interactive=True, minimum=3.0, maximum=10.0, value=6.5, visible=False) - diameter = gradio.Slider(label="Diameter", interactive=True, minimum=3.0, maximum=10.0, value=6.5, visible=True) - length = gradio.Slider(label="Length", interactive=True, minimum=3.0, maximum=10.0, value=6.5, visible=False) - - - num = gradio.Number(42, label="Type the index of the spectrum you would like to use or randomly select it.", visible=False) - - btn1 = gradio.Button("Randomize") - with gradio.Column(): - geo = gradio.Plot(make_voxels("sphere", 6.5, 6.5, 6.5, 6.5), label="Geometry") - - - with gradio.Row(): - btn2 = gradio.Button("Estimate Spectrum") - - with gradio.Row(): - with gradio.Column(): - pred = gradio.Timeseries(x="Frequency", y=['Surge', 'Heave', 'Pitch'], label="Predicted") - - with gradio.Column(): - true = gradio.Timeseries(x="Frequency", y=['Surge', 'Heave', 'Pitch'], label="True", visible=False) - - radio.change(fn=change_textbox, inputs=[radio, length, height, width, diameter], outputs=[height, width, diameter, length, geo]) - height.change(fn=make_voxels, inputs = [radio, length, height, width, diameter], outputs=[geo]) - width.change(fn=make_voxels, inputs = [radio, length, height, width, diameter], outputs=[geo]) - diameter.change(fn=make_voxels, inputs = [radio, length, height, width, diameter], outputs=[geo]) - length.change(fn=make_voxels, inputs = [radio, length, height, width, diameter], outputs=[geo]) - whence_commeth_geometry.change(fn=geometry_change, inputs=[whence_commeth_geometry, radio, num, length, width, height, diameter], outputs=[radio, height, width, diameter, length, num, true, geo]) - num.change(fn=geometry, inputs=[num], outputs=[geo]) - - btn1.click(fn=randomize_analysis, inputs=[whence_commeth_geometry], outputs=[radio, length, height, width, diameter, num, geo]) - btn2.click(fn=simple_analysis, inputs=[num, whence_commeth_geometry, radio, length, width, height, diameter], outputs=[pred, true], api_name="analyze") - with gradio.Tab("Synthesis"): - with gradio.Row(): - with gradio.Column(): - whence_commeth_performance = gradio.Radio( - ["Pick Spectrum from Dataset"], label="How would you like to generate the desired response spectrum to synthesize from?", value="Construct Spectrum from Table" - ) - num = gradio.Number(42, label="Type the index of the shape you would like to use or randomly select it.") - btn1 = gradio.Button("Randomize") - with gradio.Column(): - perf = gradio.Timeseries(x="Frequency", y=['Surge', 'Heave', 'Pitch'], label="Performance") - - with gradio.Row(): - btn2 = gradio.Button("Synthesize Geometry") - - with gradio.Row(): - with gradio.Column(): - pred = gradio.Plot(label="Predicted") - - with gradio.Column(): - true = gradio.Plot(label="True") - - - btn1.click(fn=lambda: random.randint(1, 4999), inputs=[], outputs=num) - num.change(fn=performance, inputs=[num], outputs=[perf]) - 
btn2.click(fn=simple_synthesis, inputs=[num], outputs=[pred, true], api_name="synthesize") - -demo.launch() \ No newline at end of file diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dxva2.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dxva2.c deleted file mode 100644 index 568d686f39a58e6b6f160388a42b157fb4332e4d..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dxva2.c +++ /dev/null @@ -1,1058 +0,0 @@ -/* - * DXVA2 HW acceleration. - * - * copyright (c) 2010 Laurent Aimar - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include -#include -#include - -#include "libavutil/common.h" -#include "libavutil/log.h" -#include "libavutil/time.h" - -#include "avcodec.h" -#include "decode.h" -#include "dxva2_internal.h" - -/* define all the GUIDs used directly here, - to avoid problems with inconsistent dxva2api.h versions in mingw-w64 and different MSVC version */ -DEFINE_GUID(ff_DXVA2_ModeMPEG2_VLD, 0xee27417f, 0x5e28,0x4e65,0xbe,0xea,0x1d,0x26,0xb5,0x08,0xad,0xc9); -DEFINE_GUID(ff_DXVA2_ModeMPEG2and1_VLD, 0x86695f12, 0x340e,0x4f04,0x9f,0xd3,0x92,0x53,0xdd,0x32,0x74,0x60); -DEFINE_GUID(ff_DXVA2_ModeH264_E, 0x1b81be68, 0xa0c7,0x11d3,0xb9,0x84,0x00,0xc0,0x4f,0x2e,0x73,0xc5); -DEFINE_GUID(ff_DXVA2_ModeH264_F, 0x1b81be69, 0xa0c7,0x11d3,0xb9,0x84,0x00,0xc0,0x4f,0x2e,0x73,0xc5); -DEFINE_GUID(ff_DXVADDI_Intel_ModeH264_E, 0x604F8E68, 0x4951,0x4C54,0x88,0xFE,0xAB,0xD2,0x5C,0x15,0xB3,0xD6); -DEFINE_GUID(ff_DXVA2_ModeVC1_D, 0x1b81beA3, 0xa0c7,0x11d3,0xb9,0x84,0x00,0xc0,0x4f,0x2e,0x73,0xc5); -DEFINE_GUID(ff_DXVA2_ModeVC1_D2010, 0x1b81beA4, 0xa0c7,0x11d3,0xb9,0x84,0x00,0xc0,0x4f,0x2e,0x73,0xc5); -DEFINE_GUID(ff_DXVA2_ModeHEVC_VLD_Main, 0x5b11d51b, 0x2f4c,0x4452,0xbc,0xc3,0x09,0xf2,0xa1,0x16,0x0c,0xc0); -DEFINE_GUID(ff_DXVA2_ModeHEVC_VLD_Main10,0x107af0e0, 0xef1a,0x4d19,0xab,0xa8,0x67,0xa1,0x63,0x07,0x3d,0x13); -DEFINE_GUID(ff_DXVA2_ModeVP9_VLD_Profile0,0x463707f8,0xa1d0,0x4585,0x87,0x6d,0x83,0xaa,0x6d,0x60,0xb8,0x9e); -DEFINE_GUID(ff_DXVA2_ModeVP9_VLD_10bit_Profile2,0xa4c749ef,0x6ecf,0x48aa,0x84,0x48,0x50,0xa7,0xa1,0x16,0x5f,0xf7); -DEFINE_GUID(ff_DXVA2_ModeAV1_VLD_Profile0,0xb8be4ccb,0xcf53,0x46ba,0x8d,0x59,0xd6,0xb8,0xa6,0xda,0x5d,0x2a); -DEFINE_GUID(ff_DXVA2_NoEncrypt, 0x1b81beD0, 0xa0c7,0x11d3,0xb9,0x84,0x00,0xc0,0x4f,0x2e,0x73,0xc5); -DEFINE_GUID(ff_GUID_NULL, 0x00000000, 0x0000,0x0000,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00); -DEFINE_GUID(ff_IID_IDirectXVideoDecoderService, 0xfc51a551,0xd5e7,0x11d9,0xaf,0x55,0x00,0x05,0x4e,0x43,0xff,0x02); - -typedef struct dxva_mode { - const GUID *guid; - enum AVCodecID codec; - // List of supported profiles, terminated by a FF_PROFILE_UNKNOWN entry. - // If NULL, don't check profile. 
- const int *profiles; -} dxva_mode; - -static const int prof_mpeg2_main[] = {FF_PROFILE_MPEG2_SIMPLE, - FF_PROFILE_MPEG2_MAIN, - FF_PROFILE_UNKNOWN}; -static const int prof_h264_high[] = {FF_PROFILE_H264_CONSTRAINED_BASELINE, - FF_PROFILE_H264_MAIN, - FF_PROFILE_H264_HIGH, - FF_PROFILE_UNKNOWN}; -static const int prof_hevc_main[] = {FF_PROFILE_HEVC_MAIN, - FF_PROFILE_UNKNOWN}; -static const int prof_hevc_main10[] = {FF_PROFILE_HEVC_MAIN_10, - FF_PROFILE_UNKNOWN}; -static const int prof_vp9_profile0[] = {FF_PROFILE_VP9_0, - FF_PROFILE_UNKNOWN}; -static const int prof_vp9_profile2[] = {FF_PROFILE_VP9_2, - FF_PROFILE_UNKNOWN}; -static const int prof_av1_profile0[] = {FF_PROFILE_AV1_MAIN, - FF_PROFILE_UNKNOWN}; - -static const dxva_mode dxva_modes[] = { - /* MPEG-2 */ - { &ff_DXVA2_ModeMPEG2_VLD, AV_CODEC_ID_MPEG2VIDEO, prof_mpeg2_main }, - { &ff_DXVA2_ModeMPEG2and1_VLD, AV_CODEC_ID_MPEG2VIDEO, prof_mpeg2_main }, - - /* H.264 */ - { &ff_DXVA2_ModeH264_F, AV_CODEC_ID_H264, prof_h264_high }, - { &ff_DXVA2_ModeH264_E, AV_CODEC_ID_H264, prof_h264_high }, - /* Intel specific H.264 mode */ - { &ff_DXVADDI_Intel_ModeH264_E, AV_CODEC_ID_H264, prof_h264_high }, - - /* VC-1 / WMV3 */ - { &ff_DXVA2_ModeVC1_D2010, AV_CODEC_ID_VC1 }, - { &ff_DXVA2_ModeVC1_D2010, AV_CODEC_ID_WMV3 }, - { &ff_DXVA2_ModeVC1_D, AV_CODEC_ID_VC1 }, - { &ff_DXVA2_ModeVC1_D, AV_CODEC_ID_WMV3 }, - - /* HEVC/H.265 */ - { &ff_DXVA2_ModeHEVC_VLD_Main10, AV_CODEC_ID_HEVC, prof_hevc_main10 }, - { &ff_DXVA2_ModeHEVC_VLD_Main, AV_CODEC_ID_HEVC, prof_hevc_main }, - - /* VP8/9 */ - { &ff_DXVA2_ModeVP9_VLD_Profile0, AV_CODEC_ID_VP9, prof_vp9_profile0 }, - { &ff_DXVA2_ModeVP9_VLD_10bit_Profile2, AV_CODEC_ID_VP9, prof_vp9_profile2 }, - - /* AV1 */ - { &ff_DXVA2_ModeAV1_VLD_Profile0, AV_CODEC_ID_AV1, prof_av1_profile0 }, - - { NULL, 0 }, -}; - -static int dxva_get_decoder_configuration(AVCodecContext *avctx, - const void *cfg_list, - unsigned cfg_count) -{ - FFDXVASharedContext *sctx = DXVA_SHARED_CONTEXT(avctx); - unsigned i, best_score = 0; - int best_cfg = -1; - - for (i = 0; i < cfg_count; i++) { - unsigned score; - UINT ConfigBitstreamRaw; - GUID guidConfigBitstreamEncryption; - -#if CONFIG_D3D11VA - if (sctx->pix_fmt == AV_PIX_FMT_D3D11) { - D3D11_VIDEO_DECODER_CONFIG *cfg = &((D3D11_VIDEO_DECODER_CONFIG *)cfg_list)[i]; - ConfigBitstreamRaw = cfg->ConfigBitstreamRaw; - guidConfigBitstreamEncryption = cfg->guidConfigBitstreamEncryption; - } -#endif -#if CONFIG_DXVA2 - if (sctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD) { - DXVA2_ConfigPictureDecode *cfg = &((DXVA2_ConfigPictureDecode *)cfg_list)[i]; - ConfigBitstreamRaw = cfg->ConfigBitstreamRaw; - guidConfigBitstreamEncryption = cfg->guidConfigBitstreamEncryption; - } -#endif - - if (ConfigBitstreamRaw == 1) - score = 1; - else if (avctx->codec_id == AV_CODEC_ID_H264 && ConfigBitstreamRaw == 2) - score = 2; - else - continue; - if (IsEqualGUID(&guidConfigBitstreamEncryption, &ff_DXVA2_NoEncrypt)) - score += 16; - if (score > best_score) { - best_score = score; - best_cfg = i; - } - } - - if (!best_score) { - av_log(avctx, AV_LOG_VERBOSE, "No valid decoder configuration available\n"); - return AVERROR(EINVAL); - } - - return best_cfg; -} - -#if CONFIG_D3D11VA -static int d3d11va_validate_output(void *service, GUID guid, const void *surface_format) -{ - HRESULT hr; - BOOL is_supported = FALSE; - hr = ID3D11VideoDevice_CheckVideoDecoderFormat((ID3D11VideoDevice *)service, - &guid, - *(DXGI_FORMAT *)surface_format, - &is_supported); - return SUCCEEDED(hr) && is_supported; -} -#endif - -#if 
CONFIG_DXVA2 -static int dxva2_validate_output(void *decoder_service, GUID guid, const void *surface_format) -{ - HRESULT hr; - int ret = 0; - unsigned j, target_count; - D3DFORMAT *target_list; - hr = IDirectXVideoDecoderService_GetDecoderRenderTargets((IDirectXVideoDecoderService *)decoder_service, &guid, &target_count, &target_list); - if (SUCCEEDED(hr)) { - for (j = 0; j < target_count; j++) { - const D3DFORMAT format = target_list[j]; - if (format == *(D3DFORMAT *)surface_format) { - ret = 1; - break; - } - } - CoTaskMemFree(target_list); - } - return ret; -} -#endif - -static int dxva_check_codec_compatibility(AVCodecContext *avctx, const dxva_mode *mode) -{ - if (mode->codec != avctx->codec_id) - return 0; - - if (mode->profiles && !(avctx->hwaccel_flags & AV_HWACCEL_FLAG_ALLOW_PROFILE_MISMATCH)) { - int i, found = 0; - for (i = 0; mode->profiles[i] != FF_PROFILE_UNKNOWN; i++) { - if (avctx->profile == mode->profiles[i]) { - found = 1; - break; - } - } - if (!found) - return 0; - } - - return 1; -} - -static void dxva_list_guids_debug(AVCodecContext *avctx, void *service, - unsigned guid_count, const GUID *guid_list) -{ - FFDXVASharedContext *sctx = DXVA_SHARED_CONTEXT(avctx); - int i; - - av_log(avctx, AV_LOG_VERBOSE, "Decoder GUIDs reported as supported:\n"); - - for (i = 0; i < guid_count; i++) { - const GUID *guid = &guid_list[i]; - - av_log(avctx, AV_LOG_VERBOSE, - "{%8.8x-%4.4x-%4.4x-%2.2x%2.2x-%2.2x%2.2x%2.2x%2.2x%2.2x%2.2x}", - (unsigned) guid->Data1, guid->Data2, guid->Data3, - guid->Data4[0], guid->Data4[1], - guid->Data4[2], guid->Data4[3], - guid->Data4[4], guid->Data4[5], - guid->Data4[6], guid->Data4[7]); - -#if CONFIG_D3D11VA - if (sctx->pix_fmt == AV_PIX_FMT_D3D11) { - DXGI_FORMAT format; - // We don't know the maximum valid DXGI_FORMAT, so use 200 as - // arbitrary upper bound (that could become outdated). 
- for (format = 0; format < 200; format++) { - if (d3d11va_validate_output(service, *guid, &format)) - av_log(avctx, AV_LOG_VERBOSE, " %d", (int)format); - } - } -#endif -#if CONFIG_DXVA2 - if (sctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD) { - const D3DFORMAT formats[] = {MKTAG('N', 'V', '1', '2'), - MKTAG('P', '0', '1', '0')}; - int i; - for (i = 0; i < FF_ARRAY_ELEMS(formats); i++) { - if (dxva2_validate_output(service, *guid, &formats[i])) - av_log(avctx, AV_LOG_VERBOSE, " %d", i); - } - } -#endif - av_log(avctx, AV_LOG_VERBOSE, "\n"); - } -} - -static int dxva_get_decoder_guid(AVCodecContext *avctx, void *service, void *surface_format, - unsigned guid_count, const GUID *guid_list, GUID *decoder_guid) -{ - FFDXVASharedContext *sctx = DXVA_SHARED_CONTEXT(avctx); - unsigned i, j; - - dxva_list_guids_debug(avctx, service, guid_count, guid_list); - - *decoder_guid = ff_GUID_NULL; - for (i = 0; dxva_modes[i].guid; i++) { - const dxva_mode *mode = &dxva_modes[i]; - int validate; - if (!dxva_check_codec_compatibility(avctx, mode)) - continue; - - for (j = 0; j < guid_count; j++) { - if (IsEqualGUID(mode->guid, &guid_list[j])) - break; - } - if (j == guid_count) - continue; - -#if CONFIG_D3D11VA - if (sctx->pix_fmt == AV_PIX_FMT_D3D11) - validate = d3d11va_validate_output(service, *mode->guid, surface_format); -#endif -#if CONFIG_DXVA2 - if (sctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD) - validate = dxva2_validate_output(service, *mode->guid, surface_format); -#endif - if (validate) { - *decoder_guid = *mode->guid; - break; - } - } - - if (IsEqualGUID(decoder_guid, &ff_GUID_NULL)) { - av_log(avctx, AV_LOG_VERBOSE, "No decoder device for codec found\n"); - return AVERROR(EINVAL); - } - - if (IsEqualGUID(decoder_guid, &ff_DXVADDI_Intel_ModeH264_E)) - sctx->workaround |= FF_DXVA2_WORKAROUND_INTEL_CLEARVIDEO; - - return 0; -} - -static void bufref_free_interface(void *opaque, uint8_t *data) -{ - IUnknown_Release((IUnknown *)opaque); -} - -static AVBufferRef *bufref_wrap_interface(IUnknown *iface) -{ - return av_buffer_create((uint8_t*)iface, 1, bufref_free_interface, iface, 0); -} - -#if CONFIG_DXVA2 - -static int dxva2_get_decoder_configuration(AVCodecContext *avctx, const GUID *device_guid, - const DXVA2_VideoDesc *desc, - DXVA2_ConfigPictureDecode *config) -{ - FFDXVASharedContext *sctx = DXVA_SHARED_CONTEXT(avctx); - unsigned cfg_count; - DXVA2_ConfigPictureDecode *cfg_list; - HRESULT hr; - int ret; - - hr = IDirectXVideoDecoderService_GetDecoderConfigurations(sctx->dxva2_service, device_guid, desc, NULL, &cfg_count, &cfg_list); - if (FAILED(hr)) { - av_log(avctx, AV_LOG_ERROR, "Unable to retrieve decoder configurations\n"); - return AVERROR(EINVAL); - } - - ret = dxva_get_decoder_configuration(avctx, cfg_list, cfg_count); - if (ret >= 0) - *config = cfg_list[ret]; - CoTaskMemFree(cfg_list); - return ret; -} - -static int dxva2_create_decoder(AVCodecContext *avctx) -{ - FFDXVASharedContext *sctx = DXVA_SHARED_CONTEXT(avctx); - GUID *guid_list; - unsigned guid_count; - GUID device_guid; - D3DFORMAT surface_format = avctx->sw_pix_fmt == AV_PIX_FMT_YUV420P10 ? 
- MKTAG('P', '0', '1', '0') : MKTAG('N', 'V', '1', '2'); - DXVA2_VideoDesc desc = { 0 }; - DXVA2_ConfigPictureDecode config; - HRESULT hr; - int ret; - HANDLE device_handle; - AVHWFramesContext *frames_ctx = (AVHWFramesContext*)avctx->hw_frames_ctx->data; - AVDXVA2FramesContext *frames_hwctx = frames_ctx->hwctx; - AVDXVA2DeviceContext *device_hwctx = frames_ctx->device_ctx->hwctx; - - hr = IDirect3DDeviceManager9_OpenDeviceHandle(device_hwctx->devmgr, - &device_handle); - if (FAILED(hr)) { - av_log(avctx, AV_LOG_ERROR, "Failed to open a device handle\n"); - goto fail; - } - - hr = IDirect3DDeviceManager9_GetVideoService(device_hwctx->devmgr, device_handle, - &ff_IID_IDirectXVideoDecoderService, - (void **)&sctx->dxva2_service); - IDirect3DDeviceManager9_CloseDeviceHandle(device_hwctx->devmgr, device_handle); - if (FAILED(hr)) { - av_log(avctx, AV_LOG_ERROR, "Failed to create IDirectXVideoDecoderService\n"); - goto fail; - } - - hr = IDirectXVideoDecoderService_GetDecoderDeviceGuids(sctx->dxva2_service, &guid_count, &guid_list); - if (FAILED(hr)) { - av_log(avctx, AV_LOG_ERROR, "Failed to retrieve decoder device GUIDs\n"); - goto fail; - } - - ret = dxva_get_decoder_guid(avctx, sctx->dxva2_service, &surface_format, - guid_count, guid_list, &device_guid); - CoTaskMemFree(guid_list); - if (ret < 0) { - goto fail; - } - - desc.SampleWidth = avctx->coded_width; - desc.SampleHeight = avctx->coded_height; - desc.Format = surface_format; - - ret = dxva2_get_decoder_configuration(avctx, &device_guid, &desc, &config); - if (ret < 0) { - goto fail; - } - - hr = IDirectXVideoDecoderService_CreateVideoDecoder(sctx->dxva2_service, &device_guid, - &desc, &config, frames_hwctx->surfaces, - frames_hwctx->nb_surfaces, &sctx->dxva2_decoder); - if (FAILED(hr)) { - av_log(avctx, AV_LOG_ERROR, "Failed to create DXVA2 video decoder\n"); - goto fail; - } - - sctx->dxva2_config = config; - - sctx->decoder_ref = bufref_wrap_interface((IUnknown *)sctx->dxva2_decoder); - if (!sctx->decoder_ref) - return AVERROR(ENOMEM); - - return 0; -fail: - return AVERROR(EINVAL); -} - -#endif - -#if CONFIG_D3D11VA - -static int d3d11va_get_decoder_configuration(AVCodecContext *avctx, - ID3D11VideoDevice *video_device, - const D3D11_VIDEO_DECODER_DESC *desc, - D3D11_VIDEO_DECODER_CONFIG *config) -{ - unsigned cfg_count = 0; - D3D11_VIDEO_DECODER_CONFIG *cfg_list = NULL; - HRESULT hr; - int i, ret; - - hr = ID3D11VideoDevice_GetVideoDecoderConfigCount(video_device, desc, &cfg_count); - if (FAILED(hr)) { - av_log(avctx, AV_LOG_ERROR, "Unable to retrieve decoder configurations\n"); - return AVERROR(EINVAL); - } - - cfg_list = av_malloc_array(cfg_count, sizeof(D3D11_VIDEO_DECODER_CONFIG)); - if (cfg_list == NULL) - return AVERROR(ENOMEM); - for (i = 0; i < cfg_count; i++) { - hr = ID3D11VideoDevice_GetVideoDecoderConfig(video_device, desc, i, &cfg_list[i]); - if (FAILED(hr)) { - av_log(avctx, AV_LOG_ERROR, "Unable to retrieve decoder configurations. 
(hr=0x%lX)\n", hr); - av_free(cfg_list); - return AVERROR(EINVAL); - } - } - - ret = dxva_get_decoder_configuration(avctx, cfg_list, cfg_count); - if (ret >= 0) - *config = cfg_list[ret]; - av_free(cfg_list); - return ret; -} - -static DXGI_FORMAT d3d11va_map_sw_to_hw_format(enum AVPixelFormat pix_fmt) -{ - switch (pix_fmt) { - case AV_PIX_FMT_NV12: return DXGI_FORMAT_NV12; - case AV_PIX_FMT_P010: return DXGI_FORMAT_P010; - case AV_PIX_FMT_YUV420P: return DXGI_FORMAT_420_OPAQUE; - default: return DXGI_FORMAT_UNKNOWN; - } -} - -static int d3d11va_create_decoder(AVCodecContext *avctx) -{ - FFDXVASharedContext *sctx = DXVA_SHARED_CONTEXT(avctx); - GUID *guid_list; - unsigned guid_count, i; - GUID decoder_guid; - D3D11_VIDEO_DECODER_DESC desc = { 0 }; - D3D11_VIDEO_DECODER_CONFIG config; - AVHWFramesContext *frames_ctx = (AVHWFramesContext *)avctx->hw_frames_ctx->data; - AVD3D11VADeviceContext *device_hwctx = frames_ctx->device_ctx->hwctx; - AVD3D11VAFramesContext *frames_hwctx = frames_ctx->hwctx; - DXGI_FORMAT surface_format = d3d11va_map_sw_to_hw_format(frames_ctx->sw_format); - D3D11_TEXTURE2D_DESC texdesc; - HRESULT hr; - int ret; - - if (!frames_hwctx->texture) { - av_log(avctx, AV_LOG_ERROR, "AVD3D11VAFramesContext.texture not set.\n"); - return AVERROR(EINVAL); - } - ID3D11Texture2D_GetDesc(frames_hwctx->texture, &texdesc); - - guid_count = ID3D11VideoDevice_GetVideoDecoderProfileCount(device_hwctx->video_device); - guid_list = av_malloc_array(guid_count, sizeof(*guid_list)); - if (guid_list == NULL || guid_count == 0) { - av_log(avctx, AV_LOG_ERROR, "Failed to get the decoder GUIDs\n"); - av_free(guid_list); - return AVERROR(EINVAL); - } - for (i = 0; i < guid_count; i++) { - hr = ID3D11VideoDevice_GetVideoDecoderProfile(device_hwctx->video_device, i, &guid_list[i]); - if (FAILED(hr)) { - av_log(avctx, AV_LOG_ERROR, "Failed to retrieve decoder GUID %d\n", i); - av_free(guid_list); - return AVERROR(EINVAL); - } - } - - ret = dxva_get_decoder_guid(avctx, device_hwctx->video_device, &surface_format, - guid_count, guid_list, &decoder_guid); - av_free(guid_list); - if (ret < 0) - return AVERROR(EINVAL); - - desc.SampleWidth = avctx->coded_width; - desc.SampleHeight = avctx->coded_height; - desc.OutputFormat = surface_format; - desc.Guid = decoder_guid; - - ret = d3d11va_get_decoder_configuration(avctx, device_hwctx->video_device, &desc, &config); - if (ret < 0) - return AVERROR(EINVAL); - - sctx->d3d11_views = av_calloc(texdesc.ArraySize, sizeof(sctx->d3d11_views[0])); - if (!sctx->d3d11_views) - return AVERROR(ENOMEM); - sctx->nb_d3d11_views = texdesc.ArraySize; - - for (i = 0; i < sctx->nb_d3d11_views; i++) { - D3D11_VIDEO_DECODER_OUTPUT_VIEW_DESC viewDesc = { - .DecodeProfile = decoder_guid, - .ViewDimension = D3D11_VDOV_DIMENSION_TEXTURE2D, - .Texture2D = { - .ArraySlice = i, - } - }; - hr = ID3D11VideoDevice_CreateVideoDecoderOutputView(device_hwctx->video_device, - (ID3D11Resource*) frames_hwctx->texture, - &viewDesc, - (ID3D11VideoDecoderOutputView**) &sctx->d3d11_views[i]); - if (FAILED(hr)) { - av_log(avctx, AV_LOG_ERROR, "Could not create the decoder output view %d\n", i); - return AVERROR_UNKNOWN; - } - } - - hr = ID3D11VideoDevice_CreateVideoDecoder(device_hwctx->video_device, &desc, - &config, &sctx->d3d11_decoder); - if (FAILED(hr)) { - av_log(avctx, AV_LOG_ERROR, "Failed to create D3D11VA video decoder\n"); - return AVERROR(EINVAL); - } - - sctx->d3d11_config = config; - sctx->d3d11_texture = frames_hwctx->texture; - - sctx->decoder_ref = bufref_wrap_interface((IUnknown 
*)sctx->d3d11_decoder); - if (!sctx->decoder_ref) - return AVERROR(ENOMEM); - - return 0; -} - -#endif - -static void ff_dxva2_lock(AVCodecContext *avctx) -{ -#if CONFIG_D3D11VA - if (ff_dxva2_is_d3d11(avctx)) { - FFDXVASharedContext *sctx = DXVA_SHARED_CONTEXT(avctx); - AVDXVAContext *ctx = DXVA_CONTEXT(avctx); - if (D3D11VA_CONTEXT(ctx)->context_mutex != INVALID_HANDLE_VALUE) - WaitForSingleObjectEx(D3D11VA_CONTEXT(ctx)->context_mutex, INFINITE, FALSE); - if (sctx->device_ctx) { - AVD3D11VADeviceContext *hwctx = sctx->device_ctx->hwctx; - hwctx->lock(hwctx->lock_ctx); - } - } -#endif -} - -static void ff_dxva2_unlock(AVCodecContext *avctx) -{ -#if CONFIG_D3D11VA - if (ff_dxva2_is_d3d11(avctx)) { - FFDXVASharedContext *sctx = DXVA_SHARED_CONTEXT(avctx); - AVDXVAContext *ctx = DXVA_CONTEXT(avctx); - if (D3D11VA_CONTEXT(ctx)->context_mutex != INVALID_HANDLE_VALUE) - ReleaseMutex(D3D11VA_CONTEXT(ctx)->context_mutex); - if (sctx->device_ctx) { - AVD3D11VADeviceContext *hwctx = sctx->device_ctx->hwctx; - hwctx->unlock(hwctx->lock_ctx); - } - } -#endif -} - -int ff_dxva2_common_frame_params(AVCodecContext *avctx, - AVBufferRef *hw_frames_ctx) -{ - AVHWFramesContext *frames_ctx = (AVHWFramesContext *)hw_frames_ctx->data; - AVHWDeviceContext *device_ctx = frames_ctx->device_ctx; - int surface_alignment, num_surfaces; - - if (device_ctx->type == AV_HWDEVICE_TYPE_DXVA2) { - frames_ctx->format = AV_PIX_FMT_DXVA2_VLD; - } else if (device_ctx->type == AV_HWDEVICE_TYPE_D3D11VA) { - frames_ctx->format = AV_PIX_FMT_D3D11; - } else { - return AVERROR(EINVAL); - } - - /* decoding MPEG-2 requires additional alignment on some Intel GPUs, - but it causes issues for H.264 on certain AMD GPUs..... */ - if (avctx->codec_id == AV_CODEC_ID_MPEG2VIDEO) - surface_alignment = 32; - /* the HEVC DXVA2 spec asks for 128 pixel aligned surfaces to ensure - all coding features have enough room to work with */ - else if (avctx->codec_id == AV_CODEC_ID_HEVC || avctx->codec_id == AV_CODEC_ID_AV1) - surface_alignment = 128; - else - surface_alignment = 16; - - /* 1 base work surface */ - num_surfaces = 1; - - /* add surfaces based on number of possible refs */ - if (avctx->codec_id == AV_CODEC_ID_H264 || avctx->codec_id == AV_CODEC_ID_HEVC) - num_surfaces += 16; - else if (avctx->codec_id == AV_CODEC_ID_VP9 || avctx->codec_id == AV_CODEC_ID_AV1) - num_surfaces += 8; - else - num_surfaces += 2; - - frames_ctx->sw_format = avctx->sw_pix_fmt == AV_PIX_FMT_YUV420P10 ? - AV_PIX_FMT_P010 : AV_PIX_FMT_NV12; - frames_ctx->width = FFALIGN(avctx->coded_width, surface_alignment); - frames_ctx->height = FFALIGN(avctx->coded_height, surface_alignment); - frames_ctx->initial_pool_size = num_surfaces; - - -#if CONFIG_DXVA2 - if (frames_ctx->format == AV_PIX_FMT_DXVA2_VLD) { - AVDXVA2FramesContext *frames_hwctx = frames_ctx->hwctx; - - frames_hwctx->surface_type = DXVA2_VideoDecoderRenderTarget; - } -#endif - -#if CONFIG_D3D11VA - if (frames_ctx->format == AV_PIX_FMT_D3D11) { - AVD3D11VAFramesContext *frames_hwctx = frames_ctx->hwctx; - - frames_hwctx->BindFlags |= D3D11_BIND_DECODER; - } -#endif - - return 0; -} - -int ff_dxva2_decode_init(AVCodecContext *avctx) -{ - FFDXVASharedContext *sctx = DXVA_SHARED_CONTEXT(avctx); - AVHWFramesContext *frames_ctx; - enum AVHWDeviceType dev_type = avctx->hwaccel->pix_fmt == AV_PIX_FMT_DXVA2_VLD - ? AV_HWDEVICE_TYPE_DXVA2 : AV_HWDEVICE_TYPE_D3D11VA; - int ret = 0; - - // Old API. 
- if (avctx->hwaccel_context) - return 0; - - // (avctx->pix_fmt is not updated yet at this point) - sctx->pix_fmt = avctx->hwaccel->pix_fmt; - - ret = ff_decode_get_hw_frames_ctx(avctx, dev_type); - if (ret < 0) - return ret; - - frames_ctx = (AVHWFramesContext*)avctx->hw_frames_ctx->data; - sctx->device_ctx = frames_ctx->device_ctx; - - if (frames_ctx->format != sctx->pix_fmt) { - av_log(avctx, AV_LOG_ERROR, "Invalid pixfmt for hwaccel!\n"); - ret = AVERROR(EINVAL); - goto fail; - } - -#if CONFIG_D3D11VA - if (sctx->pix_fmt == AV_PIX_FMT_D3D11) { - AVD3D11VADeviceContext *device_hwctx = frames_ctx->device_ctx->hwctx; - AVD3D11VAContext *d3d11_ctx = &sctx->ctx.d3d11va; - - ff_dxva2_lock(avctx); - ret = d3d11va_create_decoder(avctx); - ff_dxva2_unlock(avctx); - if (ret < 0) - goto fail; - - d3d11_ctx->decoder = sctx->d3d11_decoder; - d3d11_ctx->video_context = device_hwctx->video_context; - d3d11_ctx->cfg = &sctx->d3d11_config; - d3d11_ctx->surface_count = sctx->nb_d3d11_views; - d3d11_ctx->surface = sctx->d3d11_views; - d3d11_ctx->workaround = sctx->workaround; - d3d11_ctx->context_mutex = INVALID_HANDLE_VALUE; - } -#endif - -#if CONFIG_DXVA2 - if (sctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD) { - AVDXVA2FramesContext *frames_hwctx = frames_ctx->hwctx; - struct dxva_context *dxva_ctx = &sctx->ctx.dxva2; - - ff_dxva2_lock(avctx); - ret = dxva2_create_decoder(avctx); - ff_dxva2_unlock(avctx); - if (ret < 0) - goto fail; - - dxva_ctx->decoder = sctx->dxva2_decoder; - dxva_ctx->cfg = &sctx->dxva2_config; - dxva_ctx->surface = frames_hwctx->surfaces; - dxva_ctx->surface_count = frames_hwctx->nb_surfaces; - dxva_ctx->workaround = sctx->workaround; - } -#endif - - return 0; - -fail: - ff_dxva2_decode_uninit(avctx); - return ret; -} - -int ff_dxva2_decode_uninit(AVCodecContext *avctx) -{ - FFDXVASharedContext *sctx = DXVA_SHARED_CONTEXT(avctx); - int i; - - av_buffer_unref(&sctx->decoder_ref); - -#if CONFIG_D3D11VA - for (i = 0; i < sctx->nb_d3d11_views; i++) { - if (sctx->d3d11_views[i]) - ID3D11VideoDecoderOutputView_Release(sctx->d3d11_views[i]); - } - av_freep(&sctx->d3d11_views); -#endif - -#if CONFIG_DXVA2 - if (sctx->dxva2_service) - IDirectXVideoDecoderService_Release(sctx->dxva2_service); -#endif - - return 0; -} - -static void *get_surface(const AVCodecContext *avctx, const AVFrame *frame) -{ -#if CONFIG_D3D11VA - if (frame->format == AV_PIX_FMT_D3D11) { - FFDXVASharedContext *sctx = DXVA_SHARED_CONTEXT(avctx); - intptr_t index = (intptr_t)frame->data[1]; - if (index < 0 || index >= sctx->nb_d3d11_views || - sctx->d3d11_texture != (ID3D11Texture2D *)frame->data[0]) { - av_log((void *)avctx, AV_LOG_ERROR, "get_buffer frame is invalid!\n"); - return NULL; - } - return sctx->d3d11_views[index]; - } -#endif - return frame->data[3]; -} - -unsigned ff_dxva2_get_surface_index(const AVCodecContext *avctx, - const AVDXVAContext *ctx, - const AVFrame *frame) -{ - void *surface = get_surface(avctx, frame); - unsigned i; - -#if CONFIG_D3D11VA - if (avctx->pix_fmt == AV_PIX_FMT_D3D11) - return (intptr_t)frame->data[1]; - if (avctx->pix_fmt == AV_PIX_FMT_D3D11VA_VLD) { - D3D11_VIDEO_DECODER_OUTPUT_VIEW_DESC viewDesc; - ID3D11VideoDecoderOutputView_GetDesc((ID3D11VideoDecoderOutputView*) surface, &viewDesc); - return viewDesc.Texture2D.ArraySlice; - } -#endif -#if CONFIG_DXVA2 - for (i = 0; i < DXVA_CONTEXT_COUNT(avctx, ctx); i++) { - if (avctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD && ctx->dxva2.surface[i] == surface) - return i; - } -#endif - - assert(0); - return 0; -} - -int 
ff_dxva2_commit_buffer(AVCodecContext *avctx, - AVDXVAContext *ctx, - DECODER_BUFFER_DESC *dsc, - unsigned type, const void *data, unsigned size, - unsigned mb_count) -{ - void *dxva_data; - unsigned dxva_size; - int result; - HRESULT hr = 0; - -#if CONFIG_D3D11VA - if (ff_dxva2_is_d3d11(avctx)) - hr = ID3D11VideoContext_GetDecoderBuffer(D3D11VA_CONTEXT(ctx)->video_context, - D3D11VA_CONTEXT(ctx)->decoder, - type, - &dxva_size, &dxva_data); -#endif -#if CONFIG_DXVA2 - if (avctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD) - hr = IDirectXVideoDecoder_GetBuffer(DXVA2_CONTEXT(ctx)->decoder, type, - &dxva_data, &dxva_size); -#endif - if (FAILED(hr)) { - av_log(avctx, AV_LOG_ERROR, "Failed to get a buffer for %u: 0x%x\n", - type, (unsigned)hr); - return -1; - } - if (size <= dxva_size) { - memcpy(dxva_data, data, size); - -#if CONFIG_D3D11VA - if (ff_dxva2_is_d3d11(avctx)) { - D3D11_VIDEO_DECODER_BUFFER_DESC *dsc11 = dsc; - memset(dsc11, 0, sizeof(*dsc11)); - dsc11->BufferType = type; - dsc11->DataSize = size; - dsc11->NumMBsInBuffer = mb_count; - } -#endif -#if CONFIG_DXVA2 - if (avctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD) { - DXVA2_DecodeBufferDesc *dsc2 = dsc; - memset(dsc2, 0, sizeof(*dsc2)); - dsc2->CompressedBufferType = type; - dsc2->DataSize = size; - dsc2->NumMBsInBuffer = mb_count; - } -#endif - - result = 0; - } else { - av_log(avctx, AV_LOG_ERROR, "Buffer for type %u was too small\n", type); - result = -1; - } - -#if CONFIG_D3D11VA - if (ff_dxva2_is_d3d11(avctx)) - hr = ID3D11VideoContext_ReleaseDecoderBuffer(D3D11VA_CONTEXT(ctx)->video_context, D3D11VA_CONTEXT(ctx)->decoder, type); -#endif -#if CONFIG_DXVA2 - if (avctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD) - hr = IDirectXVideoDecoder_ReleaseBuffer(DXVA2_CONTEXT(ctx)->decoder, type); -#endif - if (FAILED(hr)) { - av_log(avctx, AV_LOG_ERROR, - "Failed to release buffer type %u: 0x%x\n", - type, (unsigned)hr); - result = -1; - } - return result; -} - -static int frame_add_buf(AVFrame *frame, AVBufferRef *ref) -{ - int i; - - for (i = 0; i < AV_NUM_DATA_POINTERS; i++) { - if (!frame->buf[i]) { - frame->buf[i] = av_buffer_ref(ref); - return frame->buf[i] ? 0 : AVERROR(ENOMEM); - } - } - - // For now we expect that the caller does not use more than - // AV_NUM_DATA_POINTERS-1 buffers if the user uses a custom pool. 
- return AVERROR(EINVAL); -} - -int ff_dxva2_common_end_frame(AVCodecContext *avctx, AVFrame *frame, - const void *pp, unsigned pp_size, - const void *qm, unsigned qm_size, - int (*commit_bs_si)(AVCodecContext *, - DECODER_BUFFER_DESC *bs, - DECODER_BUFFER_DESC *slice)) -{ - AVDXVAContext *ctx = DXVA_CONTEXT(avctx); - unsigned buffer_count = 0; -#if CONFIG_D3D11VA - D3D11_VIDEO_DECODER_BUFFER_DESC buffer11[4]; -#endif -#if CONFIG_DXVA2 - DXVA2_DecodeBufferDesc buffer2[4]; -#endif - DECODER_BUFFER_DESC *buffer = NULL, *buffer_slice = NULL; - int result, runs = 0; - HRESULT hr; - unsigned type; - FFDXVASharedContext *sctx = DXVA_SHARED_CONTEXT(avctx); - - if (sctx->decoder_ref) { - result = frame_add_buf(frame, sctx->decoder_ref); - if (result < 0) - return result; - } - - do { - ff_dxva2_lock(avctx); -#if CONFIG_D3D11VA - if (ff_dxva2_is_d3d11(avctx)) - hr = ID3D11VideoContext_DecoderBeginFrame(D3D11VA_CONTEXT(ctx)->video_context, D3D11VA_CONTEXT(ctx)->decoder, - get_surface(avctx, frame), - 0, NULL); -#endif -#if CONFIG_DXVA2 - if (avctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD) - hr = IDirectXVideoDecoder_BeginFrame(DXVA2_CONTEXT(ctx)->decoder, - get_surface(avctx, frame), - NULL); -#endif - if (hr != E_PENDING || ++runs > 50) - break; - ff_dxva2_unlock(avctx); - av_usleep(2000); - } while(1); - - if (FAILED(hr)) { - av_log(avctx, AV_LOG_ERROR, "Failed to begin frame: 0x%x\n", (unsigned)hr); - ff_dxva2_unlock(avctx); - return -1; - } - -#if CONFIG_D3D11VA - if (ff_dxva2_is_d3d11(avctx)) { - buffer = &buffer11[buffer_count]; - type = D3D11_VIDEO_DECODER_BUFFER_PICTURE_PARAMETERS; - } -#endif -#if CONFIG_DXVA2 - if (avctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD) { - buffer = &buffer2[buffer_count]; - type = DXVA2_PictureParametersBufferType; - } -#endif - result = ff_dxva2_commit_buffer(avctx, ctx, buffer, - type, - pp, pp_size, 0); - if (result) { - av_log(avctx, AV_LOG_ERROR, - "Failed to add picture parameter buffer\n"); - goto end; - } - buffer_count++; - - if (qm_size > 0) { -#if CONFIG_D3D11VA - if (ff_dxva2_is_d3d11(avctx)) { - buffer = &buffer11[buffer_count]; - type = D3D11_VIDEO_DECODER_BUFFER_INVERSE_QUANTIZATION_MATRIX; - } -#endif -#if CONFIG_DXVA2 - if (avctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD) { - buffer = &buffer2[buffer_count]; - type = DXVA2_InverseQuantizationMatrixBufferType; - } -#endif - result = ff_dxva2_commit_buffer(avctx, ctx, buffer, - type, - qm, qm_size, 0); - if (result) { - av_log(avctx, AV_LOG_ERROR, - "Failed to add inverse quantization matrix buffer\n"); - goto end; - } - buffer_count++; - } - -#if CONFIG_D3D11VA - if (ff_dxva2_is_d3d11(avctx)) { - buffer = &buffer11[buffer_count + 0]; - buffer_slice = &buffer11[buffer_count + 1]; - } -#endif -#if CONFIG_DXVA2 - if (avctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD) { - buffer = &buffer2[buffer_count + 0]; - buffer_slice = &buffer2[buffer_count + 1]; - } -#endif - - result = commit_bs_si(avctx, - buffer, - buffer_slice); - if (result) { - av_log(avctx, AV_LOG_ERROR, - "Failed to add bitstream or slice control buffer\n"); - goto end; - } - buffer_count += 2; - - /* TODO Film Grain when possible */ - - assert(buffer_count == 1 + (qm_size > 0) + 2); - -#if CONFIG_D3D11VA - if (ff_dxva2_is_d3d11(avctx)) - hr = ID3D11VideoContext_SubmitDecoderBuffers(D3D11VA_CONTEXT(ctx)->video_context, - D3D11VA_CONTEXT(ctx)->decoder, - buffer_count, buffer11); -#endif -#if CONFIG_DXVA2 - if (avctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD) { - DXVA2_DecodeExecuteParams exec = { - .NumCompBuffers = buffer_count, - .pCompressedBuffers = buffer2, - .pExtensionData = 
NULL, - }; - hr = IDirectXVideoDecoder_Execute(DXVA2_CONTEXT(ctx)->decoder, &exec); - } -#endif - if (FAILED(hr)) { - av_log(avctx, AV_LOG_ERROR, "Failed to execute: 0x%x\n", (unsigned)hr); - result = -1; - } - -end: -#if CONFIG_D3D11VA - if (ff_dxva2_is_d3d11(avctx)) - hr = ID3D11VideoContext_DecoderEndFrame(D3D11VA_CONTEXT(ctx)->video_context, D3D11VA_CONTEXT(ctx)->decoder); -#endif -#if CONFIG_DXVA2 - if (avctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD) - hr = IDirectXVideoDecoder_EndFrame(DXVA2_CONTEXT(ctx)->decoder, NULL); -#endif - ff_dxva2_unlock(avctx); - if (FAILED(hr)) { - av_log(avctx, AV_LOG_ERROR, "Failed to end frame: 0x%x\n", (unsigned)hr); - result = -1; - } - - return result; -} - -int ff_dxva2_is_d3d11(const AVCodecContext *avctx) -{ - if (CONFIG_D3D11VA) - return avctx->pix_fmt == AV_PIX_FMT_D3D11VA_VLD || - avctx->pix_fmt == AV_PIX_FMT_D3D11; - else - return 0; -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/escape130.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/escape130.c deleted file mode 100644 index 3b0460fd79a09e1ea31dcedb567f876441481063..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/escape130.c +++ /dev/null @@ -1,359 +0,0 @@ -/* - * Escape 130 video decoder - * Copyright (C) 2008 Eli Friedman (eli.friedman gmail.com) - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "libavutil/attributes.h" -#include "libavutil/mem.h" - -#define BITSTREAM_READER_LE -#include "avcodec.h" -#include "codec_internal.h" -#include "decode.h" -#include "get_bits.h" - -typedef struct Escape130Context { - uint8_t *old_y_avg; - - uint8_t *new_y, *old_y; - uint8_t *new_u, *old_u; - uint8_t *new_v, *old_v; - - uint8_t *buf1, *buf2; - int linesize[3]; -} Escape130Context; - -static const uint8_t offset_table[] = { 2, 4, 10, 20 }; -static const int8_t sign_table[64][4] = { - { 0, 0, 0, 0 }, - { -1, 1, 0, 0 }, - { 1, -1, 0, 0 }, - { -1, 0, 1, 0 }, - { -1, 1, 1, 0 }, - { 0, -1, 1, 0 }, - { 1, -1, 1, 0 }, - { -1, -1, 1, 0 }, - { 1, 0, -1, 0 }, - { 0, 1, -1, 0 }, - { 1, 1, -1, 0 }, - { -1, 1, -1, 0 }, - { 1, -1, -1, 0 }, - { -1, 0, 0, 1 }, - { -1, 1, 0, 1 }, - { 0, -1, 0, 1 }, - - { 0, 0, 0, 0 }, - { 1, -1, 0, 1 }, - { -1, -1, 0, 1 }, - { -1, 0, 1, 1 }, - { -1, 1, 1, 1 }, - { 0, -1, 1, 1 }, - { 1, -1, 1, 1 }, - { -1, -1, 1, 1 }, - { 0, 0, -1, 1 }, - { 1, 0, -1, 1 }, - { -1, 0, -1, 1 }, - { 0, 1, -1, 1 }, - { 1, 1, -1, 1 }, - { -1, 1, -1, 1 }, - { 0, -1, -1, 1 }, - { 1, -1, -1, 1 }, - - { 0, 0, 0, 0 }, - { -1, -1, -1, 1 }, - { 1, 0, 0, -1 }, - { 0, 1, 0, -1 }, - { 1, 1, 0, -1 }, - { -1, 1, 0, -1 }, - { 1, -1, 0, -1 }, - { 0, 0, 1, -1 }, - { 1, 0, 1, -1 }, - { -1, 0, 1, -1 }, - { 0, 1, 1, -1 }, - { 1, 1, 1, -1 }, - { -1, 1, 1, -1 }, - { 0, -1, 1, -1 }, - { 1, -1, 1, -1 }, - { -1, -1, 1, -1 }, - - { 0, 0, 0, 0 }, - { 1, 0, -1, -1 }, - { 0, 1, -1, -1 }, - { 1, 1, -1, -1 }, - { -1, 1, -1, -1 }, - { 1, -1, -1, -1 } -}; - -static const int8_t luma_adjust[] = { -4, -3, -2, -1, 1, 2, 3, 4 }; - -static const int8_t chroma_adjust[2][8] = { - { 1, 1, 0, -1, -1, -1, 0, 1 }, - { 0, 1, 1, 1, 0, -1, -1, -1 } -}; - -static const uint8_t chroma_vals[] = { - 20, 28, 36, 44, 52, 60, 68, 76, - 84, 92, 100, 106, 112, 116, 120, 124, - 128, 132, 136, 140, 144, 150, 156, 164, - 172, 180, 188, 196, 204, 212, 220, 228 -}; - -static av_cold int escape130_decode_init(AVCodecContext *avctx) -{ - Escape130Context *s = avctx->priv_data; - avctx->pix_fmt = AV_PIX_FMT_YUV420P; - - if ((avctx->width & 1) || (avctx->height & 1)) { - av_log(avctx, AV_LOG_ERROR, - "Dimensions should be a multiple of two.\n"); - return AVERROR_INVALIDDATA; - } - - s->old_y_avg = av_malloc(avctx->width * avctx->height / 4); - s->buf1 = av_malloc(avctx->width * avctx->height * 3 / 2); - s->buf2 = av_malloc(avctx->width * avctx->height * 3 / 2); - if (!s->old_y_avg || !s->buf1 || !s->buf2) { - av_log(avctx, AV_LOG_ERROR, "Could not allocate buffer.\n"); - return AVERROR(ENOMEM); - } - - s->linesize[0] = avctx->width; - s->linesize[1] = - s->linesize[2] = avctx->width / 2; - - s->new_y = s->buf1; - s->new_u = s->new_y + avctx->width * avctx->height; - s->new_v = s->new_u + avctx->width * avctx->height / 4; - s->old_y = s->buf2; - s->old_u = s->old_y + avctx->width * avctx->height; - s->old_v = s->old_u + avctx->width * avctx->height / 4; - memset(s->old_y, 0, avctx->width * avctx->height); - memset(s->old_u, 0x10, avctx->width * avctx->height / 4); - memset(s->old_v, 0x10, avctx->width * avctx->height / 4); - - return 0; -} - -static av_cold int escape130_decode_close(AVCodecContext *avctx) -{ - Escape130Context *s = avctx->priv_data; - - av_freep(&s->old_y_avg); - av_freep(&s->buf1); - av_freep(&s->buf2); - - 
return 0; -} - -static int decode_skip_count(GetBitContext* gb) -{ - int value; - - if (get_bits_left(gb) < 1+3) - return -1; - - value = get_bits1(gb); - if (value) - return 0; - - value = get_bits(gb, 3); - if (value) - return value; - - value = get_bits(gb, 8); - if (value) - return value + 7; - - value = get_bits(gb, 15); - if (value) - return value + 262; - - return -1; -} - -static int escape130_decode_frame(AVCodecContext *avctx, AVFrame *pic, - int *got_frame, AVPacket *avpkt) -{ - int buf_size = avpkt->size; - Escape130Context *s = avctx->priv_data; - GetBitContext gb; - int ret; - - uint8_t *old_y, *old_cb, *old_cr, - *new_y, *new_cb, *new_cr; - uint8_t *dstY, *dstU, *dstV; - unsigned old_y_stride, old_cb_stride, old_cr_stride, - new_y_stride, new_cb_stride, new_cr_stride; - unsigned total_blocks = avctx->width * avctx->height / 4, - block_index, block_x = 0; - unsigned y[4] = { 0 }, cb = 0x10, cr = 0x10; - int skip = -1, y_avg = 0, i, j; - uint8_t *ya = s->old_y_avg; - - // first 16 bytes are header; no useful information in here - if (buf_size <= 16) { - av_log(avctx, AV_LOG_ERROR, "Insufficient frame data\n"); - return AVERROR_INVALIDDATA; - } - - if ((ret = ff_get_buffer(avctx, pic, 0)) < 0) - return ret; - - if ((ret = init_get_bits8(&gb, avpkt->data, avpkt->size)) < 0) - return ret; - skip_bits_long(&gb, 16 * 8); - - new_y = s->new_y; - new_cb = s->new_u; - new_cr = s->new_v; - new_y_stride = s->linesize[0]; - new_cb_stride = s->linesize[1]; - new_cr_stride = s->linesize[2]; - old_y = s->old_y; - old_cb = s->old_u; - old_cr = s->old_v; - old_y_stride = s->linesize[0]; - old_cb_stride = s->linesize[1]; - old_cr_stride = s->linesize[2]; - - for (block_index = 0; block_index < total_blocks; block_index++) { - // Note that this call will make us skip the rest of the blocks - // if the frame ends prematurely. 
- if (skip == -1) - skip = decode_skip_count(&gb); - if (skip == -1) { - av_log(avctx, AV_LOG_ERROR, "Error decoding skip value\n"); - return AVERROR_INVALIDDATA; - } - - if (skip) { - y[0] = old_y[0]; - y[1] = old_y[1]; - y[2] = old_y[old_y_stride]; - y[3] = old_y[old_y_stride + 1]; - y_avg = ya[0]; - cb = old_cb[0]; - cr = old_cr[0]; - } else { - if (get_bits1(&gb)) { - unsigned sign_selector = get_bits(&gb, 6); - unsigned difference_selector = get_bits(&gb, 2); - y_avg = 2 * get_bits(&gb, 5); - for (i = 0; i < 4; i++) { - y[i] = av_clip(y_avg + offset_table[difference_selector] * - sign_table[sign_selector][i], 0, 63); - } - } else if (get_bits1(&gb)) { - if (get_bits1(&gb)) { - y_avg = get_bits(&gb, 6); - } else { - unsigned adjust_index = get_bits(&gb, 3); - y_avg = (y_avg + luma_adjust[adjust_index]) & 63; - } - for (i = 0; i < 4; i++) - y[i] = y_avg; - } - - if (get_bits1(&gb)) { - if (get_bits1(&gb)) { - cb = get_bits(&gb, 5); - cr = get_bits(&gb, 5); - } else { - unsigned adjust_index = get_bits(&gb, 3); - cb = (cb + chroma_adjust[0][adjust_index]) & 31; - cr = (cr + chroma_adjust[1][adjust_index]) & 31; - } - } - } - *ya++ = y_avg; - - new_y[0] = y[0]; - new_y[1] = y[1]; - new_y[new_y_stride] = y[2]; - new_y[new_y_stride + 1] = y[3]; - *new_cb = cb; - *new_cr = cr; - - old_y += 2; - old_cb++; - old_cr++; - new_y += 2; - new_cb++; - new_cr++; - block_x++; - if (block_x * 2 == avctx->width) { - block_x = 0; - old_y += old_y_stride * 2 - avctx->width; - old_cb += old_cb_stride - avctx->width / 2; - old_cr += old_cr_stride - avctx->width / 2; - new_y += new_y_stride * 2 - avctx->width; - new_cb += new_cb_stride - avctx->width / 2; - new_cr += new_cr_stride - avctx->width / 2; - } - - skip--; - } - - new_y = s->new_y; - new_cb = s->new_u; - new_cr = s->new_v; - dstY = pic->data[0]; - dstU = pic->data[1]; - dstV = pic->data[2]; - for (j = 0; j < avctx->height; j++) { - for (i = 0; i < avctx->width; i++) - dstY[i] = new_y[i] << 2; - dstY += pic->linesize[0]; - new_y += new_y_stride; - } - for (j = 0; j < avctx->height / 2; j++) { - for (i = 0; i < avctx->width / 2; i++) { - dstU[i] = chroma_vals[new_cb[i]]; - dstV[i] = chroma_vals[new_cr[i]]; - } - dstU += pic->linesize[1]; - dstV += pic->linesize[2]; - new_cb += new_cb_stride; - new_cr += new_cr_stride; - } - - ff_dlog(avctx, "Frame data: provided %d bytes, used %d bytes\n", - buf_size, get_bits_count(&gb) >> 3); - - FFSWAP(uint8_t*, s->old_y, s->new_y); - FFSWAP(uint8_t*, s->old_u, s->new_u); - FFSWAP(uint8_t*, s->old_v, s->new_v); - - *got_frame = 1; - - return buf_size; -} - -const FFCodec ff_escape130_decoder = { - .p.name = "escape130", - CODEC_LONG_NAME("Escape 130"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_ESCAPE130, - .priv_data_size = sizeof(Escape130Context), - .init = escape130_decode_init, - .close = escape130_decode_close, - FF_CODEC_DECODE_CB(escape130_decode_frame), - .p.capabilities = AV_CODEC_CAP_DR1, - .caps_internal = FF_CODEC_CAP_INIT_CLEANUP, -}; diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/fdctdsp.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/fdctdsp.c deleted file mode 100644 index 5306c9d047e9bc7bcf683a8192a622a9afc1c16d..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/fdctdsp.c +++ /dev/null @@ -1,51 +0,0 @@ -/* - * This file is part of FFmpeg. 
- * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "libavutil/attributes.h" -#include "avcodec.h" -#include "dct.h" -#include "faandct.h" -#include "fdctdsp.h" -#include "config.h" - -av_cold void ff_fdctdsp_init(FDCTDSPContext *c, AVCodecContext *avctx) -{ - av_unused const unsigned high_bit_depth = avctx->bits_per_raw_sample > 8; - - if (avctx->bits_per_raw_sample == 10 || avctx->bits_per_raw_sample == 9) { - c->fdct = ff_jpeg_fdct_islow_10; - c->fdct248 = ff_fdct248_islow_10; - } else if (avctx->dct_algo == FF_DCT_FASTINT) { - c->fdct = ff_fdct_ifast; - c->fdct248 = ff_fdct_ifast248; -#if CONFIG_FAANDCT - } else if (avctx->dct_algo == FF_DCT_FAAN) { - c->fdct = ff_faandct; - c->fdct248 = ff_faandct248; -#endif /* CONFIG_FAANDCT */ - } else { - c->fdct = ff_jpeg_fdct_islow_8; // slow/accurate/default - c->fdct248 = ff_fdct248_islow_8; - } - -#if ARCH_PPC - ff_fdctdsp_init_ppc(c, avctx, high_bit_depth); -#elif ARCH_X86 - ff_fdctdsp_init_x86(c, avctx, high_bit_depth); -#endif -} diff --git a/spaces/congsaPfin/Manga-OCR/logs/Apkdays Call of Duty Warzone Mobile How to Sync Your Battle Pass and Friends List Across Platforms.md b/spaces/congsaPfin/Manga-OCR/logs/Apkdays Call of Duty Warzone Mobile How to Sync Your Battle Pass and Friends List Across Platforms.md deleted file mode 100644 index a08748ecc667cad1a960fd94b4ef3bb16ee4dd11..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Apkdays Call of Duty Warzone Mobile How to Sync Your Battle Pass and Friends List Across Platforms.md +++ /dev/null @@ -1,168 +0,0 @@ -
-

Apkdays Call of Duty Warzone Mobile: Everything You Need to Know

-

If you are a fan of Call of Duty: Warzone, the popular battle royale game that has taken the gaming world by storm, you might be wondering if you can play it on your mobile device. Well, the answer is yes, thanks to Apkdays Call of Duty Warzone Mobile, a modded version of the game that lets you enjoy Verdansk on the go. In this article, we will tell you everything you need to know about Apkdays Call of Duty Warzone Mobile, including what it is, how to download and install it, how to play it like a pro, and how it compares to other mobile battle royales.

-

What is Apkdays Call of Duty Warzone Mobile?

-

A brief introduction to the game and its features

-

Apkdays Call of Duty Warzone Mobile is a free-to-play mobile game that is based on Call of Duty: Warzone, the hit battle royale game that is available on PC and consoles. It is not an official release from Activision, but rather a modded version that has been created by a third-party developer called Apkdays. The game aims to replicate the authentic Call of Duty: Warzone experience on mobile devices, with high-quality graphics, intuitive controls, and cross-progression with Call of Duty: Warzone 2.0 and Call of Duty: Modern Warfare II.

-

-

Some of the features that Apkdays Call of Duty Warzone Mobile offers are:

-
    -
  • The iconic Verdansk map, with dozens of points of interest and strategies to survive
  • Up to 120 live players in a match, with real players and no bots
  • A variety of weapons, attachments, upgrades, killstreaks, revive tokens, and contracts to use
  • The Gulag system, where you can win a duel to get a second chance at survival
  • Social features like friends, chat channels, and Battle Pass across platforms
  • A shorter 10-minute mode for quick sessions
-

How to download and install the game on your device

-

Since Apkdays Call of Duty Warzone Mobile is not an official release from Activision, you cannot find it on the Google Play Store or the App Store. Instead, you have to download it from the Apkdays website, where you can find the latest version of the game. Here are the steps to download and install the game on your device:

-
    -
  1. Go to the Apkdays website and find the download link for Apkdays Call of Duty Warzone Mobile.
  2. Click on the download link and wait for the file to be downloaded on your device.
  3. Once the file is downloaded, locate it in your file manager and tap on it to install it.
  4. If you see a warning message that says "Install blocked", go to your device settings and enable "Unknown sources" or "Allow from this source".
  5. After enabling the installation, launch the game and enjoy Apkdays Call of Duty Warzone Mobile.
-

Note: You may need to update the game from time to time to get the latest features and fixes. You can check for updates on the Apkdays website or in the game itself.

-

What are the benefits of using Apkdays Call of Duty Warzone Mobile?

-

Apkdays Call of Duty Warzone Mobile is not just a cheap imitation of Call of Duty: Warzone, but rather a faithful adaptation that has many benefits for mobile gamers. Some of the benefits are:

-
    -
  • You can play Call of Duty: Warzone on your mobile device without compromising on the quality or performance of the game.
  • You can save storage space and data usage by downloading a smaller file size than the original game.
  • You can access exclusive features and content that are not available in the official game, such as new weapons, skins, modes, and events.
  • You can use cross-play and cross-progression with other players who are using Call of Duty: Warzone 2.0 and Call of Duty: Modern Warfare II on PC and consoles.
  • You can support the independent developer who created Apkdays Call of Duty Warzone Mobile and help them improve the game further.
-

How to play Apkdays Call of Duty Warzone Mobile like a pro

-

Tips and tricks for surviving and winning in Verdansk

-

Apkdays Call of Duty Warzone Mobile is not an easy game to master, especially if you are new to the battle royale genre. You will face many challenges and threats in Verdansk, such as enemy players, gas circles, loot scarcity, and environmental hazards. To increase your chances of survival and victory, you need to follow some tips and tricks that will help you improve your skills and strategies. Here are some of them:

-
    -
  • Choose your landing spot wisely. Depending on your playstyle and preference, you may want to land in a hot zone with high loot potential and high risk, or a cold zone with low loot potential and low risk. You can also use the ping system to communicate with your teammates and coordinate your landing spot.
  • Loot fast and smart. As soon as you land, you need to find weapons, armor, ammo, and other items that will help you survive. You can loot from buildings, crates, supply boxes, dead enemies, and contracts. You can also use the loadout drop marker to get your custom loadout from Call of Duty: Warzone 2.0 or Call of Duty: Modern Warfare II.
  • Stay alert and aware. Verdansk is a huge map with many enemies lurking around. You need to keep an eye on your surroundings, use your mini-map, listen to audio cues, and watch out for enemy indicators. You also need to pay attention to the gas circle, which will shrink over time and force you to move to a safe zone.
  • Play as a team. Apkdays Call of Duty Warzone Mobile is best played with friends or other players who can cooperate and coordinate with you. You can use voice chat or text chat to communicate with your teammates, share loot, revive each other, and execute tactics. You can also use the ping system to mark enemies, locations, items, and vehicles.
  • Be adaptable and flexible. Verdansk is a dynamic map that changes every match. You need to be ready to face different situations and scenarios that may require you to change your plan or strategy. You also need to be able to use different weapons, attachments, killstreaks, contracts, and vehicles that suit your needs.
-

Best weapons, attachments, and loadouts to use

-

Apkdays Call of Duty Warzone Mobile has a wide range of weapons that you can use in Verdansk, from pistols and shotguns to assault rifles and sniper rifles. Each weapon has its own stats, pros, cons, and attachments that affect its performance. You can customize your weapons with different attachments that enhance their accuracy, damage, range, fire rate, mobility, or control. You can also create your own loadouts with different weapons, attachments, perks, and equipment that suit your playstyle and preference. You can access your loadouts from the loadout drop marker that appears randomly or from contracts. Some of the best weapons, attachments, and loadouts to use in Apkdays Call of Duty Warzone Mobile are:

  • The M4A1 assault rifle, which is a versatile and reliable weapon that can handle any situation. It has high damage, accuracy, range, and fire rate, making it a great choice for medium to long-range engagements. Some of the best attachments for the M4A1 are the Monolithic Suppressor, the M16 Grenadier Barrel, the VLK 3.0x Optic, the Commando Foregrip, and the 60 Round Mags.
  • The MP5 submachine gun, which is a fast and powerful weapon that excels in close-range combat. It has high damage, fire rate, mobility, and control, making it a great choice for rushing and flanking enemies. Some of the best attachments for the MP5 are the Monolithic Integral Suppressor, the Merc Foregrip, the 45 Round Mags, the Stippled Grip Tape, and the Sleight of Hand.
  • The HDR sniper rifle, which is a deadly and accurate weapon that can take down enemies from afar. It has high damage, range, bullet velocity, and penetration, making it a great choice for sniping and counter-sniping enemies. Some of the best attachments for the HDR are the Monolithic Suppressor, the 26.9" HDR Pro Barrel, the Variable Zoom Scope, the FTAC Champion Stock, and the Focus.
  • The loadout that combines the M4A1 and the MP5, which is a balanced and effective loadout that can handle any situation. You can use the M4A1 for medium to long-range engagements and switch to the MP5 for close-range engagements. You can also use perks like Cold-Blooded, Ghost, and Amped to stay hidden from enemy detection and switch weapons faster. You can also use equipment like C4 and Heartbeat Sensor to deal damage and locate enemies.
  • The loadout that combines the HDR and a pistol of your choice, which is a risky but rewarding loadout that can dominate at long-range engagements. You can use the HDR to snipe enemies from a distance and switch to your pistol for self-defense or finishing off enemies. You can also use perks like Overkill, High Alert, and Shrapnel to carry two primary weapons, be aware of enemy flanks, and carry extra lethal equipment. You can also use equipment like Claymore and Smoke Grenade to protect yourself and cover your escape.

How to use contracts, killstreaks, and cash wisely

-

Apkdays Call of Duty Warzone Mobile has many elements that make it more than just a simple battle royale game. One of these elements is contracts, which are optional missions that you can find and activate in Verdansk. Contracts offer various rewards such as cash, loot, intel, or loadout drops. There are different types of contracts such as Bounty (hunt down a specific enemy), Recon (secure a location), Scavenger (collect supply boxes), Most Wanted (survive as a marked target), or Supply Run (reach a buy station).

Another element is killstreaks, which are powerful abilities that you can use to gain an edge over your enemies. Killstreaks include UAV (reveal enemy locations), Cluster Strike (call in an airstrike), Precision Airstrike (call in a targeted airstrike), Sentry Gun (deploy an automated turret), Wheelson (control a mini-tank), or Juggernaut (wear a heavy armor suit). You can find killstreaks from loot or buy them from buy stations.

The last element is cash, which is the currency that you can use to buy various items and services in Verdansk. Cash can be found from loot, contracts, enemies, or cash drops. You can use cash to buy weapons, attachments, loadouts, killstreaks, armor plates, revive tokens, self-revive kits, gas masks, or redeploy your teammates from buy stations. You can also use cash to deposit or withdraw from bank stations or ATMs.

Some of the tips on how to use contracts, killstreaks, and cash wisely are:

  • Choose contracts that suit your playstyle and situation. For example, if you want to hunt enemies, go for Bounty contracts. If you want to secure a location, go for Recon contracts. If you want to loot more, go for Scavenger contracts. If you want to challenge yourself, go for Most Wanted or Supply Run contracts.
  • Use killstreaks at the right time and place. For example, use UAV when you want to locate enemies or avoid them. Use Cluster Strike or Precision Airstrike when you want to damage or eliminate enemies in a specific area. Use Sentry Gun when you want to defend a location or distract enemies. Use Wheelson when you want to wreak havoc on enemies or vehicles. Use Juggernaut when you want to dominate the battlefield or survive longer.
  • Manage your cash carefully and strategically. For example, don't spend all your cash on unnecessary items or services. Save some cash for emergencies or late-game situations. Share your cash with your teammates if they need it more than you. Deposit your cash in bank stations or ATMs if you have too much of it and don't want to lose it. Withdraw your cash from bank stations or ATMs if you need more of it and have enough balance.

How does Apkdays Call of Duty Warzone Mobile compare to other mobile battle royales?

-

The advantages and disadvantages of Apkdays Call of Duty Warzone Mobile

-

Apkdays Call of Duty Warzone Mobile is not the only mobile battle royale game that you can play on your device. There are many other options that you can choose from, such as PUBG Mobile, Free Fire, Fortnite Mobile, COD Mobile, and more. Each game has its own strengths and weaknesses that make it appealing or unappealing to different players. Here are some of the advantages and disadvantages of Apkdays Call of Duty Warzone Mobile compared to other mobile battle royales:

- - - - - - - - - - - - - - - - - - - - - -
| Advantages | Disadvantages |
| --- | --- |
| It offers a realistic and immersive Call of Duty: Warzone experience on mobile devices. | It is not an official release from Activision and may have compatibility or security issues. |
| It has high-quality graphics, sound effects, and animations that create a stunning visual and auditory experience. | It requires a high-end device and a stable internet connection to run smoothly and avoid lag or crashes. |
| It has many features and content that are not available in other mobile battle royales, such as Verdansk map, Gulag system, contracts, killstreaks, cross-play, and cross-progression. | It has a steep learning curve and a high difficulty level that may frustrate or discourage new or casual players. |
| It has a loyal and active community of players and fans who support the game and the developer. | It has a limited player base and may have long waiting times or matchmaking issues. |
-

The similarities and differences between Apkdays Call of Duty Warzone Mobile and Call of Duty: Warzone 2.0

-

Apkdays Call of Duty Warzone Mobile is not the same as Call of Duty: Warzone 2.0, the official sequel to Call of Duty: Warzone that is expected to launch in 2023. Call of Duty: Warzone 2.0 is a major update that will introduce new features, content, and improvements to the original game. Apkdays Call of Duty Warzone Mobile is a modded version of the original game that aims to bring it to mobile devices. Here are some of the similarities and differences between Apkdays Call of Duty Warzone Mobile and Call of Duty: Warzone 2.0:

-


| Similarities | Differences |
| --- | --- |
| They are both based on Call of Duty: Warzone, the popular battle royale game that is set in the Modern Warfare universe. | Apkdays Call of Duty Warzone Mobile is a mobile game that is available on Android and iOS devices, while Call of Duty: Warzone 2.0 is a PC and console game that is available on Windows, PlayStation, and Xbox platforms. |
| They both feature the iconic Verdansk map, with dozens of points of interest and strategies to survive. | Apkdays Call of Duty Warzone Mobile has a smaller and simplified version of Verdansk, while Call of Duty: Warzone 2.0 has a larger and more detailed version of Verdansk. |
| They both have up to 120 live players in a match, with real players and no bots. | Apkdays Call of Duty Warzone Mobile has a shorter 10-minute mode for quick sessions, while Call of Duty: Warzone 2.0 has a longer 20-minute mode for intense sessions. |
| They both have a variety of weapons, attachments, upgrades, killstreaks, revive tokens, and contracts to use. | Apkdays Call of Duty Warzone Mobile has some exclusive features and content that are not available in Call of Duty: Warzone 2.0, such as new weapons, skins, modes, and events. |
| They both have the Gulag system, where you can win a duel to get a second chance at survival. | Apkdays Call of Duty Warzone Mobile has a different Gulag system than Call of Duty: Warzone 2.0, where you can choose your weapon and loadout before entering the duel. |
| They both have social features like friends, chat channels, and Battle Pass across platforms. | Apkdays Call of Duty Warzone Mobile has cross-play and cross-progression with Call of Duty: Warzone 2.0 and Call of Duty: Modern Warfare II, while Call of Duty: Warzone 2.0 has cross-play and cross-progression with Call of Duty: Vanguard and Call of Duty: Black Ops Cold War. |
-

The feedback and reviews from players and critics

-

Apkdays Call of Duty Warzone Mobile has received mixed feedback and reviews from players and critics who have tried the game. Some players and critics have praised the game for its impressive graphics, smooth gameplay, faithful adaptation, and exclusive features. They have also appreciated the developer's efforts to create and update the game regularly. Some examples of positive feedback and reviews are:

- "Apkdays Call of Duty Warzone Mobile is a masterpiece that delivers an authentic and immersive Call of Duty: Warzone experience on mobile devices. It is one of the best mobile battle royale games I have ever played." - A player from Reddit
- "Apkdays Call of Duty Warzone Mobile is a stunning achievement that showcases the potential and power of mobile gaming. It is not just a clone or a rip-off, but rather a tribute and a homage to the original game." - A critic from IGN
- "Apkdays Call of Duty Warzone Mobile is a must-play for any fan of Call of Duty: Warzone or battle royale games in general. It has everything you need to enjoy Verdansk on the go." - A player from Google Play Store

However, some players and critics have criticized the game for its compatibility or security issues, high difficulty level, limited player base, or lack of originality. They have also warned about the possible legal or ethical implications of using a modded version of the game. Some examples of negative feedback and reviews are:

- "Apkdays Call of Duty Warzone Mobile is a buggy and unstable game that crashes frequently and drains battery life. It is not compatible with many devices and may contain malware or viruses." - A player from App Store
- "Apkdays Call of Duty Warzone Mobile is a cheap and lazy game that copies everything from the original game without adding anything new or innovative. It is a waste of time and money." - A critic from GameSpot
- "Apkdays Call of Duty Warzone Mobile is a risky and illegal game that violates the intellectual property rights of Activision and may result in legal action or account ban. It is not worth the trouble." - A player from YouTube

Conclusion

-

A summary of the main points and a call to action for the readers

-

Apkdays Call of Duty Warzone Mobile is a mobile game that is based on Call of Duty: Warzone, the popular battle royale game that is available on PC and consoles. It is a modded version that has been created by a third-party developer called Apkdays. The game aims to replicate the authentic Call of Duty: Warzone experience on mobile devices, with high-quality graphics, intuitive controls, and cross-progression with Call of Duty: Warzone 2.0 and Call of Duty: Modern Warfare II.

-

In this article, we have told you everything you need to know about Apkdays Call of Duty Warzone Mobile, including what it is, how to download and install it, how to play it like a pro, and how it compares to other mobile battle royales. We have also shared some tips and tricks that will help you improve your skills and strategies in Verdansk.

-

If you are interested in trying out Apkdays Call of Duty Warzone Mobile, you can download it from the Apkdays website and enjoy Verdansk on the go. However, you should also be aware of the potential risks and drawbacks of using a modded version of the game, such as compatibility or security issues, high difficulty level, limited player base, or legal or ethical implications.

-

Apkdays Call of Duty Warzone Mobile is not for everyone, but it is definitely worth a shot for any fan of Call of Duty: Warzone or battle royale games in general. It is one of the best mobile battle royale games we have ever played, and we hope you will enjoy it as much as we did.

-

FAQs

-

Five unique questions and answers about Apkdays Call of Duty Warzone Mobile

-
  • Q: Is Apkdays Call of Duty Warzone Mobile safe to use?
    A: Apkdays Call of Duty Warzone Mobile is not an official release from Activision and may have compatibility or security issues. You should download it at your own risk and discretion. You should also scan the file for malware or viruses before installing it.
  • Q: Is Apkdays Call of Duty Warzone Mobile free to play?
    A: Apkdays Call of Duty Warzone Mobile is free to play and does not require any subscription or purchase. However, it may have some in-game purchases or ads that support the developer.
  • Q: How can I update Apkdays Call of Duty Warzone Mobile?
    A: You can update Apkdays Call of Duty Warzone Mobile by visiting the Apkdays website and downloading the latest version of the game. You can also check for updates in the game itself.
  • Q: How can I contact the developer of Apkdays Call of Duty Warzone Mobile?
    A: You can contact the developer of Apkdays Call of Duty Warzone Mobile by visiting their website and filling out their contact form. You can also follow them on their social media accounts or join their Discord server.
  • Q: How can I report a bug or a problem in Apkdays Call of Duty Warzone Mobile?
    A: You can report a bug or a problem in Apkdays Call of Duty Warzone Mobile by visiting their website and filling out their bug report form. You can also contact them via their email address or their Discord server.

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Getting Over It with Bennett Foddy APK and Enjoy the Ultimate Challenge.md b/spaces/congsaPfin/Manga-OCR/logs/Download Getting Over It with Bennett Foddy APK and Enjoy the Ultimate Challenge.md deleted file mode 100644 index ad57104716b46e119c5a51f9240373d2c8dc9bcc..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Getting Over It with Bennett Foddy APK and Enjoy the Ultimate Challenge.md +++ /dev/null @@ -1,82 +0,0 @@ -
-

How to Download Getting Over It Free APK for Android

-

Do you want to play a game that will test your patience, skill, and sanity? Do you want to experience the thrill of climbing up a mountain with nothing but a hammer and a pot? Do you want to do it for free on your Android device? If you answered yes to any of these questions, then you might be interested in downloading Getting Over It free APK.

-

download getting over it free apk


Download Zip ✪✪✪ https://urlca.com/2uOd8f



-

What is Getting Over It?

-

A challenging climbing game

-

Getting Over It is an arcade climbing game where you carefully use a hammer to climb up a mountain. You move the hammer with the mouse, and that's all there is. With practice, you'll be able to jump, swing, climb and fly. But be careful, because one wrong move can send you flying back to where you started, or even worse. The game is designed to be frustrating, punishing, and rewarding at the same time. You'll hear the developer, Bennett Foddy, make philosophical observations about the problem at hand as you play. And if you manage to reach the top of the mountain, a magical reward awaits you.

-

A fan game based on a popular original

-

Getting Over It is a fan game based on the hugely popular Getting Over It with Bennett Foddy, which was released in 2017. The original game was inspired by Jazzuo's 2002 B-Game classic 'Sexy Hiking'. The fan game has a different theme and graphics, but the gameplay and mechanics are very similar. Instead of a man in a pot navigating a punishing and surreal landscape, this playful alternative has you playing as a cat in a plant pot climbing various colorful blocks and giant fruits.

-

What is an APK file?

-

A package file for Android apps

-

The term APK stands for Android Package Kit. An APK file is the archive that packages a single Android app and carries the .apk file extension. It is the package file format the Android operating system uses to distribute and install mobile applications. An APK file contains all of an app's code, resources, assets, certificates, and its manifest file.
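To make that structure concrete, here is a minimal Python sketch that lists what is inside an APK. It relies only on the fact that an APK is an ordinary ZIP archive; the file name is a made-up placeholder, not something shipped with the game:

```python
import zipfile

apk_path = "getting-over-it.apk"  # placeholder: path to a locally downloaded APK

# An APK is a standard ZIP archive, so the stdlib zipfile module can read it.
with zipfile.ZipFile(apk_path) as apk:
    for name in apk.namelist():
        # Typical entries: AndroidManifest.xml, classes.dex, resources.arsc,
        # assets/..., res/..., META-INF/... (the signing certificates).
        print(name)
```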

-

A way to install apps from unknown sources

-

Most Android devices allow users to manually install APK files only after they turn on an "Unknown Sources" setting that allows installation from sources other than trusted ones like Google Play. One may do so for many reasons, such as during the development of apps, to install apps not found on the store, or to install an older version of an existing app. However, one should be careful when opening an APK file from a source they're unfamiliar with, as it may contain malware or viruses.
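One practical, if partial, safeguard is to compare the downloaded file's checksum against one published by the site you got it from. The short Python sketch below is a generic illustration of that check; the file name and the expected hash are placeholders, not values taken from any real download page:

```python
import hashlib

apk_path = "downloaded-app.apk"  # placeholder: the APK you just downloaded
expected_sha256 = "paste-the-checksum-published-by-the-source-here"  # placeholder

# Hash the file in chunks so even large APKs do not need to fit in memory.
digest = hashlib.sha256()
with open(apk_path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

if digest.hexdigest() == expected_sha256:
    print("Checksum matches the published value.")
else:
    print("Checksum mismatch - do not install this file.")
```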

-

How to download Getting Over It free APK?

-

Find a reliable website that offers the APK file

-

The first step to download Getting Over It free APK is to find a website that offers the APK file for download. You can use Google or any other search engine to look for websites that have the APK file. Some examples of websites that offer Getting Over It free APK are [CrazyGames](^1^), [Steam](^2^), and [Google Play](^3^). Make sure that the website you choose is reliable and trustworthy, and that it has positive reviews and ratings from other users.

-

Enable unknown sources on your Android device

-

The next step is to enable unknown sources on your Android device. This will allow you to install apps from sources other than Google Play. To do this, navigate to one of these menus depending on your Android version:

-

How to download getting over it free apk for android
-Download getting over it with bennett foddy apk free
-Getting over it free apk download latest version
-Download getting over it mod apk free unlimited money
-Getting over it free apk download for pc windows 10
-Download getting over it apk free no verification
-Getting over it free apk download full game
-Download getting over it hack apk free all levels unlocked
-Getting over it free apk download without ads
-Download getting over it cracked apk free no root
-Getting over it free apk download for ios iphone ipad
-Download getting over it premium apk free original
-Getting over it free apk download offline mode
-Download getting over it mega mod apk free god mode
-Getting over it free apk download for mac os x
-Download getting over it pro apk free updated
-Getting over it free apk download with obb data file
-Download getting over it unlimited coins apk free
-Getting over it free apk download high graphics quality
-Download getting over it cheat apk free easy mode
-Getting over it free apk download for chromebook
-Download getting over it plus apk free extra features
-Getting over it free apk download with voice commentary
-Download getting over it gold apk free special edition
-Getting over it free apk download for android tv box
-Download getting over it lite apk free low size
-Getting over it free apk download with multiplayer mode
-Download getting over it rexdl apk free fast speed
-Getting over it free apk download for firestick fire tv
-Download getting over it revdl apk free direct link
-Getting over it free apk download for bluestacks emulator
-Download getting over it apkpure apk free safe secure
-Getting over it free apk download for linux ubuntu
-Download getting over it happymod apk free working tested
-Getting over it free apk download for nvidia shield tv
-Download getting over it moddroid apk free no ads
-Getting over it free apk download for smart tv lg samsung sony
-Download getting over it apkmirror apk free latest update
-Getting over it free apk download for kindle fire hd tablet
-Download getting over it apkmody apk free mod menu unlocked

-
  • Settings > Security > Unknown Sources
  • Settings > Apps and Notifications > Advanced > Special App Access > Install Unknown Apps
  • Settings > Biometrics and Security > Install Unknown Apps
-

Then, tap on the toggle switch to turn it on. You may see a warning message that says installing from unknown sources may harm your device. Tap on OK to proceed.

-

Download and install the APK file

-

The final step is to download and install the APK file. To do this, go back to the website where you found the APK file and tap on the download button. You may see a pop-up message that asks you to confirm the download. Tap on OK to start the download. Once the download is complete, you will see a notification that says "Download complete". Tap on the notification to open the APK file. You may see another pop-up message that asks you to confirm the installation. Tap on Install to begin the installation. Wait for a few seconds until the installation is finished. You will see a message that says "App installed". Tap on Open to launch the app and enjoy playing Getting Over It for free.
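If you would rather sideload the file from a computer over USB instead of tapping through the on-device installer, the standard adb tool can do it. The Python sketch below simply wraps the usual `adb install` command; it assumes adb is installed and on your PATH, USB debugging is enabled on the phone, and the APK path is a placeholder:

```python
import subprocess

apk_path = "getting-over-it.apk"  # placeholder: path to the downloaded APK

# "adb install -r" installs the APK, replacing an existing install if present.
result = subprocess.run(
    ["adb", "install", "-r", apk_path],
    capture_output=True,
    text=True,
)

print(result.stdout)
if result.returncode != 0:
    print("Install failed:", result.stderr)
```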

-

Conclusion

-

Getting Over It is a fun and challenging game that will test your skills and patience as you climb up a mountain with a hammer and a pot. You can download Getting Over It free APK for your Android device by following these simple steps: find a reliable website that offers the APK file, enable unknown sources on your device, and download and install the APK file. However, be careful when downloading from unknown sources, as they may contain malware or viruses. Always check the reviews and ratings of the website and the app before downloading. Have fun playing Getting Over It and don't give up!

-

FAQs

-

What are the requirements to play Getting Over It on Android?

-

To play Getting Over It on Android, you need an Android device that runs on Android 5.0 or higher, has at least 1 GB of RAM, and has at least 100 MB of free storage space.

-

Is Getting Over It free on Google Play?

-

No, Getting Over It is not free on Google Play. It costs $4.99 to download from Google Play. However, you can download Getting Over It free APK from other sources as explained in this article.

-

Is Getting Over It safe to play?

-

Getting Over It is safe to play as long as you download it from a trusted source. However, be aware that the game is very frustrating and may cause rage or despair in some players. If you feel stressed or angry while playing, take a break and calm down.

-

How long does it take to finish Getting Over It?

-

The length of time it takes to finish Getting Over It depends on your skill level and luck. Some players have finished the game in less than an hour, while others have spent hundreds of hours trying to reach the top. The average time to finish the game is around 5 hours.

-

What is the reward for finishing Getting Over It?

-

The reward for finishing Getting Over It is a secret that only those who have completed the game can know. However, some hints have been given by the developer and other players who have finished the game. The reward involves a golden cauldron, a special song, and a message from Bennett Foddy.

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Stick War Legacy MOD APK Enjoy 999 Army Unlimited Upgrades and More.md b/spaces/congsaPfin/Manga-OCR/logs/Stick War Legacy MOD APK Enjoy 999 Army Unlimited Upgrades and More.md deleted file mode 100644 index 9c86e8920ca2fc9d9fbcd2a712a8ecb89b1d650a..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Stick War Legacy MOD APK Enjoy 999 Army Unlimited Upgrades and More.md +++ /dev/null @@ -1,97 +0,0 @@ - -

Stick War Legacy MOD APK: How to Get a New Army and Win Every Battle

-

Do you love playing stick figure games? Do you want to lead your own army of stickmen and conquer the world? If yes, then you should try Stick War Legacy, one of the most popular and addictive strategy games on mobile devices. But wait, there's more! You can also download Stick War Legacy MOD APK, which gives you unlimited gems, gold, mana, and access to a new army of powerful units. In this article, we will tell you everything you need to know about Stick War Legacy MOD APK, how to download and install it, and how to get a new army and win every battle. Let's get started!

-

Introduction

-

What is Stick War Legacy?

-

Stick War Legacy is a strategy game developed by Max Games Studios. It is based on the popular web game, Stick War, which was released in 2009. In this game, you play as the leader of a nation called Order, which is surrounded by enemies who want to destroy you. You must build and train your army of stickmen, mine resources, research technologies, and fight against other nations in epic battles. You can also play in different modes, such as campaign, tournament, endless zombies, and custom battles.

-

stick war legacy mod apk new army


Download Ziphttps://urlca.com/2uO9xc



-

What is Stick War Legacy MOD APK?

-

Stick War Legacy MOD APK is a modified version of the original game, which gives you some extra features and advantages. For example, you can get unlimited gems, gold, and mana, which are the main currencies in the game. You can use them to buy weapons, resources, upgrades, and more. You can also unlock and use a new army of stickmen, which have different abilities and skills. These units can help you defeat your enemies faster and easier.

-

Why do you need a new army in Stick War Legacy?

-

As you progress in the game, you will face more challenging and stronger enemies. They will have better weapons, defenses, and strategies. They will also have their own unique units, such as giants, wizards, archers, spearmen, and swordsmen. If you want to win against them, you need to have a diverse and powerful army of your own. That's why having a new army in Stick War Legacy MOD APK is very useful. You can have more options and flexibility in choosing your units and tactics.

-

How to download and install Stick War Legacy MOD APK

-

Step 1: Download the MOD APK file from a trusted source

-

The first thing you need to do is to download the Stick War Legacy MOD APK file from a reliable source. You can use the link below or search for other websites that offer it. Make sure that the file is safe and virus-free before downloading it.
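If you prefer to fetch the file on a computer first (for example, to scan or checksum it before copying it to the phone), a generic download looks like the Python sketch below. The URL and output path are placeholders, not an endorsement of any particular mirror:

```python
import urllib.request

url = "https://example.com/stick-war-legacy-mod.apk"  # placeholder URL
out_path = "stick-war-legacy-mod.apk"

# Stream the response to disk so large files are not held in memory.
with urllib.request.urlopen(url) as response, open(out_path, "wb") as out_file:
    while True:
        chunk = response.read(1 << 20)
        if not chunk:
            break
        out_file.write(chunk)

print("Saved", out_path)
```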

-

Step 2: Enable unknown sources on your device

-

The next thing you need to do is to enable unknown sources on your device. This will allow you to install apps that are not from the official Google Play Store. To do this, go to your device settings > security > unknown sources > toggle on.

-Step 3: Install the MOD APK file and launch the game

-

The final thing you need to do is to install the Stick War Legacy MOD APK file and launch the game. To do this, locate the file in your device storage and tap on it. Follow the instructions on the screen to complete the installation. Once done, open the game and enjoy the new features and army.

-

stick war legacy mod apk unlimited gems and army
-stick war legacy mod apk 9999 army download
-stick war legacy mod apk new army update
-stick war legacy mod apk all weapons unlocked
-stick war legacy mod apk unlimited mana and gold
-stick war legacy mod apk 999 army latest version
-stick war legacy mod apk new army skins
-stick war legacy mod apk all levels unlocked
-stick war legacy mod apk unlimited resources and army
-stick war legacy mod apk 9999 army free download
-stick war legacy mod apk new army mode
-stick war legacy mod apk all characters unlocked
-stick war legacy mod apk unlimited troops and gems
-stick war legacy mod apk 999 army no root
-stick war legacy mod apk new army cheats
-stick war legacy mod apk all upgrades unlocked
-stick war legacy mod apk unlimited coins and army
-stick war legacy mod apk 9999 army hack
-stick war legacy mod apk new army gameplay
-stick war legacy mod apk all skins unlocked
-stick war legacy mod apk unlimited money and army
-stick war legacy mod apk 999 army offline
-stick war legacy mod apk new army features
-stick war legacy mod apk all modes unlocked
-stick war legacy mod apk unlimited diamonds and army
-stick war legacy mod apk 9999 army online
-stick war legacy mod apk new army review
-stick war legacy mod apk all armies unlocked
-stick war legacy mod apk unlimited power and army
-stick war legacy mod apk 999 army android
-stick war legacy mod apk new army trailer
-stick war legacy mod apk all items unlocked
-stick war legacy mod apk unlimited energy and army
-stick war legacy mod apk 9999 army android 1
-stick war legacy mod apk new army tips
-stick war legacy mod apk all missions unlocked
-stick war legacy mod apk unlimited skills and army
-stick war legacy mod apk 999 army ios
-stick war legacy mod apk new army guide
-stick war legacy mod apk all achievements unlocked

-

How to get a new army in Stick War Legacy MOD APK

-

What are the benefits of having a new army?

-

Having a new army in Stick War Legacy MOD APK can give you many benefits, such as:

- -

What are the types of new army units in Stick War Legacy MOD APK?

-

The new army units in Stick War Legacy MOD APK are divided into five categories, each with their own strengths and weaknesses. They are:

-

Giants

-

Giants are huge and strong units that can deal massive damage and take a lot of hits. They are good for breaking enemy lines and smashing their defenses. However, they are also slow and expensive, and vulnerable to ranged attacks.

-

Wizards

-

Wizards are magical units that can cast spells and summon creatures. They are good for supporting your army and weakening your enemies. However, they are also fragile and costly, and need mana to use their abilities.

-

Archidons

-

Archidons are archers that can shoot arrows from a distance. They are good for harassing your enemies and killing them from afar. However, they are also weak and cheap, and need space to fire their arrows.

-

Speartons

-

Speartons are spearmen that can throw spears and shield themselves. They are good for defending your base and attacking your enemies. However, they are only moderately durable and moderately priced, and they need time to reload their spears.

-

Swordwrath

-

Swordwrath are swordsmen that can slash and rage. They are good for rushing your enemies and overwhelming them with numbers. However, their stats are low across the board, and they need courage to fight.

-

How to unlock and upgrade new army units in Stick War Legacy MOD APK?

-

To unlock and upgrade new army units in Stick War Legacy MOD APK, you need to use gems, gold, or mana. You can get them by playing the game, completing missions, watching ads, or using the MOD APK features. You can also use them to buy other things, such as weapons, resources, skins, modes, etc. To unlock or upgrade a unit, follow these steps:

-
  1. Go to the main menu of the game.
  2. Tap on the shop icon on the bottom right corner of the screen.
  3. Select the unit you want to unlock or upgrade from the list.
  4. Tap on the buy or upgrade button on the bottom of the screen.
  5. Confirm your purchase or upgrade by tapping on the yes button.
  6. Enjoy your new army unit in the game.
-

Conclusion

-

Summary of the main points

-

In conclusion, Stick War Legacy is a fun and addictive strategy game that lets you lead your own army of stickmen and conquer the world. You can also download Stick War Legacy MOD APK, which gives you unlimited gems, gold, mana, and access to a new army of powerful units. You can download and install it easily by following our guide. You can also get a new army by using gems, gold, or mana to unlock and upgrade them. Having a new army can give you many benefits, such as more variety, power, challenge, and satisfaction in playing the game.

-

Call to action

-

If you want to try Stick War Legacy MOD APK for yourself, you can download it from the link below or search for other sources online. Make sure that you follow our instructions carefully to avoid any problems or errors. Once you have it installed on your device, you can start playing the game and enjoy the new features and army. Don't forget to share your experience with us in the comments section below. We would love to hear from you!

FAQs

Q: Is Stick War Legacy MOD APK safe to use?
A: Stick War Legacy MOD APK is safe to use as long as you download it from a trusted source and follow our instructions. However, you should always be careful when installing any third-party apps on your device, as they may contain malware or viruses. You should also back up your data before using the MOD APK, in case anything goes wrong.

Q: How can I update Stick War Legacy MOD APK?
A: To update Stick War Legacy MOD APK, you need to download the latest version of the MOD APK file from the same source you used before and install it over the existing one. You don't need to uninstall the previous version, as it will be overwritten automatically. However, you should always check the compatibility and features of the new version before updating, as they may differ from the old one.

Q: Can I play Stick War Legacy MOD APK online with other players?
A: No, you cannot play Stick War Legacy MOD APK online with other players, as it is a modified version of the original game. The MOD APK only works offline, and you can only play against the computer or by yourself. If you want to play online with other players, you need to use the official version of the game from the Google Play Store.

Q: Can I use Stick War Legacy MOD APK on iOS devices?
A: No, you cannot use Stick War Legacy MOD APK on iOS devices, as it is only compatible with Android devices. The MOD APK file is an Android application package, which cannot be installed or run on iOS devices. If you want to play Stick War Legacy on iOS devices, you need to use the official version of the game from the App Store.

Q: What are some tips and tricks for playing Stick War Legacy MOD APK?
A: Some tips and tricks for playing Stick War Legacy MOD APK are:

- Use your gems, gold, and mana wisely. Don't waste them on unnecessary things, and save them for important purchases and upgrades.
- Experiment with different units and strategies. Find out what works best for you and your army, and adapt to different situations and enemies.
- Balance your offense and defense. Don't neglect your base or your army, and protect them from enemy attacks. Don't be too aggressive or too passive, and find the right timing and opportunity to strike.
- Have fun and enjoy the game. Don't get frustrated or bored by the game, and try to have a positive attitude. Remember that it is just a game, and not a real war.

-
-
\ No newline at end of file diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/structures/keypoints.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/structures/keypoints.py deleted file mode 100644 index b93ebed4f6554e67ba9bde8d3af90e8dbb3246b6..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/structures/keypoints.py +++ /dev/null @@ -1,235 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import numpy as np -from typing import Any, List, Tuple, Union -import torch -from torch.nn import functional as F - - -class Keypoints: - """ - Stores keypoint **annotation** data. GT Instances have a `gt_keypoints` property - containing the x,y location and visibility flag of each keypoint. This tensor has shape - (N, K, 3) where N is the number of instances and K is the number of keypoints per instance. - - The visibility flag follows the COCO format and must be one of three integers: - - * v=0: not labeled (in which case x=y=0) - * v=1: labeled but not visible - * v=2: labeled and visible - """ - - def __init__(self, keypoints: Union[torch.Tensor, np.ndarray, List[List[float]]]): - """ - Arguments: - keypoints: A Tensor, numpy array, or list of the x, y, and visibility of each keypoint. - The shape should be (N, K, 3) where N is the number of - instances, and K is the number of keypoints per instance. - """ - device = keypoints.device if isinstance(keypoints, torch.Tensor) else torch.device("cpu") - keypoints = torch.as_tensor(keypoints, dtype=torch.float32, device=device) - assert keypoints.dim() == 3 and keypoints.shape[2] == 3, keypoints.shape - self.tensor = keypoints - - def __len__(self) -> int: - return self.tensor.size(0) - - def to(self, *args: Any, **kwargs: Any) -> "Keypoints": - return type(self)(self.tensor.to(*args, **kwargs)) - - @property - def device(self) -> torch.device: - return self.tensor.device - - def to_heatmap(self, boxes: torch.Tensor, heatmap_size: int) -> torch.Tensor: - """ - Convert keypoint annotations to a heatmap of one-hot labels for training, - as described in :paper:`Mask R-CNN`. - - Arguments: - boxes: Nx4 tensor, the boxes to draw the keypoints to - - Returns: - heatmaps: - A tensor of shape (N, K), each element is integer spatial label - in the range [0, heatmap_size**2 - 1] for each keypoint in the input. - valid: - A tensor of shape (N, K) containing whether each keypoint is in the roi or not. - """ - return _keypoints_to_heatmap(self.tensor, boxes, heatmap_size) - - def __getitem__(self, item: Union[int, slice, torch.BoolTensor]) -> "Keypoints": - """ - Create a new `Keypoints` by indexing on this `Keypoints`. - - The following usage are allowed: - - 1. `new_kpts = kpts[3]`: return a `Keypoints` which contains only one instance. - 2. `new_kpts = kpts[2:10]`: return a slice of key points. - 3. `new_kpts = kpts[vector]`, where vector is a torch.ByteTensor - with `length = len(kpts)`. Nonzero elements in the vector will be selected. - - Note that the returned Keypoints might share storage with this Keypoints, - subject to Pytorch's indexing semantics. 
- """ - if isinstance(item, int): - return Keypoints([self.tensor[item]]) - return Keypoints(self.tensor[item]) - - def __repr__(self) -> str: - s = self.__class__.__name__ + "(" - s += "num_instances={})".format(len(self.tensor)) - return s - - @staticmethod - def cat(keypoints_list: List["Keypoints"]) -> "Keypoints": - """ - Concatenates a list of Keypoints into a single Keypoints - - Arguments: - keypoints_list (list[Keypoints]) - - Returns: - Keypoints: the concatenated Keypoints - """ - assert isinstance(keypoints_list, (list, tuple)) - assert len(keypoints_list) > 0 - assert all(isinstance(keypoints, Keypoints) for keypoints in keypoints_list) - - cat_kpts = type(keypoints_list[0])( - torch.cat([kpts.tensor for kpts in keypoints_list], dim=0) - ) - return cat_kpts - - -# TODO make this nicer, this is a direct translation from C2 (but removing the inner loop) -def _keypoints_to_heatmap( - keypoints: torch.Tensor, rois: torch.Tensor, heatmap_size: int -) -> Tuple[torch.Tensor, torch.Tensor]: - """ - Encode keypoint locations into a target heatmap for use in SoftmaxWithLoss across space. - - Maps keypoints from the half-open interval [x1, x2) on continuous image coordinates to the - closed interval [0, heatmap_size - 1] on discrete image coordinates. We use the - continuous-discrete conversion from Heckbert 1990 ("What is the coordinate of a pixel?"): - d = floor(c) and c = d + 0.5, where d is a discrete coordinate and c is a continuous coordinate. - - Arguments: - keypoints: tensor of keypoint locations in of shape (N, K, 3). - rois: Nx4 tensor of rois in xyxy format - heatmap_size: integer side length of square heatmap. - - Returns: - heatmaps: A tensor of shape (N, K) containing an integer spatial label - in the range [0, heatmap_size**2 - 1] for each keypoint in the input. - valid: A tensor of shape (N, K) containing whether each keypoint is in - the roi or not. - """ - - if rois.numel() == 0: - return rois.new().long(), rois.new().long() - offset_x = rois[:, 0] - offset_y = rois[:, 1] - scale_x = heatmap_size / (rois[:, 2] - rois[:, 0]) - scale_y = heatmap_size / (rois[:, 3] - rois[:, 1]) - - offset_x = offset_x[:, None] - offset_y = offset_y[:, None] - scale_x = scale_x[:, None] - scale_y = scale_y[:, None] - - x = keypoints[..., 0] - y = keypoints[..., 1] - - x_boundary_inds = x == rois[:, 2][:, None] - y_boundary_inds = y == rois[:, 3][:, None] - - x = (x - offset_x) * scale_x - x = x.floor().long() - y = (y - offset_y) * scale_y - y = y.floor().long() - - x[x_boundary_inds] = heatmap_size - 1 - y[y_boundary_inds] = heatmap_size - 1 - - valid_loc = (x >= 0) & (y >= 0) & (x < heatmap_size) & (y < heatmap_size) - vis = keypoints[..., 2] > 0 - valid = (valid_loc & vis).long() - - lin_ind = y * heatmap_size + x - heatmaps = lin_ind * valid - - return heatmaps, valid - - -@torch.jit.script_if_tracing -def heatmaps_to_keypoints(maps: torch.Tensor, rois: torch.Tensor) -> torch.Tensor: - """ - Extract predicted keypoint locations from heatmaps. - - Args: - maps (Tensor): (#ROIs, #keypoints, POOL_H, POOL_W). The predicted heatmap of logits for - each ROI and each keypoint. - rois (Tensor): (#ROIs, 4). The box of each ROI. - - Returns: - Tensor of shape (#ROIs, #keypoints, 4) with the last dimension corresponding to - (x, y, logit, score) for each keypoint. 
- - When converting discrete pixel indices in an NxN image to a continuous keypoint coordinate, - we maintain consistency with :meth:`Keypoints.to_heatmap` by using the conversion from - Heckbert 1990: c = d + 0.5, where d is a discrete coordinate and c is a continuous coordinate. - """ - - offset_x = rois[:, 0] - offset_y = rois[:, 1] - - widths = (rois[:, 2] - rois[:, 0]).clamp(min=1) - heights = (rois[:, 3] - rois[:, 1]).clamp(min=1) - widths_ceil = widths.ceil() - heights_ceil = heights.ceil() - - num_rois, num_keypoints = maps.shape[:2] - xy_preds = maps.new_zeros(rois.shape[0], num_keypoints, 4) - - width_corrections = widths / widths_ceil - height_corrections = heights / heights_ceil - - keypoints_idx = torch.arange(num_keypoints, device=maps.device) - - for i in range(num_rois): - outsize = (int(heights_ceil[i]), int(widths_ceil[i])) - roi_map = F.interpolate(maps[[i]], size=outsize, mode="bicubic", align_corners=False) - - # Although semantically equivalent, `reshape` is used instead of `squeeze` due - # to limitation during ONNX export of `squeeze` in scripting mode - roi_map = roi_map.reshape(roi_map.shape[1:]) # keypoints x H x W - - # softmax over the spatial region - max_score, _ = roi_map.view(num_keypoints, -1).max(1) - max_score = max_score.view(num_keypoints, 1, 1) - tmp_full_resolution = (roi_map - max_score).exp_() - tmp_pool_resolution = (maps[i] - max_score).exp_() - # Produce scores over the region H x W, but normalize with POOL_H x POOL_W, - # so that the scores of objects of different absolute sizes will be more comparable - roi_map_scores = tmp_full_resolution / tmp_pool_resolution.sum((1, 2), keepdim=True) - - w = roi_map.shape[2] - pos = roi_map.view(num_keypoints, -1).argmax(1) - - x_int = pos % w - y_int = (pos - x_int) // w - - assert ( - roi_map_scores[keypoints_idx, y_int, x_int] - == roi_map_scores.view(num_keypoints, -1).max(1)[0] - ).all() - - x = (x_int.float() + 0.5) * width_corrections[i] - y = (y_int.float() + 0.5) * height_corrections[i] - - xy_preds[i, :, 0] = x + offset_x[i] - xy_preds[i, :, 1] = y + offset_y[i] - xy_preds[i, :, 2] = roi_map[keypoints_idx, y_int, x_int] - xy_preds[i, :, 3] = roi_map_scores[keypoints_idx, y_int, x_int] - - return xy_preds diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/mobile/android/gradlew.bat b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/mobile/android/gradlew.bat deleted file mode 100644 index 9618d8d9607cd91a0efb866bcac4810064ba6fac..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/mobile/android/gradlew.bat +++ /dev/null @@ -1,100 +0,0 @@ -@rem -@rem Copyright 2015 the original author or authors. -@rem -@rem Licensed under the Apache License, Version 2.0 (the "License"); -@rem you may not use this file except in compliance with the License. -@rem You may obtain a copy of the License at -@rem -@rem https://www.apache.org/licenses/LICENSE-2.0 -@rem -@rem Unless required by applicable law or agreed to in writing, software -@rem distributed under the License is distributed on an "AS IS" BASIS, -@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -@rem See the License for the specific language governing permissions and -@rem limitations under the License. 
-@rem - -@if "%DEBUG%" == "" @echo off -@rem ########################################################################## -@rem -@rem Gradle startup script for Windows -@rem -@rem ########################################################################## - -@rem Set local scope for the variables with windows NT shell -if "%OS%"=="Windows_NT" setlocal - -set DIRNAME=%~dp0 -if "%DIRNAME%" == "" set DIRNAME=. -set APP_BASE_NAME=%~n0 -set APP_HOME=%DIRNAME% - -@rem Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script. -set DEFAULT_JVM_OPTS="-Xmx64m" "-Xms64m" - -@rem Find java.exe -if defined JAVA_HOME goto findJavaFromJavaHome - -set JAVA_EXE=java.exe -%JAVA_EXE% -version >NUL 2>&1 -if "%ERRORLEVEL%" == "0" goto init - -echo. -echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH. -echo. -echo Please set the JAVA_HOME variable in your environment to match the -echo location of your Java installation. - -goto fail - -:findJavaFromJavaHome -set JAVA_HOME=%JAVA_HOME:"=% -set JAVA_EXE=%JAVA_HOME%/bin/java.exe - -if exist "%JAVA_EXE%" goto init - -echo. -echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME% -echo. -echo Please set the JAVA_HOME variable in your environment to match the -echo location of your Java installation. - -goto fail - -:init -@rem Get command-line arguments, handling Windows variants - -if not "%OS%" == "Windows_NT" goto win9xME_args - -:win9xME_args -@rem Slurp the command line arguments. -set CMD_LINE_ARGS= -set _SKIP=2 - -:win9xME_args_slurp -if "x%~1" == "x" goto execute - -set CMD_LINE_ARGS=%* - -:execute -@rem Setup the command line - -set CLASSPATH=%APP_HOME%\gradle\wrapper\gradle-wrapper.jar - -@rem Execute Gradle -"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %GRADLE_OPTS% "-Dorg.gradle.appname=%APP_BASE_NAME%" -classpath "%CLASSPATH%" org.gradle.wrapper.GradleWrapperMain %CMD_LINE_ARGS% - -:end -@rem End local scope for the variables with windows NT shell -if "%ERRORLEVEL%"=="0" goto mainEnd - -:fail -rem Set variable GRADLE_EXIT_CONSOLE if you need the _script_ return code instead of -rem the _cmd.exe /c_ return code! -if not "" == "%GRADLE_EXIT_CONSOLE%" exit 1 -exit /b 1 - -:mainEnd -if "%OS%"=="Windows_NT" endlocal - -:omega diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/mobile/android/lib_support/src/main/java/org/tensorflow/lite/examples/classification/tflite/ClassifierQuantizedMobileNet.java b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/mobile/android/lib_support/src/main/java/org/tensorflow/lite/examples/classification/tflite/ClassifierQuantizedMobileNet.java deleted file mode 100644 index 94b06e3df659005c287733a8a37672863fdadd71..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/mobile/android/lib_support/src/main/java/org/tensorflow/lite/examples/classification/tflite/ClassifierQuantizedMobileNet.java +++ /dev/null @@ -1,72 +0,0 @@ -/* Copyright 2017 The TensorFlow Authors. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - -package org.tensorflow.lite.examples.classification.tflite; - -import android.app.Activity; -import java.io.IOException; -import org.tensorflow.lite.examples.classification.tflite.Classifier.Device; -import org.tensorflow.lite.support.common.TensorOperator; -import org.tensorflow.lite.support.common.ops.NormalizeOp; - -/** This TensorFlow Lite classifier works with the quantized MobileNet model. */ -public class ClassifierQuantizedMobileNet extends Classifier { - - /** - * The quantized model does not require normalization, thus set mean as 0.0f, and std as 1.0f to - * bypass the normalization. - */ - private static final float IMAGE_MEAN = 0.0f; - - private static final float IMAGE_STD = 1.0f; - - /** Quantized MobileNet requires additional dequantization to the output probability. */ - private static final float PROBABILITY_MEAN = 0.0f; - - private static final float PROBABILITY_STD = 255.0f; - - /** - * Initializes a {@code ClassifierQuantizedMobileNet}. - * - * @param activity - */ - public ClassifierQuantizedMobileNet(Activity activity, Device device, int numThreads) - throws IOException { - super(activity, device, numThreads); - } - - @Override - protected String getModelPath() { - // you can download this file from - // see build.gradle for where to obtain this file. It should be auto - // downloaded into assets. 
- return "model_quant_0.tflite"; - } - - @Override - protected String getLabelPath() { - return "labels.txt"; - } - - @Override - protected TensorOperator getPreprocessNormalizeOp() { - return new NormalizeOp(IMAGE_MEAN, IMAGE_STD); - } - - @Override - protected TensorOperator getPostprocessNormalizeOp() { - return new NormalizeOp(PROBABILITY_MEAN, PROBABILITY_STD); - } -} diff --git a/spaces/course-demos/marian-finetuned-kde4-en-to-fr/README.md b/spaces/course-demos/marian-finetuned-kde4-en-to-fr/README.md deleted file mode 100644 index 4702b5cd4157148119786012193f262fa57ce07c..0000000000000000000000000000000000000000 --- a/spaces/course-demos/marian-finetuned-kde4-en-to-fr/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Marian Finetuned Kde4 En To Fr -emoji: 📈 -colorFrom: pink -colorTo: pink -sdk: gradio -sdk_version: 2.9b40 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/cscan/CodeFormer/CodeFormer/basicsr/ops/upfirdn2d/src/upfirdn2d.cpp b/spaces/cscan/CodeFormer/CodeFormer/basicsr/ops/upfirdn2d/src/upfirdn2d.cpp deleted file mode 100644 index 43d0b6783a5b512b55815a291fcac2bebeea31e0..0000000000000000000000000000000000000000 --- a/spaces/cscan/CodeFormer/CodeFormer/basicsr/ops/upfirdn2d/src/upfirdn2d.cpp +++ /dev/null @@ -1,24 +0,0 @@ -// from https://github.com/rosinality/stylegan2-pytorch/blob/master/op/upfirdn2d.cpp -#include - - -torch::Tensor upfirdn2d_op(const torch::Tensor& input, const torch::Tensor& kernel, - int up_x, int up_y, int down_x, int down_y, - int pad_x0, int pad_x1, int pad_y0, int pad_y1); - -#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) - -torch::Tensor upfirdn2d(const torch::Tensor& input, const torch::Tensor& kernel, - int up_x, int up_y, int down_x, int down_y, - int pad_x0, int pad_x1, int pad_y0, int pad_y1) { - CHECK_CUDA(input); - CHECK_CUDA(kernel); - - return upfirdn2d_op(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("upfirdn2d", &upfirdn2d, "upfirdn2d (CUDA)"); -} diff --git a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/audio2pose_models/cvae.py b/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/audio2pose_models/cvae.py deleted file mode 100644 index 4dd4d128445e197ebb3417905750ff8ef384b702..0000000000000000000000000000000000000000 --- a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/audio2pose_models/cvae.py +++ /dev/null @@ -1,149 +0,0 @@ -import torch -import torch.nn.functional as F -from torch import nn -from Demo_TFR_Pirenderer.src.audio2pose_models.res_unet import ResUnet - -def class2onehot(idx, class_num): - - assert torch.max(idx).item() < class_num - onehot = torch.zeros(idx.size(0), class_num).to(idx.device) - onehot.scatter_(1, idx, 1) - return onehot - -class CVAE(nn.Module): - def __init__(self, cfg): - super().__init__() - encoder_layer_sizes = cfg.MODEL.CVAE.ENCODER_LAYER_SIZES - decoder_layer_sizes = cfg.MODEL.CVAE.DECODER_LAYER_SIZES - latent_size = cfg.MODEL.CVAE.LATENT_SIZE - num_classes = cfg.DATASET.NUM_CLASSES - audio_emb_in_size = cfg.MODEL.CVAE.AUDIO_EMB_IN_SIZE - audio_emb_out_size = cfg.MODEL.CVAE.AUDIO_EMB_OUT_SIZE - seq_len = cfg.MODEL.CVAE.SEQ_LEN - - self.latent_size = latent_size - - 
self.encoder = ENCODER(encoder_layer_sizes, latent_size, num_classes, - audio_emb_in_size, audio_emb_out_size, seq_len) - self.decoder = DECODER(decoder_layer_sizes, latent_size, num_classes, - audio_emb_in_size, audio_emb_out_size, seq_len) - def reparameterize(self, mu, logvar): - std = torch.exp(0.5 * logvar) - eps = torch.randn_like(std) - return mu + eps * std - - def forward(self, batch): - batch = self.encoder(batch) - mu = batch['mu'] - logvar = batch['logvar'] - z = self.reparameterize(mu, logvar) - batch['z'] = z - return self.decoder(batch) - - def test(self, batch): - ''' - class_id = batch['class'] - z = torch.randn([class_id.size(0), self.latent_size]).to(class_id.device) - batch['z'] = z - ''' - return self.decoder(batch) - -class ENCODER(nn.Module): - def __init__(self, layer_sizes, latent_size, num_classes, - audio_emb_in_size, audio_emb_out_size, seq_len): - super().__init__() - - self.resunet = ResUnet() - self.num_classes = num_classes - self.seq_len = seq_len - - self.MLP = nn.Sequential() - layer_sizes[0] += latent_size + seq_len*audio_emb_out_size + 6 - for i, (in_size, out_size) in enumerate(zip(layer_sizes[:-1], layer_sizes[1:])): - self.MLP.add_module( - name="L{:d}".format(i), module=nn.Linear(in_size, out_size)) - self.MLP.add_module(name="A{:d}".format(i), module=nn.ReLU()) - - self.linear_means = nn.Linear(layer_sizes[-1], latent_size) - self.linear_logvar = nn.Linear(layer_sizes[-1], latent_size) - self.linear_audio = nn.Linear(audio_emb_in_size, audio_emb_out_size) - - self.classbias = nn.Parameter(torch.randn(self.num_classes, latent_size)) - - def forward(self, batch): - class_id = batch['class'] - pose_motion_gt = batch['pose_motion_gt'] #bs seq_len 6 - ref = batch['ref'] #bs 6 - bs = pose_motion_gt.shape[0] - audio_in = batch['audio_emb'] # bs seq_len audio_emb_in_size - - #pose encode - pose_emb = self.resunet(pose_motion_gt.unsqueeze(1)) #bs 1 seq_len 6 - pose_emb = pose_emb.reshape(bs, -1) #bs seq_len*6 - - #audio mapping - print(audio_in.shape) - audio_out = self.linear_audio(audio_in) # bs seq_len audio_emb_out_size - audio_out = audio_out.reshape(bs, -1) - - class_bias = self.classbias[class_id] #bs latent_size - x_in = torch.cat([ref, pose_emb, audio_out, class_bias], dim=-1) #bs seq_len*(audio_emb_out_size+6)+latent_size - x_out = self.MLP(x_in) - - mu = self.linear_means(x_out) - logvar = self.linear_means(x_out) #bs latent_size - - batch.update({'mu':mu, 'logvar':logvar}) - return batch - -class DECODER(nn.Module): - def __init__(self, layer_sizes, latent_size, num_classes, - audio_emb_in_size, audio_emb_out_size, seq_len): - super().__init__() - - self.resunet = ResUnet() - self.num_classes = num_classes - self.seq_len = seq_len - - self.MLP = nn.Sequential() - input_size = latent_size + seq_len*audio_emb_out_size + 6 - for i, (in_size, out_size) in enumerate(zip([input_size]+layer_sizes[:-1], layer_sizes)): - self.MLP.add_module( - name="L{:d}".format(i), module=nn.Linear(in_size, out_size)) - if i+1 < len(layer_sizes): - self.MLP.add_module(name="A{:d}".format(i), module=nn.ReLU()) - else: - self.MLP.add_module(name="sigmoid", module=nn.Sigmoid()) - - self.pose_linear = nn.Linear(6, 6) - self.linear_audio = nn.Linear(audio_emb_in_size, audio_emb_out_size) - - self.classbias = nn.Parameter(torch.randn(self.num_classes, latent_size)) - - def forward(self, batch): - - z = batch['z'] #bs latent_size - bs = z.shape[0] - class_id = batch['class'] - ref = batch['ref'] #bs 6 - audio_in = batch['audio_emb'] # bs seq_len audio_emb_in_size - 
#print('audio_in: ', audio_in[:, :, :10]) - - audio_out = self.linear_audio(audio_in) # bs seq_len audio_emb_out_size - #print('audio_out: ', audio_out[:, :, :10]) - audio_out = audio_out.reshape([bs, -1]) # bs seq_len*audio_emb_out_size - class_bias = self.classbias[class_id] #bs latent_size - - z = z + class_bias - x_in = torch.cat([ref, z, audio_out], dim=-1) - x_out = self.MLP(x_in) # bs layer_sizes[-1] - x_out = x_out.reshape((bs, self.seq_len, -1)) - - #print('x_out: ', x_out) - - pose_emb = self.resunet(x_out.unsqueeze(1)) #bs 1 seq_len 6 - - pose_motion_pred = self.pose_linear(pose_emb.squeeze(1)) #bs seq_len 6 - - batch.update({'pose_motion_pred':pose_motion_pred}) - return batch diff --git a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/options/test_options.py b/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/options/test_options.py deleted file mode 100644 index 4ff3ad142779850d1d5a1640bc00f70d34d4a862..0000000000000000000000000000000000000000 --- a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/face3d/options/test_options.py +++ /dev/null @@ -1,21 +0,0 @@ -"""This script contains the test options for Deep3DFaceRecon_pytorch -""" - -from .base_options import BaseOptions - - -class TestOptions(BaseOptions): - """This class includes test options. - - It also includes shared options defined in BaseOptions. - """ - - def initialize(self, parser): - parser = BaseOptions.initialize(self, parser) # define shared options - parser.add_argument('--phase', type=str, default='test', help='train, val, test, etc') - parser.add_argument('--dataset_mode', type=str, default=None, help='chooses how datasets are loaded. [None | flist]') - parser.add_argument('--img_folder', type=str, default='examples', help='folder for test images.') - - # Dropout and Batchnorm has different behavior during training and test. 
- self.isTrain = False - return parser diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jsonschema/_validators.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jsonschema/_validators.py deleted file mode 100644 index 45b53c9c47a82b9f69bf786d9596b8b1166628db..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jsonschema/_validators.py +++ /dev/null @@ -1,449 +0,0 @@ -from fractions import Fraction -import re - -from jsonschema._utils import ( - ensure_list, - equal, - extras_msg, - find_additional_properties, - find_evaluated_item_indexes_by_schema, - find_evaluated_property_keys_by_schema, - unbool, - uniq, -) -from jsonschema.exceptions import FormatError, ValidationError - - -def patternProperties(validator, patternProperties, instance, schema): - if not validator.is_type(instance, "object"): - return - - for pattern, subschema in patternProperties.items(): - for k, v in instance.items(): - if re.search(pattern, k): - yield from validator.descend( - v, subschema, path=k, schema_path=pattern, - ) - - -def propertyNames(validator, propertyNames, instance, schema): - if not validator.is_type(instance, "object"): - return - - for property in instance: - yield from validator.descend(instance=property, schema=propertyNames) - - -def additionalProperties(validator, aP, instance, schema): - if not validator.is_type(instance, "object"): - return - - extras = set(find_additional_properties(instance, schema)) - - if validator.is_type(aP, "object"): - for extra in extras: - yield from validator.descend(instance[extra], aP, path=extra) - elif not aP and extras: - if "patternProperties" in schema: - verb = "does" if len(extras) == 1 else "do" - joined = ", ".join(repr(each) for each in sorted(extras)) - patterns = ", ".join( - repr(each) for each in sorted(schema["patternProperties"]) - ) - error = f"{joined} {verb} not match any of the regexes: {patterns}" - yield ValidationError(error) - else: - error = "Additional properties are not allowed (%s %s unexpected)" - yield ValidationError(error % extras_msg(extras)) - - -def items(validator, items, instance, schema): - if not validator.is_type(instance, "array"): - return - - prefix = len(schema.get("prefixItems", [])) - total = len(instance) - if items is False and total > prefix: - message = f"Expected at most {prefix} items, but found {total}" - yield ValidationError(message) - else: - for index in range(prefix, total): - yield from validator.descend( - instance=instance[index], - schema=items, - path=index, - ) - - -def additionalItems(validator, aI, instance, schema): - if ( - not validator.is_type(instance, "array") - or validator.is_type(schema.get("items", {}), "object") - ): - return - - len_items = len(schema.get("items", [])) - if validator.is_type(aI, "object"): - for index, item in enumerate(instance[len_items:], start=len_items): - yield from validator.descend(item, aI, path=index) - elif not aI and len(instance) > len(schema.get("items", [])): - error = "Additional items are not allowed (%s %s unexpected)" - yield ValidationError( - error % extras_msg(instance[len(schema.get("items", [])):]), - ) - - -def const(validator, const, instance, schema): - if not equal(instance, const): - yield ValidationError(f"{const!r} was expected") - - -def contains(validator, contains, instance, schema): - if not validator.is_type(instance, "array"): - return - - matches = 0 - min_contains = 
schema.get("minContains", 1) - max_contains = schema.get("maxContains", len(instance)) - - for each in instance: - if validator.evolve(schema=contains).is_valid(each): - matches += 1 - if matches > max_contains: - yield ValidationError( - "Too many items match the given schema " - f"(expected at most {max_contains})", - validator="maxContains", - validator_value=max_contains, - ) - return - - if matches < min_contains: - if not matches: - yield ValidationError( - f"{instance!r} does not contain items " - "matching the given schema", - ) - else: - yield ValidationError( - "Too few items match the given schema (expected at least " - f"{min_contains} but only {matches} matched)", - validator="minContains", - validator_value=min_contains, - ) - - -def exclusiveMinimum(validator, minimum, instance, schema): - if not validator.is_type(instance, "number"): - return - - if instance <= minimum: - yield ValidationError( - f"{instance!r} is less than or equal to " - f"the minimum of {minimum!r}", - ) - - -def exclusiveMaximum(validator, maximum, instance, schema): - if not validator.is_type(instance, "number"): - return - - if instance >= maximum: - yield ValidationError( - f"{instance!r} is greater than or equal " - f"to the maximum of {maximum!r}", - ) - - -def minimum(validator, minimum, instance, schema): - if not validator.is_type(instance, "number"): - return - - if instance < minimum: - message = f"{instance!r} is less than the minimum of {minimum!r}" - yield ValidationError(message) - - -def maximum(validator, maximum, instance, schema): - if not validator.is_type(instance, "number"): - return - - if instance > maximum: - message = f"{instance!r} is greater than the maximum of {maximum!r}" - yield ValidationError(message) - - -def multipleOf(validator, dB, instance, schema): - if not validator.is_type(instance, "number"): - return - - if isinstance(dB, float): - quotient = instance / dB - try: - failed = int(quotient) != quotient - except OverflowError: - # When `instance` is large and `dB` is less than one, - # quotient can overflow to infinity; and then casting to int - # raises an error. - # - # In this case we fall back to Fraction logic, which is - # exact and cannot overflow. The performance is also - # acceptable: we try the fast all-float option first, and - # we know that fraction(dB) can have at most a few hundred - # digits in each part. The worst-case slowdown is therefore - # for already-slow enormous integers or Decimals. 
- failed = (Fraction(instance) / Fraction(dB)).denominator != 1 - else: - failed = instance % dB - - if failed: - yield ValidationError(f"{instance!r} is not a multiple of {dB}") - - -def minItems(validator, mI, instance, schema): - if validator.is_type(instance, "array") and len(instance) < mI: - yield ValidationError(f"{instance!r} is too short") - - -def maxItems(validator, mI, instance, schema): - if validator.is_type(instance, "array") and len(instance) > mI: - yield ValidationError(f"{instance!r} is too long") - - -def uniqueItems(validator, uI, instance, schema): - if ( - uI - and validator.is_type(instance, "array") - and not uniq(instance) - ): - yield ValidationError(f"{instance!r} has non-unique elements") - - -def pattern(validator, patrn, instance, schema): - if ( - validator.is_type(instance, "string") - and not re.search(patrn, instance) - ): - yield ValidationError(f"{instance!r} does not match {patrn!r}") - - -def format(validator, format, instance, schema): - if validator.format_checker is not None: - try: - validator.format_checker.check(instance, format) - except FormatError as error: - yield ValidationError(error.message, cause=error.cause) - - -def minLength(validator, mL, instance, schema): - if validator.is_type(instance, "string") and len(instance) < mL: - yield ValidationError(f"{instance!r} is too short") - - -def maxLength(validator, mL, instance, schema): - if validator.is_type(instance, "string") and len(instance) > mL: - yield ValidationError(f"{instance!r} is too long") - - -def dependentRequired(validator, dependentRequired, instance, schema): - if not validator.is_type(instance, "object"): - return - - for property, dependency in dependentRequired.items(): - if property not in instance: - continue - - for each in dependency: - if each not in instance: - message = f"{each!r} is a dependency of {property!r}" - yield ValidationError(message) - - -def dependentSchemas(validator, dependentSchemas, instance, schema): - if not validator.is_type(instance, "object"): - return - - for property, dependency in dependentSchemas.items(): - if property not in instance: - continue - yield from validator.descend( - instance, dependency, schema_path=property, - ) - - -def enum(validator, enums, instance, schema): - if instance == 0 or instance == 1: - unbooled = unbool(instance) - if all(unbooled != unbool(each) for each in enums): - yield ValidationError(f"{instance!r} is not one of {enums!r}") - elif instance not in enums: - yield ValidationError(f"{instance!r} is not one of {enums!r}") - - -def ref(validator, ref, instance, schema): - yield from validator._validate_reference(ref=ref, instance=instance) - - -def dynamicRef(validator, dynamicRef, instance, schema): - yield from validator._validate_reference(ref=dynamicRef, instance=instance) - - -def type(validator, types, instance, schema): - types = ensure_list(types) - - if not any(validator.is_type(instance, type) for type in types): - reprs = ", ".join(repr(type) for type in types) - yield ValidationError(f"{instance!r} is not of type {reprs}") - - -def properties(validator, properties, instance, schema): - if not validator.is_type(instance, "object"): - return - - for property, subschema in properties.items(): - if property in instance: - yield from validator.descend( - instance[property], - subschema, - path=property, - schema_path=property, - ) - - -def required(validator, required, instance, schema): - if not validator.is_type(instance, "object"): - return - for property in required: - if property not in instance: 
- yield ValidationError(f"{property!r} is a required property") - - -def minProperties(validator, mP, instance, schema): - if validator.is_type(instance, "object") and len(instance) < mP: - yield ValidationError(f"{instance!r} does not have enough properties") - - -def maxProperties(validator, mP, instance, schema): - if not validator.is_type(instance, "object"): - return - if validator.is_type(instance, "object") and len(instance) > mP: - yield ValidationError(f"{instance!r} has too many properties") - - -def allOf(validator, allOf, instance, schema): - for index, subschema in enumerate(allOf): - yield from validator.descend(instance, subschema, schema_path=index) - - -def anyOf(validator, anyOf, instance, schema): - all_errors = [] - for index, subschema in enumerate(anyOf): - errs = list(validator.descend(instance, subschema, schema_path=index)) - if not errs: - break - all_errors.extend(errs) - else: - yield ValidationError( - f"{instance!r} is not valid under any of the given schemas", - context=all_errors, - ) - - -def oneOf(validator, oneOf, instance, schema): - subschemas = enumerate(oneOf) - all_errors = [] - for index, subschema in subschemas: - errs = list(validator.descend(instance, subschema, schema_path=index)) - if not errs: - first_valid = subschema - break - all_errors.extend(errs) - else: - yield ValidationError( - f"{instance!r} is not valid under any of the given schemas", - context=all_errors, - ) - - more_valid = [ - each for _, each in subschemas - if validator.evolve(schema=each).is_valid(instance) - ] - if more_valid: - more_valid.append(first_valid) - reprs = ", ".join(repr(schema) for schema in more_valid) - yield ValidationError(f"{instance!r} is valid under each of {reprs}") - - -def not_(validator, not_schema, instance, schema): - if validator.evolve(schema=not_schema).is_valid(instance): - message = f"{instance!r} should not be valid under {not_schema!r}" - yield ValidationError(message) - - -def if_(validator, if_schema, instance, schema): - if validator.evolve(schema=if_schema).is_valid(instance): - if "then" in schema: - then = schema["then"] - yield from validator.descend(instance, then, schema_path="then") - elif "else" in schema: - else_ = schema["else"] - yield from validator.descend(instance, else_, schema_path="else") - - -def unevaluatedItems(validator, unevaluatedItems, instance, schema): - if not validator.is_type(instance, "array"): - return - evaluated_item_indexes = find_evaluated_item_indexes_by_schema( - validator, instance, schema, - ) - unevaluated_items = [ - item for index, item in enumerate(instance) - if index not in evaluated_item_indexes - ] - if unevaluated_items: - error = "Unevaluated items are not allowed (%s %s unexpected)" - yield ValidationError(error % extras_msg(unevaluated_items)) - - -def unevaluatedProperties(validator, unevaluatedProperties, instance, schema): - if not validator.is_type(instance, "object"): - return - evaluated_keys = find_evaluated_property_keys_by_schema( - validator, instance, schema, - ) - unevaluated_keys = [] - for property in instance: - if property not in evaluated_keys: - for _ in validator.descend( - instance[property], - unevaluatedProperties, - path=property, - schema_path=property, - ): - # FIXME: Include context for each unevaluated property - # indicating why it's invalid under the subschema. 
- unevaluated_keys.append(property) - - if unevaluated_keys: - if unevaluatedProperties is False: - error = "Unevaluated properties are not allowed (%s %s unexpected)" - yield ValidationError(error % extras_msg(unevaluated_keys)) - else: - error = ( - "Unevaluated properties are not valid under " - "the given schema (%s %s unevaluated and invalid)" - ) - yield ValidationError(error % extras_msg(unevaluated_keys)) - - -def prefixItems(validator, prefixItems, instance, schema): - if not validator.is_type(instance, "array"): - return - - for (index, item), subschema in zip(enumerate(instance), prefixItems): - yield from validator.descend( - instance=item, - schema=subschema, - schema_path=index, - path=index, - ) diff --git a/spaces/dcq/freegpt-webui/g4f/Provider/Providers/ChatgptAi.py b/spaces/dcq/freegpt-webui/g4f/Provider/Providers/ChatgptAi.py deleted file mode 100644 index 504fdb37d4099e5f21eeea4a5101e3e42f59aec2..0000000000000000000000000000000000000000 --- a/spaces/dcq/freegpt-webui/g4f/Provider/Providers/ChatgptAi.py +++ /dev/null @@ -1,51 +0,0 @@ -import os -import requests, re -from ...typing import sha256, Dict, get_type_hints - -url = 'https://chatgpt.ai/gpt-4/' -model = ['gpt-4'] -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - chat = '' - for message in messages: - chat += '%s: %s\n' % (message['role'], message['content']) - chat += 'assistant: ' - - response = requests.get('https://chatgpt.ai/gpt-4/') - - nonce, post_id, _, bot_id = re.findall(r'data-nonce="(.*)"\n data-post-id="(.*)"\n data-url="(.*)"\n data-bot-id="(.*)"\n data-width', response.text)[0] - - headers = { - 'authority': 'chatgpt.ai', - 'accept': '*/*', - 'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3', - 'cache-control': 'no-cache', - 'origin': 'https://chatgpt.ai', - 'pragma': 'no-cache', - 'referer': 'https://chatgpt.ai/gpt-4/', - 'sec-ch-ua': '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"', - 'sec-ch-ua-mobile': '?0', - 'sec-ch-ua-platform': '"Windows"', - 'sec-fetch-dest': 'empty', - 'sec-fetch-mode': 'cors', - 'sec-fetch-site': 'same-origin', - 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36', - } - data = { - '_wpnonce': nonce, - 'post_id': post_id, - 'url': 'https://chatgpt.ai/gpt-4', - 'action': 'wpaicg_chat_shortcode_message', - 'message': chat, - 'bot_id': bot_id - } - - response = requests.post('https://chatgpt.ai/wp-admin/admin-ajax.php', - headers=headers, data=data) - - yield (response.json()['data']) - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/safety_checker.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/safety_checker.py deleted file mode 100644 index 84b8aeb7bcde36bafd3412a800149f41e0b331c8..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/safety_checker.py +++ /dev/null @@ -1,122 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import numpy as np -import torch -import torch.nn as nn -from transformers import CLIPConfig, CLIPVisionModel, PreTrainedModel - -from ...utils import logging - - -logger = logging.get_logger(__name__) - - -def cosine_distance(image_embeds, text_embeds): - normalized_image_embeds = nn.functional.normalize(image_embeds) - normalized_text_embeds = nn.functional.normalize(text_embeds) - return torch.mm(normalized_image_embeds, normalized_text_embeds.t()) - - -class StableDiffusionSafetyChecker(PreTrainedModel): - config_class = CLIPConfig - - _no_split_modules = ["CLIPEncoderLayer"] - - def __init__(self, config: CLIPConfig): - super().__init__(config) - - self.vision_model = CLIPVisionModel(config.vision_config) - self.visual_projection = nn.Linear(config.vision_config.hidden_size, config.projection_dim, bias=False) - - self.concept_embeds = nn.Parameter(torch.ones(17, config.projection_dim), requires_grad=False) - self.special_care_embeds = nn.Parameter(torch.ones(3, config.projection_dim), requires_grad=False) - - self.concept_embeds_weights = nn.Parameter(torch.ones(17), requires_grad=False) - self.special_care_embeds_weights = nn.Parameter(torch.ones(3), requires_grad=False) - - @torch.no_grad() - def forward(self, clip_input, images): - pooled_output = self.vision_model(clip_input)[1] # pooled_output - image_embeds = self.visual_projection(pooled_output) - - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 - special_cos_dist = cosine_distance(image_embeds, self.special_care_embeds).cpu().float().numpy() - cos_dist = cosine_distance(image_embeds, self.concept_embeds).cpu().float().numpy() - - result = [] - batch_size = image_embeds.shape[0] - for i in range(batch_size): - result_img = {"special_scores": {}, "special_care": [], "concept_scores": {}, "bad_concepts": []} - - # increase this value to create a stronger `nfsw` filter - # at the cost of increasing the possibility of filtering benign images - adjustment = 0.0 - - for concept_idx in range(len(special_cos_dist[0])): - concept_cos = special_cos_dist[i][concept_idx] - concept_threshold = self.special_care_embeds_weights[concept_idx].item() - result_img["special_scores"][concept_idx] = round(concept_cos - concept_threshold + adjustment, 3) - if result_img["special_scores"][concept_idx] > 0: - result_img["special_care"].append({concept_idx, result_img["special_scores"][concept_idx]}) - adjustment = 0.01 - - for concept_idx in range(len(cos_dist[0])): - concept_cos = cos_dist[i][concept_idx] - concept_threshold = self.concept_embeds_weights[concept_idx].item() - result_img["concept_scores"][concept_idx] = round(concept_cos - concept_threshold + adjustment, 3) - if result_img["concept_scores"][concept_idx] > 0: - result_img["bad_concepts"].append(concept_idx) - - result.append(result_img) - - has_nsfw_concepts = [len(res["bad_concepts"]) > 0 for res in result] - - for idx, has_nsfw_concept in enumerate(has_nsfw_concepts): - if 
has_nsfw_concept: - images[idx] = np.zeros(images[idx].shape) # black image - - if any(has_nsfw_concepts): - logger.warning( - "Potential NSFW content was detected in one or more images. A black image will be returned instead." - " Try again with a different prompt and/or seed." - ) - - return images, has_nsfw_concepts - - @torch.no_grad() - def forward_onnx(self, clip_input: torch.FloatTensor, images: torch.FloatTensor): - pooled_output = self.vision_model(clip_input)[1] # pooled_output - image_embeds = self.visual_projection(pooled_output) - - special_cos_dist = cosine_distance(image_embeds, self.special_care_embeds) - cos_dist = cosine_distance(image_embeds, self.concept_embeds) - - # increase this value to create a stronger `nsfw` filter - # at the cost of increasing the possibility of filtering benign images - adjustment = 0.0 - - special_scores = special_cos_dist - self.special_care_embeds_weights + adjustment - # special_scores = special_scores.round(decimals=3) - special_care = torch.any(special_scores > 0, dim=1) - special_adjustment = special_care * 0.01 - special_adjustment = special_adjustment.unsqueeze(1).expand(-1, cos_dist.shape[1]) - - concept_scores = (cos_dist - self.concept_embeds_weights) + special_adjustment - # concept_scores = concept_scores.round(decimals=3) - has_nsfw_concepts = torch.any(concept_scores > 0, dim=1) - - images[has_nsfw_concepts] = 0.0 # black image - - return images, has_nsfw_concepts diff --git a/spaces/deepghs/nsfw_prediction/app.py b/spaces/deepghs/nsfw_prediction/app.py deleted file mode 100644 index 01aaeee2617734de7a3801c50cd61776c59eb1ec..0000000000000000000000000000000000000000 --- a/spaces/deepghs/nsfw_prediction/app.py +++ /dev/null @@ -1,58 +0,0 @@ -import os -from functools import lru_cache - -import gradio as gr -import numpy as np -from PIL import Image -from huggingface_hub import hf_hub_download -from imgutils.data import load_image -from imgutils.utils import open_onnx_model - -_MODELS = [ - ('nsfwjs.onnx', 224), - ('inception_v3.onnx', 299), -] -_MODEL_NAMES = [name for name, _ in _MODELS] -_DEFAULT_MODEL_NAME = _MODEL_NAMES[0] -_MODEL_TO_SIZE = dict(_MODELS) - - -@lru_cache() -def _onnx_model(name): - return open_onnx_model(hf_hub_download( - 'deepghs/imgutils-models', - f'nsfw/{name}' - )) - - -def _image_preprocess(image, size: int = 224) -> np.ndarray: - image = load_image(image, mode='RGB').resize((size, size), Image.NEAREST) - return (np.array(image) / 255.0)[None, ...] 
- - -_LABELS = ['drawings', 'hentai', 'neutral', 'porn', 'sexy'] - - -def predict(image, model_name): - input_ = _image_preprocess(image, _MODEL_TO_SIZE[model_name]).astype(np.float32) - output_, = _onnx_model(model_name).run(['dense_3'], {'input_1': input_}) - return dict(zip(_LABELS, map(float, output_[0]))) - - -if __name__ == '__main__': - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - gr_input_image = gr.Image(type='pil', label='Original Image') - gr_model = gr.Dropdown(_MODEL_NAMES, value=_DEFAULT_MODEL_NAME, label='Model') - gr_btn_submit = gr.Button(value='Tagging', variant='primary') - - with gr.Column(): - gr_ratings = gr.Label(label='Ratings') - - gr_btn_submit.click( - predict, - inputs=[gr_input_image, gr_model], - outputs=[gr_ratings], - ) - demo.queue(os.cpu_count()).launch() diff --git a/spaces/desudes/desu/README.md b/spaces/desudes/desu/README.md deleted file mode 100644 index 232ed9f79418b525626439c3950b15588bd7d895..0000000000000000000000000000000000000000 --- a/spaces/desudes/desu/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Desu -emoji: 📈 -colorFrom: green -colorTo: yellow -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/dfhhr4/QQsign/bin/unidbg-fetch-qsign.bat b/spaces/dfhhr4/QQsign/bin/unidbg-fetch-qsign.bat deleted file mode 100644 index 8b291e7303b0c07d14b714e5795473891363c85b..0000000000000000000000000000000000000000 --- a/spaces/dfhhr4/QQsign/bin/unidbg-fetch-qsign.bat +++ /dev/null @@ -1,89 +0,0 @@ -@rem -@rem Copyright 2015 the original author or authors. -@rem -@rem Licensed under the Apache License, Version 2.0 (the "License"); -@rem you may not use this file except in compliance with the License. -@rem You may obtain a copy of the License at -@rem -@rem https://www.apache.org/licenses/LICENSE-2.0 -@rem -@rem Unless required by applicable law or agreed to in writing, software -@rem distributed under the License is distributed on an "AS IS" BASIS, -@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -@rem See the License for the specific language governing permissions and -@rem limitations under the License. -@rem - -@if "%DEBUG%" == "" @echo off -@rem ########################################################################## -@rem -@rem unidbg-fetch-qsign startup script for Windows -@rem -@rem ########################################################################## - -@rem Set local scope for the variables with windows NT shell -if "%OS%"=="Windows_NT" setlocal - -set DIRNAME=%~dp0 -if "%DIRNAME%" == "" set DIRNAME=. -set APP_BASE_NAME=%~n0 -set APP_HOME=%DIRNAME%.. - -@rem Resolve any "." and ".." in APP_HOME to make it shorter. -for %%i in ("%APP_HOME%") do set APP_HOME=%%~fi - -@rem Add default JVM options here. You can also use JAVA_OPTS and UNIDBG_FETCH_QSIGN_OPTS to pass JVM options to this script. -set DEFAULT_JVM_OPTS= - -@rem Find java.exe -if defined JAVA_HOME goto findJavaFromJavaHome - -set JAVA_EXE=java.exe -%JAVA_EXE% -version >NUL 2>&1 -if "%ERRORLEVEL%" == "0" goto execute - -echo. -echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH. -echo. -echo Please set the JAVA_HOME variable in your environment to match the -echo location of your Java installation. - -goto fail - -:findJavaFromJavaHome -set JAVA_HOME=%JAVA_HOME:"=% -set JAVA_EXE=%JAVA_HOME%/bin/java.exe - -if exist "%JAVA_EXE%" goto execute - -echo. 
-echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME% -echo. -echo Please set the JAVA_HOME variable in your environment to match the -echo location of your Java installation. - -goto fail - -:execute -@rem Setup the command line - -set CLASSPATH=%APP_HOME%\lib\unidbg-fetch-qsign-1.1.9.jar;%APP_HOME%\lib\unidbg-android-105.jar;%APP_HOME%\lib\ktor-server-content-negotiation-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-json-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-status-pages-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-netty-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-host-common-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-core-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-events-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-websockets-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-cio-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-network-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-utils-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-io-jvm-2.3.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk8-1.8.22.jar;%APP_HOME%\lib\kotlinx-serialization-json-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-protobuf-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-core-jvm-1.5.1.jar;%APP_HOME%\lib\logback-classic-1.2.11.jar;%APP_HOME%\lib\kotlinx-coroutines-jdk8-1.7.1.jar;%APP_HOME%\lib\kotlinx-coroutines-core-jvm-1.7.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk7-1.8.22.jar;%APP_HOME%\lib\kotlin-reflect-1.8.10.jar;%APP_HOME%\lib\kotlin-stdlib-1.8.22.jar;%APP_HOME%\lib\slf4j-api-1.7.36.jar;%APP_HOME%\lib\kotlin-stdlib-common-1.8.22.jar;%APP_HOME%\lib\config-1.4.2.jar;%APP_HOME%\lib\jansi-2.4.0.jar;%APP_HOME%\lib\netty-codec-http2-4.1.92.Final.jar;%APP_HOME%\lib\alpn-api-1.1.3.v20160715.jar;%APP_HOME%\lib\netty-transport-native-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-epoll-4.1.92.Final.jar;%APP_HOME%\lib\logback-core-1.2.11.jar;%APP_HOME%\lib\annotations-23.0.0.jar;%APP_HOME%\lib\netty-codec-http-4.1.92.Final.jar;%APP_HOME%\lib\netty-handler-4.1.92.Final.jar;%APP_HOME%\lib\netty-codec-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-epoll-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-unix-common-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-4.1.92.Final.jar;%APP_HOME%\lib\netty-buffer-4.1.92.Final.jar;%APP_HOME%\lib\netty-resolver-4.1.92.Final.jar;%APP_HOME%\lib\netty-common-4.1.92.Final.jar - - -@rem Execute unidbg-fetch-qsign -"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %UNIDBG_FETCH_QSIGN_OPTS% -classpath "%CLASSPATH%" MainKt %* - -:end -@rem End local scope for the variables with windows NT shell -if "%ERRORLEVEL%"=="0" goto mainEnd - -:fail -rem Set variable UNIDBG_FETCH_QSIGN_EXIT_CONSOLE if you need the _script_ return code instead of -rem the _cmd.exe /c_ return code! -if not "" == "%UNIDBG_FETCH_QSIGN_EXIT_CONSOLE%" exit 1 -exit /b 1 - -:mainEnd -if "%OS%"=="Windows_NT" endlocal - -:omega diff --git a/spaces/diacanFperku/AutoGPT/Digi Loader 1 Exe TOP Download.md b/spaces/diacanFperku/AutoGPT/Digi Loader 1 Exe TOP Download.md deleted file mode 100644 index abf18e1c636434a82404a9e6b51f4f0a4c099fb1..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Digi Loader 1 Exe TOP Download.md +++ /dev/null @@ -1,124 +0,0 @@ - -

Digi Loader 1 Exe Download: What You Need to Know

-

If you are looking for a free and easy way to transfer files, connect to games online, or use the digiCLIP amplifier, you might want to try Digi Loader 1 Exe Download. This is a software that allows you to download and install various programs and files on your PC, USB stick, or mobile device. In this article, we will explain what Digi Loader 1 Exe is, how to use it, and where to get it.

-

Digi Loader 1 Exe Download


Download File: https://gohhs.com/2uFTEa



-

What is Digi Loader 1 Exe?

-

Digi Loader 1 Exe is file transfer software that downloads and installs various programs and files on your device. Some of the programs and files that you can download with it are:

- -

How to Use Digi Loader 1 Exe?

-

To use Digi Loader 1 Exe, you need to follow these steps:

-
    -
  1. Download Digi Loader 1 Exe from the link below.
  2. Run the Digi Loader 1 Exe file on your device.
  3. Select the program or file that you want to download and install.
  4. Follow the instructions on the screen to complete the installation.
  5. Enjoy using the program or file on your device.
-

Where to Get Digi Loader 1 Exe?

-

You can get Digi Loader 1 Exe from this link: https://bltlly.com/2tazUw. The link takes you to a page where you can download Digi Loader 1 Exe for free and find more information about its features.

-

Conclusion

-

Digi Loader 1 Exe is file transfer software that helps you download and install various programs and files on your device. You can use it to connect to games online, use the digiCLIP amplifier, turn your DIN-A4 card into a USB key, or update your PME modules. It is free and easy to use; you can download it from the link above and start using it right away.

-

-

How to Download and Install Digi Loader 1 Exe for Windows

-

If you want to use Digi Loader 1 Exe on your Windows PC, you need to follow these steps:

-
    -
  1. Go to this link: https://bltlly.com/2tazUw and click on the Download button.
  2. Save the Digi Loader 1 Exe file on your PC.
  3. Double-click on the Digi Loader 1 Exe file to run it.
  4. Select the language and the destination folder for the installation.
  5. Click on the Install button and wait for the installation to finish.
  6. Click on the Finish button and launch Digi Loader 1 Exe from your desktop or start menu.
-

How to Download and Install Digi Loader 1 Exe for Mac

-

If you want to use Digi Loader 1 Exe on your Mac, you need to follow these steps:

-
    -
  1. Go to this link: https://bltlly.com/2tazUw and click on the Download button.
  2. Save the Digi Loader 1 Exe file on your Mac.
  3. Open the Digi Loader 1 Exe file and drag it to the Applications folder.
  4. Open the Applications folder and double-click on the Digi Loader 1 Exe icon.
  5. Follow the instructions on the screen to complete the installation.
  6. Launch Digi Loader 1 Exe from your Applications folder or dock.
-

How to Download and Install Digi Loader 1 Exe for Android

-

If you want to use Digi Loader 1 Exe on your Android device, you need to follow these steps:

-
    -
  1. Go to this link: https://bltlly.com/2tazUw and click on the Download button.
  2. Save the Digi Loader 1 Exe file on your Android device.
  3. Open the Digi Loader 1 Exe file and tap on the Install button.
  4. Allow the installation from unknown sources if prompted.
  5. Wait for the installation to finish and tap on the Open button.

How to Download and Install Digi Loader 1 Exe for Linux

    If you want to use Digi Loader 1 Exe on your Linux device, you need to follow these steps:

    -
      -
    1. Go to this link: https://bltlly.com/2tazUw and click on the Download button.
    2. Save the Digi Loader 1 Exe file on your Linux device.
    3. Open a terminal and navigate to the folder where you saved the Digi Loader 1 Exe file.
    4. Type chmod +x DigiLoader1.exe to make the file executable.
    5. Type ./DigiLoader1.exe to run the file (see the command sketch after this list).
    6. Select the program or file that you want to download and install.
    7. Follow the instructions on the screen to complete the installation.
    8. Enjoy using the program or file on your device.
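The terminal steps above can be summed up in a few commands. This is a minimal sketch that assumes the downloaded file is named DigiLoader1.exe (the name used in the steps) and was saved to the Downloads folder; adjust the path and filename if yours differ.

```bash
# Go to the folder where the downloaded file was saved (assumed here to be ~/Downloads)
cd ~/Downloads

# Make the downloaded file executable
chmod +x DigiLoader1.exe

# Run it from the current folder
./DigiLoader1.exe
```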
    -

    How to Uninstall Digi Loader 1 Exe

    -

    If you want to uninstall Digi Loader 1 Exe from your device, you need to follow these steps:

    -
      -
    1. Open Digi Loader 1 Exe on your device.
    2. Select the program or file that you want to uninstall.
    3. Click on the Uninstall button and confirm your choice.
    4. Wait for the uninstallation to finish and close Digi Loader 1 Exe.
    5. Delete the Digi Loader 1 Exe file from your device.
    -

    Frequently Asked Questions about Digi Loader 1 Exe

    -

    Here are some of the most common questions and answers about Digi Loader 1 Exe:

    -