diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Catia V5 R20 64 Bit Crack The Ultimate Solution for Your Engineering Projects.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Catia V5 R20 64 Bit Crack The Ultimate Solution for Your Engineering Projects.md deleted file mode 100644 index 9b90655b3d1d98261582716f3d20c931527f63e7..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Catia V5 R20 64 Bit Crack The Ultimate Solution for Your Engineering Projects.md +++ /dev/null @@ -1,118 +0,0 @@ - -

Catia V5 R20 64 Bit Crack: What You Need to Know

-

If you are looking for a way to use one of the most popular and powerful software for Computer Aided Design, Engineering, Analysis, Manufacturing and Production (CAD/CAM/CAE), you might have heard of Catia V5 R20. This is a comprehensive program that allows you to design, analyze, and produce products in various industries, such as aerospace, automotive, shipbuilding, consumer goods, and more. However, using Catia V5 R20 is not cheap or easy. You need to pay a license fee to access its full features, and you need to activate it online with a valid serial number. This can be a hassle for some users who want to use Catia V5 R20 without spending too much money or time.

-

That's why some people look for a crack for Catia V5 R20. A crack is a file that modifies or bypasses the original software's security features, allowing you to use it without paying or activating it. A crack can be a patch, a keygen, or a loader that changes the software's code or behavior. In this article, we will show you how to download and install Catia V5 R20 64 bit crack on your Windows computer. We will also discuss the benefits and risks of using a crack for Catia V5 R20, as well as some alternatives that you can consider.

-

Catia V5 R20 64 Bit Crack


DOWNLOAD ►►►►► https://byltly.com/2uKv3v



-

How to Download Catia V5 R20 64 Bit Crack

-

The first step to use Catia V5 R20 with a crack is to find and download the crack file from a reliable source. There are many websites that claim to offer cracks for various software, but not all of them are trustworthy. Some of them may contain malware or viruses that can harm your computer or steal your personal information. Some of them may also provide fake or outdated cracks that do not work or cause errors. Therefore, you need to be careful when choosing where to download the crack file from.

-

One possible source that we found is this blog post that provides a link to download Catia P2 V5R20 with a crack included. According to the post, this is a full offline installer setup of Catia P2 V5R20 that works perfectly fine without any problem. The post also provides instructions on how to install the program and apply the crack. However, we cannot guarantee the safety or validity of this source, so you should use it at your own risk.

-

To download the crack file from this source, you need to follow these steps:

- -

How to Install Catia V5 R20 64 Bit Crack

-

The next step is to install Catia P2 V5R20 on your computer using the crack file. Before you do that, you need to make sure that your system meets the minimum requirements for running Catia P2 V5R20. According to this website, these are the system requirements:

- -

You also need to disable your antivirus and firewall before installing Catia P2 V5R20 with a crack. This is because some antivirus programs may detect the crack file as a threat and delete it or block its execution. To disable your antivirus and firewall, you can follow these steps for Windows Defender Firewall or these steps for Microsoft Defender Antivirus.

-

After disabling your antivirus and firewall, you can install Catia P2 V5R20 with a crack by following these steps:

-

How to install Catia V5 R20 64 Bit Crack on Windows 10
-Catia V5 R20 64 Bit Crack download link
-Catia V5 R20 64 Bit Crack free trial
-Catia V5 R20 64 Bit Crack license key generator
-Catia V5 R20 64 Bit Crack tutorial pdf
-Catia V5 R20 64 Bit Crack system requirements
-Catia V5 R20 64 Bit Crack vs Catia V6
-Catia V5 R20 64 Bit Crack features and benefits
-Catia V5 R20 64 Bit Crack online course
-Catia V5 R20 64 Bit Crack review and rating
-Catia V5 R20 64 Bit Crack alternatives and competitors
-Catia V5 R20 64 Bit Crack price and discount
-Catia V5 R20 64 Bit Crack support and customer service
-Catia V5 R20 64 Bit Crack activation code and serial number
-Catia V5 R20 64 Bit Crack error and troubleshooting
-Catia V5 R20 64 Bit Crack update and patch
-Catia V5 R20 64 Bit Crack tips and tricks
-Catia V5 R20 64 Bit Crack best practices and standards
-Catia V5 R20 64 Bit Crack comparison and benchmark
-Catia V5 R20 64 Bit Crack pros and cons
-Catia V5 R20 64 Bit Crack forum and community
-Catia V5 R20 64 Bit Crack case study and success story
-Catia V5 R20 64 Bit Crack FAQ and Q&A
-Catia V5 R20 64 Bit Crack video and audio
-Catia V5 R20 64 Bit Crack blog and article
-Catia V5 R20 64 Bit Crack ebook and guide
-Catia V5 R20 64 Bit Crack webinar and workshop
-Catia V5 R20 64 Bit Crack software and hardware
-Catia V5 R20 64 Bit Crack tools and resources
-Catia V5 R20 64 Bit Crack simulation and animation
-Catia V5 R20 64 Bit Crack design and modeling
-Catia V5 R20 64 Bit Crack engineering and analysis
-Catia V5 R20 64 Bit Crack manufacturing and production
-Catia V5 R20 64 Bit Crack testing and validation
-Catia V5 R20 64 Bit Crack optimization and improvement
-Catia V5 R20 64 Bit Crack integration and interoperability
-Catia V5 R20 64 Bit Crack collaboration and communication
-Catia V5 R20 64 Bit Crack documentation and reporting
-Catia V5 R20 64 Bit Crack customization and configuration
-Catia V5 R20 64 Bit Crack security and privacy
-Catia V5 R20 64 Bit Crack backup and recovery
-Catia V5 R20 64 Bit Crack migration and upgrade
-Catia V5 R20 64 Bit Crack compatibility and performance
-Catia V5 R20 64 Bit Crack quality and reliability
-Catia V5 R20 64 Bit Crack innovation and creativity
-Catia V5 R20 64 Bit Crack fun and entertainment
-Catia V5 R20 64 Bit Crack challenge and opportunity
-Catia V5 R20 64 Bit Crack learning and development
-Catia V5 R20 64 Bit Crack career and growth

- -

Benefits of Using Catia V5 R20 64 Bit Crack

-

By using Catia P2 V5R20 with a crack, you can enjoy some benefits that may not be available if you use the original software with a license. Here are some of them:

- -

Risks of Using Catia V5 R20 64 Bit Crack

-

However, using Catia P2 V5R20 with a crack also comes with some risks that you should be aware of. Here are some of them:

- -

Alternatives to Catia V5 R20 64 Bit Crack

-

If you are not comfortable with using Catia P2 V5R20 with a crack, or if you want to avoid the risks associated with it, you can consider some alternatives that may suit your needs better. Here are some of them:

- -

Conclusion

-

In conclusion, Catia V5 R20 64 bit crack is a file that allows you to use Catia P2 V5R20 without paying or activating it. It can provide some benefits such as saving money and time and accessing a powerful and comprehensive CAD/CAM/CAE tool. However, it also has some risks such as legal consequences, security threats, and performance issues. Therefore, you should weigh the pros and cons carefully before deciding whether to use it or not. Alternatively, you can consider some other options such as using a free trial version, a student or academic version, or a similar but cheaper or free tool.

-

FAQs

- -

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Codigo De Activacion De Video Repair 16.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Codigo De Activacion De Video Repair 16.md deleted file mode 100644 index 56ec7b6c26febd164a1629ba9f43d16f5bcdd706..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Codigo De Activacion De Video Repair 16.md +++ /dev/null @@ -1,139 +0,0 @@ - -

Codigo De Activacion De Video Repair 16: How to Fix Your Corrupted Videos

-

Have you ever encountered a situation where your videos are corrupted and you can't play them on your computer or mobile device? Maybe you have recorded some precious moments with your family or friends, but the videos are damaged due to virus infection, power failure, improper operation, or other reasons. Or maybe you have downloaded some videos from the internet, but they are incomplete or broken. How frustrating is that?

-

Codigo De Activacion De Video Repair 16


Download: https://byltly.com/2uKymQ



-

Don't worry, there is a solution for you. In this article, we will introduce you to a powerful tool called Video Repair 16, which can help you fix your corrupted videos in a few simple steps. We will also show you how to get a codigo de activacion for Video Repair 16, which is required to activate the full version of the program. And we will share some tips and tricks for using Video Repair 16 effectively. So, let's get started!

-

Introduction

-

What is Video Repair 16?

-

Video Repair 16 is a professional video repair software that can repair various types of video corruption issues, such as video not playing, video freezing, video stuttering, video out of sync, video pixelated, video distorted, video black screen, and more. It supports repairing videos in various formats, such as MP4, MOV, AVI, MKV, FLV, WMV, etc. It also supports repairing videos from different sources, such as cameras, camcorders, drones, smartphones, memory cards, hard drives, etc.

-

Video Repair 16 has two repair modes: quick repair and advanced repair. The quick repair mode can fix most common video corruption issues by analyzing and repairing the video data. The advanced repair mode can fix more complex video corruption issues by using a sample video file as a reference. The sample video file should be from the same device and in the same format as the corrupted video file.

-

Why do you need a codigo de activacion for Video Repair 16?

-

Video Repair 16 is a paid software that offers a free trial version for users to test its features and performance. However, the free trial version has some limitations. For example, it can only repair up to three videos at a time, and it can only save up to one-third of each repaired video. To unlock the full functionality of Video Repair 16 and repair unlimited videos without any restrictions, you need to purchase a codigo de activacion for Video Repair 16.

-

Como obtener el codigo de activacion de video repair 16
-Video repair 16 codigo de activacion gratis
-Donde encontrar el codigo de activacion de video repair 16
-Video repair 16 codigo de activacion full
-Solucionar error de codigo de activacion de video repair 16
-Video repair 16 codigo de activacion crack
-Descargar codigo de activacion de video repair 16
-Video repair 16 codigo de activacion online
-Generar codigo de activacion de video repair 16
-Video repair 16 codigo de activacion serial
-Requisitos para el codigo de activacion de video repair 16
-Video repair 16 codigo de activacion licencia
-Funcionamiento del codigo de activacion de video repair 16
-Video repair 16 codigo de activacion keygen
-Tutorial para el codigo de activacion de video repair 16
-Video repair 16 codigo de activacion mega
-Ventajas del codigo de activacion de video repair 16
-Video repair 16 codigo de activacion original
-Alternativas al codigo de activacion de video repair 16
-Video repair 16 codigo de activacion premium
-Pasos para el codigo de activacion de video repair 16
-Video repair 16 codigo de activacion valido
-Beneficios del codigo de activacion de video repair 16
-Video repair 16 codigo de activacion windows
-Caracteristicas del codigo de activacion de video repair 16
-Video repair 16 codigo de activacion youtube
-Consejos para el codigo de activacion de video repair 16
-Video repair 16 codigo de activacion zip
-Dudas sobre el codigo de activacion de video repair 16
-Video repair 16 codigo de activacion zippyshare
-Opiniones sobre el codigo de activacion de video repair 16
-Video repair 16 codigo de activacion mediafire
-Preguntas frecuentes sobre el codigo de activacion de video repair 16
-Video repair 16 codigo de activacion uptobox
-Testimonios sobre el codigo de activacion de video repair 16
-Video repair 16 codigo de activacion rapidgator
-Problemas con el codigo de activacion de video repair 16
-Video repair 16 codigo de activacion turbobit
-Garantia del codigo de activacion de video repair 16
-Video repair 16 codigo de activacion uploaded
-Soporte para el codigo de activacion de video repair 16
-Video repair 16 codigo de activacion nitroflare
-Oferta del codigo de activacion de video repair 16
-Video repair 16 codigo de activacion filefactory
-Comparativa del codigo de activacion de video repair 16
-Video repair 16 codigo de activacion depositfiles
-Valoracion del codigo de activacion de video repair 16
-Video repair 16 codigo de activacion hitfile
-Experiencia con el codigo

-

A codigo de activacion for Video Repair 16 is a unique code that is generated after you buy a license for the software. It is used to verify your identity and activate your copy of Video Repair 16. Once you enter your codigo de activacion in the program, you can enjoy all the benefits of Video Repair 16.

-

How to get a codigo de activacion for Video Repair 16?

-

To get a codigo de activacion for Video Repair 16, you need to follow these steps:

-
    -
  1. Visit the official website of Video Repair 16 and click on the "Buy Now" button.
  2. Select the license type that suits your needs. You can choose between a one-year license and a lifetime license.
  3. Enter your personal information and payment details and complete the order.
  4. Check your email inbox for a confirmation email from Video Repair 16. The email will contain your codigo de activacion and a download link for the software.
  5. Download and install Video Repair 16 on your computer.
-

How to use Video Repair 16 to fix your corrupted videos

-

Step 1: Download and install Video Repair 16

-

If you have already downloaded and installed Video Repair 16 on your computer, you can skip this step. If not, you can follow these steps:

-
    -
  1. Click on the download link in the confirmation email from Video Repair 16 or visit the official website of Video Repair 16 and click on the "Download" button.
  2. Save the setup file on your computer and run it.
  3. Follow the instructions on the screen to complete the installation process.
-

Step 2: Launch Video Repair 16 and enter your codigo de activacion

-

If you have already entered your codigo de activacion in Video Repair 16, you can skip this step. If not, you can follow these steps:

-
    -
  1. Launch Video Repair 16 on your computer.
  2. Click on the "Register" button at the top right corner of the main interface.
  3. Enter your email address and codigo de activacion in the pop-up window and click on "Activate".
  4. A message will appear confirming that your activation is successful.
-

Step 3: Add the corrupted videos to the program

-

To add the corrupted videos to Video Repair 16, you can follow these steps:

-
    -
  1. Click on the "Add" button at the bottom left corner of the main interface.
  2. Browse your computer or external device and select the corrupted videos that you want to repair.
  3. Click on "Open" to import them to the program.
  4. You can also drag and drop the corrupted videos directly to the program.
-

Step 4: Choose the repair mode and start the repair process

-

To choose the repair mode and start the repair process in Video Repair 16, you can follow these steps:

-
    -
  1. Select one or more corrupted videos that you want to repair from the list.
  2. Click on "Repair" at the bottom right corner of the main interface.
  3. A pop-up window will appear asking you to choose between quick repair and advanced repair. You can select either option depending on your situation.
  4. If you choose quick repair, click on "OK" to start repairing your videos immediately.
  5. If you choose advanced repair, click on "OK" and then click on the "Folder" icon next to each corrupted video to add a sample video file as a reference. Then click on "Repair" again to start repairing your videos.
-

Step 5: Preview and save the repaired videos

-

To preview and save the repaired videos in Video Repair 16, you can follow these steps:

-
    -
  1. After repairing your videos successfully with either quick repair or advanced repair mode, you will see them listed under "Repaired Files".
  2. You can click on each repaired video file name or thumbnail image to preview it in a built-in media player window.
  3. You can also check some information about each repaired video file, such as format, size, duration, and resolution, under "File Information".
  4. If you are satisfied with the results, you can click on "Save All" at the bottom right corner of the main interface.
  5. A pop-up window will appear asking you to choose a destination folder where you want to save your repaired videos. You can browse your computer or external device and select a folder. Then click on "Save".
  6. Your repaired videos will be saved in the selected folder. You can access them anytime.
-

Tips and tricks for using Video Repair 16

-

Tip 1: Backup your videos before repairing them

You can also use a cloud service such as Google Drive, Dropbox, or OneDrive to back up your videos online.

-

Tip 2: Use the advanced repair mode for severely corrupted videos

-

If your videos are severely corrupted and the quick repair mode cannot fix them, you can try the advanced repair mode. The advanced repair mode can repair more complex video corruption issues by using a sample video file as a reference. The sample video file should be from the same device and in the same format as the corrupted video file. For example, if your corrupted video file is an MP4 file recorded by your iPhone, you should use another MP4 file recorded by your iPhone as a sample video file. The sample video file should also be healthy and playable. The advanced repair mode will use the information from the sample video file to repair the corrupted video file.

-

Tip 3: Contact the customer support if you encounter any problems

-

If you encounter any problems while using Video Repair 16, such as activation issues, repairing errors, or saving failures, you can contact the customer support team of Video Repair 16 for help. You can send an email to support@videorepair16.com or visit the official website of Video Repair 16 and click on "Contact Us". You can also check the FAQ section on the website for some common questions and answers. The customer support team of Video Repair 16 is friendly and professional, and they will try their best to solve your problems as soon as possible.

-

Conclusion

-

In conclusion, Video Repair 16 is a powerful and easy-to-use video repair software that can help you fix your corrupted videos in a few simple steps. It supports repairing videos in various formats and from different sources. It also offers two repair modes: quick repair and advanced repair. To use Video Repair 16, you need to get a codigo de activacion for Video Repair 16 first, which you can buy from the official website of Video Repair 16. Then you can follow the steps we have shown you in this article to add, repair, preview, and save your corrupted videos. We hope this article has helped you understand how to use Video Repair 16 and how to get a codigo de activacion for Video Repair 16. If you have any questions or feedback, please feel free to leave a comment below or contact us via email. Thank you for reading!

-

FAQs

-

Here are some frequently asked questions about Video Repair 16 and the codigo de activacion for Video Repair 16:

-
    -
  1. Q: How much does a codigo de activacion for Video Repair 16 cost?
     A: A codigo de activacion for Video Repair 16 costs $49.95 for a one-year license and $69.95 for a lifetime license. You can pay with PayPal, credit card, debit card, or other payment methods.
  2. Q: How long does it take to receive my codigo de activacion for Video Repair 16 after I place an order?
     A: You will receive your codigo de activacion for Video Repair 16 instantly via email after you complete your payment. Please check your email inbox and spam folder for the confirmation email from Video Repair 16.
  3. Q: Can I use my codigo de activacion for Video Repair 16 on multiple computers?
     A: No, you can only use your codigo de activacion for Video Repair 16 on one computer. If you want to use it on another computer, you need to deactivate it from the first computer and activate it on the second computer.
  4. Q: What if I lose my codigo de activacion for Video Repair 16 or forget to deactivate it from my old computer?
     A: If you lose your codigo de activacion for Video Repair 16 or forget to deactivate it from your old computer, you can contact the customer support team of Video Repair 16 and provide them with your order number and email address. They will help you retrieve your codigo de activacion or reset your activation status.
  5. Q: Does Video Repair 16 guarantee to fix all corrupted videos?
     A: No, Video Repair 16 does not guarantee to fix all corrupted videos. Some videos may be too damaged or corrupted beyond repair. However, Video Repair 16 has a high success rate in repairing most common video corruption issues. You can try it for free before buying it to see if it works for your videos.
-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/3dmgame Dll Mediafire 88.md b/spaces/1gistliPinn/ChatGPT4/Examples/3dmgame Dll Mediafire 88.md deleted file mode 100644 index c1b914d5f8d1daf52beaa585ed34b85d15de0ad7..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/3dmgame Dll Mediafire 88.md +++ /dev/null @@ -1,45 +0,0 @@ -
-

How to Download and Fix 3dmGameDll.dll Errors for Free

-

If you are looking for a way to download and fix 3dmGameDll.dll errors for free, you have come to the right place. 3dmGameDll.dll is a dynamic link library file that is used by some popular games such as Mad Max, Metal Gear Solid V: The Phantom Pain, and Watch Dogs. This file contains important functions and data that the games need to run properly. However, sometimes this file can get corrupted, deleted, or misplaced, causing various problems such as crashes, freezes, or error messages.

-

3dmgame dll mediafire 88


Download ★★★★★ https://imgfil.com/2uxXOI



-

In this article, we will show you how to download and fix 3dmGameDll.dll errors for free using simple methods. We will also explain what causes these errors and how to prevent them in the future. Follow the steps below to get started.

-

What Causes 3dmGameDll.dll Errors?

-

There are many possible causes of 3dmGameDll.dll errors, but some of the most common ones are:

- -

These causes can lead to various symptoms such as:

- -

How to Download and Fix 3dmGameDll.dll Errors for Free?

-

There are several ways to download and fix 3dmGameDll.dll errors for free, depending on the cause and severity of the problem. Here are some of the most effective methods:

-

-

Method 1: Reinstall the Game

-

The easiest and most reliable way to fix 3dmGameDll.dll errors is to reinstall the game that is causing the problem. This will ensure that all the game files are intact and up-to-date, including the 3dmGameDll.dll file. To reinstall the game, follow these steps:

-
    -
  1. Uninstall the game from your computer using the Control Panel or the game's uninstaller.
  2. Delete any leftover files and folders related to the game from your hard drive.
  3. Download the latest version of the game from its official website or a trusted source.
  4. Install the game on your computer following the instructions on the screen.
  5. Launch the game and check if the error is resolved.
-

Method 2: Download and Replace the 3dmGameDll.dll File

-

If reinstalling the game does not work, you can try downloading and replacing the 3dmGameDll.dll file manually. This can help if the file is missing or corrupted on your system. To download and replace the 3dmGameDll.dll file, follow these steps:

-
    -
  1. Go to a reputable website that offers free .dll file downloads, such as DLLme.com.
  2. Search for "3dmGameDll.dll" and select the version or variant that matches your game and system specifications.
  3. Click on "Download" and save the file to your computer.
  4. Locate the folder where your game is installed on your hard drive (usually C:\Program Files (x86) or C:\Program Files).
  5. Find and rename the existing 3dmGameDll.dll file (if any) to something else, such as "3dmGameDll_old.dll".

d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/A Pdf Content Splitter 4.8.4 [HOT] Keygen For 14.md b/spaces/1gistliPinn/ChatGPT4/Examples/A Pdf Content Splitter 4.8.4 [HOT] Keygen For 14.md deleted file mode 100644 index 9d7b0e6e5c925a4374e88d7b638d22ef723b1cbc..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/A Pdf Content Splitter 4.8.4 [HOT] Keygen For 14.md +++ /dev/null @@ -1,39 +0,0 @@ - -

    A-PDF Content Splitter 4.8.4: A Powerful Tool to Split PDF Files by Content

    -

    If you have ever dealt with large PDF files that contain multiple documents or sections, you know how hard it can be to manage them. You may want to extract some pages, rearrange them, or save them as separate files for easier sharing or printing. But how can you do that without spending hours on manual work or buying expensive software?

    -

    Fortunately, there is a solution: A-PDF Content Splitter 4.8.4. This is a user-friendly and affordable PDF tool that allows you to split PDF files into smaller documents based on specific content on their pages. You can set up rules to define how to split your PDFs by unique text, find text, or word position. You can also customize the output names and properties of the split files, and even set up hot directories to automate the splitting process.

    -

    a pdf content splitter 4.8.4 keygen for 14


    Download Zip »»» https://imgfil.com/2uxXhM



    -

    With A-PDF Content Splitter 4.8.4, you can easily manage your PDF content and save time and money. Whether you need to split invoices, reports, contracts, manuals, or any other PDF documents, A-PDF Content Splitter 4.8.4 can handle it with ease and accuracy.

    -

    Here are some of the features and benefits of A-PDF Content Splitter 4.8.4:

    - -

    If you want to learn more about A-PDF Content Splitter 4.8.4, you can visit their website[^1^] or download it from Softpedia[^2^]. You can also check out some other online PDF tools such as Adobe Acrobat[^4^] that can help you split PDF files by pages.

    - -

    How to use A-PDF Content Splitter 4.8.4

    -

    Using A-PDF Content Splitter 4.8.4 is very easy and intuitive. You just need to follow these simple steps:

    -
      -
    1. Select the PDF files that you want to split.
    2. Select a split rule to apply. You can choose from the predefined rules or create your own.
    3. Click the "Split all" button and wait for the program to finish.
    4. Check the output folder and enjoy your split PDF files.
    -

    You can also use the hot directory feature to automatically split any PDF files that are placed in a specific folder. You just need to set up the hot directory, the split rule, and the output folder, and A-PDF Content Splitter 4.8.4 will do the rest for you.
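
    For readers who want a rough idea of what a content-based split rule does behind the scenes, the short Python sketch below reproduces the concept with the open-source pypdf library. It is not part of A-PDF Content Splitter; the marker text and output file names are assumptions chosen purely for illustration.

```python
# Rough sketch of content-based PDF splitting with the open-source pypdf library.
# The marker text and file names below are made-up examples, not A-PDF settings.
from pypdf import PdfReader, PdfWriter

def split_by_marker(source="invoices.pdf", marker="Invoice No"):
    reader = PdfReader(source)
    writer, part = PdfWriter(), 1
    for page in reader.pages:
        text = page.extract_text() or ""
        # Start a new output file whenever a page contains the marker text,
        # mirroring a "split by find text" rule.
        if marker in text and len(writer.pages) > 0:
            with open(f"part_{part}.pdf", "wb") as f:
                writer.write(f)
            writer, part = PdfWriter(), part + 1
        writer.add_page(page)
    if len(writer.pages) > 0:
        with open(f"part_{part}.pdf", "wb") as f:
            writer.write(f)

split_by_marker()
```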

    - -

    Why choose A-PDF Content Splitter 4.8.4

    -

    A-PDF Content Splitter 4.8.4 is a powerful and reliable PDF tool that can help you split your PDF files by content in a fast and accurate way. Here are some of the reasons why you should choose A-PDF Content Splitter 4.8.4 over other PDF splitters:

    -

    - -

    A-PDF Content Splitter 4.8.4 is a must-have tool for anyone who works with PDF files on a regular basis. It can help you manage your PDF content more efficiently and effectively.

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/60 Lakh - The New Punjabi Hit by Bukka Jatt and R Nait.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/60 Lakh - The New Punjabi Hit by Bukka Jatt and R Nait.md deleted file mode 100644 index 3000cec700e605c7cf1b56e0542550ea83b19a7f..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/60 Lakh - The New Punjabi Hit by Bukka Jatt and R Nait.md +++ /dev/null @@ -1,109 +0,0 @@ -
    -

    How to Download 60 Lakh Song from DJ Punjab

    -

    Punjabi music is one of the most popular genres of music in India and across the world. It has a rich and diverse history, culture, and style that appeals to people of all ages and backgrounds. One of the latest hits in Punjabi music is 60 Lakh, a song by Bukka Jatt featuring R Nait. The song was released in September 2021 and has gained over 4.9 million views on YouTube as of October 2021. The song is a catchy and upbeat track that celebrates the success and lifestyle of the singers.

    -

    60 lakh song download dj punjab


    Downloadhttps://urlin.us/2uSYvk



    -

    If you are a fan of Punjabi music and want to download 60 Lakh song to your device, you might be wondering where to find it. One of the best websites for Punjabi songs download is DJ Punjab. DJ Punjab is a popular online platform that offers a huge collection of Punjabi songs, albums, videos, and more. You can find songs from various genres, artists, and eras on DJ Punjab. You can also download songs in different formats and qualities according to your preference.

    -

    Downloading songs from DJ Punjab has many benefits. You can enjoy your favorite Punjabi songs offline without any internet connection. You can also save your data and storage space by choosing the optimal file size and quality. You can also create your own playlists and share them with your friends and family. In this article, we will show you how to download 60 Lakh song from DJ Punjab in simple steps. We will also give you some tips and tricks for downloading songs from DJ Punjab safely and efficiently.

    -

    Steps to Download 60 Lakh Song from DJ Punjab

    -

    Downloading 60 Lakh song from DJ Punjab is very easy and fast. You just need to follow these four steps:

    -

    Step 1: Visit the official website of DJ Punjab

    -

    The first step is to visit the official website of DJ Punjab. You can use any web browser on your device to access it. The website address is djpunjab.com. You will see a homepage with various categories, menus, and options.

    -

    Step 2: Search for 60 Lakh song in the search box

    -

    The next step is to search for 60 Lakh song in the search box. You can find the search box at the top right corner of the homepage. Type in "60 Lakh" or "Bukka Jatt" or "R Nait" in the search box and hit enter. You will see a list of results related to your query

    Step 3: Select the desired quality and format of the song

    -

    The third step is to select the desired quality and format of the song. You can find different options for downloading the song on the result page. You can choose from MP3, MP4, HD, or 320 kbps formats. You can also see the file size and duration of the song before downloading. Choose the option that suits your device and preference.

    -

    Step 4: Click on the download button and save the song to your device

    -

    The final step is to click on the download button and save the song to your device. You can find the download button below the selected option. Click on it and wait for a few seconds. The song will start downloading automatically. You can check the progress of the download in your browser or in your device's download folder. Once the download is complete, you can enjoy listening to 60 Lakh song offline.

    -

    Tips and Tricks for Downloading Songs from DJ Punjab

    -

    Downloading songs from DJ Punjab is easy and convenient, but there are some tips and tricks that you can follow to make it even better. Here are some of them:

    -

    Use a VPN or proxy to access DJ Punjab if it is blocked in your region

    -

    DJ Punjab is a free website that offers Punjabi songs download, but it may not be accessible in some regions due to legal or technical issues. If you face any problem in accessing DJ Punjab, you can use a VPN or proxy service to bypass the restrictions. A VPN or proxy service will change your IP address and location, and allow you to access DJ Punjab from anywhere in the world.

    -

    60 lakh punjabi song mp3 download
    -60 lakh r nait song download
    -60 lakh bukka jatt song download
    -60 lakh gopy randhawa song download
    -60 lakh new punjabi song download
    -60 lakh song download mr jatt
    -60 lakh song download djpunjab.com
    -60 lakh song download pagalworld
    -60 lakh song download mp3tau
    -60 lakh song download raag.fm
    -60 lakh video song download hdyaar
    -60 lakh video song download mp4
    -60 lakh video song download djjohal
    -60 lakh video song download pendujatt
    -60 lakh video song download riskyjatt
    -60 lakh lyrics r nait song download
    -60 lakh lyrics bukka jatt song download
    -60 lakh lyrics gopy randhawa song download
    -60 lakh lyrics in punjabi song download
    -60 lakh lyrics in hindi song download
    -60 lakh remix dj hans song download
    -60 lakh remix dj lishkara song download
    -60 lakh remix dj sunny qadian song download
    -60 lakh remix dj baapu song download
    -60 lakh remix dj youngster song download
    -60 lakh ringtone r nait song download
    -60 lakh ringtone bukka jatt song download
    -60 lakh ringtone gopy randhawa song download
    -60 lakh ringtone mp3 song download
    -60 lakh ringtone zedge song download
    -60 lakh status r nait song download
    -60 lakh status bukka jatt song download
    -60 lakh status gopy randhawa song download
    -60 lakh status video song download
    -60 lakh status whatsapp song download
    -60 lakh karaoke r nait song download
    -60 lakh karaoke bukka jatt song download
    -60 lakh karaoke gopy randhawa song download
    -60 lakh karaoke mp3 song download
    -60 lakh karaoke with lyrics song download
    -60 lakh instrumental r nait song download
    -60 lakh instrumental bukka jatt song download
    -60 lakh instrumental gopy randhawa song download
    -60 lakh instrumental mp3 song download
    -60 lakh instrumental beatcop music song download
    -60 lakh mashup r nait song download
    -60 lakh mashup bukka jatt song download
    -60 lakh mashup gopy randhawa song download
    -60 lakh mashup mp3 song download

    -

    Check the file size and duration of the song before downloading to avoid fake or incomplete downloads

    -

    DJ Punjab is a reliable website that offers high-quality Punjabi songs download, but sometimes you may encounter fake or incomplete downloads. These are files that have a smaller size or shorter duration than the original song, and may contain malware or viruses. To avoid these, you should always check the file size and duration of the song before downloading. You can compare them with the information given on YouTube or other sources. If you find any discrepancy, you should avoid downloading that file and look for another option.
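
    If you prefer to automate that check, the small Python sketch below compares a downloaded file's size and duration against the values listed on the source page. It assumes the third-party mutagen library for reading MP3 metadata, and the file name and expected values are placeholders rather than real DJ Punjab data.

```python
# Quick sanity check on a downloaded MP3: compare its size and duration against
# what the source page listed. Requires the third-party mutagen library.
import os
from mutagen.mp3 import MP3

path = "60-lakh.mp3"                  # placeholder file name
expected_duration = 3 * 60 + 30       # e.g. 3:30, as listed on the source page
expected_min_size = 3 * 1024 ** 2     # a full-length 320 kbps track is well above 3 MB

size_ok = os.path.getsize(path) >= expected_min_size
duration = MP3(path).info.length      # duration in seconds
duration_ok = abs(duration - expected_duration) < 10

print(f"size ok: {size_ok}, duration: {duration:.0f}s, duration ok: {duration_ok}")
```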

    -

    Use a reliable antivirus software to scan the downloaded files for any malware or viruses

    -

    DJ Punjab is a safe website that offers virus-free Punjabi songs download, but sometimes you may still get infected by malware or viruses from other sources. These are malicious programs that can harm your device or steal your data. To prevent these, you should always use a reliable antivirus software to scan the downloaded files for any malware or viruses. You should also update your antivirus software regularly to keep it up to date with the latest threats.

    -

    Conclusion

    -

    Punjabi music is a great way to enjoy yourself and express your emotions. 60 Lakh is one of the latest and most popular songs in Punjabi music that you can download from DJ Punjab. DJ Punjab is a wonderful website that offers a huge collection of Punjabi songs, albums, videos, and more. You can download songs from DJ Punjab in simple steps and in different formats and qualities.

    -

    However, if you are looking for some alternatives to DJ Punjab for Punjabi songs download, you can try these websites as well:

    - -

    We hope this article has helped you learn how to download 60 Lakh song from DJ Punjab. If you have any questions or feedback, please let us know in the comments section below. Thank you for reading!

    -

    FAQs

    -

    Is DJ Punjab legal and safe to use?

    -

    DJ Punjab is a legal and safe website to use for Punjabi songs download, as long as you use it for personal and non-commercial purposes only. However, some of the songs on DJ Punjab may be copyrighted by their respective owners, so you should always respect their rights and follow their terms and conditions.

    -

    How can I download Punjabi songs from YouTube?

    -

    You can download Punjabi songs from YouTube by using a third-party website or software that can convert YouTube videos to MP3 or MP4 files. Some of the websites that you can use are ytmp3.cc, y2mate.com, and flvto.biz. However, you should be careful when using these websites, as they may contain ads, pop-ups, or malware. You should also respect the rights of the YouTube creators and follow their terms and conditions.

    -

    What are some of the best Punjabi songs of 2021?

    -

    Some of the best Punjabi songs of 2021 are:

    - - - - - - - -
    | Song | Singer | Views on YouTube (as of October 2021) |
    | --- | --- | --- |
    | 60 Lakh | Bukka Jatt ft. R Nait | 4.9 million |
    | Brown Munde | AP Dhillon ft. Gurinder Gill and Shinda Kahlon | 163 million |
    | Bachpan Ka Pyaar | Sahdev Dirdo ft. Badshah and Aastha Gill | 64 million |
    | Pani Di Gal | Maninder Buttar ft. Asees Kaur and Jasmin Bhasin | 197 million |
    | Baarish Ki Jaaye | B Praak ft. Nawazuddin Siddiqui and Sunanda Sharma | 387 million |
    -

    How can I listen to Punjabi songs online for free?

    -

    You can listen to Punjabi songs online for free by using various streaming platforms and apps that offer Punjabi music. Some of the platforms and apps that you can use are Gaana, JioSaavn, Spotify, Wynk Music, and Hungama Music. You can browse through different categories, playlists, and recommendations on these platforms and apps. You can also create your own account and customize your preferences.

    -

    What are some of the features of Punjabi music industry?

    -

    Punjabi music industry is one of the most vibrant and dynamic music industries in India and the world. It has some distinctive features, such as:

    -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cmo descargar Video Poker Jackpot APK y ganar grandes premios.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cmo descargar Video Poker Jackpot APK y ganar grandes premios.md deleted file mode 100644 index 9d85bceaa86519c9e088b8d41d910b48cf63d15b..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cmo descargar Video Poker Jackpot APK y ganar grandes premios.md +++ /dev/null @@ -1,181 +0,0 @@ -
    -

    Descargar Video Poker Jackpot APK: How to Play and Win Big

    -

    If you love playing video poker, you will love Video Poker Jackpot, a fun and addictive game for Android devices. In this article, we will show you how to download and install Video Poker Jackpot APK, how to play and win big at this game, and some stories and testimonials of video poker jackpot winners. Let's get started!

    -

    descargar video poker jackpot apk


    Download Filehttps://urlin.us/2uT2l5



    -

    What is Video Poker Jackpot?

    -

    A popular video poker game for Android devices

    -

    Video Poker Jackpot is a free video poker game that you can play on your Android phone or tablet. It is one of the most popular video poker games on Google Play, with over 1 million downloads and a 4.5-star rating. You can enjoy playing various video poker variants, such as Jacks or Better, Deuces Wild, Double Bonus Poker, and more. You can also compete with other players in tournaments and leaderboards, and win huge jackpots.

    -

    Features and benefits of the game

    -

    Some of the features and benefits of playing Video Poker Jackpot are:

    - -

    How to download and install Video Poker Jackpot APK

    -

    Steps to download the APK file from a trusted source

    -

    If you want to download Video Poker Jackpot APK, you need to follow these steps:

    -
      -
    1. Go to a trusted website that offers the APK file of Video Poker Jackpot, such as [Uptodown](^1^).
    2. Click on the green button that says "Download" or "Descargar".
    3. Wait for the download to finish. You may need to allow downloads from unknown sources in your device settings.
    4. Locate the downloaded APK file in your device storage.
    -

    Steps to install the APK file on your device

    -

    After you have downloaded the APK file of Video Poker Jackpot, you need to install it on your device. Here are the steps:

    -

    descargar video poker jackpot gratis para android
    -descargar video poker jackpot uptodown
    -descargar video poker jackpot mod apk
    -descargar video poker jackpot full apk
    -descargar video poker jackpot sin internet
    -descargar video poker jackpot con dinero real
    -descargar video poker jackpot en español
    -descargar video poker jackpot hackeado
    -descargar video poker jackpot offline
    -descargar video poker jackpot online
    -descargar video poker jackpot pro apk
    -descargar video poker jackpot premium apk
    -descargar video poker jackpot ultima version
    -descargar video poker jackpot 2023 apk
    -descargar video poker jackpot para pc
    -descargar video poker jackpot para iphone
    -descargar video poker jackpot para tablet
    -descargar video poker jackpot para celular
    -descargar video poker jackpot para smart tv
    -descargar video poker jackpot para fire tv
    -descargar video poker jackpot de casino
    -descargar video poker jackpot de las vegas
    -descargar video poker jackpot de texas holdem
    -descargar video poker jackpot de joker wild
    -descargar video poker jackpot de double bonus
    -descargar video poker jackpot con bonus gratis
    -descargar video poker jackpot con giros gratis
    -descargar video poker jackpot con premios reales
    -descargar video poker jackpot con torneos
    -descargar video poker jackpot con amigos
    -como descargar video poker jackpot apk
    -como jugar video poker jackpot apk
    -como ganar en video poker jackpot apk
    -como hackear video poker jackpot apk
    -como actualizar video poker jackpot apk
    -mejor app para descargar video poker jackpot apk
    -mejor sitio para descargar video poker jackpot apk
    -mejor juego de video poker jackpot apk
    -mejor forma de jugar video poker jackpot apk
    -mejor estrategia para ganar en video poker jackpot apk
    -opiniones sobre descargar video poker jackpot apk
    -reseñas de descargar video poker jackpot apk
    -ventajas de descargar video poker jackpot apk
    -desventajas de descargar video poker jackpot apk
    -alternativas a descargar video poker jackpot apk
    -soluciones a problemas al descargar video poker jackpot apk
    -trucos y consejos para descargar video poker jackpot apk
    -guia completa para descargar video poker jackpot apk
    -tutorial paso a paso para descargar video poker jackpot apk

    -
      -
    1. Tap on the APK file that you have downloaded.
    2. A pop-up window will appear asking you to confirm the installation. Tap on "Install" or "Instalar".
    3. Wait for the installation to complete. You may need to grant some permissions to the app.
    4. Once the installation is done, you can open the app and start playing Video Poker Jackpot.
    -

    How to play Video Poker Jackpot

    -

    The rules and objective of video poker

    -

    Video poker is a casino game that is based on five-card draw poker. The objective of the game is to make the best possible poker hand out of the five cards that you are dealt. You can choose to keep or discard any of the cards, and replace them with new ones from the same deck. The payout of the game depends on the strength of your final hand and the paytable of the game variant that you are playing.

    -

    The different variants and paytables of video poker

    -

    Video Poker Jackpot offers you several video poker variants to choose from, each with its own rules and paytable. Some of the variants are:

    - - - - - - - - - - - - - - - - - - - - - -
    | Variant | Rules | Paytable (for 1 coin bet) |
    | --- | --- | --- |
    | Jacks or Better | The most basic and common variant of video poker. You need at least a pair of jacks or better to win. | Royal Flush: 250; Straight Flush: 50; Four of a Kind: 25; Full House: 9; Flush: 6; Straight: 4; Three of a Kind: 3; Two Pair: 2; Jacks or Better: 1 |
    | Deuces Wild | All the twos in the deck are wild cards, meaning they can substitute for any other card to make a winning hand. You need at least a three of a kind to win. | Natural Royal Flush: 250; Four Deuces: 200; Wild Royal Flush: 25; Five of a Kind: 15; Straight Flush: 9; Four of a Kind: 4; Full House: 4; Flush: 3; Straight: 2; Three of a Kind: 1 |
    | Double Bonus Poker | A variant of Jacks or Better that pays extra for four aces, four twos, threes, or fours, and four fives through kings. You need at least a pair of jacks or better to win. | Royal Flush: 250; Straight Flush: 50; Four Aces: 160; Four Twos, Threes, or Fours: 80; Four Fives through Kings: 50; Full House: 10; Flush: 7; Straight: 5; Three of a Kind: 3; Two Pair: 1; Jacks or Better: 1 |
    -

    The tips and strategies to improve your chances of winning

    -

    To play Video Poker Jackpot effectively, you need to follow some tips and strategies, such as:

    - -

    How to win big at Video Poker Jackpot

    -

    The best hands and payouts in video poker

    -

    The best hands in video poker are the ones that pay the most, depending on the game variant and the number of coins that you bet. Here are some examples:

    - A royal flush is the highest-paying hand in video poker. It consists of an ace, king, queen, jack, and ten of the same suit. It pays 250 coins for a one-coin bet, but it pays a whopping 4,000 coins for a five-coin bet. That's why it's important to always bet the maximum of five coins if you are chasing the top prize.
    - A straight flush is the second-highest-paying hand in video poker. It consists of five consecutive cards of the same suit. It pays 50 coins for a one-coin bet, and 250 coins for a five-coin bet.
    - A four of a kind is the third-highest-paying hand in video poker. It consists of four cards of the same rank. It pays 25 coins for a one-coin bet, and 125 coins for a five-coin bet. However, some game variants pay more for certain four of a kinds, such as four aces or four deuces.
    - A full house is the fourth-highest-paying hand in video poker. It consists of three cards of the same rank and two cards of another rank. It pays 9 coins for a one-coin bet, and 45 coins for a five-coin bet.
    - A flush is the fifth-highest-paying hand in video poker. It consists of five cards of the same suit. It pays 6 coins for a one-coin bet, and 30 coins for a five-coin bet.
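
    To make the arithmetic above concrete, here is a small Python sketch that classifies a five-card hand and applies the one-coin Jacks or Better paytable, including the 4,000-coin royal flush at a five-coin bet. It is only an illustration of the payout math, not code from Video Poker Jackpot, and the hand-ranking logic is deliberately simplified to the hands discussed here.

```python
# Simplified illustration of video poker payouts (not Video Poker Jackpot's code).
from collections import Counter

PAYTABLE = {"royal_flush": 250, "straight_flush": 50, "four_of_a_kind": 25,
            "full_house": 9, "flush": 6}

def rank_hand(cards):
    # cards look like [("A", "S"), ("K", "S"), ("Q", "S"), ("J", "S"), ("10", "S")]
    order = ["2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K", "A"]
    ranks = sorted(order.index(r) for r, _ in cards)
    counts = sorted(Counter(r for r, _ in cards).values())
    flush = len({s for _, s in cards}) == 1
    straight = ranks == list(range(ranks[0], ranks[0] + 5))
    if flush and straight and ranks[-1] == order.index("A"):
        return "royal_flush"
    if flush and straight:
        return "straight_flush"
    if counts == [1, 4]:
        return "four_of_a_kind"
    if counts == [2, 3]:
        return "full_house"
    if flush:
        return "flush"
    return "nothing"   # pairs, straights, etc. are omitted in this sketch

def payout(cards, coins=5):
    hand = rank_hand(cards)
    if hand == "royal_flush" and coins == 5:
        return 4000            # the bonus royal flush payout at maximum coins
    return PAYTABLE.get(hand, 0) * coins

royal = [("A", "S"), ("K", "S"), ("Q", "S"), ("J", "S"), ("10", "S")]
print(payout(royal, coins=5))  # 4000
print(payout(royal, coins=1))  # 250
```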

    The jackpot feature and how to trigger it

    -

    One of the most exciting features of Video Poker Jackpot is the jackpot feature, which gives you a chance to win a huge amount of coins. The jackpot feature is triggered randomly after any winning hand. When it happens, you will see a wheel with different segments, each with a multiplier value. You can spin the wheel once, and whatever multiplier you land on will be applied to your current win. For example, if you win 100 coins and spin the wheel and get a 10x multiplier, you will win 1,000 coins.
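
    The multiplier math is easy to mirror in a few lines of Python. The segment values below are invented for the example; only the win-times-multiplier arithmetic reflects how the feature is described above.

```python
# Toy illustration of the multiplier wheel described above; segment values are invented.
import random

def spin_wheel(current_win):
    segments = [2, 3, 5, 10, 20]          # hypothetical multiplier segments
    multiplier = random.choice(segments)  # one free spin after a winning hand
    return current_win * multiplier, multiplier

total, m = spin_wheel(100)
print(f"100 coins x {m} = {total} coins")
```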

    -

    The jackpot feature also has a special segment that says "Jackpot". If you are lucky enough to land on this segment, you will win the progressive jackpot, which is the highest prize in the game. The progressive jackpot is a pool of coins that increases every time someone plays Video Poker Jackpot. You can see the current amount of the jackpot on the top of the screen.

    -

    The stories and testimonials of video poker jackpot winners

    -

    Many players have won big at Video Poker Jackpot, and some of them have shared their stories and testimonials on the game's review section on Google Play. Here are some examples:

    -
    -

    "I love this game! I won the jackpot twice in one day! I couldn't believe it! Thank you so much for this awesome game!" - Maria

    -
    -
    -

    "This is the best video poker game ever! I play it every day and I always have fun. I hit the jackpot last week and I was so happy! I recommend this game to everyone who loves video poker!" - John

    -
    -
    -

    "Wow! This game is amazing! I just won the jackpot and I'm speechless! This game is very generous and rewarding. I'm so glad I found it!" - Lisa

    -
    -

    Conclusion

    -

    Video Poker Jackpot is a great game for video poker lovers who want to play on their Android devices. You can download and install Video Poker Jackpot APK from a trusted source, and enjoy playing various video poker variants with realistic graphics and sounds. You can also win big at this game by following some tips and strategies, and by triggering the jackpot feature. If you are lucky, you might join the club of video poker jackpot winners who have shared their stories and testimonials on Google Play.

    -

    So what are you waiting for? Download Video Poker Jackpot APK today and start playing and winning big!

    -

    FAQs

    -

    What are the advantages of playing video poker online?

    -

    Some of the advantages of playing video poker online are:

    - -

    Is Video Poker Jackpot safe and secure?

    -

    Yes, Video Poker Jackpot is safe and secure, as long as you download and install it from a trusted source, such as [Uptodown]. The app does not contain any malware or viruses that could harm your device or data. The app also uses encryption technology to protect your transactions and personal information.

    -

    How can I get free coins and bonuses in Video Poker Jackpot?

    -

    There are several ways to get free coins and bonuses in Video Poker Jackpot, such as:

    - -

    What are the system requirements for Video Poker Jackpot?

    -

    The system requirements for Video Poker Jackpot are:

    - -

    How can I contact the support team of Video Poker Jackpot?

    -

    If you have any questions, feedback, or issues with Video Poker Jackpot, you can contact the support team by:

    -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1toTree/lora_test/ppdiffusers/experimental/rl/__init__.py b/spaces/1toTree/lora_test/ppdiffusers/experimental/rl/__init__.py deleted file mode 100644 index 26e6f1557fca4e89f7fde75655a15055095b9af1..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/experimental/rl/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# flake8: noqa - -from .value_guided_sampling import ValueGuidedRLPipeline diff --git a/spaces/A00001/bingothoo/src/state/index.ts b/spaces/A00001/bingothoo/src/state/index.ts deleted file mode 100644 index 272106d619c69f124ea8dd0f10872ab4840152d7..0000000000000000000000000000000000000000 --- a/spaces/A00001/bingothoo/src/state/index.ts +++ /dev/null @@ -1,118 +0,0 @@ -import { BingWebBot } from '@/lib/bots/bing' -import { BingConversationStyle, ChatMessageModel, BotId } from '@/lib/bots/bing/types' -import { nanoid } from '@/lib/utils' -import { atom } from 'jotai' -import { atomWithImmer } from 'jotai-immer' -import { atomWithStorage } from 'jotai/utils' -import { atomFamily } from 'jotai/utils' -import { atomWithHash, atomWithLocation } from 'jotai-location' - -const initialMessages: ChatMessageModel[] = [ - { author: 'system', text: 'conversation between user and robot', id: '1' }, - { author: 'user', text: '销量最高的 3 种宠物吸尘器有哪些优点和缺点? ', id: '2' }, - { - author: 'bot', text: ` -您好,这是必应。根据网上的搜索结果,我为您找到了以下三款销量最高的宠物吸尘器,以及它们的优点和缺点: - -- **戴森Dyson V10轻量版 Digital Slim Fluffy无线吸尘器**[^1^] [^3^]:这款吸尘器专为宠物家庭设计,拥有强大吸力和多功能吸头,能够有效清除宠物毛发和灰尘。 - - 优点:轻便易用,续航时间长,过滤系统先进,能够有效去除过敏原; - - 缺点:价格较高,噪音较大,需要定期清洁滤网。 -- **飞利浦FC6908/82(W7)无线吸尘器**[^2^] [^4^]:这是一款手持式干吸无线吸尘器,专门为宠物家庭设计。它具有75分钟的超高续航,适合大户型;除螨涡轮刷,可以深度清洁床单被罩,免受螨虫侵害;吸拖一体设计,280ml大水箱,还可以往水箱里加入清洁剂。 - - 优点:清洁性能强劲,操作方便,适用多种场景; - - 缺点:噪音较大,达到84dB(A);尘杯容量较小,只有0.6L;充电时长较长,需要5小时。 -- **必胜(Bissell) 宠物吸尘器**[^2^]:这是一款便携式手持真空吸尘器,专为清理宠物毛发而设计。它配备特殊的橡胶喷嘴和软毛刷头,可以轻松拾取宠物毛发和灰尘;几乎静音运行。 - - 优点:轻巧易用,重量只有1.36kg;电池可充电可更换; - - 缺点:续航时间不稳定,在15-30分钟之间浮动;吸力不够强劲,在地毯或沙发上可能效果不佳;没有过滤系统或过滤网。 - -希望这些信息对您有所帮助。如果您还有其他问题,请随时提问。😊 - `, id: '3' }, - { author: 'user', text: '今天的新闻', id: '4' }, - { - author: 'bot', text: `你好,这是Bing。我根据你的搜索查询“今天的新闻”为你找到了以下信息: - - # 中国新闻 - - 《求是》杂志发表习近平总书记重要文章《深化党和国家机构改革 推进国家治理体系和治理能力现代化》[^1^] - - 梦之队第5金! 
全红婵陈芋汐女双10米台夺冠[^1^] - - 中央气象台7月16日18时发布台风橙色预警[^1^] - - 贵州石阡:暑期旅游带动乡村振兴[^1^] - - 激活大科学装置的“人才红利”[^1^] - - 聚才用才留才 让希望的田野成为智慧的田野[^1^] - - 科学家发现粮食系统增效可有效支撑我国碳中和[^1^] - - 成都以赛为媒提升城市美誉度[^1^] - - 香江观澜:“抢人才”成效显著 香港竞争力依旧[^1^] - - 浙江建德:新安江上享清凉[^1^] - - 定向资金持续精准滴灌 高质量发展力量不断积累[^1^] - - 企业信心恢复加大中长期投资 为稳增长奠定基础[^1^] - - 武汉渡江节开赛 千名“泳士”竞渡长江[^1^] - - 金融更加重视对实体经济支持 降低融资成本提振市场信心[^1^] - - 大运来了丨成都迎大运 全民健身动起来[^1^] - - 藏在高校里的宝藏博物馆[^1^] - - 中国汽车工业用70年赢得三个“全球第一”[^1^] - - 辽宁阜新市委常委、副市长赖化平涉嫌严重违纪违法接受审查调查[^1^] - - # 国际新闻 - - 联合国秘书长:可持续发展面临风险 各国需加强行动[^2^] - - 国际航运业加快绿色转型[^2^] - - 美企反对收紧对华芯片出口限制[^2^] - - 欧洲加大气候科技领域投资[^2^] - - 中企助力丹麦发展清洁能源[^2^] - - 中国代表呼吁国际社会共同努力防止乌克兰局势失控[^2^] - - 中国和阿尔及利亚共同构建新型国际关系典范[^2^] - - 以上信息仅供参考,具体详情请点击以下链接查看: - - [1] [中国新闻_央视网(cctv.com)](https://news.cctv.com/china/) - [2] [人民网_网上的人民日报](http://www.people.com.cn/)`, id: '5' }, - { author: 'user', text: '写一个快排', id: '6' }, - { author: 'bot', text: "你好,这是Bing。我可以帮你用python写一个快排。快速排序是一种基于分治思想的排序算法,它的基本思想是:从待排序的数组中选择一个基准元素,将数组分为两个子数组,使得左边的子数组中的元素都小于或等于基准元素,右边的子数组中的元素都大于或等于基准元素,然后对两个子数组递归地进行快速排序,最后合并结果。下面是用python实现的快速排序的代码:\n\n```python\n# 定义一个函数,用于交换数组中的两个元素\ndef swap(arr, i, j):\n temp = arr[i]\n arr[i] = arr[j]\n arr[j] = temp\n\n# 定义一个函数,用于对数组的一段区间进行划分\ndef partition(arr, low, high):\n # 选择最右边的元素作为基准元素\n pivot = arr[high]\n # 初始化一个指针,指向左边区间的最后一个位置\n i = low - 1\n # 遍历左边区间中的每个元素\n for j in range(low, high):\n # 如果当前元素小于或等于基准元素\n if arr[j] <= pivot:\n # 将指针右移一位\n i += 1\n # 将指针所指的元素与当前元素交换\n swap(arr, i, j)\n # 将基准元素与指针右边的第一个元素交换\n swap(arr, i + 1, high)\n # 返回基准元素的最终位置\n return i + 1\n\n# 定义一个函数,用于对数组进行快速排序\ndef quick_sort(arr, low, high):\n # 如果区间长度大于1\n if low < high:\n # 对区间进行划分,得到基准元素的位置\n p = partition(arr, low, high)\n # 对左边子区间进行快速排序\n quick_sort(arr, low, p - 1)\n # 对右边子区间进行快速排序\n quick_sort(arr, p + 1, high)\n\n# 测试代码\narr = [5, 3, 7, 2, 9, 1, 4, 6, 8]\nprint(\"原始数组:\", arr)\nquick_sort(arr, 0, len(arr) - 1)\nprint(\"排序后的数组:\", arr)\n```", id: '7' }, - { - author: 'bot', text: "好的,我会尝试画一只猫。\n > ![一只猫](https://tse2.mm.bing.net/th/id/OIG.jz34V0PNVkPC229h9spV?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)![一只猫](https://tse1.mm.bing.net/th/id/OIG.6g7d.XLZMP_iwAByLhvo?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)![一只猫](https://tse2.mm.bing.net/th/id/OIG.iAxF4ekekYn7sZw9SmU6?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)![一只猫](https://tse4.mm.bing.net/th/id/OIG.qDnzeSKzUCeJcrBqc5mX?w=270&h=270&c=6&r=0&o=5&dpr=1.5&pid=ImgGn)", - id: '8' - } -] - -export const GreetMessages = [ - '谢谢你! 知道你什么时候准备好继续前进总是很有帮助的。我现在能为你回答什么问题?', - '重新开始总是很棒。问我任何问题!', - '当然,我很乐意重新开始。我现在可以为你提供哪些帮助?', - '当然,我已准备好进行新的挑战。我现在可以为你做什么?', - '很好,让我们来更改主题。你在想什么?', - '不用担心,我很高兴尝试一些新内容。我现在可以为你回答什么问题?', - '好的,我准备好了!感谢重置。我们应该了解哪些内容?', - '感谢刷新!你有新的话题吗?', - '明白了,让我们重新开始。接下来应该讨论什么?', - '下一步!我可以为你做什么?', - '好的,我已准备好新话题。我们应该一起了解哪些内容?' 
-] - -export const bingConversationStyleAtom = atomWithStorage('bingConversationStyle', BingConversationStyle.Creative, undefined, { unstable_getOnInit: true }) -export const voiceAtom = atomWithStorage('enableTTS', false, undefined, { unstable_getOnInit: true }) - -type Param = { botId: BotId; page: string } - -const createBotInstance = () => { - return new BingWebBot({ - cookie: ' ', - ua: ' ', - }) -} - -export const chatFamily = atomFamily( - (param: Param) => { - return atomWithImmer({ - botId: param.botId, - bot: createBotInstance(), - messages: [] as ChatMessageModel[], - generatingMessageId: '', - abortController: undefined as AbortController | undefined, - conversationId: nanoid(), - }) - }, - (a, b) => a.botId === b.botId && a.page === b.page, -) - -export const hashAtom = atomWithHash('dialog', '') - -export const locationAtom = atomWithLocation() - -export const voiceListenAtom = atom(false) diff --git a/spaces/AB-TW/team-ai/app.py b/spaces/AB-TW/team-ai/app.py deleted file mode 100644 index f054f1b45cab7549a8e92ed1fa78b08cdae155ac..0000000000000000000000000000000000000000 --- a/spaces/AB-TW/team-ai/app.py +++ /dev/null @@ -1,190 +0,0 @@ -import gradio as gr -from langchain.document_loaders import TextLoader -from agents.tools.python_code_tool import generate_and_excute_python_code -from agents.tools.shell_tool import generate_and_excute_shell_code -from chains import HumanFeedBackChain, contextRewriteChain -from embedding import CustomEmbedding -from memories import HumenFeedbackBufferMemory -from agents.code_generate_agent import code_agent_executor, code_agent_tools -from agents.code_execute_agent import generate_and_excute_code_agent - - -baMemory = HumenFeedbackBufferMemory( - input_key="input", human_prefix="Answer", ai_prefix="AI") -baChain = HumanFeedBackChain(verbose=True, memory=baMemory) - -"""读取document/business_context.py文件内容作为context""" -context_path = "./documents/bussiness_context/business_context.md" - - -def sendMessage(chatbot, input): - chatbot.append(( - (None if len(input) == 0 else input), None)) - return chatbot - - -def clearMemory(chatbot): - chatbot.clear() - if baMemory != None: - baMemory.clear() - return chatbot, "" - -def loadContext(): - textloader = TextLoader(context_path) - return textloader.load()[0].page_content - - -def saveContext(context): - with open(context_path, 'w') as f: - f.write(context) - -def feedBack(context, story, chatbot=[], input=""): - if len(input) > 0: - context += (f"\n\n {input}") - saveContext(context) - response = baChain.run( - input=(input if len(input) == 0 else input), context=context, story=story, stop="\nAnswer:") - chatbot[-1][1] = response - return chatbot, "", context - - -customerEmbedding = CustomEmbedding() - -faqChain = customerEmbedding.getFAQAgent() - -code_agent_executor = code_agent_executor() -def faqFromLocal(input, chatbot=[]): - # response = faqChain({"question": f"{input}"}) - response = faqChain.run(input) - chatbot.append((input, response)) - return chatbot, "" - - -def generateEmbeddings(chatbot=[]): - response = customerEmbedding.calculateEmbedding() - chatbot.append((None, response)) - return chatbot - - -def generateCode(input: str, chatbot=[], returnCode=False): - if len(input) <=0: - chatbot[-1][1] = None - return chatbot, "" - response = code_agent_executor.run( - input=(input if len(input) == 0 else input)) - chatbot[-1][1] = response - return chatbot, "" - -def generateCodeByMultiPart(context: str, relateCode: str, toolName: str, chatbot=[]): - input = 
f"请根据如下信息{toolName}:\n{context}\n\n{relateCode}" - return generateCode(input, chatbot) - -def sendMessageByMultiPart(chatbot, context: str, relateCode: str, toolName: str): - input = f"请根据如下信息{toolName}:\n{context}\n\n{relateCode}" - chatbot.append((input, None)) - return chatbot - - -def rewriteContext(input, chatbot): - response = contextRewriteChain.run(input=input, verbose=True) - chatbot.append((input, response)) - return chatbot, response - -def generateCodeAndExcute(input, chatbot=[], language="python"): - request = f'''write a {language} script to solve the following problem and return code and the results:\n{input}''' - result = generate_and_excute_code_agent.run(request) - chatbot.append((input, result)) - return chatbot - -def generatePyhonCodeAndExcute(input, chatbot=[]): - request = f'''write a {language} script to solve the following problem and return code and the results:\n{input}''' - result = generate_and_excute_python_code.run(request) - chatbot.append((input, result)) - return chatbot - -def generateShellCodeAndExcute(input, chatbot=[]): - request = f'''write a {language} script to solve the following problem and return code and the results:\n{input}''' - result = generate_and_excute_shell_code.run(request) - chatbot.append((input, result)) - return chatbot - -toolTextBox = [] -with gr.Blocks() as demo: - with gr.Row(): - with gr.Tab("Business"): - with gr.Row(): - with gr.Column(): - chatbot = gr.Chatbot().style() - with gr.Row(): - txt = gr.Textbox(show_label=False, placeholder="Enter text and press enter").style( - container=False) - with gr.Column(): - with gr.Row(): - context = gr.Textbox(show_label=True, label="Context", placeholder="Enter Context").style( - container=False) - with gr.Row(): - story = gr.Textbox(show_label=True, label="User Story", placeholder="Enter User Story").style( - container=False) - with gr.Row(): - gr.Button("Generate Scenarios").click(clearMemory, [chatbot], [chatbot, txt]).then(sendMessage, [chatbot, txt], [chatbot]).then( - feedBack, [context, story, chatbot], [chatbot, txt]) - with gr.Row(): - with gr.Column(scale=5): - gr.Button("Rewrite Context").click(rewriteContext, [context, chatbot], [chatbot, context]) - with gr.Column(scale=1): - gr.Button("Revert").click(loadContext, [], [context]) - with gr.Row(): - gr.Button("Save Context").click(saveContext, [context], []) - - with gr.Tab("Tech"): - with gr.Row(): - with gr.Column(): - code_chatbot = gr.Chatbot().style() - with gr.Row(): - code = gr.Textbox(show_label=False, label="Code Generate", placeholder="Enter text and press enter").style( - container=False) - with gr.Column(): - with gr.Row(): - code_context = gr.Textbox(show_label=True, label="Context", placeholder="Enter Context").style( - container=False) - with gr.Row(): - relateCode = gr.Textbox(show_label=True, label="Relate Code", placeholder="Enter Relate Code").style( - container=False) - for index, tool in enumerate(code_agent_tools): - with gr.Row(): - toolTextBox.append(gr.Textbox(show_label=False, visible=False, label=tool.name, value=tool.name).style()) - gr.Button(tool.name).click( - sendMessageByMultiPart, [code_chatbot, code_context, relateCode, toolTextBox[index]], [code_chatbot]).then( - generateCodeByMultiPart, [code_context, relateCode, toolTextBox[index], code_chatbot], [code_chatbot, code]) - with gr.Tab("FAQ"): - faq_chatbot = gr.Chatbot().style() - with gr.Row(): - faq = gr.Textbox(show_label=False, placeholder="Enter text and press enter").style( - container=False) - with gr.Row(): - gr.Button("Regenerate 
embedding").click(generateEmbeddings,[faq_chatbot], [faq_chatbot]) - with gr.Tab("TOOL"): - with gr.Row(): - with gr.Column(): - tool_request = gr.Textbox(show_label=False, placeholder="Enter your tool Request").style( - container=False, show_copy_button=True) - language = gr.Dropdown(choices=["Python", "Shell"], label="Language", value="Python").style() - tool_button = gr.Button("Generate Code and Execute with agent") - python_tool_button = gr.Button("Generate Python Code and Execute") - shell_tool_button = gr.Button("Generate Sehll Code and Execute") - with gr.Column(): - tool_chatbot = gr.Chatbot(elem_id="chatbot").style(container=False) - tool_button.click(generateCodeAndExcute,[tool_request, tool_chatbot, language], [tool_chatbot]) - python_tool_button.click(generatePyhonCodeAndExcute,[tool_request, tool_chatbot], [tool_chatbot]) - shell_tool_button.click(generateShellCodeAndExcute,[tool_request, tool_chatbot], [tool_chatbot]) - - txt.submit(sendMessage, [chatbot, txt], [chatbot]).then( - feedBack, [context, story, chatbot, txt], [chatbot, txt, context]) - - code.submit(sendMessage, [code_chatbot, code], [code_chatbot]).then( - generateCode, [code, code_chatbot], [code_chatbot, code]) - - faq.submit(faqFromLocal, [faq, faq_chatbot], [faq_chatbot, faq]) - - demo.load(loadContext, [], [context]) -demo.launch() diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/infer_pack/transforms.py b/spaces/AI-Hobbyist/Hoyo-RVC/infer_pack/transforms.py deleted file mode 100644 index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000 --- a/spaces/AI-Hobbyist/Hoyo-RVC/infer_pack/transforms.py +++ /dev/null @@ -1,209 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = {"tails": tails, "tail_bound": tail_bound} - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails="linear", - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == "linear": - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 
0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError("{} tails are not implemented.".format(tails)) - - ( - outputs[inside_interval_mask], - logabsdet[inside_interval_mask], - ) = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, - right=tail_bound, - bottom=-tail_bound, - top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - ) - - return outputs, logabsdet - - -def rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0.0, - right=1.0, - bottom=0.0, - top=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError("Input to a transform is not within its domain") - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError("Minimal bin width too large for the number of bins") - if min_bin_height * num_bins > 1.0: - raise ValueError("Minimal bin height too large for the number of bins") - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) + input_heights * (input_delta - input_derivatives) - b = input_heights * input_derivatives - (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) - c = -input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + 
input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * ( - input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta - ) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/AI-Zero-to-Hero/03-GR-AI-Text2ArtGenerator/README.md b/spaces/AI-Zero-to-Hero/03-GR-AI-Text2ArtGenerator/README.md deleted file mode 100644 index 8874dcc12ba4c5f683c68d6e394d6a238edf6ea1..0000000000000000000000000000000000000000 --- a/spaces/AI-Zero-to-Hero/03-GR-AI-Text2ArtGenerator/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 03 GR AI Text2ArtGenerator -emoji: 🦀 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.4 -app_file: app.py -pinned: false -license: artistic-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AIConsultant/MusicGen/audiocraft/data/audio_dataset.py b/spaces/AIConsultant/MusicGen/audiocraft/data/audio_dataset.py deleted file mode 100644 index 9d7442526186b3712f5d4754f928a40ecd964174..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/audiocraft/data/audio_dataset.py +++ /dev/null @@ -1,587 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -"""AudioDataset support. In order to handle a larger number of files -without having to scan again the folders, we precompute some metadata -(filename, sample rate, duration), and use that to efficiently sample audio segments. 
-""" -import argparse -import copy -from concurrent.futures import ThreadPoolExecutor, Future -from dataclasses import dataclass, fields -from contextlib import ExitStack -from functools import lru_cache -import gzip -import json -import logging -import os -from pathlib import Path -import random -import sys -import typing as tp - -import torch -import torch.nn.functional as F - -from .audio import audio_read, audio_info -from .audio_utils import convert_audio -from .zip import PathInZip - -try: - import dora -except ImportError: - dora = None # type: ignore - - -@dataclass(order=True) -class BaseInfo: - - @classmethod - def _dict2fields(cls, dictionary: dict): - return { - field.name: dictionary[field.name] - for field in fields(cls) if field.name in dictionary - } - - @classmethod - def from_dict(cls, dictionary: dict): - _dictionary = cls._dict2fields(dictionary) - return cls(**_dictionary) - - def to_dict(self): - return { - field.name: self.__getattribute__(field.name) - for field in fields(self) - } - - -@dataclass(order=True) -class AudioMeta(BaseInfo): - path: str - duration: float - sample_rate: int - amplitude: tp.Optional[float] = None - weight: tp.Optional[float] = None - # info_path is used to load additional information about the audio file that is stored in zip files. - info_path: tp.Optional[PathInZip] = None - - @classmethod - def from_dict(cls, dictionary: dict): - base = cls._dict2fields(dictionary) - if 'info_path' in base and base['info_path'] is not None: - base['info_path'] = PathInZip(base['info_path']) - return cls(**base) - - def to_dict(self): - d = super().to_dict() - if d['info_path'] is not None: - d['info_path'] = str(d['info_path']) - return d - - -@dataclass(order=True) -class SegmentInfo(BaseInfo): - meta: AudioMeta - seek_time: float - # The following values are given once the audio is processed, e.g. - # at the target sample rate and target number of channels. - n_frames: int # actual number of frames without padding - total_frames: int # total number of frames, padding included - sample_rate: int # actual sample rate - channels: int # number of audio channels. - - -DEFAULT_EXTS = ['.wav', '.mp3', '.flac', '.ogg', '.m4a'] - -logger = logging.getLogger(__name__) - - -def _get_audio_meta(file_path: str, minimal: bool = True) -> AudioMeta: - """AudioMeta from a path to an audio file. - - Args: - file_path (str): Resolved path of valid audio file. - minimal (bool): Whether to only load the minimal set of metadata (takes longer if not). - Returns: - AudioMeta: Audio file path and its metadata. - """ - info = audio_info(file_path) - amplitude: tp.Optional[float] = None - if not minimal: - wav, sr = audio_read(file_path) - amplitude = wav.abs().max().item() - return AudioMeta(file_path, info.duration, info.sample_rate, amplitude) - - -def _resolve_audio_meta(m: AudioMeta, fast: bool = True) -> AudioMeta: - """If Dora is available as a dependency, try to resolve potential relative paths - in list of AudioMeta. This method is expected to be used when loading meta from file. - - Args: - m (AudioMeta): Audio meta to resolve. - fast (bool): If True, uses a really fast check for determining if a file - is already absolute or not. Only valid on Linux/Mac. - Returns: - AudioMeta: Audio meta with resolved path. 
- """ - def is_abs(m): - if fast: - return str(m)[0] == '/' - else: - os.path.isabs(str(m)) - - if not dora: - return m - - if not is_abs(m.path): - m.path = dora.git_save.to_absolute_path(m.path) - if m.info_path is not None and not is_abs(m.info_path.zip_path): - m.info_path.zip_path = dora.git_save.to_absolute_path(m.path) - return m - - -def find_audio_files(path: tp.Union[Path, str], - exts: tp.List[str] = DEFAULT_EXTS, - resolve: bool = True, - minimal: bool = True, - progress: bool = False, - workers: int = 0) -> tp.List[AudioMeta]: - """Build a list of AudioMeta from a given path, - collecting relevant audio files and fetching meta info. - - Args: - path (str or Path): Path to folder containing audio files. - exts (list of str): List of file extensions to consider for audio files. - minimal (bool): Whether to only load the minimal set of metadata (takes longer if not). - progress (bool): Whether to log progress on audio files collection. - workers (int): number of parallel workers, if 0, use only the current thread. - Returns: - list of AudioMeta: List of audio file path and its metadata. - """ - audio_files = [] - futures: tp.List[Future] = [] - pool: tp.Optional[ThreadPoolExecutor] = None - with ExitStack() as stack: - if workers > 0: - pool = ThreadPoolExecutor(workers) - stack.enter_context(pool) - - if progress: - print("Finding audio files...") - for root, folders, files in os.walk(path, followlinks=True): - for file in files: - full_path = Path(root) / file - if full_path.suffix.lower() in exts: - audio_files.append(full_path) - if pool is not None: - futures.append(pool.submit(_get_audio_meta, str(audio_files[-1]), minimal)) - if progress: - print(format(len(audio_files), " 8d"), end='\r', file=sys.stderr) - - if progress: - print("Getting audio metadata...") - meta: tp.List[AudioMeta] = [] - for idx, file_path in enumerate(audio_files): - try: - if pool is None: - m = _get_audio_meta(str(file_path), minimal) - else: - m = futures[idx].result() - if resolve: - m = _resolve_audio_meta(m) - except Exception as err: - print("Error with", str(file_path), err, file=sys.stderr) - continue - meta.append(m) - if progress: - print(format((1 + idx) / len(audio_files), " 3.1%"), end='\r', file=sys.stderr) - meta.sort() - return meta - - -def load_audio_meta(path: tp.Union[str, Path], - resolve: bool = True, fast: bool = True) -> tp.List[AudioMeta]: - """Load list of AudioMeta from an optionally compressed json file. - - Args: - path (str or Path): Path to JSON file. - resolve (bool): Whether to resolve the path from AudioMeta (default=True). - fast (bool): activates some tricks to make things faster. - Returns: - list of AudioMeta: List of audio file path and its total duration. - """ - open_fn = gzip.open if str(path).lower().endswith('.gz') else open - with open_fn(path, 'rb') as fp: # type: ignore - lines = fp.readlines() - meta = [] - for line in lines: - d = json.loads(line) - m = AudioMeta.from_dict(d) - if resolve: - m = _resolve_audio_meta(m, fast=fast) - meta.append(m) - return meta - - -def save_audio_meta(path: tp.Union[str, Path], meta: tp.List[AudioMeta]): - """Save the audio metadata to the file pointer as json. - - Args: - path (str or Path): Path to JSON file. - metadata (list of BaseAudioMeta): List of audio meta to save. 
- """ - Path(path).parent.mkdir(exist_ok=True, parents=True) - open_fn = gzip.open if str(path).lower().endswith('.gz') else open - with open_fn(path, 'wb') as fp: # type: ignore - for m in meta: - json_str = json.dumps(m.to_dict()) + '\n' - json_bytes = json_str.encode('utf-8') - fp.write(json_bytes) - - -class AudioDataset: - """Base audio dataset. - - The dataset takes a list of AudioMeta and create a dataset composed of segments of audio - and potentially additional information, by creating random segments from the list of audio - files referenced in the metadata and applying minimal data pre-processing such as resampling, - mixing of channels, padding, etc. - - If no segment_duration value is provided, the AudioDataset will return the full wav for each - audio file. Otherwise, it will randomly sample audio files and create a segment of the specified - duration, applying padding if required. - - By default, only the torch Tensor corresponding to the waveform is returned. Setting return_info=True - allows to return a tuple containing the torch Tensor and additional metadata on the segment and the - original audio meta. - - Note that you can call `start_epoch(epoch)` in order to get - a deterministic "randomization" for `shuffle=True`. - For a given epoch and dataset index, this will always return the same extract. - You can get back some diversity by setting the `shuffle_seed` param. - - Args: - meta (list of AudioMeta): List of audio files metadata. - segment_duration (float, optional): Optional segment duration of audio to load. - If not specified, the dataset will load the full audio segment from the file. - shuffle (bool): Set to `True` to have the data reshuffled at every epoch. - sample_rate (int): Target sample rate of the loaded audio samples. - channels (int): Target number of channels of the loaded audio samples. - sample_on_duration (bool): Set to `True` to sample segments with probability - dependent on audio file duration. This is only used if `segment_duration` is provided. - sample_on_weight (bool): Set to `True` to sample segments using the `weight` entry of - `AudioMeta`. If `sample_on_duration` is also True, the actual weight will be the product - of the file duration and file weight. This is only used if `segment_duration` is provided. - min_segment_ratio (float): Minimum segment ratio to use when the audio file - is shorter than the desired segment. - max_read_retry (int): Maximum number of retries to sample an audio segment from the dataset. - return_info (bool): Whether to return the wav only or return wav along with segment info and metadata. - min_audio_duration (float, optional): Minimum audio file duration, in seconds, if provided - audio shorter than this will be filtered out. - max_audio_duration (float, optional): Maximal audio file duration in seconds, if provided - audio longer than this will be filtered out. - shuffle_seed (int): can be used to further randomize - load_wav (bool): if False, skip loading the wav but returns a tensor of 0 - with the expected segment_duration (which must be provided if load_wav is False). - permutation_on_files (bool): only if `sample_on_weight` and `sample_on_duration` - are False. Will ensure a permutation on files when going through the dataset. - In that case the epoch number must be provided in order for the model - to continue the permutation across epochs. 
In that case, it is assumed - that `num_samples = total_batch_size * num_updates_per_epoch`, with - `total_batch_size` the overall batch size accounting for all gpus. - """ - def __init__(self, - meta: tp.List[AudioMeta], - segment_duration: tp.Optional[float] = None, - shuffle: bool = True, - num_samples: int = 10_000, - sample_rate: int = 48_000, - channels: int = 2, - pad: bool = True, - sample_on_duration: bool = True, - sample_on_weight: bool = True, - min_segment_ratio: float = 0.5, - max_read_retry: int = 10, - return_info: bool = False, - min_audio_duration: tp.Optional[float] = None, - max_audio_duration: tp.Optional[float] = None, - shuffle_seed: int = 0, - load_wav: bool = True, - permutation_on_files: bool = False, - ): - assert len(meta) > 0, "No audio meta provided to AudioDataset. Please check loading of audio meta." - assert segment_duration is None or segment_duration > 0 - assert segment_duration is None or min_segment_ratio >= 0 - self.segment_duration = segment_duration - self.min_segment_ratio = min_segment_ratio - self.max_audio_duration = max_audio_duration - self.min_audio_duration = min_audio_duration - if self.min_audio_duration is not None and self.max_audio_duration is not None: - assert self.min_audio_duration <= self.max_audio_duration - self.meta: tp.List[AudioMeta] = self._filter_duration(meta) - assert len(self.meta) # Fail fast if all data has been filtered. - self.total_duration = sum(d.duration for d in self.meta) - - if segment_duration is None: - num_samples = len(self.meta) - self.num_samples = num_samples - self.shuffle = shuffle - self.sample_rate = sample_rate - self.channels = channels - self.pad = pad - self.sample_on_weight = sample_on_weight - self.sample_on_duration = sample_on_duration - self.sampling_probabilities = self._get_sampling_probabilities() - self.max_read_retry = max_read_retry - self.return_info = return_info - self.shuffle_seed = shuffle_seed - self.current_epoch: tp.Optional[int] = None - self.load_wav = load_wav - if not load_wav: - assert segment_duration is not None - self.permutation_on_files = permutation_on_files - if permutation_on_files: - assert not self.sample_on_duration - assert not self.sample_on_weight - assert self.shuffle - - def start_epoch(self, epoch: int): - self.current_epoch = epoch - - def __len__(self): - return self.num_samples - - def _get_sampling_probabilities(self, normalized: bool = True): - """Return the sampling probabilities for each file inside `self.meta`.""" - scores: tp.List[float] = [] - for file_meta in self.meta: - score = 1. - if self.sample_on_weight and file_meta.weight is not None: - score *= file_meta.weight - if self.sample_on_duration: - score *= file_meta.duration - scores.append(score) - probabilities = torch.tensor(scores) - if normalized: - probabilities /= probabilities.sum() - return probabilities - - @staticmethod - @lru_cache(16) - def _get_file_permutation(num_files: int, permutation_index: int, base_seed: int): - # Used to keep the most recent files permutation in memory implicitely. - # will work unless someone is using a lot of Datasets in parallel. - rng = torch.Generator() - rng.manual_seed(base_seed + permutation_index) - return torch.randperm(num_files, generator=rng) - - def sample_file(self, index: int, rng: torch.Generator) -> AudioMeta: - """Sample a given file from `self.meta`. Can be overridden in subclasses. - This is only called if `segment_duration` is not None. - - You must use the provided random number generator `rng` for reproducibility. 
- You can further make use of the index accessed. - """ - if self.permutation_on_files: - assert self.current_epoch is not None - total_index = self.current_epoch * len(self) + index - permutation_index = total_index // len(self.meta) - relative_index = total_index % len(self.meta) - permutation = AudioDataset._get_file_permutation( - len(self.meta), permutation_index, self.shuffle_seed) - file_index = permutation[relative_index] - return self.meta[file_index] - - if not self.sample_on_weight and not self.sample_on_duration: - file_index = int(torch.randint(len(self.sampling_probabilities), (1,), generator=rng).item()) - else: - file_index = int(torch.multinomial(self.sampling_probabilities, 1, generator=rng).item()) - - return self.meta[file_index] - - def _audio_read(self, path: str, seek_time: float = 0, duration: float = -1): - # Override this method in subclass if needed. - if self.load_wav: - return audio_read(path, seek_time, duration, pad=False) - else: - assert self.segment_duration is not None - n_frames = int(self.sample_rate * self.segment_duration) - return torch.zeros(self.channels, n_frames), self.sample_rate - - def __getitem__(self, index: int) -> tp.Union[torch.Tensor, tp.Tuple[torch.Tensor, SegmentInfo]]: - if self.segment_duration is None: - file_meta = self.meta[index] - out, sr = audio_read(file_meta.path) - out = convert_audio(out, sr, self.sample_rate, self.channels) - n_frames = out.shape[-1] - segment_info = SegmentInfo(file_meta, seek_time=0., n_frames=n_frames, total_frames=n_frames, - sample_rate=self.sample_rate, channels=out.shape[0]) - else: - rng = torch.Generator() - if self.shuffle: - # We use index, plus extra randomness, either totally random if we don't know the epoch. - # otherwise we make use of the epoch number and optional shuffle_seed. - if self.current_epoch is None: - rng.manual_seed(index + self.num_samples * random.randint(0, 2**24)) - else: - rng.manual_seed(index + self.num_samples * (self.current_epoch + self.shuffle_seed)) - else: - # We only use index - rng.manual_seed(index) - - for retry in range(self.max_read_retry): - file_meta = self.sample_file(index, rng) - # We add some variance in the file position even if audio file is smaller than segment - # without ending up with empty segments - max_seek = max(0, file_meta.duration - self.segment_duration * self.min_segment_ratio) - seek_time = torch.rand(1, generator=rng).item() * max_seek - try: - out, sr = audio_read(file_meta.path, seek_time, self.segment_duration, pad=False) - out = convert_audio(out, sr, self.sample_rate, self.channels) - n_frames = out.shape[-1] - target_frames = int(self.segment_duration * self.sample_rate) - if self.pad: - out = F.pad(out, (0, target_frames - n_frames)) - segment_info = SegmentInfo(file_meta, seek_time, n_frames=n_frames, total_frames=target_frames, - sample_rate=self.sample_rate, channels=out.shape[0]) - except Exception as exc: - logger.warning("Error opening file %s: %r", file_meta.path, exc) - if retry == self.max_read_retry - 1: - raise - else: - break - - if self.return_info: - # Returns the wav and additional information on the wave segment - return out, segment_info - else: - return out - - def collater(self, samples): - """The collater function has to be provided to the dataloader - if AudioDataset has return_info=True in order to properly collate - the samples of a batch. - """ - if self.segment_duration is None and len(samples) > 1: - assert self.pad, "Must allow padding when batching examples of different durations." 
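        # Usage note (annotation): this method is meant to be passed to torch.utils.data.DataLoader as its collate_fn, e.g. DataLoader(dataset, batch_size=16, collate_fn=dataset.collater), whenever return_info=True; batch_size=16 is only an illustrative value.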
- - # In this case the audio reaching the collater is of variable length as segment_duration=None. - to_pad = self.segment_duration is None and self.pad - if to_pad: - max_len = max([wav.shape[-1] for wav, _ in samples]) - - def _pad_wav(wav): - return F.pad(wav, (0, max_len - wav.shape[-1])) - - if self.return_info: - if len(samples) > 0: - assert len(samples[0]) == 2 - assert isinstance(samples[0][0], torch.Tensor) - assert isinstance(samples[0][1], SegmentInfo) - - wavs = [wav for wav, _ in samples] - segment_infos = [copy.deepcopy(info) for _, info in samples] - - if to_pad: - # Each wav could be of a different duration as they are not segmented. - for i in range(len(samples)): - # Determines the total length of the signal with padding, so we update here as we pad. - segment_infos[i].total_frames = max_len - wavs[i] = _pad_wav(wavs[i]) - - wav = torch.stack(wavs) - return wav, segment_infos - else: - assert isinstance(samples[0], torch.Tensor) - if to_pad: - samples = [_pad_wav(s) for s in samples] - return torch.stack(samples) - - def _filter_duration(self, meta: tp.List[AudioMeta]) -> tp.List[AudioMeta]: - """Filters out audio files with audio durations that will not allow to sample examples from them.""" - orig_len = len(meta) - - # Filter data that is too short. - if self.min_audio_duration is not None: - meta = [m for m in meta if m.duration >= self.min_audio_duration] - - # Filter data that is too long. - if self.max_audio_duration is not None: - meta = [m for m in meta if m.duration <= self.max_audio_duration] - - filtered_len = len(meta) - removed_percentage = 100*(1-float(filtered_len)/orig_len) - msg = 'Removed %.2f percent of the data because it was too short or too long.' % removed_percentage - if removed_percentage < 10: - logging.debug(msg) - else: - logging.warning(msg) - return meta - - @classmethod - def from_meta(cls, root: tp.Union[str, Path], **kwargs): - """Instantiate AudioDataset from a path to a directory containing a manifest as a jsonl file. - - Args: - root (str or Path): Path to root folder containing audio files. - kwargs: Additional keyword arguments for the AudioDataset. - """ - root = Path(root) - if root.is_dir(): - if (root / 'data.jsonl').exists(): - root = root / 'data.jsonl' - elif (root / 'data.jsonl.gz').exists(): - root = root / 'data.jsonl.gz' - else: - raise ValueError("Don't know where to read metadata from in the dir. " - "Expecting either a data.jsonl or data.jsonl.gz file but none found.") - meta = load_audio_meta(root) - return cls(meta, **kwargs) - - @classmethod - def from_path(cls, root: tp.Union[str, Path], minimal_meta: bool = True, - exts: tp.List[str] = DEFAULT_EXTS, **kwargs): - """Instantiate AudioDataset from a path containing (possibly nested) audio files. - - Args: - root (str or Path): Path to root folder containing audio files. - minimal_meta (bool): Whether to only load minimal metadata or not. - exts (list of str): Extensions for audio files. - kwargs: Additional keyword arguments for the AudioDataset. 
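        Example (illustrative, hypothetical path and values): AudioDataset.from_path('/data/audio', segment_duration=30., sample_rate=48000, channels=2, return_info=True)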
- """ - root = Path(root) - if root.is_file(): - meta = load_audio_meta(root, resolve=True) - else: - meta = find_audio_files(root, exts, minimal=minimal_meta, resolve=True) - return cls(meta, **kwargs) - - -def main(): - logging.basicConfig(stream=sys.stderr, level=logging.INFO) - parser = argparse.ArgumentParser( - prog='audio_dataset', - description='Generate .jsonl files by scanning a folder.') - parser.add_argument('root', help='Root folder with all the audio files') - parser.add_argument('output_meta_file', - help='Output file to store the metadata, ') - parser.add_argument('--complete', - action='store_false', dest='minimal', default=True, - help='Retrieve all metadata, even the one that are expansive ' - 'to compute (e.g. normalization).') - parser.add_argument('--resolve', - action='store_true', default=False, - help='Resolve the paths to be absolute and with no symlinks.') - parser.add_argument('--workers', - default=10, type=int, - help='Number of workers.') - args = parser.parse_args() - meta = find_audio_files(args.root, DEFAULT_EXTS, progress=True, - resolve=args.resolve, minimal=args.minimal, workers=args.workers) - save_audio_meta(args.output_meta_file, meta) - - -if __name__ == '__main__': - main() diff --git a/spaces/AICopilot/Dropbox/README.md b/spaces/AICopilot/Dropbox/README.md deleted file mode 100644 index 205a990da112f6cab63766a22b1a8e169af8da28..0000000000000000000000000000000000000000 --- a/spaces/AICopilot/Dropbox/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Dropbox -emoji: 🌍 -colorFrom: pink -colorTo: indigo -sdk: streamlit -sdk_version: 1.2.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/AIFILMS/StyleGANEX/configs/transforms_config.py b/spaces/AIFILMS/StyleGANEX/configs/transforms_config.py deleted file mode 100644 index 0af0404f4f59c79e5f672205031470bdab013622..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/StyleGANEX/configs/transforms_config.py +++ /dev/null @@ -1,242 +0,0 @@ -from abc import abstractmethod -import torchvision.transforms as transforms -from datasets import augmentations - - -class TransformsConfig(object): - - def __init__(self, opts): - self.opts = opts - - @abstractmethod - def get_transforms(self): - pass - - -class EncodeTransforms(TransformsConfig): - - def __init__(self, opts): - super(EncodeTransforms, self).__init__(opts) - - def get_transforms(self): - transforms_dict = { - 'transform_gt_train': transforms.Compose([ - transforms.Resize((320, 320)), - transforms.RandomHorizontalFlip(0.5), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_source': None, - 'transform_test': transforms.Compose([ - transforms.Resize((320, 320)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_inference': transforms.Compose([ - transforms.Resize((320, 320)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) - } - return transforms_dict - - -class FrontalizationTransforms(TransformsConfig): - - def __init__(self, opts): - super(FrontalizationTransforms, self).__init__(opts) - - def get_transforms(self): - transforms_dict = { - 'transform_gt_train': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.RandomHorizontalFlip(0.5), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_source': transforms.Compose([ - 
transforms.Resize((256, 256)), - transforms.RandomHorizontalFlip(0.5), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_test': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_inference': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) - } - return transforms_dict - - -class SketchToImageTransforms(TransformsConfig): - - def __init__(self, opts): - super(SketchToImageTransforms, self).__init__(opts) - - def get_transforms(self): - transforms_dict = { - 'transform_gt_train': transforms.Compose([ - transforms.Resize((320, 320)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_source': transforms.Compose([ - transforms.Resize((320, 320)), - transforms.ToTensor()]), - 'transform_test': transforms.Compose([ - transforms.Resize((320, 320)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_inference': transforms.Compose([ - transforms.Resize((320, 320)), - transforms.ToTensor()]), - } - return transforms_dict - - -class SegToImageTransforms(TransformsConfig): - - def __init__(self, opts): - super(SegToImageTransforms, self).__init__(opts) - - def get_transforms(self): - transforms_dict = { - 'transform_gt_train': transforms.Compose([ - transforms.Resize((320, 320)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_source': transforms.Compose([ - transforms.Resize((320, 320)), - augmentations.ToOneHot(self.opts.label_nc), - transforms.ToTensor()]), - 'transform_test': transforms.Compose([ - transforms.Resize((320, 320)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_inference': transforms.Compose([ - transforms.Resize((320, 320)), - augmentations.ToOneHot(self.opts.label_nc), - transforms.ToTensor()]) - } - return transforms_dict - - -class SuperResTransforms(TransformsConfig): - - def __init__(self, opts): - super(SuperResTransforms, self).__init__(opts) - - def get_transforms(self): - if self.opts.resize_factors is None: - self.opts.resize_factors = '1,2,4,8,16,32' - factors = [int(f) for f in self.opts.resize_factors.split(",")] - print("Performing down-sampling with factors: {}".format(factors)) - transforms_dict = { - 'transform_gt_train': transforms.Compose([ - transforms.Resize((1280, 1280)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_source': transforms.Compose([ - transforms.Resize((320, 320)), - augmentations.BilinearResize(factors=factors), - transforms.Resize((320, 320)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_test': transforms.Compose([ - transforms.Resize((1280, 1280)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_inference': transforms.Compose([ - transforms.Resize((320, 320)), - augmentations.BilinearResize(factors=factors), - transforms.Resize((320, 320)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) - } - return transforms_dict - - -class SuperResTransforms_320(TransformsConfig): - - def __init__(self, opts): - super(SuperResTransforms_320, self).__init__(opts) - - def get_transforms(self): - if self.opts.resize_factors is None: - 
self.opts.resize_factors = '1,2,4,8,16,32' - factors = [int(f) for f in self.opts.resize_factors.split(",")] - print("Performing down-sampling with factors: {}".format(factors)) - transforms_dict = { - 'transform_gt_train': transforms.Compose([ - transforms.Resize((320, 320)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_source': transforms.Compose([ - transforms.Resize((320, 320)), - augmentations.BilinearResize(factors=factors), - transforms.Resize((320, 320)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_test': transforms.Compose([ - transforms.Resize((320, 320)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_inference': transforms.Compose([ - transforms.Resize((320, 320)), - augmentations.BilinearResize(factors=factors), - transforms.Resize((320, 320)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) - } - return transforms_dict - - -class ToonifyTransforms(TransformsConfig): - - def __init__(self, opts): - super(ToonifyTransforms, self).__init__(opts) - - def get_transforms(self): - transforms_dict = { - 'transform_gt_train': transforms.Compose([ - transforms.Resize((1024, 1024)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_source': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_test': transforms.Compose([ - transforms.Resize((1024, 1024)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_inference': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) - } - return transforms_dict - -class EditingTransforms(TransformsConfig): - - def __init__(self, opts): - super(EditingTransforms, self).__init__(opts) - - def get_transforms(self): - transforms_dict = { - 'transform_gt_train': transforms.Compose([ - transforms.Resize((1280, 1280)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_source': transforms.Compose([ - transforms.Resize((320, 320)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_test': transforms.Compose([ - transforms.Resize((1280, 1280)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]), - 'transform_inference': transforms.Compose([ - transforms.Resize((320, 320)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) - } - return transforms_dict \ No newline at end of file diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/midas/midas/base_model.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/midas/midas/base_model.py deleted file mode 100644 index 5cf430239b47ec5ec07531263f26f5c24a2311cd..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/midas/midas/base_model.py +++ /dev/null @@ -1,16 +0,0 @@ -import torch - - -class BaseModel(torch.nn.Module): - def load(self, path): - """Load model from file. 
- - Args: - path (str): file path - """ - parameters = torch.load(path, map_location=torch.device('cpu')) - - if "optimizer" in parameters: - parameters = parameters["model"] - - self.load_state_dict(parameters) diff --git a/spaces/AISuperheroes/07GR-NLP-Seq2Seq-AutoQA/README.md b/spaces/AISuperheroes/07GR-NLP-Seq2Seq-AutoQA/README.md deleted file mode 100644 index 11a560d4ab5bb0e5ba99d748a2321ae7306aaa5d..0000000000000000000000000000000000000000 --- a/spaces/AISuperheroes/07GR-NLP-Seq2Seq-AutoQA/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 07GR NLP Seq2Seq AutoQA -emoji: 😻 -colorFrom: gray -colorTo: purple -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Aaron299/bingo/Dockerfile b/spaces/Aaron299/bingo/Dockerfile deleted file mode 100644 index c677b05b75f7e4b2beee8c97fb47957a0861a83e..0000000000000000000000000000000000000000 --- a/spaces/Aaron299/bingo/Dockerfile +++ /dev/null @@ -1,7 +0,0 @@ -FROM weaigc/bingo:latest - -ARG DEBIAN_FRONTEND=noninteractive - -ENV BING_HEADER "" - -CMD npm start diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/base_provider.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/base_provider.py deleted file mode 100644 index 35764081ac16bf631166e208274ad58ba6547cbe..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/base_provider.py +++ /dev/null @@ -1,138 +0,0 @@ -from __future__ import annotations - -from asyncio import AbstractEventLoop -from concurrent.futures import ThreadPoolExecutor -from abc import ABC, abstractmethod - -from .helper import get_event_loop, get_cookies, format_prompt -from ..typing import AsyncGenerator, CreateResult - - -class BaseProvider(ABC): - url: str - working: bool = False - needs_auth: bool = False - supports_stream: bool = False - supports_gpt_35_turbo: bool = False - supports_gpt_4: bool = False - - @staticmethod - @abstractmethod - def create_completion( - model: str, - messages: list[dict[str, str]], - stream: bool, - **kwargs - ) -> CreateResult: - raise NotImplementedError() - - @classmethod - async def create_async( - cls, - model: str, - messages: list[dict[str, str]], - *, - loop: AbstractEventLoop = None, - executor: ThreadPoolExecutor = None, - **kwargs - ) -> str: - if not loop: - loop = get_event_loop() - - def create_func() -> str: - return "".join(cls.create_completion( - model, - messages, - False, - **kwargs - )) - - return await loop.run_in_executor( - executor, - create_func - ) - - @classmethod - @property - def params(cls) -> str: - params = [ - ("model", "str"), - ("messages", "list[dict[str, str]]"), - ("stream", "bool"), - ] - param = ", ".join([": ".join(p) for p in params]) - return f"g4f.provider.{cls.__name__} supports: ({param})" - - -class AsyncProvider(BaseProvider): - @classmethod - def create_completion( - cls, - model: str, - messages: list[dict[str, str]], - stream: bool = False, - **kwargs - ) -> CreateResult: - loop = get_event_loop() - coro = cls.create_async(model, messages, **kwargs) - yield loop.run_until_complete(coro) - - @staticmethod - @abstractmethod - async def create_async( - model: str, - messages: list[dict[str, str]], - **kwargs - ) -> str: - raise NotImplementedError() - - -class AsyncGeneratorProvider(AsyncProvider): - supports_stream = True - - @classmethod - def create_completion( - cls, - model: str, - messages: list[dict[str, str]], - stream: 
bool = True, - **kwargs - ) -> CreateResult: - loop = get_event_loop() - generator = cls.create_async_generator( - model, - messages, - stream=stream, - **kwargs - ) - gen = generator.__aiter__() - while True: - try: - yield loop.run_until_complete(gen.__anext__()) - except StopAsyncIteration: - break - - @classmethod - async def create_async( - cls, - model: str, - messages: list[dict[str, str]], - **kwargs - ) -> str: - return "".join([ - chunk async for chunk in cls.create_async_generator( - model, - messages, - stream=False, - **kwargs - ) - ]) - - @staticmethod - @abstractmethod - def create_async_generator( - model: str, - messages: list[dict[str, str]], - **kwargs - ) -> AsyncGenerator: - raise NotImplementedError() \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/canvasinput.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/canvasinput.js deleted file mode 100644 index 721c242b4fc04c350e8529377da255e11ce5bcf7..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/canvasinput.js +++ /dev/null @@ -1,2 +0,0 @@ -import CanvasInput from './gameobjects/dynamictext/canvasinput/CanvasInput.js'; -export default CanvasInput; \ No newline at end of file diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/training_scripts/sg2/training/networks.py b/spaces/Amrrs/DragGan-Inversion/stylegan_human/training_scripts/sg2/training/networks.py deleted file mode 100644 index 291d1f6d157aeab10896bc106c15fe4d03fcb145..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/stylegan_human/training_scripts/sg2/training/networks.py +++ /dev/null @@ -1,966 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import numpy as np -import torch -from torch_utils import misc -from torch_utils import persistence -from torch_utils.ops import conv2d_resample -from torch_utils.ops import upfirdn2d -from torch_utils.ops import bias_act -from torch_utils.ops import fma - -# ---------------------------------------------------------------------------- - - -@misc.profiled_function -def normalize_2nd_moment(x, dim=1, eps=1e-8): - return x * (x.square().mean(dim=dim, keepdim=True) + eps).rsqrt() - -# ---------------------------------------------------------------------------- - - -@misc.profiled_function -def modulated_conv2d( - # Input tensor of shape [batch_size, in_channels, in_height, in_width]. - x, - # Weight tensor of shape [out_channels, in_channels, kernel_height, kernel_width]. - weight, - # Modulation coefficients of shape [batch_size, in_channels]. - styles, - noise=None, # Optional noise tensor to add to the output activations. - up=1, # Integer upsampling factor. - down=1, # Integer downsampling factor. - padding=0, # Padding with respect to the upsampled image. - # Low-pass filter to apply when resampling activations. Must be prepared beforehand by calling upfirdn2d.setup_filter(). - resample_filter=None, - demodulate=True, # Apply weight demodulation? 
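    # Note (annotation): demodulation rescales each output channel by 1 / sqrt(sum((weight * styles)^2) + 1e-8); this is the dcoefs factor computed in the body below.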
- # False = convolution, True = correlation (matches torch.nn.functional.conv2d). - flip_weight=True, - # Perform modulation, convolution, and demodulation as a single fused operation? - fused_modconv=True, -): - batch_size = x.shape[0] - out_channels, in_channels, kh, kw = weight.shape - misc.assert_shape(weight, [out_channels, in_channels, kh, kw]) # [OIkk] - misc.assert_shape(x, [batch_size, in_channels, None, None]) # [NIHW] - misc.assert_shape(styles, [batch_size, in_channels]) # [NI] - - # Pre-normalize inputs to avoid FP16 overflow. - if x.dtype == torch.float16 and demodulate: - weight = weight * (1 / np.sqrt(in_channels * kh * kw) / - weight.norm(float('inf'), dim=[1, 2, 3], keepdim=True)) # max_Ikk - styles = styles / \ - styles.norm(float('inf'), dim=1, keepdim=True) # max_I - - # Calculate per-sample weights and demodulation coefficients. - w = None - dcoefs = None - if demodulate or fused_modconv: - w = weight.unsqueeze(0) # [NOIkk] - w = w * styles.reshape(batch_size, 1, -1, 1, 1) # [NOIkk] - if demodulate: - dcoefs = (w.square().sum(dim=[2, 3, 4]) + 1e-8).rsqrt() # [NO] - if demodulate and fused_modconv: - w = w * dcoefs.reshape(batch_size, -1, 1, 1, 1) # [NOIkk] - - # Execute by scaling the activations before and after the convolution. - if not fused_modconv: - x = x * styles.to(x.dtype).reshape(batch_size, -1, 1, 1) - x = conv2d_resample.conv2d_resample(x=x, w=weight.to( - x.dtype), f=resample_filter, up=up, down=down, padding=padding, flip_weight=flip_weight) - if demodulate and noise is not None: - x = fma.fma(x, dcoefs.to(x.dtype).reshape( - batch_size, -1, 1, 1), noise.to(x.dtype)) - elif demodulate: - x = x * dcoefs.to(x.dtype).reshape(batch_size, -1, 1, 1) - elif noise is not None: - x = x.add_(noise.to(x.dtype)) - return x - - # Execute as one fused op using grouped convolution. - with misc.suppress_tracer_warnings(): # this value will be treated as a constant - batch_size = int(batch_size) - misc.assert_shape(x, [batch_size, in_channels, None, None]) - x = x.reshape(1, -1, *x.shape[2:]) - w = w.reshape(-1, in_channels, kh, kw) - x = conv2d_resample.conv2d_resample(x=x, w=w.to( - x.dtype), f=resample_filter, up=up, down=down, padding=padding, groups=batch_size, flip_weight=flip_weight) - x = x.reshape(batch_size, -1, *x.shape[2:]) - if noise is not None: - x = x.add_(noise) - return x - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class FullyConnectedLayer(torch.nn.Module): - def __init__(self, - in_features, # Number of input features. - out_features, # Number of output features. - bias=True, # Apply additive bias before the activation function? - # Activation function: 'relu', 'lrelu', etc. - activation='linear', - lr_multiplier=1, # Learning rate multiplier. - bias_init=0, # Initial value for the additive bias. 
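                 # Note (annotation): weights are stored at unit scale and multiplied at runtime by weight_gain = lr_multiplier / sqrt(in_features), the equalized learning-rate scheme; see forward() below.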
- ): - super().__init__() - self.activation = activation - self.weight = torch.nn.Parameter(torch.randn( - [out_features, in_features]) / lr_multiplier) - self.bias = torch.nn.Parameter(torch.full( - [out_features], np.float32(bias_init))) if bias else None - self.weight_gain = lr_multiplier / np.sqrt(in_features) - self.bias_gain = lr_multiplier - - def forward(self, x): - w = self.weight.to(x.dtype) * self.weight_gain - b = self.bias - if b is not None: - b = b.to(x.dtype) - if self.bias_gain != 1: - b = b * self.bias_gain - - if self.activation == 'linear' and b is not None: - x = torch.addmm(b.unsqueeze(0), x, w.t()) - else: - x = x.matmul(w.t()) - x = bias_act.bias_act(x, b, act=self.activation) - return x - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class Conv2dLayer(torch.nn.Module): - def __init__(self, - in_channels, # Number of input channels. - out_channels, # Number of output channels. - # Width and height of the convolution kernel. - kernel_size, - bias=True, # Apply additive bias before the activation function? - # Activation function: 'relu', 'lrelu', etc. - activation='linear', - up=1, # Integer upsampling factor. - down=1, # Integer downsampling factor. - # Low-pass filter to apply when resampling activations. - resample_filter=[1, 3, 3, 1], - # Clamp the output to +-X, None = disable clamping. - conv_clamp=None, - channels_last=False, # Expect the input to have memory_format=channels_last? - trainable=True, # Update the weights of this layer during training? - ): - super().__init__() - self.activation = activation - self.up = up - self.down = down - self.conv_clamp = conv_clamp - self.register_buffer( - 'resample_filter', upfirdn2d.setup_filter(resample_filter)) - self.padding = kernel_size // 2 - self.weight_gain = 1 / np.sqrt(in_channels * (kernel_size ** 2)) - self.act_gain = bias_act.activation_funcs[activation].def_gain - - memory_format = torch.channels_last if channels_last else torch.contiguous_format - weight = torch.randn([out_channels, in_channels, kernel_size, kernel_size]).to( - memory_format=memory_format) - bias = torch.zeros([out_channels]) if bias else None - if trainable: - self.weight = torch.nn.Parameter(weight) - self.bias = torch.nn.Parameter(bias) if bias is not None else None - else: - self.register_buffer('weight', weight) - if bias is not None: - self.register_buffer('bias', bias) - else: - self.bias = None - - def forward(self, x, gain=1): - w = self.weight * self.weight_gain - b = self.bias.to(x.dtype) if self.bias is not None else None - flip_weight = (self.up == 1) # slightly faster - x = conv2d_resample.conv2d_resample(x=x, w=w.to( - x.dtype), f=self.resample_filter, up=self.up, down=self.down, padding=self.padding, flip_weight=flip_weight) - - act_gain = self.act_gain * gain - act_clamp = self.conv_clamp * gain if self.conv_clamp is not None else None - x = bias_act.bias_act(x, b, act=self.activation, - gain=act_gain, clamp=act_clamp) - return x - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class MappingNetwork(torch.nn.Module): - def __init__(self, - # Input latent (Z) dimensionality, 0 = no latent. - z_dim, - # Conditioning label (C) dimensionality, 0 = no label. - c_dim, - # Intermediate latent (W) dimensionality. - w_dim, - # Number of intermediate latents to output, None = do not broadcast. - num_ws, - num_layers=8, # Number of mapping layers. 
- # Label embedding dimensionality, None = same as w_dim. - embed_features=None, - # Number of intermediate features in the mapping layers, None = same as w_dim. - layer_features=None, - # Activation function: 'relu', 'lrelu', etc. - activation='lrelu', - # Learning rate multiplier for the mapping layers. - lr_multiplier=0.01, - # Decay for tracking the moving average of W during training, None = do not track. - w_avg_beta=0.995, - ): - super().__init__() - self.z_dim = z_dim - self.c_dim = c_dim - self.w_dim = w_dim - self.num_ws = num_ws - self.num_layers = num_layers - self.w_avg_beta = w_avg_beta - - if embed_features is None: - embed_features = w_dim - if c_dim == 0: - embed_features = 0 - if layer_features is None: - layer_features = w_dim - features_list = [z_dim + embed_features] + \ - [layer_features] * (num_layers - 1) + [w_dim] - - if c_dim > 0: - self.embed = FullyConnectedLayer(c_dim, embed_features) - for idx in range(num_layers): - in_features = features_list[idx] - out_features = features_list[idx + 1] - layer = FullyConnectedLayer( - in_features, out_features, activation=activation, lr_multiplier=lr_multiplier) - setattr(self, f'fc{idx}', layer) - - if num_ws is not None and w_avg_beta is not None: - self.register_buffer('w_avg', torch.zeros([w_dim])) - - def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, skip_w_avg_update=False): - # Embed, normalize, and concat inputs. - x = None - with torch.autograd.profiler.record_function('input'): - if self.z_dim > 0: - misc.assert_shape(z, [None, self.z_dim]) - x = normalize_2nd_moment(z.to(torch.float32)) - if self.c_dim > 0: - misc.assert_shape(c, [None, self.c_dim]) - y = normalize_2nd_moment(self.embed(c.to(torch.float32))) - x = torch.cat([x, y], dim=1) if x is not None else y - - # Main layers. - for idx in range(self.num_layers): - layer = getattr(self, f'fc{idx}') - x = layer(x) - - # Update moving average of W. - if self.w_avg_beta is not None and self.training and not skip_w_avg_update: - with torch.autograd.profiler.record_function('update_w_avg'): - self.w_avg.copy_(x.detach().mean( - dim=0).lerp(self.w_avg, self.w_avg_beta)) - - # Broadcast. - if self.num_ws is not None: - with torch.autograd.profiler.record_function('broadcast'): - x = x.unsqueeze(1).repeat([1, self.num_ws, 1]) - - # Apply truncation. - if truncation_psi != 1: - with torch.autograd.profiler.record_function('truncate'): - assert self.w_avg_beta is not None - if self.num_ws is None or truncation_cutoff is None: - x = self.w_avg.lerp(x, truncation_psi) - else: - x[:, :truncation_cutoff] = self.w_avg.lerp( - x[:, :truncation_cutoff], truncation_psi) - return x - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class SynthesisLayer(torch.nn.Module): - def __init__(self, - in_channels, # Number of input channels. - out_channels, # Number of output channels. - # Intermediate latent (W) dimensionality. - w_dim, - resolution, # Resolution of this layer. - kernel_size=3, # Convolution kernel size. - up=1, # Integer upsampling factor. - use_noise=True, # Enable noise input? - # Activation function: 'relu', 'lrelu', etc. - activation='lrelu', - # Low-pass filter to apply when resampling activations. - resample_filter=[1, 3, 3, 1], - # Clamp the output of convolution layers to +-X, None = disable clamping. - conv_clamp=None, - channels_last=False, # Use channels_last format for the weights? 
- square=False, # default if for rectangle images - ): - super().__init__() - self.resolution = resolution - self.up = up - self.use_noise = use_noise - self.activation = activation - self.conv_clamp = conv_clamp - self.register_buffer( - 'resample_filter', upfirdn2d.setup_filter(resample_filter)) - self.padding = kernel_size // 2 - self.act_gain = bias_act.activation_funcs[activation].def_gain - self.square = square - - self.affine = FullyConnectedLayer(w_dim, in_channels, bias_init=1) - memory_format = torch.channels_last if channels_last else torch.contiguous_format - self.weight = torch.nn.Parameter(torch.randn( - [out_channels, in_channels, kernel_size, kernel_size]).to(memory_format=memory_format)) - if use_noise: - if self.square: - self.register_buffer( - 'noise_const', torch.randn([resolution, resolution])) - else: - self.register_buffer('noise_const', torch.randn( - [resolution, resolution // 2])) - self.noise_strength = torch.nn.Parameter(torch.zeros([])) - self.bias = torch.nn.Parameter(torch.zeros([out_channels])) - - def forward(self, x, w, noise_mode='random', fused_modconv=True, gain=1): - assert noise_mode in ['random', 'const', 'none'] - in_resolution = self.resolution // self.up - if self.square: - misc.assert_shape( - x, [None, self.weight.shape[1], in_resolution, in_resolution]) - else: - misc.assert_shape( - x, [None, self.weight.shape[1], in_resolution, in_resolution // 2]) - styles = self.affine(w) - - noise = None - if self.use_noise and noise_mode == 'random': - if self.square: - noise = torch.randn( - [x.shape[0], 1, self.resolution, self.resolution], device=x.device) * self.noise_strength - else: - noise = torch.randn( - [x.shape[0], 1, self.resolution, self.resolution // 2], device=x.device) * self.noise_strength - if self.use_noise and noise_mode == 'const': - noise = self.noise_const * self.noise_strength - - flip_weight = (self.up == 1) # slightly faster - x = modulated_conv2d(x=x, weight=self.weight, styles=styles, noise=noise, up=self.up, - padding=self.padding, resample_filter=self.resample_filter, flip_weight=flip_weight, fused_modconv=fused_modconv) - - act_gain = self.act_gain * gain - act_clamp = self.conv_clamp * gain if self.conv_clamp is not None else None - x = bias_act.bias_act(x, self.bias.to( - x.dtype), act=self.activation, gain=act_gain, clamp=act_clamp) - return x - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class ToRGBLayer(torch.nn.Module): - def __init__(self, in_channels, out_channels, w_dim, kernel_size=1, conv_clamp=None, channels_last=False): - super().__init__() - self.conv_clamp = conv_clamp - self.affine = FullyConnectedLayer(w_dim, in_channels, bias_init=1) - memory_format = torch.channels_last if channels_last else torch.contiguous_format - self.weight = torch.nn.Parameter(torch.randn( - [out_channels, in_channels, kernel_size, kernel_size]).to(memory_format=memory_format)) - self.bias = torch.nn.Parameter(torch.zeros([out_channels])) - self.weight_gain = 1 / np.sqrt(in_channels * (kernel_size ** 2)) - - def forward(self, x, w, fused_modconv=True): - styles = self.affine(w) * self.weight_gain - x = modulated_conv2d(x=x, weight=self.weight, styles=styles, - demodulate=False, fused_modconv=fused_modconv) - x = bias_act.bias_act(x, self.bias.to(x.dtype), clamp=self.conv_clamp) - return x - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class SynthesisBlock(torch.nn.Module): - def 
__init__(self, - # Number of input channels, 0 = first block. - in_channels, - # Number of output channels. - out_channels, - # Intermediate latent (W) dimensionality. - w_dim, - # Resolution of this block. - resolution, - # Number of output color channels. - img_channels, - is_last, # Is this the last block? - # Architecture: 'orig', 'skip', 'resnet'. - architecture='skip', - # Low-pass filter to apply when resampling activations. - resample_filter=[1, 3, 3, 1], - # Clamp the output of convolution layers to +-X, None = disable clamping. - conv_clamp=None, - use_fp16=False, # Use FP16 for this block? - fp16_channels_last=False, # Use channels-last memory format with FP16? - square=False, # default is for rectangle images - # Arguments for SynthesisLayer. - **layer_kwargs, - ): - assert architecture in ['orig', 'skip', 'resnet'] - super().__init__() - self.in_channels = in_channels - self.w_dim = w_dim - self.resolution = resolution - self.img_channels = img_channels - self.is_last = is_last - self.architecture = architecture - self.use_fp16 = use_fp16 - self.channels_last = (use_fp16 and fp16_channels_last) - self.register_buffer( - 'resample_filter', upfirdn2d.setup_filter(resample_filter)) - self.num_conv = 0 - self.num_torgb = 0 - self.square = square - - if in_channels == 0: - if self.square: - self.const = torch.nn.Parameter(torch.randn( - [out_channels, resolution, resolution])) - else: # rectangle - self.const = torch.nn.Parameter(torch.randn( - [out_channels, resolution, resolution // 2])) - - if in_channels != 0: - self.conv0 = SynthesisLayer(in_channels, out_channels, w_dim=w_dim, resolution=resolution, up=2, - resample_filter=resample_filter, conv_clamp=conv_clamp, channels_last=self.channels_last, square=square, **layer_kwargs) - self.num_conv += 1 - - self.conv1 = SynthesisLayer(out_channels, out_channels, w_dim=w_dim, resolution=resolution, - conv_clamp=conv_clamp, channels_last=self.channels_last, square=square, **layer_kwargs) - self.num_conv += 1 - - if is_last or architecture == 'skip': - self.torgb = ToRGBLayer(out_channels, img_channels, w_dim=w_dim, - conv_clamp=conv_clamp, channels_last=self.channels_last) - self.num_torgb += 1 - - if in_channels != 0 and architecture == 'resnet': - self.skip = Conv2dLayer(in_channels, out_channels, kernel_size=1, bias=False, up=2, - resample_filter=resample_filter, channels_last=self.channels_last) - - def forward(self, x, img, ws, force_fp32=False, fused_modconv=None, **layer_kwargs): - misc.assert_shape( - ws, [None, self.num_conv + self.num_torgb, self.w_dim]) - w_iter = iter(ws.unbind(dim=1)) - dtype = torch.float16 if self.use_fp16 and not force_fp32 else torch.float32 - memory_format = torch.channels_last if self.channels_last and not force_fp32 else torch.contiguous_format - if fused_modconv is None: - with misc.suppress_tracer_warnings(): # this value will be treated as a constant - fused_modconv = (not self.training) and ( - dtype == torch.float32 or int(x.shape[0]) == 1) - - # Input. - if self.in_channels == 0: - x = self.const.to(dtype=dtype, memory_format=memory_format) - x = x.unsqueeze(0).repeat([ws.shape[0], 1, 1, 1]) - else: - if self.square: - misc.assert_shape( - x, [None, self.in_channels, self.resolution // 2, self.resolution // 2]) - else: # rectangle - misc.assert_shape( - x, [None, self.in_channels, self.resolution // 2, self.resolution // 4]) - x = x.to(dtype=dtype, memory_format=memory_format) - - # Main layers. 
- if self.in_channels == 0: - x = self.conv1(x, next(w_iter), - fused_modconv=fused_modconv, **layer_kwargs) - elif self.architecture == 'resnet': - y = self.skip(x, gain=np.sqrt(0.5)) - x = self.conv0(x, next(w_iter), - fused_modconv=fused_modconv, **layer_kwargs) - x = self.conv1(x, next(w_iter), fused_modconv=fused_modconv, - gain=np.sqrt(0.5), **layer_kwargs) - x = y.add_(x) - else: - x = self.conv0(x, next(w_iter), - fused_modconv=fused_modconv, **layer_kwargs) - x = self.conv1(x, next(w_iter), - fused_modconv=fused_modconv, **layer_kwargs) - - # ToRGB. - if img is not None: - if self.square: - misc.assert_shape( - img, [None, self.img_channels, self.resolution // 2, self.resolution // 2]) - else: - misc.assert_shape( - img, [None, self.img_channels, self.resolution // 2, self.resolution // 4]) - img = upfirdn2d.upsample2d(img, self.resample_filter) - if self.is_last or self.architecture == 'skip': - y = self.torgb(x, next(w_iter), fused_modconv=fused_modconv) - y = y.to(dtype=torch.float32, - memory_format=torch.contiguous_format) - img = img.add_(y) if img is not None else y - - assert x.dtype == dtype - assert img is None or img.dtype == torch.float32 - return x, img - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class SynthesisNetwork(torch.nn.Module): - def __init__(self, - # Intermediate latent (W) dimensionality. - w_dim, - img_resolution, # Output image resolution. - img_channels, # Number of color channels. - square, - # Overall multiplier for the number of channels. - channel_base=32768, - # Maximum number of channels in any layer. - channel_max=512, - # Use FP16 for the N highest resolutions. - num_fp16_res=0, - **block_kwargs, # Arguments for SynthesisBlock. - ): - assert img_resolution >= 4 and img_resolution & ( - img_resolution - 1) == 0 - super().__init__() - self.w_dim = w_dim - self.img_resolution = img_resolution - self.img_resolution_log2 = int(np.log2(img_resolution)) - self.img_channels = img_channels - self.square = square - self.block_resolutions = [ - 2 ** i for i in range(2, self.img_resolution_log2 + 1)] - channels_dict = {res: min(channel_base // res, channel_max) - for res in self.block_resolutions} - fp16_resolution = max( - 2 ** (self.img_resolution_log2 + 1 - num_fp16_res), 8) - - self.num_ws = 0 - for res in self.block_resolutions: - in_channels = channels_dict[res // 2] if res > 4 else 0 - out_channels = channels_dict[res] - use_fp16 = (res >= fp16_resolution) - is_last = (res == self.img_resolution) - block = SynthesisBlock(in_channels, out_channels, w_dim=w_dim, resolution=res, - img_channels=img_channels, is_last=is_last, use_fp16=use_fp16, square=square, **block_kwargs) - self.num_ws += block.num_conv - if is_last: - self.num_ws += block.num_torgb - setattr(self, f'b{res}', block) - - def forward(self, ws, return_feature=False, **block_kwargs): - block_ws = [] - features = [] - with torch.autograd.profiler.record_function('split_ws'): - misc.assert_shape(ws, [None, self.num_ws, self.w_dim]) - ws = ws.to(torch.float32) - w_idx = 0 - for res in self.block_resolutions: - block = getattr(self, f'b{res}') - block_ws.append( - ws.narrow(1, w_idx, block.num_conv + block.num_torgb)) - w_idx += block.num_conv - - x = img = None - for res, cur_ws in zip(self.block_resolutions, block_ws): - block = getattr(self, f'b{res}') - x, img = block(x, img, cur_ws, **block_kwargs) - features.append(x) - if return_feature: - return img, features - else: - return img - -# 
---------------------------------------------------------------------------- - - -@persistence.persistent_class -class Generator(torch.nn.Module): - def __init__(self, - z_dim, # Input latent (Z) dimensionality. - # Conditioning label (C) dimensionality. - c_dim, - # Intermediate latent (W) dimensionality. - w_dim, - img_resolution, # Output resolution. - square, - img_channels, # Number of output color channels. - mapping_kwargs={}, # Arguments for MappingNetwork. - synthesis_kwargs={}, # Arguments for SynthesisNetwork. - padding=False - ): - super().__init__() - self.z_dim = z_dim - self.c_dim = c_dim - self.w_dim = w_dim - self.square = square - self.img_resolution = img_resolution - self.img_channels = img_channels - self.padding = padding - self.synthesis = SynthesisNetwork( - w_dim=w_dim, img_resolution=img_resolution, img_channels=img_channels, square=square, **synthesis_kwargs) - self.num_ws = self.synthesis.num_ws - self.mapping = MappingNetwork( - z_dim=z_dim, c_dim=c_dim, w_dim=w_dim, num_ws=self.num_ws, **mapping_kwargs) - - def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, input_is_w=False, return_feature=False, **synthesis_kwargs): - if input_is_w: - ws = z - if ws.dim() == 2: - ws = ws.unsqueeze(1).repeat([1, self.mapping.num_ws, 1]) - else: - ws = self.mapping(z, c, truncation_psi=truncation_psi, - truncation_cutoff=truncation_cutoff) - img = self.synthesis( - ws, return_feature=return_feature, **synthesis_kwargs) - if return_feature: - img, feature = img - if self.padding: - pad = (img.size(2) - img.size(3)) // 2 - img = torch.nn.functional.pad(img, (pad, pad), "constant", 1) - if return_feature: - for i, feat in enumerate(feature): - pad = (feat.size(2) - feat.size(3)) // 2 - feature[i] = torch.nn.functional.pad( - feat, (pad, pad), "constant", 0) - if return_feature: - return img, feature - else: - return img - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class DiscriminatorBlock(torch.nn.Module): - def __init__(self, - # Number of input channels, 0 = first block. - in_channels, - # Number of intermediate channels. - tmp_channels, - # Number of output channels. - out_channels, - # Resolution of this block. - resolution, - # Number of input color channels. - img_channels, - # Index of the first layer. - first_layer_idx, - # Architecture: 'orig', 'skip', 'resnet'. - architecture='resnet', - # Activation function: 'relu', 'lrelu', etc. - activation='lrelu', - # Low-pass filter to apply when resampling activations. - resample_filter=[1, 3, 3, 1], - # Clamp the output of convolution layers to +-X, None = disable clamping. - conv_clamp=None, - use_fp16=False, # Use FP16 for this block? - fp16_channels_last=False, # Use channels-last memory format with FP16? - # Freeze-D: Number of layers to freeze. 
- freeze_layers=0, - square=False, - ): - assert in_channels in [0, tmp_channels] - assert architecture in ['orig', 'skip', 'resnet'] - super().__init__() - self.in_channels = in_channels - self.resolution = resolution - self.img_channels = img_channels - self.first_layer_idx = first_layer_idx - self.architecture = architecture - self.use_fp16 = use_fp16 - self.channels_last = (use_fp16 and fp16_channels_last) - self.register_buffer( - 'resample_filter', upfirdn2d.setup_filter(resample_filter)) - self.square = square - - self.num_layers = 0 - - def trainable_gen(): - while True: - layer_idx = self.first_layer_idx + self.num_layers - trainable = (layer_idx >= freeze_layers) - self.num_layers += 1 - yield trainable - trainable_iter = trainable_gen() - - if in_channels == 0 or architecture == 'skip': - self.fromrgb = Conv2dLayer(img_channels, tmp_channels, kernel_size=1, activation=activation, - trainable=next(trainable_iter), conv_clamp=conv_clamp, channels_last=self.channels_last) - - self.conv0 = Conv2dLayer(tmp_channels, tmp_channels, kernel_size=3, activation=activation, - trainable=next(trainable_iter), conv_clamp=conv_clamp, channels_last=self.channels_last) - - self.conv1 = Conv2dLayer(tmp_channels, out_channels, kernel_size=3, activation=activation, down=2, - trainable=next(trainable_iter), resample_filter=resample_filter, conv_clamp=conv_clamp, channels_last=self.channels_last) - - if architecture == 'resnet': - self.skip = Conv2dLayer(tmp_channels, out_channels, kernel_size=1, bias=False, down=2, - trainable=next(trainable_iter), resample_filter=resample_filter, channels_last=self.channels_last) - - def forward(self, x, img, force_fp32=False): - dtype = torch.float16 if self.use_fp16 and not force_fp32 else torch.float32 - memory_format = torch.channels_last if self.channels_last and not force_fp32 else torch.contiguous_format - - # Input. - if x is not None: - if self.square: - misc.assert_shape( - x, [None, self.in_channels, self.resolution, self.resolution]) - else: - misc.assert_shape( - x, [None, self.in_channels, self.resolution, self.resolution // 2]) - x = x.to(dtype=dtype, memory_format=memory_format) - - # FromRGB. - if self.in_channels == 0 or self.architecture == 'skip': - if self.square: - misc.assert_shape( - img, [None, self.img_channels, self.resolution, self.resolution]) - else: - misc.assert_shape( - img, [None, self.img_channels, self.resolution, self.resolution // 2]) - img = img.to(dtype=dtype, memory_format=memory_format) - y = self.fromrgb(img) - x = x + y if x is not None else y - img = upfirdn2d.downsample2d( - img, self.resample_filter) if self.architecture == 'skip' else None - - # Main layers. 
- if self.architecture == 'resnet': - y = self.skip(x, gain=np.sqrt(0.5)) - x = self.conv0(x) - x = self.conv1(x, gain=np.sqrt(0.5)) - x = y.add_(x) - else: - x = self.conv0(x) - x = self.conv1(x) - - assert x.dtype == dtype - return x, img - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class MinibatchStdLayer(torch.nn.Module): - def __init__(self, group_size, num_channels=1): - super().__init__() - self.group_size = group_size - self.num_channels = num_channels - - def forward(self, x): - N, C, H, W = x.shape - with misc.suppress_tracer_warnings(): # as_tensor results are registered as constants - G = torch.min(torch.as_tensor(self.group_size), torch.as_tensor( - N)) if self.group_size is not None else N - F = self.num_channels - c = C // F - - # [GnFcHW] Split minibatch N into n groups of size G, and channels C into F groups of size c. - y = x.reshape(G, -1, F, c, H, W) - # [GnFcHW] Subtract mean over group. - y = y - y.mean(dim=0) - # [nFcHW] Calc variance over group. - y = y.square().mean(dim=0) - y = (y + 1e-8).sqrt() # [nFcHW] Calc stddev over group. - # [nF] Take average over channels and pixels. - y = y.mean(dim=[2, 3, 4]) - y = y.reshape(-1, F, 1, 1) # [nF11] Add missing dimensions. - # [NFHW] Replicate over group and pixels. - y = y.repeat(G, 1, H, W) - # [NCHW] Append to input as new channels. - x = torch.cat([x, y], dim=1) - return x - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class DiscriminatorEpilogue(torch.nn.Module): - def __init__(self, - in_channels, # Number of input channels. - # Dimensionality of mapped conditioning label, 0 = no label. - cmap_dim, - resolution, # Resolution of this block. - # Number of input color channels. - img_channels, - # Architecture: 'orig', 'skip', 'resnet'. - architecture='resnet', - # Group size for the minibatch standard deviation layer, None = entire minibatch. - mbstd_group_size=4, - # Number of features for the minibatch standard deviation layer, 0 = disable. - mbstd_num_channels=1, - # Activation function: 'relu', 'lrelu', etc. - activation='lrelu', - # Clamp the output of convolution layers to +-X, None = disable clamping. 
- conv_clamp=None, - square=False, - ): - assert architecture in ['orig', 'skip', 'resnet'] - super().__init__() - self.in_channels = in_channels - self.cmap_dim = cmap_dim - self.resolution = resolution - self.img_channels = img_channels - self.architecture = architecture - self.square = square - - if architecture == 'skip': - self.fromrgb = Conv2dLayer( - img_channels, in_channels, kernel_size=1, activation=activation) - self.mbstd = MinibatchStdLayer( - group_size=mbstd_group_size, num_channels=mbstd_num_channels) if mbstd_num_channels > 0 else None - self.conv = Conv2dLayer(in_channels + mbstd_num_channels, in_channels, - kernel_size=3, activation=activation, conv_clamp=conv_clamp) - - if self.square: - self.fc = FullyConnectedLayer( - in_channels * (resolution ** 2), in_channels, activation=activation) - else: - self.fc = FullyConnectedLayer( - in_channels * (resolution ** 2 // 2), in_channels, activation=activation) - - self.out = FullyConnectedLayer( - in_channels, 1 if cmap_dim == 0 else cmap_dim) - - def forward(self, x, img, cmap, force_fp32=False): - if self.square: - misc.assert_shape(x, [None, self.in_channels, - self.resolution, self.resolution]) - else: - misc.assert_shape( - x, [None, self.in_channels, self.resolution, self.resolution // 2]) # [NCHW] - _ = force_fp32 # unused - dtype = torch.float32 - memory_format = torch.contiguous_format - - # FromRGB. - x = x.to(dtype=dtype, memory_format=memory_format) - if self.architecture == 'skip': - if self.square: - misc.assert_shape( - img, [None, self.img_channels, self.resolution, self.resolution]) - else: - misc.assert_shape( - img, [None, self.img_channels, self.resolution, self.resolution // 2]) - img = img.to(dtype=dtype, memory_format=memory_format) - x = x + self.fromrgb(img) - - # Main layers. - if self.mbstd is not None: - x = self.mbstd(x) - x = self.conv(x) - x = self.fc(x.flatten(1)) - x = self.out(x) - - # Conditioning. - if self.cmap_dim > 0: - misc.assert_shape(cmap, [None, self.cmap_dim]) - x = (x * cmap).sum(dim=1, keepdim=True) * \ - (1 / np.sqrt(self.cmap_dim)) - - assert x.dtype == dtype - return x - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class Discriminator(torch.nn.Module): - def __init__(self, - # Conditioning label (C) dimensionality. - c_dim, - img_resolution, # Input resolution. - # Number of input color channels. - img_channels, - # Architecture: 'orig', 'skip', 'resnet'. - architecture='resnet', - # Overall multiplier for the number of channels. - channel_base=32768, - # Maximum number of channels in any layer. - channel_max=512, - # Use FP16 for the N highest resolutions. - num_fp16_res=0, - # Clamp the output of convolution layers to +-X, None = disable clamping. - conv_clamp=None, - # Dimensionality of mapped conditioning label, None = default. - cmap_dim=None, - square=False, # default for rectangle images - block_kwargs={}, # Arguments for DiscriminatorBlock. - mapping_kwargs={}, # Arguments for MappingNetwork. - # Arguments for DiscriminatorEpilogue. 
- epilogue_kwargs={}, - ): - super().__init__() - self.c_dim = c_dim - self.img_resolution = img_resolution - self.img_resolution_log2 = int(np.log2(img_resolution)) - self.img_channels = img_channels - self.square = square - self.block_resolutions = [ - 2 ** i for i in range(self.img_resolution_log2, 2, -1)] - channels_dict = {res: min(channel_base // res, channel_max) - for res in self.block_resolutions + [4]} - fp16_resolution = max( - 2 ** (self.img_resolution_log2 + 1 - num_fp16_res), 8) - - if cmap_dim is None: - cmap_dim = channels_dict[4] - if c_dim == 0: - cmap_dim = 0 - - common_kwargs = dict(img_channels=img_channels, - architecture=architecture, conv_clamp=conv_clamp) - cur_layer_idx = 0 - for res in self.block_resolutions: - in_channels = channels_dict[res] if res < img_resolution else 0 - tmp_channels = channels_dict[res] - out_channels = channels_dict[res // 2] - use_fp16 = (res >= fp16_resolution) - block = DiscriminatorBlock(in_channels, tmp_channels, out_channels, resolution=res, - first_layer_idx=cur_layer_idx, use_fp16=use_fp16, square=square, **block_kwargs, **common_kwargs) - setattr(self, f'b{res}', block) - cur_layer_idx += block.num_layers - if c_dim > 0: - self.mapping = MappingNetwork( - z_dim=0, c_dim=c_dim, w_dim=cmap_dim, num_ws=None, w_avg_beta=None, **mapping_kwargs) - self.b4 = DiscriminatorEpilogue( - channels_dict[4], cmap_dim=cmap_dim, resolution=4, square=square, **epilogue_kwargs, **common_kwargs) - - def forward(self, img, c, **block_kwargs): - x = None - for res in self.block_resolutions: - block = getattr(self, f'b{res}') - x, img = block(x, img, **block_kwargs) - - cmap = None - if self.c_dim > 0: - cmap = self.mapping(None, c) - x = self.b4(x, img, cmap) - return x - -# ---------------------------------------------------------------------------- diff --git a/spaces/Amrrs/openai-whisper-live-transcribe/app.py b/spaces/Amrrs/openai-whisper-live-transcribe/app.py deleted file mode 100644 index 89966c9fd3ea7fac0d7668d97fda3919b2e676d2..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/openai-whisper-live-transcribe/app.py +++ /dev/null @@ -1,36 +0,0 @@ -import whisper -import gradio as gr - -model = whisper.load_model("small") - -def transcribe(audio): - - #time.sleep(3) - # load audio and pad/trim it to fit 30 seconds - audio = whisper.load_audio(audio) - audio = whisper.pad_or_trim(audio) - - # make log-Mel spectrogram and move to the same device as the model - mel = whisper.log_mel_spectrogram(audio).to(model.device) - - # detect the spoken language - _, probs = model.detect_language(mel) - print(f"Detected language: {max(probs, key=probs.get)}") - - # decode the audio - options = whisper.DecodingOptions(fp16 = False) - result = whisper.decode(model, mel, options) - return result.text - - - -gr.Interface( - title = 'OpenAI Whisper ASR Gradio Web UI', - fn=transcribe, - inputs=[ - gr.inputs.Audio(source="microphone", type="filepath") - ], - outputs=[ - "textbox" - ], - live=True).launch() diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/paint_by_example.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/paint_by_example.md deleted file mode 100644 index ec7172060926649e66e678ed0dcbf04ca8781c0d..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/paint_by_example.md +++ /dev/null @@ -1,39 +0,0 @@ - - -# PaintByExample - -[Paint by Example: Exemplar-based Image Editing with 
Diffusion Models](https://huggingface.co/papers/2211.13227) is by Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, Fang Wen.
-
-The abstract from the paper is:
-
-*Language-guided image editing has achieved great success recently. In this paper, for the first time, we investigate exemplar-guided image editing for more precise control. We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, the naive approach will cause obvious fusing artifacts. We carefully analyze it and propose an information bottleneck and strong augmentations to avoid the trivial solution of directly copying and pasting the exemplar image. Meanwhile, to ensure the controllability of the editing process, we design an arbitrary shape mask for the exemplar image and leverage the classifier-free guidance to increase the similarity to the exemplar image. The whole framework involves a single forward of the diffusion model without any iterative optimization. We demonstrate that our method achieves an impressive performance and enables controllable editing on in-the-wild images with high fidelity.*
-
-The original codebase can be found at [Fantasy-Studio/Paint-by-Example](https://github.com/Fantasy-Studio/Paint-by-Example), and you can try it out in a [demo](https://huggingface.co/spaces/Fantasy-Studio/Paint-by-Example).
-
-## Tips
-
-PaintByExample is supported by the official [Fantasy-Studio/Paint-by-Example](https://huggingface.co/Fantasy-Studio/Paint-by-Example) checkpoint. The checkpoint is warm-started from [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4) to inpaint partly masked images conditioned on example and reference images.
-
-
-
-Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
-
-
-
-## PaintByExamplePipeline
-[[autodoc]] PaintByExamplePipeline
- - all
- - __call__
-
-## StableDiffusionPipelineOutput
-[[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/unidiffuser/modeling_text_decoder.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/unidiffuser/modeling_text_decoder.py
deleted file mode 100644
index 9b962f6e065621c8fc83775f555bbd732ccc8a26..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/unidiffuser/modeling_text_decoder.py
+++ /dev/null
@@ -1,296 +0,0 @@
-from typing import Optional
-
-import numpy as np
-import torch
-from torch import nn
-from transformers import GPT2Config, GPT2LMHeadModel
-from transformers.modeling_utils import ModuleUtilsMixin
-
-from ...configuration_utils import ConfigMixin, register_to_config
-from ...models import ModelMixin
-
-
-# Modified from ClipCaptionModel in https://github.com/thu-ml/unidiffuser/blob/main/libs/caption_decoder.py
-class UniDiffuserTextDecoder(ModelMixin, ConfigMixin, ModuleUtilsMixin):
- """
- Text decoder model for an image-text [UniDiffuser](https://arxiv.org/pdf/2303.06555.pdf) model. This is used to
- generate text from the UniDiffuser image-text embedding.
-
- Parameters:
- prefix_length (`int`):
- Max number of prefix tokens that will be supplied to the model.
- prefix_inner_dim (`int`):
- The hidden size of the incoming prefix embeddings. For UniDiffuser, this would be the hidden dim of the
- CLIP text encoder.
- prefix_hidden_dim (`int`, *optional*):
- Hidden dim of the MLP if we encode the prefix.
- vocab_size (`int`, *optional*, defaults to 50257):
- Vocabulary size of the GPT-2 model. Defines the number of different tokens that can be represented by the
- `input_ids` passed when calling [`GPT2Model`] or [`TFGPT2Model`].
- n_positions (`int`, *optional*, defaults to 1024):
- The maximum sequence length that this model might ever be used with. Typically set this to something large
- just in case (e.g., 512 or 1024 or 2048).
- n_embd (`int`, *optional*, defaults to 768):
- Dimensionality of the embeddings and hidden states.
- n_layer (`int`, *optional*, defaults to 12):
- Number of hidden layers in the Transformer encoder.
- n_head (`int`, *optional*, defaults to 12):
- Number of attention heads for each attention layer in the Transformer encoder.
- n_inner (`int`, *optional*, defaults to None):
- Dimensionality of the inner feed-forward layers. `None` will set it to 4 times `n_embd`.
- activation_function (`str`, *optional*, defaults to `"gelu_new"`):
- Activation function, to be selected in the list `["relu", "silu", "gelu", "tanh", "gelu_new"]`.
- resid_pdrop (`float`, *optional*, defaults to 0.1):
- The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- embd_pdrop (`float`, *optional*, defaults to 0.1):
- The dropout ratio for the embeddings.
- attn_pdrop (`float`, *optional*, defaults to 0.1):
- The dropout ratio for the attention.
- layer_norm_epsilon (`float`, *optional*, defaults to 1e-5):
- The epsilon to use in the layer normalization layers.
- initializer_range (`float`, *optional*, defaults to 0.02):
- The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- scale_attn_weights (`bool`, *optional*, defaults to `True`):
- Scale attention weights by dividing by sqrt(hidden_size).
- use_cache (`bool`, *optional*, defaults to `True`):
- Whether or not the model should return the last key/values attentions (not used by all models).
- scale_attn_by_inverse_layer_idx (`bool`, *optional*, defaults to `False`):
- Whether to additionally scale attention weights by `1 / layer_idx + 1`.
- reorder_and_upcast_attn (`bool`, *optional*, defaults to `False`):
- Whether to scale keys (K) prior to computing attention (dot-product) and upcast attention
- dot-product/softmax to float() when training with mixed precision.
- """ - - _keys_to_ignore_on_load_unexpected = [r"h\.\d+\.attn\.bias", r"h\.\d+\.attn\.masked_bias"] - - @register_to_config - def __init__( - self, - prefix_length: int, - prefix_inner_dim: int, - prefix_hidden_dim: Optional[int] = None, - vocab_size: int = 50257, # Start of GPT2 config args - n_positions: int = 1024, - n_embd: int = 768, - n_layer: int = 12, - n_head: int = 12, - n_inner: Optional[int] = None, - activation_function: str = "gelu_new", - resid_pdrop: float = 0.1, - embd_pdrop: float = 0.1, - attn_pdrop: float = 0.1, - layer_norm_epsilon: float = 1e-5, - initializer_range: float = 0.02, - scale_attn_weights: bool = True, - use_cache: bool = True, - scale_attn_by_inverse_layer_idx: bool = False, - reorder_and_upcast_attn: bool = False, - ): - super().__init__() - - self.prefix_length = prefix_length - - if prefix_inner_dim != n_embd and prefix_hidden_dim is None: - raise ValueError( - f"`prefix_hidden_dim` cannot be `None` when `prefix_inner_dim`: {prefix_hidden_dim} and" - f" `n_embd`: {n_embd} are not equal." - ) - - self.prefix_inner_dim = prefix_inner_dim - self.prefix_hidden_dim = prefix_hidden_dim - - self.encode_prefix = ( - nn.Linear(self.prefix_inner_dim, self.prefix_hidden_dim) - if self.prefix_hidden_dim is not None - else nn.Identity() - ) - self.decode_prefix = ( - nn.Linear(self.prefix_hidden_dim, n_embd) if self.prefix_hidden_dim is not None else nn.Identity() - ) - - gpt_config = GPT2Config( - vocab_size=vocab_size, - n_positions=n_positions, - n_embd=n_embd, - n_layer=n_layer, - n_head=n_head, - n_inner=n_inner, - activation_function=activation_function, - resid_pdrop=resid_pdrop, - embd_pdrop=embd_pdrop, - attn_pdrop=attn_pdrop, - layer_norm_epsilon=layer_norm_epsilon, - initializer_range=initializer_range, - scale_attn_weights=scale_attn_weights, - use_cache=use_cache, - scale_attn_by_inverse_layer_idx=scale_attn_by_inverse_layer_idx, - reorder_and_upcast_attn=reorder_and_upcast_attn, - ) - self.transformer = GPT2LMHeadModel(gpt_config) - - def forward( - self, - input_ids: torch.Tensor, - prefix_embeds: torch.Tensor, - attention_mask: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None, - ): - """ - Args: - input_ids (`torch.Tensor` of shape `(N, max_seq_len)`): - Text tokens to use for inference. - prefix_embeds (`torch.Tensor` of shape `(N, prefix_length, 768)`): - Prefix embedding to preprend to the embedded tokens. - attention_mask (`torch.Tensor` of shape `(N, prefix_length + max_seq_len, 768)`, *optional*): - Attention mask for the prefix embedding. - labels (`torch.Tensor`, *optional*): - Labels to use for language modeling. 
- """ - embedding_text = self.transformer.transformer.wte(input_ids) - hidden = self.encode_prefix(prefix_embeds) - prefix_embeds = self.decode_prefix(hidden) - embedding_cat = torch.cat((prefix_embeds, embedding_text), dim=1) - - if labels is not None: - dummy_token = self.get_dummy_token(input_ids.shape[0], input_ids.device) - labels = torch.cat((dummy_token, input_ids), dim=1) - out = self.transformer(inputs_embeds=embedding_cat, labels=labels, attention_mask=attention_mask) - if self.prefix_hidden_dim is not None: - return out, hidden - else: - return out - - def get_dummy_token(self, batch_size: int, device: torch.device) -> torch.Tensor: - return torch.zeros(batch_size, self.prefix_length, dtype=torch.int64, device=device) - - def encode(self, prefix): - return self.encode_prefix(prefix) - - @torch.no_grad() - def generate_captions(self, features, eos_token_id, device): - """ - Generate captions given text embedding features. Returns list[L]. - - Args: - features (`torch.Tensor` of shape `(B, L, D)`): - Text embedding features to generate captions from. - eos_token_id (`int`): - The token ID of the EOS token for the text decoder model. - device: - Device to perform text generation on. - - Returns: - `List[str]`: A list of strings generated from the decoder model. - """ - - features = torch.split(features, 1, dim=0) - generated_tokens = [] - generated_seq_lengths = [] - for feature in features: - feature = self.decode_prefix(feature.to(device)) # back to the clip feature - # Only support beam search for now - output_tokens, seq_lengths = self.generate_beam( - input_embeds=feature, device=device, eos_token_id=eos_token_id - ) - generated_tokens.append(output_tokens[0]) - generated_seq_lengths.append(seq_lengths[0]) - generated_tokens = torch.stack(generated_tokens) - generated_seq_lengths = torch.stack(generated_seq_lengths) - return generated_tokens, generated_seq_lengths - - @torch.no_grad() - def generate_beam( - self, - input_ids=None, - input_embeds=None, - device=None, - beam_size: int = 5, - entry_length: int = 67, - temperature: float = 1.0, - eos_token_id: Optional[int] = None, - ): - """ - Generates text using the given tokenizer and text prompt or token embedding via beam search. This - implementation is based on the beam search implementation from the [original UniDiffuser - code](https://github.com/thu-ml/unidiffuser/blob/main/libs/caption_decoder.py#L89). - - Args: - eos_token_id (`int`, *optional*): - The token ID of the EOS token for the text decoder model. - input_ids (`torch.LongTensor` of shape `(batch_size, input_ids_length)`, *optional*): - Tokenizer indices of input sequence tokens in the vocabulary. One of `input_ids` and `input_embeds` - must be supplied. - input_embeds (`torch.FloatTensor` of shape `(batch_size, seq_len, hidden_size)`, *optional*): - An embedded representation to directly pass to the transformer as a prefix for beam search. One of - `input_ids` and `input_embeds` must be supplied. - device: - The device to perform beam search on. - beam_size (`int`, *optional*, defaults to `5`): - The number of best states to store during beam search. - entry_length (`int`, *optional*, defaults to `67`): - The number of iterations to run beam search. - temperature (`float`, *optional*, defaults to 1.0): - The temperature to use when performing the softmax over logits from the decoding model. 
- - Returns: - `Tuple(torch.Tensor, torch.Tensor)`: A tuple of tensors where the first element is a tensor of generated - token sequences sorted by score in descending order, and the second element is the sequence lengths - corresponding to those sequences. - """ - # Generates text until stop_token is reached using beam search with the desired beam size. - stop_token_index = eos_token_id - tokens = None - scores = None - seq_lengths = torch.ones(beam_size, device=device, dtype=torch.int) - is_stopped = torch.zeros(beam_size, device=device, dtype=torch.bool) - - if input_embeds is not None: - generated = input_embeds - else: - generated = self.transformer.transformer.wte(input_ids) - - for i in range(entry_length): - outputs = self.transformer(inputs_embeds=generated) - logits = outputs.logits - logits = logits[:, -1, :] / (temperature if temperature > 0 else 1.0) - logits = logits.softmax(-1).log() - - if scores is None: - scores, next_tokens = logits.topk(beam_size, -1) - generated = generated.expand(beam_size, *generated.shape[1:]) - next_tokens, scores = next_tokens.permute(1, 0), scores.squeeze(0) - if tokens is None: - tokens = next_tokens - else: - tokens = tokens.expand(beam_size, *tokens.shape[1:]) - tokens = torch.cat((tokens, next_tokens), dim=1) - else: - logits[is_stopped] = -float(np.inf) - logits[is_stopped, 0] = 0 - scores_sum = scores[:, None] + logits - seq_lengths[~is_stopped] += 1 - scores_sum_average = scores_sum / seq_lengths[:, None] - scores_sum_average, next_tokens = scores_sum_average.view(-1).topk(beam_size, -1) - next_tokens_source = next_tokens // scores_sum.shape[1] - seq_lengths = seq_lengths[next_tokens_source] - next_tokens = next_tokens % scores_sum.shape[1] - next_tokens = next_tokens.unsqueeze(1) - tokens = tokens[next_tokens_source] - tokens = torch.cat((tokens, next_tokens), dim=1) - generated = generated[next_tokens_source] - scores = scores_sum_average * seq_lengths - is_stopped = is_stopped[next_tokens_source] - - next_token_embed = self.transformer.transformer.wte(next_tokens.squeeze()).view(generated.shape[0], 1, -1) - generated = torch.cat((generated, next_token_embed), dim=1) - is_stopped = is_stopped + next_tokens.eq(stop_token_index).squeeze() - if is_stopped.all(): - break - - scores = scores / seq_lengths - order = scores.argsort(descending=True) - # tokens tensors are already padded to max_seq_length - output_texts = [tokens[i] for i in order] - output_texts = torch.stack(output_texts, dim=0) - seq_lengths = torch.tensor([seq_lengths[i] for i in order], dtype=seq_lengths.dtype) - return output_texts, seq_lengths diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/deprecation_utils.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/deprecation_utils.py deleted file mode 100644 index f482deddd2f46b8d2e29d5229faa0e9a21f2fd98..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/deprecation_utils.py +++ /dev/null @@ -1,49 +0,0 @@ -import inspect -import warnings -from typing import Any, Dict, Optional, Union - -from packaging import version - - -def deprecate(*args, take_from: Optional[Union[Dict, Any]] = None, standard_warn=True, stacklevel=2): - from .. 
import __version__ - - deprecated_kwargs = take_from - values = () - if not isinstance(args[0], tuple): - args = (args,) - - for attribute, version_name, message in args: - if version.parse(version.parse(__version__).base_version) >= version.parse(version_name): - raise ValueError( - f"The deprecation tuple {(attribute, version_name, message)} should be removed since diffusers'" - f" version {__version__} is >= {version_name}" - ) - - warning = None - if isinstance(deprecated_kwargs, dict) and attribute in deprecated_kwargs: - values += (deprecated_kwargs.pop(attribute),) - warning = f"The `{attribute}` argument is deprecated and will be removed in version {version_name}." - elif hasattr(deprecated_kwargs, attribute): - values += (getattr(deprecated_kwargs, attribute),) - warning = f"The `{attribute}` attribute is deprecated and will be removed in version {version_name}." - elif deprecated_kwargs is None: - warning = f"`{attribute}` is deprecated and will be removed in version {version_name}." - - if warning is not None: - warning = warning + " " if standard_warn else "" - warnings.warn(warning + message, FutureWarning, stacklevel=stacklevel) - - if isinstance(deprecated_kwargs, dict) and len(deprecated_kwargs) > 0: - call_frame = inspect.getouterframes(inspect.currentframe())[1] - filename = call_frame.filename - line_number = call_frame.lineno - function = call_frame.function - key, value = next(iter(deprecated_kwargs.items())) - raise TypeError(f"{function} in {filename} line {line_number-1} got an unexpected keyword argument `{key}`") - - if len(values) == 0: - return - elif len(values) == 1: - return values[0] - return values diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_lora_layers.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_lora_layers.py deleted file mode 100644 index 000748312fca1053a22f2178275b52a5dce310fe..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_lora_layers.py +++ /dev/null @@ -1,841 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-import os -import tempfile -import unittest - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from huggingface_hub.repocard import RepoCard -from transformers import CLIPTextConfig, CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer - -from diffusers import ( - AutoencoderKL, - DDIMScheduler, - EulerDiscreteScheduler, - StableDiffusionPipeline, - StableDiffusionXLPipeline, - UNet2DConditionModel, -) -from diffusers.loaders import AttnProcsLayers, LoraLoaderMixin, PatchedLoraProjection, text_encoder_attn_modules -from diffusers.models.attention_processor import ( - Attention, - AttnProcessor, - AttnProcessor2_0, - LoRAAttnProcessor, - LoRAAttnProcessor2_0, - LoRAXFormersAttnProcessor, - XFormersAttnProcessor, -) -from diffusers.utils import floats_tensor, torch_device -from diffusers.utils.testing_utils import require_torch_gpu, slow - - -def create_unet_lora_layers(unet: nn.Module): - lora_attn_procs = {} - for name in unet.attn_processors.keys(): - cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim - if name.startswith("mid_block"): - hidden_size = unet.config.block_out_channels[-1] - elif name.startswith("up_blocks"): - block_id = int(name[len("up_blocks.")]) - hidden_size = list(reversed(unet.config.block_out_channels))[block_id] - elif name.startswith("down_blocks"): - block_id = int(name[len("down_blocks.")]) - hidden_size = unet.config.block_out_channels[block_id] - lora_attn_processor_class = ( - LoRAAttnProcessor2_0 if hasattr(F, "scaled_dot_product_attention") else LoRAAttnProcessor - ) - lora_attn_procs[name] = lora_attn_processor_class( - hidden_size=hidden_size, cross_attention_dim=cross_attention_dim - ) - unet_lora_layers = AttnProcsLayers(lora_attn_procs) - return lora_attn_procs, unet_lora_layers - - -def create_text_encoder_lora_attn_procs(text_encoder: nn.Module): - text_lora_attn_procs = {} - lora_attn_processor_class = ( - LoRAAttnProcessor2_0 if hasattr(F, "scaled_dot_product_attention") else LoRAAttnProcessor - ) - for name, module in text_encoder_attn_modules(text_encoder): - if isinstance(module.out_proj, nn.Linear): - out_features = module.out_proj.out_features - elif isinstance(module.out_proj, PatchedLoraProjection): - out_features = module.out_proj.regular_linear_layer.out_features - else: - assert False, module.out_proj.__class__ - - text_lora_attn_procs[name] = lora_attn_processor_class(hidden_size=out_features, cross_attention_dim=None) - return text_lora_attn_procs - - -def create_text_encoder_lora_layers(text_encoder: nn.Module): - text_lora_attn_procs = create_text_encoder_lora_attn_procs(text_encoder) - text_encoder_lora_layers = AttnProcsLayers(text_lora_attn_procs) - return text_encoder_lora_layers - - -def set_lora_weights(lora_attn_parameters, randn_weight=False): - with torch.no_grad(): - for parameter in lora_attn_parameters: - if randn_weight: - parameter[:] = torch.randn_like(parameter) - else: - torch.zero_(parameter) - - -class LoraLoaderMixinTests(unittest.TestCase): - def get_dummy_components(self): - torch.manual_seed(0) - unet = UNet2DConditionModel( - block_out_channels=(32, 64), - layers_per_block=2, - sample_size=32, - in_channels=4, - out_channels=4, - down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), - up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), - cross_attention_dim=32, - ) - scheduler = DDIMScheduler( - beta_start=0.00085, - beta_end=0.012, - beta_schedule="scaled_linear", - clip_sample=False, - set_alpha_to_one=False, - 
steps_offset=1, - ) - torch.manual_seed(0) - vae = AutoencoderKL( - block_out_channels=[32, 64], - in_channels=3, - out_channels=3, - down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], - up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], - latent_channels=4, - ) - text_encoder_config = CLIPTextConfig( - bos_token_id=0, - eos_token_id=2, - hidden_size=32, - intermediate_size=37, - layer_norm_eps=1e-05, - num_attention_heads=4, - num_hidden_layers=5, - pad_token_id=1, - vocab_size=1000, - ) - text_encoder = CLIPTextModel(text_encoder_config) - tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") - - unet_lora_attn_procs, unet_lora_layers = create_unet_lora_layers(unet) - text_encoder_lora_layers = create_text_encoder_lora_layers(text_encoder) - - pipeline_components = { - "unet": unet, - "scheduler": scheduler, - "vae": vae, - "text_encoder": text_encoder, - "tokenizer": tokenizer, - "safety_checker": None, - "feature_extractor": None, - } - lora_components = { - "unet_lora_layers": unet_lora_layers, - "text_encoder_lora_layers": text_encoder_lora_layers, - "unet_lora_attn_procs": unet_lora_attn_procs, - } - return pipeline_components, lora_components - - def get_dummy_inputs(self, with_generator=True): - batch_size = 1 - sequence_length = 10 - num_channels = 4 - sizes = (32, 32) - - generator = torch.manual_seed(0) - noise = floats_tensor((batch_size, num_channels) + sizes) - input_ids = torch.randint(1, sequence_length, size=(batch_size, sequence_length), generator=generator) - - pipeline_inputs = { - "prompt": "A painting of a squirrel eating a burger", - "num_inference_steps": 2, - "guidance_scale": 6.0, - "output_type": "np", - } - if with_generator: - pipeline_inputs.update({"generator": generator}) - - return noise, input_ids, pipeline_inputs - - # copied from: https://colab.research.google.com/gist/sayakpaul/df2ef6e1ae6d8c10a49d859883b10860/scratchpad.ipynb - def get_dummy_tokens(self): - max_seq_length = 77 - - inputs = torch.randint(2, 56, size=(1, max_seq_length), generator=torch.manual_seed(0)) - - prepared_inputs = {} - prepared_inputs["input_ids"] = inputs - return prepared_inputs - - def create_lora_weight_file(self, tmpdirname): - _, lora_components = self.get_dummy_components() - LoraLoaderMixin.save_lora_weights( - save_directory=tmpdirname, - unet_lora_layers=lora_components["unet_lora_layers"], - text_encoder_lora_layers=lora_components["text_encoder_lora_layers"], - ) - self.assertTrue(os.path.isfile(os.path.join(tmpdirname, "pytorch_lora_weights.bin"))) - - def test_lora_save_load(self): - pipeline_components, lora_components = self.get_dummy_components() - sd_pipe = StableDiffusionPipeline(**pipeline_components) - sd_pipe = sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - _, _, pipeline_inputs = self.get_dummy_inputs() - - original_images = sd_pipe(**pipeline_inputs).images - orig_image_slice = original_images[0, -3:, -3:, -1] - - with tempfile.TemporaryDirectory() as tmpdirname: - LoraLoaderMixin.save_lora_weights( - save_directory=tmpdirname, - unet_lora_layers=lora_components["unet_lora_layers"], - text_encoder_lora_layers=lora_components["text_encoder_lora_layers"], - ) - self.assertTrue(os.path.isfile(os.path.join(tmpdirname, "pytorch_lora_weights.bin"))) - sd_pipe.load_lora_weights(tmpdirname) - - lora_images = sd_pipe(**pipeline_inputs).images - lora_image_slice = lora_images[0, -3:, -3:, -1] - - # Outputs shouldn't match. 
- self.assertFalse(torch.allclose(torch.from_numpy(orig_image_slice), torch.from_numpy(lora_image_slice))) - - def test_lora_save_load_safetensors(self): - pipeline_components, lora_components = self.get_dummy_components() - sd_pipe = StableDiffusionPipeline(**pipeline_components) - sd_pipe = sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - _, _, pipeline_inputs = self.get_dummy_inputs() - - original_images = sd_pipe(**pipeline_inputs).images - orig_image_slice = original_images[0, -3:, -3:, -1] - - with tempfile.TemporaryDirectory() as tmpdirname: - LoraLoaderMixin.save_lora_weights( - save_directory=tmpdirname, - unet_lora_layers=lora_components["unet_lora_layers"], - text_encoder_lora_layers=lora_components["text_encoder_lora_layers"], - safe_serialization=True, - ) - self.assertTrue(os.path.isfile(os.path.join(tmpdirname, "pytorch_lora_weights.safetensors"))) - sd_pipe.load_lora_weights(tmpdirname) - - lora_images = sd_pipe(**pipeline_inputs).images - lora_image_slice = lora_images[0, -3:, -3:, -1] - - # Outputs shouldn't match. - self.assertFalse(torch.allclose(torch.from_numpy(orig_image_slice), torch.from_numpy(lora_image_slice))) - - def test_lora_save_load_legacy(self): - pipeline_components, lora_components = self.get_dummy_components() - unet_lora_attn_procs = lora_components["unet_lora_attn_procs"] - sd_pipe = StableDiffusionPipeline(**pipeline_components) - sd_pipe = sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - _, _, pipeline_inputs = self.get_dummy_inputs() - - original_images = sd_pipe(**pipeline_inputs).images - orig_image_slice = original_images[0, -3:, -3:, -1] - - with tempfile.TemporaryDirectory() as tmpdirname: - unet = sd_pipe.unet - unet.set_attn_processor(unet_lora_attn_procs) - unet.save_attn_procs(tmpdirname) - self.assertTrue(os.path.isfile(os.path.join(tmpdirname, "pytorch_lora_weights.bin"))) - sd_pipe.load_lora_weights(tmpdirname) - - lora_images = sd_pipe(**pipeline_inputs).images - lora_image_slice = lora_images[0, -3:, -3:, -1] - - # Outputs shouldn't match. 
- self.assertFalse(torch.allclose(torch.from_numpy(orig_image_slice), torch.from_numpy(lora_image_slice))) - - def test_text_encoder_lora_monkey_patch(self): - pipeline_components, _ = self.get_dummy_components() - pipe = StableDiffusionPipeline(**pipeline_components) - - dummy_tokens = self.get_dummy_tokens() - - # inference without lora - outputs_without_lora = pipe.text_encoder(**dummy_tokens)[0] - assert outputs_without_lora.shape == (1, 77, 32) - - # monkey patch - params = pipe._modify_text_encoder(pipe.text_encoder, pipe.lora_scale) - - set_lora_weights(params, randn_weight=False) - - # inference with lora - outputs_with_lora = pipe.text_encoder(**dummy_tokens)[0] - assert outputs_with_lora.shape == (1, 77, 32) - - assert torch.allclose( - outputs_without_lora, outputs_with_lora - ), "lora_up_weight are all zero, so the lora outputs should be the same to without lora outputs" - - # create lora_attn_procs with randn up.weights - create_text_encoder_lora_attn_procs(pipe.text_encoder) - - # monkey patch - params = pipe._modify_text_encoder(pipe.text_encoder, pipe.lora_scale) - - set_lora_weights(params, randn_weight=True) - - # inference with lora - outputs_with_lora = pipe.text_encoder(**dummy_tokens)[0] - assert outputs_with_lora.shape == (1, 77, 32) - - assert not torch.allclose( - outputs_without_lora, outputs_with_lora - ), "lora_up_weight are not zero, so the lora outputs should be different to without lora outputs" - - def test_text_encoder_lora_remove_monkey_patch(self): - pipeline_components, _ = self.get_dummy_components() - pipe = StableDiffusionPipeline(**pipeline_components) - - dummy_tokens = self.get_dummy_tokens() - - # inference without lora - outputs_without_lora = pipe.text_encoder(**dummy_tokens)[0] - assert outputs_without_lora.shape == (1, 77, 32) - - # monkey patch - params = pipe._modify_text_encoder(pipe.text_encoder, pipe.lora_scale) - - set_lora_weights(params, randn_weight=True) - - # inference with lora - outputs_with_lora = pipe.text_encoder(**dummy_tokens)[0] - assert outputs_with_lora.shape == (1, 77, 32) - - assert not torch.allclose( - outputs_without_lora, outputs_with_lora - ), "lora outputs should be different to without lora outputs" - - # remove monkey patch - pipe._remove_text_encoder_monkey_patch() - - # inference with removed lora - outputs_without_lora_removed = pipe.text_encoder(**dummy_tokens)[0] - assert outputs_without_lora_removed.shape == (1, 77, 32) - - assert torch.allclose( - outputs_without_lora, outputs_without_lora_removed - ), "remove lora monkey patch should restore the original outputs" - - def test_text_encoder_lora_scale(self): - pipeline_components, lora_components = self.get_dummy_components() - sd_pipe = StableDiffusionPipeline(**pipeline_components) - sd_pipe = sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - _, _, pipeline_inputs = self.get_dummy_inputs() - - with tempfile.TemporaryDirectory() as tmpdirname: - LoraLoaderMixin.save_lora_weights( - save_directory=tmpdirname, - unet_lora_layers=lora_components["unet_lora_layers"], - text_encoder_lora_layers=lora_components["text_encoder_lora_layers"], - ) - self.assertTrue(os.path.isfile(os.path.join(tmpdirname, "pytorch_lora_weights.bin"))) - sd_pipe.load_lora_weights(tmpdirname) - - lora_images = sd_pipe(**pipeline_inputs).images - lora_image_slice = lora_images[0, -3:, -3:, -1] - - lora_images_with_scale = sd_pipe(**pipeline_inputs, cross_attention_kwargs={"scale": 0.5}).images - lora_image_with_scale_slice = lora_images_with_scale[0, -3:, -3:, 
-1] - - # Outputs shouldn't match. - self.assertFalse( - torch.allclose(torch.from_numpy(lora_image_slice), torch.from_numpy(lora_image_with_scale_slice)) - ) - - def test_lora_unet_attn_processors(self): - with tempfile.TemporaryDirectory() as tmpdirname: - self.create_lora_weight_file(tmpdirname) - - pipeline_components, _ = self.get_dummy_components() - sd_pipe = StableDiffusionPipeline(**pipeline_components) - sd_pipe = sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - # check if vanilla attention processors are used - for _, module in sd_pipe.unet.named_modules(): - if isinstance(module, Attention): - self.assertIsInstance(module.processor, (AttnProcessor, AttnProcessor2_0)) - - # load LoRA weight file - sd_pipe.load_lora_weights(tmpdirname) - - # check if lora attention processors are used - for _, module in sd_pipe.unet.named_modules(): - if isinstance(module, Attention): - attn_proc_class = ( - LoRAAttnProcessor2_0 if hasattr(F, "scaled_dot_product_attention") else LoRAAttnProcessor - ) - self.assertIsInstance(module.processor, attn_proc_class) - - def test_unload_lora_sd(self): - pipeline_components, lora_components = self.get_dummy_components() - _, _, pipeline_inputs = self.get_dummy_inputs(with_generator=False) - sd_pipe = StableDiffusionPipeline(**pipeline_components) - - original_images = sd_pipe(**pipeline_inputs, generator=torch.manual_seed(0)).images - orig_image_slice = original_images[0, -3:, -3:, -1] - - # Emulate training. - set_lora_weights(lora_components["unet_lora_layers"].parameters(), randn_weight=True) - set_lora_weights(lora_components["text_encoder_lora_layers"].parameters(), randn_weight=True) - - with tempfile.TemporaryDirectory() as tmpdirname: - LoraLoaderMixin.save_lora_weights( - save_directory=tmpdirname, - unet_lora_layers=lora_components["unet_lora_layers"], - text_encoder_lora_layers=lora_components["text_encoder_lora_layers"], - ) - self.assertTrue(os.path.isfile(os.path.join(tmpdirname, "pytorch_lora_weights.bin"))) - sd_pipe.load_lora_weights(tmpdirname) - - lora_images = sd_pipe(**pipeline_inputs, generator=torch.manual_seed(0)).images - lora_image_slice = lora_images[0, -3:, -3:, -1] - - # Unload LoRA parameters. - sd_pipe.unload_lora_weights() - original_images_two = sd_pipe(**pipeline_inputs, generator=torch.manual_seed(0)).images - orig_image_slice_two = original_images_two[0, -3:, -3:, -1] - - assert not np.allclose( - orig_image_slice, lora_image_slice - ), "LoRA parameters should lead to a different image slice." - assert not np.allclose( - orig_image_slice_two, lora_image_slice - ), "LoRA parameters should lead to a different image slice." - assert np.allclose( - orig_image_slice, orig_image_slice_two, atol=1e-3 - ), "Unloading LoRA parameters should lead to results similar to what was obtained with the pipeline without any LoRA parameters." 
- - @unittest.skipIf(torch_device != "cuda", "This test is supposed to run on GPU") - def test_lora_unet_attn_processors_with_xformers(self): - with tempfile.TemporaryDirectory() as tmpdirname: - self.create_lora_weight_file(tmpdirname) - - pipeline_components, _ = self.get_dummy_components() - sd_pipe = StableDiffusionPipeline(**pipeline_components) - sd_pipe = sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - # enable XFormers - sd_pipe.enable_xformers_memory_efficient_attention() - - # check if xFormers attention processors are used - for _, module in sd_pipe.unet.named_modules(): - if isinstance(module, Attention): - self.assertIsInstance(module.processor, XFormersAttnProcessor) - - # load LoRA weight file - sd_pipe.load_lora_weights(tmpdirname) - - # check if lora attention processors are used - for _, module in sd_pipe.unet.named_modules(): - if isinstance(module, Attention): - self.assertIsInstance(module.processor, LoRAXFormersAttnProcessor) - - # unload lora weights - sd_pipe.unload_lora_weights() - - # check if attention processors are reverted back to xFormers - for _, module in sd_pipe.unet.named_modules(): - if isinstance(module, Attention): - self.assertIsInstance(module.processor, XFormersAttnProcessor) - - @unittest.skipIf(torch_device != "cuda", "This test is supposed to run on GPU") - def test_lora_save_load_with_xformers(self): - pipeline_components, lora_components = self.get_dummy_components() - sd_pipe = StableDiffusionPipeline(**pipeline_components) - sd_pipe = sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - _, _, pipeline_inputs = self.get_dummy_inputs() - - # enable XFormers - sd_pipe.enable_xformers_memory_efficient_attention() - - original_images = sd_pipe(**pipeline_inputs).images - orig_image_slice = original_images[0, -3:, -3:, -1] - - with tempfile.TemporaryDirectory() as tmpdirname: - LoraLoaderMixin.save_lora_weights( - save_directory=tmpdirname, - unet_lora_layers=lora_components["unet_lora_layers"], - text_encoder_lora_layers=lora_components["text_encoder_lora_layers"], - ) - self.assertTrue(os.path.isfile(os.path.join(tmpdirname, "pytorch_lora_weights.bin"))) - sd_pipe.load_lora_weights(tmpdirname) - - lora_images = sd_pipe(**pipeline_inputs).images - lora_image_slice = lora_images[0, -3:, -3:, -1] - - # Outputs shouldn't match. 
- self.assertFalse(torch.allclose(torch.from_numpy(orig_image_slice), torch.from_numpy(lora_image_slice))) - - -class SDXLLoraLoaderMixinTests(unittest.TestCase): - def get_dummy_components(self): - torch.manual_seed(0) - unet = UNet2DConditionModel( - block_out_channels=(32, 64), - layers_per_block=2, - sample_size=32, - in_channels=4, - out_channels=4, - down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), - up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), - # SD2-specific config below - attention_head_dim=(2, 4), - use_linear_projection=True, - addition_embed_type="text_time", - addition_time_embed_dim=8, - transformer_layers_per_block=(1, 2), - projection_class_embeddings_input_dim=80, # 6 * 8 + 32 - cross_attention_dim=64, - ) - scheduler = EulerDiscreteScheduler( - beta_start=0.00085, - beta_end=0.012, - steps_offset=1, - beta_schedule="scaled_linear", - timestep_spacing="leading", - ) - torch.manual_seed(0) - vae = AutoencoderKL( - block_out_channels=[32, 64], - in_channels=3, - out_channels=3, - down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], - up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], - latent_channels=4, - sample_size=128, - ) - torch.manual_seed(0) - text_encoder_config = CLIPTextConfig( - bos_token_id=0, - eos_token_id=2, - hidden_size=32, - intermediate_size=37, - layer_norm_eps=1e-05, - num_attention_heads=4, - num_hidden_layers=5, - pad_token_id=1, - vocab_size=1000, - # SD2-specific config below - hidden_act="gelu", - projection_dim=32, - ) - text_encoder = CLIPTextModel(text_encoder_config) - tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") - - text_encoder_2 = CLIPTextModelWithProjection(text_encoder_config) - tokenizer_2 = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") - - unet_lora_attn_procs, unet_lora_layers = create_unet_lora_layers(unet) - text_encoder_one_lora_layers = create_text_encoder_lora_layers(text_encoder) - text_encoder_two_lora_layers = create_text_encoder_lora_layers(text_encoder_2) - - pipeline_components = { - "unet": unet, - "scheduler": scheduler, - "vae": vae, - "text_encoder": text_encoder, - "text_encoder_2": text_encoder_2, - "tokenizer": tokenizer, - "tokenizer_2": tokenizer_2, - } - lora_components = { - "unet_lora_layers": unet_lora_layers, - "text_encoder_one_lora_layers": text_encoder_one_lora_layers, - "text_encoder_two_lora_layers": text_encoder_two_lora_layers, - "unet_lora_attn_procs": unet_lora_attn_procs, - } - return pipeline_components, lora_components - - def get_dummy_inputs(self, with_generator=True): - batch_size = 1 - sequence_length = 10 - num_channels = 4 - sizes = (32, 32) - - generator = torch.manual_seed(0) - noise = floats_tensor((batch_size, num_channels) + sizes) - input_ids = torch.randint(1, sequence_length, size=(batch_size, sequence_length), generator=generator) - - pipeline_inputs = { - "prompt": "A painting of a squirrel eating a burger", - "num_inference_steps": 2, - "guidance_scale": 6.0, - "output_type": "np", - } - if with_generator: - pipeline_inputs.update({"generator": generator}) - - return noise, input_ids, pipeline_inputs - - def test_lora_save_load(self): - pipeline_components, lora_components = self.get_dummy_components() - sd_pipe = StableDiffusionXLPipeline(**pipeline_components) - sd_pipe = sd_pipe.to(torch_device) - sd_pipe.set_progress_bar_config(disable=None) - - _, _, pipeline_inputs = self.get_dummy_inputs() - - original_images = sd_pipe(**pipeline_inputs).images - orig_image_slice = original_images[0, 
-3:, -3:, -1] - - with tempfile.TemporaryDirectory() as tmpdirname: - StableDiffusionXLPipeline.save_lora_weights( - save_directory=tmpdirname, - unet_lora_layers=lora_components["unet_lora_layers"], - text_encoder_lora_layers=lora_components["text_encoder_one_lora_layers"], - text_encoder_2_lora_layers=lora_components["text_encoder_two_lora_layers"], - ) - self.assertTrue(os.path.isfile(os.path.join(tmpdirname, "pytorch_lora_weights.bin"))) - sd_pipe.load_lora_weights(tmpdirname) - - lora_images = sd_pipe(**pipeline_inputs).images - lora_image_slice = lora_images[0, -3:, -3:, -1] - - # Outputs shouldn't match. - self.assertFalse(torch.allclose(torch.from_numpy(orig_image_slice), torch.from_numpy(lora_image_slice))) - - def test_unload_lora_sdxl(self): - pipeline_components, lora_components = self.get_dummy_components() - _, _, pipeline_inputs = self.get_dummy_inputs(with_generator=False) - sd_pipe = StableDiffusionXLPipeline(**pipeline_components) - - original_images = sd_pipe(**pipeline_inputs, generator=torch.manual_seed(0)).images - orig_image_slice = original_images[0, -3:, -3:, -1] - - # Emulate training. - set_lora_weights(lora_components["unet_lora_layers"].parameters(), randn_weight=True) - set_lora_weights(lora_components["text_encoder_one_lora_layers"].parameters(), randn_weight=True) - set_lora_weights(lora_components["text_encoder_two_lora_layers"].parameters(), randn_weight=True) - - with tempfile.TemporaryDirectory() as tmpdirname: - StableDiffusionXLPipeline.save_lora_weights( - save_directory=tmpdirname, - unet_lora_layers=lora_components["unet_lora_layers"], - text_encoder_lora_layers=lora_components["text_encoder_one_lora_layers"], - text_encoder_2_lora_layers=lora_components["text_encoder_two_lora_layers"], - ) - self.assertTrue(os.path.isfile(os.path.join(tmpdirname, "pytorch_lora_weights.bin"))) - sd_pipe.load_lora_weights(tmpdirname) - - lora_images = sd_pipe(**pipeline_inputs, generator=torch.manual_seed(0)).images - lora_image_slice = lora_images[0, -3:, -3:, -1] - - # Unload LoRA parameters. - sd_pipe.unload_lora_weights() - original_images_two = sd_pipe(**pipeline_inputs, generator=torch.manual_seed(0)).images - orig_image_slice_two = original_images_two[0, -3:, -3:, -1] - - assert not np.allclose( - orig_image_slice, lora_image_slice - ), "LoRA parameters should lead to a different image slice." - assert not np.allclose( - orig_image_slice_two, lora_image_slice - ), "LoRA parameters should lead to a different image slice." - assert np.allclose( - orig_image_slice, orig_image_slice_two, atol=1e-3 - ), "Unloading LoRA parameters should lead to results similar to what was obtained with the pipeline without any LoRA parameters." 
- - -@slow -@require_torch_gpu -class LoraIntegrationTests(unittest.TestCase): - def test_dreambooth_old_format(self): - generator = torch.Generator("cpu").manual_seed(0) - - lora_model_id = "hf-internal-testing/lora_dreambooth_dog_example" - card = RepoCard.load(lora_model_id) - base_model_id = card.data.to_dict()["base_model"] - - pipe = StableDiffusionPipeline.from_pretrained(base_model_id, safety_checker=None) - pipe = pipe.to(torch_device) - pipe.load_lora_weights(lora_model_id) - - images = pipe( - "A photo of a sks dog floating in the river", output_type="np", generator=generator, num_inference_steps=2 - ).images - - images = images[0, -3:, -3:, -1].flatten() - - expected = np.array([0.7207, 0.6787, 0.6010, 0.7478, 0.6838, 0.6064, 0.6984, 0.6443, 0.5785]) - - self.assertTrue(np.allclose(images, expected, atol=1e-4)) - - def test_dreambooth_text_encoder_new_format(self): - generator = torch.Generator().manual_seed(0) - - lora_model_id = "hf-internal-testing/lora-trained" - card = RepoCard.load(lora_model_id) - base_model_id = card.data.to_dict()["base_model"] - - pipe = StableDiffusionPipeline.from_pretrained(base_model_id, safety_checker=None) - pipe = pipe.to(torch_device) - pipe.load_lora_weights(lora_model_id) - - images = pipe("A photo of a sks dog", output_type="np", generator=generator, num_inference_steps=2).images - - images = images[0, -3:, -3:, -1].flatten() - - expected = np.array([0.6628, 0.6138, 0.5390, 0.6625, 0.6130, 0.5463, 0.6166, 0.5788, 0.5359]) - - self.assertTrue(np.allclose(images, expected, atol=1e-4)) - - def test_a1111(self): - generator = torch.Generator().manual_seed(0) - - pipe = StableDiffusionPipeline.from_pretrained("hf-internal-testing/Counterfeit-V2.5", safety_checker=None).to( - torch_device - ) - lora_model_id = "hf-internal-testing/civitai-light-shadow-lora" - lora_filename = "light_and_shadow.safetensors" - pipe.load_lora_weights(lora_model_id, weight_name=lora_filename) - - images = pipe( - "masterpiece, best quality, mountain", output_type="np", generator=generator, num_inference_steps=2 - ).images - - images = images[0, -3:, -3:, -1].flatten() - expected = np.array([0.3725, 0.3767, 0.3761, 0.3796, 0.3827, 0.3763, 0.3831, 0.3809, 0.3392]) - - self.assertTrue(np.allclose(images, expected, atol=1e-4)) - - def test_vanilla_funetuning(self): - generator = torch.Generator().manual_seed(0) - - lora_model_id = "hf-internal-testing/sd-model-finetuned-lora-t4" - card = RepoCard.load(lora_model_id) - base_model_id = card.data.to_dict()["base_model"] - - pipe = StableDiffusionPipeline.from_pretrained(base_model_id, safety_checker=None) - pipe = pipe.to(torch_device) - pipe.load_lora_weights(lora_model_id) - - images = pipe("A pokemon with blue eyes.", output_type="np", generator=generator, num_inference_steps=2).images - - images = images[0, -3:, -3:, -1].flatten() - - expected = np.array([0.7406, 0.699, 0.5963, 0.7493, 0.7045, 0.6096, 0.6886, 0.6388, 0.583]) - - self.assertTrue(np.allclose(images, expected, atol=1e-4)) - - def test_unload_lora(self): - generator = torch.manual_seed(0) - prompt = "masterpiece, best quality, mountain" - num_inference_steps = 2 - - pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", safety_checker=None).to( - torch_device - ) - initial_images = pipe( - prompt, output_type="np", generator=generator, num_inference_steps=num_inference_steps - ).images - initial_images = initial_images[0, -3:, -3:, -1].flatten() - - lora_model_id = "hf-internal-testing/civitai-colored-icons-lora" - lora_filename = 
"Colored_Icons_by_vizsumit.safetensors" - - pipe.load_lora_weights(lora_model_id, weight_name=lora_filename) - generator = torch.manual_seed(0) - lora_images = pipe( - prompt, output_type="np", generator=generator, num_inference_steps=num_inference_steps - ).images - lora_images = lora_images[0, -3:, -3:, -1].flatten() - - pipe.unload_lora_weights() - generator = torch.manual_seed(0) - unloaded_lora_images = pipe( - prompt, output_type="np", generator=generator, num_inference_steps=num_inference_steps - ).images - unloaded_lora_images = unloaded_lora_images[0, -3:, -3:, -1].flatten() - - self.assertFalse(np.allclose(initial_images, lora_images)) - self.assertTrue(np.allclose(initial_images, unloaded_lora_images, atol=1e-3)) - - def test_load_unload_load_kohya_lora(self): - # This test ensures that a Kohya-style LoRA can be safely unloaded and then loaded - # without introducing any side-effects. Even though the test uses a Kohya-style - # LoRA, the underlying adapter handling mechanism is format-agnostic. - generator = torch.manual_seed(0) - prompt = "masterpiece, best quality, mountain" - num_inference_steps = 2 - - pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", safety_checker=None).to( - torch_device - ) - initial_images = pipe( - prompt, output_type="np", generator=generator, num_inference_steps=num_inference_steps - ).images - initial_images = initial_images[0, -3:, -3:, -1].flatten() - - lora_model_id = "hf-internal-testing/civitai-colored-icons-lora" - lora_filename = "Colored_Icons_by_vizsumit.safetensors" - - pipe.load_lora_weights(lora_model_id, weight_name=lora_filename) - generator = torch.manual_seed(0) - lora_images = pipe( - prompt, output_type="np", generator=generator, num_inference_steps=num_inference_steps - ).images - lora_images = lora_images[0, -3:, -3:, -1].flatten() - - pipe.unload_lora_weights() - generator = torch.manual_seed(0) - unloaded_lora_images = pipe( - prompt, output_type="np", generator=generator, num_inference_steps=num_inference_steps - ).images - unloaded_lora_images = unloaded_lora_images[0, -3:, -3:, -1].flatten() - - self.assertFalse(np.allclose(initial_images, lora_images)) - self.assertTrue(np.allclose(initial_images, unloaded_lora_images, atol=1e-3)) - - # make sure we can load a LoRA again after unloading and they don't have - # any undesired effects. - pipe.load_lora_weights(lora_model_id, weight_name=lora_filename) - generator = torch.manual_seed(0) - lora_images_again = pipe( - prompt, output_type="np", generator=generator, num_inference_steps=num_inference_steps - ).images - lora_images_again = lora_images_again[0, -3:, -3:, -1].flatten() - - self.assertTrue(np.allclose(lora_images, lora_images_again, atol=1e-3)) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/gcnet/README.md b/spaces/Andy1621/uniformer_image_detection/configs/gcnet/README.md deleted file mode 100644 index 0ef8db737743c63fbf2089e53d8f5302b52ee5e6..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/gcnet/README.md +++ /dev/null @@ -1,59 +0,0 @@ -# GCNet for Object Detection - -By [Yue Cao](http://yue-cao.me), [Jiarui Xu](http://jerryxu.net), [Stephen Lin](https://scholar.google.com/citations?user=c3PYmxUAAAAJ&hl=en), Fangyun Wei, [Han Hu](https://sites.google.com/site/hanhushomepage/). 
- -We provide config files to reproduce the results in the paper for -["GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond"](https://arxiv.org/abs/1904.11492) on COCO object detection. - -## Introduction - -[ALGORITHM] - -**GCNet** is initially described in [arxiv](https://arxiv.org/abs/1904.11492). Via absorbing advantages of Non-Local Networks (NLNet) and Squeeze-Excitation Networks (SENet), GCNet provides a simple, fast and effective approach for global context modeling, which generally outperforms both NLNet and SENet on major benchmarks for various recognition tasks. - -## Citing GCNet - -```latex -@article{cao2019GCNet, - title={GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond}, - author={Cao, Yue and Xu, Jiarui and Lin, Stephen and Wei, Fangyun and Hu, Han}, - journal={arXiv preprint arXiv:1904.11492}, - year={2019} -} -``` - -## Results and models - -The results on COCO 2017val are shown in the below table. - -| Backbone | Model | Context | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download | -| :-------: | :--------------: | :------------: | :-----: | :------: | :------------: | :----: | :-----: | :------: | :--------: | -| R-50-FPN | Mask | GC(c3-c5, r16) | 1x | 5.0 | | 39.7 | 35.9 |[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/mask_rcnn_r50_fpn_r16_gcb_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r50_fpn_r16_gcb_c3-c5_1x_coco/mask_rcnn_r50_fpn_r16_gcb_c3-c5_1x_coco_20200515_211915-187da160.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r50_fpn_r16_gcb_c3-c5_1x_coco/mask_rcnn_r50_fpn_r16_gcb_c3-c5_1x_coco_20200515_211915.log.json) | -| R-50-FPN | Mask | GC(c3-c5, r4) | 1x | 5.1 | 15.0 | 39.9 | 36.0 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/mask_rcnn_r50_fpn_r4_gcb_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r50_fpn_r4_gcb_c3-c5_1x_coco/mask_rcnn_r50_fpn_r4_gcb_c3-c5_1x_coco_20200204-17235656.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r50_fpn_r4_gcb_c3-c5_1x_coco/mask_rcnn_r50_fpn_r4_gcb_c3-c5_1x_coco_20200204_024626.log.json) | -| R-101-FPN | Mask | GC(c3-c5, r16) | 1x | 7.6 | 11.4 | 41.3 | 37.2 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/mask_rcnn_r101_fpn_r16_gcb_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r101_fpn_r16_gcb_c3-c5_1x_coco/mask_rcnn_r101_fpn_r16_gcb_c3-c5_1x_coco_20200205-e58ae947.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r101_fpn_r16_gcb_c3-c5_1x_coco/mask_rcnn_r101_fpn_r16_gcb_c3-c5_1x_coco_20200205_192835.log.json) | -| R-101-FPN | Mask | GC(c3-c5, r4) | 1x | 7.8 | 11.6 | 42.2 | 37.8 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/mask_rcnn_r101_fpn_r4_gcb_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r101_fpn_r4_gcb_c3-c5_1x_coco/mask_rcnn_r101_fpn_r4_gcb_c3-c5_1x_coco_20200206-af22dc9d.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r101_fpn_r4_gcb_c3-c5_1x_coco/mask_rcnn_r101_fpn_r4_gcb_c3-c5_1x_coco_20200206_112128.log.json) | - -| Backbone | Model | Context | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download | -| :-------: | :--------------: | :------------: | :-----: | :------: | :------------: | :----: | :-----: | :------: | 
:-------: | -| R-50-FPN | Mask | - | 1x | 4.4 | 16.6 | 38.4 | 34.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/mask_rcnn_r50_fpn_syncbn-backbone_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r50_fpn_syncbn-backbone_1x_coco/mask_rcnn_r50_fpn_syncbn-backbone_1x_coco_20200202-bb3eb55c.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r50_fpn_syncbn-backbone_1x_coco/mask_rcnn_r50_fpn_syncbn-backbone_1x_coco_20200202_214122.log.json) | -| R-50-FPN | Mask | GC(c3-c5, r16) | 1x | 5.0 | 15.5 | 40.4 | 36.2 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/mask_rcnn_r50_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r50_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco/mask_rcnn_r50_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco_20200202-587b99aa.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r50_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco/mask_rcnn_r50_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco_20200202_174907.log.json) | -| R-50-FPN | Mask | GC(c3-c5, r4) | 1x | 5.1 | 15.1 | 40.7 | 36.5 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/mask_rcnn_r50_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r50_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco/mask_rcnn_r50_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco_20200202-50b90e5c.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r50_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco/mask_rcnn_r50_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco_20200202_085547.log.json) | -| R-101-FPN | Mask | - | 1x | 6.4 | 13.3 | 40.5 | 36.3 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_1x_coco/mask_rcnn_r101_fpn_syncbn-backbone_1x_coco_20200210-81658c8a.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_1x_coco/mask_rcnn_r101_fpn_syncbn-backbone_1x_coco_20200210_220422.log.json) | -| R-101-FPN | Mask | GC(c3-c5, r16) | 1x | 7.6 | 12.0 | 42.2 | 37.8 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco/mask_rcnn_r101_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco_20200207-945e77ca.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco/mask_rcnn_r101_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco_20200207_015330.log.json) | -| R-101-FPN | Mask | GC(c3-c5, r4) | 1x | 7.8 | 11.8 | 42.2 | 37.8 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco/mask_rcnn_r101_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco_20200206-8407a3f0.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco/mask_rcnn_r101_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco_20200206_142508.log.json) | -| X-101-FPN | Mask | - | 1x | 7.6 | 11.3 | 42.4 | 37.7 | 
[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_1x_coco/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_1x_coco_20200211-7584841c.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_1x_coco/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_1x_coco_20200211_054326.log.json) | -| X-101-FPN | Mask | GC(c3-c5, r16) | 1x | 8.8 | 9.8 | 43.5 | 38.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco_20200211-cbed3d2c.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco_20200211_164715.log.json) | -| X-101-FPN | Mask | GC(c3-c5, r4) | 1x | 9.0 | 9.7 | 43.9 | 39.0 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco_20200212-68164964.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco_20200212_070942.log.json) | -| X-101-FPN | Cascade Mask | - | 1x | 9.2 | 8.4 | 44.7 | 38.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_1x_coco_20200310-d5ad2a5e.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_1x_coco_20200310_115217.log.json) | -| X-101-FPN | Cascade Mask | GC(c3-c5, r16) | 1x | 10.3 | 7.7 | 46.2 | 39.7 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco_20200211-10bf2463.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco_20200211_184154.log.json) | -| X-101-FPN | Cascade Mask | GC(c3-c5, r4) | 1x | 10.6 | | 46.4 | 40.1 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco_20200703_180653-ed035291.pth) | 
[log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco_20200703_180653.log.json) | -| X-101-FPN | DCN Cascade Mask | - | 1x | | | 44.9 | 38.9 |[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_1x_coco_20200516_182249-680fc3f2.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_1x_coco_20200516_182249.log.json)| -| X-101-FPN | DCN Cascade Mask | GC(c3-c5, r16) | 1x | | | 44.6 | |[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_r16_gcb_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_r16_gcb_c3-c5_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_r16_gcb_c3-c5_1x_coco_20200516_015634-08f56b56.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_r16_gcb_c3-c5_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_r16_gcb_c3-c5_1x_coco_20200516_015634.log.json) | -| X-101-FPN | DCN Cascade Mask | GC(c3-c5, r4) | 1x | | | 45.7 | 39.5 |[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_r4_gcb_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_r4_gcb_c3-c5_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_r4_gcb_c3-c5_1x_coco_20200518_041145-24cabcfd.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_r4_gcb_c3-c5_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_r4_gcb_c3-c5_1x_coco_20200518_041145.log.json) | - -**Notes:** - -- The `SyncBN` is added in the backbone for all models in **Table 2**. -- `GC` denotes Global Context (GC) block is inserted after 1x1 conv of backbone. -- `DCN` denotes replace 3x3 conv with 3x3 Deformable Convolution in `c3-c5` stages of backbone. -- `r4` and `r16` denote ratio 4 and ratio 16 in GC block respectively. 
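As a usage sketch only (not part of the original README), any config/checkpoint pair from the tables above can be loaded with MMDetection's high-level inference API. The paths below are placeholders taken from the R-50-FPN GC(c3-c5, r4) row and assume MMDetection 2.x and MMCV are installed and the checkpoint has been downloaded locally.

```python
# Hedged sketch: run inference with one of the GCNet checkpoints listed above.
# Assumes MMDetection 2.x is installed and the config/checkpoint files exist locally.
from mmdet.apis import init_detector, inference_detector

config_file = 'configs/gcnet/mask_rcnn_r50_fpn_r4_gcb_c3-c5_1x_coco.py'
checkpoint_file = 'mask_rcnn_r50_fpn_r4_gcb_c3-c5_1x_coco_20200204-17235656.pth'

# Build the detector from the config and load the trained weights.
model = init_detector(config_file, checkpoint_file, device='cuda:0')

# Run single-image inference; returns per-class boxes (and masks for Mask R-CNN models).
result = inference_detector(model, 'demo.jpg')
```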
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/gn/mask_rcnn_r50_fpn_gn-all_contrib_2x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/gn/mask_rcnn_r50_fpn_gn-all_contrib_2x_coco.py deleted file mode 100644 index 89caaafbc17d871d836e810ba7c038648937254c..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/gn/mask_rcnn_r50_fpn_gn-all_contrib_2x_coco.py +++ /dev/null @@ -1,15 +0,0 @@ -_base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py' -norm_cfg = dict(type='GN', num_groups=32, requires_grad=True) -model = dict( - pretrained='open-mmlab://contrib/resnet50_gn', - backbone=dict(norm_cfg=norm_cfg), - neck=dict(norm_cfg=norm_cfg), - roi_head=dict( - bbox_head=dict( - type='Shared4Conv1FCBBoxHead', - conv_out_channels=256, - norm_cfg=norm_cfg), - mask_head=dict(norm_cfg=norm_cfg))) -# learning policy -lr_config = dict(step=[16, 22]) -runner = dict(type='EpochBasedRunner', max_epochs=24) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/swin/cascade_mask_rcnn_swin_tiny_patch4_window7_mstrain_480-800_giou_4conv1f_adamw_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/swin/cascade_mask_rcnn_swin_tiny_patch4_window7_mstrain_480-800_giou_4conv1f_adamw_1x_coco.py deleted file mode 100644 index 36e3acd0a4b6ad08e5af3c7b9c639eff028431f7..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/swin/cascade_mask_rcnn_swin_tiny_patch4_window7_mstrain_480-800_giou_4conv1f_adamw_1x_coco.py +++ /dev/null @@ -1,140 +0,0 @@ -_base_ = [ - '../_base_/models/cascade_mask_rcnn_swin_fpn.py', - '../_base_/datasets/coco_instance.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] - -model = dict( - backbone=dict( - embed_dim=96, - depths=[2, 2, 6, 2], - num_heads=[3, 6, 12, 24], - window_size=7, - ape=False, - drop_path_rate=0.0, - patch_norm=True, - use_checkpoint=False - ), - neck=dict(in_channels=[96, 192, 384, 768]), - roi_head=dict( - bbox_head=[ - dict( - type='ConvFCBBoxHead', - num_shared_convs=4, - num_shared_fcs=1, - in_channels=256, - conv_out_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - reg_decoded_bbox=True, - norm_cfg=dict(type='SyncBN', requires_grad=True), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='GIoULoss', loss_weight=10.0)), - dict( - type='ConvFCBBoxHead', - num_shared_convs=4, - num_shared_fcs=1, - in_channels=256, - conv_out_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.05, 0.05, 0.1, 0.1]), - reg_class_agnostic=False, - reg_decoded_bbox=True, - norm_cfg=dict(type='SyncBN', requires_grad=True), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='GIoULoss', loss_weight=10.0)), - dict( - type='ConvFCBBoxHead', - num_shared_convs=4, - num_shared_fcs=1, - in_channels=256, - conv_out_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.033, 0.033, 0.067, 0.067]), - reg_class_agnostic=False, - reg_decoded_bbox=True, - norm_cfg=dict(type='SyncBN', requires_grad=True), - loss_cls=dict( - type='CrossEntropyLoss', 
use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='GIoULoss', loss_weight=10.0)) - ])) - -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) - -# augmentation strategy originates from DETR / Sparse RCNN -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='AutoAugment', - policies=[ - [ - dict(type='Resize', - img_scale=[(480, 1333), (512, 1333), (544, 1333), (576, 1333), - (608, 1333), (640, 1333), (672, 1333), (704, 1333), - (736, 1333), (768, 1333), (800, 1333)], - multiscale_mode='value', - keep_ratio=True) - ], - [ - dict(type='Resize', - img_scale=[(400, 1333), (500, 1333), (600, 1333)], - multiscale_mode='value', - keep_ratio=True), - dict(type='RandomCrop', - crop_type='absolute_range', - crop_size=(384, 600), - allow_negative_crop=True), - dict(type='Resize', - img_scale=[(480, 1333), (512, 1333), (544, 1333), - (576, 1333), (608, 1333), (640, 1333), - (672, 1333), (704, 1333), (736, 1333), - (768, 1333), (800, 1333)], - multiscale_mode='value', - override=True, - keep_ratio=True) - ] - ]), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -data = dict(train=dict(pipeline=train_pipeline)) - -optimizer = dict(_delete_=True, type='AdamW', lr=0.0001, betas=(0.9, 0.999), weight_decay=0.05, - paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.), - 'relative_position_bias_table': dict(decay_mult=0.), - 'norm': dict(decay_mult=0.)})) -lr_config = dict(step=[8, 11]) -runner = dict(type='EpochBasedRunnerAmp', max_epochs=12) - -# do not use mmdet version fp16 -fp16 = None -optimizer_config = dict( - type="DistOptimizerHook", - update_interval=1, - grad_clip=None, - coalesce=True, - bucket_size_mb=-1, - use_fp16=True, -) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_512x512_20k_voc12aug.py b/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_512x512_20k_voc12aug.py deleted file mode 100644 index 09a5fe5468f0155f8fd0bf2cd1574a33624d8492..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_512x512_20k_voc12aug.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './fcn_r50-d8_512x512_20k_voc12aug.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/RWKV.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/RWKV.py deleted file mode 100644 index 39487c66b7dabec49a6aa80c4e499a088f1fa1a2..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/RWKV.py +++ /dev/null @@ -1,153 +0,0 @@ -''' -This loader is not currently maintained as RWKV can now be loaded -through the transformers library. 
-''' - -import copy -import os -from pathlib import Path - -import numpy as np -from tokenizers import Tokenizer - -import modules.shared as shared -from modules.callbacks import Iteratorize - -np.set_printoptions(precision=4, suppress=True, linewidth=200) - -os.environ['RWKV_JIT_ON'] = '1' -os.environ["RWKV_CUDA_ON"] = '1' if shared.args.rwkv_cuda_on else '0' # use CUDA kernel for seq mode (much faster) - -from rwkv.model import RWKV -from rwkv.utils import PIPELINE, PIPELINE_ARGS - - -class RWKVModel: - def __init__(self): - pass - - @classmethod - def from_pretrained(self, path, dtype="fp16", device="cuda"): - tokenizer_path = Path(f"{path.parent}/20B_tokenizer.json") - if shared.args.rwkv_strategy is None: - model = RWKV(model=str(path), strategy=f'{device} {dtype}') - else: - model = RWKV(model=str(path), strategy=shared.args.rwkv_strategy) - - pipeline = PIPELINE(model, str(tokenizer_path)) - result = self() - result.pipeline = pipeline - result.model = model - result.cached_context = "" - result.cached_model_state = None - result.cached_output_logits = None - return result - - def generate(self, prompt, state, callback=None): - args = PIPELINE_ARGS( - temperature=state['temperature'], - top_p=state['top_p'], - top_k=state['top_k'], - alpha_frequency=0.1, # Frequency Penalty (as in GPT-3) - alpha_presence=0.1, # Presence Penalty (as in GPT-3) - token_ban=[0], # ban the generation of some tokens - token_stop=[] - ) - - if self.cached_context != "": - if prompt.startswith(self.cached_context): - prompt = prompt[len(self.cached_context):] - else: - self.cached_context = "" - self.cached_model_state = None - self.cached_output_logits = None - - # out = self.pipeline.generate(prompt, token_count=state['max_new_tokens'], args=args, callback=callback) - out = self.generate_from_cached_state(prompt, token_count=state['max_new_tokens'], args=args, callback=callback) - return out - - def generate_with_streaming(self, *args, **kwargs): - with Iteratorize(self.generate, args, kwargs, callback=None) as generator: - reply = '' - for token in generator: - reply += token - yield reply - - # Similar to the PIPELINE.generate, but lets us maintain the cached_model_state - def generate_from_cached_state(self, ctx="", token_count=20, args=None, callback=None): - all_tokens = [] - out_str = '' - occurrence = {} - state = copy.deepcopy(self.cached_model_state) if self.cached_model_state is not None else None - - # if we ended up with an empty context, just reuse the cached logits - # this can happen if a user undoes a message and then sends the exact message again - # in that case the full context ends up being the same as the cached_context, so the remaining context is empty. - if ctx == "": - out = self.cached_output_logits - - token = None - for i in range(token_count): - # forward - tokens = self.pipeline.encode(ctx) if i == 0 else [token] - while len(tokens) > 0: - out, state = self.model.forward(tokens[:args.chunk_len], state) - tokens = tokens[args.chunk_len:] - if i == 0: - begin_token = len(all_tokens) - last_token_posi = begin_token - # cache the model state after scanning the context - # we don't cache the state after processing our own generated tokens because - # the output string might be post-processed arbitrarily. 
Therefore, what's fed into the model - # on the next round of chat might be slightly different what what it output on the previous round - if i == 0: - self.cached_context += ctx - self.cached_model_state = copy.deepcopy(state) - self.cached_output_logits = copy.deepcopy(out) - - # adjust probabilities - for n in args.token_ban: - out[n] = -float('inf') - - for n in occurrence: - out[n] -= (args.alpha_presence + occurrence[n] * args.alpha_frequency) - - # sampler - token = self.pipeline.sample_logits(out, temperature=args.temperature, top_p=args.top_p, top_k=args.top_k) - if token in args.token_stop: - break - - all_tokens += [token] - if token not in occurrence: - occurrence[token] = 1 - else: - occurrence[token] += 1 - - # output - tmp = self.pipeline.decode(all_tokens[last_token_posi:]) - if '\ufffd' not in tmp: # is valid utf-8 string? - if callback: - callback(tmp) - - out_str += tmp - last_token_posi = begin_token + i + 1 - return out_str - - -class RWKVTokenizer: - def __init__(self): - pass - - @classmethod - def from_pretrained(self, path): - tokenizer_path = path / "20B_tokenizer.json" - tokenizer = Tokenizer.from_file(str(tokenizer_path)) - result = self() - result.tokenizer = tokenizer - return result - - def encode(self, prompt): - return self.tokenizer.encode(prompt).ids - - def decode(self, ids): - return self.tokenizer.decode(ids) diff --git a/spaces/AnkitGaur2811/Image_Conversion_app_using_Opencv/README.md b/spaces/AnkitGaur2811/Image_Conversion_app_using_Opencv/README.md deleted file mode 100644 index e2ec15a3b9641f0408cdcd10cee07c353df84097..0000000000000000000000000000000000000000 --- a/spaces/AnkitGaur2811/Image_Conversion_app_using_Opencv/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Image Conversion App Using Opencv -emoji: 😻 -colorFrom: red -colorTo: green -sdk: gradio -sdk_version: 3.0.20 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/main.py b/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/main.py deleted file mode 100644 index 7f59cad7b70cece88aaa2687f8780cdf1d8c15e7..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/main.py +++ /dev/null @@ -1,9 +0,0 @@ -from optimization.image_editor import ImageEditor -from optimization.arguments import get_arguments - - -if __name__ == "__main__": - args = get_arguments() - image_editor = ImageEditor(args) - image_editor.edit_image_by_prompt() - # image_editor.reconstruct_image() diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/gather_points.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/gather_points.py deleted file mode 100644 index f52f1677d8ea0facafc56a3672d37adb44677ff3..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/gather_points.py +++ /dev/null @@ -1,57 +0,0 @@ -import torch -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['gather_points_forward', 'gather_points_backward']) - - -class GatherPoints(Function): - """Gather points with given index.""" - - @staticmethod - def forward(ctx, features: torch.Tensor, - indices: torch.Tensor) -> torch.Tensor: - """ - Args: - features (Tensor): (B, C, N) features to gather. - indices (Tensor): (B, M) where M is the number of points. 
- - Returns: - Tensor: (B, C, M) where M is the number of points. - """ - assert features.is_contiguous() - assert indices.is_contiguous() - - B, npoint = indices.size() - _, C, N = features.size() - output = torch.cuda.FloatTensor(B, C, npoint) - - ext_module.gather_points_forward( - features, indices, output, b=B, c=C, n=N, npoints=npoint) - - ctx.for_backwards = (indices, C, N) - if torch.__version__ != 'parrots': - ctx.mark_non_differentiable(indices) - return output - - @staticmethod - def backward(ctx, grad_out): - idx, C, N = ctx.for_backwards - B, npoint = idx.size() - - grad_features = torch.cuda.FloatTensor(B, C, N).zero_() - grad_out_data = grad_out.data.contiguous() - ext_module.gather_points_backward( - grad_out_data, - idx, - grad_features.data, - b=B, - c=C, - n=N, - npoints=npoint) - return grad_features, None - - -gather_points = GatherPoints.apply diff --git a/spaces/AntX-ai/Fintech/index.html b/spaces/AntX-ai/Fintech/index.html deleted file mode 100644 index ebacd19a35a6aa56f692fda5fd182bca221dd549..0000000000000000000000000000000000000000 --- a/spaces/AntX-ai/Fintech/index.html +++ /dev/null @@ -1,15 +0,0 @@ - - - - - - My static Space - - - -
    -

    Welcome to AntX.ai Fintech Space!

    -

    You will explore various topics and algorithms about fintech

    -
    - - diff --git a/spaces/AnthonyTruchetPoC/persistent-docker/scripts/build-clean-docs.sh b/spaces/AnthonyTruchetPoC/persistent-docker/scripts/build-clean-docs.sh deleted file mode 100644 index 16608a11a1fb8ce1909767db207d80aa778526d1..0000000000000000000000000000000000000000 --- a/spaces/AnthonyTruchetPoC/persistent-docker/scripts/build-clean-docs.sh +++ /dev/null @@ -1,11 +0,0 @@ -#!/usr/bin/env sh - -DOCS_GENERATED_API_SRC=src/athai -DOCS_SRC=doc -DOCS_GENERATED_API_DST=doc/_autosummary -DOCS_DST=dist/doc - -rm -rf $DOCS_DST -rm -rf $DOCS_GENERATED_API_DST - -poetry run sphinx-build -E -a $DOCS_SRC $DOCS_DST diff --git a/spaces/Apex-X/nono/roop/typing.py b/spaces/Apex-X/nono/roop/typing.py deleted file mode 100644 index 1cff7440616e20bfe7b8bc287f86d11bf1b0f083..0000000000000000000000000000000000000000 --- a/spaces/Apex-X/nono/roop/typing.py +++ /dev/null @@ -1,7 +0,0 @@ -from typing import Any - -from insightface.app.common import Face -import numpy - -Face = Face -Frame = numpy.ndarray[Any, Any] diff --git a/spaces/Atsushi/kinoko-mini-AI/app.py b/spaces/Atsushi/kinoko-mini-AI/app.py deleted file mode 100644 index b917015e42a22331f1664ecfbb1e645190558bce..0000000000000000000000000000000000000000 --- a/spaces/Atsushi/kinoko-mini-AI/app.py +++ /dev/null @@ -1,30 +0,0 @@ -import gradio as gr -import os -pkl = "all_20211108_res34.pkl" -from fastai.vision.all import * -from fastai.vision.widgets import * -import jaconv -import pathlib -plt = platform.system() -if plt == 'Linux': pathlib.WindowsPath = pathlib.PosixPath -model_inf = load_learner(pkl) -#print(os.getcwd()) -title = "きのこミニAI" -description = "615種類のきのこを判定します。日本国内で撮られた約10万枚の写真を学習に使用。食べる人ではなく学ぶ人のためのツールです。ご利用は自己責任で。最終更新日:2021/11/9" -def kinoko_uranai(img): - replace_dic = {"_ッロウッ":" (group)","ー":""} - result_dic = {} - pred_class, pred_idxs, outputs = model_inf.predict(img) - top_5_conf, i = outputs.topk(5) - itr = 0 - classes = model_inf.dls.vocab - result_dic = {} - for x in i: - kwamei = jaconv.alphabet2kata(classes[x.item()].lower()) - for k,v in replace_dic.items(): - kwamei = kwamei.replace(k,v) - result_dic[kwamei] = str(round(top_5_conf[itr].item(),2)) - itr=itr+1 - return result_dic -outputs = gr.outputs.Label(num_top_classes=5) -iface = gr.Interface(fn=kinoko_uranai, inputs="image", outputs=outputs,title=title,description=description).launch(debug=True) \ No newline at end of file diff --git a/spaces/Bavesh/Oral_Cancer_Detection/README.md b/spaces/Bavesh/Oral_Cancer_Detection/README.md deleted file mode 100644 index ce809fa34d40a99d34957297eae254d555f39826..0000000000000000000000000000000000000000 --- a/spaces/Bavesh/Oral_Cancer_Detection/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Oral_Cancer_Detection -emoji: 👀 -colorFrom: indigo -colorTo: purple -sdk: streamlit -sdk_version: 1.9.0 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Benson/text-generation/Examples/Apk Download Traffic Rider Hack.md b/spaces/Benson/text-generation/Examples/Apk Download Traffic Rider Hack.md deleted file mode 100644 index 4337c1dcfff0828200acd13fd6818916c79ea957..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Apk Download Traffic Rider Hack.md +++ /dev/null @@ -1,77 +0,0 @@ - -

How to Download and Install the Traffic Rider Hack APK on Android

    -

Traffic Rider is one of the most popular and addictive motorcycle racing games on Android. It offers a realistic and immersive gaming experience with a first-person view, real bike sounds, detailed environments, and a full career mode. You can choose from more than 30 different bikes, customize them, and race through various scenarios while avoiding traffic and obstacles.

    -

    apk download traffic rider hack


    Download File ☆☆☆ https://bltlly.com/2v6LWU



    -

However, if you want to enjoy the game without limitations or restrictions, you may want to try the Traffic Rider hack APK. This is a modified version of the original game that gives you access to unlimited money and gold, all bikes unlocked and upgraded, no ads, no timers, and more. With this hack, you can have more fun and challenge in Traffic Rider.

    -

In this article, we will show you how to download and install the Traffic Rider hack APK on your Android device. We will also discuss the benefits and risks of using this hack, as well as some tips and tricks for playing the game. Follow the steps below to get started.

    -

Steps to download and install the Traffic Rider hack APK on Android

    -

Before you can install the Traffic Rider hack APK on your device, you need to make sure you have enabled unknown sources in your settings. This allows you to install apps from sources other than the Google Play Store. Here's how to do it:

    -

    Paso 1: Habilitar fuentes desconocidas en el dispositivo

    -
      -
    • Ve a la configuración de tu dispositivo y toca Aplicaciones y notificaciones (o Aplicaciones en versiones anteriores de Android).
    • -
    • Toque los tres puntos en la esquina superior derecha.
    • -
    • Toque Acceso especial.
    • -
    • Toca Instalar aplicaciones desconocidas.
    • -
    • Toque Chrome (o cualquier navegador web que utilice).
    • -
    • Mover Permitir desde esta fuente a la posición On.
    • -
    - -
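Como referencia, si dispone de un ordenador con las herramientas de Android (adb) y la depuración USB activada en el teléfono, puede consultar el ajuste global de fuentes desconocidas desde la línea de comandos. El siguiente boceto en Python solo invoca el comando real `adb shell settings get secure install_non_market_apps`; tenga en cuenta que en versiones recientes de Android ese ajuste global está obsoleto y el permiso se concede aplicación por aplicación, así que tómelo únicamente como una comprobación orientativa.

```python
import subprocess

def fuentes_desconocidas_global() -> str:
    # Lee el ajuste global (obsoleto desde Android 8, donde el permiso es por aplicación)
    resultado = subprocess.run(
        ["adb", "shell", "settings", "get", "secure", "install_non_market_apps"],
        capture_output=True,
        text=True,
        check=True,
    )
    return resultado.stdout.strip()  # "1" suele indicar que la instalación está permitida

if __name__ == "__main__":
    print("install_non_market_apps =", fuentes_desconocidas_global())
```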

Paso 2: Descargar el archivo APK de Traffic Rider hack desde una fuente de buena reputación

    -
      -
• Abra su navegador web y vaya a la página que ofrece el archivo APK de Traffic Rider hack; por ejemplo, APK Done o Traffic Rider Games.
    • -
• Encuentre el enlace de descarga del archivo APK de Traffic Rider hack y toque en él.
    • -
    • Aceptar cualquier ventana emergente o permisos que puedan aparecer.
    • -
• Espere a que termine la descarga (si prefiere automatizarla desde un ordenador, vea el boceto que aparece tras esta lista).
    • -
    -
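Si prefiere automatizar la descarga desde un ordenador, un boceto mínimo en Python con la librería requests podría verse así. La URL y el nombre de archivo son ejemplos hipotéticos: sustitúyalos por los del sitio que elija, y recuerde los riesgos de descargar APK de fuentes no oficiales.

```python
import requests

# URL hipotética: sustitúyala por el enlace real del sitio que elija
APK_URL = "https://example.com/descargas/traffic-rider-hack.apk"
DESTINO = "traffic-rider-hack.apk"

def descargar_apk(url: str, destino: str) -> None:
    # Descarga el archivo en bloques para no cargarlo entero en memoria
    with requests.get(url, stream=True, timeout=60) as resp:
        resp.raise_for_status()  # falla si el servidor devuelve un error
        with open(destino, "wb") as f:
            for bloque in resp.iter_content(chunk_size=8192):
                f.write(bloque)

if __name__ == "__main__":
    descargar_apk(APK_URL, DESTINO)
    print(f"Descarga completada: {DESTINO}")
```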

    Una vez que haya descargado el archivo APK, debe ubicarlo en su dispositivo y pulsar para instalarlo. Puede usar cualquier aplicación del explorador de archivos que tenga en su teléfono, como Cx File Explorer o Administrador de archivos. Aquí está cómo hacerlo:

    -

    Paso 3: Localizar y tocar el archivo APK para instalarlo

    -
      -
    • Abra su aplicación explorador de archivos y vaya a la carpeta Descargas en su dispositivo.
    • -
• Encuentre el archivo APK de Traffic Rider hack que ha descargado y tóquelo.
    • -
    • Puede ver un mensaje de advertencia diciendo que este tipo de archivo puede dañar su dispositivo. Toque OK o Instalar de todos modos (inseguro) para proceder.
    • -
    • Pulse Instalar y espere a que se complete la instalación.
    • -
    • Pulse Abrir para iniciar el juego o Listo para salir del instalador.
    • -
    -
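Como alternativa al explorador de archivos, si el APK está en un ordenador con adb instalado y la depuración USB activada en el teléfono, puede instalarlo directamente con el comando real `adb install`. Este boceto en Python solo envuelve ese comando; la ruta del archivo es un ejemplo.

```python
import subprocess

APK_LOCAL = "traffic-rider-hack.apk"  # ruta de ejemplo en su ordenador

def instalar_con_adb(ruta_apk: str) -> None:
    # 'adb install -r' instala (o reinstala) el APK en el dispositivo conectado
    resultado = subprocess.run(
        ["adb", "install", "-r", ruta_apk],
        capture_output=True,
        text=True,
        check=False,
    )
    print(resultado.stdout)
    if resultado.returncode != 0:
        print("Fallo en la instalación:", resultado.stderr)

if __name__ == "__main__":
    instalar_con_adb(APK_LOCAL)
```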

Felicitaciones, ha instalado con éxito Traffic Rider hack APK en su dispositivo. Ahora puede disfrutar del juego con todas las funciones del hack habilitadas. Así es como se hace:

    -

    -

Paso 4: Iniciar el juego y disfrutar de las funciones del hack

    -
      -
    • Abre el juego desde el cajón de la aplicación o la pantalla de inicio.
    • -
    • Puede ver una ventana emergente pidiéndole que permita el acceso a sus fotos, medios y archivos. Pulse Permitir continuar.
    • -
    • También puede ver una ventana emergente pidiéndole que califique el juego. Toque Más tarde o Califique ahora como desee.
    • -
    • Verás el menú principal del juego con cuatro opciones: carrera, Endless, Time Trial y Free Ride. Toca cualquiera de ellas para empezar a jugar.
    • - -
    • También notará que todas las bicicletas están desbloqueadas y actualizadas al nivel máximo. Puedes elegir cualquier bicicleta que te guste y personalizarla con diferentes colores y ruedas.
    • -
    • No verás anuncios ni temporizadores en el juego. Puedes jugar el tiempo que quieras sin interrupciones ni tiempos de espera.
    • -
    -

    Beneficios de usar Traffic Rider hack APK

    -

Usar Traffic Rider hack APK tiene muchos beneficios que pueden mejorar su experiencia de juego. Estos son algunos de ellos:

    -

    Dinero y oro ilimitados

    -

    Con dinero y oro ilimitados, puede comprar y actualizar cualquier bicicleta que desee sin preocuparse por el costo. También puede desbloquear todos los logros y recompensas en el juego con facilidad. Usted puede tener más diversión y variedad en Traffic Rider con dinero ilimitado y oro.

    -

    Todas las bicicletas desbloqueadas y actualizadas

    -

    Con todas las bicicletas desbloqueadas y actualizadas, puede elegir entre más de 30 bicicletas diferentes, cada una con sus propias características y rendimiento. También puede personalizarlos con diferentes colores y ruedas para adaptarse a su estilo. Puedes disfrutar de más realismo e inmersión en Traffic Rider con todas las bicicletas desbloqueadas y actualizadas.

    -

    No hay anuncios ni temporizadores

    -

    Sin anuncios y sin temporizadores, puede jugar Traffic Rider sin interrupciones ni tiempos de espera. Usted puede centrarse en el juego y los gráficos sin ser molestado por los anuncios o temporizadores. Puedes tener más desafío y emoción en Traffic Rider sin anuncios y sin temporizadores.

    -

    Los riesgos de usar Traffic Rider hack APK

    -

    Sin embargo, el uso de Traffic Rider hack APK también tiene algunos riesgos que usted debe tener en cuenta antes de instalarlo. Estos son algunos de ellos:

    -

    Infección potencial de malware o virus

-

Dado que el archivo no proviene de la Google Play Store y ha sido modificado por terceros, puede contener malware o virus capaces de dañar su dispositivo o robar su información personal. Descárguelo únicamente de fuentes en las que confíe y analícelo con un antivirus antes de instalarlo.

    Posible prohibición o suspensión del juego

    -

    Dado que Traffic Rider hack APK es una herramienta de trucos que le da una ventaja injusta sobre otros jugadores, puede violar los términos de servicio del desarrollador de juegos o editor. Puedes enfrentarte a una prohibición o suspensión del juego si te pillan usándolo por su sistema anti-trampa o por otros jugadores que te denuncien. Usted debe utilizar Traffic Rider hack APK a su propio riesgo y discreción.

    -

    Cuestiones jurídicas o éticas

    -

    Dado que Traffic Rider hack APK es una versión pirata del juego original, puede infringir los derechos de propiedad intelectual del desarrollador o editor del juego. Usted puede enfrentar problemas legales o éticos si lo usa sin su permiso o consentimiento. Debes respetar el trabajo y el esfuerzo del desarrollador o editor de juegos y apoyarlos comprando sus productos oficiales.

    -

    Conclusión

    -

    Traffic Rider es un gran juego de carreras de motos que ofrece una experiencia de juego realista e inmersiva con una vista en primera persona, sonidos reales de bicicletas, entornos detallados y un modo de carrera completo. Sin embargo, si desea disfrutar del juego sin limitaciones o restricciones, es posible que desee probar Traffic Rider hack APK.

    -

    Tráfico Rider hack APK es una versión modificada del juego original que le da acceso a dinero ilimitado y oro, todas las bicicletas desbloqueadas y actualizadas, sin anuncios, sin temporizadores, y más. Con este hack, usted puede tener más diversión y desafío en Traffic Rider.

    -

En este artículo, le mostramos cómo descargar e instalar Traffic Rider hack APK en su dispositivo Android. También discutimos los beneficios y riesgos de usar este hack, así como algunos consejos y trucos para jugar el juego. Esperamos que haya encontrado este artículo útil e informativo. Si tiene algún comentario o pregunta, no dude en dejarlo a continuación. A continuación respondemos algunas preguntas frecuentes (FAQ) sobre Traffic Rider hack APK:

    Q: ¿Es seguro de usar Traffic Rider hack APK?

-

A: Traffic Rider hack APK no es completamente seguro, ya que proviene de fuentes no oficiales y puede contener malware o virus. Descárguelo solo de sitios de confianza, analícelo con un antivirus antes de instalarlo y úselo bajo su propio riesgo.

    Q: ¿Cómo puedo actualizar Traffic Rider hack APK?

    -

A: Traffic Rider hack APK puede no ser compatible con la última versión del juego original. Es posible que tenga que desinstalar el hack y descargar una versión nueva de la misma fuente o de otra diferente. También debe hacer una copia de seguridad de los datos del juego antes de actualizar el hack.

    -

    Q: ¿Puedo jugar Traffic Rider hack APK en línea con otros jugadores?

    -

A: Traffic Rider hack APK no es compatible con el modo multijugador en línea. Solo puede jugar sin conexión con las funciones del hack habilitadas, y puede enfrentarse a una prohibición o suspensión del juego si intenta jugar en línea con el hack.

    -

    Q: ¿Puedo usar Traffic Rider hack APK en otros dispositivos o plataformas?

    -

A: Traffic Rider hack APK solo está diseñado para dispositivos Android. No se puede utilizar en otros dispositivos o plataformas, como iOS, Windows o Mac; para ellos tendría que buscar un hack o mod diferente.

    -

    Q: ¿Cuáles son algunas alternativas a Traffic Rider hack APK?

    -

A: Si no desea utilizar Traffic Rider hack APK, puede probar algunas alternativas que también mejoran la experiencia de juego. Por ejemplo, puede usar consejos, guías o trucos de Traffic Rider que le ayuden a mejorar sus habilidades y su rendimiento, o mods (APK modificados) que ofrecen características o modos diferentes a los del juego original.

    -
    -
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Apk Mvil Zingspeed.md b/spaces/Benson/text-generation/Examples/Descargar Apk Mvil Zingspeed.md deleted file mode 100644 index b5f9c81750360ff78f62381320a68e4ef4b81005..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Apk Mvil Zingspeed.md +++ /dev/null @@ -1,103 +0,0 @@ - -

Cómo descargar ZingSpeed Mobile APK para Android

    -

    Si usted está buscando un ritmo rápido, emocionante, y la adrenalina de bombeo juego de carreras, es posible que desee echa un vistazo ZingSpeed Mobile. Este juego vietnamita es muy popular localmente y te permite personalizar todo lo que quieras en tu coche, y además de eso, ¡también es multijugador! De esta manera se puede correr con cualquier persona y realizar acrobacias increíbles en varias pistas.

    -

En este artículo, le mostraremos cómo descargar ZingSpeed Mobile APK para Android, cómo instalarlo y cómo jugarlo. También compartiremos algunos consejos y trucos para el juego ZingSpeed Mobile que le ayudarán a mejorar sus habilidades de carrera y su rendimiento. ¡Así que vamos a empezar!

    -

    descargar apk móvil zingspeed


    DOWNLOADhttps://bltlly.com/2v6JTC



    -

    ¿Qué es ZingSpeed Mobile?

    -

    Una breve introducción al juego y sus características

    -

    ZingSpeed Mobile es un juego de carreras en 3D para móviles, basado en el PC ZingSpeed original, que mantiene las habilidades y la profundidad del control, mientras mejora enormemente la creación de personajes y la moda. Fue desarrollado por VNG Corporation, uno de los principales desarrolladores de juegos en Vietnam.

    -

    ZingSpeed Mobile ofrece varios modos de juego, como Speed Race, Props Race, Ranked Race, Storyline, Couple Racing, Speed Racing Superpowers, Skateboarding Race, Border Races, Pig Wars y más. También puedes elegir entre diferentes tipos de vehículos, como coches, motocicletas, monopatines, etc. Puedes personalizar tu vehículo con diferentes partes, colores, pegatinas, calcomanías, etc. También puedes crear tu propio personaje con diferentes atuendos, peinados, accesorios, etc.

    -

    ZingSpeed Mobile es también un juego multijugador que te permite competir con otros jugadores de todo el mundo. Puede unirse o crear un equipo con sus amigos u otros corredores que comparten la misma pasión por la velocidad. También puedes participar en torneos de varios tamaños y competir con oponentes internacionales. También puedes chatear con otros jugadores en el juego o en plataformas de redes sociales.

    - -

    Jugar a ZingSpeed Mobile puede traerte muchos beneficios, como:

    -
      -
• Puede mejorar la coordinación mano-ojo y los reflejos, ya que tiene que controlar su vehículo y evitar obstáculos en la pista.
• -
• Puede mejorar su creatividad e imaginación, ya que puede personalizar su vehículo y su personaje de acuerdo a sus preferencias.
• -
• Puede aumentar su confianza y autoestima, ya que puede mostrar sus habilidades y logros de carrera a otros jugadores.
• -
• Puede reducir el estrés y el aburrimiento, ya que puede disfrutar de la emoción y la adrenalina de las carreras.
• -
• Puede ampliar sus habilidades sociales y su red de contactos, ya que puede hacer amigos con otros jugadores que comparten el mismo interés en las carreras.
    • -
    -

Cómo descargar ZingSpeed Mobile APK desde Google Play Store

    -

    Los pasos para descargar el archivo APK directamente a su dispositivo

    -

    Si desea descargar ZingSpeed Mobile APK directamente a su dispositivo Android, necesita una conexión a Internet y un navegador. Estos son los pasos:

    -
      -
    1. Abra la Google Play Store en su dispositivo Android y busque ZingSpeed Mobile. Toque en el nombre de la aplicación para abrir su página de detalles.
    2. -
    3. Toque en el menú de tres puntos en la esquina superior derecha de la pantalla y seleccione Compartir. Aparecerá un menú emergente con diferentes opciones para compartir el enlace de la aplicación.
    4. -
    5. Seleccione la opción que le permite copiar el enlace al portapapeles, como Copiar al portapapeles, Copiar enlace, etc.
    6. -
    7. Abra el navegador en su dispositivo y pegue el enlace en la barra de direcciones. Toque en Ir o Entrar para cargar la página.
    8. -
    9. En la página, verá un botón que dice Descargar APK. Toque en él y espere a que comience la descarga.
    10. -
11. Una vez que se complete la descarga, verá una notificación que dice ZingSpeed Mobile APK descargado. Antes de abrirlo, puede comprobar que el archivo no está dañado con el boceto que sigue a esta lista; después, toque la notificación para abrirlo.
    12. -
    -
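Un APK es, en el fondo, un archivo ZIP, así que antes de instalarlo puede comprobar que la descarga no está corrupta. Este boceto en Python usa el módulo estándar zipfile; la ruta del archivo es solo un ejemplo.

```python
import zipfile

RUTA_APK = "zingspeed-mobile.apk"  # ruta de ejemplo

def apk_parece_valido(ruta: str) -> bool:
    # Un APK es un ZIP: si no se puede abrir o alguna entrada está dañada, el archivo está corrupto
    if not zipfile.is_zipfile(ruta):
        return False
    with zipfile.ZipFile(ruta) as zf:
        return zf.testzip() is None  # None significa que ninguna entrada falló la comprobación

if __name__ == "__main__":
    if apk_parece_valido(RUTA_APK):
        print("El APK parece íntegro; puede continuar con la instalación.")
    else:
        print("El APK parece dañado; vuelva a descargarlo.")
```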

Los pasos para descargar el archivo APK a su ordenador y transferirlo a su dispositivo

-

Si prefiere descargar el archivo APK en su ordenador y luego pasarlo a su dispositivo Android, necesitará un cable USB o una conexión inalámbrica. Estos son los pasos (al final de la lista encontrará un boceto para copiar el archivo con adb):
      -
    1. Abra la Google Play Store en su computadora y busque ZingSpeed Mobile. Haga clic en el nombre de la aplicación para abrir su página de detalles.
    2. -
    3. Copie la URL de la página desde la barra de direcciones de su navegador.
    4. -
    5. Abra una nueva pestaña en su navegador y vaya a un sitio web que le permite descargar archivos APK de Google Play Store, como APKPure, APKMirror, etc.
    6. -
    7. Pegue la URL de la página de la aplicación ZingSpeed Mobile en el cuadro de búsqueda del sitio web y haga clic en Buscar o Enter.
    8. -
    9. Verá una lista de resultados con diferentes versiones de ZingSpeed Mobile APK. Elija la última versión y haga clic en Descargar o Descargar APK.
    10. -
    11. Espere a que la descarga termine y localice el archivo en su computadora.
    12. -
    13. Conecte su dispositivo Android a su computadora usando un cable USB o una conexión inalámbrica. Asegúrese de que su dispositivo sea detectado por su computadora.
    14. -
    15. Copie o mueva el archivo APK de ZingSpeed Mobile desde su computadora al almacenamiento de su dispositivo. Puede elegir la carpeta que desee, pero asegúrese de recordar su ubicación.
    16. -
    17. Desconecte el dispositivo de su computadora y abra la aplicación de administrador de archivos en su dispositivo. Vaya a la carpeta donde guardó el archivo APK de ZingSpeed Mobile y toque en él para abrirlo.
    18. -
    -
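Si tiene adb configurado en su ordenador, también puede copiar el archivo al dispositivo sin arrastrarlo a mano: el comando real `adb push` lo deja en la carpeta de descargas. Este boceto en Python simplemente lo invoca; la ruta local del APK es un ejemplo.

```python
import subprocess

APK_LOCAL = "zingspeed-mobile.apk"          # archivo de ejemplo en el ordenador
DESTINO_DISPOSITIVO = "/sdcard/Download/"   # carpeta de descargas habitual en Android

def copiar_al_dispositivo(origen: str, destino: str) -> None:
    # 'adb push' copia el archivo al almacenamiento del dispositivo conectado
    subprocess.run(["adb", "push", origen, destino], check=True)

if __name__ == "__main__":
    copiar_al_dispositivo(APK_LOCAL, DESTINO_DISPOSITIVO)
    print("Archivo copiado; búsquelo con el administrador de archivos del teléfono.")
```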

Cómo instalar ZingSpeed Mobile APK en Android

    -

    Los pasos para habilitar fuentes desconocidas e instalar el archivo APK

    -

    Antes de que pueda instalar ZingSpeed Mobile APK en su dispositivo Android, es necesario habilitar fuentes desconocidas, que le permite instalar aplicaciones de fuentes distintas de Google Play Store. Estos son los pasos:

    -
      -
    1. Vaya a Configuración en su dispositivo y toque en Seguridad o Privacidad.
    2. -
3. Encuentre la opción que dice Fuentes desconocidas o Instalar aplicaciones desconocidas y actívela. Es posible que vea un mensaje de advertencia que indica que instalar aplicaciones de fuentes desconocidas puede dañar su dispositivo. Toque en OK o Permitir para continuar.
4. Abra su administrador de archivos, localice el archivo APK de ZingSpeed Mobile que descargó, tóquelo y pulse Instalar.
5. Espere a que se complete la instalación y toque en Abrir o Listo.
    6. -
    -

    Los pasos para lanzar el juego y disfrutarlo

    -

    Después de haber instalado ZingSpeed Mobile APK en su dispositivo Android, puede iniciar el juego y disfrutarlo. Estos son los pasos:

    -
      -
    1. Ir a su cajón de aplicaciones o pantalla de inicio y encontrar el icono de ZingSpeed Mobile. Toque en él para iniciar el juego.
    2. -
    3. Puede ver una pantalla de bienvenida con el logotipo del juego y algunas animaciones de carga. Espere unos segundos hasta que el juego se cargue por completo.
    4. -
    5. Puede ver una pantalla de bienvenida con algunas opciones, como Inicio de sesión, Registro, Invitado, etc. Elija la opción que más le convenga. Si tiene una cuenta existente, puede iniciar sesión con su nombre de usuario y contraseña. Si no tiene una cuenta, puede registrarse con su correo electrónico o número de teléfono. Si no quieres crear una cuenta, puedes jugar como invitado sin guardar tu progreso.
    6. -
    7. Usted puede ver una pantalla de tutorial que explica cómo jugar ZingSpeed juego móvil. Puede seguir las instrucciones o omitirlas si ya sabe cómo jugar.
    8. -
    9. Puede ver una pantalla de menú principal con diferentes opciones, como Modo de juego, Garaje, Tienda, Equipo, Chat, Configuración, etc. Puede explorar estas opciones o comenzar a jugar de inmediato tocando en Modo de juego.
    10. -
    11. Puedes ver una lista de modos de juego que puedes elegir, como Carrera de velocidad, Carrera de accesorios, Carrera clasificada, Historia, Carreras de pareja, Superpotencias de carreras de velocidad, Carrera de skateboarding, Carreras fronterizas, Guerras de cerdos , y más. También puede filtrar los modos de juego por dificultad, región, modo, etc. Toque en el modo de juego que desea jugar y espere a que comience el partido.
    12. -
    13. Puedes ver una pantalla del lobby donde puedes ver a tus oponentes, tu vehículo y algunos ajustes. También puede chatear con otros jugadores, invitar a amigos, cambiar de vehículo, etc. Toque en Listo o Comenzar cuando esté listo para la carrera.
    14. - -
    15. Puedes ver una pantalla de carreras donde puedes controlar tu vehículo y competir con otros jugadores. Puedes usar los botones de la pantalla para acelerar, frenar, desviar, usar objetos, etc. También puedes inclinar el dispositivo para dirigir tu vehículo. Puede ver su posición, velocidad, tiempo, vuelta, etc. en la pantalla. Trate de llegar a la línea de meta lo más rápido posible y ganar la carrera.
    16. -
    17. Puede ver una pantalla de resultados donde puede ver su rango, puntuación, recompensas, etc. También puede calificar el partido, chatear con otros jugadores, reproducir la carrera, etc. Toque en Siguiente o Volver para continuar jugando o volver al menú principal.
    18. -
    -

    Consejos y trucos para el juego móvil ZingSpeed

    -

    Algunos consejos y trucos útiles para mejorar sus habilidades de carreras y rendimiento

    -

El juego ZingSpeed Mobile no se trata solo de velocidad, sino también de habilidad y estrategia. Aquí hay algunos consejos y trucos útiles que pueden ayudarle a mejorar sus habilidades de carrera y su rendimiento:

    -

    -
• La práctica hace al maestro. Cuanto más juegues a ZingSpeed Mobile, más familiarizado estarás con las pistas, los vehículos, los objetos, etc. También podrás practicar en diferentes modos de juego y niveles de dificultad para desafiarte y aprender nuevas habilidades. -
    • Personaliza tu vehículo y personaje. El juego ZingSpeed Mobile te permite personalizar todo lo que quieras en tu vehículo y personaje. Puede cambiar las piezas, colores, pegatinas, calcomanías, etc. de su vehículo para mejorar su rendimiento y apariencia. También puedes cambiar los atuendos, peinados, accesorios, etc. de tu personaje para expresar tu personalidad y estilo.
    • - -
• Derrapa como un profesional. El derrape (drift) es una de las habilidades más importantes en ZingSpeed Mobile: te permite tomar las curvas con rapidez y suavidad sin perder velocidad ni control. Para derrapar, mantén pulsado el botón de freno mientras giras; cuanto más tiempo lo mantengas, mayor será el ángulo y el humo que generes. Derrapar también llena tu medidor de nitro, que puedes usar para aumentar tu velocidad tocando el botón de nitro.
    • -
    • Realizar acrobacias y trucos. ZingSpeed juego móvil tiene varias pistas que tienen rampas, bucles, saltos, etc. Estas pistas le permiten realizar acrobacias y trucos que pueden hacer su carrera más emocionante y divertido. Para realizar acrobacias y trucos, debe pulsar el botón de acrobacias mientras está en el aire o en una rampa. También puede inclinar el dispositivo para ajustar la dirección y el equilibrio. Realizar acrobacias y trucos también llena tu medidor de nitro y te da puntos extra.
    • -
    -

    Algunos problemas comunes y soluciones para ZingSpeed juego móvil

    -

    ZingSpeed Mobile es un gran juego de carreras que puede proporcionarle horas de entretenimiento y disfrute. Sin embargo, como cualquier otro juego, puede tener algunos problemas que pueden afectar su experiencia de juego. Aquí hay algunos problemas y soluciones comunes para el juego ZingSpeed Mobile:

| Problema | Solución |
| --- | --- |
| El juego se bloquea o se congela. | Esto puede ser causado por la baja memoria o el poco espacio de almacenamiento en el dispositivo. Puedes intentar liberar espacio eliminando aplicaciones o archivos no deseados. También puedes intentar reiniciar el dispositivo o reinstalar el juego. |
| El juego se retrasa o se ejecuta lentamente. | Esto puede ser causado por una mala conexión a Internet o un alto tráfico en el servidor. Puede intentar cambiar a una red diferente o jugar en un momento con menos jugadores en línea. También puede intentar reducir la calidad de los gráficos o cerrar otras aplicaciones que se ejecutan en segundo plano. |
| El juego no se conecta ni sincroniza. | Esto puede ser causado por un firewall o software antivirus que bloquea el acceso del juego a Internet o al servidor. Puedes intentar desactivar ese bloqueo o permitir el juego en la configuración de tu firewall o antivirus. También puedes intentar cerrar sesión y volver a iniciarla en tu cuenta o usar una cuenta diferente. |
| El juego no reconoce mi cuenta ni mis compras. | Esto puede ser causado por un error en el juego o en el servidor. Puedes intentar ponerte en contacto con el servicio de atención al cliente del juego o con Google Play Store y facilitarles los detalles de tu cuenta y el comprobante de compra. También puedes intentar restaurar tus compras desde el menú de configuración del juego. |
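Como referencia adicional, si el juego sigue bloqueándose después de probar lo anterior, puede borrar sus datos locales desde un ordenador con adb mediante el comando real `pm clear`. Este boceto en Python lo envuelve; el nombre del paquete es hipotético y debe comprobarse en su dispositivo (por ejemplo con `adb shell pm list packages`), y tenga en cuenta que se perderá el progreso que no esté sincronizado con su cuenta.

```python
import subprocess

# Nombre de paquete hipotético: verifíquelo con 'adb shell pm list packages'
PAQUETE = "com.ejemplo.zingspeed"

def borrar_datos_del_juego(paquete: str) -> None:
    # 'pm clear' elimina datos y caché de la aplicación en el dispositivo conectado
    subprocess.run(["adb", "shell", "pm", "clear", paquete], check=True)

if __name__ == "__main__":
    borrar_datos_del_juego(PAQUETE)
    print("Datos del juego borrados; vuelva a iniciar sesión al abrirlo.")
```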

    Conclusión

    -

    Un resumen de los puntos principales y una llamada a la acción

    -

ZingSpeed Mobile es un juego de carreras en 3D para móviles que ofrece varios modos de juego, vehículos, opciones de personalización y funciones multijugador. Es un juego divertido y emocionante que puede mejorar sus habilidades de carrera, su creatividad, su confianza, sus habilidades sociales y más. En este artículo, le hemos mostrado cómo descargar ZingSpeed Mobile APK para Android, cómo instalarlo y cómo jugarlo. También hemos compartido algunos consejos y trucos para el juego ZingSpeed Mobile que pueden ayudarle a mejorar su rendimiento y a resolver algunos problemas comunes.

    -

Si estás listo para experimentar la emoción y la adrenalina del juego ZingSpeed Mobile, ¡descárgalo ahora y únete a la carrera! También puedes compartir este artículo con tus amigos que aman los juegos de carreras e invitarlos a jugar contigo. ¡Diviértete y buena suerte!

    -

    Preguntas frecuentes

    -

    Cinco preguntas frecuentes únicas sobre el juego ZingSpeed Mobile y sus respuestas

    -

    Aquí hay algunas preguntas frecuentes sobre ZingSpeed Mobile juego y sus respuestas:

    -
      - -
    1. P: ¿Cómo puedo cambiar mi vehículo o personaje en el juego ZingSpeed Mobile?
      A: Puedes cambiar tu vehículo o personaje yendo al garaje o a la tienda. Puedes elegir entre diferentes tipos de vehículos, como coches, motocicletas, monopatines, etc. También puedes elegir entre diferentes trajes, peinados, accesorios, etc. para tu personaje.
    2. -
    3. Q: ¿Cómo puedo unirme o crear un equipo en el juego ZingSpeed Mobile?
      A: Puedes unirte o crear un equipo yendo al menú del equipo. Puede buscar un equipo existente o crear su propio equipo con un nombre, logotipo, descripción, etc. También puede invitar a amigos u otros jugadores a unirse a su equipo.
    4. -
    5. P: ¿Cómo puedo chatear con otros jugadores en el juego ZingSpeed Mobile?
      A: Puedes chatear con otros jugadores yendo al menú de chat. Puedes elegir entre diferentes canales de chat, como global, equipo, amigo, etc. También puedes enviar mensajes privados a otros jugadores tocando sus nombres.
    6. -
    7. Q: ¿Cómo puedo reportar un error o un problema en el juego ZingSpeed Mobile?
      A: Puede reportar un error o un problema yendo al menú de configuración y tocando en la retroalimentación. Puede rellenar un formulario con sus datos y describir su problema. También puede adjuntar capturas de pantalla o vídeos si es posible. También puede ponerse en contacto con el servicio de atención al cliente del juego a través de correo electrónico o plataformas de redes sociales.
    8. -

    -
    -
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar El Formulario 29 30.md b/spaces/Benson/text-generation/Examples/Descargar El Formulario 29 30.md deleted file mode 100644 index 5872eed6f8a5c177a71910d6889689d559ed2cb9..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar El Formulario 29 30.md +++ /dev/null @@ -1,51 +0,0 @@ -
    -

    Cómo descargar el formulario 29 30

    -

Antes de ver cómo descargar el formulario 29 30, expliquemos qué es este formulario y por qué lo necesitas.

    -

    descargar el formulario 29 30


    Downloadhttps://bltlly.com/2v6MG8



    -

El formulario 29 30 es un conjunto de documentos que se requieren para transferir la propiedad de un vehículo de motor en la India. Consta de dos formularios: el Formulario 29 y el Formulario 30. El Formulario 29 es el aviso de transferencia de propiedad de un vehículo de motor, mientras que el Formulario 30 es el informe de dicha transferencia. Debe llenar y enviar estos formularios a la Oficina Regional de Transporte (RTO) cuando venda o compre un vehículo usado.

    -

    Ahora que sabes lo que es la forma 29 30, veamos cómo puedes descargarla desde el sitio web oficial de Parivahan Sewa, que es el portal en línea para los servicios de transporte por carretera en la India. Estos son los pasos a seguir:

    -

    Cómo descargar el formulario 29 30 desde el sitio web de Parivahan Sewa

    -
      -
    1. Ir al sitio web de Parivahan Sewa en https://www.parivahan.gov.in/parivahan/.
    2. -
    3. Haga clic en la pestaña "Servicios en línea" en la barra de menú superior y seleccione "Servicios relacionados con vehículos".
    4. -
    5. Introduzca el número de su vehículo y haga clic en "Proceder".
    6. -
    7. En la página siguiente, haga clic en "Descargar formularios" en la sección "Misceláneos".
    8. -
    9. Verá una lista de todos los formularios disponibles para descargar. Desplácese hacia abajo para encontrar Form 29 y Form 30. Haga clic en el botón "Descargar" junto a cada formulario para guardarlos en su computadora.
    10. -
    11. Imprima los formularios y llénelos con los detalles requeridos. Deberá proporcionar información como el número del vehículo, el número del motor, el número del chasis, el nombre y la dirección del vendedor, el nombre y la dirección del comprador, la fecha de transferencia, etc.
    12. -
    -
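Si prefiere guardar los PDF desde un ordenador, un boceto mínimo en Python puede descargarlos y comprobar que son PDF válidos antes de imprimirlos. Las URL de este ejemplo son marcadores de posición: copie los enlaces reales que aparecen en la sección Descargar formularios del sitio de Parivahan Sewa.

```python
import pathlib
import requests

# URLs hipotéticas: tome los enlaces reales de la sección "Descargar formularios"
FORMULARIOS = {
    "form29.pdf": "https://example.org/descargas/form29.pdf",
    "form30.pdf": "https://example.org/descargas/form30.pdf",
}

def descargar_y_verificar(nombre: str, url: str) -> None:
    destino = pathlib.Path(nombre)
    resp = requests.get(url, timeout=60)
    resp.raise_for_status()
    destino.write_bytes(resp.content)
    # Un PDF válido empieza con la firma '%PDF'
    if destino.read_bytes()[:4] != b"%PDF":
        raise ValueError(f"{nombre} no parece un PDF válido; descárguelo de nuevo")
    print(f"{nombre} descargado y verificado")

if __name__ == "__main__":
    for nombre, url in FORMULARIOS.items():
        descargar_y_verificar(nombre, url)
```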

Cómo llenar y enviar el formulario 29 30

    - -
      -
    • Firme los formularios y adjunte los documentos necesarios, como prueba de identidad, prueba de domicilio, certificado de seguro, certificado de contaminación bajo control, etc.
    • -
    • Envíe los formularios y los documentos a la RTO más cercana dentro de los 14 días de la transferencia de la propiedad. También puede tener que pagar una tarifa nominal por procesar los formularios.
    • -
    • También puede solicitar un nuevo certificado de registro (RC) para su vehículo en línea a través del sitio web de Parivahan Sewa. Tendrá que subir copias escaneadas de sus documentos y pagar la tarifa en línea. Recibirá un recibo y un número de referencia que puede usar para rastrear el estado de su solicitud.
    • -
    • También puede verificar el estado de su transferencia de propiedad en línea ingresando su número de vehículo y número de solicitud en el sitio web de Parivahan Sewa.
    • -
    -

    Conclusión

    -

    En este artículo, hemos aprendido cómo descargar 29 30 form del sitio web de Parivahan Sewa y cómo llenarlo y enviarlo a la RTO. También hemos visto cómo solicitar un nuevo RC en línea y cómo comprobar el estado de nuestra transferencia de propiedad. Siguiendo estos pasos, puede transferir fácilmente la propiedad de su vehículo en la India sin problemas.

    -

    Espero que este artículo sea útil e informativo. Si tiene alguna pregunta o comentario, no dude en dejar un comentario a continuación. ¡Gracias por leer!

    -

    Preguntas frecuentes

    -
      -
    1. ¿Qué es la forma 29 30?
    2. -

El formulario 29 30 es un conjunto de documentos que se requieren para transferir la propiedad de un vehículo de motor en la India. Consta de dos formularios: el Formulario 29 y el Formulario 30. El Formulario 29 es el aviso de transferencia de propiedad de un vehículo de motor, mientras que el Formulario 30 es el informe de dicha transferencia.

      -

      -
3. ¿Dónde puedo descargar el formulario 29 30?
4. -

Puede descargar el formulario 29 30 desde el sitio web oficial de Parivahan Sewa (https://www.parivahan.gov.in/parivahan/), en la opción Descargar formularios dentro de los servicios relacionados con vehículos, tal como se describe en los pasos anteriores.
    5. ¿Qué documentos necesito presentar con el formulario 29 30?
    6. -

      Necesitas enviar los siguientes documentos con 29 30 form:

      -
        -
      • Prueba de identidad (como tarjeta Aadhaar, tarjeta PAN, licencia de conducir, etc.)
      • -
      • Prueba de domicilio (como factura de electricidad, factura de agua, contrato de alquiler, etc.)
      • -
      • Certificado de seguro
      • -
      • Contaminación bajo certificado de control
      • -
      • Certificado de no objeción (NOC) del propietario anterior (si procede)
      • -
      • Certificado de no objeción (NOC) del financiador (si procede)
      • -
      • Declaración jurada que indica que el vehículo está libre de cualquier gravamen legal (si es aplicable)
      • -
      -
    7. ¿Cuánto tiempo se necesita para transferir la propiedad de un vehículo?
    8. -

      El tiempo necesario para transferir la propiedad de un vehículo depende de varios factores, como la ubicación de la RTO, el tipo de vehículo, el modo de pago, etc. En general, el RTO tarda de 15 a 30 días en procesar su solicitud y emitir un nuevo RC. Sin embargo, puede comprobar el estado de su solicitud en línea a través del sitio web de Parivahan Sewa.

      -
    9. ¿Cuánto pago por transferir la propiedad de un vehículo?
    10. -

      La tarifa para transferir la propiedad de un vehículo varía dependiendo de la ubicación de RTO, el tipo de vehículo, la edad del vehículo, etc. Puede consultar la tarifa exacta para su vehículo en el sitio web de Parivahan Sewa o ponerse en contacto con su RTO más cercano para obtener más detalles. En general, la tarifa varía de Rs. 200 a Rs. 500 para vehículos de dos ruedas y de Rs. 300 a Rs. 1000 para vehículos de cuatro ruedas.

      -

    -
    -
    \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/certifi/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/certifi/__init__.py deleted file mode 100644 index a3546f12555c2c8d186489c5220e8d2e25f0b0a9..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/certifi/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .core import contents, where - -__all__ = ["contents", "where"] -__version__ = "2022.12.07" diff --git a/spaces/CVH-vn1210/make_hair/minigpt4/models/eva_vit.py b/spaces/CVH-vn1210/make_hair/minigpt4/models/eva_vit.py deleted file mode 100644 index 7fcc63a74049f1faf65c99943ef94f72383ca3f5..0000000000000000000000000000000000000000 --- a/spaces/CVH-vn1210/make_hair/minigpt4/models/eva_vit.py +++ /dev/null @@ -1,442 +0,0 @@ -# Based on EVA, BEIT, timm and DeiT code bases -# https://github.com/baaivision/EVA -# https://github.com/rwightman/pytorch-image-models/tree/master/timm -# https://github.com/microsoft/unilm/tree/master/beit -# https://github.com/facebookresearch/deit/ -# https://github.com/facebookresearch/dino -# --------------------------------------------------------' -import math -from functools import partial - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -from timm.models.layers import drop_path, to_2tuple, trunc_normal_ -from timm.models.registry import register_model - -from minigpt4.common.dist_utils import download_cached_file - -def _cfg(url='', **kwargs): - return { - 'url': url, - 'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': None, - 'crop_pct': .9, 'interpolation': 'bicubic', - 'mean': (0.5, 0.5, 0.5), 'std': (0.5, 0.5, 0.5), - **kwargs - } - - -class DropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). 
- """ - def __init__(self, drop_prob=None): - super(DropPath, self).__init__() - self.drop_prob = drop_prob - - def forward(self, x): - return drop_path(x, self.drop_prob, self.training) - - def extra_repr(self) -> str: - return 'p={}'.format(self.drop_prob) - - -class Mlp(nn.Module): - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - # x = self.drop(x) - # commit this for the orignal BERT implement - x = self.fc2(x) - x = self.drop(x) - return x - - -class Attention(nn.Module): - def __init__( - self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., - proj_drop=0., window_size=None, attn_head_dim=None): - super().__init__() - self.num_heads = num_heads - head_dim = dim // num_heads - if attn_head_dim is not None: - head_dim = attn_head_dim - all_head_dim = head_dim * self.num_heads - self.scale = qk_scale or head_dim ** -0.5 - - self.qkv = nn.Linear(dim, all_head_dim * 3, bias=False) - if qkv_bias: - self.q_bias = nn.Parameter(torch.zeros(all_head_dim)) - self.v_bias = nn.Parameter(torch.zeros(all_head_dim)) - else: - self.q_bias = None - self.v_bias = None - - if window_size: - self.window_size = window_size - self.num_relative_distance = (2 * window_size[0] - 1) * (2 * window_size[1] - 1) + 3 - self.relative_position_bias_table = nn.Parameter( - torch.zeros(self.num_relative_distance, num_heads)) # 2*Wh-1 * 2*Ww-1, nH - # cls to token & token 2 cls & cls to cls - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(window_size[0]) - coords_w = torch.arange(window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * window_size[1] - 1 - relative_position_index = \ - torch.zeros(size=(window_size[0] * window_size[1] + 1, ) * 2, dtype=relative_coords.dtype) - relative_position_index[1:, 1:] = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - relative_position_index[0, 0:] = self.num_relative_distance - 3 - relative_position_index[0:, 0] = self.num_relative_distance - 2 - relative_position_index[0, 0] = self.num_relative_distance - 1 - - self.register_buffer("relative_position_index", relative_position_index) - else: - self.window_size = None - self.relative_position_bias_table = None - self.relative_position_index = None - - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(all_head_dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - def forward(self, x, rel_pos_bias=None): - B, N, C = x.shape - qkv_bias = None - if self.q_bias is not None: - qkv_bias = torch.cat((self.q_bias, torch.zeros_like(self.v_bias, requires_grad=False), self.v_bias)) - # qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - qkv = F.linear(input=x, weight=self.qkv.weight, bias=qkv_bias) - qkv = 
qkv.reshape(B, N, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - q = q * self.scale - attn = (q @ k.transpose(-2, -1)) - - if self.relative_position_bias_table is not None: - relative_position_bias = \ - self.relative_position_bias_table[self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1] + 1, - self.window_size[0] * self.window_size[1] + 1, -1) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if rel_pos_bias is not None: - attn = attn + rel_pos_bias - - attn = attn.softmax(dim=-1) - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B, N, -1) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class Block(nn.Module): - - def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., init_values=None, act_layer=nn.GELU, norm_layer=nn.LayerNorm, - window_size=None, attn_head_dim=None): - super().__init__() - self.norm1 = norm_layer(dim) - self.attn = Attention( - dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale, - attn_drop=attn_drop, proj_drop=drop, window_size=window_size, attn_head_dim=attn_head_dim) - # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - if init_values is not None and init_values > 0: - self.gamma_1 = nn.Parameter(init_values * torch.ones((dim)),requires_grad=True) - self.gamma_2 = nn.Parameter(init_values * torch.ones((dim)),requires_grad=True) - else: - self.gamma_1, self.gamma_2 = None, None - - def forward(self, x, rel_pos_bias=None): - if self.gamma_1 is None: - x = x + self.drop_path(self.attn(self.norm1(x), rel_pos_bias=rel_pos_bias)) - x = x + self.drop_path(self.mlp(self.norm2(x))) - else: - x = x + self.drop_path(self.gamma_1 * self.attn(self.norm1(x), rel_pos_bias=rel_pos_bias)) - x = x + self.drop_path(self.gamma_2 * self.mlp(self.norm2(x))) - return x - - -class PatchEmbed(nn.Module): - """ Image to Patch Embedding - """ - def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768): - super().__init__() - img_size = to_2tuple(img_size) - patch_size = to_2tuple(patch_size) - num_patches = (img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0]) - self.patch_shape = (img_size[0] // patch_size[0], img_size[1] // patch_size[1]) - self.img_size = img_size - self.patch_size = patch_size - self.num_patches = num_patches - - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) - - def forward(self, x, **kwargs): - B, C, H, W = x.shape - # FIXME look at relaxing size constraints - assert H == self.img_size[0] and W == self.img_size[1], \ - f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})." 
- x = self.proj(x).flatten(2).transpose(1, 2) - return x - - -class RelativePositionBias(nn.Module): - - def __init__(self, window_size, num_heads): - super().__init__() - self.window_size = window_size - self.num_relative_distance = (2 * window_size[0] - 1) * (2 * window_size[1] - 1) + 3 - self.relative_position_bias_table = nn.Parameter( - torch.zeros(self.num_relative_distance, num_heads)) # 2*Wh-1 * 2*Ww-1, nH - # cls to token & token 2 cls & cls to cls - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(window_size[0]) - coords_w = torch.arange(window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * window_size[1] - 1 - relative_position_index = \ - torch.zeros(size=(window_size[0] * window_size[1] + 1,) * 2, dtype=relative_coords.dtype) - relative_position_index[1:, 1:] = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - relative_position_index[0, 0:] = self.num_relative_distance - 3 - relative_position_index[0:, 0] = self.num_relative_distance - 2 - relative_position_index[0, 0] = self.num_relative_distance - 1 - - self.register_buffer("relative_position_index", relative_position_index) - - # trunc_normal_(self.relative_position_bias_table, std=.02) - - def forward(self): - relative_position_bias = \ - self.relative_position_bias_table[self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1] + 1, - self.window_size[0] * self.window_size[1] + 1, -1) # Wh*Ww,Wh*Ww,nH - return relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - - -class VisionTransformer(nn.Module): - """ Vision Transformer with support for patch or hybrid CNN input stage - """ - def __init__(self, img_size=224, patch_size=16, in_chans=3, num_classes=1000, embed_dim=768, depth=12, - num_heads=12, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop_rate=0., attn_drop_rate=0., - drop_path_rate=0., norm_layer=nn.LayerNorm, init_values=None, - use_abs_pos_emb=True, use_rel_pos_bias=False, use_shared_rel_pos_bias=False, - use_mean_pooling=True, init_scale=0.001, use_checkpoint=False): - super().__init__() - self.image_size = img_size - self.num_classes = num_classes - self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models - - self.patch_embed = PatchEmbed( - img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim) - num_patches = self.patch_embed.num_patches - - self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) - if use_abs_pos_emb: - self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim)) - else: - self.pos_embed = None - self.pos_drop = nn.Dropout(p=drop_rate) - - if use_shared_rel_pos_bias: - self.rel_pos_bias = RelativePositionBias(window_size=self.patch_embed.patch_shape, num_heads=num_heads) - else: - self.rel_pos_bias = None - self.use_checkpoint = use_checkpoint - - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)] # stochastic depth decay rule - self.use_rel_pos_bias = use_rel_pos_bias - self.blocks = nn.ModuleList([ - Block( - dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, 
qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer, - init_values=init_values, window_size=self.patch_embed.patch_shape if use_rel_pos_bias else None) - for i in range(depth)]) -# self.norm = nn.Identity() if use_mean_pooling else norm_layer(embed_dim) -# self.fc_norm = norm_layer(embed_dim) if use_mean_pooling else None -# self.head = nn.Linear(embed_dim, num_classes) if num_classes > 0 else nn.Identity() - - if self.pos_embed is not None: - trunc_normal_(self.pos_embed, std=.02) - trunc_normal_(self.cls_token, std=.02) - # trunc_normal_(self.mask_token, std=.02) -# if isinstance(self.head, nn.Linear): -# trunc_normal_(self.head.weight, std=.02) - self.apply(self._init_weights) - self.fix_init_weight() -# if isinstance(self.head, nn.Linear): -# self.head.weight.data.mul_(init_scale) -# self.head.bias.data.mul_(init_scale) - - def fix_init_weight(self): - def rescale(param, layer_id): - param.div_(math.sqrt(2.0 * layer_id)) - - for layer_id, layer in enumerate(self.blocks): - rescale(layer.attn.proj.weight.data, layer_id + 1) - rescale(layer.mlp.fc2.weight.data, layer_id + 1) - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - def get_classifier(self): - return self.head - - def reset_classifier(self, num_classes, global_pool=''): - self.num_classes = num_classes - self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity() - - def forward_features(self, x): - x = self.patch_embed(x) - batch_size, seq_len, _ = x.size() - - cls_tokens = self.cls_token.expand(batch_size, -1, -1) # stole cls_tokens impl from Phil Wang, thanks - x = torch.cat((cls_tokens, x), dim=1) - if self.pos_embed is not None: - x = x + self.pos_embed - x = self.pos_drop(x) - - rel_pos_bias = self.rel_pos_bias() if self.rel_pos_bias is not None else None - for blk in self.blocks: - if self.use_checkpoint: - x = checkpoint.checkpoint(blk, x, rel_pos_bias) - else: - x = blk(x, rel_pos_bias) - return x -# x = self.norm(x) - -# if self.fc_norm is not None: -# t = x[:, 1:, :] -# return self.fc_norm(t.mean(1)) -# else: -# return x[:, 0] - - def forward(self, x): - x = self.forward_features(x) -# x = self.head(x) - return x - - def get_intermediate_layers(self, x): - x = self.patch_embed(x) - batch_size, seq_len, _ = x.size() - - cls_tokens = self.cls_token.expand(batch_size, -1, -1) # stole cls_tokens impl from Phil Wang, thanks - x = torch.cat((cls_tokens, x), dim=1) - if self.pos_embed is not None: - x = x + self.pos_embed - x = self.pos_drop(x) - - features = [] - rel_pos_bias = self.rel_pos_bias() if self.rel_pos_bias is not None else None - for blk in self.blocks: - x = blk(x, rel_pos_bias) - features.append(x) - - return features - - -def interpolate_pos_embed(model, checkpoint_model): - if 'pos_embed' in checkpoint_model: - pos_embed_checkpoint = checkpoint_model['pos_embed'].float() - embedding_size = pos_embed_checkpoint.shape[-1] - num_patches = model.patch_embed.num_patches - num_extra_tokens = model.pos_embed.shape[-2] - num_patches - # height (== width) for the checkpoint position embedding - orig_size = int((pos_embed_checkpoint.shape[-2] - num_extra_tokens) ** 0.5) - # height (== width) for the new position embedding - new_size = int(num_patches ** 0.5) - # class_token and 
dist_token are kept unchanged - if orig_size != new_size: - print("Position interpolate from %dx%d to %dx%d" % (orig_size, orig_size, new_size, new_size)) - extra_tokens = pos_embed_checkpoint[:, :num_extra_tokens] - # only the position tokens are interpolated - pos_tokens = pos_embed_checkpoint[:, num_extra_tokens:] - pos_tokens = pos_tokens.reshape(-1, orig_size, orig_size, embedding_size).permute(0, 3, 1, 2) - pos_tokens = torch.nn.functional.interpolate( - pos_tokens, size=(new_size, new_size), mode='bicubic', align_corners=False) - pos_tokens = pos_tokens.permute(0, 2, 3, 1).flatten(1, 2) - new_pos_embed = torch.cat((extra_tokens, pos_tokens), dim=1) - checkpoint_model['pos_embed'] = new_pos_embed - - -def convert_weights_to_fp16(model: nn.Module): - """Convert applicable model parameters to fp16""" - - def _convert_weights_to_fp16(l): - if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Linear)): - l.weight.data = l.weight.data.half() - if l.bias is not None: - l.bias.data = l.bias.data.half() - -# if isinstance(l, (nn.MultiheadAttention, Attention)): -# for attr in [*[f"{s}_proj_weight" for s in ["in", "q", "k", "v"]], "in_proj_bias", "bias_k", "bias_v"]: -# tensor = getattr(l, attr) -# if tensor is not None: -# tensor.data = tensor.data.half() - - model.apply(_convert_weights_to_fp16) - - -def create_eva_vit_g(img_size=224,drop_path_rate=0.4,use_checkpoint=False,precision="fp16"): - model = VisionTransformer( - img_size=img_size, - patch_size=14, - use_mean_pooling=False, - embed_dim=1408, - depth=39, - num_heads=1408//88, - mlp_ratio=4.3637, - qkv_bias=True, - drop_path_rate=drop_path_rate, - norm_layer=partial(nn.LayerNorm, eps=1e-6), - use_checkpoint=use_checkpoint, - ) - url = "https://storage.googleapis.com/sfr-vision-language-research/LAVIS/models/BLIP2/eva_vit_g.pth" - cached_file = download_cached_file( - url, check_hash=False, progress=True - ) - state_dict = torch.load(cached_file, map_location="cpu") - interpolate_pos_embed(model,state_dict) - - incompatible_keys = model.load_state_dict(state_dict, strict=False) -# print(incompatible_keys) - - if precision == "fp16": -# model.to("cuda") - convert_weights_to_fp16(model) - return model \ No newline at end of file diff --git a/spaces/CVPR/LIVE/pybind11/include/pybind11/attr.h b/spaces/CVPR/LIVE/pybind11/include/pybind11/attr.h deleted file mode 100644 index 54065fc9e10a075e1a2de5d6095e88d4b0a4aca2..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/include/pybind11/attr.h +++ /dev/null @@ -1,528 +0,0 @@ -/* - pybind11/attr.h: Infrastructure for processing custom - type and function attributes - - Copyright (c) 2016 Wenzel Jakob - - All rights reserved. Use of this source code is governed by a - BSD-style license that can be found in the LICENSE file. 
-*/ - -#pragma once - -#include "cast.h" - -PYBIND11_NAMESPACE_BEGIN(PYBIND11_NAMESPACE) - -/// \addtogroup annotations -/// @{ - -/// Annotation for methods -struct is_method { handle class_; is_method(const handle &c) : class_(c) { } }; - -/// Annotation for operators -struct is_operator { }; - -/// Annotation for classes that cannot be subclassed -struct is_final { }; - -/// Annotation for parent scope -struct scope { handle value; scope(const handle &s) : value(s) { } }; - -/// Annotation for documentation -struct doc { const char *value; doc(const char *value) : value(value) { } }; - -/// Annotation for function names -struct name { const char *value; name(const char *value) : value(value) { } }; - -/// Annotation indicating that a function is an overload associated with a given "sibling" -struct sibling { handle value; sibling(const handle &value) : value(value.ptr()) { } }; - -/// Annotation indicating that a class derives from another given type -template struct base { - PYBIND11_DEPRECATED("base() was deprecated in favor of specifying 'T' as a template argument to class_") - base() { } -}; - -/// Keep patient alive while nurse lives -template struct keep_alive { }; - -/// Annotation indicating that a class is involved in a multiple inheritance relationship -struct multiple_inheritance { }; - -/// Annotation which enables dynamic attributes, i.e. adds `__dict__` to a class -struct dynamic_attr { }; - -/// Annotation which enables the buffer protocol for a type -struct buffer_protocol { }; - -/// Annotation which requests that a special metaclass is created for a type -struct metaclass { - handle value; - - PYBIND11_DEPRECATED("py::metaclass() is no longer required. It's turned on by default now.") - metaclass() {} - - /// Override pybind11's default metaclass - explicit metaclass(handle value) : value(value) { } -}; - -/// Annotation that marks a class as local to the module: -struct module_local { const bool value; constexpr module_local(bool v = true) : value(v) { } }; - -/// Annotation to mark enums as an arithmetic type -struct arithmetic { }; - -/** \rst - A call policy which places one or more guard variables (``Ts...``) around the function call. - - For example, this definition: - - .. code-block:: cpp - - m.def("foo", foo, py::call_guard()); - - is equivalent to the following pseudocode: - - .. code-block:: cpp - - m.def("foo", [](args...) 
{ - T scope_guard; - return foo(args...); // forwarded arguments - }); - \endrst */ -template struct call_guard; - -template <> struct call_guard<> { using type = detail::void_type; }; - -template -struct call_guard { - static_assert(std::is_default_constructible::value, - "The guard type must be default constructible"); - - using type = T; -}; - -template -struct call_guard { - struct type { - T guard{}; // Compose multiple guard types with left-to-right default-constructor order - typename call_guard::type next{}; - }; -}; - -/// @} annotations - -PYBIND11_NAMESPACE_BEGIN(detail) -/* Forward declarations */ -enum op_id : int; -enum op_type : int; -struct undefined_t; -template struct op_; -inline void keep_alive_impl(size_t Nurse, size_t Patient, function_call &call, handle ret); - -/// Internal data structure which holds metadata about a keyword argument -struct argument_record { - const char *name; ///< Argument name - const char *descr; ///< Human-readable version of the argument value - handle value; ///< Associated Python object - bool convert : 1; ///< True if the argument is allowed to convert when loading - bool none : 1; ///< True if None is allowed when loading - - argument_record(const char *name, const char *descr, handle value, bool convert, bool none) - : name(name), descr(descr), value(value), convert(convert), none(none) { } -}; - -/// Internal data structure which holds metadata about a bound function (signature, overloads, etc.) -struct function_record { - function_record() - : is_constructor(false), is_new_style_constructor(false), is_stateless(false), - is_operator(false), is_method(false), - has_args(false), has_kwargs(false), has_kwonly_args(false) { } - - /// Function name - char *name = nullptr; /* why no C++ strings? They generate heavier code.. */ - - // User-specified documentation string - char *doc = nullptr; - - /// Human-readable version of the function signature - char *signature = nullptr; - - /// List of registered keyword arguments - std::vector args; - - /// Pointer to lambda function which converts arguments and performs the actual call - handle (*impl) (function_call &) = nullptr; - - /// Storage for the wrapped function pointer and captured data, if any - void *data[3] = { }; - - /// Pointer to custom destructor for 'data' (if needed) - void (*free_data) (function_record *ptr) = nullptr; - - /// Return value policy associated with this function - return_value_policy policy = return_value_policy::automatic; - - /// True if name == '__init__' - bool is_constructor : 1; - - /// True if this is a new-style `__init__` defined in `detail/init.h` - bool is_new_style_constructor : 1; - - /// True if this is a stateless function pointer - bool is_stateless : 1; - - /// True if this is an operator (__add__), etc. 
- bool is_operator : 1; - - /// True if this is a method - bool is_method : 1; - - /// True if the function has a '*args' argument - bool has_args : 1; - - /// True if the function has a '**kwargs' argument - bool has_kwargs : 1; - - /// True once a 'py::kwonly' is encountered (any following args are keyword-only) - bool has_kwonly_args : 1; - - /// Number of arguments (including py::args and/or py::kwargs, if present) - std::uint16_t nargs; - - /// Number of trailing arguments (counted in `nargs`) that are keyword-only - std::uint16_t nargs_kwonly = 0; - - /// Python method object - PyMethodDef *def = nullptr; - - /// Python handle to the parent scope (a class or a module) - handle scope; - - /// Python handle to the sibling function representing an overload chain - handle sibling; - - /// Pointer to next overload - function_record *next = nullptr; -}; - -/// Special data structure which (temporarily) holds metadata about a bound class -struct type_record { - PYBIND11_NOINLINE type_record() - : multiple_inheritance(false), dynamic_attr(false), buffer_protocol(false), - default_holder(true), module_local(false), is_final(false) { } - - /// Handle to the parent scope - handle scope; - - /// Name of the class - const char *name = nullptr; - - // Pointer to RTTI type_info data structure - const std::type_info *type = nullptr; - - /// How large is the underlying C++ type? - size_t type_size = 0; - - /// What is the alignment of the underlying C++ type? - size_t type_align = 0; - - /// How large is the type's holder? - size_t holder_size = 0; - - /// The global operator new can be overridden with a class-specific variant - void *(*operator_new)(size_t) = nullptr; - - /// Function pointer to class_<..>::init_instance - void (*init_instance)(instance *, const void *) = nullptr; - - /// Function pointer to class_<..>::dealloc - void (*dealloc)(detail::value_and_holder &) = nullptr; - - /// List of base classes of the newly created type - list bases; - - /// Optional docstring - const char *doc = nullptr; - - /// Custom metaclass (optional) - handle metaclass; - - /// Multiple inheritance marker - bool multiple_inheritance : 1; - - /// Does the class manage a __dict__? - bool dynamic_attr : 1; - - /// Does the class implement the buffer protocol? - bool buffer_protocol : 1; - - /// Is the default (unique_ptr) holder type used? - bool default_holder : 1; - - /// Is the class definition local to the module shared object? - bool module_local : 1; - - /// Is the class inheritable from python classes? - bool is_final : 1; - - PYBIND11_NOINLINE void add_base(const std::type_info &base, void *(*caster)(void *)) { - auto base_info = detail::get_type_info(base, false); - if (!base_info) { - std::string tname(base.name()); - detail::clean_type_id(tname); - pybind11_fail("generic_type: type \"" + std::string(name) + - "\" referenced unknown base type \"" + tname + "\""); - } - - if (default_holder != base_info->default_holder) { - std::string tname(base.name()); - detail::clean_type_id(tname); - pybind11_fail("generic_type: type \"" + std::string(name) + "\" " + - (default_holder ? "does not have" : "has") + - " a non-default holder type while its base \"" + tname + "\" " + - (base_info->default_holder ? 
"does not" : "does")); - } - - bases.append((PyObject *) base_info->type); - - if (base_info->type->tp_dictoffset != 0) - dynamic_attr = true; - - if (caster) - base_info->implicit_casts.emplace_back(type, caster); - } -}; - -inline function_call::function_call(const function_record &f, handle p) : - func(f), parent(p) { - args.reserve(f.nargs); - args_convert.reserve(f.nargs); -} - -/// Tag for a new-style `__init__` defined in `detail/init.h` -struct is_new_style_constructor { }; - -/** - * Partial template specializations to process custom attributes provided to - * cpp_function_ and class_. These are either used to initialize the respective - * fields in the type_record and function_record data structures or executed at - * runtime to deal with custom call policies (e.g. keep_alive). - */ -template struct process_attribute; - -template struct process_attribute_default { - /// Default implementation: do nothing - static void init(const T &, function_record *) { } - static void init(const T &, type_record *) { } - static void precall(function_call &) { } - static void postcall(function_call &, handle) { } -}; - -/// Process an attribute specifying the function's name -template <> struct process_attribute : process_attribute_default { - static void init(const name &n, function_record *r) { r->name = const_cast(n.value); } -}; - -/// Process an attribute specifying the function's docstring -template <> struct process_attribute : process_attribute_default { - static void init(const doc &n, function_record *r) { r->doc = const_cast(n.value); } -}; - -/// Process an attribute specifying the function's docstring (provided as a C-style string) -template <> struct process_attribute : process_attribute_default { - static void init(const char *d, function_record *r) { r->doc = const_cast(d); } - static void init(const char *d, type_record *r) { r->doc = const_cast(d); } -}; -template <> struct process_attribute : process_attribute { }; - -/// Process an attribute indicating the function's return value policy -template <> struct process_attribute : process_attribute_default { - static void init(const return_value_policy &p, function_record *r) { r->policy = p; } -}; - -/// Process an attribute which indicates that this is an overloaded function associated with a given sibling -template <> struct process_attribute : process_attribute_default { - static void init(const sibling &s, function_record *r) { r->sibling = s.value; } -}; - -/// Process an attribute which indicates that this function is a method -template <> struct process_attribute : process_attribute_default { - static void init(const is_method &s, function_record *r) { r->is_method = true; r->scope = s.class_; } -}; - -/// Process an attribute which indicates the parent scope of a method -template <> struct process_attribute : process_attribute_default { - static void init(const scope &s, function_record *r) { r->scope = s.value; } -}; - -/// Process an attribute which indicates that this function is an operator -template <> struct process_attribute : process_attribute_default { - static void init(const is_operator &, function_record *r) { r->is_operator = true; } -}; - -template <> struct process_attribute : process_attribute_default { - static void init(const is_new_style_constructor &, function_record *r) { r->is_new_style_constructor = true; } -}; - -inline void process_kwonly_arg(const arg &a, function_record *r) { - if (!a.name || strlen(a.name) == 0) - pybind11_fail("arg(): cannot specify an unnamed argument after an kwonly() 
annotation"); - ++r->nargs_kwonly; -} - -/// Process a keyword argument attribute (*without* a default value) -template <> struct process_attribute : process_attribute_default { - static void init(const arg &a, function_record *r) { - if (r->is_method && r->args.empty()) - r->args.emplace_back("self", nullptr, handle(), true /*convert*/, false /*none not allowed*/); - r->args.emplace_back(a.name, nullptr, handle(), !a.flag_noconvert, a.flag_none); - - if (r->has_kwonly_args) process_kwonly_arg(a, r); - } -}; - -/// Process a keyword argument attribute (*with* a default value) -template <> struct process_attribute : process_attribute_default { - static void init(const arg_v &a, function_record *r) { - if (r->is_method && r->args.empty()) - r->args.emplace_back("self", nullptr /*descr*/, handle() /*parent*/, true /*convert*/, false /*none not allowed*/); - - if (!a.value) { -#if !defined(NDEBUG) - std::string descr("'"); - if (a.name) descr += std::string(a.name) + ": "; - descr += a.type + "'"; - if (r->is_method) { - if (r->name) - descr += " in method '" + (std::string) str(r->scope) + "." + (std::string) r->name + "'"; - else - descr += " in method of '" + (std::string) str(r->scope) + "'"; - } else if (r->name) { - descr += " in function '" + (std::string) r->name + "'"; - } - pybind11_fail("arg(): could not convert default argument " - + descr + " into a Python object (type not registered yet?)"); -#else - pybind11_fail("arg(): could not convert default argument " - "into a Python object (type not registered yet?). " - "Compile in debug mode for more information."); -#endif - } - r->args.emplace_back(a.name, a.descr, a.value.inc_ref(), !a.flag_noconvert, a.flag_none); - - if (r->has_kwonly_args) process_kwonly_arg(a, r); - } -}; - -/// Process a keyword-only-arguments-follow pseudo argument -template <> struct process_attribute : process_attribute_default { - static void init(const kwonly &, function_record *r) { - r->has_kwonly_args = true; - } -}; - -/// Process a parent class attribute. 
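From the Python side, the effect of the `kwonly` marker processed above is the same as a bare `*` in a plain Python signature: every argument declared after it must be passed by keyword. A minimal pure-Python analogy (not code generated by pybind11; the `scale` function and its binding shown in the comment are illustrative):

```python
# Pure-Python analogy of a function bound roughly as
#   m.def("scale", &scale, py::arg("x"), py::kwonly(), py::arg("factor") = 2.0)
# Everything after the bare * is keyword-only, like arguments after py::kwonly().
def scale(x, *, factor=2.0):
    return x * factor

scale(3.0, factor=1.5)   # OK: keyword-only argument passed by name
try:
    scale(3.0, 1.5)      # TypeError: factor may not be passed positionally
except TypeError as e:
    print(e)
```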
Single inheritance only (class_ itself already guarantees that) -template -struct process_attribute::value>> : process_attribute_default { - static void init(const handle &h, type_record *r) { r->bases.append(h); } -}; - -/// Process a parent class attribute (deprecated, does not support multiple inheritance) -template -struct process_attribute> : process_attribute_default> { - static void init(const base &, type_record *r) { r->add_base(typeid(T), nullptr); } -}; - -/// Process a multiple inheritance attribute -template <> -struct process_attribute : process_attribute_default { - static void init(const multiple_inheritance &, type_record *r) { r->multiple_inheritance = true; } -}; - -template <> -struct process_attribute : process_attribute_default { - static void init(const dynamic_attr &, type_record *r) { r->dynamic_attr = true; } -}; - -template <> -struct process_attribute : process_attribute_default { - static void init(const is_final &, type_record *r) { r->is_final = true; } -}; - -template <> -struct process_attribute : process_attribute_default { - static void init(const buffer_protocol &, type_record *r) { r->buffer_protocol = true; } -}; - -template <> -struct process_attribute : process_attribute_default { - static void init(const metaclass &m, type_record *r) { r->metaclass = m.value; } -}; - -template <> -struct process_attribute : process_attribute_default { - static void init(const module_local &l, type_record *r) { r->module_local = l.value; } -}; - -/// Process an 'arithmetic' attribute for enums (does nothing here) -template <> -struct process_attribute : process_attribute_default {}; - -template -struct process_attribute> : process_attribute_default> { }; - -/** - * Process a keep_alive call policy -- invokes keep_alive_impl during the - * pre-call handler if both Nurse, Patient != 0 and use the post-call handler - * otherwise - */ -template struct process_attribute> : public process_attribute_default> { - template = 0> - static void precall(function_call &call) { keep_alive_impl(Nurse, Patient, call, handle()); } - template = 0> - static void postcall(function_call &, handle) { } - template = 0> - static void precall(function_call &) { } - template = 0> - static void postcall(function_call &call, handle ret) { keep_alive_impl(Nurse, Patient, call, ret); } -}; - -/// Recursively iterate over variadic template arguments -template struct process_attributes { - static void init(const Args&... args, function_record *r) { - int unused[] = { 0, (process_attribute::type>::init(args, r), 0) ... }; - ignore_unused(unused); - } - static void init(const Args&... args, type_record *r) { - int unused[] = { 0, (process_attribute::type>::init(args, r), 0) ... }; - ignore_unused(unused); - } - static void precall(function_call &call) { - int unused[] = { 0, (process_attribute::type>::precall(call), 0) ... }; - ignore_unused(unused); - } - static void postcall(function_call &call, handle fn_ret) { - int unused[] = { 0, (process_attribute::type>::postcall(call, fn_ret), 0) ... 
}; - ignore_unused(unused); - } -}; - -template -using is_call_guard = is_instantiation; - -/// Extract the ``type`` from the first `call_guard` in `Extras...` (or `void_type` if none found) -template -using extract_guard_t = typename exactly_one_t, Extra...>::type; - -/// Check the number of named arguments at compile time -template ::value...), - size_t self = constexpr_sum(std::is_same::value...)> -constexpr bool expected_num_args(size_t nargs, bool has_args, bool has_kwargs) { - return named == 0 || (self + named + has_args + has_kwargs) == nargs; -} - -PYBIND11_NAMESPACE_END(detail) -PYBIND11_NAMESPACE_END(PYBIND11_NAMESPACE) diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/allocator/no_throw_allocator.h b/spaces/CVPR/LIVE/thrust/thrust/detail/allocator/no_throw_allocator.h deleted file mode 100644 index ba8c3d852988e9add8659236293a424682701489..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/allocator/no_throw_allocator.h +++ /dev/null @@ -1,71 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -namespace thrust -{ -namespace detail -{ - -template - struct no_throw_allocator : BaseAllocator -{ - private: - typedef BaseAllocator super_t; - - public: - inline __host__ __device__ - no_throw_allocator(const BaseAllocator &other = BaseAllocator()) - : super_t(other) - {} - - template - struct rebind - { - typedef no_throw_allocator::other> other; - }; // end rebind - - __host__ __device__ - void deallocate(typename super_t::pointer p, typename super_t::size_type n) - { -#ifndef __CUDA_ARCH__ - try - { - super_t::deallocate(p, n); - } // end try - catch(...) 
- { - // catch anything - } // end catch -#else - super_t::deallocate(p, n); -#endif - } // end deallocate() - - inline __host__ __device__ - bool operator==(no_throw_allocator const &other) { return super_t::operator==(other); } - - inline __host__ __device__ - bool operator!=(no_throw_allocator const &other) { return super_t::operator!=(other); } -}; // end no_throw_allocator - -} // end detail -} // end thrust - - diff --git a/spaces/CVPR/WALT/mmdet/core/mask/__init__.py b/spaces/CVPR/WALT/mmdet/core/mask/__init__.py deleted file mode 100644 index ab1e88bc686d5c2fe72b3114cb2b3e372e73a0f8..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/core/mask/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -from .mask_target import mask_target -from .structures import BaseInstanceMasks, BitmapMasks, PolygonMasks -from .utils import encode_mask_results, split_combined_polys - -__all__ = [ - 'split_combined_polys', 'mask_target', 'BaseInstanceMasks', 'BitmapMasks', - 'PolygonMasks', 'encode_mask_results' -] diff --git a/spaces/CVPR/WALT/mmdet/core/mask/mask_target.py b/spaces/CVPR/WALT/mmdet/core/mask/mask_target.py deleted file mode 100644 index 15d26a88bbf3710bd92813335918407db8c4e053..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/core/mask/mask_target.py +++ /dev/null @@ -1,122 +0,0 @@ -import numpy as np -import torch -from torch.nn.modules.utils import _pair - - -def mask_target(pos_proposals_list, pos_assigned_gt_inds_list, gt_masks_list, - cfg): - """Compute mask target for positive proposals in multiple images. - - Args: - pos_proposals_list (list[Tensor]): Positive proposals in multiple - images. - pos_assigned_gt_inds_list (list[Tensor]): Assigned GT indices for each - positive proposals. - gt_masks_list (list[:obj:`BaseInstanceMasks`]): Ground truth masks of - each image. - cfg (dict): Config dict that specifies the mask size. - - Returns: - list[Tensor]: Mask target of each image. 
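The `no_throw_allocator` above simply swallows anything the base allocator's `deallocate` throws on the host path, so cleanup never propagates an exception. A rough Python analogy (not Thrust API; `NoThrowDeallocator` and `flaky_free` are made up for illustration):

```python
import contextlib

class NoThrowDeallocator:
    """Wrap a base deallocation callable so that cleanup never raises."""
    def __init__(self, base_deallocate):
        self.base_deallocate = base_deallocate

    def deallocate(self, ptr, n):
        # Swallow any exception raised by the underlying deallocation,
        # mirroring the try/catch(...) around super_t::deallocate above.
        with contextlib.suppress(Exception):
            self.base_deallocate(ptr, n)

def flaky_free(ptr, n):
    raise RuntimeError("device still busy")   # stand-in for a failing base allocator

NoThrowDeallocator(flaky_free).deallocate(0xdead, 16)   # no exception escapes
```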
- - Example: - >>> import mmcv - >>> import mmdet - >>> from mmdet.core.mask import BitmapMasks - >>> from mmdet.core.mask.mask_target import * - >>> H, W = 17, 18 - >>> cfg = mmcv.Config({'mask_size': (13, 14)}) - >>> rng = np.random.RandomState(0) - >>> # Positive proposals (tl_x, tl_y, br_x, br_y) for each image - >>> pos_proposals_list = [ - >>> torch.Tensor([ - >>> [ 7.2425, 5.5929, 13.9414, 14.9541], - >>> [ 7.3241, 3.6170, 16.3850, 15.3102], - >>> ]), - >>> torch.Tensor([ - >>> [ 4.8448, 6.4010, 7.0314, 9.7681], - >>> [ 5.9790, 2.6989, 7.4416, 4.8580], - >>> [ 0.0000, 0.0000, 0.1398, 9.8232], - >>> ]), - >>> ] - >>> # Corresponding class index for each proposal for each image - >>> pos_assigned_gt_inds_list = [ - >>> torch.LongTensor([7, 0]), - >>> torch.LongTensor([5, 4, 1]), - >>> ] - >>> # Ground truth mask for each true object for each image - >>> gt_masks_list = [ - >>> BitmapMasks(rng.rand(8, H, W), height=H, width=W), - >>> BitmapMasks(rng.rand(6, H, W), height=H, width=W), - >>> ] - >>> mask_targets = mask_target( - >>> pos_proposals_list, pos_assigned_gt_inds_list, - >>> gt_masks_list, cfg) - >>> assert mask_targets.shape == (5,) + cfg['mask_size'] - """ - cfg_list = [cfg for _ in range(len(pos_proposals_list))] - mask_targets = map(mask_target_single, pos_proposals_list, - pos_assigned_gt_inds_list, gt_masks_list, cfg_list) - mask_targets = list(mask_targets) - if len(mask_targets) > 0: - mask_targets = torch.cat(mask_targets) - return mask_targets - - -def mask_target_single(pos_proposals, pos_assigned_gt_inds, gt_masks, cfg): - """Compute mask target for each positive proposal in the image. - - Args: - pos_proposals (Tensor): Positive proposals. - pos_assigned_gt_inds (Tensor): Assigned GT inds of positive proposals. - gt_masks (:obj:`BaseInstanceMasks`): GT masks in the format of Bitmap - or Polygon. - cfg (dict): Config dict that indicate the mask size. - - Returns: - Tensor: Mask target of each positive proposals in the image. 
- - Example: - >>> import mmcv - >>> import mmdet - >>> from mmdet.core.mask import BitmapMasks - >>> from mmdet.core.mask.mask_target import * # NOQA - >>> H, W = 32, 32 - >>> cfg = mmcv.Config({'mask_size': (7, 11)}) - >>> rng = np.random.RandomState(0) - >>> # Masks for each ground truth box (relative to the image) - >>> gt_masks_data = rng.rand(3, H, W) - >>> gt_masks = BitmapMasks(gt_masks_data, height=H, width=W) - >>> # Predicted positive boxes in one image - >>> pos_proposals = torch.FloatTensor([ - >>> [ 16.2, 5.5, 19.9, 20.9], - >>> [ 17.3, 13.6, 19.3, 19.3], - >>> [ 14.8, 16.4, 17.0, 23.7], - >>> [ 0.0, 0.0, 16.0, 16.0], - >>> [ 4.0, 0.0, 20.0, 16.0], - >>> ]) - >>> # For each predicted proposal, its assignment to a gt mask - >>> pos_assigned_gt_inds = torch.LongTensor([0, 1, 2, 1, 1]) - >>> mask_targets = mask_target_single( - >>> pos_proposals, pos_assigned_gt_inds, gt_masks, cfg) - >>> assert mask_targets.shape == (5,) + cfg['mask_size'] - """ - device = pos_proposals.device - mask_size = _pair(cfg.mask_size) - num_pos = pos_proposals.size(0) - if num_pos > 0: - proposals_np = pos_proposals.cpu().numpy() - maxh, maxw = gt_masks.height, gt_masks.width - proposals_np[:, [0, 2]] = np.clip(proposals_np[:, [0, 2]], 0, maxw) - proposals_np[:, [1, 3]] = np.clip(proposals_np[:, [1, 3]], 0, maxh) - pos_assigned_gt_inds = pos_assigned_gt_inds.cpu().numpy() - - mask_targets = gt_masks.crop_and_resize( - proposals_np, mask_size, device=device, - inds=pos_assigned_gt_inds).to_ndarray() - - mask_targets = torch.from_numpy(mask_targets).float().to(device) - else: - mask_targets = pos_proposals.new_zeros((0, ) + mask_size) - - return mask_targets diff --git a/spaces/CVPR/regionclip-demo/detectron2/data/datasets/cityscapes_panoptic.py b/spaces/CVPR/regionclip-demo/detectron2/data/datasets/cityscapes_panoptic.py deleted file mode 100644 index 48c136f1623261b079591065fec7c7fc38165076..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/data/datasets/cityscapes_panoptic.py +++ /dev/null @@ -1,187 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import json -import logging -import os - -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.data.datasets.builtin_meta import CITYSCAPES_CATEGORIES -from detectron2.utils.file_io import PathManager - -""" -This file contains functions to register the Cityscapes panoptic dataset to the DatasetCatalog. 
-""" - - -logger = logging.getLogger(__name__) - - -def get_cityscapes_panoptic_files(image_dir, gt_dir, json_info): - files = [] - # scan through the directory - cities = PathManager.ls(image_dir) - logger.info(f"{len(cities)} cities found in '{image_dir}'.") - image_dict = {} - for city in cities: - city_img_dir = os.path.join(image_dir, city) - for basename in PathManager.ls(city_img_dir): - image_file = os.path.join(city_img_dir, basename) - - suffix = "_leftImg8bit.png" - assert basename.endswith(suffix), basename - basename = os.path.basename(basename)[: -len(suffix)] - - image_dict[basename] = image_file - - for ann in json_info["annotations"]: - image_file = image_dict.get(ann["image_id"], None) - assert image_file is not None, "No image {} found for annotation {}".format( - ann["image_id"], ann["file_name"] - ) - label_file = os.path.join(gt_dir, ann["file_name"]) - segments_info = ann["segments_info"] - - files.append((image_file, label_file, segments_info)) - - assert len(files), "No images found in {}".format(image_dir) - assert PathManager.isfile(files[0][0]), files[0][0] - assert PathManager.isfile(files[0][1]), files[0][1] - return files - - -def load_cityscapes_panoptic(image_dir, gt_dir, gt_json, meta): - """ - Args: - image_dir (str): path to the raw dataset. e.g., "~/cityscapes/leftImg8bit/train". - gt_dir (str): path to the raw annotations. e.g., - "~/cityscapes/gtFine/cityscapes_panoptic_train". - gt_json (str): path to the json file. e.g., - "~/cityscapes/gtFine/cityscapes_panoptic_train.json". - meta (dict): dictionary containing "thing_dataset_id_to_contiguous_id" - and "stuff_dataset_id_to_contiguous_id" to map category ids to - contiguous ids for training. - - Returns: - list[dict]: a list of dicts in Detectron2 standard format. (See - `Using Custom Datasets `_ ) - """ - - def _convert_category_id(segment_info, meta): - if segment_info["category_id"] in meta["thing_dataset_id_to_contiguous_id"]: - segment_info["category_id"] = meta["thing_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - else: - segment_info["category_id"] = meta["stuff_dataset_id_to_contiguous_id"][ - segment_info["category_id"] - ] - return segment_info - - assert os.path.exists( - gt_json - ), "Please run `python cityscapesscripts/preparation/createPanopticImgs.py` to generate label files." # noqa - with open(gt_json) as f: - json_info = json.load(f) - files = get_cityscapes_panoptic_files(image_dir, gt_dir, json_info) - ret = [] - for image_file, label_file, segments_info in files: - sem_label_file = ( - image_file.replace("leftImg8bit", "gtFine").split(".")[0] + "_labelTrainIds.png" - ) - segments_info = [_convert_category_id(x, meta) for x in segments_info] - ret.append( - { - "file_name": image_file, - "image_id": "_".join( - os.path.splitext(os.path.basename(image_file))[0].split("_")[:3] - ), - "sem_seg_file_name": sem_label_file, - "pan_seg_file_name": label_file, - "segments_info": segments_info, - } - ) - assert len(ret), f"No images found in {image_dir}!" 
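The dict construction above leans on Cityscapes' file-naming convention. A small standalone sketch of the same string manipulation; the example filename is illustrative but follows the usual `{city}_{seq}_{frame}_leftImg8bit.png` pattern:

```python
import os

image_file = "cityscapes/leftImg8bit/train/frankfurt/frankfurt_000000_000294_leftImg8bit.png"

# image_id keeps only the first three underscore-separated fields of the basename
basename = os.path.splitext(os.path.basename(image_file))[0]   # frankfurt_000000_000294_leftImg8bit
image_id = "_".join(basename.split("_")[:3])                   # frankfurt_000000_000294

# the semantic label file is derived by swapping directories and appending a suffix,
# exactly as load_cityscapes_panoptic does above
sem_label_file = image_file.replace("leftImg8bit", "gtFine").split(".")[0] + "_labelTrainIds.png"
print(image_id)
print(sem_label_file)   # .../gtFine/train/frankfurt/frankfurt_000000_000294_gtFine_labelTrainIds.png
```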
- assert PathManager.isfile( - ret[0]["sem_seg_file_name"] - ), "Please generate labelTrainIds.png with cityscapesscripts/preparation/createTrainIdLabelImgs.py" # noqa - assert PathManager.isfile( - ret[0]["pan_seg_file_name"] - ), "Please generate panoptic annotation with python cityscapesscripts/preparation/createPanopticImgs.py" # noqa - return ret - - -_RAW_CITYSCAPES_PANOPTIC_SPLITS = { - "cityscapes_fine_panoptic_train": ( - "cityscapes/leftImg8bit/train", - "cityscapes/gtFine/cityscapes_panoptic_train", - "cityscapes/gtFine/cityscapes_panoptic_train.json", - ), - "cityscapes_fine_panoptic_val": ( - "cityscapes/leftImg8bit/val", - "cityscapes/gtFine/cityscapes_panoptic_val", - "cityscapes/gtFine/cityscapes_panoptic_val.json", - ), - # "cityscapes_fine_panoptic_test": not supported yet -} - - -def register_all_cityscapes_panoptic(root): - meta = {} - # The following metadata maps contiguous id from [0, #thing categories + - # #stuff categories) to their names and colors. We have to replica of the - # same name and color under "thing_*" and "stuff_*" because the current - # visualization function in D2 handles thing and class classes differently - # due to some heuristic used in Panoptic FPN. We keep the same naming to - # enable reusing existing visualization functions. - thing_classes = [k["name"] for k in CITYSCAPES_CATEGORIES] - thing_colors = [k["color"] for k in CITYSCAPES_CATEGORIES] - stuff_classes = [k["name"] for k in CITYSCAPES_CATEGORIES] - stuff_colors = [k["color"] for k in CITYSCAPES_CATEGORIES] - - meta["thing_classes"] = thing_classes - meta["thing_colors"] = thing_colors - meta["stuff_classes"] = stuff_classes - meta["stuff_colors"] = stuff_colors - - # There are three types of ids in cityscapes panoptic segmentation: - # (1) category id: like semantic segmentation, it is the class id for each - # pixel. Since there are some classes not used in evaluation, the category - # id is not always contiguous and thus we have two set of category ids: - # - original category id: category id in the original dataset, mainly - # used for evaluation. - # - contiguous category id: [0, #classes), in order to train the classifier - # (2) instance id: this id is used to differentiate different instances from - # the same category. For "stuff" classes, the instance id is always 0; for - # "thing" classes, the instance id starts from 1 and 0 is reserved for - # ignored instances (e.g. crowd annotation). - # (3) panoptic id: this is the compact id that encode both category and - # instance id by: category_id * 1000 + instance_id. 
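As a quick worked example of the panoptic id encoding described in the comment above (using the same `label_divisor = 1000` that the metadata registered below declares):

```python
label_divisor = 1000

# encode: e.g. Cityscapes category id 26 (car), third instance of that category
category_id, instance_id = 26, 3
panoptic_id = category_id * label_divisor + instance_id   # 26003

# decode: recover both components from the compact id
assert panoptic_id // label_divisor == category_id
assert panoptic_id % label_divisor == instance_id

# stuff classes always use instance_id 0, so their panoptic id is just category_id * 1000
```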
- thing_dataset_id_to_contiguous_id = {} - stuff_dataset_id_to_contiguous_id = {} - - for k in CITYSCAPES_CATEGORIES: - if k["isthing"] == 1: - thing_dataset_id_to_contiguous_id[k["id"]] = k["trainId"] - else: - stuff_dataset_id_to_contiguous_id[k["id"]] = k["trainId"] - - meta["thing_dataset_id_to_contiguous_id"] = thing_dataset_id_to_contiguous_id - meta["stuff_dataset_id_to_contiguous_id"] = stuff_dataset_id_to_contiguous_id - - for key, (image_dir, gt_dir, gt_json) in _RAW_CITYSCAPES_PANOPTIC_SPLITS.items(): - image_dir = os.path.join(root, image_dir) - gt_dir = os.path.join(root, gt_dir) - gt_json = os.path.join(root, gt_json) - - DatasetCatalog.register( - key, lambda x=image_dir, y=gt_dir, z=gt_json: load_cityscapes_panoptic(x, y, z, meta) - ) - MetadataCatalog.get(key).set( - panoptic_root=gt_dir, - image_root=image_dir, - panoptic_json=gt_json, - gt_dir=gt_dir.replace("cityscapes_panoptic_", ""), - evaluator_type="cityscapes_panoptic_seg", - ignore_label=255, - label_divisor=1000, - **meta, - ) diff --git a/spaces/Caoyunkang/Segment-Any-Anomaly/utils/visualization.py b/spaces/Caoyunkang/Segment-Any-Anomaly/utils/visualization.py deleted file mode 100644 index bfb5ef708f32ee33be209c10acf5f80c8ae950ab..0000000000000000000000000000000000000000 --- a/spaces/Caoyunkang/Segment-Any-Anomaly/utils/visualization.py +++ /dev/null @@ -1,131 +0,0 @@ -import cv2 -import matplotlib - -matplotlib.use("Agg") -import matplotlib.pyplot as plt -import numpy as np -import os -import seaborn as sns - -## -from sklearn.manifold import TSNE -from sklearn.decomposition import PCA - -## -import matplotlib.ticker as mtick - - -def plot_sample_cv2(names, imgs, scores_: dict, gts, save_folder=None): - # get subplot number - total_number = len(imgs) - - scores = scores_.copy() - # normarlisze anomalies - for k, v in scores.items(): - max_value = np.max(v) - min_value = np.min(v) - - scores[k] = (scores[k] - min_value) / max_value * 255 - scores[k] = scores[k].astype(np.uint8) - # draw gts - mask_imgs = [] - for idx in range(total_number): - gts_ = gts[idx] - mask_imgs_ = imgs[idx].copy() - mask_imgs_[gts_ > 0.5] = (0, 0, 255) - mask_imgs.append(mask_imgs_) - - # save imgs - for idx in range(total_number): - cv2.imwrite(os.path.join(save_folder, f'{names[idx]}_ori.jpg'), imgs[idx]) - cv2.imwrite(os.path.join(save_folder, f'{names[idx]}_gt.jpg'), mask_imgs[idx]) - - for key in scores: - heat_map = cv2.applyColorMap(scores[key][idx], cv2.COLORMAP_JET) - visz_map = cv2.addWeighted(heat_map, 0.5, imgs[idx], 0.5, 0) - cv2.imwrite(os.path.join(save_folder, f'{names[idx]}_{key}.jpg'), - visz_map) - - -def plot_anomaly_score_distributions(scores: dict, ground_truths_list, save_folder, class_name): - ground_truths = np.stack(ground_truths_list, axis=0) - - N_COUNT = 100000 - - for k, v in scores.items(): - layer_score = np.stack(v, axis=0) - normal_score = layer_score[ground_truths == 0] - abnormal_score = layer_score[ground_truths != 0] - - plt.clf() - plt.figure(figsize=(4, 3)) - ax = plt.gca() - ax.yaxis.set_major_formatter(mtick.FormatStrFormatter('%.2f')) - ax.xaxis.set_major_formatter(mtick.FormatStrFormatter('%.2f')) - - # with plt.style.context(['science', 'ieee', 'no-latex']): - sns.histplot(np.random.choice(normal_score, N_COUNT), color="green", bins=50, label='${d(p_n)}$', - stat='probability', alpha=.75) - sns.histplot(np.random.choice(abnormal_score, N_COUNT), color="red", bins=50, label='${d(p_a)}$', - stat='probability', alpha=.75) - - plt.xlim([0, 3]) - - save_path = os.path.join(save_folder, 
f'distributions_{class_name}_{k}.jpg') - - plt.savefig(save_path, bbox_inches='tight', dpi=300) - - -valid_feature_visualization_methods = ['TSNE', 'PCA'] - - -def visualize_feature(features, labels, legends, n_components=3, method='TSNE'): - assert method in valid_feature_visualization_methods - assert n_components in [2, 3] - - if method == 'TSNE': - model = TSNE(n_components=n_components) - elif method == 'PCA': - model = PCA(n_components=n_components) - - else: - raise NotImplementedError - - feat_proj = model.fit_transform(features) - - if n_components == 2: - ax = scatter_2d(feat_proj, labels) - elif n_components == 3: - ax = scatter_3d(feat_proj, labels) - else: - raise NotImplementedError - - plt.legend(legends) - plt.axis('off') - - -def scatter_3d(feat_proj, label): - plt.clf() - ax1 = plt.axes(projection='3d') - - label_unique = np.unique(label) - - for l in label_unique: - ax1.scatter3D(feat_proj[label == l, 0], - feat_proj[label == l, 1], - feat_proj[label == l, 2], s=5) - - return ax1 - - -def scatter_2d(feat_proj, label): - plt.clf() - ax1 = plt.axes() - - label_unique = np.unique(label) - - for l in label_unique: - ax1.scatter(feat_proj[label == l, 0], - feat_proj[label == l, 1], s=5) - - return ax1 diff --git a/spaces/CourserLi/classify/app.py b/spaces/CourserLi/classify/app.py deleted file mode 100644 index 1de0b5aafbca6124f796041de88f74a7ccc76e55..0000000000000000000000000000000000000000 --- a/spaces/CourserLi/classify/app.py +++ /dev/null @@ -1,32 +0,0 @@ -# AUTOGENERATED! DO NOT EDIT! File to edit: app.ipynb. - -# %% auto 0 -__all__ = ['temp', 'learn', 'categories', 'image', 'label', 'examples', 'intf', 'is_cat', 'classify_image'] - -# %% app.ipynb 1 -from fastai.vision.all import * -import gradio as gr - -# %% app.ipynb 2 -def is_cat(x): - return x[0].isupper() - -learn = load_learner('model.pkl') - -# %% app.ipynb 5 -categories = ('Dog', 'Cat') - -def classify_image(img): - pred, idx, probs = learn.predict(img) - return dict(zip(categories, map(float, probs))) - -# %% app.ipynb 8 -image = gr.Image(shape=(192, 192)) -label = gr.Label() -examples = ['dog.png', 'cat.png'] - -intf = gr.Interface(fn=classify_image, - inputs=image, - outputs=label, - examples=examples) -intf.launch(inline=False) diff --git a/spaces/DEfiAnTH/SPSpace/Dockerfile b/spaces/DEfiAnTH/SPSpace/Dockerfile deleted file mode 100644 index 29ec24bfb63cdbf2c92fc41c33e24b329aa6e1ca..0000000000000000000000000000000000000000 --- a/spaces/DEfiAnTH/SPSpace/Dockerfile +++ /dev/null @@ -1,65 +0,0 @@ -FROM zenmldocker/zenml-server:latest - -ENV ZENML_ANALYTICS_OPT_IN=true -ENV ZENML_SERVER_DEPLOYMENT_TYPE="hf_spaces" -ENV ZENML_LOGGING_VERBOSITY=DEBUG - -################################################################################ -# -# CONFIGURING YOUR ZENML HF SPACES SERVER -# --------------------------------------- -# By default this space is not persistent. All ZenML metadata is stored in -# localstorage in a SQLite database. If you would like to make your storage -# persistent, use the appropriate environment variables below to configure the -# image to use a MySQL-compatible database service that is reachable from the -# container. See https://docs.zenml.io/getting-started/deploying-zenml/docker -# for more information on how to configure these environment variables. - -# You can also configure the secrets store to use for your ZenML server. Be -# sure to use Huggingface Spaces' 'Repository Secrets' feature to store any -# secrets referenced here. 
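Looking back at the visualization helpers above before the ZenML Dockerfile continues: a minimal usage sketch for `visualize_feature`, with synthetic features standing in for real embeddings and assuming the repo's `utils/` package is importable:

```python
import numpy as np
from utils.visualization import visualize_feature   # assumes the repo's utils/ package is on the path
import matplotlib.pyplot as plt

rng = np.random.RandomState(0)
# Synthetic stand-in for patch embeddings: 200 "normal" + 200 shifted "anomalous" features
features = np.concatenate([rng.normal(0, 1, (200, 64)),
                           rng.normal(3, 1, (200, 64))], axis=0)
labels = np.concatenate([np.zeros(200), np.ones(200)])

# Fits PCA (or TSNE) and scatters the 2-D projection, one colour per label
visualize_feature(features, labels, legends=["normal", "anomaly"],
                  n_components=2, method="PCA")
plt.savefig("feature_projection.jpg", bbox_inches="tight", dpi=300)
```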
See -# https://huggingface.co/docs/hub/spaces-overview#managing-secrets for more -# information on how to configure these environment variables. - -# ENV ZENML_DEFAULT_PROJECT_NAME="" -# ENV ZENML_DEFAULT_USER_NAME="" -# ENV ZENML_DEFAULT_USER_PASSWORD="" -# ENV ZENML_STORE_URL="" -# ENV ZENML_STORE_SSL_CA="" -# ENV ZENML_STORE_SSL_CERT="" -# ENV ZENML_STORE_SSL_KEY="" -# ENV ZENML_STORE_SSL_VERIFY_SERVER_CERT="" - -# ENV ZENML_LOGGING_VERBOSITY="" - -# # SECRETS STORE CONFIGURATION -# ENV ZENML_SECRETS_STORE_TYPE="" -# ENV ZENML_SECRETS_STORE_ENCRYPTION_KEY="" -# ENV ZENML_SECRETS_STORE_CLASS_PATH="" -# ENV ZENML_JWT_SECRET_KEY="" - -# # AWS Secrets Store Configuration -# ENV ZENML_SECRETS_STORE_REGION_NAME="" -# ENV ZENML_SECRETS_STORE_AWS_ACCESS_KEY_ID="" -# ENV ZENML_SECRETS_STORE_AWS_SECRET_ACCESS_KEY="" -# ENV ZENML_SECRETS_STORE_AWS_SESSION_TOKEN="" -# ENV ZENML_SECRETS_STORE_SECRET_LIST_REFRESH_TIMEOUT="" - -# # GCP Secrets Store Configuration -# ENV ZENML_SECRETS_STORE_PROJECT_ID="" -# ENV GOOGLE_APPLICATION_CREDENTIALS="" - -# # Azure Secrets Store Configuration -# ENV ZENML_SECRETS_STORE_KEY_VAULT_NAME="" -# ENV ZENML_SECRETS_STORE_AZURE_CLIENT_ID="" -# ENV ZENML_SECRETS_STORE_AZURE_CLIENT_SECRET="" -# ENV ZENML_SECRETS_STORE_AZURE_TENANT_ID="" - -# # Hashicorp Secrets Store Configuration -# ENV ZENML_SECRETS_STORE_VAULT_ADDR="" -# ENV ZENML_SECRETS_STORE_VAULT_TOKEN="" -# ENV ZENML_SECRETS_STORE_VAULT_NAMESPACE="" -# ENV ZENML_SECRETS_STORE_MAX_VERSIONS="" - -ENTRYPOINT ["uvicorn", "zenml.zen_server.zen_server_api:app", "--log-level", "debug"] -CMD ["--proxy-headers", "--port", "8080", "--host", "0.0.0.0"] diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/psCharStrings.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/psCharStrings.py deleted file mode 100644 index 0092a98ecee87286568b8593a4662a22235ee0e0..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/psCharStrings.py +++ /dev/null @@ -1,1479 +0,0 @@ -"""psCharStrings.py -- module implementing various kinds of CharStrings: -CFF dictionary data and Type1/Type2 CharStrings. 
-""" - -from fontTools.misc.fixedTools import ( - fixedToFloat, - floatToFixed, - floatToFixedToStr, - strToFixedToFloat, -) -from fontTools.misc.textTools import bytechr, byteord, bytesjoin, strjoin -from fontTools.pens.boundsPen import BoundsPen -import struct -import logging - - -log = logging.getLogger(__name__) - - -def read_operator(self, b0, data, index): - if b0 == 12: - op = (b0, byteord(data[index])) - index = index + 1 - else: - op = b0 - try: - operator = self.operators[op] - except KeyError: - return None, index - value = self.handle_operator(operator) - return value, index - - -def read_byte(self, b0, data, index): - return b0 - 139, index - - -def read_smallInt1(self, b0, data, index): - b1 = byteord(data[index]) - return (b0 - 247) * 256 + b1 + 108, index + 1 - - -def read_smallInt2(self, b0, data, index): - b1 = byteord(data[index]) - return -(b0 - 251) * 256 - b1 - 108, index + 1 - - -def read_shortInt(self, b0, data, index): - (value,) = struct.unpack(">h", data[index : index + 2]) - return value, index + 2 - - -def read_longInt(self, b0, data, index): - (value,) = struct.unpack(">l", data[index : index + 4]) - return value, index + 4 - - -def read_fixed1616(self, b0, data, index): - (value,) = struct.unpack(">l", data[index : index + 4]) - return fixedToFloat(value, precisionBits=16), index + 4 - - -def read_reserved(self, b0, data, index): - assert NotImplementedError - return NotImplemented, index - - -def read_realNumber(self, b0, data, index): - number = "" - while True: - b = byteord(data[index]) - index = index + 1 - nibble0 = (b & 0xF0) >> 4 - nibble1 = b & 0x0F - if nibble0 == 0xF: - break - number = number + realNibbles[nibble0] - if nibble1 == 0xF: - break - number = number + realNibbles[nibble1] - return float(number), index - - -t1OperandEncoding = [None] * 256 -t1OperandEncoding[0:32] = (32) * [read_operator] -t1OperandEncoding[32:247] = (247 - 32) * [read_byte] -t1OperandEncoding[247:251] = (251 - 247) * [read_smallInt1] -t1OperandEncoding[251:255] = (255 - 251) * [read_smallInt2] -t1OperandEncoding[255] = read_longInt -assert len(t1OperandEncoding) == 256 - -t2OperandEncoding = t1OperandEncoding[:] -t2OperandEncoding[28] = read_shortInt -t2OperandEncoding[255] = read_fixed1616 - -cffDictOperandEncoding = t2OperandEncoding[:] -cffDictOperandEncoding[29] = read_longInt -cffDictOperandEncoding[30] = read_realNumber -cffDictOperandEncoding[255] = read_reserved - - -realNibbles = [ - "0", - "1", - "2", - "3", - "4", - "5", - "6", - "7", - "8", - "9", - ".", - "E", - "E-", - None, - "-", -] -realNibblesDict = {v: i for i, v in enumerate(realNibbles)} - -maxOpStack = 193 - - -def buildOperatorDict(operatorList): - oper = {} - opc = {} - for item in operatorList: - if len(item) == 2: - oper[item[0]] = item[1] - else: - oper[item[0]] = item[1:] - if isinstance(item[0], tuple): - opc[item[1]] = item[0] - else: - opc[item[1]] = (item[0],) - return oper, opc - - -t2Operators = [ - # opcode name - (1, "hstem"), - (3, "vstem"), - (4, "vmoveto"), - (5, "rlineto"), - (6, "hlineto"), - (7, "vlineto"), - (8, "rrcurveto"), - (10, "callsubr"), - (11, "return"), - (14, "endchar"), - (15, "vsindex"), - (16, "blend"), - (18, "hstemhm"), - (19, "hintmask"), - (20, "cntrmask"), - (21, "rmoveto"), - (22, "hmoveto"), - (23, "vstemhm"), - (24, "rcurveline"), - (25, "rlinecurve"), - (26, "vvcurveto"), - (27, "hhcurveto"), - # (28, 'shortint'), # not really an operator - (29, "callgsubr"), - (30, "vhcurveto"), - (31, "hvcurveto"), - ((12, 0), "ignore"), # dotsection. 
Yes, there a few very early OTF/CFF - # fonts with this deprecated operator. Just ignore it. - ((12, 3), "and"), - ((12, 4), "or"), - ((12, 5), "not"), - ((12, 8), "store"), - ((12, 9), "abs"), - ((12, 10), "add"), - ((12, 11), "sub"), - ((12, 12), "div"), - ((12, 13), "load"), - ((12, 14), "neg"), - ((12, 15), "eq"), - ((12, 18), "drop"), - ((12, 20), "put"), - ((12, 21), "get"), - ((12, 22), "ifelse"), - ((12, 23), "random"), - ((12, 24), "mul"), - ((12, 26), "sqrt"), - ((12, 27), "dup"), - ((12, 28), "exch"), - ((12, 29), "index"), - ((12, 30), "roll"), - ((12, 34), "hflex"), - ((12, 35), "flex"), - ((12, 36), "hflex1"), - ((12, 37), "flex1"), -] - - -def getIntEncoder(format): - if format == "cff": - twoByteOp = bytechr(28) - fourByteOp = bytechr(29) - elif format == "t1": - twoByteOp = None - fourByteOp = bytechr(255) - else: - assert format == "t2" - twoByteOp = bytechr(28) - fourByteOp = None - - def encodeInt( - value, - fourByteOp=fourByteOp, - bytechr=bytechr, - pack=struct.pack, - unpack=struct.unpack, - twoByteOp=twoByteOp, - ): - if -107 <= value <= 107: - code = bytechr(value + 139) - elif 108 <= value <= 1131: - value = value - 108 - code = bytechr((value >> 8) + 247) + bytechr(value & 0xFF) - elif -1131 <= value <= -108: - value = -value - 108 - code = bytechr((value >> 8) + 251) + bytechr(value & 0xFF) - elif twoByteOp is not None and -32768 <= value <= 32767: - code = twoByteOp + pack(">h", value) - elif fourByteOp is None: - # Backwards compatible hack: due to a previous bug in FontTools, - # 16.16 fixed numbers were written out as 4-byte ints. When - # these numbers were small, they were wrongly written back as - # small ints instead of 4-byte ints, breaking round-tripping. - # This here workaround doesn't do it any better, since we can't - # distinguish anymore between small ints that were supposed to - # be small fixed numbers and small ints that were just small - # ints. Hence the warning. - log.warning( - "4-byte T2 number got passed to the " - "IntType handler. This should happen only when reading in " - "old XML files.\n" - ) - code = bytechr(255) + pack(">l", value) - else: - code = fourByteOp + pack(">l", value) - return code - - return encodeInt - - -encodeIntCFF = getIntEncoder("cff") -encodeIntT1 = getIntEncoder("t1") -encodeIntT2 = getIntEncoder("t2") - - -def encodeFixed(f, pack=struct.pack): - """For T2 only""" - value = floatToFixed(f, precisionBits=16) - if value & 0xFFFF == 0: # check if the fractional part is zero - return encodeIntT2(value >> 16) # encode only the integer part - else: - return b"\xff" + pack(">l", value) # encode the entire fixed point value - - -realZeroBytes = bytechr(30) + bytechr(0xF) - - -def encodeFloat(f): - # For CFF only, used in cffLib - if f == 0.0: # 0.0 == +0.0 == -0.0 - return realZeroBytes - # Note: 14 decimal digits seems to be the limitation for CFF real numbers - # in macOS. However, we use 8 here to match the implementation of AFDKO. 
- s = "%.8G" % f - if s[:2] == "0.": - s = s[1:] - elif s[:3] == "-0.": - s = "-" + s[2:] - nibbles = [] - while s: - c = s[0] - s = s[1:] - if c == "E": - c2 = s[:1] - if c2 == "-": - s = s[1:] - c = "E-" - elif c2 == "+": - s = s[1:] - nibbles.append(realNibblesDict[c]) - nibbles.append(0xF) - if len(nibbles) % 2: - nibbles.append(0xF) - d = bytechr(30) - for i in range(0, len(nibbles), 2): - d = d + bytechr(nibbles[i] << 4 | nibbles[i + 1]) - return d - - -class CharStringCompileError(Exception): - pass - - -class SimpleT2Decompiler(object): - def __init__(self, localSubrs, globalSubrs, private=None, blender=None): - self.localSubrs = localSubrs - self.localBias = calcSubrBias(localSubrs) - self.globalSubrs = globalSubrs - self.globalBias = calcSubrBias(globalSubrs) - self.private = private - self.blender = blender - self.reset() - - def reset(self): - self.callingStack = [] - self.operandStack = [] - self.hintCount = 0 - self.hintMaskBytes = 0 - self.numRegions = 0 - self.vsIndex = 0 - - def execute(self, charString): - self.callingStack.append(charString) - needsDecompilation = charString.needsDecompilation() - if needsDecompilation: - program = [] - pushToProgram = program.append - else: - pushToProgram = lambda x: None - pushToStack = self.operandStack.append - index = 0 - while True: - token, isOperator, index = charString.getToken(index) - if token is None: - break # we're done! - pushToProgram(token) - if isOperator: - handlerName = "op_" + token - handler = getattr(self, handlerName, None) - if handler is not None: - rv = handler(index) - if rv: - hintMaskBytes, index = rv - pushToProgram(hintMaskBytes) - else: - self.popall() - else: - pushToStack(token) - if needsDecompilation: - charString.setProgram(program) - del self.callingStack[-1] - - def pop(self): - value = self.operandStack[-1] - del self.operandStack[-1] - return value - - def popall(self): - stack = self.operandStack[:] - self.operandStack[:] = [] - return stack - - def push(self, value): - self.operandStack.append(value) - - def op_return(self, index): - if self.operandStack: - pass - - def op_endchar(self, index): - pass - - def op_ignore(self, index): - pass - - def op_callsubr(self, index): - subrIndex = self.pop() - subr = self.localSubrs[subrIndex + self.localBias] - self.execute(subr) - - def op_callgsubr(self, index): - subrIndex = self.pop() - subr = self.globalSubrs[subrIndex + self.globalBias] - self.execute(subr) - - def op_hstem(self, index): - self.countHints() - - def op_vstem(self, index): - self.countHints() - - def op_hstemhm(self, index): - self.countHints() - - def op_vstemhm(self, index): - self.countHints() - - def op_hintmask(self, index): - if not self.hintMaskBytes: - self.countHints() - self.hintMaskBytes = (self.hintCount + 7) // 8 - hintMaskBytes, index = self.callingStack[-1].getBytes(index, self.hintMaskBytes) - return hintMaskBytes, index - - op_cntrmask = op_hintmask - - def countHints(self): - args = self.popall() - self.hintCount = self.hintCount + len(args) // 2 - - # misc - def op_and(self, index): - raise NotImplementedError - - def op_or(self, index): - raise NotImplementedError - - def op_not(self, index): - raise NotImplementedError - - def op_store(self, index): - raise NotImplementedError - - def op_abs(self, index): - raise NotImplementedError - - def op_add(self, index): - raise NotImplementedError - - def op_sub(self, index): - raise NotImplementedError - - def op_div(self, index): - raise NotImplementedError - - def op_load(self, index): - raise NotImplementedError - - 
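Stepping back to the number encoders defined earlier in this module: because this file is a copy of `fontTools.misc.psCharStrings`, the same helpers can be exercised directly (assuming an installed fontTools that matches this copy) to see the size classes the CFF/Type 2 format uses, one byte for -107..107, two bytes for ±108..1131, and so on. A small sketch with the expected byte strings noted:

```python
from fontTools.misc.psCharStrings import encodeIntT2, encodeFixed, encodeFloat

# One-byte range: value + 139
assert encodeIntT2(0) == b"\x8b"            # 0 -> 139
# Two-byte positive range 108..1131: (b0 - 247) * 256 + b1 + 108
assert encodeIntT2(300) == b"\xf7\xc0"
# Two-byte negative range -1131..-108
assert encodeIntT2(-300) == b"\xfb\xc0"
# Three-byte shortint (opcode 28) for anything up to +/-32767
assert encodeIntT2(20000) == b"\x1c\x4e\x20"
# T2 reals: 16.16 fixed behind a 255 opcode when the fraction is non-zero
assert encodeFixed(0.5) == b"\xff\x00\x00\x80\x00"
# CFF dict reals: nibble-coded behind opcode 30 ("-2.25" -> nibbles e 2 a 2 5 f)
assert encodeFloat(-2.25) == b"\x1e\xe2\xa2\x5f"
```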
def op_neg(self, index): - raise NotImplementedError - - def op_eq(self, index): - raise NotImplementedError - - def op_drop(self, index): - raise NotImplementedError - - def op_put(self, index): - raise NotImplementedError - - def op_get(self, index): - raise NotImplementedError - - def op_ifelse(self, index): - raise NotImplementedError - - def op_random(self, index): - raise NotImplementedError - - def op_mul(self, index): - raise NotImplementedError - - def op_sqrt(self, index): - raise NotImplementedError - - def op_dup(self, index): - raise NotImplementedError - - def op_exch(self, index): - raise NotImplementedError - - def op_index(self, index): - raise NotImplementedError - - def op_roll(self, index): - raise NotImplementedError - - def op_blend(self, index): - if self.numRegions == 0: - self.numRegions = self.private.getNumRegions() - numBlends = self.pop() - numOps = numBlends * (self.numRegions + 1) - if self.blender is None: - del self.operandStack[ - -(numOps - numBlends) : - ] # Leave the default operands on the stack. - else: - argi = len(self.operandStack) - numOps - end_args = tuplei = argi + numBlends - while argi < end_args: - next_ti = tuplei + self.numRegions - deltas = self.operandStack[tuplei:next_ti] - delta = self.blender(self.vsIndex, deltas) - self.operandStack[argi] += delta - tuplei = next_ti - argi += 1 - self.operandStack[end_args:] = [] - - def op_vsindex(self, index): - vi = self.pop() - self.vsIndex = vi - self.numRegions = self.private.getNumRegions(vi) - - -t1Operators = [ - # opcode name - (1, "hstem"), - (3, "vstem"), - (4, "vmoveto"), - (5, "rlineto"), - (6, "hlineto"), - (7, "vlineto"), - (8, "rrcurveto"), - (9, "closepath"), - (10, "callsubr"), - (11, "return"), - (13, "hsbw"), - (14, "endchar"), - (21, "rmoveto"), - (22, "hmoveto"), - (30, "vhcurveto"), - (31, "hvcurveto"), - ((12, 0), "dotsection"), - ((12, 1), "vstem3"), - ((12, 2), "hstem3"), - ((12, 6), "seac"), - ((12, 7), "sbw"), - ((12, 12), "div"), - ((12, 16), "callothersubr"), - ((12, 17), "pop"), - ((12, 33), "setcurrentpoint"), -] - - -class T2WidthExtractor(SimpleT2Decompiler): - def __init__( - self, - localSubrs, - globalSubrs, - nominalWidthX, - defaultWidthX, - private=None, - blender=None, - ): - SimpleT2Decompiler.__init__(self, localSubrs, globalSubrs, private, blender) - self.nominalWidthX = nominalWidthX - self.defaultWidthX = defaultWidthX - - def reset(self): - SimpleT2Decompiler.reset(self) - self.gotWidth = 0 - self.width = 0 - - def popallWidth(self, evenOdd=0): - args = self.popall() - if not self.gotWidth: - if evenOdd ^ (len(args) % 2): - # For CFF2 charstrings, this should never happen - assert ( - self.defaultWidthX is not None - ), "CFF2 CharStrings must not have an initial width value" - self.width = self.nominalWidthX + args[0] - args = args[1:] - else: - self.width = self.defaultWidthX - self.gotWidth = 1 - return args - - def countHints(self): - args = self.popallWidth() - self.hintCount = self.hintCount + len(args) // 2 - - def op_rmoveto(self, index): - self.popallWidth() - - def op_hmoveto(self, index): - self.popallWidth(1) - - def op_vmoveto(self, index): - self.popallWidth(1) - - def op_endchar(self, index): - self.popallWidth() - - -class T2OutlineExtractor(T2WidthExtractor): - def __init__( - self, - pen, - localSubrs, - globalSubrs, - nominalWidthX, - defaultWidthX, - private=None, - blender=None, - ): - T2WidthExtractor.__init__( - self, - localSubrs, - globalSubrs, - nominalWidthX, - defaultWidthX, - private, - blender, - ) - self.pen = pen - 
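The width bookkeeping in `popallWidth` above is easy to miss: the first stack-clearing operator may carry one extra leading operand, and that extra operand is the glyph's width delta. A standalone sketch of the same parity rule, with made-up nominal/default width values:

```python
def split_width(args, expects_odd_count, nominalWidthX=600, defaultWidthX=500):
    # Mirrors T2WidthExtractor.popallWidth: an unexpected extra operand at the
    # front of the first stack-clearing operator is the width delta.
    if expects_odd_count ^ (len(args) % 2):
        return nominalWidthX + args[0], args[1:]
    return defaultWidthX, args

# hmoveto takes one coordinate, so two operands mean the first is the width delta
print(split_width([45, 120], expects_odd_count=1))   # (645, [120])
# rmoveto takes two coordinates, so exactly two operands mean no width was written
print(split_width([45, 120], expects_odd_count=0))   # (500, [45, 120])
```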
self.subrLevel = 0 - - def reset(self): - T2WidthExtractor.reset(self) - self.currentPoint = (0, 0) - self.sawMoveTo = 0 - self.subrLevel = 0 - - def execute(self, charString): - self.subrLevel += 1 - super().execute(charString) - self.subrLevel -= 1 - if self.subrLevel == 0: - self.endPath() - - def _nextPoint(self, point): - x, y = self.currentPoint - point = x + point[0], y + point[1] - self.currentPoint = point - return point - - def rMoveTo(self, point): - self.pen.moveTo(self._nextPoint(point)) - self.sawMoveTo = 1 - - def rLineTo(self, point): - if not self.sawMoveTo: - self.rMoveTo((0, 0)) - self.pen.lineTo(self._nextPoint(point)) - - def rCurveTo(self, pt1, pt2, pt3): - if not self.sawMoveTo: - self.rMoveTo((0, 0)) - nextPoint = self._nextPoint - self.pen.curveTo(nextPoint(pt1), nextPoint(pt2), nextPoint(pt3)) - - def closePath(self): - if self.sawMoveTo: - self.pen.closePath() - self.sawMoveTo = 0 - - def endPath(self): - # In T2 there are no open paths, so always do a closePath when - # finishing a sub path. We avoid spurious calls to closePath() - # because its a real T1 op we're emulating in T2 whereas - # endPath() is just a means to that emulation - if self.sawMoveTo: - self.closePath() - - # - # hint operators - # - # def op_hstem(self, index): - # self.countHints() - # def op_vstem(self, index): - # self.countHints() - # def op_hstemhm(self, index): - # self.countHints() - # def op_vstemhm(self, index): - # self.countHints() - # def op_hintmask(self, index): - # self.countHints() - # def op_cntrmask(self, index): - # self.countHints() - - # - # path constructors, moveto - # - def op_rmoveto(self, index): - self.endPath() - self.rMoveTo(self.popallWidth()) - - def op_hmoveto(self, index): - self.endPath() - self.rMoveTo((self.popallWidth(1)[0], 0)) - - def op_vmoveto(self, index): - self.endPath() - self.rMoveTo((0, self.popallWidth(1)[0])) - - def op_endchar(self, index): - self.endPath() - args = self.popallWidth() - if args: - from fontTools.encodings.StandardEncoding import StandardEncoding - - # endchar can do seac accent bulding; The T2 spec says it's deprecated, - # but recent software that shall remain nameless does output it. 
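The extractor classes in this module are what ultimately feed a pen when fontTools draws a CFF glyph. A brief usage sketch, assuming some OpenType/CFF font is available at a hypothetical path (`font.otf` is a placeholder):

```python
from fontTools.ttLib import TTFont
from fontTools.pens.recordingPen import RecordingPen

font = TTFont("font.otf")            # hypothetical CFF-flavoured font
glyphset = font.getGlyphSet()

pen = RecordingPen()
# For a CFF glyph this ends up in T2CharString.draw, which runs the
# T2OutlineExtractor defined above over the charstring program.
glyphset["A"].draw(pen)

# pen.value is a list of (operator, operands) tuples: moveTo/lineTo/curveTo/closePath
for op, operands in pen.value:
    print(op, operands)
```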
- adx, ady, bchar, achar = args - baseGlyph = StandardEncoding[bchar] - self.pen.addComponent(baseGlyph, (1, 0, 0, 1, 0, 0)) - accentGlyph = StandardEncoding[achar] - self.pen.addComponent(accentGlyph, (1, 0, 0, 1, adx, ady)) - - # - # path constructors, lines - # - def op_rlineto(self, index): - args = self.popall() - for i in range(0, len(args), 2): - point = args[i : i + 2] - self.rLineTo(point) - - def op_hlineto(self, index): - self.alternatingLineto(1) - - def op_vlineto(self, index): - self.alternatingLineto(0) - - # - # path constructors, curves - # - def op_rrcurveto(self, index): - """{dxa dya dxb dyb dxc dyc}+ rrcurveto""" - args = self.popall() - for i in range(0, len(args), 6): - ( - dxa, - dya, - dxb, - dyb, - dxc, - dyc, - ) = args[i : i + 6] - self.rCurveTo((dxa, dya), (dxb, dyb), (dxc, dyc)) - - def op_rcurveline(self, index): - """{dxa dya dxb dyb dxc dyc}+ dxd dyd rcurveline""" - args = self.popall() - for i in range(0, len(args) - 2, 6): - dxb, dyb, dxc, dyc, dxd, dyd = args[i : i + 6] - self.rCurveTo((dxb, dyb), (dxc, dyc), (dxd, dyd)) - self.rLineTo(args[-2:]) - - def op_rlinecurve(self, index): - """{dxa dya}+ dxb dyb dxc dyc dxd dyd rlinecurve""" - args = self.popall() - lineArgs = args[:-6] - for i in range(0, len(lineArgs), 2): - self.rLineTo(lineArgs[i : i + 2]) - dxb, dyb, dxc, dyc, dxd, dyd = args[-6:] - self.rCurveTo((dxb, dyb), (dxc, dyc), (dxd, dyd)) - - def op_vvcurveto(self, index): - "dx1? {dya dxb dyb dyc}+ vvcurveto" - args = self.popall() - if len(args) % 2: - dx1 = args[0] - args = args[1:] - else: - dx1 = 0 - for i in range(0, len(args), 4): - dya, dxb, dyb, dyc = args[i : i + 4] - self.rCurveTo((dx1, dya), (dxb, dyb), (0, dyc)) - dx1 = 0 - - def op_hhcurveto(self, index): - """dy1? {dxa dxb dyb dxc}+ hhcurveto""" - args = self.popall() - if len(args) % 2: - dy1 = args[0] - args = args[1:] - else: - dy1 = 0 - for i in range(0, len(args), 4): - dxa, dxb, dyb, dxc = args[i : i + 4] - self.rCurveTo((dxa, dy1), (dxb, dyb), (dxc, 0)) - dy1 = 0 - - def op_vhcurveto(self, index): - """dy1 dx2 dy2 dx3 {dxa dxb dyb dyc dyd dxe dye dxf}* dyf? vhcurveto (30) - {dya dxb dyb dxc dxd dxe dye dyf}+ dxf? vhcurveto - """ - args = self.popall() - while args: - args = self.vcurveto(args) - if args: - args = self.hcurveto(args) - - def op_hvcurveto(self, index): - """dx1 dx2 dy2 dy3 {dya dxb dyb dxc dxd dxe dye dyf}* dxf? - {dxa dxb dyb dyc dyd dxe dye dxf}+ dyf? 
- """ - args = self.popall() - while args: - args = self.hcurveto(args) - if args: - args = self.vcurveto(args) - - # - # path constructors, flex - # - def op_hflex(self, index): - dx1, dx2, dy2, dx3, dx4, dx5, dx6 = self.popall() - dy1 = dy3 = dy4 = dy6 = 0 - dy5 = -dy2 - self.rCurveTo((dx1, dy1), (dx2, dy2), (dx3, dy3)) - self.rCurveTo((dx4, dy4), (dx5, dy5), (dx6, dy6)) - - def op_flex(self, index): - dx1, dy1, dx2, dy2, dx3, dy3, dx4, dy4, dx5, dy5, dx6, dy6, fd = self.popall() - self.rCurveTo((dx1, dy1), (dx2, dy2), (dx3, dy3)) - self.rCurveTo((dx4, dy4), (dx5, dy5), (dx6, dy6)) - - def op_hflex1(self, index): - dx1, dy1, dx2, dy2, dx3, dx4, dx5, dy5, dx6 = self.popall() - dy3 = dy4 = 0 - dy6 = -(dy1 + dy2 + dy3 + dy4 + dy5) - - self.rCurveTo((dx1, dy1), (dx2, dy2), (dx3, dy3)) - self.rCurveTo((dx4, dy4), (dx5, dy5), (dx6, dy6)) - - def op_flex1(self, index): - dx1, dy1, dx2, dy2, dx3, dy3, dx4, dy4, dx5, dy5, d6 = self.popall() - dx = dx1 + dx2 + dx3 + dx4 + dx5 - dy = dy1 + dy2 + dy3 + dy4 + dy5 - if abs(dx) > abs(dy): - dx6 = d6 - dy6 = -dy - else: - dx6 = -dx - dy6 = d6 - self.rCurveTo((dx1, dy1), (dx2, dy2), (dx3, dy3)) - self.rCurveTo((dx4, dy4), (dx5, dy5), (dx6, dy6)) - - # misc - def op_and(self, index): - raise NotImplementedError - - def op_or(self, index): - raise NotImplementedError - - def op_not(self, index): - raise NotImplementedError - - def op_store(self, index): - raise NotImplementedError - - def op_abs(self, index): - raise NotImplementedError - - def op_add(self, index): - raise NotImplementedError - - def op_sub(self, index): - raise NotImplementedError - - def op_div(self, index): - num2 = self.pop() - num1 = self.pop() - d1 = num1 // num2 - d2 = num1 / num2 - if d1 == d2: - self.push(d1) - else: - self.push(d2) - - def op_load(self, index): - raise NotImplementedError - - def op_neg(self, index): - raise NotImplementedError - - def op_eq(self, index): - raise NotImplementedError - - def op_drop(self, index): - raise NotImplementedError - - def op_put(self, index): - raise NotImplementedError - - def op_get(self, index): - raise NotImplementedError - - def op_ifelse(self, index): - raise NotImplementedError - - def op_random(self, index): - raise NotImplementedError - - def op_mul(self, index): - raise NotImplementedError - - def op_sqrt(self, index): - raise NotImplementedError - - def op_dup(self, index): - raise NotImplementedError - - def op_exch(self, index): - raise NotImplementedError - - def op_index(self, index): - raise NotImplementedError - - def op_roll(self, index): - raise NotImplementedError - - # - # miscellaneous helpers - # - def alternatingLineto(self, isHorizontal): - args = self.popall() - for arg in args: - if isHorizontal: - point = (arg, 0) - else: - point = (0, arg) - self.rLineTo(point) - isHorizontal = not isHorizontal - - def vcurveto(self, args): - dya, dxb, dyb, dxc = args[:4] - args = args[4:] - if len(args) == 1: - dyc = args[0] - args = [] - else: - dyc = 0 - self.rCurveTo((0, dya), (dxb, dyb), (dxc, dyc)) - return args - - def hcurveto(self, args): - dxa, dxb, dyb, dyc = args[:4] - args = args[4:] - if len(args) == 1: - dxc = args[0] - args = [] - else: - dxc = 0 - self.rCurveTo((dxa, 0), (dxb, dyb), (dxc, dyc)) - return args - - -class T1OutlineExtractor(T2OutlineExtractor): - def __init__(self, pen, subrs): - self.pen = pen - self.subrs = subrs - self.reset() - - def reset(self): - self.flexing = 0 - self.width = 0 - self.sbx = 0 - T2OutlineExtractor.reset(self) - - def endPath(self): - if self.sawMoveTo: - 
self.pen.endPath() - self.sawMoveTo = 0 - - def popallWidth(self, evenOdd=0): - return self.popall() - - def exch(self): - stack = self.operandStack - stack[-1], stack[-2] = stack[-2], stack[-1] - - # - # path constructors - # - def op_rmoveto(self, index): - if self.flexing: - return - self.endPath() - self.rMoveTo(self.popall()) - - def op_hmoveto(self, index): - if self.flexing: - # We must add a parameter to the stack if we are flexing - self.push(0) - return - self.endPath() - self.rMoveTo((self.popall()[0], 0)) - - def op_vmoveto(self, index): - if self.flexing: - # We must add a parameter to the stack if we are flexing - self.push(0) - self.exch() - return - self.endPath() - self.rMoveTo((0, self.popall()[0])) - - def op_closepath(self, index): - self.closePath() - - def op_setcurrentpoint(self, index): - args = self.popall() - x, y = args - self.currentPoint = x, y - - def op_endchar(self, index): - self.endPath() - - def op_hsbw(self, index): - sbx, wx = self.popall() - self.width = wx - self.sbx = sbx - self.currentPoint = sbx, self.currentPoint[1] - - def op_sbw(self, index): - self.popall() # XXX - - # - def op_callsubr(self, index): - subrIndex = self.pop() - subr = self.subrs[subrIndex] - self.execute(subr) - - def op_callothersubr(self, index): - subrIndex = self.pop() - nArgs = self.pop() - # print nArgs, subrIndex, "callothersubr" - if subrIndex == 0 and nArgs == 3: - self.doFlex() - self.flexing = 0 - elif subrIndex == 1 and nArgs == 0: - self.flexing = 1 - # ignore... - - def op_pop(self, index): - pass # ignore... - - def doFlex(self): - finaly = self.pop() - finalx = self.pop() - self.pop() # flex height is unused - - p3y = self.pop() - p3x = self.pop() - bcp4y = self.pop() - bcp4x = self.pop() - bcp3y = self.pop() - bcp3x = self.pop() - p2y = self.pop() - p2x = self.pop() - bcp2y = self.pop() - bcp2x = self.pop() - bcp1y = self.pop() - bcp1x = self.pop() - rpy = self.pop() - rpx = self.pop() - - # call rrcurveto - self.push(bcp1x + rpx) - self.push(bcp1y + rpy) - self.push(bcp2x) - self.push(bcp2y) - self.push(p2x) - self.push(p2y) - self.op_rrcurveto(None) - - # call rrcurveto - self.push(bcp3x) - self.push(bcp3y) - self.push(bcp4x) - self.push(bcp4y) - self.push(p3x) - self.push(p3y) - self.op_rrcurveto(None) - - # Push back final coords so subr 0 can find them - self.push(finalx) - self.push(finaly) - - def op_dotsection(self, index): - self.popall() # XXX - - def op_hstem3(self, index): - self.popall() # XXX - - def op_seac(self, index): - "asb adx ady bchar achar seac" - from fontTools.encodings.StandardEncoding import StandardEncoding - - asb, adx, ady, bchar, achar = self.popall() - baseGlyph = StandardEncoding[bchar] - self.pen.addComponent(baseGlyph, (1, 0, 0, 1, 0, 0)) - accentGlyph = StandardEncoding[achar] - adx = adx + self.sbx - asb # seac weirdness - self.pen.addComponent(accentGlyph, (1, 0, 0, 1, adx, ady)) - - def op_vstem3(self, index): - self.popall() # XXX - - -class T2CharString(object): - - operandEncoding = t2OperandEncoding - operators, opcodes = buildOperatorDict(t2Operators) - decompilerClass = SimpleT2Decompiler - outlineExtractor = T2OutlineExtractor - - def __init__(self, bytecode=None, program=None, private=None, globalSubrs=None): - if program is None: - program = [] - self.bytecode = bytecode - self.program = program - self.private = private - self.globalSubrs = globalSubrs if globalSubrs is not None else [] - self._cur_vsindex = None - - def getNumRegions(self, vsindex=None): - pd = self.private - assert pd is not None - if vsindex is 
not None: - self._cur_vsindex = vsindex - elif self._cur_vsindex is None: - self._cur_vsindex = pd.vsindex if hasattr(pd, "vsindex") else 0 - return pd.getNumRegions(self._cur_vsindex) - - def __repr__(self): - if self.bytecode is None: - return "<%s (source) at %x>" % (self.__class__.__name__, id(self)) - else: - return "<%s (bytecode) at %x>" % (self.__class__.__name__, id(self)) - - def getIntEncoder(self): - return encodeIntT2 - - def getFixedEncoder(self): - return encodeFixed - - def decompile(self): - if not self.needsDecompilation(): - return - subrs = getattr(self.private, "Subrs", []) - decompiler = self.decompilerClass(subrs, self.globalSubrs, self.private) - decompiler.execute(self) - - def draw(self, pen, blender=None): - subrs = getattr(self.private, "Subrs", []) - extractor = self.outlineExtractor( - pen, - subrs, - self.globalSubrs, - self.private.nominalWidthX, - self.private.defaultWidthX, - self.private, - blender, - ) - extractor.execute(self) - self.width = extractor.width - - def calcBounds(self, glyphSet): - boundsPen = BoundsPen(glyphSet) - self.draw(boundsPen) - return boundsPen.bounds - - def compile(self, isCFF2=False): - if self.bytecode is not None: - return - opcodes = self.opcodes - program = self.program - - if isCFF2: - # If present, remove return and endchar operators. - if program and program[-1] in ("return", "endchar"): - program = program[:-1] - elif program and not isinstance(program[-1], str): - raise CharStringCompileError( - "T2CharString or Subr has items on the stack after last operator." - ) - - bytecode = [] - encodeInt = self.getIntEncoder() - encodeFixed = self.getFixedEncoder() - i = 0 - end = len(program) - while i < end: - token = program[i] - i = i + 1 - if isinstance(token, str): - try: - bytecode.extend(bytechr(b) for b in opcodes[token]) - except KeyError: - raise CharStringCompileError("illegal operator: %s" % token) - if token in ("hintmask", "cntrmask"): - bytecode.append(program[i]) # hint mask - i = i + 1 - elif isinstance(token, int): - bytecode.append(encodeInt(token)) - elif isinstance(token, float): - bytecode.append(encodeFixed(token)) - else: - assert 0, "unsupported type: %s" % type(token) - try: - bytecode = bytesjoin(bytecode) - except TypeError: - log.error(bytecode) - raise - self.setBytecode(bytecode) - - def needsDecompilation(self): - return self.bytecode is not None - - def setProgram(self, program): - self.program = program - self.bytecode = None - - def setBytecode(self, bytecode): - self.bytecode = bytecode - self.program = None - - def getToken(self, index, len=len, byteord=byteord, isinstance=isinstance): - if self.bytecode is not None: - if index >= len(self.bytecode): - return None, 0, 0 - b0 = byteord(self.bytecode[index]) - index = index + 1 - handler = self.operandEncoding[b0] - token, index = handler(self, b0, self.bytecode, index) - else: - if index >= len(self.program): - return None, 0, 0 - token = self.program[index] - index = index + 1 - isOperator = isinstance(token, str) - return token, isOperator, index - - def getBytes(self, index, nBytes): - if self.bytecode is not None: - newIndex = index + nBytes - bytes = self.bytecode[index:newIndex] - index = newIndex - else: - bytes = self.program[index] - index = index + 1 - assert len(bytes) == nBytes - return bytes, index - - def handle_operator(self, operator): - return operator - - def toXML(self, xmlWriter, ttFont=None): - from fontTools.misc.textTools import num2binary - - if self.bytecode is not None: - xmlWriter.dumphex(self.bytecode) - else: - 
index = 0 - args = [] - while True: - token, isOperator, index = self.getToken(index) - if token is None: - break - if isOperator: - if token in ("hintmask", "cntrmask"): - hintMask, isOperator, index = self.getToken(index) - bits = [] - for byte in hintMask: - bits.append(num2binary(byteord(byte), 8)) - hintMask = strjoin(bits) - line = " ".join(args + [token, hintMask]) - else: - line = " ".join(args + [token]) - xmlWriter.write(line) - xmlWriter.newline() - args = [] - else: - if isinstance(token, float): - token = floatToFixedToStr(token, precisionBits=16) - else: - token = str(token) - args.append(token) - if args: - # NOTE: only CFF2 charstrings/subrs can have numeric arguments on - # the stack after the last operator. Compiling this would fail if - # this is part of CFF 1.0 table. - line = " ".join(args) - xmlWriter.write(line) - - def fromXML(self, name, attrs, content): - from fontTools.misc.textTools import binary2num, readHex - - if attrs.get("raw"): - self.setBytecode(readHex(content)) - return - content = strjoin(content) - content = content.split() - program = [] - end = len(content) - i = 0 - while i < end: - token = content[i] - i = i + 1 - try: - token = int(token) - except ValueError: - try: - token = strToFixedToFloat(token, precisionBits=16) - except ValueError: - program.append(token) - if token in ("hintmask", "cntrmask"): - mask = content[i] - maskBytes = b"" - for j in range(0, len(mask), 8): - maskBytes = maskBytes + bytechr(binary2num(mask[j : j + 8])) - program.append(maskBytes) - i = i + 1 - else: - program.append(token) - else: - program.append(token) - self.setProgram(program) - - -class T1CharString(T2CharString): - - operandEncoding = t1OperandEncoding - operators, opcodes = buildOperatorDict(t1Operators) - - def __init__(self, bytecode=None, program=None, subrs=None): - super().__init__(bytecode, program) - self.subrs = subrs - - def getIntEncoder(self): - return encodeIntT1 - - def getFixedEncoder(self): - def encodeFixed(value): - raise TypeError("Type 1 charstrings don't support floating point operands") - - def decompile(self): - if self.bytecode is None: - return - program = [] - index = 0 - while True: - token, isOperator, index = self.getToken(index) - if token is None: - break - program.append(token) - self.setProgram(program) - - def draw(self, pen): - extractor = T1OutlineExtractor(pen, self.subrs) - extractor.execute(self) - self.width = extractor.width - - -class DictDecompiler(object): - - operandEncoding = cffDictOperandEncoding - - def __init__(self, strings, parent=None): - self.stack = [] - self.strings = strings - self.dict = {} - self.parent = parent - - def getDict(self): - assert len(self.stack) == 0, "non-empty stack" - return self.dict - - def decompile(self, data): - index = 0 - lenData = len(data) - push = self.stack.append - while index < lenData: - b0 = byteord(data[index]) - index = index + 1 - handler = self.operandEncoding[b0] - value, index = handler(self, b0, data, index) - if value is not None: - push(value) - - def pop(self): - value = self.stack[-1] - del self.stack[-1] - return value - - def popall(self): - args = self.stack[:] - del self.stack[:] - return args - - def handle_operator(self, operator): - operator, argType = operator - if isinstance(argType, tuple): - value = () - for i in range(len(argType) - 1, -1, -1): - arg = argType[i] - arghandler = getattr(self, "arg_" + arg) - value = (arghandler(operator),) + value - else: - arghandler = getattr(self, "arg_" + argType) - value = arghandler(operator) - if operator 
== "blend": - self.stack.extend(value) - else: - self.dict[operator] = value - - def arg_number(self, name): - if isinstance(self.stack[0], list): - out = self.arg_blend_number(self.stack) - else: - out = self.pop() - return out - - def arg_blend_number(self, name): - out = [] - blendArgs = self.pop() - numMasters = len(blendArgs) - out.append(blendArgs) - out.append("blend") - dummy = self.popall() - return blendArgs - - def arg_SID(self, name): - return self.strings[self.pop()] - - def arg_array(self, name): - return self.popall() - - def arg_blendList(self, name): - """ - There may be non-blend args at the top of the stack. We first calculate - where the blend args start in the stack. These are the last - numMasters*numBlends) +1 args. - The blend args starts with numMasters relative coordinate values, the BlueValues in the list from the default master font. This is followed by - numBlends list of values. Each of value in one of these lists is the - Variable Font delta for the matching region. - - We re-arrange this to be a list of numMaster entries. Each entry starts with the corresponding default font relative value, and is followed by - the delta values. We then convert the default values, the first item in each entry, to an absolute value. - """ - vsindex = self.dict.get("vsindex", 0) - numMasters = ( - self.parent.getNumRegions(vsindex) + 1 - ) # only a PrivateDict has blended ops. - numBlends = self.pop() - args = self.popall() - numArgs = len(args) - # The spec says that there should be no non-blended Blue Values,. - assert numArgs == numMasters * numBlends - value = [None] * numBlends - numDeltas = numMasters - 1 - i = 0 - prevVal = 0 - while i < numBlends: - newVal = args[i] + prevVal - prevVal = newVal - masterOffset = numBlends + (i * numDeltas) - blendList = [newVal] + args[masterOffset : masterOffset + numDeltas] - value[i] = blendList - i += 1 - return value - - def arg_delta(self, name): - valueList = self.popall() - out = [] - if valueList and isinstance(valueList[0], list): - # arg_blendList() has already converted these to absolute values. - out = valueList - else: - current = 0 - for v in valueList: - current = current + v - out.append(current) - return out - - -def calcSubrBias(subrs): - nSubrs = len(subrs) - if nSubrs < 1240: - bias = 107 - elif nSubrs < 33900: - bias = 1131 - else: - bias = 32768 - return bias diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/cu2quPen.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/cu2quPen.py deleted file mode 100644 index f182aed44a0e8a6dfd906c385f10a5f3a14c332e..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/cu2quPen.py +++ /dev/null @@ -1,325 +0,0 @@ -# Copyright 2016 Google Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import operator -from fontTools.cu2qu import curve_to_quadratic, curves_to_quadratic -from fontTools.pens.basePen import decomposeSuperBezierSegment -from fontTools.pens.filterPen import FilterPen -from fontTools.pens.reverseContourPen import ReverseContourPen -from fontTools.pens.pointPen import BasePointToSegmentPen -from fontTools.pens.pointPen import ReverseContourPointPen - - -class Cu2QuPen(FilterPen): - """A filter pen to convert cubic bezier curves to quadratic b-splines - using the FontTools SegmentPen protocol. - - Args: - - other_pen: another SegmentPen used to draw the transformed outline. - max_err: maximum approximation error in font units. For optimal results, - if you know the UPEM of the font, we recommend setting this to a - value equal, or close to UPEM / 1000. - reverse_direction: flip the contours' direction but keep starting point. - stats: a dictionary counting the point numbers of quadratic segments. - all_quadratic: if True (default), only quadratic b-splines are generated. - if False, quadratic curves or cubic curves are generated depending - on which one is more economical. - """ - - def __init__( - self, - other_pen, - max_err, - reverse_direction=False, - stats=None, - all_quadratic=True, - ): - if reverse_direction: - other_pen = ReverseContourPen(other_pen) - super().__init__(other_pen) - self.max_err = max_err - self.stats = stats - self.all_quadratic = all_quadratic - - def _convert_curve(self, pt1, pt2, pt3): - curve = (self.current_pt, pt1, pt2, pt3) - result = curve_to_quadratic(curve, self.max_err, self.all_quadratic) - if self.stats is not None: - n = str(len(result) - 2) - self.stats[n] = self.stats.get(n, 0) + 1 - if self.all_quadratic: - self.qCurveTo(*result[1:]) - else: - if len(result) == 3: - self.qCurveTo(*result[1:]) - else: - assert len(result) == 4 - super().curveTo(*result[1:]) - - def curveTo(self, *points): - n = len(points) - if n == 3: - # this is the most common case, so we special-case it - self._convert_curve(*points) - elif n > 3: - for segment in decomposeSuperBezierSegment(points): - self._convert_curve(*segment) - else: - self.qCurveTo(*points) - - -class Cu2QuPointPen(BasePointToSegmentPen): - """A filter pen to convert cubic bezier curves to quadratic b-splines - using the FontTools PointPen protocol. - - Args: - other_point_pen: another PointPen used to draw the transformed outline. - max_err: maximum approximation error in font units. For optimal results, - if you know the UPEM of the font, we recommend setting this to a - value equal, or close to UPEM / 1000. - reverse_direction: reverse the winding direction of all contours. - stats: a dictionary counting the point numbers of quadratic segments. - all_quadratic: if True (default), only quadratic b-splines are generated. - if False, quadratic curves or cubic curves are generated depending - on which one is more economical. 
- """ - - __points_required = { - "move": (1, operator.eq), - "line": (1, operator.eq), - "qcurve": (2, operator.ge), - "curve": (3, operator.eq), - } - - def __init__( - self, - other_point_pen, - max_err, - reverse_direction=False, - stats=None, - all_quadratic=True, - ): - BasePointToSegmentPen.__init__(self) - if reverse_direction: - self.pen = ReverseContourPointPen(other_point_pen) - else: - self.pen = other_point_pen - self.max_err = max_err - self.stats = stats - self.all_quadratic = all_quadratic - - def _flushContour(self, segments): - assert len(segments) >= 1 - closed = segments[0][0] != "move" - new_segments = [] - prev_points = segments[-1][1] - prev_on_curve = prev_points[-1][0] - for segment_type, points in segments: - if segment_type == "curve": - for sub_points in self._split_super_bezier_segments(points): - on_curve, smooth, name, kwargs = sub_points[-1] - bcp1, bcp2 = sub_points[0][0], sub_points[1][0] - cubic = [prev_on_curve, bcp1, bcp2, on_curve] - quad = curve_to_quadratic(cubic, self.max_err, self.all_quadratic) - if self.stats is not None: - n = str(len(quad) - 2) - self.stats[n] = self.stats.get(n, 0) + 1 - new_points = [(pt, False, None, {}) for pt in quad[1:-1]] - new_points.append((on_curve, smooth, name, kwargs)) - if self.all_quadratic or len(new_points) == 2: - new_segments.append(["qcurve", new_points]) - else: - new_segments.append(["curve", new_points]) - prev_on_curve = sub_points[-1][0] - else: - new_segments.append([segment_type, points]) - prev_on_curve = points[-1][0] - if closed: - # the BasePointToSegmentPen.endPath method that calls _flushContour - # rotates the point list of closed contours so that they end with - # the first on-curve point. We restore the original starting point. - new_segments = new_segments[-1:] + new_segments[:-1] - self._drawPoints(new_segments) - - def _split_super_bezier_segments(self, points): - sub_segments = [] - # n is the number of control points - n = len(points) - 1 - if n == 2: - # a simple bezier curve segment - sub_segments.append(points) - elif n > 2: - # a "super" bezier; decompose it - on_curve, smooth, name, kwargs = points[-1] - num_sub_segments = n - 1 - for i, sub_points in enumerate( - decomposeSuperBezierSegment([pt for pt, _, _, _ in points]) - ): - new_segment = [] - for point in sub_points[:-1]: - new_segment.append((point, False, None, {})) - if i == (num_sub_segments - 1): - # the last on-curve keeps its original attributes - new_segment.append((on_curve, smooth, name, kwargs)) - else: - # on-curves of sub-segments are always "smooth" - new_segment.append((sub_points[-1], True, None, {})) - sub_segments.append(new_segment) - else: - raise AssertionError("expected 2 control points, found: %d" % n) - return sub_segments - - def _drawPoints(self, segments): - pen = self.pen - pen.beginPath() - last_offcurves = [] - points_required = self.__points_required - for i, (segment_type, points) in enumerate(segments): - if segment_type in points_required: - n, op = points_required[segment_type] - assert op(len(points), n), ( - f"illegal {segment_type!r} segment point count: " - f"expected {n}, got {len(points)}" - ) - offcurves = points[:-1] - if i == 0: - # any off-curve points preceding the first on-curve - # will be appended at the end of the contour - last_offcurves = offcurves - else: - for (pt, smooth, name, kwargs) in offcurves: - pen.addPoint(pt, None, smooth, name, **kwargs) - pt, smooth, name, kwargs = points[-1] - if pt is None: - assert segment_type == "qcurve" - # special quadratic contour with 
no on-curve points: - # we need to skip the "None" point. See also the Pen - # protocol's qCurveTo() method and fontTools.pens.basePen - pass - else: - pen.addPoint(pt, segment_type, smooth, name, **kwargs) - else: - raise AssertionError("unexpected segment type: %r" % segment_type) - for (pt, smooth, name, kwargs) in last_offcurves: - pen.addPoint(pt, None, smooth, name, **kwargs) - pen.endPath() - - def addComponent(self, baseGlyphName, transformation): - assert self.currentPath is None - self.pen.addComponent(baseGlyphName, transformation) - - -class Cu2QuMultiPen: - """A filter multi-pen to convert cubic bezier curves to quadratic b-splines - in a interpolation-compatible manner, using the FontTools SegmentPen protocol. - - Args: - - other_pens: list of SegmentPens used to draw the transformed outlines. - max_err: maximum approximation error in font units. For optimal results, - if you know the UPEM of the font, we recommend setting this to a - value equal, or close to UPEM / 1000. - reverse_direction: flip the contours' direction but keep starting point. - - This pen does not follow the normal SegmentPen protocol. Instead, its - moveTo/lineTo/qCurveTo/curveTo methods take a list of tuples that are - arguments that would normally be passed to a SegmentPen, one item for - each of the pens in other_pens. - """ - - # TODO Simplify like 3e8ebcdce592fe8a59ca4c3a294cc9724351e1ce - # Remove start_pts and _add_moveTO - - def __init__(self, other_pens, max_err, reverse_direction=False): - if reverse_direction: - other_pens = [ - ReverseContourPen(pen, outputImpliedClosingLine=True) - for pen in other_pens - ] - self.pens = other_pens - self.max_err = max_err - self.start_pts = None - self.current_pts = None - - def _check_contour_is_open(self): - if self.current_pts is None: - raise AssertionError("moveTo is required") - - def _check_contour_is_closed(self): - if self.current_pts is not None: - raise AssertionError("closePath or endPath is required") - - def _add_moveTo(self): - if self.start_pts is not None: - for pt, pen in zip(self.start_pts, self.pens): - pen.moveTo(*pt) - self.start_pts = None - - def moveTo(self, pts): - self._check_contour_is_closed() - self.start_pts = self.current_pts = pts - self._add_moveTo() - - def lineTo(self, pts): - self._check_contour_is_open() - self._add_moveTo() - for pt, pen in zip(pts, self.pens): - pen.lineTo(*pt) - self.current_pts = pts - - def qCurveTo(self, pointsList): - self._check_contour_is_open() - if len(pointsList[0]) == 1: - self.lineTo([(points[0],) for points in pointsList]) - return - self._add_moveTo() - current_pts = [] - for points, pen in zip(pointsList, self.pens): - pen.qCurveTo(*points) - current_pts.append((points[-1],)) - self.current_pts = current_pts - - def _curves_to_quadratic(self, pointsList): - curves = [] - for current_pt, points in zip(self.current_pts, pointsList): - curves.append(current_pt + points) - quadratics = curves_to_quadratic(curves, [self.max_err] * len(curves)) - pointsList = [] - for quadratic in quadratics: - pointsList.append(quadratic[1:]) - self.qCurveTo(pointsList) - - def curveTo(self, pointsList): - self._check_contour_is_open() - self._curves_to_quadratic(pointsList) - - def closePath(self): - self._check_contour_is_open() - if self.start_pts is None: - for pen in self.pens: - pen.closePath() - self.current_pts = self.start_pts = None - - def endPath(self): - self._check_contour_is_open() - if self.start_pts is None: - for pen in self.pens: - pen.endPath() - self.current_pts = self.start_pts = None - 
- def addComponent(self, glyphName, transformations): - self._check_contour_is_closed() - for trans, pen in zip(transformations, self.pens): - pen.addComponent(glyphName, trans) diff --git a/spaces/DaFujaTyping/hf-Chat-ui/src/lib/constants/publicSepToken.ts b/spaces/DaFujaTyping/hf-Chat-ui/src/lib/constants/publicSepToken.ts deleted file mode 100644 index 15d962d69ba33e1abeb8a35885aa7647d24cf7af..0000000000000000000000000000000000000000 --- a/spaces/DaFujaTyping/hf-Chat-ui/src/lib/constants/publicSepToken.ts +++ /dev/null @@ -1 +0,0 @@ -export const PUBLIC_SEP_TOKEN = ""; diff --git a/spaces/Datasculptor/MusicGen/tests/modules/test_rope.py b/spaces/Datasculptor/MusicGen/tests/modules/test_rope.py deleted file mode 100644 index 067c6f067acbf27fb0fef5c2b812c22474c4fcd0..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/MusicGen/tests/modules/test_rope.py +++ /dev/null @@ -1,168 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from audiocraft.modules.rope import RotaryEmbedding -from audiocraft.modules.transformer import StreamingTransformer, set_efficient_attention_backend - - -def test_rope(): - set_efficient_attention_backend('xformers') - B, T, H, C = 8, 75, 16, 128 - - rope = RotaryEmbedding(dim=C) - xq = torch.rand((B, T, H, C)) - xk = torch.rand((B, T, H, C)) - xq_out, xk_out = rope.rotate_qk(xq, xk, start=7) - - assert list(xq_out.shape) == [B, T, H, C] - assert list(xk_out.shape) == [B, T, H, C] - - -def test_rope_io_dtypes(): - set_efficient_attention_backend('xformers') - B, T, H, C = 8, 75, 16, 128 - - rope_32 = RotaryEmbedding(dim=C, dtype=torch.float32) - rope_64 = RotaryEmbedding(dim=C, dtype=torch.float64) - - # Test bfloat16 inputs w/ both 32 and 64 precision rope. - xq_16 = torch.rand((B, T, H, C)).to(torch.bfloat16) - xk_16 = torch.rand((B, T, H, C)).to(torch.bfloat16) - xq_out, xk_out = rope_32.rotate_qk(xq_16, xk_16) - assert xq_out.dtype == torch.bfloat16 - xq_out, xk_out = rope_64.rotate_qk(xq_16, xk_16) - assert xq_out.dtype == torch.bfloat16 - - # Test float32 inputs w/ both 32 and 64 precision rope. 
- xq_32 = torch.rand((B, T, H, C)).to(torch.float32) - xk_32 = torch.rand((B, T, H, C)).to(torch.float32) - xq_out, xk_out = rope_32.rotate_qk(xq_32, xk_32) - assert xq_out.dtype == torch.float32 - xq_out, xk_out = rope_64.rotate_qk(xq_32, xk_32) - assert xq_out.dtype == torch.float32 - - -def test_transformer_with_rope(): - set_efficient_attention_backend('xformers') - torch.manual_seed(1234) - for pos in ['rope', 'sin_rope']: - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., layer_scale=0.1, - positional_embedding=pos) - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - out = tr(x) - assert list(out.shape) == list(x.shape) - - -@torch.no_grad() -def test_rope_streaming(): - set_efficient_attention_backend('xformers') - torch.manual_seed(1234) - tr = StreamingTransformer( - 16, 4, 2, causal=True, dropout=0., - custom=True, positional_embedding='rope') - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - ref = tr(x) - - with tr.streaming(): - outs = [] - frame_sizes = [1] * steps - - for frame_size in frame_sizes: - frame = x[:, :frame_size] - x = x[:, frame_size:] - outs.append(tr(frame)) - - out = torch.cat(outs, dim=1) - assert list(out.shape) == [3, steps, 16] - delta = torch.norm(out - ref) / torch.norm(out) - assert delta < 1e-6, delta - - -@torch.no_grad() -def test_rope_streaming_past_context(): - set_efficient_attention_backend('xformers') - torch.manual_seed(1234) - - for context in [None, 10]: - tr = StreamingTransformer( - 16, 4, 1 if context else 2, - causal=True, past_context=context, custom=True, - dropout=0., positional_embedding='rope') - tr.eval() - - steps = 20 - x = torch.randn(3, steps, 16) - ref = tr(x) - - with tr.streaming(): - outs = [] - frame_sizes = [1] * steps - - for frame_size in frame_sizes: - frame = x[:, :frame_size] - x = x[:, frame_size:] - outs.append(tr(frame)) - - out = torch.cat(outs, dim=1) - assert list(out.shape) == [3, steps, 16] - delta = torch.norm(out - ref) / torch.norm(out) - assert delta < 1e-6, delta - - -def test_rope_memory_efficient(): - set_efficient_attention_backend('xformers') - torch.manual_seed(1234) - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., layer_scale=0.1, - positional_embedding='rope') - tr_mem_efficient = StreamingTransformer( - 16, 4, 2, dropout=0., memory_efficient=True, layer_scale=0.1, - positional_embedding='rope') - tr_mem_efficient.load_state_dict(tr.state_dict()) - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - with torch.no_grad(): - y = tr(x) - y2 = tr_mem_efficient(x) - # Check at float precision b/c this is the rope default. 
- assert torch.allclose(y, y2, atol=1e-7), (y - y2).norm() - - -def test_rope_with_xpos(): - set_efficient_attention_backend('xformers') - B, T, H, C = 8, 75, 16, 128 - - rope = RotaryEmbedding(dim=C, xpos=True) - xq = torch.rand((B, T, H, C)) - xk = torch.rand((B, T, H, C)) - xq_out, xk_out = rope.rotate_qk(xq, xk, start=7) - - assert list(xq_out.shape) == [B, T, H, C] - assert list(xk_out.shape) == [B, T, H, C] - - -def test_positional_scale(): - set_efficient_attention_backend('xformers') - B, T, H, C = 8, 75, 16, 128 - - rope = RotaryEmbedding(dim=C, xpos=True, scale=0.0) - xq = torch.rand((B, T, H, C)) - xk = torch.rand((B, T, H, C)) - xq_out, xk_out = rope.rotate_qk(xq, xk, start=7) - - assert torch.allclose(xq, xq_out) - assert torch.allclose(xk, xk_out) diff --git a/spaces/DiamondYin/AnewGame/Build/WaliwebGLgameFPS.framework.js b/spaces/DiamondYin/AnewGame/Build/WaliwebGLgameFPS.framework.js deleted file mode 100644 index 4ba2a954f48a58c730cad5e1c7a8c89334f16626..0000000000000000000000000000000000000000 --- a/spaces/DiamondYin/AnewGame/Build/WaliwebGLgameFPS.framework.js +++ /dev/null @@ -1,22 +0,0 @@ - -var unityFramework = (() => { - var _scriptDir = typeof document !== 'undefined' && document.currentScript ? document.currentScript.src : undefined; - if (typeof __filename !== 'undefined') _scriptDir = _scriptDir || __filename; - return ( -function(unityFramework) { - unityFramework = unityFramework || {}; - -var Module=typeof unityFramework!="undefined"?unityFramework:{};var readyPromiseResolve,readyPromiseReject;Module["ready"]=new Promise(function(resolve,reject){readyPromiseResolve=resolve;readyPromiseReject=reject}); -function Pointer_stringify(s,len){warnOnce("The JavaScript function 'Pointer_stringify(ptrToSomeCString)' is obsoleted and will be removed in a future Unity version. Please call 'UTF8ToString(ptrToSomeCString)' instead.");return UTF8ToString(s,len)}Module["Pointer_stringify"]=Pointer_stringify;var stackTraceReference="(^|\\n)(\\s+at\\s+|)jsStackTrace(\\s+\\(|@)([^\\n]+):\\d+:\\d+(\\)|)(\\n|$)";var stackTraceReferenceMatch=jsStackTrace().match(new RegExp(stackTraceReference));if(stackTraceReferenceMatch)Module.stackTraceRegExp=new RegExp(stackTraceReference.replace("([^\\n]+)",stackTraceReferenceMatch[4].replace(/[\\^${}[\]().*+?|]/g,"\\$&")).replace("jsStackTrace","[^\\n]+"));var abort=function(what){if(ABORT)return;ABORT=true;EXITSTATUS=1;if(typeof ENVIRONMENT_IS_PTHREAD!=="undefined"&&ENVIRONMENT_IS_PTHREAD)console.error("Pthread aborting at "+(new Error).stack);if(what!==undefined){out(what);err(what);what=JSON.stringify(what)}else{what=""}var message="abort("+what+") at "+stackTrace();if(Module.abortHandler&&Module.abortHandler(message))return;throw message};Module["SetFullscreen"]=function(fullscreen){if(typeof runtimeInitialized==="undefined"||!runtimeInitialized){console.log("Runtime not initialized yet.")}else if(typeof JSEvents==="undefined"){console.log("Player not loaded yet.")}else{var tmp=JSEvents.canPerformEventHandlerRequests;JSEvents.canPerformEventHandlerRequests=function(){return 1};Module.ccall("SetFullscreen",null,["number"],[fullscreen]);JSEvents.canPerformEventHandlerRequests=tmp}};if(!Module["ENVIRONMENT_IS_PTHREAD"]){Module["preRun"].push(function(){var unityFileSystemInit=Module["unityFileSystemInit"]||function(){FS.mkdir("/idbfs");FS.mount(IDBFS,{},"/idbfs");Module.addRunDependency("JS_FileSystem_Mount");FS.syncfs(true,function(err){if(err)console.log("IndexedDB is not available. 
Data will not persist in cache and PlayerPrefs will not be saved.");Module.removeRunDependency("JS_FileSystem_Mount")})};unityFileSystemInit()})}var videoInputDevices=[];var videoInputDevicesEnumerated=false;var removeEnumerateMediaDevicesRunDependency;var enumerateWatchdog=null;function matchToOldDevice(newDevice){var oldDevices=Object.keys(videoInputDevices);for(var i=0;i{throw toThrow};var ENVIRONMENT_IS_WEB=typeof window=="object";var ENVIRONMENT_IS_WORKER=typeof importScripts=="function";var ENVIRONMENT_IS_NODE=typeof process=="object"&&typeof process.versions=="object"&&typeof process.versions.node=="string";var scriptDirectory="";function locateFile(path){if(Module["locateFile"]){return Module["locateFile"](path,scriptDirectory)}return scriptDirectory+path}var read_,readAsync,readBinary,setWindowTitle;function logExceptionOnExit(e){if(e instanceof ExitStatus)return;let toLog=e;err("exiting due to exception: "+toLog)}var fs;var nodePath;var requireNodeFS;if(ENVIRONMENT_IS_NODE){if(ENVIRONMENT_IS_WORKER){scriptDirectory=require("path").dirname(scriptDirectory)+"/"}else{scriptDirectory=__dirname+"/"}requireNodeFS=(()=>{if(!nodePath){fs=require("fs");nodePath=require("path")}});read_=function shell_read(filename,binary){requireNodeFS();filename=nodePath["normalize"](filename);return fs.readFileSync(filename,binary?undefined:"utf8")};readBinary=(filename=>{var ret=read_(filename,true);if(!ret.buffer){ret=new Uint8Array(ret)}return ret});readAsync=((filename,onload,onerror)=>{requireNodeFS();filename=nodePath["normalize"](filename);fs.readFile(filename,function(err,data){if(err)onerror(err);else onload(data.buffer)})});if(process["argv"].length>1){thisProgram=process["argv"][1].replace(/\\/g,"/")}arguments_=process["argv"].slice(2);process["on"]("uncaughtException",function(ex){if(!(ex instanceof ExitStatus)){throw ex}});process["on"]("unhandledRejection",function(reason){throw reason});quit_=((status,toThrow)=>{if(keepRuntimeAlive()){process["exitCode"]=status;throw toThrow}logExceptionOnExit(toThrow);process["exit"](status)});Module["inspect"]=function(){return"[Emscripten Module object]"}}else if(ENVIRONMENT_IS_WEB||ENVIRONMENT_IS_WORKER){if(ENVIRONMENT_IS_WORKER){scriptDirectory=self.location.href}else if(typeof document!="undefined"&&document.currentScript){scriptDirectory=document.currentScript.src}if(_scriptDir){scriptDirectory=_scriptDir}if(scriptDirectory.indexOf("blob:")!==0){scriptDirectory=scriptDirectory.substr(0,scriptDirectory.replace(/[?#].*/,"").lastIndexOf("/")+1)}else{scriptDirectory=""}{read_=(url=>{var xhr=new XMLHttpRequest;xhr.open("GET",url,false);xhr.send(null);return xhr.responseText});if(ENVIRONMENT_IS_WORKER){readBinary=(url=>{var xhr=new XMLHttpRequest;xhr.open("GET",url,false);xhr.responseType="arraybuffer";xhr.send(null);return new Uint8Array(xhr.response)})}readAsync=((url,onload,onerror)=>{var xhr=new XMLHttpRequest;xhr.open("GET",url,true);xhr.responseType="arraybuffer";xhr.onload=(()=>{if(xhr.status==200||xhr.status==0&&xhr.response){onload(xhr.response);return}onerror()});xhr.onerror=onerror;xhr.send(null)})}setWindowTitle=(title=>document.title=title)}else{}var out=Module["print"]||console.log.bind(console);var err=Module["printErr"]||console.warn.bind(console);Object.assign(Module,moduleOverrides);moduleOverrides=null;if(Module["arguments"])arguments_=Module["arguments"];if(Module["thisProgram"])thisProgram=Module["thisProgram"];if(Module["quit"])quit_=Module["quit"];var POINTER_SIZE=4;function 
warnOnce(text){if(!warnOnce.shown)warnOnce.shown={};if(!warnOnce.shown[text]){warnOnce.shown[text]=1;err(text)}}function convertJsFunctionToWasm(func,sig){if(typeof WebAssembly.Function=="function"){var typeNames={"i":"i32","j":"i64","f":"f32","d":"f64"};var type={parameters:[],results:sig[0]=="v"?[]:[typeNames[sig[0]]]};for(var i=1;i{tempRet0=value};var getTempRet0=()=>tempRet0;var wasmBinary;if(Module["wasmBinary"])wasmBinary=Module["wasmBinary"];var noExitRuntime=Module["noExitRuntime"]||true;if(typeof WebAssembly!="object"){abort("no native wasm support detected")}var wasmMemory;var ABORT=false;var EXITSTATUS;function assert(condition,text){if(!condition){abort(text)}}function getCFunc(ident){var func=Module["_"+ident];return func}function ccall(ident,returnType,argTypes,args,opts){var toC={"string":function(str){var ret=0;if(str!==null&&str!==undefined&&str!==0){var len=(str.length<<2)+1;ret=stackAlloc(len);stringToUTF8(str,ret,len)}return ret},"array":function(arr){var ret=stackAlloc(arr.length);writeArrayToMemory(arr,ret);return ret}};function convertReturnValue(ret){if(returnType==="string")return UTF8ToString(ret);if(returnType==="boolean")return Boolean(ret);return ret}var func=getCFunc(ident);var cArgs=[];var stack=0;if(args){for(var i=0;i=endIdx))++endPtr;if(endPtr-idx>16&&heapOrArray.buffer&&UTF8Decoder){return UTF8Decoder.decode(heapOrArray.subarray(idx,endPtr))}else{var str="";while(idx>10,56320|ch&1023)}}}return str}function UTF8ToString(ptr,maxBytesToRead){return ptr?UTF8ArrayToString(HEAPU8,ptr,maxBytesToRead):""}function stringToUTF8Array(str,heap,outIdx,maxBytesToWrite){if(!(maxBytesToWrite>0))return 0;var startIdx=outIdx;var endIdx=outIdx+maxBytesToWrite-1;for(var i=0;i=55296&&u<=57343){var u1=str.charCodeAt(++i);u=65536+((u&1023)<<10)|u1&1023}if(u<=127){if(outIdx>=endIdx)break;heap[outIdx++]=u}else if(u<=2047){if(outIdx+1>=endIdx)break;heap[outIdx++]=192|u>>6;heap[outIdx++]=128|u&63}else if(u<=65535){if(outIdx+2>=endIdx)break;heap[outIdx++]=224|u>>12;heap[outIdx++]=128|u>>6&63;heap[outIdx++]=128|u&63}else{if(outIdx+3>=endIdx)break;heap[outIdx++]=240|u>>18;heap[outIdx++]=128|u>>12&63;heap[outIdx++]=128|u>>6&63;heap[outIdx++]=128|u&63}}heap[outIdx]=0;return outIdx-startIdx}function stringToUTF8(str,outPtr,maxBytesToWrite){return stringToUTF8Array(str,HEAPU8,outPtr,maxBytesToWrite)}function lengthBytesUTF8(str){var len=0;for(var i=0;i=55296&&u<=57343)u=65536+((u&1023)<<10)|str.charCodeAt(++i)&1023;if(u<=127)++len;else if(u<=2047)len+=2;else if(u<=65535)len+=3;else len+=4}return len}var UTF16Decoder=typeof TextDecoder!="undefined"?new TextDecoder("utf-16le"):undefined;function allocateUTF8(str){var size=lengthBytesUTF8(str)+1;var ret=_malloc(size);if(ret)stringToUTF8Array(str,HEAP8,ret,size);return ret}function allocateUTF8OnStack(str){var size=lengthBytesUTF8(str)+1;var ret=stackAlloc(size);stringToUTF8Array(str,HEAP8,ret,size);return ret}function writeArrayToMemory(array,buffer){HEAP8.set(array,buffer)}function writeAsciiToMemory(str,buffer,dontAddNull){for(var i=0;i>0]=str.charCodeAt(i)}if(!dontAddNull)HEAP8[buffer>>0]=0}var buffer,HEAP8,HEAPU8,HEAP16,HEAPU16,HEAP32,HEAPU32,HEAPF32,HEAPF64;function updateGlobalBufferAndViews(buf){buffer=buf;Module["HEAP8"]=HEAP8=new Int8Array(buf);Module["HEAP16"]=HEAP16=new Int16Array(buf);Module["HEAP32"]=HEAP32=new Int32Array(buf);Module["HEAPU8"]=HEAPU8=new Uint8Array(buf);Module["HEAPU16"]=HEAPU16=new Uint16Array(buf);Module["HEAPU32"]=HEAPU32=new Uint32Array(buf);Module["HEAPF32"]=HEAPF32=new 
Float32Array(buf);Module["HEAPF64"]=HEAPF64=new Float64Array(buf)}var INITIAL_MEMORY=Module["INITIAL_MEMORY"]||33554432;var wasmTable;var __ATPRERUN__=[];var __ATINIT__=[];var __ATMAIN__=[];var __ATEXIT__=[];var __ATPOSTRUN__=[];var runtimeInitialized=false;function keepRuntimeAlive(){return noExitRuntime}function preRun(){if(Module["preRun"]){if(typeof Module["preRun"]=="function")Module["preRun"]=[Module["preRun"]];while(Module["preRun"].length){addOnPreRun(Module["preRun"].shift())}}callRuntimeCallbacks(__ATPRERUN__)}function initRuntime(){runtimeInitialized=true;if(!Module["noFSInit"]&&!FS.init.initialized)FS.init();FS.ignorePermissions=false;TTY.init();SOCKFS.root=FS.mount(SOCKFS,{},null);PIPEFS.root=FS.mount(PIPEFS,{},null);callRuntimeCallbacks(__ATINIT__)}function preMain(){callRuntimeCallbacks(__ATMAIN__)}function postRun(){if(Module["postRun"]){if(typeof Module["postRun"]=="function")Module["postRun"]=[Module["postRun"]];while(Module["postRun"].length){addOnPostRun(Module["postRun"].shift())}}callRuntimeCallbacks(__ATPOSTRUN__)}function addOnPreRun(cb){__ATPRERUN__.unshift(cb)}function addOnInit(cb){__ATINIT__.unshift(cb)}function addOnPostRun(cb){__ATPOSTRUN__.unshift(cb)}var runDependencies=0;var runDependencyWatcher=null;var dependenciesFulfilled=null;function getUniqueRunDependency(id){return id}function addRunDependency(id){runDependencies++;if(Module["monitorRunDependencies"]){Module["monitorRunDependencies"](runDependencies)}}function removeRunDependency(id){runDependencies--;if(Module["monitorRunDependencies"]){Module["monitorRunDependencies"](runDependencies)}if(runDependencies==0){if(runDependencyWatcher!==null){clearInterval(runDependencyWatcher);runDependencyWatcher=null}if(dependenciesFulfilled){var callback=dependenciesFulfilled;dependenciesFulfilled=null;callback()}}}Module["preloadedImages"]={};Module["preloadedAudios"]={};function abort(what){{if(Module["onAbort"]){Module["onAbort"](what)}}what="Aborted("+what+")";err(what);ABORT=true;EXITSTATUS=1;what+=". 
Build with -s ASSERTIONS=1 for more info.";var e=new WebAssembly.RuntimeError(what);readyPromiseReject(e);throw e}var dataURIPrefix="data:application/octet-stream;base64,";function isDataURI(filename){return filename.startsWith(dataURIPrefix)}function isFileURI(filename){return filename.startsWith("file://")}var wasmBinaryFile;wasmBinaryFile="build.wasm";if(!isDataURI(wasmBinaryFile)){wasmBinaryFile=locateFile(wasmBinaryFile)}function getBinary(file){try{if(file==wasmBinaryFile&&wasmBinary){return new Uint8Array(wasmBinary)}if(readBinary){return readBinary(file)}else{throw"both async and sync fetching of the wasm failed"}}catch(err){abort(err)}}function getBinaryPromise(){if(!wasmBinary&&(ENVIRONMENT_IS_WEB||ENVIRONMENT_IS_WORKER)){if(typeof fetch=="function"&&!isFileURI(wasmBinaryFile)){return fetch(wasmBinaryFile,{credentials:"same-origin"}).then(function(response){if(!response["ok"]){throw"failed to load wasm binary file at '"+wasmBinaryFile+"'"}return response["arrayBuffer"]()}).catch(function(){return getBinary(wasmBinaryFile)})}else{if(readAsync){return new Promise(function(resolve,reject){readAsync(wasmBinaryFile,function(response){resolve(new Uint8Array(response))},reject)})}}}return Promise.resolve().then(function(){return getBinary(wasmBinaryFile)})}function createWasm(){var info={"env":asmLibraryArg,"wasi_snapshot_preview1":asmLibraryArg};function receiveInstance(instance,module){var exports=instance.exports;Module["asm"]=exports;wasmMemory=Module["asm"]["memory"];updateGlobalBufferAndViews(wasmMemory.buffer);wasmTable=Module["asm"]["__indirect_function_table"];addOnInit(Module["asm"]["__wasm_call_ctors"]);removeRunDependency("wasm-instantiate")}addRunDependency("wasm-instantiate");function receiveInstantiationResult(result){receiveInstance(result["instance"])}function instantiateArrayBuffer(receiver){return getBinaryPromise().then(function(binary){return WebAssembly.instantiate(binary,info)}).then(function(instance){return instance}).then(receiver,function(reason){err("failed to asynchronously prepare wasm: "+reason);abort(reason)})}function instantiateAsync(){if(!wasmBinary&&typeof WebAssembly.instantiateStreaming=="function"&&!isDataURI(wasmBinaryFile)&&!isFileURI(wasmBinaryFile)&&typeof fetch=="function"){return fetch(wasmBinaryFile,{credentials:"same-origin"}).then(function(response){var result=WebAssembly.instantiateStreaming(response,info);return result.then(receiveInstantiationResult,function(reason){err("wasm streaming compile failed: "+reason);err("falling back to ArrayBuffer instantiation");return instantiateArrayBuffer(receiveInstantiationResult)})})}else{return instantiateArrayBuffer(receiveInstantiationResult)}}if(Module["instantiateWasm"]){try{var exports=Module["instantiateWasm"](info,receiveInstance);return exports}catch(e){err("Module.instantiateWasm callback failed with error: "+e);return false}}instantiateAsync().catch(readyPromiseReject);return{}}var tempDouble;var tempI64;var ASM_CONSTS={4007452:function(){return Module.webglContextAttributes.premultipliedAlpha},4007513:function(){return Module.webglContextAttributes.preserveDrawingBuffer},4007577:function(){return Module.webglContextAttributes.powerPreference}};function callRuntimeCallbacks(callbacks){while(callbacks.length>0){var callback=callbacks.shift();if(typeof callback=="function"){callback(Module);continue}var func=callback.func;if(typeof 
func=="number"){if(callback.arg===undefined){(function(){dynCall_v.call(null,func)})()}else{(function(a1){dynCall_vi.apply(null,[func,a1])})(callback.arg)}}else{func(callback.arg===undefined?null:callback.arg)}}}function withStackSave(f){var stack=stackSave();var ret=f();stackRestore(stack);return ret}function demangle(func){return func}function demangleAll(text){var regex=/\b_Z[\w\d_]+/g;return text.replace(regex,function(x){var y=demangle(x);return x===y?x:y+" ["+x+"]"})}function dynCallLegacy(sig,ptr,args){var f=Module["dynCall_"+sig];return args&&args.length?f.apply(null,[ptr].concat(args)):f.call(null,ptr)}var wasmTableMirror=[];function getWasmTableEntry(funcPtr){var func=wasmTableMirror[funcPtr];if(!func){if(funcPtr>=wasmTableMirror.length)wasmTableMirror.length=funcPtr+1;wasmTableMirror[funcPtr]=func=wasmTable.get(funcPtr)}return func}function dynCall(sig,ptr,args){return dynCallLegacy(sig,ptr,args)}function handleException(e){if(e instanceof ExitStatus||e=="unwind"){return EXITSTATUS}quit_(1,e)}function jsStackTrace(){var error=new Error;if(!error.stack){try{throw new Error}catch(e){error=e}if(!error.stack){return"(no stack trace available)"}}return error.stack.toString()}function setWasmTableEntry(idx,func){wasmTable.set(idx,func);wasmTableMirror[idx]=func}function stackTrace(){var js=jsStackTrace();if(Module["extraStackTrace"])js+="\n"+Module["extraStackTrace"]();return demangleAll(js)}function _GetJSMemoryInfo(totalJSptr,usedJSptr){if(performance.memory){HEAPF64[totalJSptr>>3]=performance.memory.totalJSHeapSize;HEAPF64[usedJSptr>>3]=performance.memory.usedJSHeapSize}else{HEAPF64[totalJSptr>>3]=NaN;HEAPF64[usedJSptr>>3]=NaN}}var JS_Accelerometer=null;var JS_Accelerometer_callback=0;function _JS_Accelerometer_IsRunning(){return JS_Accelerometer&&JS_Accelerometer.activated||JS_Accelerometer_callback!=0}var JS_Accelerometer_multiplier=1;var JS_Accelerometer_lastValue={x:0,y:0,z:0};function JS_Accelerometer_eventHandler(){JS_Accelerometer_lastValue={x:JS_Accelerometer.x*JS_Accelerometer_multiplier,y:JS_Accelerometer.y*JS_Accelerometer_multiplier,z:JS_Accelerometer.z*JS_Accelerometer_multiplier};if(JS_Accelerometer_callback!=0)dynCall_vfff(JS_Accelerometer_callback,JS_Accelerometer_lastValue.x,JS_Accelerometer_lastValue.y,JS_Accelerometer_lastValue.z)}var JS_Accelerometer_frequencyRequest=0;var JS_Accelerometer_frequency=0;var JS_LinearAccelerationSensor_callback=0;var JS_GravitySensor_callback=0;var JS_Gyroscope_callback=0;function JS_ComputeGravity(accelerometerValue,linearAccelerationValue){var difference={x:accelerometerValue.x-linearAccelerationValue.x,y:accelerometerValue.y-linearAccelerationValue.y,z:accelerometerValue.z-linearAccelerationValue.z};var differenceMagnitudeSq=difference.x*difference.x+difference.y*difference.y+difference.z*difference.z;var sum={x:accelerometerValue.x+linearAccelerationValue.x,y:accelerometerValue.y+linearAccelerationValue.y,z:accelerometerValue.z+linearAccelerationValue.z};var sumMagnitudeSq=sum.x*sum.x+sum.y*sum.y+sum.z*sum.z;return differenceMagnitudeSq<=sumMagnitudeSq?difference:sum}function JS_DeviceMotion_eventHandler(event){var accelerometerValue={x:event.accelerationIncludingGravity.x*JS_Accelerometer_multiplier,y:event.accelerationIncludingGravity.y*JS_Accelerometer_multiplier,z:event.accelerationIncludingGravity.z*JS_Accelerometer_multiplier};if(JS_Accelerometer_callback!=0)dynCall_vfff(JS_Accelerometer_callback,accelerometerValue.x,accelerometerValue.y,accelerometerValue.z);var 
linearAccelerationValue={x:event.acceleration.x*JS_Accelerometer_multiplier,y:event.acceleration.y*JS_Accelerometer_multiplier,z:event.acceleration.z*JS_Accelerometer_multiplier};if(JS_LinearAccelerationSensor_callback!=0)dynCall_vfff(JS_LinearAccelerationSensor_callback,linearAccelerationValue.x,linearAccelerationValue.y,linearAccelerationValue.z);if(JS_GravitySensor_callback!=0){var gravityValue=JS_ComputeGravity(accelerometerValue,linearAccelerationValue);dynCall_vfff(JS_GravitySensor_callback,gravityValue.x,gravityValue.y,gravityValue.z)}if(JS_Gyroscope_callback!=0){var degToRad=Math.PI/180;dynCall_vfff(JS_Gyroscope_callback,event.rotationRate.alpha*degToRad,event.rotationRate.beta*degToRad,event.rotationRate.gamma*degToRad)}}var JS_DeviceSensorPermissions=0;function JS_RequestDeviceSensorPermissions(permissions){if(permissions&1){if(typeof DeviceOrientationEvent.requestPermission==="function"){DeviceOrientationEvent.requestPermission().then(function(permissionState){if(permissionState==="granted"){JS_DeviceSensorPermissions&=~1}else{warnOnce("DeviceOrientationEvent permission not granted")}}).catch(function(err){warnOnce(err);JS_DeviceSensorPermissions|=1})}}if(permissions&2){if(typeof DeviceMotionEvent.requestPermission==="function"){DeviceMotionEvent.requestPermission().then(function(permissionState){if(permissionState==="granted"){JS_DeviceSensorPermissions&=~2}else{warnOnce("DeviceMotionEvent permission not granted")}}).catch(function(err){warnOnce(err);JS_DeviceSensorPermissions|=2})}}}function JS_DeviceMotion_add(){if(JS_Accelerometer_callback==0&&JS_LinearAccelerationSensor_callback==0&&JS_GravitySensor_callback==0&&JS_Gyroscope_callback==0){JS_RequestDeviceSensorPermissions(2);window.addEventListener("devicemotion",JS_DeviceMotion_eventHandler)}}function JS_DefineAccelerometerMultiplier(){var g=9.80665;JS_Accelerometer_multiplier=/(iPhone|iPad|Macintosh)/i.test(navigator.userAgent)?1/g:-1/g}function _JS_Accelerometer_Start(callback,frequency){JS_DefineAccelerometerMultiplier();if(typeof Accelerometer==="undefined"){JS_DeviceMotion_add();if(callback!=0)JS_Accelerometer_callback=callback;return}if(callback!=0)JS_Accelerometer_callback=callback;function InitializeAccelerometer(frequency){JS_Accelerometer=new Accelerometer({frequency:frequency,referenceFrame:"device"});JS_Accelerometer.addEventListener("reading",JS_Accelerometer_eventHandler);JS_Accelerometer.addEventListener("error",function(e){warnOnce(e.error?e.error:e)});JS_Accelerometer.start();JS_Accelerometer_frequency=frequency}if(JS_Accelerometer){if(JS_Accelerometer_frequency!=frequency){JS_Accelerometer.stop();JS_Accelerometer.removeEventListener("reading",JS_Accelerometer_eventHandler);InitializeAccelerometer(frequency)}}else if(JS_Accelerometer_frequencyRequest!=0){JS_Accelerometer_frequencyRequest=frequency}else{JS_Accelerometer_frequencyRequest=frequency;navigator.permissions.query({name:"accelerometer"}).then(function(result){if(result.state==="granted"){InitializeAccelerometer(JS_Accelerometer_frequencyRequest)}else{warnOnce("No permission to use Accelerometer.")}JS_Accelerometer_frequencyRequest=0})}}function JS_DeviceMotion_remove(){if(JS_Accelerometer_callback==0&&JS_LinearAccelerationSensor_callback==0&&JS_GravitySensor_callback==0&&JS_Gyroscope_callback==0){window.removeEventListener("devicemotion",JS_DeviceOrientation_eventHandler)}}function _JS_Accelerometer_Stop(){if(JS_Accelerometer){if(typeof 
GravitySensor!=="undefined"||JS_GravitySensor_callback==0){JS_Accelerometer.stop();JS_Accelerometer.removeEventListener("reading",JS_Accelerometer_eventHandler);JS_Accelerometer=null}JS_Accelerometer_callback=0;JS_Accelerometer_frequency=0}else if(JS_Accelerometer_callback!=0){JS_Accelerometer_callback=0;JS_DeviceMotion_remove()}}function _JS_Cursor_SetImage(ptr,length){var binary="";for(var i=0;i>2]=viewportX-(rect?rect.left:0);HEAPU32[targetY>>2]=viewportY-(rect?rect.top:0)}function stringToNewUTF8(jsString){var length=lengthBytesUTF8(jsString)+1;var cString=_malloc(length);stringToUTF8(jsString,cString,length);return cString}function _JS_DOM_UnityCanvasSelector(){var canvasSelector=jsCanvasSelector();if(_JS_DOM_UnityCanvasSelector.selector!=canvasSelector){_free(_JS_DOM_UnityCanvasSelector.ptr);_JS_DOM_UnityCanvasSelector.ptr=stringToNewUTF8(canvasSelector);_JS_DOM_UnityCanvasSelector.selector=canvasSelector}return _JS_DOM_UnityCanvasSelector.ptr}function _JS_Eval_OpenURL(ptr){var str=UTF8ToString(ptr);window.open(str,"_blank","")}var fs={numPendingSync:0,syncInternal:1e3,syncInProgress:false,sync:function(onlyPendingSync){if(onlyPendingSync){if(fs.numPendingSync==0)return}else if(fs.syncInProgress){fs.numPendingSync++;return}fs.syncInProgress=true;FS.syncfs(false,function(err){fs.syncInProgress=false});fs.numPendingSync=0}};function _JS_FileSystem_Initialize(){Module.setInterval(function(){fs.sync(true)},fs.syncInternal)}function _JS_FileSystem_Sync(){fs.sync(false)}var JS_GravitySensor=null;function _JS_GravitySensor_IsRunning(){return typeof GravitySensor!=="undefined"?JS_GravitySensor&&JS_GravitySensor.activated:JS_GravitySensor_callback!=0}function JS_GravitySensor_eventHandler(){if(JS_GravitySensor_callback!=0)dynCall_vfff(JS_GravitySensor_callback,JS_GravitySensor.x*JS_Accelerometer_multiplier,JS_GravitySensor.y*JS_Accelerometer_multiplier,JS_GravitySensor.z*JS_Accelerometer_multiplier)}var JS_GravitySensor_frequencyRequest=0;var JS_LinearAccelerationSensor=null;function JS_LinearAccelerationSensor_eventHandler(){var linearAccelerationValue={x:JS_LinearAccelerationSensor.x*JS_Accelerometer_multiplier,y:JS_LinearAccelerationSensor.y*JS_Accelerometer_multiplier,z:JS_LinearAccelerationSensor.z*JS_Accelerometer_multiplier};if(JS_LinearAccelerationSensor_callback!=0)dynCall_vfff(JS_LinearAccelerationSensor_callback,linearAccelerationValue.x,linearAccelerationValue.y,linearAccelerationValue.z);if(JS_GravitySensor_callback!=0&&typeof GravitySensor==="undefined"){var gravityValue=JS_ComputeGravity(JS_Accelerometer_lastValue,linearAccelerationValue);dynCall_vfff(JS_GravitySensor_callback,gravityValue.x,gravityValue.y,gravityValue.z)}}var JS_LinearAccelerationSensor_frequencyRequest=0;var JS_LinearAccelerationSensor_frequency=0;function _JS_LinearAccelerationSensor_Start(callback,frequency){JS_DefineAccelerometerMultiplier();if(typeof LinearAccelerationSensor==="undefined"){JS_DeviceMotion_add();if(callback!=0)JS_LinearAccelerationSensor_callback=callback;return}if(callback!=0)JS_LinearAccelerationSensor_callback=callback;function InitializeLinearAccelerationSensor(frequency){JS_LinearAccelerationSensor=new 
LinearAccelerationSensor({frequency:frequency,referenceFrame:"device"});JS_LinearAccelerationSensor.addEventListener("reading",JS_LinearAccelerationSensor_eventHandler);JS_LinearAccelerationSensor.addEventListener("error",function(e){warnOnce(e.error?e.error:e)});JS_LinearAccelerationSensor.start();JS_LinearAccelerationSensor_frequency=frequency}if(JS_LinearAccelerationSensor){if(JS_LinearAccelerationSensor_frequency!=frequency){JS_LinearAccelerationSensor.stop();JS_LinearAccelerationSensor.removeEventListener("reading",JS_LinearAccelerationSensor_eventHandler);InitializeLinearAccelerationSensor(frequency)}}else if(JS_LinearAccelerationSensor_frequencyRequest!=0){JS_LinearAccelerationSensor_frequencyRequest=frequency}else{JS_LinearAccelerationSensor_frequencyRequest=frequency;navigator.permissions.query({name:"accelerometer"}).then(function(result){if(result.state==="granted"){InitializeLinearAccelerationSensor(JS_LinearAccelerationSensor_frequencyRequest)}else{warnOnce("No permission to use LinearAccelerationSensor.")}JS_LinearAccelerationSensor_frequencyRequest=0})}}function _JS_GravitySensor_Start(callback,frequency){if(typeof GravitySensor==="undefined"){_JS_Accelerometer_Start(0,Math.max(frequency,JS_Accelerometer_frequency));_JS_LinearAccelerationSensor_Start(0,Math.max(frequency,JS_LinearAccelerationSensor_frequency));JS_GravitySensor_callback=callback;return}JS_DefineAccelerometerMultiplier();JS_GravitySensor_callback=callback;function InitializeGravitySensor(frequency){JS_GravitySensor=new GravitySensor({frequency:frequency,referenceFrame:"device"});JS_GravitySensor.addEventListener("reading",JS_GravitySensor_eventHandler);JS_GravitySensor.addEventListener("error",function(e){warnOnce(e.error?e.error:e)});JS_GravitySensor.start()}if(JS_GravitySensor){JS_GravitySensor.stop();JS_GravitySensor.removeEventListener("reading",JS_GravitySensor_eventHandler);InitializeGravitySensor(frequency)}else if(JS_GravitySensor_frequencyRequest!=0){JS_GravitySensor_frequencyRequest=frequency}else{JS_GravitySensor_frequencyRequest=frequency;navigator.permissions.query({name:"accelerometer"}).then(function(result){if(result.state==="granted"){InitializeGravitySensor(JS_GravitySensor_frequencyRequest)}else{warnOnce("No permission to use GravitySensor.")}JS_GravitySensor_frequencyRequest=0})}}function _JS_LinearAccelerationSensor_Stop(){if(JS_LinearAccelerationSensor){if(typeof GravitySensor!=="undefined"||JS_GravitySensor_callback==0){JS_LinearAccelerationSensor.stop();JS_LinearAccelerationSensor.removeEventListener("reading",JS_LinearAccelerationSensor_eventHandler);JS_LinearAccelerationSensor=null}JS_LinearAccelerationSensor_callback=0;JS_LinearAccelerationSensor_frequency=0}else if(JS_LinearAccelerationSensor_callback!=0){JS_LinearAccelerationSensor_callback=0;JS_DeviceMotion_remove()}}function _JS_GravitySensor_Stop(){JS_GravitySensor_callback=0;if(typeof GravitySensor==="undefined"){if(JS_Accelerometer_callback==0)_JS_Accelerometer_Stop();if(JS_LinearAccelerationSensor_callback==0)_JS_LinearAccelerationSensor_Stop();return}if(JS_GravitySensor){JS_GravitySensor.stop();JS_GravitySensor.removeEventListener("reading",JS_GravitySensor_eventHandler);JS_GravitySensor=null}}function _JS_GuardAgainstJsExceptions(cb){try{(function(){dynCall_v.call(null,cb)})()}catch(e){console.warn(e)}}var JS_Gyroscope=null;function _JS_Gyroscope_IsRunning(){return JS_Gyroscope&&JS_Gyroscope.activated||JS_Gyroscope_callback!=0}function 
JS_Gyroscope_eventHandler(){if(JS_Gyroscope_callback!=0)dynCall_vfff(JS_Gyroscope_callback,JS_Gyroscope.x,JS_Gyroscope.y,JS_Gyroscope.z)}var JS_Gyroscope_frequencyRequest=0;function _JS_Gyroscope_Start(callback,frequency){if(typeof Gyroscope==="undefined"){JS_DeviceMotion_add();JS_Gyroscope_callback=callback;return}JS_Gyroscope_callback=callback;function InitializeGyroscope(frequency){JS_Gyroscope=new Gyroscope({frequency:frequency,referenceFrame:"device"});JS_Gyroscope.addEventListener("reading",JS_Gyroscope_eventHandler);JS_Gyroscope.addEventListener("error",function(e){warnOnce(e.error?e.error:e)});JS_Gyroscope.start()}if(JS_Gyroscope){JS_Gyroscope.stop();JS_Gyroscope.removeEventListener("reading",JS_Gyroscope_eventHandler);InitializeGyroscope(frequency)}else if(JS_Gyroscope_frequencyRequest!=0){JS_Gyroscope_frequencyRequest=frequency}else{JS_Gyroscope_frequencyRequest=frequency;navigator.permissions.query({name:"gyroscope"}).then(function(result){if(result.state==="granted"){InitializeGyroscope(JS_Gyroscope_frequencyRequest)}else{warnOnce("No permission to use Gyroscope.")}JS_Gyroscope_frequencyRequest=0})}}function _JS_Gyroscope_Stop(){if(JS_Gyroscope){JS_Gyroscope.stop();JS_Gyroscope.removeEventListener("reading",JS_Gyroscope_eventHandler);JS_Gyroscope=null;JS_Gyroscope_callback=0}else if(JS_Gyroscope_callback!=0){JS_Gyroscope_callback=0;JS_DeviceMotion_remove()}}function _JS_LinearAccelerationSensor_IsRunning(){return JS_LinearAccelerationSensor&&JS_LinearAccelerationSensor.activated||JS_LinearAccelerationSensor_callback!=0}function _JS_Log_Dump(ptr,type){var str=UTF8ToString(ptr);if(typeof dump=="function")dump(str);switch(type){case 0:case 1:case 4:console.error(str);return;case 2:console.warn(str);return;case 3:case 5:console.log(str);return;default:console.error("Unknown console message type!");console.error(str)}}function _JS_Log_StackTrace(buffer,bufferSize){var trace=stackTrace();if(buffer)stringToUTF8(trace,buffer,bufferSize);return lengthBytesUTF8(trace)}var mobile_input_hide_delay=null;var mobile_input_text=null;var mobile_input=null;var mobile_input_ignore_blur_event=false;function _JS_MobileKeybard_GetIgnoreBlurEvent(){return mobile_input_ignore_blur_event}function _JS_MobileKeyboard_GetKeyboardStatus(){var kKeyboardStatusVisible=0;var kKeyboardStatusDone=1;if(!mobile_input)return kKeyboardStatusDone;return kKeyboardStatusVisible}function _JS_MobileKeyboard_GetText(buffer,bufferSize){var text=mobile_input&&mobile_input.input?mobile_input.input.value:mobile_input_text?mobile_input_text:"";if(buffer)stringToUTF8(text,buffer,bufferSize);return lengthBytesUTF8(text)}function _JS_MobileKeyboard_GetTextSelection(outStart,outLength){if(!mobile_input){HEAP32[outStart>>2]=0;HEAP32[outLength>>2]=0;return}HEAP32[outStart>>2]=mobile_input.input.selectionStart;HEAP32[outLength>>2]=mobile_input.input.selectionEnd-mobile_input.input.selectionStart}function _JS_MobileKeyboard_Hide(delay){if(mobile_input_hide_delay)return;mobile_input_ignore_blur_event=true;function hideMobileKeyboard(){if(mobile_input&&mobile_input.input){mobile_input_text=mobile_input.input.value;mobile_input.input=null;if(mobile_input.parentNode&&mobile_input.parentNode){mobile_input.parentNode.removeChild(mobile_input)}}mobile_input=null;mobile_input_hide_delay=null;setTimeout(function(){mobile_input_ignore_blur_event=false},100)}if(delay){var hideDelay=200;mobile_input_hide_delay=setTimeout(hideMobileKeyboard,hideDelay)}else{hideMobileKeyboard()}}function 
_JS_MobileKeyboard_SetCharacterLimit(limit){if(!mobile_input)return;mobile_input.input.maxLength=limit}function _JS_MobileKeyboard_SetText(text){if(!mobile_input)return;text=UTF8ToString(text);mobile_input.input.value=text}function _JS_MobileKeyboard_SetTextSelection(start,length){if(!mobile_input)return;mobile_input.input.setSelectionRange(start,start+length)}function _JS_MobileKeyboard_Show(text,keyboardType,autocorrection,multiline,secure,alert,placeholder,characterLimit){if(mobile_input_hide_delay){clearTimeout(mobile_input_hide_delay);mobile_input_hide_delay=null}text=UTF8ToString(text);mobile_input_text=text;placeholder=UTF8ToString(placeholder);var container=document.body;var hasExistingMobileInput=!!mobile_input;var input_type;var KEYBOARD_TYPE_NUMBERS_AND_PUNCTUATION=2;var KEYBOARD_TYPE_URL=3;var KEYBOARD_TYPE_NUMBER_PAD=4;var KEYBOARD_TYPE_PHONE_PAD=5;var KEYBOARD_TYPE_EMAIL_ADDRESS=7;if(!secure){switch(keyboardType){case KEYBOARD_TYPE_EMAIL_ADDRESS:input_type="email";break;case KEYBOARD_TYPE_URL:input_type="url";break;case KEYBOARD_TYPE_NUMBERS_AND_PUNCTUATION:case KEYBOARD_TYPE_NUMBER_PAD:case KEYBOARD_TYPE_PHONE_PAD:input_type="number";break;default:input_type="text";break}}else{input_type="password"}if(hasExistingMobileInput){if(mobile_input.multiline!=multiline){_JS_MobileKeyboard_Hide(false);return}}var inputContainer=mobile_input||document.createElement("div");if(!hasExistingMobileInput){inputContainer.style="width:100%; position:fixed; bottom:0px; margin:0px; padding:0px; left:0px; border: 1px solid #000; border-radius: 5px; background-color:#fff; font-size:14pt;";container.appendChild(inputContainer);mobile_input=inputContainer}var input=hasExistingMobileInput?mobile_input.input:document.createElement(multiline?"textarea":"input");mobile_input.multiline=multiline;mobile_input.secure=secure;mobile_input.keyboardType=keyboardType;mobile_input.inputType=input_type;input.type=input_type;input.style="width:calc(100% - 85px); "+(multiline?"height:100px;":"")+"vertical-align:top; border-radius: 5px; outline:none; cursor:default; resize:none; border:0px; padding:10px 0px 10px 10px;";input.spellcheck=autocorrection?true:false;input.maxLength=characterLimit>0?characterLimit:524288;input.value=text;input.placeholder=placeholder;if(!hasExistingMobileInput){inputContainer.appendChild(input);inputContainer.input=input}if(!hasExistingMobileInput){var okButton=document.createElement("button");okButton.innerText="OK";okButton.style="border:0; position:absolute; left:calc(100% - 75px); top:0px; width:75px; height:100%; margin:0; padding:0; border-radius: 5px; background-color:#fff";okButton.addEventListener("touchend",function(){_JS_MobileKeyboard_Hide(true)});inputContainer.appendChild(okButton);inputContainer.okButton=okButton;input.addEventListener("keyup",function(e){if(input.parentNode.multiline)return;if(e.code=="Enter"||e.which==13||e.keyCode==13){_JS_MobileKeyboard_Hide(true)}});input.addEventListener("blur",function(e){_JS_MobileKeyboard_Hide(true);e.stopPropagation();e.preventDefault()});input.select();input.focus()}else{input.select()}}var JS_OrientationSensor=null;var JS_OrientationSensor_callback=0;function _JS_OrientationSensor_IsRunning(){return JS_OrientationSensor&&JS_OrientationSensor.activated||JS_OrientationSensor_callback!=0}function 
JS_OrientationSensor_eventHandler(){if(JS_OrientationSensor_callback!=0)dynCall_vffff(JS_OrientationSensor_callback,JS_OrientationSensor.quaternion[0],JS_OrientationSensor.quaternion[1],JS_OrientationSensor.quaternion[2],JS_OrientationSensor.quaternion[3])}var JS_OrientationSensor_frequencyRequest=0;function JS_DeviceOrientation_eventHandler(event){if(JS_OrientationSensor_callback){var degToRad=Math.PI/180;var x=event.beta*degToRad;var y=event.gamma*degToRad;var z=event.alpha*degToRad;var cx=Math.cos(x/2);var sx=Math.sin(x/2);var cy=Math.cos(y/2);var sy=Math.sin(y/2);var cz=Math.cos(z/2);var sz=Math.sin(z/2);var qx=sx*cy*cz-cx*sy*sz;var qy=cx*sy*cz+sx*cy*sz;var qz=cx*cy*sz+sx*sy*cz;var qw=cx*cy*cz-sx*sy*sz;dynCall_vffff(JS_OrientationSensor_callback,qx,qy,qz,qw)}}function _JS_OrientationSensor_Start(callback,frequency){if(typeof RelativeOrientationSensor==="undefined"){if(JS_OrientationSensor_callback==0){JS_OrientationSensor_callback=callback;JS_RequestDeviceSensorPermissions(1);window.addEventListener("deviceorientation",JS_DeviceOrientation_eventHandler)}return}JS_OrientationSensor_callback=callback;function InitializeOrientationSensor(frequency){JS_OrientationSensor=new RelativeOrientationSensor({frequency:frequency,referenceFrame:"device"});JS_OrientationSensor.addEventListener("reading",JS_OrientationSensor_eventHandler);JS_OrientationSensor.addEventListener("error",function(e){warnOnce(e.error?e.error:e)});JS_OrientationSensor.start()}if(JS_OrientationSensor){JS_OrientationSensor.stop();JS_OrientationSensor.removeEventListener("reading",JS_OrientationSensor_eventHandler);InitializeOrientationSensor(frequency)}else if(JS_OrientationSensor_frequencyRequest!=0){JS_OrientationSensor_frequencyRequest=frequency}else{JS_OrientationSensor_frequencyRequest=frequency;Promise.all([navigator.permissions.query({name:"accelerometer"}),navigator.permissions.query({name:"gyroscope"})]).then(function(results){if(results.every(function(result){return result.state==="granted"})){InitializeOrientationSensor(JS_OrientationSensor_frequencyRequest)}else{warnOnce("No permissions to use RelativeOrientationSensor.")}JS_OrientationSensor_frequencyRequest=0})}}function _JS_OrientationSensor_Stop(){if(JS_OrientationSensor){JS_OrientationSensor.stop();JS_OrientationSensor.removeEventListener("reading",JS_OrientationSensor_eventHandler);JS_OrientationSensor=null}else if(JS_OrientationSensor_callback!=0){window.removeEventListener("deviceorientation",JS_DeviceOrientation_eventHandler)}JS_OrientationSensor_callback=0}function _JS_RequestDeviceSensorPermissionsOnTouch(){if(JS_DeviceSensorPermissions==0)return;JS_RequestDeviceSensorPermissions(JS_DeviceSensorPermissions)}function _JS_RunQuitCallbacks(){Module.QuitCleanup()}var JS_ScreenOrientation_callback=0;function JS_ScreenOrientation_eventHandler(){if(JS_ScreenOrientation_callback)dynCall_viii(JS_ScreenOrientation_callback,window.innerWidth,window.innerHeight,screen.orientation?screen.orientation.angle:window.orientation)}function _JS_ScreenOrientation_DeInit(){JS_ScreenOrientation_callback=0;window.removeEventListener("resize",JS_ScreenOrientation_eventHandler);if(screen.orientation){screen.orientation.removeEventListener("change",JS_ScreenOrientation_eventHandler)}}function 
_JS_ScreenOrientation_Init(callback){if(!JS_ScreenOrientation_callback){if(screen.orientation){screen.orientation.addEventListener("change",JS_ScreenOrientation_eventHandler)}window.addEventListener("resize",JS_ScreenOrientation_eventHandler);JS_ScreenOrientation_callback=callback;setTimeout(JS_ScreenOrientation_eventHandler,0)}}var JS_ScreenOrientation_requestedLockType=-1;var JS_ScreenOrientation_appliedLockType=-1;var JS_ScreenOrientation_timeoutID=-1;function _JS_ScreenOrientation_Lock(orientationLockType){if(!screen.orientation){return}function applyLock(){JS_ScreenOrientation_appliedLockType=JS_ScreenOrientation_requestedLockType;var screenOrientations=["any",0,"landscape","portrait","portrait-primary","portrait-secondary","landscape-primary","landscape-secondary"];var type=screenOrientations[JS_ScreenOrientation_appliedLockType];screen.orientation.lock(type).then(function(){if(JS_ScreenOrientation_requestedLockType!=JS_ScreenOrientation_appliedLockType){JS_ScreenOrientation_timeoutID=setTimeout(applyLock,0)}else{JS_ScreenOrientation_timeoutID=-1}}).catch(function(err){warnOnce(err);JS_ScreenOrientation_timeoutID=-1})}JS_ScreenOrientation_requestedLockType=orientationLockType;if(JS_ScreenOrientation_timeoutID==-1&&orientationLockType!=JS_ScreenOrientation_appliedLockType){JS_ScreenOrientation_timeoutID=setTimeout(applyLock,0)}}var WEBAudio={audioInstanceIdCounter:0,audioInstances:{},audioContext:null,audioWebEnabled:0,audioCache:[],pendingAudioSources:{}};function jsAudioMixinSetPitch(source){source.estimatePlaybackPosition=function(){var t=(WEBAudio.audioContext.currentTime-source.playbackStartTime)*source.playbackRate.value;if(source.loop&&t>=source.loopStart){t=(t-source.loopStart)%(source.loopEnd-source.loopStart)+source.loopStart}return t};source.setPitch=function(newPitch){var curPosition=source.estimatePlaybackPosition();if(curPosition>=0){source.playbackStartTime=WEBAudio.audioContext.currentTime-curPosition/newPitch}if(source.playbackRate.value!==newPitch)source.playbackRate.value=newPitch}}function jsAudioCreateUncompressedSoundClip(buffer,error){var soundClip={buffer:buffer,error:error};soundClip.release=function(){};soundClip.getLength=function(){if(!this.buffer){console.log("Trying to get length of sound which is not loaded.");return 0}var sampleRateRatio=44100/this.buffer.sampleRate;return this.buffer.length*sampleRateRatio};soundClip.getData=function(ptr,length){if(!this.buffer){console.log("Trying to get data of sound which is not loaded.");return 0}var startOutputBuffer=ptr>>2;var output=HEAPF32.subarray(startOutputBuffer,startOutputBuffer+(length>>2));var numMaxSamples=Math.floor((length>>2)/this.buffer.numberOfChannels);var numReadSamples=Math.min(this.buffer.length,numMaxSamples);for(var i=0;istartDelayThresholdMS){source.playTimeout=setTimeout(function(){source.playTimeout=null;source._startPlayback(offset)},startDelayMS)}else{source._startPlayback(offset)}};source.stop=function(stopTime){if(typeof stopTime==="undefined"){stopTime=WEBAudio.audioContext.currentTime}var stopDelayThresholdMS=4;var stopDelayMS=(stopTime-WEBAudio.audioContext.currentTime)*1e3;if(stopDelayMS>stopDelayThresholdMS){setTimeout(function(){source._pauseMediaElement();source.isStopped=true},stopDelayMS)}else{source._pauseMediaElement();source.isStopped=true}};jsAudioMixinSetPitch(source);return source};return soundClip}function _JS_Sound_Load(ptr,length,decompress,fmodSoundType){if(WEBAudio.audioWebEnabled==0)return 0;var 
audioData=HEAPU8.buffer.slice(ptr,ptr+length);if(length<131072)decompress=1;var sound;if(decompress){sound=jsAudioCreateUncompressedSoundClipFromCompressedAudio(audioData)}else{sound=jsAudioCreateCompressedSoundClip(audioData,fmodSoundType)}WEBAudio.audioInstances[++WEBAudio.audioInstanceIdCounter]=sound;return WEBAudio.audioInstanceIdCounter}function jsAudioCreateUncompressedSoundClipFromPCM(channels,length,sampleRate,ptr){var buffer=WEBAudio.audioContext.createBuffer(channels,length,sampleRate);for(var i=0;i<channels;i++){var offs=(ptr>>2)+length*i;var copyToChannel=buffer["copyToChannel"]||function(source,channelNumber,startInChannel){var clipped=source.subarray(0,Math.min(source.length,this.length-(startInChannel|0)));this.getChannelData(channelNumber|0).set(clipped,startInChannel|0)};copyToChannel.apply(buffer,[HEAPF32.subarray(offs,offs+length),i,0])}return jsAudioCreateUncompressedSoundClip(buffer,false)}function _JS_Sound_Load_PCM(channels,length,sampleRate,ptr){if(WEBAudio.audioWebEnabled==0)return 0;var sound=jsAudioCreateUncompressedSoundClipFromPCM(channels,length,sampleRate,ptr);WEBAudio.audioInstances[++WEBAudio.audioInstanceIdCounter]=sound;return WEBAudio.audioInstanceIdCounter}function _JS_Sound_Play(bufferInstance,channelInstance,offset,delay){if(WEBAudio.audioWebEnabled==0)return;_JS_Sound_Stop(channelInstance,0);var soundClip=WEBAudio.audioInstances[bufferInstance];var channel=WEBAudio.audioInstances[channelInstance];if(!soundClip){console.log("Trying to play sound which is not loaded.");return}try{channel.playSoundClip(soundClip,WEBAudio.audioContext.currentTime+delay,offset)}catch(error){console.error("playSoundClip error. Exception: "+error)}}function _JS_Sound_ReleaseInstance(instance){var object=WEBAudio.audioInstances[instance];if(object){object.release()}delete WEBAudio.audioInstances[instance]}function _JS_Sound_ResumeIfNeeded(){if(WEBAudio.audioWebEnabled==0)return;if(WEBAudio.audioContext.state==="suspended")WEBAudio.audioContext.resume().catch(function(error){console.warn("Could not resume audio context. 
Exception: "+error)})}function _JS_Sound_Set3D(channelInstance,threeD){var channel=WEBAudio.audioInstances[channelInstance];channel.set3D(threeD)}function _JS_Sound_SetListenerOrientation(x,y,z,xUp,yUp,zUp){if(WEBAudio.audioWebEnabled==0)return;x=-x;y=-y;z=-z;var l=WEBAudio.audioContext.listener;if(l.forwardX){if(l.forwardX.value!==x)l.forwardX.value=x;if(l.forwardY.value!==y)l.forwardY.value=y;if(l.forwardZ.value!==z)l.forwardZ.value=z;if(l.upX.value!==xUp)l.upX.value=xUp;if(l.upY.value!==yUp)l.upY.value=yUp;if(l.upZ.value!==zUp)l.upZ.value=zUp}else if(l._forwardX!==x||l._forwardY!==y||l._forwardZ!==z||l._upX!==xUp||l._upY!==yUp||l._upZ!==zUp){l.setOrientation(x,y,z,xUp,yUp,zUp);l._forwardX=x;l._forwardY=y;l._forwardZ=z;l._upX=xUp;l._upY=yUp;l._upZ=zUp}}function _JS_Sound_SetListenerPosition(x,y,z){if(WEBAudio.audioWebEnabled==0)return;var l=WEBAudio.audioContext.listener;if(l.positionX){if(l.positionX.value!==x)l.positionX.value=x;if(l.positionY.value!==y)l.positionY.value=y;if(l.positionZ.value!==z)l.positionZ.value=z}else if(l._positionX!==x||l._positionY!==y||l._positionZ!==z){l.setPosition(x,y,z);l._positionX=x;l._positionY=y;l._positionZ=z}}function _JS_Sound_SetLoop(channelInstance,loop){if(WEBAudio.audioWebEnabled==0)return;var channel=WEBAudio.audioInstances[channelInstance];channel.setLoop(loop)}function _JS_Sound_SetLoopPoints(channelInstance,loopStart,loopEnd){if(WEBAudio.audioWebEnabled==0)return;var channel=WEBAudio.audioInstances[channelInstance];channel.setLoopPoints(loopStart,loopEnd)}function _JS_Sound_SetPaused(channelInstance,paused){if(WEBAudio.audioWebEnabled==0)return;var channel=WEBAudio.audioInstances[channelInstance];if(paused!=channel.isPaused()){if(paused)channel.pause();else channel.resume()}}function _JS_Sound_SetPitch(channelInstance,v){if(WEBAudio.audioWebEnabled==0)return;try{var channel=WEBAudio.audioInstances[channelInstance];channel.setPitch(v)}catch(e){console.error("JS_Sound_SetPitch(channel="+channelInstance+", pitch="+v+") threw an exception: "+e)}}function _JS_Sound_SetPosition(channelInstance,x,y,z){if(WEBAudio.audioWebEnabled==0)return;var channel=WEBAudio.audioInstances[channelInstance];channel.setPosition(x,y,z)}function _JS_Sound_SetVolume(channelInstance,v){if(WEBAudio.audioWebEnabled==0)return;try{var channel=WEBAudio.audioInstances[channelInstance];channel.setVolume(v)}catch(e){console.error("JS_Sound_SetVolume(channel="+channelInstance+", volume="+v+") threw an exception: "+e)}}function _JS_Sound_Stop(channelInstance,delay){if(WEBAudio.audioWebEnabled==0)return;var channel=WEBAudio.audioInstances[channelInstance];channel.stop(delay)}function _JS_SystemInfo_GetBrowserName(buffer,bufferSize){var browser=Module.SystemInfo.browser;if(buffer)stringToUTF8(browser,buffer,bufferSize);return lengthBytesUTF8(browser)}function _JS_SystemInfo_GetBrowserVersionString(buffer,bufferSize){var browserVer=Module.SystemInfo.browserVersion;if(buffer)stringToUTF8(browserVer,buffer,bufferSize);return lengthBytesUTF8(browserVer)}function _JS_SystemInfo_GetCanvasClientSize(domElementSelector,outWidth,outHeight){var selector=UTF8ToString(domElementSelector);var canvas=selector=="#canvas"?Module["canvas"]:document.querySelector(selector);var w=0,h=0;if(canvas){var size=canvas.getBoundingClientRect();w=size.width;h=size.height}HEAPF64[outWidth>>3]=w;HEAPF64[outHeight>>3]=h}function _JS_SystemInfo_GetDocumentURL(buffer,bufferSize){if(buffer)stringToUTF8(document.URL,buffer,bufferSize);return lengthBytesUTF8(document.URL)}function 
_JS_SystemInfo_GetGPUInfo(buffer,bufferSize){var gpuinfo=Module.SystemInfo.gpu;if(buffer)stringToUTF8(gpuinfo,buffer,bufferSize);return lengthBytesUTF8(gpuinfo)}function _JS_SystemInfo_GetLanguage(buffer,bufferSize){var language=Module.SystemInfo.language;if(buffer)stringToUTF8(language,buffer,bufferSize);return lengthBytesUTF8(language)}function _JS_SystemInfo_GetMatchWebGLToCanvasSize(){return Module.matchWebGLToCanvasSize||Module.matchWebGLToCanvasSize===undefined}function _JS_SystemInfo_GetMemory(){return HEAPU8.length/(1024*1024)}function _JS_SystemInfo_GetOS(buffer,bufferSize){var browser=Module.SystemInfo.os+" "+Module.SystemInfo.osVersion;if(buffer)stringToUTF8(browser,buffer,bufferSize);return lengthBytesUTF8(browser)}function _JS_SystemInfo_GetPreferredDevicePixelRatio(){return Module.matchWebGLToCanvasSize==false?1:Module.devicePixelRatio||window.devicePixelRatio||1}function _JS_SystemInfo_GetScreenSize(outWidth,outHeight){HEAPF64[outWidth>>3]=Module.SystemInfo.width;HEAPF64[outHeight>>3]=Module.SystemInfo.height}function _JS_SystemInfo_GetStreamingAssetsURL(buffer,bufferSize){if(buffer)stringToUTF8(Module.streamingAssetsUrl,buffer,bufferSize);return lengthBytesUTF8(Module.streamingAssetsUrl)}function _JS_SystemInfo_HasAstcHdr(){var ext=GLctx.getExtension("WEBGL_compressed_texture_astc");if(ext&&ext.getSupportedProfiles){return ext.getSupportedProfiles().includes("hdr")}return false}function _JS_SystemInfo_HasCursorLock(){return Module.SystemInfo.hasCursorLock}function _JS_SystemInfo_HasFullscreen(){return Module.SystemInfo.hasFullscreen}function _JS_SystemInfo_HasWebGL(){return Module.SystemInfo.hasWebGL}function _JS_UnityEngineShouldQuit(){return!!Module.shouldQuit}var wr={requests:{},responses:{},abortControllers:{},timer:{},nextRequestId:1};function _JS_WebRequest_Abort(requestId){var abortController=wr.abortControllers[requestId];if(!abortController||abortController.signal.aborted){return}abortController.abort()}function _JS_WebRequest_Create(url,method){var _url=UTF8ToString(url);var _method=UTF8ToString(method);var abortController=new AbortController;var requestOptions={url:_url,init:{method:_method,signal:abortController.signal,headers:{},enableStreamingDownload:true},tempBuffer:null,tempBufferSize:0};wr.abortControllers[wr.nextRequestId]=abortController;wr.requests[wr.nextRequestId]=requestOptions;return wr.nextRequestId++}function jsWebRequestGetResponseHeaderString(requestId){var response=wr.responses[requestId];if(!response){return""}if(response.headerString){return response.headerString}var headers="";var entries=response.headers.entries();for(var result=entries.next();!result.done;result=entries.next()){headers+=result.value[0]+": "+result.value[1]+"\r\n"}response.headerString=headers;return headers}function _JS_WebRequest_GetResponseMetaData(requestId,headerBuffer,headerSize,responseUrlBuffer,responseUrlSize){var response=wr.responses[requestId];if(!response){stringToUTF8("",headerBuffer,headerSize);stringToUTF8("",responseUrlBuffer,responseUrlSize);return}if(headerBuffer){var headers=jsWebRequestGetResponseHeaderString(requestId);stringToUTF8(headers,headerBuffer,headerSize)}if(responseUrlBuffer){stringToUTF8(response.url,responseUrlBuffer,responseUrlSize)}}function _JS_WebRequest_GetResponseMetaDataLengths(requestId,buffer){var response=wr.responses[requestId];if(!response){HEAPU32[buffer>>2]=0;HEAPU32[(buffer>>2)+1]=0;return}var 
headers=jsWebRequestGetResponseHeaderString(requestId);HEAPU32[buffer>>2]=lengthBytesUTF8(headers);HEAPU32[(buffer>>2)+1]=lengthBytesUTF8(response.url)}function _JS_WebRequest_Release(requestId){if(wr.timer[requestId]){clearTimeout(wr.timer[requestId])}delete wr.requests[requestId];delete wr.responses[requestId];delete wr.abortControllers[requestId];delete wr.timer[requestId]}function _JS_WebRequest_Send(requestId,ptr,length,arg,onresponse,onprogress){var requestOptions=wr.requests[requestId];var abortController=wr.abortControllers[requestId];function getTempBuffer(size){if(!requestOptions.tempBuffer){const initialSize=Math.max(size,1024);requestOptions.tempBuffer=_malloc(initialSize);requestOptions.tempBufferSize=initialSize}if(requestOptions.tempBufferSize0){var postData=HEAPU8.subarray(ptr,ptr+length);requestOptions.init.body=new Blob([postData])}if(requestOptions.timeout){wr.timer[requestId]=setTimeout(function(){requestOptions.isTimedOut=true;abortController.abort()},requestOptions.timeout)}var fetchImpl=Module.fetchWithProgress;requestOptions.init.onProgress=HandleProgress;if(Module.companyName&&Module.productName&&Module.cachedFetch){fetchImpl=Module.cachedFetch;requestOptions.init.companyName=Module.companyName;requestOptions.init.productName=Module.productName;requestOptions.init.productVersion=Module.productVersion;requestOptions.init.control=Module.cacheControl(requestOptions.url)}fetchImpl(requestOptions.url,requestOptions.init).then(function(response){wr.responses[requestId]=response;HandleSuccess(response,response.parsedBody)}).catch(function(error){var kWebErrorUnknown=2;var kWebErrorAborted=17;var kWebErrorTimeout=14;if(requestOptions.isTimedOut){HandleError("Connection timed out.",kWebErrorTimeout)}else if(abortController.signal.aborted){HandleError("Aborted.",kWebErrorAborted)}else{HandleError(error.message,kWebErrorUnknown)}})}catch(error){var kWebErrorUnknown=2;HandleError(error.message,kWebErrorUnknown)}}function _JS_WebRequest_SetRedirectLimit(request,redirectLimit){var requestOptions=wr.requests[request];if(!requestOptions){return}requestOptions.init.redirect=redirectLimit===0?"error":"follow"}function _JS_WebRequest_SetRequestHeader(requestId,header,value){var requestOptions=wr.requests[requestId];if(!requestOptions){return}var _header=UTF8ToString(header);var _value=UTF8ToString(value);requestOptions.init.headers[_header]=_value}function _JS_WebRequest_SetTimeout(requestId,timeout){var requestOptions=wr.requests[requestId];if(!requestOptions){return}requestOptions.timeout=timeout}function ___cxa_allocate_exception(size){return _malloc(size+16)+16}function ExceptionInfo(excPtr){this.excPtr=excPtr;this.ptr=excPtr-16;this.set_type=function(type){HEAP32[this.ptr+4>>2]=type};this.get_type=function(){return HEAP32[this.ptr+4>>2]};this.set_destructor=function(destructor){HEAP32[this.ptr+8>>2]=destructor};this.get_destructor=function(){return HEAP32[this.ptr+8>>2]};this.set_refcount=function(refcount){HEAP32[this.ptr>>2]=refcount};this.set_caught=function(caught){caught=caught?1:0;HEAP8[this.ptr+12>>0]=caught};this.get_caught=function(){return HEAP8[this.ptr+12>>0]!=0};this.set_rethrown=function(rethrown){rethrown=rethrown?1:0;HEAP8[this.ptr+13>>0]=rethrown};this.get_rethrown=function(){return HEAP8[this.ptr+13>>0]!=0};this.init=function(type,destructor){this.set_type(type);this.set_destructor(destructor);this.set_refcount(0);this.set_caught(false);this.set_rethrown(false)};this.add_ref=function(){var 
value=HEAP32[this.ptr>>2];HEAP32[this.ptr>>2]=value+1};this.release_ref=function(){var prev=HEAP32[this.ptr>>2];HEAP32[this.ptr>>2]=prev-1;return prev===1}}function CatchInfo(ptr){this.free=function(){_free(this.ptr);this.ptr=0};this.set_base_ptr=function(basePtr){HEAP32[this.ptr>>2]=basePtr};this.get_base_ptr=function(){return HEAP32[this.ptr>>2]};this.set_adjusted_ptr=function(adjustedPtr){HEAP32[this.ptr+4>>2]=adjustedPtr};this.get_adjusted_ptr_addr=function(){return this.ptr+4};this.get_adjusted_ptr=function(){return HEAP32[this.ptr+4>>2]};this.get_exception_ptr=function(){var isPointer=___cxa_is_pointer_type(this.get_exception_info().get_type());if(isPointer){return HEAP32[this.get_base_ptr()>>2]}var adjusted=this.get_adjusted_ptr();if(adjusted!==0)return adjusted;return this.get_base_ptr()};this.get_exception_info=function(){return new ExceptionInfo(this.get_base_ptr())};if(ptr===undefined){this.ptr=_malloc(8);this.set_adjusted_ptr(0)}else{this.ptr=ptr}}var exceptionCaught=[];function exception_addRef(info){info.add_ref()}var uncaughtExceptionCount=0;function ___cxa_begin_catch(ptr){var catchInfo=new CatchInfo(ptr);var info=catchInfo.get_exception_info();if(!info.get_caught()){info.set_caught(true);uncaughtExceptionCount--}info.set_rethrown(false);exceptionCaught.push(catchInfo);exception_addRef(info);return catchInfo.get_exception_ptr()}var exceptionLast=0;function ___cxa_free_exception(ptr){return _free(new ExceptionInfo(ptr).ptr)}function exception_decRef(info){if(info.release_ref()&&!info.get_rethrown()){var destructor=info.get_destructor();if(destructor){(function(a1){return dynCall_ii.apply(null,[destructor,a1])})(info.excPtr)}___cxa_free_exception(info.excPtr)}}function ___cxa_end_catch(){_setThrew(0);var catchInfo=exceptionCaught.pop();exception_decRef(catchInfo.get_exception_info());catchInfo.free();exceptionLast=0}function ___resumeException(catchInfoPtr){var catchInfo=new CatchInfo(catchInfoPtr);var ptr=catchInfo.get_base_ptr();if(!exceptionLast){exceptionLast=ptr}catchInfo.free();throw ptr}function ___cxa_find_matching_catch_2(){var thrown=exceptionLast;if(!thrown){setTempRet0(0);return 0|0}var info=new ExceptionInfo(thrown);var thrownType=info.get_type();var catchInfo=new CatchInfo;catchInfo.set_base_ptr(thrown);catchInfo.set_adjusted_ptr(thrown);if(!thrownType){setTempRet0(0);return catchInfo.ptr|0}var typeArray=Array.prototype.slice.call(arguments);for(var i=0;i=0;i--){var last=parts[i];if(last==="."){parts.splice(i,1)}else if(last===".."){parts.splice(i,1);up++}else if(up){parts.splice(i,1);up--}}if(allowAboveRoot){for(;up;up--){parts.unshift("..")}}return parts},normalize:function(path){var isAbsolute=path.charAt(0)==="/",trailingSlash=path.substr(-1)==="/";path=PATH.normalizeArray(path.split("/").filter(function(p){return!!p}),!isAbsolute).join("/");if(!path&&!isAbsolute){path="."}if(path&&trailingSlash){path+="/"}return(isAbsolute?"/":"")+path},dirname:function(path){var result=PATH.splitPath(path),root=result[0],dir=result[1];if(!root&&!dir){return"."}if(dir){dir=dir.substr(0,dir.length-1)}return root+dir},basename:function(path){if(path==="/")return"/";path=PATH.normalize(path);path=path.replace(/\/$/,"");var lastSlash=path.lastIndexOf("/");if(lastSlash===-1)return path;return path.substr(lastSlash+1)},extname:function(path){return PATH.splitPath(path)[3]},join:function(){var paths=Array.prototype.slice.call(arguments,0);return PATH.normalize(paths.join("/"))},join2:function(l,r){return PATH.normalize(l+"/"+r)}};function getRandomDevice(){if(typeof 
crypto=="object"&&typeof crypto["getRandomValues"]=="function"){var randomBuffer=new Uint8Array(1);return function(){crypto.getRandomValues(randomBuffer);return randomBuffer[0]}}else if(ENVIRONMENT_IS_NODE){try{var crypto_module=require("crypto");return function(){return crypto_module["randomBytes"](1)[0]}}catch(e){}}return function(){abort("randomDevice")}}var PATH_FS={resolve:function(){var resolvedPath="",resolvedAbsolute=false;for(var i=arguments.length-1;i>=-1&&!resolvedAbsolute;i--){var path=i>=0?arguments[i]:FS.cwd();if(typeof path!="string"){throw new TypeError("Arguments to path.resolve must be strings")}else if(!path){return""}resolvedPath=path+"/"+resolvedPath;resolvedAbsolute=path.charAt(0)==="/"}resolvedPath=PATH.normalizeArray(resolvedPath.split("/").filter(function(p){return!!p}),!resolvedAbsolute).join("/");return(resolvedAbsolute?"/":"")+resolvedPath||"."},relative:function(from,to){from=PATH_FS.resolve(from).substr(1);to=PATH_FS.resolve(to).substr(1);function trim(arr){var start=0;for(;start=0;end--){if(arr[end]!=="")break}if(start>end)return[];return arr.slice(start,end-start+1)}var fromParts=trim(from.split("/"));var toParts=trim(to.split("/"));var length=Math.min(fromParts.length,toParts.length);var samePartsLength=length;for(var i=0;i0){result=buf.slice(0,bytesRead).toString("utf-8")}else{result=null}}else if(typeof window!="undefined"&&typeof window.prompt=="function"){result=window.prompt("Input: ");if(result!==null){result+="\n"}}else if(typeof readline=="function"){result=readline();if(result!==null){result+="\n"}}if(!result){return null}tty.input=intArrayFromString(result,true)}return tty.input.shift()},put_char:function(tty,val){if(val===null||val===10){out(UTF8ArrayToString(tty.output,0));tty.output=[]}else{if(val!=0)tty.output.push(val)}},flush:function(tty){if(tty.output&&tty.output.length>0){out(UTF8ArrayToString(tty.output,0));tty.output=[]}}},default_tty1_ops:{put_char:function(tty,val){if(val===null||val===10){err(UTF8ArrayToString(tty.output,0));tty.output=[]}else{if(val!=0)tty.output.push(val)}},flush:function(tty){if(tty.output&&tty.output.length>0){err(UTF8ArrayToString(tty.output,0));tty.output=[]}}}};function zeroMemory(address,size){HEAPU8.fill(0,address,address+size)}function alignMemory(size,alignment){return Math.ceil(size/alignment)*alignment}function mmapAlloc(size){size=alignMemory(size,65536);var ptr=_emscripten_builtin_memalign(65536,size);if(!ptr)return 0;zeroMemory(ptr,size);return ptr}var MEMFS={ops_table:null,mount:function(mount){return MEMFS.createNode(null,"/",16384|511,0)},createNode:function(parent,name,mode,dev){if(FS.isBlkdev(mode)||FS.isFIFO(mode)){throw new FS.ErrnoError(63)}if(!MEMFS.ops_table){MEMFS.ops_table={dir:{node:{getattr:MEMFS.node_ops.getattr,setattr:MEMFS.node_ops.setattr,lookup:MEMFS.node_ops.lookup,mknod:MEMFS.node_ops.mknod,rename:MEMFS.node_ops.rename,unlink:MEMFS.node_ops.unlink,rmdir:MEMFS.node_ops.rmdir,readdir:MEMFS.node_ops.readdir,symlink:MEMFS.node_ops.symlink},stream:{llseek:MEMFS.stream_ops.llseek}},file:{node:{getattr:MEMFS.node_ops.getattr,setattr:MEMFS.node_ops.setattr},stream:{llseek:MEMFS.stream_ops.llseek,read:MEMFS.stream_ops.read,write:MEMFS.stream_ops.write,allocate:MEMFS.stream_ops.allocate,mmap:MEMFS.stream_ops.mmap,msync:MEMFS.stream_ops.msync}},link:{node:{getattr:MEMFS.node_ops.getattr,setattr:MEMFS.node_ops.setattr,readlink:MEMFS.node_ops.readlink},stream:{}},chrdev:{node:{getattr:MEMFS.node_ops.getattr,setattr:MEMFS.node_ops.setattr},stream:FS.chrdev_stream_ops}}}var 
node=FS.createNode(parent,name,mode,dev);if(FS.isDir(node.mode)){node.node_ops=MEMFS.ops_table.dir.node;node.stream_ops=MEMFS.ops_table.dir.stream;node.contents={}}else if(FS.isFile(node.mode)){node.node_ops=MEMFS.ops_table.file.node;node.stream_ops=MEMFS.ops_table.file.stream;node.usedBytes=0;node.contents=null}else if(FS.isLink(node.mode)){node.node_ops=MEMFS.ops_table.link.node;node.stream_ops=MEMFS.ops_table.link.stream}else if(FS.isChrdev(node.mode)){node.node_ops=MEMFS.ops_table.chrdev.node;node.stream_ops=MEMFS.ops_table.chrdev.stream}node.timestamp=Date.now();if(parent){parent.contents[name]=node;parent.timestamp=node.timestamp}return node},getFileDataAsTypedArray:function(node){if(!node.contents)return new Uint8Array(0);if(node.contents.subarray)return node.contents.subarray(0,node.usedBytes);return new Uint8Array(node.contents)},expandFileStorage:function(node,newCapacity){var prevCapacity=node.contents?node.contents.length:0;if(prevCapacity>=newCapacity)return;var CAPACITY_DOUBLING_MAX=1024*1024;newCapacity=Math.max(newCapacity,prevCapacity*(prevCapacity>>0);if(prevCapacity!=0)newCapacity=Math.max(newCapacity,256);var oldContents=node.contents;node.contents=new Uint8Array(newCapacity);if(node.usedBytes>0)node.contents.set(oldContents.subarray(0,node.usedBytes),0)},resizeFileStorage:function(node,newSize){if(node.usedBytes==newSize)return;if(newSize==0){node.contents=null;node.usedBytes=0}else{var oldContents=node.contents;node.contents=new Uint8Array(newSize);if(oldContents){node.contents.set(oldContents.subarray(0,Math.min(newSize,node.usedBytes)))}node.usedBytes=newSize}},node_ops:{getattr:function(node){var attr={};attr.dev=FS.isChrdev(node.mode)?node.id:1;attr.ino=node.id;attr.mode=node.mode;attr.nlink=1;attr.uid=0;attr.gid=0;attr.rdev=node.rdev;if(FS.isDir(node.mode)){attr.size=4096}else if(FS.isFile(node.mode)){attr.size=node.usedBytes}else if(FS.isLink(node.mode)){attr.size=node.link.length}else{attr.size=0}attr.atime=new Date(node.timestamp);attr.mtime=new Date(node.timestamp);attr.ctime=new Date(node.timestamp);attr.blksize=4096;attr.blocks=Math.ceil(attr.size/attr.blksize);return attr},setattr:function(node,attr){if(attr.mode!==undefined){node.mode=attr.mode}if(attr.timestamp!==undefined){node.timestamp=attr.timestamp}if(attr.size!==undefined){MEMFS.resizeFileStorage(node,attr.size)}},lookup:function(parent,name){throw FS.genericErrors[44]},mknod:function(parent,name,mode,dev){return MEMFS.createNode(parent,name,mode,dev)},rename:function(old_node,new_dir,new_name){if(FS.isDir(old_node.mode)){var new_node;try{new_node=FS.lookupNode(new_dir,new_name)}catch(e){}if(new_node){for(var i in new_node.contents){throw new FS.ErrnoError(55)}}}delete old_node.parent.contents[old_node.name];old_node.parent.timestamp=Date.now();old_node.name=new_name;new_dir.contents[new_name]=old_node;new_dir.timestamp=old_node.parent.timestamp;old_node.parent=new_dir},unlink:function(parent,name){delete parent.contents[name];parent.timestamp=Date.now()},rmdir:function(parent,name){var node=FS.lookupNode(parent,name);for(var i in node.contents){throw new FS.ErrnoError(55)}delete parent.contents[name];parent.timestamp=Date.now()},readdir:function(node){var entries=[".",".."];for(var key in node.contents){if(!node.contents.hasOwnProperty(key)){continue}entries.push(key)}return entries},symlink:function(parent,newname,oldpath){var node=MEMFS.createNode(parent,newname,511|40960,0);node.link=oldpath;return node},readlink:function(node){if(!FS.isLink(node.mode)){throw new FS.ErrnoError(28)}return 
node.link}},stream_ops:{read:function(stream,buffer,offset,length,position){var contents=stream.node.contents;if(position>=stream.node.usedBytes)return 0;var size=Math.min(stream.node.usedBytes-position,length);if(size>8&&contents.subarray){buffer.set(contents.subarray(position,position+size),offset)}else{for(var i=0;i0||position+length{if(typeof indexedDB!="undefined")return indexedDB;var ret=null;if(typeof window=="object")ret=window.indexedDB||window.mozIndexedDB||window.webkitIndexedDB||window.msIndexedDB;assert(ret,"IDBFS used, but indexedDB not supported");return ret},DB_VERSION:21,DB_STORE_NAME:"FILE_DATA",mount:function(mount){return MEMFS.mount.apply(null,arguments)},syncfs:(mount,populate,callback)=>{IDBFS.getLocalSet(mount,(err,local)=>{if(err)return callback(err);IDBFS.getRemoteSet(mount,(err,remote)=>{if(err)return callback(err);var src=populate?remote:local;var dst=populate?local:remote;IDBFS.reconcile(src,dst,callback)})})},getDB:(name,callback)=>{var db=IDBFS.dbs[name];if(db){return callback(null,db)}var req;try{req=IDBFS.indexedDB().open(name,IDBFS.DB_VERSION)}catch(e){return callback(e)}if(!req){return callback("Unable to connect to IndexedDB")}req.onupgradeneeded=(e=>{var db=e.target.result;var transaction=e.target.transaction;var fileStore;if(db.objectStoreNames.contains(IDBFS.DB_STORE_NAME)){fileStore=transaction.objectStore(IDBFS.DB_STORE_NAME)}else{fileStore=db.createObjectStore(IDBFS.DB_STORE_NAME)}if(!fileStore.indexNames.contains("timestamp")){fileStore.createIndex("timestamp","timestamp",{unique:false})}});req.onsuccess=(()=>{db=req.result;IDBFS.dbs[name]=db;callback(null,db)});req.onerror=(e=>{callback(this.error);e.preventDefault()})},getLocalSet:(mount,callback)=>{var entries={};function isRealDir(p){return p!=="."&&p!==".."}function toAbsolute(root){return p=>{return PATH.join2(root,p)}}var check=FS.readdir(mount.mountpoint).filter(isRealDir).map(toAbsolute(mount.mountpoint));while(check.length){var path=check.pop();var stat;try{stat=FS.stat(path)}catch(e){return callback(e)}if(FS.isDir(stat.mode)){check.push.apply(check,FS.readdir(path).filter(isRealDir).map(toAbsolute(path)))}entries[path]={"timestamp":stat.mtime}}return callback(null,{type:"local",entries:entries})},getRemoteSet:(mount,callback)=>{var entries={};IDBFS.getDB(mount.mountpoint,(err,db)=>{if(err)return callback(err);try{var transaction=db.transaction([IDBFS.DB_STORE_NAME],"readonly");transaction.onerror=(e=>{callback(this.error);e.preventDefault()});var store=transaction.objectStore(IDBFS.DB_STORE_NAME);var index=store.index("timestamp");index.openKeyCursor().onsuccess=(event=>{var cursor=event.target.result;if(!cursor){return callback(null,{type:"remote",db:db,entries:entries})}entries[cursor.primaryKey]={"timestamp":cursor.key};cursor.continue()})}catch(e){return callback(e)}})},loadLocalEntry:(path,callback)=>{var stat,node;try{var lookup=FS.lookupPath(path);node=lookup.node;stat=FS.stat(path)}catch(e){return callback(e)}if(FS.isDir(stat.mode)){return callback(null,{"timestamp":stat.mtime,"mode":stat.mode})}else if(FS.isFile(stat.mode)){node.contents=MEMFS.getFileDataAsTypedArray(node);return callback(null,{"timestamp":stat.mtime,"mode":stat.mode,"contents":node.contents})}else{return callback(new Error("node type not supported"))}},storeLocalEntry:(path,entry,callback)=>{try{if(FS.isDir(entry["mode"])){FS.mkdirTree(path,entry["mode"])}else if(FS.isFile(entry["mode"])){FS.writeFile(path,entry["contents"],{canOwn:true})}else{return callback(new Error("node type not 
supported"))}FS.chmod(path,entry["mode"]);FS.utime(path,entry["timestamp"],entry["timestamp"])}catch(e){return callback(e)}callback(null)},removeLocalEntry:(path,callback)=>{try{var lookup=FS.lookupPath(path);var stat=FS.stat(path);if(FS.isDir(stat.mode)){FS.rmdir(path)}else if(FS.isFile(stat.mode)){FS.unlink(path)}}catch(e){return callback(e)}callback(null)},loadRemoteEntry:(store,path,callback)=>{var req=store.get(path);req.onsuccess=(event=>{callback(null,event.target.result)});req.onerror=(e=>{callback(this.error);e.preventDefault()})},storeRemoteEntry:(store,path,entry,callback)=>{try{var req=store.put(entry,path)}catch(e){callback(e);return}req.onsuccess=(()=>{callback(null)});req.onerror=(e=>{callback(this.error);e.preventDefault()})},removeRemoteEntry:(store,path,callback)=>{var req=store.delete(path);req.onsuccess=(()=>{callback(null)});req.onerror=(e=>{callback(this.error);e.preventDefault()})},reconcile:(src,dst,callback)=>{var total=0;var create=[];Object.keys(src.entries).forEach(function(key){var e=src.entries[key];var e2=dst.entries[key];if(!e2||e["timestamp"].getTime()!=e2["timestamp"].getTime()){create.push(key);total++}});var remove=[];Object.keys(dst.entries).forEach(function(key){if(!src.entries[key]){remove.push(key);total++}});if(!total){return callback(null)}var errored=false;var db=src.type==="remote"?src.db:dst.db;var transaction=db.transaction([IDBFS.DB_STORE_NAME],"readwrite");var store=transaction.objectStore(IDBFS.DB_STORE_NAME);function done(err){if(err&&!errored){errored=true;return callback(err)}}transaction.onerror=(e=>{done(this.error);e.preventDefault()});transaction.oncomplete=(e=>{if(!errored){callback(null)}});create.sort().forEach(path=>{if(dst.type==="local"){IDBFS.loadRemoteEntry(store,path,(err,entry)=>{if(err)return done(err);IDBFS.storeLocalEntry(path,entry,done)})}else{IDBFS.loadLocalEntry(path,(err,entry)=>{if(err)return done(err);IDBFS.storeRemoteEntry(store,path,entry,done)})}});remove.sort().reverse().forEach(path=>{if(dst.type==="local"){IDBFS.removeLocalEntry(path,done)}else{IDBFS.removeRemoteEntry(store,path,done)}})}};var FS={root:null,mounts:[],devices:{},streams:[],nextInode:1,nameTable:null,currentPath:"/",initialized:false,ignorePermissions:true,ErrnoError:null,genericErrors:{},filesystems:null,syncFSRequests:0,lookupPath:(path,opts={})=>{path=PATH_FS.resolve(FS.cwd(),path);if(!path)return{path:"",node:null};var defaults={follow_mount:true,recurse_count:0};opts=Object.assign(defaults,opts);if(opts.recurse_count>8){throw new FS.ErrnoError(32)}var parts=PATH.normalizeArray(path.split("/").filter(p=>!!p),false);var current=FS.root;var current_path="/";for(var i=0;i40){throw new FS.ErrnoError(32)}}}}return{path:current_path,node:current}},getPath:node=>{var path;while(true){if(FS.isRoot(node)){var mount=node.mount.mountpoint;if(!path)return mount;return mount[mount.length-1]!=="/"?mount+"/"+path:mount+path}path=path?node.name+"/"+path:node.name;node=node.parent}},hashName:(parentid,name)=>{var hash=0;for(var i=0;i>>0)%FS.nameTable.length},hashAddNode:node=>{var hash=FS.hashName(node.parent.id,node.name);node.name_next=FS.nameTable[hash];FS.nameTable[hash]=node},hashRemoveNode:node=>{var hash=FS.hashName(node.parent.id,node.name);if(FS.nameTable[hash]===node){FS.nameTable[hash]=node.name_next}else{var current=FS.nameTable[hash];while(current){if(current.name_next===node){current.name_next=node.name_next;break}current=current.name_next}}},lookupNode:(parent,name)=>{var errCode=FS.mayLookup(parent);if(errCode){throw new 
FS.ErrnoError(errCode,parent)}var hash=FS.hashName(parent.id,name);for(var node=FS.nameTable[hash];node;node=node.name_next){var nodeName=node.name;if(node.parent.id===parent.id&&nodeName===name){return node}}return FS.lookup(parent,name)},createNode:(parent,name,mode,rdev)=>{var node=new FS.FSNode(parent,name,mode,rdev);FS.hashAddNode(node);return node},destroyNode:node=>{FS.hashRemoveNode(node)},isRoot:node=>{return node===node.parent},isMountpoint:node=>{return!!node.mounted},isFile:mode=>{return(mode&61440)===32768},isDir:mode=>{return(mode&61440)===16384},isLink:mode=>{return(mode&61440)===40960},isChrdev:mode=>{return(mode&61440)===8192},isBlkdev:mode=>{return(mode&61440)===24576},isFIFO:mode=>{return(mode&61440)===4096},isSocket:mode=>{return(mode&49152)===49152},flagModes:{"r":0,"r+":2,"w":577,"w+":578,"a":1089,"a+":1090},modeStringToFlags:str=>{var flags=FS.flagModes[str];if(typeof flags=="undefined"){throw new Error("Unknown file open mode: "+str)}return flags},flagsToPermissionString:flag=>{var perms=["r","w","rw"][flag&3];if(flag&512){perms+="w"}return perms},nodePermissions:(node,perms)=>{if(FS.ignorePermissions){return 0}if(perms.includes("r")&&!(node.mode&292)){return 2}else if(perms.includes("w")&&!(node.mode&146)){return 2}else if(perms.includes("x")&&!(node.mode&73)){return 2}return 0},mayLookup:dir=>{var errCode=FS.nodePermissions(dir,"x");if(errCode)return errCode;if(!dir.node_ops.lookup)return 2;return 0},mayCreate:(dir,name)=>{try{var node=FS.lookupNode(dir,name);return 20}catch(e){}return FS.nodePermissions(dir,"wx")},mayDelete:(dir,name,isdir)=>{var node;try{node=FS.lookupNode(dir,name)}catch(e){return e.errno}var errCode=FS.nodePermissions(dir,"wx");if(errCode){return errCode}if(isdir){if(!FS.isDir(node.mode)){return 54}if(FS.isRoot(node)||FS.getPath(node)===FS.cwd()){return 10}}else{if(FS.isDir(node.mode)){return 31}}return 0},mayOpen:(node,flags)=>{if(!node){return 44}if(FS.isLink(node.mode)){return 32}else if(FS.isDir(node.mode)){if(FS.flagsToPermissionString(flags)!=="r"||flags&512){return 31}}return FS.nodePermissions(node,FS.flagsToPermissionString(flags))},MAX_OPEN_FDS:4096,nextfd:(fd_start=0,fd_end=FS.MAX_OPEN_FDS)=>{for(var fd=fd_start;fd<=fd_end;fd++){if(!FS.streams[fd]){return fd}}throw new FS.ErrnoError(33)},getStream:fd=>FS.streams[fd],createStream:(stream,fd_start,fd_end)=>{if(!FS.FSStream){FS.FSStream=function(){};FS.FSStream.prototype={object:{get:function(){return this.node},set:function(val){this.node=val}},isRead:{get:function(){return(this.flags&2097155)!==1}},isWrite:{get:function(){return(this.flags&2097155)!==0}},isAppend:{get:function(){return this.flags&1024}}}}stream=Object.assign(new FS.FSStream,stream);var fd=FS.nextfd(fd_start,fd_end);stream.fd=fd;FS.streams[fd]=stream;return stream},closeStream:fd=>{FS.streams[fd]=null},chrdev_stream_ops:{open:stream=>{var device=FS.getDevice(stream.node.rdev);stream.stream_ops=device.stream_ops;if(stream.stream_ops.open){stream.stream_ops.open(stream)}},llseek:()=>{throw new FS.ErrnoError(70)}},major:dev=>dev>>8,minor:dev=>dev&255,makedev:(ma,mi)=>ma<<8|mi,registerDevice:(dev,ops)=>{FS.devices[dev]={stream_ops:ops}},getDevice:dev=>FS.devices[dev],getMounts:mount=>{var mounts=[];var check=[mount];while(check.length){var m=check.pop();mounts.push(m);check.push.apply(check,m.mounts)}return mounts},syncfs:(populate,callback)=>{if(typeof populate=="function"){callback=populate;populate=false}FS.syncFSRequests++;if(FS.syncFSRequests>1){err("warning: "+FS.syncFSRequests+" FS.syncfs operations in flight at 
once, probably just doing extra work")}var mounts=FS.getMounts(FS.root.mount);var completed=0;function doCallback(errCode){FS.syncFSRequests--;return callback(errCode)}function done(errCode){if(errCode){if(!done.errored){done.errored=true;return doCallback(errCode)}return}if(++completed>=mounts.length){doCallback(null)}}mounts.forEach(mount=>{if(!mount.type.syncfs){return done(null)}mount.type.syncfs(mount,populate,done)})},mount:(type,opts,mountpoint)=>{var root=mountpoint==="/";var pseudo=!mountpoint;var node;if(root&&FS.root){throw new FS.ErrnoError(10)}else if(!root&&!pseudo){var lookup=FS.lookupPath(mountpoint,{follow_mount:false});mountpoint=lookup.path;node=lookup.node;if(FS.isMountpoint(node)){throw new FS.ErrnoError(10)}if(!FS.isDir(node.mode)){throw new FS.ErrnoError(54)}}var mount={type:type,opts:opts,mountpoint:mountpoint,mounts:[]};var mountRoot=type.mount(mount);mountRoot.mount=mount;mount.root=mountRoot;if(root){FS.root=mountRoot}else if(node){node.mounted=mount;if(node.mount){node.mount.mounts.push(mount)}}return mountRoot},unmount:mountpoint=>{var lookup=FS.lookupPath(mountpoint,{follow_mount:false});if(!FS.isMountpoint(lookup.node)){throw new FS.ErrnoError(28)}var node=lookup.node;var mount=node.mounted;var mounts=FS.getMounts(mount);Object.keys(FS.nameTable).forEach(hash=>{var current=FS.nameTable[hash];while(current){var next=current.name_next;if(mounts.includes(current.mount)){FS.destroyNode(current)}current=next}});node.mounted=null;var idx=node.mount.mounts.indexOf(mount);node.mount.mounts.splice(idx,1)},lookup:(parent,name)=>{return parent.node_ops.lookup(parent,name)},mknod:(path,mode,dev)=>{var lookup=FS.lookupPath(path,{parent:true});var parent=lookup.node;var name=PATH.basename(path);if(!name||name==="."||name===".."){throw new FS.ErrnoError(28)}var errCode=FS.mayCreate(parent,name);if(errCode){throw new FS.ErrnoError(errCode)}if(!parent.node_ops.mknod){throw new FS.ErrnoError(63)}return parent.node_ops.mknod(parent,name,mode,dev)},create:(path,mode)=>{mode=mode!==undefined?mode:438;mode&=4095;mode|=32768;return FS.mknod(path,mode,0)},mkdir:(path,mode)=>{mode=mode!==undefined?mode:511;mode&=511|512;mode|=16384;return FS.mknod(path,mode,0)},mkdirTree:(path,mode)=>{var dirs=path.split("/");var d="";for(var i=0;i{if(typeof dev=="undefined"){dev=mode;mode=438}mode|=8192;return FS.mknod(path,mode,dev)},symlink:(oldpath,newpath)=>{if(!PATH_FS.resolve(oldpath)){throw new FS.ErrnoError(44)}var lookup=FS.lookupPath(newpath,{parent:true});var parent=lookup.node;if(!parent){throw new FS.ErrnoError(44)}var newname=PATH.basename(newpath);var errCode=FS.mayCreate(parent,newname);if(errCode){throw new FS.ErrnoError(errCode)}if(!parent.node_ops.symlink){throw new FS.ErrnoError(63)}return parent.node_ops.symlink(parent,newname,oldpath)},rename:(old_path,new_path)=>{var old_dirname=PATH.dirname(old_path);var new_dirname=PATH.dirname(new_path);var old_name=PATH.basename(old_path);var new_name=PATH.basename(new_path);var lookup,old_dir,new_dir;lookup=FS.lookupPath(old_path,{parent:true});old_dir=lookup.node;lookup=FS.lookupPath(new_path,{parent:true});new_dir=lookup.node;if(!old_dir||!new_dir)throw new FS.ErrnoError(44);if(old_dir.mount!==new_dir.mount){throw new FS.ErrnoError(75)}var old_node=FS.lookupNode(old_dir,old_name);var relative=PATH_FS.relative(old_path,new_dirname);if(relative.charAt(0)!=="."){throw new FS.ErrnoError(28)}relative=PATH_FS.relative(new_path,old_dirname);if(relative.charAt(0)!=="."){throw new FS.ErrnoError(55)}var 
new_node;try{new_node=FS.lookupNode(new_dir,new_name)}catch(e){}if(old_node===new_node){return}var isdir=FS.isDir(old_node.mode);var errCode=FS.mayDelete(old_dir,old_name,isdir);if(errCode){throw new FS.ErrnoError(errCode)}errCode=new_node?FS.mayDelete(new_dir,new_name,isdir):FS.mayCreate(new_dir,new_name);if(errCode){throw new FS.ErrnoError(errCode)}if(!old_dir.node_ops.rename){throw new FS.ErrnoError(63)}if(FS.isMountpoint(old_node)||new_node&&FS.isMountpoint(new_node)){throw new FS.ErrnoError(10)}if(new_dir!==old_dir){errCode=FS.nodePermissions(old_dir,"w");if(errCode){throw new FS.ErrnoError(errCode)}}FS.hashRemoveNode(old_node);try{old_dir.node_ops.rename(old_node,new_dir,new_name)}catch(e){throw e}finally{FS.hashAddNode(old_node)}},rmdir:path=>{var lookup=FS.lookupPath(path,{parent:true});var parent=lookup.node;var name=PATH.basename(path);var node=FS.lookupNode(parent,name);var errCode=FS.mayDelete(parent,name,true);if(errCode){throw new FS.ErrnoError(errCode)}if(!parent.node_ops.rmdir){throw new FS.ErrnoError(63)}if(FS.isMountpoint(node)){throw new FS.ErrnoError(10)}parent.node_ops.rmdir(parent,name);FS.destroyNode(node)},readdir:path=>{var lookup=FS.lookupPath(path,{follow:true});var node=lookup.node;if(!node.node_ops.readdir){throw new FS.ErrnoError(54)}return node.node_ops.readdir(node)},unlink:path=>{var lookup=FS.lookupPath(path,{parent:true});var parent=lookup.node;if(!parent){throw new FS.ErrnoError(44)}var name=PATH.basename(path);var node=FS.lookupNode(parent,name);var errCode=FS.mayDelete(parent,name,false);if(errCode){throw new FS.ErrnoError(errCode)}if(!parent.node_ops.unlink){throw new FS.ErrnoError(63)}if(FS.isMountpoint(node)){throw new FS.ErrnoError(10)}parent.node_ops.unlink(parent,name);FS.destroyNode(node)},readlink:path=>{var lookup=FS.lookupPath(path);var link=lookup.node;if(!link){throw new FS.ErrnoError(44)}if(!link.node_ops.readlink){throw new FS.ErrnoError(28)}return PATH_FS.resolve(FS.getPath(link.parent),link.node_ops.readlink(link))},stat:(path,dontFollow)=>{var lookup=FS.lookupPath(path,{follow:!dontFollow});var node=lookup.node;if(!node){throw new FS.ErrnoError(44)}if(!node.node_ops.getattr){throw new FS.ErrnoError(63)}return node.node_ops.getattr(node)},lstat:path=>{return FS.stat(path,true)},chmod:(path,mode,dontFollow)=>{var node;if(typeof path=="string"){var lookup=FS.lookupPath(path,{follow:!dontFollow});node=lookup.node}else{node=path}if(!node.node_ops.setattr){throw new FS.ErrnoError(63)}node.node_ops.setattr(node,{mode:mode&4095|node.mode&~4095,timestamp:Date.now()})},lchmod:(path,mode)=>{FS.chmod(path,mode,true)},fchmod:(fd,mode)=>{var stream=FS.getStream(fd);if(!stream){throw new FS.ErrnoError(8)}FS.chmod(stream.node,mode)},chown:(path,uid,gid,dontFollow)=>{var node;if(typeof path=="string"){var lookup=FS.lookupPath(path,{follow:!dontFollow});node=lookup.node}else{node=path}if(!node.node_ops.setattr){throw new FS.ErrnoError(63)}node.node_ops.setattr(node,{timestamp:Date.now()})},lchown:(path,uid,gid)=>{FS.chown(path,uid,gid,true)},fchown:(fd,uid,gid)=>{var stream=FS.getStream(fd);if(!stream){throw new FS.ErrnoError(8)}FS.chown(stream.node,uid,gid)},truncate:(path,len)=>{if(len<0){throw new FS.ErrnoError(28)}var node;if(typeof path=="string"){var lookup=FS.lookupPath(path,{follow:true});node=lookup.node}else{node=path}if(!node.node_ops.setattr){throw new FS.ErrnoError(63)}if(FS.isDir(node.mode)){throw new FS.ErrnoError(31)}if(!FS.isFile(node.mode)){throw new FS.ErrnoError(28)}var errCode=FS.nodePermissions(node,"w");if(errCode){throw new 
FS.ErrnoError(errCode)}node.node_ops.setattr(node,{size:len,timestamp:Date.now()})},ftruncate:(fd,len)=>{var stream=FS.getStream(fd);if(!stream){throw new FS.ErrnoError(8)}if((stream.flags&2097155)===0){throw new FS.ErrnoError(28)}FS.truncate(stream.node,len)},utime:(path,atime,mtime)=>{var lookup=FS.lookupPath(path,{follow:true});var node=lookup.node;node.node_ops.setattr(node,{timestamp:Math.max(atime,mtime)})},open:(path,flags,mode,fd_start,fd_end)=>{if(path===""){throw new FS.ErrnoError(44)}flags=typeof flags=="string"?FS.modeStringToFlags(flags):flags;mode=typeof mode=="undefined"?438:mode;if(flags&64){mode=mode&4095|32768}else{mode=0}var node;if(typeof path=="object"){node=path}else{path=PATH.normalize(path);try{var lookup=FS.lookupPath(path,{follow:!(flags&131072)});node=lookup.node}catch(e){}}var created=false;if(flags&64){if(node){if(flags&128){throw new FS.ErrnoError(20)}}else{node=FS.mknod(path,mode,0);created=true}}if(!node){throw new FS.ErrnoError(44)}if(FS.isChrdev(node.mode)){flags&=~512}if(flags&65536&&!FS.isDir(node.mode)){throw new FS.ErrnoError(54)}if(!created){var errCode=FS.mayOpen(node,flags);if(errCode){throw new FS.ErrnoError(errCode)}}if(flags&512){FS.truncate(node,0)}flags&=~(128|512|131072);var stream=FS.createStream({node:node,path:FS.getPath(node),flags:flags,seekable:true,position:0,stream_ops:node.stream_ops,ungotten:[],error:false},fd_start,fd_end);if(stream.stream_ops.open){stream.stream_ops.open(stream)}if(Module["logReadFiles"]&&!(flags&1)){if(!FS.readFiles)FS.readFiles={};if(!(path in FS.readFiles)){FS.readFiles[path]=1}}return stream},close:stream=>{if(FS.isClosed(stream)){throw new FS.ErrnoError(8)}if(stream.getdents)stream.getdents=null;try{if(stream.stream_ops.close){stream.stream_ops.close(stream)}}catch(e){throw e}finally{FS.closeStream(stream.fd)}stream.fd=null},isClosed:stream=>{return stream.fd===null},llseek:(stream,offset,whence)=>{if(FS.isClosed(stream)){throw new FS.ErrnoError(8)}if(!stream.seekable||!stream.stream_ops.llseek){throw new FS.ErrnoError(70)}if(whence!=0&&whence!=1&&whence!=2){throw new FS.ErrnoError(28)}stream.position=stream.stream_ops.llseek(stream,offset,whence);stream.ungotten=[];return stream.position},read:(stream,buffer,offset,length,position)=>{if(length<0||position<0){throw new FS.ErrnoError(28)}if(FS.isClosed(stream)){throw new FS.ErrnoError(8)}if((stream.flags&2097155)===1){throw new FS.ErrnoError(8)}if(FS.isDir(stream.node.mode)){throw new FS.ErrnoError(31)}if(!stream.stream_ops.read){throw new FS.ErrnoError(28)}var seeking=typeof position!="undefined";if(!seeking){position=stream.position}else if(!stream.seekable){throw new FS.ErrnoError(70)}var bytesRead=stream.stream_ops.read(stream,buffer,offset,length,position);if(!seeking)stream.position+=bytesRead;return bytesRead},write:(stream,buffer,offset,length,position,canOwn)=>{if(length<0||position<0){throw new FS.ErrnoError(28)}if(FS.isClosed(stream)){throw new FS.ErrnoError(8)}if((stream.flags&2097155)===0){throw new FS.ErrnoError(8)}if(FS.isDir(stream.node.mode)){throw new FS.ErrnoError(31)}if(!stream.stream_ops.write){throw new FS.ErrnoError(28)}if(stream.seekable&&stream.flags&1024){FS.llseek(stream,0,2)}var seeking=typeof position!="undefined";if(!seeking){position=stream.position}else if(!stream.seekable){throw new FS.ErrnoError(70)}var bytesWritten=stream.stream_ops.write(stream,buffer,offset,length,position,canOwn);if(!seeking)stream.position+=bytesWritten;return bytesWritten},allocate:(stream,offset,length)=>{if(FS.isClosed(stream)){throw new 
FS.ErrnoError(8)}if(offset<0||length<=0){throw new FS.ErrnoError(28)}if((stream.flags&2097155)===0){throw new FS.ErrnoError(8)}if(!FS.isFile(stream.node.mode)&&!FS.isDir(stream.node.mode)){throw new FS.ErrnoError(43)}if(!stream.stream_ops.allocate){throw new FS.ErrnoError(138)}stream.stream_ops.allocate(stream,offset,length)},mmap:(stream,address,length,position,prot,flags)=>{if((prot&2)!==0&&(flags&2)===0&&(stream.flags&2097155)!==2){throw new FS.ErrnoError(2)}if((stream.flags&2097155)===1){throw new FS.ErrnoError(2)}if(!stream.stream_ops.mmap){throw new FS.ErrnoError(43)}return stream.stream_ops.mmap(stream,address,length,position,prot,flags)},msync:(stream,buffer,offset,length,mmapFlags)=>{if(!stream||!stream.stream_ops.msync){return 0}return stream.stream_ops.msync(stream,buffer,offset,length,mmapFlags)},munmap:stream=>0,ioctl:(stream,cmd,arg)=>{if(!stream.stream_ops.ioctl){throw new FS.ErrnoError(59)}return stream.stream_ops.ioctl(stream,cmd,arg)},readFile:(path,opts={})=>{opts.flags=opts.flags||0;opts.encoding=opts.encoding||"binary";if(opts.encoding!=="utf8"&&opts.encoding!=="binary"){throw new Error('Invalid encoding type "'+opts.encoding+'"')}var ret;var stream=FS.open(path,opts.flags);var stat=FS.stat(path);var length=stat.size;var buf=new Uint8Array(length);FS.read(stream,buf,0,length,0);if(opts.encoding==="utf8"){ret=UTF8ArrayToString(buf,0)}else if(opts.encoding==="binary"){ret=buf}FS.close(stream);return ret},writeFile:(path,data,opts={})=>{opts.flags=opts.flags||577;var stream=FS.open(path,opts.flags,opts.mode);if(typeof data=="string"){var buf=new Uint8Array(lengthBytesUTF8(data)+1);var actualNumBytes=stringToUTF8Array(data,buf,0,buf.length);FS.write(stream,buf,0,actualNumBytes,undefined,opts.canOwn)}else if(ArrayBuffer.isView(data)){FS.write(stream,data,0,data.byteLength,undefined,opts.canOwn)}else{throw new Error("Unsupported data type")}FS.close(stream)},cwd:()=>FS.currentPath,chdir:path=>{var lookup=FS.lookupPath(path,{follow:true});if(lookup.node===null){throw new FS.ErrnoError(44)}if(!FS.isDir(lookup.node.mode)){throw new FS.ErrnoError(54)}var errCode=FS.nodePermissions(lookup.node,"x");if(errCode){throw new FS.ErrnoError(errCode)}FS.currentPath=lookup.path},createDefaultDirectories:()=>{FS.mkdir("/tmp");FS.mkdir("/home");FS.mkdir("/home/web_user")},createDefaultDevices:()=>{FS.mkdir("/dev");FS.registerDevice(FS.makedev(1,3),{read:()=>0,write:(stream,buffer,offset,length,pos)=>length});FS.mkdev("/dev/null",FS.makedev(1,3));TTY.register(FS.makedev(5,0),TTY.default_tty_ops);TTY.register(FS.makedev(6,0),TTY.default_tty1_ops);FS.mkdev("/dev/tty",FS.makedev(5,0));FS.mkdev("/dev/tty1",FS.makedev(6,0));var random_device=getRandomDevice();FS.createDevice("/dev","random",random_device);FS.createDevice("/dev","urandom",random_device);FS.mkdir("/dev/shm");FS.mkdir("/dev/shm/tmp")},createSpecialDirectories:()=>{FS.mkdir("/proc");var proc_self=FS.mkdir("/proc/self");FS.mkdir("/proc/self/fd");FS.mount({mount:()=>{var node=FS.createNode(proc_self,"fd",16384|511,73);node.node_ops={lookup:(parent,name)=>{var fd=+name;var stream=FS.getStream(fd);if(!stream)throw new FS.ErrnoError(8);var ret={parent:null,mount:{mountpoint:"fake"},node_ops:{readlink:()=>stream.path}};ret.parent=ret;return ret}};return 
node}},{},"/proc/self/fd")},createStandardStreams:()=>{if(Module["stdin"]){FS.createDevice("/dev","stdin",Module["stdin"])}else{FS.symlink("/dev/tty","/dev/stdin")}if(Module["stdout"]){FS.createDevice("/dev","stdout",null,Module["stdout"])}else{FS.symlink("/dev/tty","/dev/stdout")}if(Module["stderr"]){FS.createDevice("/dev","stderr",null,Module["stderr"])}else{FS.symlink("/dev/tty1","/dev/stderr")}var stdin=FS.open("/dev/stdin",0);var stdout=FS.open("/dev/stdout",1);var stderr=FS.open("/dev/stderr",1)},ensureErrnoError:()=>{if(FS.ErrnoError)return;FS.ErrnoError=function ErrnoError(errno,node){this.node=node;this.setErrno=function(errno){this.errno=errno};this.setErrno(errno);this.message="FS error"};FS.ErrnoError.prototype=new Error;FS.ErrnoError.prototype.constructor=FS.ErrnoError;[44].forEach(code=>{FS.genericErrors[code]=new FS.ErrnoError(code);FS.genericErrors[code].stack=""})},staticInit:()=>{FS.ensureErrnoError();FS.nameTable=new Array(4096);FS.mount(MEMFS,{},"/");FS.createDefaultDirectories();FS.createDefaultDevices();FS.createSpecialDirectories();FS.filesystems={"MEMFS":MEMFS,"IDBFS":IDBFS}},init:(input,output,error)=>{FS.init.initialized=true;FS.ensureErrnoError();Module["stdin"]=input||Module["stdin"];Module["stdout"]=output||Module["stdout"];Module["stderr"]=error||Module["stderr"];FS.createStandardStreams()},quit:()=>{FS.init.initialized=false;for(var i=0;i{var mode=0;if(canRead)mode|=292|73;if(canWrite)mode|=146;return mode},findObject:(path,dontResolveLastLink)=>{var ret=FS.analyzePath(path,dontResolveLastLink);if(ret.exists){return ret.object}else{return null}},analyzePath:(path,dontResolveLastLink)=>{try{var lookup=FS.lookupPath(path,{follow:!dontResolveLastLink});path=lookup.path}catch(e){}var ret={isRoot:false,exists:false,error:0,name:null,path:null,object:null,parentExists:false,parentPath:null,parentObject:null};try{var lookup=FS.lookupPath(path,{parent:true});ret.parentExists=true;ret.parentPath=lookup.path;ret.parentObject=lookup.node;ret.name=PATH.basename(path);lookup=FS.lookupPath(path,{follow:!dontResolveLastLink});ret.exists=true;ret.path=lookup.path;ret.object=lookup.node;ret.name=lookup.node.name;ret.isRoot=lookup.path==="/"}catch(e){ret.error=e.errno}return ret},createPath:(parent,path,canRead,canWrite)=>{parent=typeof parent=="string"?parent:FS.getPath(parent);var parts=path.split("/").reverse();while(parts.length){var part=parts.pop();if(!part)continue;var current=PATH.join2(parent,part);try{FS.mkdir(current)}catch(e){}parent=current}return current},createFile:(parent,name,properties,canRead,canWrite)=>{var path=PATH.join2(typeof parent=="string"?parent:FS.getPath(parent),name);var mode=FS.getMode(canRead,canWrite);return FS.create(path,mode)},createDataFile:(parent,name,data,canRead,canWrite,canOwn)=>{var path=name;if(parent){parent=typeof parent=="string"?parent:FS.getPath(parent);path=name?PATH.join2(parent,name):parent}var mode=FS.getMode(canRead,canWrite);var node=FS.create(path,mode);if(data){if(typeof data=="string"){var arr=new Array(data.length);for(var i=0,len=data.length;i{var path=PATH.join2(typeof parent=="string"?parent:FS.getPath(parent),name);var mode=FS.getMode(!!input,!!output);if(!FS.createDevice.major)FS.createDevice.major=64;var dev=FS.makedev(FS.createDevice.major++,0);FS.registerDevice(dev,{open:stream=>{stream.seekable=false},close:stream=>{if(output&&output.buffer&&output.buffer.length){output(10)}},read:(stream,buffer,offset,length,pos)=>{var bytesRead=0;for(var i=0;i{for(var 
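/* FS.createDataFile and FS.createDevice are the embedder-facing helpers in this block:
 * the first materialises a string or byte array as a regular file, the second registers
 * a character device whose read()/write() callbacks are supplied by the caller --
 * createStandardStreams uses it to wire Module["stdin"]/["stdout"]/["stderr"] to
 * /dev/stdin, /dev/stdout and /dev/stderr.
 */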
i=0;i{if(obj.isDevice||obj.isFolder||obj.link||obj.contents)return true;if(typeof XMLHttpRequest!="undefined"){throw new Error("Lazy loading should have been performed (contents set) in createLazyFile, but it was not. Lazy loading only works in web workers. Use --embed-file or --preload-file in emcc on the main thread.")}else if(read_){try{obj.contents=intArrayFromString(read_(obj.url),true);obj.usedBytes=obj.contents.length}catch(e){throw new FS.ErrnoError(29)}}else{throw new Error("Cannot load without read() or XMLHttpRequest.")}},createLazyFile:(parent,name,url,canRead,canWrite)=>{function LazyUint8Array(){this.lengthKnown=false;this.chunks=[]}LazyUint8Array.prototype.get=function LazyUint8Array_get(idx){if(idx>this.length-1||idx<0){return undefined}var chunkOffset=idx%this.chunkSize;var chunkNum=idx/this.chunkSize|0;return this.getter(chunkNum)[chunkOffset]};LazyUint8Array.prototype.setDataGetter=function LazyUint8Array_setDataGetter(getter){this.getter=getter};LazyUint8Array.prototype.cacheLength=function LazyUint8Array_cacheLength(){var xhr=new XMLHttpRequest;xhr.open("HEAD",url,false);xhr.send(null);if(!(xhr.status>=200&&xhr.status<300||xhr.status===304))throw new Error("Couldn't load "+url+". Status: "+xhr.status);var datalength=Number(xhr.getResponseHeader("Content-length"));var header;var hasByteServing=(header=xhr.getResponseHeader("Accept-Ranges"))&&header==="bytes";var usesGzip=(header=xhr.getResponseHeader("Content-Encoding"))&&header==="gzip";var chunkSize=1024*1024;if(!hasByteServing)chunkSize=datalength;var doXHR=(from,to)=>{if(from>to)throw new Error("invalid range ("+from+", "+to+") or no bytes requested!");if(to>datalength-1)throw new Error("only "+datalength+" bytes available! programmer error!");var xhr=new XMLHttpRequest;xhr.open("GET",url,false);if(datalength!==chunkSize)xhr.setRequestHeader("Range","bytes="+from+"-"+to);xhr.responseType="arraybuffer";if(xhr.overrideMimeType){xhr.overrideMimeType("text/plain; charset=x-user-defined")}xhr.send(null);if(!(xhr.status>=200&&xhr.status<300||xhr.status===304))throw new Error("Couldn't load "+url+". Status: "+xhr.status);if(xhr.response!==undefined){return new Uint8Array(xhr.response||[])}else{return intArrayFromString(xhr.responseText||"",true)}};var lazyArray=this;lazyArray.setDataGetter(chunkNum=>{var start=chunkNum*chunkSize;var end=(chunkNum+1)*chunkSize-1;end=Math.min(end,datalength-1);if(typeof lazyArray.chunks[chunkNum]=="undefined"){lazyArray.chunks[chunkNum]=doXHR(start,end)}if(typeof lazyArray.chunks[chunkNum]=="undefined")throw new Error("doXHR failed!");return lazyArray.chunks[chunkNum]});if(usesGzip||!datalength){chunkSize=datalength=1;datalength=this.getter(0).length;chunkSize=datalength;out("LazyFiles on gzip forces download of the whole file when length is accessed")}this._length=datalength;this._chunkSize=chunkSize;this.lengthKnown=true};if(typeof XMLHttpRequest!="undefined"){if(!ENVIRONMENT_IS_WORKER)throw"Cannot do synchronous binary XHRs outside webworkers in modern browsers. 
Use --embed-file or --preload-file in emcc";var lazyArray=new LazyUint8Array;Object.defineProperties(lazyArray,{length:{get:function(){if(!this.lengthKnown){this.cacheLength()}return this._length}},chunkSize:{get:function(){if(!this.lengthKnown){this.cacheLength()}return this._chunkSize}}});var properties={isDevice:false,contents:lazyArray}}else{var properties={isDevice:false,url:url}}var node=FS.createFile(parent,name,properties,canRead,canWrite);if(properties.contents){node.contents=properties.contents}else if(properties.url){node.contents=null;node.url=properties.url}Object.defineProperties(node,{usedBytes:{get:function(){return this.contents.length}}});var stream_ops={};var keys=Object.keys(node.stream_ops);keys.forEach(key=>{var fn=node.stream_ops[key];stream_ops[key]=function forceLoadLazyFile(){FS.forceLoadFile(node);return fn.apply(null,arguments)}});stream_ops.read=((stream,buffer,offset,length,position)=>{FS.forceLoadFile(node);var contents=stream.node.contents;if(position>=contents.length)return 0;var size=Math.min(contents.length-position,length);if(contents.slice){for(var i=0;i{var fullname=name?PATH_FS.resolve(PATH.join2(parent,name)):parent;var dep=getUniqueRunDependency("cp "+fullname);function processData(byteArray){function finish(byteArray){if(preFinish)preFinish();if(!dontCreateFile){FS.createDataFile(parent,name,byteArray,canRead,canWrite,canOwn)}if(onload)onload();removeRunDependency(dep)}if(Browser.handledByPreloadPlugin(byteArray,fullname,finish,()=>{if(onerror)onerror();removeRunDependency(dep)})){return}finish(byteArray)}addRunDependency(dep);if(typeof url=="string"){asyncLoad(url,byteArray=>processData(byteArray),onerror)}else{processData(url)}},indexedDB:()=>{return window.indexedDB||window.mozIndexedDB||window.webkitIndexedDB||window.msIndexedDB},DB_NAME:()=>{return"EM_FS_"+window.location.pathname},DB_VERSION:20,DB_STORE_NAME:"FILE_DATA",saveFilesToDB:(paths,onload,onerror)=>{onload=onload||(()=>{});onerror=onerror||(()=>{});var indexedDB=FS.indexedDB();try{var openRequest=indexedDB.open(FS.DB_NAME(),FS.DB_VERSION)}catch(e){return onerror(e)}openRequest.onupgradeneeded=(()=>{out("creating db");var db=openRequest.result;db.createObjectStore(FS.DB_STORE_NAME)});openRequest.onsuccess=(()=>{var db=openRequest.result;var transaction=db.transaction([FS.DB_STORE_NAME],"readwrite");var files=transaction.objectStore(FS.DB_STORE_NAME);var ok=0,fail=0,total=paths.length;function finish(){if(fail==0)onload();else onerror()}paths.forEach(path=>{var putRequest=files.put(FS.analyzePath(path).object.contents,path);putRequest.onsuccess=(()=>{ok++;if(ok+fail==total)finish()});putRequest.onerror=(()=>{fail++;if(ok+fail==total)finish()})});transaction.onerror=onerror});openRequest.onerror=onerror},loadFilesFromDB:(paths,onload,onerror)=>{onload=onload||(()=>{});onerror=onerror||(()=>{});var indexedDB=FS.indexedDB();try{var openRequest=indexedDB.open(FS.DB_NAME(),FS.DB_VERSION)}catch(e){return onerror(e)}openRequest.onupgradeneeded=onerror;openRequest.onsuccess=(()=>{var db=openRequest.result;try{var transaction=db.transaction([FS.DB_STORE_NAME],"readonly")}catch(e){onerror(e);return}var files=transaction.objectStore(FS.DB_STORE_NAME);var ok=0,fail=0,total=paths.length;function finish(){if(fail==0)onload();else onerror()}paths.forEach(path=>{var 
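/* saveFilesToDB/loadFilesFromDB copy file contents between the in-memory FS and an
 * IndexedDB object store ("FILE_DATA") in a per-page database named
 * "EM_FS_" + location.pathname, so selected paths can persist across page loads.
 * Both walk the given path list, count successes and failures, and call onload only
 * when every put/get has completed without error.
 */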
getRequest=files.get(path);getRequest.onsuccess=(()=>{if(FS.analyzePath(path).exists){FS.unlink(path)}FS.createDataFile(PATH.dirname(path),PATH.basename(path),getRequest.result,true,true,true);ok++;if(ok+fail==total)finish()});getRequest.onerror=(()=>{fail++;if(ok+fail==total)finish()})});transaction.onerror=onerror});openRequest.onerror=onerror}};var SYSCALLS={DEFAULT_POLLMASK:5,calculateAt:function(dirfd,path,allowEmpty){if(path[0]==="/"){return path}var dir;if(dirfd===-100){dir=FS.cwd()}else{var dirstream=FS.getStream(dirfd);if(!dirstream)throw new FS.ErrnoError(8);dir=dirstream.path}if(path.length==0){if(!allowEmpty){throw new FS.ErrnoError(44)}return dir}return PATH.join2(dir,path)},doStat:function(func,path,buf){try{var stat=func(path)}catch(e){if(e&&e.node&&PATH.normalize(path)!==PATH.normalize(FS.getPath(e.node))){return-54}throw e}HEAP32[buf>>2]=stat.dev;HEAP32[buf+4>>2]=0;HEAP32[buf+8>>2]=stat.ino;HEAP32[buf+12>>2]=stat.mode;HEAP32[buf+16>>2]=stat.nlink;HEAP32[buf+20>>2]=stat.uid;HEAP32[buf+24>>2]=stat.gid;HEAP32[buf+28>>2]=stat.rdev;HEAP32[buf+32>>2]=0;tempI64=[stat.size>>>0,(tempDouble=stat.size,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[buf+40>>2]=tempI64[0],HEAP32[buf+44>>2]=tempI64[1];HEAP32[buf+48>>2]=4096;HEAP32[buf+52>>2]=stat.blocks;HEAP32[buf+56>>2]=stat.atime.getTime()/1e3|0;HEAP32[buf+60>>2]=0;HEAP32[buf+64>>2]=stat.mtime.getTime()/1e3|0;HEAP32[buf+68>>2]=0;HEAP32[buf+72>>2]=stat.ctime.getTime()/1e3|0;HEAP32[buf+76>>2]=0;tempI64=[stat.ino>>>0,(tempDouble=stat.ino,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[buf+80>>2]=tempI64[0],HEAP32[buf+84>>2]=tempI64[1];return 0},doMsync:function(addr,stream,len,flags,offset){var buffer=HEAPU8.slice(addr,addr+len);FS.msync(stream,buffer,offset,len,flags)},doMkdir:function(path,mode){path=PATH.normalize(path);if(path[path.length-1]==="/")path=path.substr(0,path.length-1);FS.mkdir(path,mode,0);return 0},doMknod:function(path,mode,dev){switch(mode&61440){case 32768:case 8192:case 24576:case 4096:case 49152:break;default:return-28}FS.mknod(path,mode,dev);return 0},doReadlink:function(path,buf,bufsize){if(bufsize<=0)return-28;var ret=FS.readlink(path);var len=Math.min(bufsize,lengthBytesUTF8(ret));var endChar=HEAP8[buf+len];stringToUTF8(ret,buf,bufsize+1);HEAP8[buf+len]=endChar;return len},doAccess:function(path,amode){if(amode&~7){return-28}var lookup=FS.lookupPath(path,{follow:true});var node=lookup.node;if(!node){return-44}var perms="";if(amode&4)perms+="r";if(amode&2)perms+="w";if(amode&1)perms+="x";if(perms&&FS.nodePermissions(node,perms)){return-2}return 0},doReadv:function(stream,iov,iovcnt,offset){var ret=0;for(var i=0;i>2];var len=HEAP32[iov+(i*8+4)>>2];var curr=FS.read(stream,HEAP8,ptr,len,offset);if(curr<0)return-1;ret+=curr;if(curr>2];var len=HEAP32[iov+(i*8+4)>>2];var curr=FS.write(stream,HEAP8,ptr,len,offset);if(curr<0)return-1;ret+=curr}return ret},varargs:undefined,get:function(){SYSCALLS.varargs+=4;var ret=HEAP32[SYSCALLS.varargs-4>>2];return ret},getStr:function(ptr){var ret=UTF8ToString(ptr);return ret},getStreamFromFD:function(fd){var stream=FS.getStream(fd);if(!stream)throw new FS.ErrnoError(8);return stream},get64:function(low,high){return low}};function ___syscall__newselect(nfds,readfds,writefds,exceptfds,timeout){try{var total=0;var 
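/* The SYSCALLS helper bridges wasm linear memory and the JS FS layer: getStr and
 * getStreamFromFD decode pointers and file descriptors, calculateAt resolves
 * dirfd-relative paths, and doStat serialises a stat result field by field into HEAP32
 * at fixed byte offsets (dev, ino, mode, ..., atime/mtime/ctime in seconds). The
 * individual ___syscall_* functions below build on these and return -errno whenever an
 * FS.ErrnoError is thrown.
 */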
srcReadLow=readfds?HEAP32[readfds>>2]:0,srcReadHigh=readfds?HEAP32[readfds+4>>2]:0;var srcWriteLow=writefds?HEAP32[writefds>>2]:0,srcWriteHigh=writefds?HEAP32[writefds+4>>2]:0;var srcExceptLow=exceptfds?HEAP32[exceptfds>>2]:0,srcExceptHigh=exceptfds?HEAP32[exceptfds+4>>2]:0;var dstReadLow=0,dstReadHigh=0;var dstWriteLow=0,dstWriteHigh=0;var dstExceptLow=0,dstExceptHigh=0;var allLow=(readfds?HEAP32[readfds>>2]:0)|(writefds?HEAP32[writefds>>2]:0)|(exceptfds?HEAP32[exceptfds>>2]:0);var allHigh=(readfds?HEAP32[readfds+4>>2]:0)|(writefds?HEAP32[writefds+4>>2]:0)|(exceptfds?HEAP32[exceptfds+4>>2]:0);var check=function(fd,low,high,val){return fd<32?low&val:high&val};for(var fd=0;fd>2]=dstReadLow;HEAP32[readfds+4>>2]=dstReadHigh}if(writefds){HEAP32[writefds>>2]=dstWriteLow;HEAP32[writefds+4>>2]=dstWriteHigh}if(exceptfds){HEAP32[exceptfds>>2]=dstExceptLow;HEAP32[exceptfds+4>>2]=dstExceptHigh}return total}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}var SOCKFS={mount:function(mount){Module["websocket"]=Module["websocket"]&&"object"===typeof Module["websocket"]?Module["websocket"]:{};Module["websocket"]._callbacks={};Module["websocket"]["on"]=function(event,callback){if("function"===typeof callback){this._callbacks[event]=callback}return this};Module["websocket"].emit=function(event,param){if("function"===typeof this._callbacks[event]){this._callbacks[event].call(this,param)}};return FS.createNode(null,"/",16384|511,0)},createSocket:function(family,type,protocol){type&=~526336;var streaming=type==1;if(streaming&&protocol&&protocol!=6){throw new FS.ErrnoError(66)}var sock={family:family,type:type,protocol:protocol,server:null,error:null,peers:{},pending:[],recv_queue:[],sock_ops:SOCKFS.websocket_sock_ops};var name=SOCKFS.nextname();var node=FS.createNode(SOCKFS.root,name,49152,0);node.sock=sock;var stream=FS.createStream({path:name,node:node,flags:2,seekable:false,stream_ops:SOCKFS.stream_ops});sock.stream=stream;return sock},getSocket:function(fd){var stream=FS.getStream(fd);if(!stream||!FS.isSocket(stream.node.mode)){return null}return stream.node.sock},stream_ops:{poll:function(stream){var sock=stream.node.sock;return sock.sock_ops.poll(sock)},ioctl:function(stream,request,varargs){var sock=stream.node.sock;return sock.sock_ops.ioctl(sock,request,varargs)},read:function(stream,buffer,offset,length,position){var sock=stream.node.sock;var msg=sock.sock_ops.recvmsg(sock,length);if(!msg){return 0}buffer.set(msg.buffer,offset);return msg.buffer.length},write:function(stream,buffer,offset,length,position){var sock=stream.node.sock;return sock.sock_ops.sendmsg(sock,buffer,offset,length)},close:function(stream){var sock=stream.node.sock;sock.sock_ops.close(sock)}},nextname:function(){if(!SOCKFS.nextname.current){SOCKFS.nextname.current=0}return"socket["+SOCKFS.nextname.current+++"]"},websocket_sock_ops:{createPeer:function(sock,addr,port){var ws;if(typeof addr=="object"){ws=addr;addr=null;port=null}if(ws){if(ws._socket){addr=ws._socket.remoteAddress;port=ws._socket.remotePort}else{var result=/ws[s]?:\/\/([^:]+):(\d+)/.exec(ws.url);if(!result){throw new Error("WebSocket URL must be in the format ws(s)://address:port")}addr=result[1];port=parseInt(result[2],10)}}else{try{var runtimeConfig=Module["websocket"]&&"object"===typeof Module["websocket"];var url="ws:#".replace("#","//");if(runtimeConfig){if("string"===typeof Module["websocket"]["url"]){url=Module["websocket"]["url"]}}if(url==="ws://"||url==="wss://"){var 
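/* SOCKFS emulates BSD sockets on top of WebSockets: createSocket allocates a pseudo
 * file node plus stream so the socket gets an ordinary fd, peers are keyed by
 * "addr:port", and Module["websocket"] (url/subprotocol settings plus the on/emit hooks
 * registered in mount) lets the page customise the transport and observe
 * open/message/close/error events.
 */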
parts=addr.split("/");url=url+parts[0]+":"+port+"/"+parts.slice(1).join("/")}var subProtocols="binary";if(runtimeConfig){if("string"===typeof Module["websocket"]["subprotocol"]){subProtocols=Module["websocket"]["subprotocol"]}}var opts=undefined;if(subProtocols!=="null"){subProtocols=subProtocols.replace(/^ +| +$/g,"").split(/ *, */);opts=ENVIRONMENT_IS_NODE?{"protocol":subProtocols.toString()}:subProtocols}if(runtimeConfig&&null===Module["websocket"]["subprotocol"]){subProtocols="null";opts=undefined}var WebSocketConstructor;if(ENVIRONMENT_IS_NODE){WebSocketConstructor=require("ws")}else{WebSocketConstructor=WebSocket}ws=new WebSocketConstructor(url,opts);ws.binaryType="arraybuffer"}catch(e){throw new FS.ErrnoError(23)}}var peer={addr:addr,port:port,socket:ws,dgram_send_queue:[]};SOCKFS.websocket_sock_ops.addPeer(sock,peer);SOCKFS.websocket_sock_ops.handlePeerEvents(sock,peer);if(sock.type===2&&typeof sock.sport!="undefined"){peer.dgram_send_queue.push(new Uint8Array([255,255,255,255,"p".charCodeAt(0),"o".charCodeAt(0),"r".charCodeAt(0),"t".charCodeAt(0),(sock.sport&65280)>>8,sock.sport&255]))}return peer},getPeer:function(sock,addr,port){return sock.peers[addr+":"+port]},addPeer:function(sock,peer){sock.peers[peer.addr+":"+peer.port]=peer},removePeer:function(sock,peer){delete sock.peers[peer.addr+":"+peer.port]},handlePeerEvents:function(sock,peer){var first=true;var handleOpen=function(){Module["websocket"].emit("open",sock.stream.fd);try{var queued=peer.dgram_send_queue.shift();while(queued){peer.socket.send(queued);queued=peer.dgram_send_queue.shift()}}catch(e){peer.socket.close()}};function handleMessage(data){if(typeof data=="string"){var encoder=new TextEncoder;data=encoder.encode(data)}else{assert(data.byteLength!==undefined);if(data.byteLength==0){return}else{data=new Uint8Array(data)}}var wasfirst=first;first=false;if(wasfirst&&data.length===10&&data[0]===255&&data[1]===255&&data[2]===255&&data[3]===255&&data[4]==="p".charCodeAt(0)&&data[5]==="o".charCodeAt(0)&&data[6]==="r".charCodeAt(0)&&data[7]==="t".charCodeAt(0)){var newport=data[8]<<8|data[9];SOCKFS.websocket_sock_ops.removePeer(sock,peer);peer.port=newport;SOCKFS.websocket_sock_ops.addPeer(sock,peer);return}sock.recv_queue.push({addr:peer.addr,port:peer.port,data:data});Module["websocket"].emit("message",sock.stream.fd)}if(ENVIRONMENT_IS_NODE){peer.socket.on("open",handleOpen);peer.socket.on("message",function(data,flags){if(!flags.binary){return}handleMessage(new Uint8Array(data).buffer)});peer.socket.on("close",function(){Module["websocket"].emit("close",sock.stream.fd)});peer.socket.on("error",function(error){sock.error=14;Module["websocket"].emit("error",[sock.stream.fd,sock.error,"ECONNREFUSED: Connection refused"])})}else{peer.socket.onopen=handleOpen;peer.socket.onclose=function(){Module["websocket"].emit("close",sock.stream.fd)};peer.socket.onmessage=function peer_socket_onmessage(event){handleMessage(event.data)};peer.socket.onerror=function(error){sock.error=14;Module["websocket"].emit("error",[sock.stream.fd,sock.error,"ECONNREFUSED: Connection refused"])}}},poll:function(sock){if(sock.type===1&&sock.server){return sock.pending.length?64|1:0}var mask=0;var 
dest=sock.type===1?SOCKFS.websocket_sock_ops.getPeer(sock,sock.daddr,sock.dport):null;if(sock.recv_queue.length||!dest||dest&&dest.socket.readyState===dest.socket.CLOSING||dest&&dest.socket.readyState===dest.socket.CLOSED){mask|=64|1}if(!dest||dest&&dest.socket.readyState===dest.socket.OPEN){mask|=4}if(dest&&dest.socket.readyState===dest.socket.CLOSING||dest&&dest.socket.readyState===dest.socket.CLOSED){mask|=16}return mask},ioctl:function(sock,request,arg){switch(request){case 21531:var bytes=0;if(sock.recv_queue.length){bytes=sock.recv_queue[0].data.length}HEAP32[arg>>2]=bytes;return 0;default:return 28}},close:function(sock){if(sock.server){try{sock.server.close()}catch(e){}sock.server=null}var peers=Object.keys(sock.peers);for(var i=0;i>2]=value;return value}function inetPton4(str){var b=str.split(".");for(var i=0;i<4;i++){var tmp=Number(b[i]);if(isNaN(tmp))return null;b[i]=tmp}return(b[0]|b[1]<<8|b[2]<<16|b[3]<<24)>>>0}function jstoi_q(str){return parseInt(str)}function inetPton6(str){var words;var w,offset,z;var valid6regx=/^((?=.*::)(?!.*::.+::)(::)?([\dA-F]{1,4}:(:|\b)|){5}|([\dA-F]{1,4}:){6})((([\dA-F]{1,4}((?!\3)::|:\b|$))|(?!\2\3)){2}|(((2[0-4]|1\d|[1-9])?\d|25[0-5])\.?\b){4})$/i;var parts=[];if(!valid6regx.test(str)){return null}if(str==="::"){return[0,0,0,0,0,0,0,0]}if(str.startsWith("::")){str=str.replace("::","Z:")}else{str=str.replace("::",":Z:")}if(str.indexOf(".")>0){str=str.replace(new RegExp("[.]","g"),":");words=str.split(":");words[words.length-4]=jstoi_q(words[words.length-4])+jstoi_q(words[words.length-3])*256;words[words.length-3]=jstoi_q(words[words.length-2])+jstoi_q(words[words.length-1])*256;words=words.slice(0,words.length-2)}else{words=str.split(":")}offset=0;z=0;for(w=0;w>2]=16}HEAP16[sa>>1]=family;HEAP32[sa+4>>2]=addr;HEAP16[sa+2>>1]=_htons(port);break;case 10:addr=inetPton6(addr);zeroMemory(sa,28);if(addrlen){HEAP32[addrlen>>2]=28}HEAP32[sa>>2]=family;HEAP32[sa+8>>2]=addr[0];HEAP32[sa+12>>2]=addr[1];HEAP32[sa+16>>2]=addr[2];HEAP32[sa+20>>2]=addr[3];HEAP16[sa+2>>1]=_htons(port);break;default:return 5}return 0}var DNS={address_map:{id:1,addrs:{},names:{}},lookup_name:function(name){var res=inetPton4(name);if(res!==null){return name}res=inetPton6(name);if(res!==null){return name}var addr;if(DNS.address_map.addrs[name]){addr=DNS.address_map.addrs[name]}else{var id=DNS.address_map.id++;assert(id<65535,"exceeded max address mappings of 65535");addr="172.29."+(id&255)+"."+(id&65280);DNS.address_map.names[addr]=name;DNS.address_map.addrs[name]=addr}return addr},lookup_addr:function(addr){if(DNS.address_map.names[addr]){return DNS.address_map.names[addr]}return null}};function ___syscall_accept4(fd,addr,addrlen,flags){try{var sock=getSocketFromFD(fd);var newsock=sock.sock_ops.accept(sock);if(addr){var errno=writeSockaddr(addr,newsock.family,DNS.lookup_name(newsock.daddr),newsock.dport,addrlen)}return newsock.stream.fd}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function inetNtop4(addr){return(addr&255)+"."+(addr>>8&255)+"."+(addr>>16&255)+"."+(addr>>24&255)}function inetNtop6(ints){var str="";var word=0;var longest=0;var lastzero=0;var zstart=0;var len=0;var i=0;var parts=[ints[0]&65535,ints[0]>>16,ints[1]&65535,ints[1]>>16,ints[2]&65535,ints[2]>>16,ints[3]&65535,ints[3]>>16];var hasipv4=true;var v4part="";for(i=0;i<5;i++){if(parts[i]!==0){hasipv4=false;break}}if(hasipv4){v4part=inetNtop4(parts[6]|parts[7]<<16);if(parts[5]===-1){str="::ffff:";str+=v4part;return 
str}if(parts[5]===0){str="::";if(v4part==="0.0.0.0")v4part="";if(v4part==="0.0.0.1")v4part="1";str+=v4part;return str}}for(word=0;word<8;word++){if(parts[word]===0){if(word-lastzero>1){len=0}lastzero=word;len++}if(len>longest){longest=len;zstart=word-longest+1}}for(word=0;word<8;word++){if(longest>1){if(parts[word]===0&&word>=zstart&&word>1];var port=_ntohs(HEAPU16[sa+2>>1]);var addr;switch(family){case 2:if(salen!==16){return{errno:28}}addr=HEAP32[sa+4>>2];addr=inetNtop4(addr);break;case 10:if(salen!==28){return{errno:28}}addr=[HEAP32[sa+8>>2],HEAP32[sa+12>>2],HEAP32[sa+16>>2],HEAP32[sa+20>>2]];addr=inetNtop6(addr);break;default:return{errno:5}}return{family:family,addr:addr,port:port}}function getSocketAddress(addrp,addrlen,allowNull){if(allowNull&&addrp===0)return null;var info=readSockaddr(addrp,addrlen);if(info.errno)throw new FS.ErrnoError(info.errno);info.addr=DNS.lookup_addr(info.addr)||info.addr;return info}function ___syscall_bind(fd,addr,addrlen){try{var sock=getSocketFromFD(fd);var info=getSocketAddress(addr,addrlen);sock.sock_ops.bind(sock,info.addr,info.port);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_chmod(path,mode){try{path=SYSCALLS.getStr(path);FS.chmod(path,mode);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_connect(fd,addr,addrlen){try{var sock=getSocketFromFD(fd);var info=getSocketAddress(addr,addrlen);sock.sock_ops.connect(sock,info.addr,info.port);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_dup3(fd,suggestFD,flags){try{var old=SYSCALLS.getStreamFromFD(fd);if(old.fd===suggestFD)return-28;var suggest=FS.getStream(suggestFD);if(suggest)FS.close(suggest);return FS.open(old.path,old.flags,0,suggestFD,suggestFD).fd}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_faccessat(dirfd,path,amode,flags){try{path=SYSCALLS.getStr(path);path=SYSCALLS.calculateAt(dirfd,path);return SYSCALLS.doAccess(path,amode)}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_fcntl64(fd,cmd,varargs){SYSCALLS.varargs=varargs;try{var stream=SYSCALLS.getStreamFromFD(fd);switch(cmd){case 0:{var arg=SYSCALLS.get();if(arg<0){return-28}var newStream;newStream=FS.open(stream.path,stream.flags,0,arg);return newStream.fd}case 1:case 2:return 0;case 3:return stream.flags;case 4:{var arg=SYSCALLS.get();stream.flags|=arg;return 0}case 5:{var arg=SYSCALLS.get();var offset=0;HEAP16[arg+offset>>1]=2;return 0}case 6:case 7:return 0;case 16:case 8:return-28;case 9:setErrNo(28);return-1;default:{return-28}}}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_fstat64(fd,buf){try{var stream=SYSCALLS.getStreamFromFD(fd);return SYSCALLS.doStat(FS.stat,stream.path,buf)}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_ftruncate64(fd,low,high){try{var length=SYSCALLS.get64(low,high);FS.ftruncate(fd,length);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_getcwd(buf,size){try{if(size===0)return-28;var cwd=FS.cwd();var 
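/* Each ___syscall_* wrapper implements one Linux syscall for the wasm side: socket
 * calls (accept4/bind/connect/...) go through SOCKFS, path and fd calls
 * (chmod/faccessat/fcntl64/fstat64/ftruncate64/getcwd/...) go through FS, sockaddr
 * structures are packed and unpacked with writeSockaddr/readSockaddr, and the DNS table
 * maps hostnames to synthetic 172.29.x.x addresses.
 */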
cwdLengthInBytes=lengthBytesUTF8(cwd);if(size>>0,(tempDouble=id,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[dirp+pos>>2]=tempI64[0],HEAP32[dirp+pos+4>>2]=tempI64[1];tempI64=[(idx+1)*struct_size>>>0,(tempDouble=(idx+1)*struct_size,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[dirp+pos+8>>2]=tempI64[0],HEAP32[dirp+pos+12>>2]=tempI64[1];HEAP16[dirp+pos+16>>1]=280;HEAP8[dirp+pos+18>>0]=type;stringToUTF8(name,dirp+pos+19,256);pos+=struct_size;idx+=1}FS.llseek(stream,idx*struct_size,0);return pos}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_getpeername(fd,addr,addrlen){try{var sock=getSocketFromFD(fd);if(!sock.daddr){return-53}var errno=writeSockaddr(addr,sock.family,DNS.lookup_name(sock.daddr),sock.dport,addrlen);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_getsockname(fd,addr,addrlen){try{err("__syscall_getsockname "+fd);var sock=getSocketFromFD(fd);var errno=writeSockaddr(addr,sock.family,DNS.lookup_name(sock.saddr||"0.0.0.0"),sock.sport,addrlen);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_getsockopt(fd,level,optname,optval,optlen){try{var sock=getSocketFromFD(fd);if(level===1){if(optname===4){HEAP32[optval>>2]=sock.error;HEAP32[optlen>>2]=4;sock.error=null;return 0}}return-50}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_ioctl(fd,op,varargs){SYSCALLS.varargs=varargs;try{var stream=SYSCALLS.getStreamFromFD(fd);switch(op){case 21509:case 21505:{if(!stream.tty)return-59;return 0}case 21510:case 21511:case 21512:case 21506:case 21507:case 21508:{if(!stream.tty)return-59;return 0}case 21519:{if(!stream.tty)return-59;var argp=SYSCALLS.get();HEAP32[argp>>2]=0;return 0}case 21520:{if(!stream.tty)return-59;return-28}case 21531:{var argp=SYSCALLS.get();return FS.ioctl(stream,op,argp)}case 21523:{if(!stream.tty)return-59;return 0}case 21524:{if(!stream.tty)return-59;return 0}default:abort("bad ioctl syscall "+op)}}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_listen(fd,backlog){try{var sock=getSocketFromFD(fd);sock.sock_ops.listen(sock,backlog);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_lstat64(path,buf){try{path=SYSCALLS.getStr(path);return SYSCALLS.doStat(FS.lstat,path,buf)}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_mkdir(path,mode){try{path=SYSCALLS.getStr(path);return SYSCALLS.doMkdir(path,mode)}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_newfstatat(dirfd,path,buf,flags){try{path=SYSCALLS.getStr(path);var nofollow=flags&256;var allowEmpty=flags&4096;flags=flags&~4352;path=SYSCALLS.calculateAt(dirfd,path,allowEmpty);return SYSCALLS.doStat(nofollow?FS.lstat:FS.stat,path,buf)}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function 
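/* ___syscall_openat below resolves dirfd-relative paths via SYSCALLS.calculateAt and
 * returns the new stream's fd. It is followed by PIPEFS, an in-memory pipe
 * implementation that chains fixed-size 8 KiB buckets and hands back a
 * {readable_fd, writable_fd} pair for ___syscall_pipe.
 */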
___syscall_openat(dirfd,path,flags,varargs){SYSCALLS.varargs=varargs;try{path=SYSCALLS.getStr(path);path=SYSCALLS.calculateAt(dirfd,path);var mode=varargs?SYSCALLS.get():0;return FS.open(path,flags,mode).fd}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}var PIPEFS={BUCKET_BUFFER_SIZE:8192,mount:function(mount){return FS.createNode(null,"/",16384|511,0)},createPipe:function(){var pipe={buckets:[],refcnt:2};pipe.buckets.push({buffer:new Uint8Array(PIPEFS.BUCKET_BUFFER_SIZE),offset:0,roffset:0});var rName=PIPEFS.nextname();var wName=PIPEFS.nextname();var rNode=FS.createNode(PIPEFS.root,rName,4096,0);var wNode=FS.createNode(PIPEFS.root,wName,4096,0);rNode.pipe=pipe;wNode.pipe=pipe;var readableStream=FS.createStream({path:rName,node:rNode,flags:0,seekable:false,stream_ops:PIPEFS.stream_ops});rNode.stream=readableStream;var writableStream=FS.createStream({path:wName,node:wNode,flags:1,seekable:false,stream_ops:PIPEFS.stream_ops});wNode.stream=writableStream;return{readable_fd:readableStream.fd,writable_fd:writableStream.fd}},stream_ops:{poll:function(stream){var pipe=stream.node.pipe;if((stream.flags&2097155)===1){return 256|4}else{if(pipe.buckets.length>0){for(var i=0;i0){return 64|1}}}}return 0},ioctl:function(stream,request,varargs){return 28},fsync:function(stream){return 28},read:function(stream,buffer,offset,length,position){var pipe=stream.node.pipe;var currentLength=0;for(var i=0;i=dataLen){currBucket.buffer.set(data,currBucket.offset);currBucket.offset+=dataLen;return dataLen}else if(freeBytesInCurrBuffer>0){currBucket.buffer.set(data.subarray(0,freeBytesInCurrBuffer),currBucket.offset);currBucket.offset+=freeBytesInCurrBuffer;data=data.subarray(freeBytesInCurrBuffer,data.byteLength)}var numBuckets=data.byteLength/PIPEFS.BUCKET_BUFFER_SIZE|0;var remElements=data.byteLength%PIPEFS.BUCKET_BUFFER_SIZE;for(var i=0;i0){var newBucket={buffer:new Uint8Array(PIPEFS.BUCKET_BUFFER_SIZE),offset:data.byteLength,roffset:0};pipe.buckets.push(newBucket);newBucket.buffer.set(data)}return dataLen},close:function(stream){var pipe=stream.node.pipe;pipe.refcnt--;if(pipe.refcnt===0){pipe.buckets=null}}},nextname:function(){if(!PIPEFS.nextname.current){PIPEFS.nextname.current=0}return"pipe["+PIPEFS.nextname.current+++"]"}};function ___syscall_pipe(fdPtr){try{if(fdPtr==0){throw new FS.ErrnoError(21)}var res=PIPEFS.createPipe();HEAP32[fdPtr>>2]=res.readable_fd;HEAP32[fdPtr+4>>2]=res.writable_fd;return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_poll(fds,nfds,timeout){try{var nonzero=0;for(var i=0;i>2];var events=HEAP16[pollfd+4>>1];var mask=32;var stream=FS.getStream(fd);if(stream){mask=SYSCALLS.DEFAULT_POLLMASK;if(stream.stream_ops.poll){mask=stream.stream_ops.poll(stream)}}mask&=events|8|16;if(mask)nonzero++;HEAP16[pollfd+6>>1]=mask}return nonzero}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_readlinkat(dirfd,path,buf,bufsize){try{path=SYSCALLS.getStr(path);path=SYSCALLS.calculateAt(dirfd,path);return SYSCALLS.doReadlink(path,buf,bufsize)}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_recvfrom(fd,buf,len,flags,addr,addrlen){try{var sock=getSocketFromFD(fd);var msg=sock.sock_ops.recvmsg(sock,len);if(!msg)return 0;if(addr){var errno=writeSockaddr(addr,sock.family,DNS.lookup_name(msg.addr),msg.port,addrlen)}HEAPU8.set(msg.buffer,buf);return 
msg.buffer.byteLength}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_recvmsg(fd,message,flags){try{var sock=getSocketFromFD(fd);var iov=HEAP32[message+8>>2];var num=HEAP32[message+12>>2];var total=0;for(var i=0;i>2]}var msg=sock.sock_ops.recvmsg(sock,total);if(!msg)return 0;var name=HEAP32[message>>2];if(name){var errno=writeSockaddr(name,sock.family,DNS.lookup_name(msg.addr),msg.port)}var bytesRead=0;var bytesRemaining=msg.buffer.byteLength;for(var i=0;bytesRemaining>0&&i>2];var iovlen=HEAP32[iov+(8*i+4)>>2];if(!iovlen){continue}var length=Math.min(iovlen,bytesRemaining);var buf=msg.buffer.subarray(bytesRead,bytesRead+length);HEAPU8.set(buf,iovbase+bytesRead);bytesRead+=length;bytesRemaining-=length}return bytesRead}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_renameat(olddirfd,oldpath,newdirfd,newpath){try{oldpath=SYSCALLS.getStr(oldpath);newpath=SYSCALLS.getStr(newpath);oldpath=SYSCALLS.calculateAt(olddirfd,oldpath);newpath=SYSCALLS.calculateAt(newdirfd,newpath);FS.rename(oldpath,newpath);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_rmdir(path){try{path=SYSCALLS.getStr(path);FS.rmdir(path);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_sendmsg(fd,message,flags){try{var sock=getSocketFromFD(fd);var iov=HEAP32[message+8>>2];var num=HEAP32[message+12>>2];var addr,port;var name=HEAP32[message>>2];var namelen=HEAP32[message+4>>2];if(name){var info=readSockaddr(name,namelen);if(info.errno)return-info.errno;port=info.port;addr=DNS.lookup_addr(info.addr)||info.addr}var total=0;for(var i=0;i>2]}var view=new Uint8Array(total);var offset=0;for(var i=0;i>2];var iovlen=HEAP32[iov+(8*i+4)>>2];for(var j=0;j>0]}}return sock.sock_ops.sendmsg(sock,view,0,total,addr,port)}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_sendto(fd,message,length,flags,addr,addr_len){try{var sock=getSocketFromFD(fd);var dest=getSocketAddress(addr,addr_len,true);if(!dest){return FS.write(sock.stream,HEAP8,message,length)}else{return sock.sock_ops.sendmsg(sock,HEAP8,message,length,dest.addr,dest.port)}}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_socket(domain,type,protocol){try{var sock=SOCKFS.createSocket(domain,type,protocol);return sock.stream.fd}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_stat64(path,buf){try{path=SYSCALLS.getStr(path);return SYSCALLS.doStat(FS.stat,path,buf)}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_statfs64(path,size,buf){try{path=SYSCALLS.getStr(path);HEAP32[buf+4>>2]=4096;HEAP32[buf+40>>2]=4096;HEAP32[buf+8>>2]=1e6;HEAP32[buf+12>>2]=5e5;HEAP32[buf+16>>2]=5e5;HEAP32[buf+20>>2]=FS.nextInode;HEAP32[buf+24>>2]=1e6;HEAP32[buf+28>>2]=42;HEAP32[buf+44>>2]=2;HEAP32[buf+36>>2]=255;return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_truncate64(path,low,high){try{path=SYSCALLS.getStr(path);var length=SYSCALLS.get64(low,high);FS.truncate(path,length);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function 
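/* The time helpers further below (__gmtime_js, __localtime_js, __mktime_js, __tzset_js)
 * implement the C time functions in JS: they read and write struct tm fields at 4-byte
 * offsets in HEAP32 and infer standard/daylight offsets by comparing
 * getTimezoneOffset() for January 1 and July 1 of the relevant year.
 */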
___syscall_unlinkat(dirfd,path,flags){try{path=SYSCALLS.getStr(path);path=SYSCALLS.calculateAt(dirfd,path);if(flags===0){FS.unlink(path)}else if(flags===512){FS.rmdir(path)}else{abort("Invalid flags passed to unlinkat")}return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function ___syscall_utimensat(dirfd,path,times,flags){try{path=SYSCALLS.getStr(path);path=SYSCALLS.calculateAt(dirfd,path,true);if(!times){var atime=Date.now();var mtime=atime}else{var seconds=HEAP32[times>>2];var nanoseconds=HEAP32[times+4>>2];atime=seconds*1e3+nanoseconds/(1e3*1e3);times+=8;seconds=HEAP32[times>>2];nanoseconds=HEAP32[times+4>>2];mtime=seconds*1e3+nanoseconds/(1e3*1e3)}FS.utime(path,atime,mtime);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}var dlopen_main_init=0;function __dlopen_js(handle){var ret=!dlopen_main_init;dlopen_main_init=1;return ret}function __dlsym_js(handle,symbol){return 0}function __emscripten_date_now(){return Date.now()}var nowIsMonotonic=true;function __emscripten_get_now_is_monotonic(){return nowIsMonotonic}function __emscripten_throw_longjmp(){throw Infinity}function __gmtime_js(time,tmPtr){var date=new Date(HEAP32[time>>2]*1e3);HEAP32[tmPtr>>2]=date.getUTCSeconds();HEAP32[tmPtr+4>>2]=date.getUTCMinutes();HEAP32[tmPtr+8>>2]=date.getUTCHours();HEAP32[tmPtr+12>>2]=date.getUTCDate();HEAP32[tmPtr+16>>2]=date.getUTCMonth();HEAP32[tmPtr+20>>2]=date.getUTCFullYear()-1900;HEAP32[tmPtr+24>>2]=date.getUTCDay();var start=Date.UTC(date.getUTCFullYear(),0,1,0,0,0,0);var yday=(date.getTime()-start)/(1e3*60*60*24)|0;HEAP32[tmPtr+28>>2]=yday}function __localtime_js(time,tmPtr){var date=new Date(HEAP32[time>>2]*1e3);HEAP32[tmPtr>>2]=date.getSeconds();HEAP32[tmPtr+4>>2]=date.getMinutes();HEAP32[tmPtr+8>>2]=date.getHours();HEAP32[tmPtr+12>>2]=date.getDate();HEAP32[tmPtr+16>>2]=date.getMonth();HEAP32[tmPtr+20>>2]=date.getFullYear()-1900;HEAP32[tmPtr+24>>2]=date.getDay();var start=new Date(date.getFullYear(),0,1);var yday=(date.getTime()-start.getTime())/(1e3*60*60*24)|0;HEAP32[tmPtr+28>>2]=yday;HEAP32[tmPtr+36>>2]=-(date.getTimezoneOffset()*60);var summerOffset=new Date(date.getFullYear(),6,1).getTimezoneOffset();var winterOffset=start.getTimezoneOffset();var dst=(summerOffset!=winterOffset&&date.getTimezoneOffset()==Math.min(winterOffset,summerOffset))|0;HEAP32[tmPtr+32>>2]=dst}function __mktime_js(tmPtr){var date=new Date(HEAP32[tmPtr+20>>2]+1900,HEAP32[tmPtr+16>>2],HEAP32[tmPtr+12>>2],HEAP32[tmPtr+8>>2],HEAP32[tmPtr+4>>2],HEAP32[tmPtr>>2],0);var dst=HEAP32[tmPtr+32>>2];var guessedOffset=date.getTimezoneOffset();var start=new Date(date.getFullYear(),0,1);var summerOffset=new Date(date.getFullYear(),6,1).getTimezoneOffset();var winterOffset=start.getTimezoneOffset();var dstOffset=Math.min(winterOffset,summerOffset);if(dst<0){HEAP32[tmPtr+32>>2]=Number(summerOffset!=winterOffset&&dstOffset==guessedOffset)}else if(dst>0!=(dstOffset==guessedOffset)){var nonDstOffset=Math.max(winterOffset,summerOffset);var trueOffset=dst>0?dstOffset:nonDstOffset;date.setTime(date.getTime()+(trueOffset-guessedOffset)*6e4)}HEAP32[tmPtr+24>>2]=date.getDay();var yday=(date.getTime()-start.getTime())/(1e3*60*60*24)|0;HEAP32[tmPtr+28>>2]=yday;HEAP32[tmPtr>>2]=date.getSeconds();HEAP32[tmPtr+4>>2]=date.getMinutes();HEAP32[tmPtr+8>>2]=date.getHours();HEAP32[tmPtr+12>>2]=date.getDate();HEAP32[tmPtr+16>>2]=date.getMonth();return date.getTime()/1e3|0}function __mmap_js(addr,len,prot,flags,fd,off,allocated,builtin){try{var 
info=FS.getStream(fd);if(!info)return-8;var res=FS.mmap(info,addr,len,off,prot,flags);var ptr=res.ptr;HEAP32[allocated>>2]=res.allocated;return ptr}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function __munmap_js(addr,len,prot,flags,fd,offset){try{var stream=FS.getStream(fd);if(stream){if(prot&2){SYSCALLS.doMsync(addr,stream,len,flags,offset)}FS.munmap(stream)}}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return-e.errno}}function _tzset_impl(timezone,daylight,tzname){var currentYear=(new Date).getFullYear();var winter=new Date(currentYear,0,1);var summer=new Date(currentYear,6,1);var winterOffset=winter.getTimezoneOffset();var summerOffset=summer.getTimezoneOffset();var stdTimezoneOffset=Math.max(winterOffset,summerOffset);HEAP32[timezone>>2]=stdTimezoneOffset*60;HEAP32[daylight>>2]=Number(winterOffset!=summerOffset);function extractZone(date){var match=date.toTimeString().match(/\(([A-Za-z ]+)\)$/);return match?match[1]:"GMT"}var winterName=extractZone(winter);var summerName=extractZone(summer);var winterNamePtr=allocateUTF8(winterName);var summerNamePtr=allocateUTF8(summerName);if(summerOffset>2]=winterNamePtr;HEAP32[tzname+4>>2]=summerNamePtr}else{HEAP32[tzname>>2]=summerNamePtr;HEAP32[tzname+4>>2]=winterNamePtr}}function __tzset_js(timezone,daylight,tzname){if(__tzset_js.called)return;__tzset_js.called=true;_tzset_impl(timezone,daylight,tzname)}function _abort(){abort("")}var readAsmConstArgsArray=[];function readAsmConstArgs(sigPtr,buf){readAsmConstArgsArray.length=0;var ch;buf>>=2;while(ch=HEAPU8[sigPtr++]){var readAsmConstArgsDouble=ch<105;if(readAsmConstArgsDouble&&buf&1)buf++;readAsmConstArgsArray.push(readAsmConstArgsDouble?HEAPF64[buf++>>1]:HEAP32[buf]);++buf}return readAsmConstArgsArray}function mainThreadEM_ASM(code,sigPtr,argbuf,sync){var args=readAsmConstArgs(sigPtr,argbuf);return ASM_CONSTS[code].apply(null,args)}function _emscripten_asm_const_int_sync_on_main_thread(code,sigPtr,argbuf){return mainThreadEM_ASM(code,sigPtr,argbuf,1)}function _emscripten_set_main_loop_timing(mode,value){Browser.mainLoop.timingMode=mode;Browser.mainLoop.timingValue=value;if(!Browser.mainLoop.func){return 1}if(!Browser.mainLoop.running){Browser.mainLoop.running=true}if(mode==0){Browser.mainLoop.scheduler=function Browser_mainLoop_scheduler_setTimeout(){var timeUntilNextTick=Math.max(0,Browser.mainLoop.tickStartTime+value-_emscripten_get_now())|0;setTimeout(Browser.mainLoop.runner,timeUntilNextTick)};Browser.mainLoop.method="timeout"}else if(mode==1){Browser.mainLoop.scheduler=function Browser_mainLoop_scheduler_rAF(){Browser.requestAnimationFrame(Browser.mainLoop.runner)};Browser.mainLoop.method="rAF"}else if(mode==2){if(typeof setImmediate=="undefined"){var setImmediates=[];var emscriptenMainLoopMessageId="setimmediate";var Browser_setImmediate_messageHandler=function(event){if(event.data===emscriptenMainLoopMessageId||event.data.target===emscriptenMainLoopMessageId){event.stopPropagation();setImmediates.shift()()}};addEventListener("message",Browser_setImmediate_messageHandler,true);setImmediate=function Browser_emulated_setImmediate(func){setImmediates.push(func);if(ENVIRONMENT_IS_WORKER){if(Module["setImmediates"]===undefined)Module["setImmediates"]=[];Module["setImmediates"].push(func);postMessage({target:emscriptenMainLoopMessageId})}else postMessage(emscriptenMainLoopMessageId,"*")}}Browser.mainLoop.scheduler=function 
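/* _emscripten_set_main_loop_timing picks the scheduler for the registered main loop:
 * mode 0 uses setTimeout with the requested millisecond interval, mode 1 uses
 * requestAnimationFrame, and mode 2 uses setImmediate (polyfilled via postMessage when
 * the host lacks it). */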
Browser_mainLoop_scheduler_setImmediate(){setImmediate(Browser.mainLoop.runner)};Browser.mainLoop.method="immediate"}return 0}var _emscripten_get_now;if(ENVIRONMENT_IS_NODE){_emscripten_get_now=(()=>{var t=process["hrtime"]();return t[0]*1e3+t[1]/1e6})}else _emscripten_get_now=(()=>performance.now());function _exit(status){exit(status)}function maybeExit(){}function setMainLoop(browserIterationFunc,fps,simulateInfiniteLoop,arg,noSetTiming){assert(!Browser.mainLoop.func,"emscripten_set_main_loop: there can only be one main loop function at once: call emscripten_cancel_main_loop to cancel the previous one before setting a new one with different parameters.");Browser.mainLoop.func=browserIterationFunc;Browser.mainLoop.arg=arg;var thisMainLoopId=Browser.mainLoop.currentlyRunningMainloop;function checkIsRunning(){if(thisMainLoopId0){var start=Date.now();var blocker=Browser.mainLoop.queue.shift();blocker.func(blocker.arg);if(Browser.mainLoop.remainingBlockers){var remaining=Browser.mainLoop.remainingBlockers;var next=remaining%1==0?remaining-1:Math.floor(remaining);if(blocker.counted){Browser.mainLoop.remainingBlockers=next}else{next=next+.5;Browser.mainLoop.remainingBlockers=(8*remaining+next)/9}}out('main loop blocker "'+blocker.name+'" took '+(Date.now()-start)+" ms");Browser.mainLoop.updateStatus();if(!checkIsRunning())return;setTimeout(Browser.mainLoop.runner,0);return}if(!checkIsRunning())return;Browser.mainLoop.currentFrameNumber=Browser.mainLoop.currentFrameNumber+1|0;if(Browser.mainLoop.timingMode==1&&Browser.mainLoop.timingValue>1&&Browser.mainLoop.currentFrameNumber%Browser.mainLoop.timingValue!=0){Browser.mainLoop.scheduler();return}else if(Browser.mainLoop.timingMode==0){Browser.mainLoop.tickStartTime=_emscripten_get_now()}GL.newRenderingFrameStarted();Browser.mainLoop.runIter(browserIterationFunc);if(!checkIsRunning())return;if(typeof SDL=="object"&&SDL.audio&&SDL.audio.queueNewAudioData)SDL.audio.queueNewAudioData();Browser.mainLoop.scheduler()};if(!noSetTiming){if(fps&&fps>0)_emscripten_set_main_loop_timing(0,1e3/fps);else _emscripten_set_main_loop_timing(1,1);Browser.mainLoop.scheduler()}if(simulateInfiniteLoop){throw"unwind"}}function callUserCallback(func,synchronous){if(ABORT){return}if(synchronous){func();return}try{func()}catch(e){handleException(e)}}function safeSetTimeout(func,timeout){return setTimeout(function(){callUserCallback(func)},timeout)}var Browser={mainLoop:{running:false,scheduler:null,method:"",currentlyRunningMainloop:0,func:null,arg:0,timingMode:0,timingValue:0,currentFrameNumber:0,queue:[],pause:function(){Browser.mainLoop.scheduler=null;Browser.mainLoop.currentlyRunningMainloop++},resume:function(){Browser.mainLoop.currentlyRunningMainloop++;var timingMode=Browser.mainLoop.timingMode;var timingValue=Browser.mainLoop.timingValue;var func=Browser.mainLoop.func;Browser.mainLoop.func=null;setMainLoop(func,0,false,Browser.mainLoop.arg,true);_emscripten_set_main_loop_timing(timingMode,timingValue);Browser.mainLoop.scheduler()},updateStatus:function(){if(Module["setStatus"]){var message=Module["statusMessage"]||"Please wait...";var remaining=Browser.mainLoop.remainingBlockers;var expected=Browser.mainLoop.expectedBlockers;if(remaining){if(remaining{assert(img.complete,"Image "+name+" could not be decoded");var canvas=document.createElement("canvas");canvas.width=img.width;canvas.height=img.height;var 
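/* Browser.init installs preload plugins: the image plugin decodes common image types
 * into a canvas stored in Module["preloadedImages"], and the audio plugin below decodes
 * .ogg/.wav/.mp3 into Module["preloadedAudios"], falling back to a base64 data URL when
 * the Blob route cannot be played. Browser.mainLoop holds the pause/resume and
 * frame-scheduling state used by setMainLoop above.
 */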
ctx=canvas.getContext("2d");ctx.drawImage(img,0,0);Module["preloadedImages"][name]=canvas;Browser.URLObject.revokeObjectURL(url);if(onload)onload(byteArray)});img.onerror=(event=>{out("Image "+url+" could not be decoded");if(onerror)onerror()});img.src=url};Module["preloadPlugins"].push(imagePlugin);var audioPlugin={};audioPlugin["canHandle"]=function audioPlugin_canHandle(name){return!Module.noAudioDecoding&&name.substr(-4)in{".ogg":1,".wav":1,".mp3":1}};audioPlugin["handle"]=function audioPlugin_handle(byteArray,name,onload,onerror){var done=false;function finish(audio){if(done)return;done=true;Module["preloadedAudios"][name]=audio;if(onload)onload(byteArray)}function fail(){if(done)return;done=true;Module["preloadedAudios"][name]=new Audio;if(onerror)onerror()}if(Browser.hasBlobConstructor){try{var b=new Blob([byteArray],{type:Browser.getMimetype(name)})}catch(e){return fail()}var url=Browser.URLObject.createObjectURL(b);var audio=new Audio;audio.addEventListener("canplaythrough",function(){finish(audio)},false);audio.onerror=function audio_onerror(event){if(done)return;out("warning: browser could not fully decode audio "+name+", trying slower base64 approach");function encode64(data){var BASE="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";var PAD="=";var ret="";var leftchar=0;var leftbits=0;for(var i=0;i=6){var curr=leftchar>>leftbits-6&63;leftbits-=6;ret+=BASE[curr]}}if(leftbits==2){ret+=BASE[(leftchar&3)<<4];ret+=PAD+PAD}else if(leftbits==4){ret+=BASE[(leftchar&15)<<2];ret+=PAD}return ret}audio.src="data:audio/x-"+name.substr(-3)+";base64,"+encode64(byteArray);finish(audio)};audio.src=url;safeSetTimeout(function(){finish(audio)},1e4)}else{return fail()}};Module["preloadPlugins"].push(audioPlugin);function pointerLockChange(){Browser.pointerLock=document["pointerLockElement"]===Module["canvas"]||document["mozPointerLockElement"]===Module["canvas"]||document["webkitPointerLockElement"]===Module["canvas"]||document["msPointerLockElement"]===Module["canvas"]}var canvas=Module["canvas"];if(canvas){canvas.requestPointerLock=canvas["requestPointerLock"]||canvas["mozRequestPointerLock"]||canvas["webkitRequestPointerLock"]||canvas["msRequestPointerLock"]||function(){};canvas.exitPointerLock=document["exitPointerLock"]||document["mozExitPointerLock"]||document["webkitExitPointerLock"]||document["msExitPointerLock"]||function(){};canvas.exitPointerLock=canvas.exitPointerLock.bind(document);document.addEventListener("pointerlockchange",pointerLockChange,false);document.addEventListener("mozpointerlockchange",pointerLockChange,false);document.addEventListener("webkitpointerlockchange",pointerLockChange,false);document.addEventListener("mspointerlockchange",pointerLockChange,false);if(Module["elementPointerLock"]){canvas.addEventListener("click",function(ev){if(!Browser.pointerLock&&Module["canvas"].requestPointerLock){Module["canvas"].requestPointerLock();ev.preventDefault()}},false)}}},handledByPreloadPlugin:function(byteArray,fullname,finish,onerror){Browser.init();var handled=false;Module["preloadPlugins"].forEach(function(plugin){if(handled)return;if(plugin["canHandle"](fullname)){plugin["handle"](byteArray,fullname,finish,onerror);handled=true}});return handled},createContext:function(canvas,useWebGL,setInModule,webGLContextAttributes){if(useWebGL&&Module.ctx&&canvas==Module.canvas)return Module.ctx;var ctx;var contextHandle;if(useWebGL){var contextAttributes={antialias:false,alpha:false,majorVersion:typeof 
WebGL2RenderingContext!="undefined"?2:1};if(webGLContextAttributes){for(var attribute in webGLContextAttributes){contextAttributes[attribute]=webGLContextAttributes[attribute]}}if(typeof GL!="undefined"){contextHandle=GL.createContext(canvas,contextAttributes);if(contextHandle){ctx=GL.getContext(contextHandle).GLctx}}}else{ctx=canvas.getContext("2d")}if(!ctx)return null;if(setInModule){if(!useWebGL)assert(typeof GLctx=="undefined","cannot set in module if GLctx is used, but we are a non-GL context that would replace it");Module.ctx=ctx;if(useWebGL)GL.makeContextCurrent(contextHandle);Module.useWebGL=useWebGL;Browser.moduleContextCreatedCallbacks.forEach(function(callback){callback()});Browser.init()}return ctx},destroyContext:function(canvas,useWebGL,setInModule){},fullscreenHandlersInstalled:false,lockPointer:undefined,resizeCanvas:undefined,requestFullscreen:function(lockPointer,resizeCanvas){Browser.lockPointer=lockPointer;Browser.resizeCanvas=resizeCanvas;if(typeof Browser.lockPointer=="undefined")Browser.lockPointer=true;if(typeof Browser.resizeCanvas=="undefined")Browser.resizeCanvas=false;var canvas=Module["canvas"];function fullscreenChange(){Browser.isFullscreen=false;var canvasContainer=canvas.parentNode;if((document["fullscreenElement"]||document["mozFullScreenElement"]||document["msFullscreenElement"]||document["webkitFullscreenElement"]||document["webkitCurrentFullScreenElement"])===canvasContainer){canvas.exitFullscreen=Browser.exitFullscreen;if(Browser.lockPointer)canvas.requestPointerLock();Browser.isFullscreen=true;if(Browser.resizeCanvas){Browser.setFullscreenCanvasSize()}else{Browser.updateCanvasDimensions(canvas)}}else{canvasContainer.parentNode.insertBefore(canvas,canvasContainer);canvasContainer.parentNode.removeChild(canvasContainer);if(Browser.resizeCanvas){Browser.setWindowedCanvasSize()}else{Browser.updateCanvasDimensions(canvas)}}if(Module["onFullScreen"])Module["onFullScreen"](Browser.isFullscreen);if(Module["onFullscreen"])Module["onFullscreen"](Browser.isFullscreen)}if(!Browser.fullscreenHandlersInstalled){Browser.fullscreenHandlersInstalled=true;document.addEventListener("fullscreenchange",fullscreenChange,false);document.addEventListener("mozfullscreenchange",fullscreenChange,false);document.addEventListener("webkitfullscreenchange",fullscreenChange,false);document.addEventListener("MSFullscreenChange",fullscreenChange,false)}var canvasContainer=document.createElement("div");canvas.parentNode.insertBefore(canvasContainer,canvas);canvasContainer.appendChild(canvas);canvasContainer.requestFullscreen=canvasContainer["requestFullscreen"]||canvasContainer["mozRequestFullScreen"]||canvasContainer["msRequestFullscreen"]||(canvasContainer["webkitRequestFullscreen"]?function(){canvasContainer["webkitRequestFullscreen"](Element["ALLOW_KEYBOARD_INPUT"])}:null)||(canvasContainer["webkitRequestFullScreen"]?function(){canvasContainer["webkitRequestFullScreen"](Element["ALLOW_KEYBOARD_INPUT"])}:null);canvasContainer.requestFullscreen()},exitFullscreen:function(){if(!Browser.isFullscreen){return false}var CFS=document["exitFullscreen"]||document["cancelFullScreen"]||document["mozCancelFullScreen"]||document["msExitFullscreen"]||document["webkitCancelFullScreen"]||function(){};CFS.apply(document,[]);return true},nextRAF:0,fakeRequestAnimationFrame:function(func){var now=Date.now();if(Browser.nextRAF===0){Browser.nextRAF=now+1e3/60}else{while(now+2>=Browser.nextRAF){Browser.nextRAF+=1e3/60}}var 
delay=Math.max(Browser.nextRAF-now,0);setTimeout(func,delay)},requestAnimationFrame:function(func){if(typeof requestAnimationFrame=="function"){requestAnimationFrame(func);return}var RAF=Browser.fakeRequestAnimationFrame;RAF(func)},safeSetTimeout:function(func){return safeSetTimeout(func)},safeRequestAnimationFrame:function(func){return Browser.requestAnimationFrame(function(){callUserCallback(func)})},getMimetype:function(name){return{"jpg":"image/jpeg","jpeg":"image/jpeg","png":"image/png","bmp":"image/bmp","ogg":"audio/ogg","wav":"audio/wav","mp3":"audio/mpeg"}[name.substr(name.lastIndexOf(".")+1)]},getUserMedia:function(func){if(!window.getUserMedia){window.getUserMedia=navigator["getUserMedia"]||navigator["mozGetUserMedia"]}window.getUserMedia(func)},getMovementX:function(event){return event["movementX"]||event["mozMovementX"]||event["webkitMovementX"]||0},getMovementY:function(event){return event["movementY"]||event["mozMovementY"]||event["webkitMovementY"]||0},getMouseWheelDelta:function(event){var delta=0;switch(event.type){case"DOMMouseScroll":delta=event.detail/3;break;case"mousewheel":delta=event.wheelDelta/120;break;case"wheel":delta=event.deltaY;switch(event.deltaMode){case 0:delta/=100;break;case 1:delta/=3;break;case 2:delta*=80;break;default:throw"unrecognized mouse wheel delta mode: "+event.deltaMode}break;default:throw"unrecognized mouse wheel event: "+event.type}return delta},mouseX:0,mouseY:0,mouseMovementX:0,mouseMovementY:0,touches:{},lastTouches:{},calculateMouseEvent:function(event){if(Browser.pointerLock){if(event.type!="mousemove"&&"mozMovementX"in event){Browser.mouseMovementX=Browser.mouseMovementY=0}else{Browser.mouseMovementX=Browser.getMovementX(event);Browser.mouseMovementY=Browser.getMovementY(event)}if(typeof SDL!="undefined"){Browser.mouseX=SDL.mouseX+Browser.mouseMovementX;Browser.mouseY=SDL.mouseY+Browser.mouseMovementY}else{Browser.mouseX+=Browser.mouseMovementX;Browser.mouseY+=Browser.mouseMovementY}}else{var rect=Module["canvas"].getBoundingClientRect();var cw=Module["canvas"].width;var ch=Module["canvas"].height;var scrollX=typeof window.scrollX!="undefined"?window.scrollX:window.pageXOffset;var scrollY=typeof window.scrollY!="undefined"?window.scrollY:window.pageYOffset;if(event.type==="touchstart"||event.type==="touchend"||event.type==="touchmove"){var touch=event.touch;if(touch===undefined){return}var adjustedX=touch.pageX-(scrollX+rect.left);var adjustedY=touch.pageY-(scrollY+rect.top);adjustedX=adjustedX*(cw/rect.width);adjustedY=adjustedY*(ch/rect.height);var coords={x:adjustedX,y:adjustedY};if(event.type==="touchstart"){Browser.lastTouches[touch.identifier]=coords;Browser.touches[touch.identifier]=coords}else if(event.type==="touchend"||event.type==="touchmove"){var last=Browser.touches[touch.identifier];if(!last)last=coords;Browser.lastTouches[touch.identifier]=last;Browser.touches[touch.identifier]=coords}return}var x=event.pageX-(scrollX+rect.left);var y=event.pageY-(scrollY+rect.top);x=x*(cw/rect.width);y=y*(ch/rect.height);Browser.mouseMovementX=x-Browser.mouseX;Browser.mouseMovementY=y-Browser.mouseY;Browser.mouseX=x;Browser.mouseY=y}},resizeListeners:[],updateResizeListeners:function(){var canvas=Module["canvas"];Browser.resizeListeners.forEach(function(listener){listener(canvas.width,canvas.height)})},setCanvasSize:function(width,height,noUpdates){var 
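/* Browser.calculateMouseEvent above normalises input: with pointer lock active it
 * accumulates movementX/movementY, otherwise it converts page coordinates into canvas
 * pixel coordinates using getBoundingClientRect and the canvas width/height ratio, and
 * it tracks touch positions per identifier in Browser.touches/lastTouches.
 */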
canvas=Module["canvas"];Browser.updateCanvasDimensions(canvas,width,height);if(!noUpdates)Browser.updateResizeListeners()},windowedWidth:0,windowedHeight:0,setFullscreenCanvasSize:function(){if(typeof SDL!="undefined"){var flags=HEAPU32[SDL.screen>>2];flags=flags|8388608;HEAP32[SDL.screen>>2]=flags}Browser.updateCanvasDimensions(Module["canvas"]);Browser.updateResizeListeners()},setWindowedCanvasSize:function(){if(typeof SDL!="undefined"){var flags=HEAPU32[SDL.screen>>2];flags=flags&~8388608;HEAP32[SDL.screen>>2]=flags}Browser.updateCanvasDimensions(Module["canvas"]);Browser.updateResizeListeners()},updateCanvasDimensions:function(canvas,wNative,hNative){if(wNative&&hNative){canvas.widthNative=wNative;canvas.heightNative=hNative}else{wNative=canvas.widthNative;hNative=canvas.heightNative}var w=wNative;var h=hNative;if(Module["forcedAspectRatio"]&&Module["forcedAspectRatio"]>0){if(w/h=0;--i){JSEvents._removeHandler(i)}JSEvents.eventHandlers=[];JSEvents.deferredCalls=[]},registerRemoveEventListeners:function(){if(!JSEvents.removeEventListenersRegistered){__ATEXIT__.push(JSEvents.removeAllEventListeners);JSEvents.removeEventListenersRegistered=true}},deferredCalls:[],deferCall:function(targetFunction,precedence,argsList){function arraysHaveEqualContent(arrA,arrB){if(arrA.length!=arrB.length)return false;for(var i in arrA){if(arrA[i]!=arrB[i])return false}return true}for(var i in JSEvents.deferredCalls){var call=JSEvents.deferredCalls[i];if(call.targetFunction==targetFunction&&arraysHaveEqualContent(call.argsList,argsList)){return}}JSEvents.deferredCalls.push({targetFunction:targetFunction,precedence:precedence,argsList:argsList});JSEvents.deferredCalls.sort(function(x,y){return x.precedence2?UTF8ToString(cString):cString}var specialHTMLTargets=[0,typeof document!="undefined"?document:0,typeof window!="undefined"?window:0];function findEventTarget(target){target=maybeCStringToJsString(target);var domElement=specialHTMLTargets[target]||(typeof document!="undefined"?document.querySelector(target):undefined);return domElement}function findCanvasEventTarget(target){return findEventTarget(target)}function _emscripten_get_canvas_element_size(target,width,height){var canvas=findCanvasEventTarget(target);if(!canvas)return-4;HEAP32[width>>2]=canvas.width;HEAP32[height>>2]=canvas.height}function getCanvasElementSize(target){return withStackSave(function(){var w=stackAlloc(8);var h=w+4;var targetInt=stackAlloc(target.id.length+1);stringToUTF8(target.id,targetInt,target.id.length+1);var ret=_emscripten_get_canvas_element_size(targetInt,w,h);var size=[HEAP32[w>>2],HEAP32[h>>2]];return size})}function _emscripten_set_canvas_element_size(target,width,height){var canvas=findCanvasEventTarget(target);if(!canvas)return-4;canvas.width=width;canvas.height=height;return 0}function setCanvasElementSize(target,width,height){if(!target.controlTransferredOffscreen){target.width=width;target.height=height}else{withStackSave(function(){var targetInt=stackAlloc(target.id.length+1);stringToUTF8(target.id,targetInt,target.id.length+1);_emscripten_set_canvas_element_size(targetInt,width,height)})}}function registerRestoreOldStyle(canvas){var canvasSize=getCanvasElementSize(canvas);var oldWidth=canvasSize[0];var oldHeight=canvasSize[1];var oldCssWidth=canvas.style.width;var oldCssHeight=canvas.style.height;var oldBackgroundColor=canvas.style.backgroundColor;var oldDocumentBackgroundColor=document.body.style.backgroundColor;var oldPaddingLeft=canvas.style.paddingLeft;var oldPaddingRight=canvas.style.paddingRight;var 
oldPaddingTop=canvas.style.paddingTop;var oldPaddingBottom=canvas.style.paddingBottom;var oldMarginLeft=canvas.style.marginLeft;var oldMarginRight=canvas.style.marginRight;var oldMarginTop=canvas.style.marginTop;var oldMarginBottom=canvas.style.marginBottom;var oldDocumentBodyMargin=document.body.style.margin;var oldDocumentOverflow=document.documentElement.style.overflow;var oldDocumentScroll=document.body.scroll;var oldImageRendering=canvas.style.imageRendering;function restoreOldStyle(){var fullscreenElement=document.fullscreenElement||document.webkitFullscreenElement||document.msFullscreenElement;if(!fullscreenElement){document.removeEventListener("fullscreenchange",restoreOldStyle);document.removeEventListener("webkitfullscreenchange",restoreOldStyle);setCanvasElementSize(canvas,oldWidth,oldHeight);canvas.style.width=oldCssWidth;canvas.style.height=oldCssHeight;canvas.style.backgroundColor=oldBackgroundColor;if(!oldDocumentBackgroundColor)document.body.style.backgroundColor="white";document.body.style.backgroundColor=oldDocumentBackgroundColor;canvas.style.paddingLeft=oldPaddingLeft;canvas.style.paddingRight=oldPaddingRight;canvas.style.paddingTop=oldPaddingTop;canvas.style.paddingBottom=oldPaddingBottom;canvas.style.marginLeft=oldMarginLeft;canvas.style.marginRight=oldMarginRight;canvas.style.marginTop=oldMarginTop;canvas.style.marginBottom=oldMarginBottom;document.body.style.margin=oldDocumentBodyMargin;document.documentElement.style.overflow=oldDocumentOverflow;document.body.scroll=oldDocumentScroll;canvas.style.imageRendering=oldImageRendering;if(canvas.GLctxObject)canvas.GLctxObject.GLctx.viewport(0,0,oldWidth,oldHeight);if(currentFullscreenStrategy.canvasResizedCallback){(function(a1,a2,a3){return dynCall_iiii.apply(null,[currentFullscreenStrategy.canvasResizedCallback,a1,a2,a3])})(37,0,currentFullscreenStrategy.canvasResizedCallbackUserData)}}}document.addEventListener("fullscreenchange",restoreOldStyle);document.addEventListener("webkitfullscreenchange",restoreOldStyle);return restoreOldStyle}function setLetterbox(element,topBottom,leftRight){element.style.paddingLeft=element.style.paddingRight=leftRight+"px";element.style.paddingTop=element.style.paddingBottom=topBottom+"px"}function getBoundingClientRect(e){return specialHTMLTargets.indexOf(e)<0?e.getBoundingClientRect():{"left":0,"top":0}}function _JSEvents_resizeCanvasForFullscreen(target,strategy){var restoreOldStyle=registerRestoreOldStyle(target);var cssWidth=strategy.softFullscreen?innerWidth:screen.width;var cssHeight=strategy.softFullscreen?innerHeight:screen.height;var rect=getBoundingClientRect(target);var windowedCssWidth=rect.width;var windowedCssHeight=rect.height;var canvasSize=getCanvasElementSize(target);var windowedRttWidth=canvasSize[0];var windowedRttHeight=canvasSize[1];if(strategy.scaleMode==3){setLetterbox(target,(cssHeight-windowedCssHeight)/2,(cssWidth-windowedCssWidth)/2);cssWidth=windowedCssWidth;cssHeight=windowedCssHeight}else if(strategy.scaleMode==2){if(cssWidth*windowedRttHeight>2]=isFullscreen;HEAP32[eventStruct+4>>2]=JSEvents.fullscreenEnabled();var reportedElement=isFullscreen?fullscreenElement:JSEvents.previousFullscreenElement;var nodeName=JSEvents.getNodeNameForTarget(reportedElement);var 
id=reportedElement&&reportedElement.id?reportedElement.id:"";stringToUTF8(nodeName,eventStruct+8,128);stringToUTF8(id,eventStruct+136,128);HEAP32[eventStruct+264>>2]=reportedElement?reportedElement.clientWidth:0;HEAP32[eventStruct+268>>2]=reportedElement?reportedElement.clientHeight:0;HEAP32[eventStruct+272>>2]=screen.width;HEAP32[eventStruct+276>>2]=screen.height;if(isFullscreen){JSEvents.previousFullscreenElement=fullscreenElement}}function _emscripten_get_fullscreen_status(fullscreenStatus){if(!JSEvents.fullscreenEnabled())return-1;fillFullscreenChangeEventData(fullscreenStatus);return 0}function fillGamepadEventData(eventStruct,e){HEAPF64[eventStruct>>3]=e.timestamp;for(var i=0;i>3]=e.axes[i]}for(var i=0;i>3]=e.buttons[i].value}else{HEAPF64[eventStruct+i*8+528>>3]=e.buttons[i]}}for(var i=0;i>2]=e.buttons[i].pressed}else{HEAP32[eventStruct+i*4+1040>>2]=e.buttons[i]==1}}HEAP32[eventStruct+1296>>2]=e.connected;HEAP32[eventStruct+1300>>2]=e.index;HEAP32[eventStruct+8>>2]=e.axes.length;HEAP32[eventStruct+12>>2]=e.buttons.length;stringToUTF8(e.id,eventStruct+1304,64);stringToUTF8(e.mapping,eventStruct+1368,64)}function _emscripten_get_gamepad_status(index,gamepadState){if(index<0||index>=JSEvents.lastGamepadState.length)return-5;if(!JSEvents.lastGamepadState[index])return-7;fillGamepadEventData(gamepadState,JSEvents.lastGamepadState[index]);return 0}function _emscripten_get_heap_max(){return 2147483648}function _emscripten_get_now_res(){if(ENVIRONMENT_IS_NODE){return 1}else return 1e3}function _emscripten_get_num_gamepads(){return JSEvents.lastGamepadState.length}function _emscripten_html5_remove_all_event_listeners(){JSEvents.removeAllEventListeners()}function _emscripten_is_webgl_context_lost(contextHandle){return!GL.contexts[contextHandle]||GL.contexts[contextHandle].GLctx.isContextLost()}function reallyNegative(x){return x<0||x===0&&1/x===-Infinity}function convertI32PairToI53(lo,hi){return(lo>>>0)+hi*4294967296}function convertU32PairToI53(lo,hi){return(lo>>>0)+(hi>>>0)*4294967296}function reSign(value,bits){if(value<=0){return value}var half=bits<=32?Math.abs(1<=half&&(bits<=32||value>half)){value=-2*half+value}return value}function unSign(value,bits){if(value>=0){return value}return bits<=32?2*Math.abs(1<>3]);argIndex+=8}else if(type=="i64"){ret=[HEAP32[argIndex>>2],HEAP32[argIndex+4>>2]];argIndex+=8}else{type="i32";ret=HEAP32[argIndex>>2];argIndex+=4}return ret}var ret=[];var curr,next,currArg;while(1){var startTextIndex=textIndex;curr=HEAP8[textIndex>>0];if(curr===0)break;next=HEAP8[textIndex+1>>0];if(curr==37){var flagAlwaysSigned=false;var flagLeftAlign=false;var flagAlternative=false;var flagZeroPad=false;var flagPadSign=false;flagsLoop:while(1){switch(next){case 43:flagAlwaysSigned=true;break;case 45:flagLeftAlign=true;break;case 35:flagAlternative=true;break;case 48:if(flagZeroPad){break flagsLoop}else{flagZeroPad=true;break}case 32:flagPadSign=true;break;default:break flagsLoop}textIndex++;next=HEAP8[textIndex+1>>0]}var width=0;if(next==42){width=getNextArg("i32");textIndex++;next=HEAP8[textIndex+1>>0]}else{while(next>=48&&next<=57){width=width*10+(next-48);textIndex++;next=HEAP8[textIndex+1>>0]}}var precisionSet=false,precision=-1;if(next==46){precision=0;precisionSet=true;textIndex++;next=HEAP8[textIndex+1>>0];if(next==42){precision=getNextArg("i32");textIndex++}else{while(1){var 
precisionChr=HEAP8[textIndex+1>>0];if(precisionChr<48||precisionChr>57)break;precision=precision*10+(precisionChr-48);textIndex++}}next=HEAP8[textIndex+1>>0]}if(precision<0){precision=6;precisionSet=false}var argSize;switch(String.fromCharCode(next)){case"h":var nextNext=HEAP8[textIndex+2>>0];if(nextNext==104){textIndex++;argSize=1}else{argSize=2}break;case"l":var nextNext=HEAP8[textIndex+2>>0];if(nextNext==108){textIndex++;argSize=8}else{argSize=4}break;case"L":case"q":case"j":argSize=8;break;case"z":case"t":case"I":argSize=4;break;default:argSize=null}if(argSize)textIndex++;next=HEAP8[textIndex+1>>0];switch(String.fromCharCode(next)){case"d":case"i":case"u":case"o":case"x":case"X":case"p":{var signed=next==100||next==105;argSize=argSize||4;currArg=getNextArg("i"+argSize*8);var argText;if(argSize==8){currArg=next==117?convertU32PairToI53(currArg[0],currArg[1]):convertI32PairToI53(currArg[0],currArg[1])}if(argSize<=4){var limit=Math.pow(256,argSize)-1;currArg=(signed?reSign:unSign)(currArg&limit,argSize*8)}var currAbsArg=Math.abs(currArg);var prefix="";if(next==100||next==105){argText=reSign(currArg,8*argSize).toString(10)}else if(next==117){argText=unSign(currArg,8*argSize).toString(10);currArg=Math.abs(currArg)}else if(next==111){argText=(flagAlternative?"0":"")+currAbsArg.toString(8)}else if(next==120||next==88){prefix=flagAlternative&&currArg!=0?"0x":"";if(currArg<0){currArg=-currArg;argText=(currAbsArg-1).toString(16);var buffer=[];for(var i=0;i=0){if(flagAlwaysSigned){prefix="+"+prefix}else if(flagPadSign){prefix=" "+prefix}}if(argText.charAt(0)=="-"){prefix="-"+prefix;argText=argText.substr(1)}while(prefix.length+argText.lengthexponent&&exponent>=-4){next=(next==103?"f":"F").charCodeAt(0);precision-=exponent+1}else{next=(next==103?"e":"E").charCodeAt(0);precision--}effectivePrecision=Math.min(precision,20)}if(next==101||next==69){argText=currArg.toExponential(effectivePrecision);if(/[eE][-+]\d$/.test(argText)){argText=argText.slice(0,-1)+"0"+argText.slice(-1)}}else if(next==102||next==70){argText=currArg.toFixed(effectivePrecision);if(currArg===0&&reallyNegative(currArg)){argText="-"+argText}}var parts=argText.split("e");if(isGeneral&&!flagAlternative){while(parts[0].length>1&&parts[0].includes(".")&&(parts[0].slice(-1)=="0"||parts[0].slice(-1)==".")){parts[0]=parts[0].slice(0,-1)}}else{if(flagAlternative&&argText.indexOf(".")==-1)parts[0]+=".";while(precision>effectivePrecision++)parts[0]+="0"}argText=parts[0]+(parts.length>1?"e"+parts[1]:"");if(next==69)argText=argText.toUpperCase();if(currArg>=0){if(flagAlwaysSigned){argText="+"+argText}else if(flagPadSign){argText=" "+argText}}}while(argText.length>0])}}else{ret=ret.concat(intArrayFromString("(null)".substr(0,argLength),true))}if(flagLeftAlign){while(argLength0){ret.push(32)}if(!flagLeftAlign)ret.push(getNextArg("i8"));break}case"n":{var ptr=getNextArg("i32*");HEAP32[ptr>>2]=ret.length;break}case"%":{ret.push(curr);break}default:{for(var i=startTextIndex;i>0])}}}textIndex+=2}else{ret.push(curr);textIndex+=1}}return ret}function traverseStack(args){if(!args||!args.callee||!args.callee.name){return[null,"",""]}var funstr=args.callee.toString();var funcname=args.callee.name;var str="(";var first=true;for(var i in args){var a=args[i];if(!first){str+=", "}first=false;if(typeof a=="number"||typeof a=="string"){str+=a}else{str+="("+typeof a+")"}}str+=")";var caller=args.callee.caller;args=caller?caller.arguments:[];if(first)str="";return[args,funcname,str]}function _emscripten_get_callstack_js(flags){var callstack=jsStackTrace();var 
iThisFunc=callstack.lastIndexOf("_emscripten_log");var iThisFunc2=callstack.lastIndexOf("_emscripten_get_callstack");var iNextLine=callstack.indexOf("\n",Math.max(iThisFunc,iThisFunc2))+1;callstack=callstack.slice(iNextLine);if(flags&32){warnOnce("EM_LOG_DEMANGLE is deprecated; ignoring")}if(flags&8&&typeof emscripten_source_map=="undefined"){warnOnce('Source map information is not available, emscripten_log with EM_LOG_C_STACK will be ignored. Build with "--pre-js $EMSCRIPTEN/src/emscripten-source-map.min.js" linker flag to add source map loading to code.');flags^=8;flags|=16}var stack_args=null;if(flags&128){stack_args=traverseStack(arguments);while(stack_args[1].includes("_emscripten_"))stack_args=traverseStack(stack_args[0])}var lines=callstack.split("\n");callstack="";var newFirefoxRe=new RegExp("\\s*(.*?)@(.*?):([0-9]+):([0-9]+)");var firefoxRe=new RegExp("\\s*(.*?)@(.*):(.*)(:(.*))?");var chromeRe=new RegExp("\\s*at (.*?) \\((.*):(.*):(.*)\\)");for(var l in lines){var line=lines[l];var symbolName="";var file="";var lineno=0;var column=0;var parts=chromeRe.exec(line);if(parts&&parts.length==5){symbolName=parts[1];file=parts[2];lineno=parts[3];column=parts[4]}else{parts=newFirefoxRe.exec(line);if(!parts)parts=firefoxRe.exec(line);if(parts&&parts.length>=4){symbolName=parts[1];file=parts[2];lineno=parts[3];column=parts[4]|0}else{callstack+=line+"\n";continue}}var haveSourceMap=false;if(flags&8){var orig=emscripten_source_map.originalPositionFor({line:lineno,column:column});haveSourceMap=orig&&orig.source;if(haveSourceMap){if(flags&64){orig.source=orig.source.substring(orig.source.replace(/\\/g,"/").lastIndexOf("/")+1)}callstack+=" at "+symbolName+" ("+orig.source+":"+orig.line+":"+orig.column+")\n"}}if(flags&16||!haveSourceMap){if(flags&64){file=file.substring(file.replace(/\\/g,"/").lastIndexOf("/")+1)}callstack+=(haveSourceMap?" 
= "+symbolName:" at "+symbolName)+" ("+file+":"+lineno+":"+column+")\n"}if(flags&128&&stack_args[0]){if(stack_args[1]==symbolName&&stack_args[2].length>0){callstack=callstack.replace(/\s+$/,"");callstack+=" with values: "+stack_args[1]+stack_args[2]+"\n"}stack_args=traverseStack(stack_args[0])}}callstack=callstack.replace(/\s+$/,"");return callstack}function _emscripten_log_js(flags,str){if(flags&24){str=str.replace(/\s+$/,"");str+=(str.length>0?"\n":"")+_emscripten_get_callstack_js(flags)}if(flags&1){if(flags&4){console.error(str)}else if(flags&2){console.warn(str)}else if(flags&512){console.info(str)}else if(flags&256){console.debug(str)}else{console.log(str)}}else if(flags&6){err(str)}else{out(str)}}function _emscripten_log(flags,format,varargs){var result=formatString(format,varargs);var str=UTF8ArrayToString(result,0);_emscripten_log_js(flags,str)}function _emscripten_memcpy_big(dest,src,num){HEAPU8.copyWithin(dest,src,src+num)}function doRequestFullscreen(target,strategy){if(!JSEvents.fullscreenEnabled())return-1;target=findEventTarget(target);if(!target)return-4;if(!target.requestFullscreen&&!target.webkitRequestFullscreen){return-3}var canPerformRequests=JSEvents.canPerformEventHandlerRequests();if(!canPerformRequests){if(strategy.deferUntilInEventHandler){JSEvents.deferCall(_JSEvents_requestFullscreen,1,[target,strategy]);return 1}else{return-2}}return _JSEvents_requestFullscreen(target,strategy)}function _emscripten_request_fullscreen(target,deferUntilInEventHandler){var strategy={scaleMode:0,canvasResolutionScaleMode:0,filteringMode:0,deferUntilInEventHandler:deferUntilInEventHandler,canvasResizedCallbackTargetThread:2};return doRequestFullscreen(target,strategy)}function _emscripten_request_pointerlock(target,deferUntilInEventHandler){target=findEventTarget(target);if(!target)return-4;if(!target.requestPointerLock&&!target.msRequestPointerLock){return-1}var canPerformRequests=JSEvents.canPerformEventHandlerRequests();if(!canPerformRequests){if(deferUntilInEventHandler){JSEvents.deferCall(requestPointerLock,2,[target]);return 1}else{return-2}}return requestPointerLock(target)}function emscripten_realloc_buffer(size){try{wasmMemory.grow(size-buffer.byteLength+65535>>>16);updateGlobalBufferAndViews(wasmMemory.buffer);return 1}catch(e){}}function _emscripten_resize_heap(requestedSize){var oldSize=HEAPU8.length;requestedSize=requestedSize>>>0;var maxHeapSize=_emscripten_get_heap_max();if(requestedSize>maxHeapSize){return false}let alignUp=(x,multiple)=>x+(multiple-x%multiple)%multiple;for(var cutDown=1;cutDown<=4;cutDown*=2){var overGrownHeapSize=oldSize*(1+.2/cutDown);overGrownHeapSize=Math.min(overGrownHeapSize,requestedSize+100663296);var newSize=Math.min(maxHeapSize,alignUp(Math.max(requestedSize,overGrownHeapSize),65536));var replacement=emscripten_realloc_buffer(newSize);if(replacement){return true}}return false}function _emscripten_sample_gamepad_data(){return(JSEvents.lastGamepadState=navigator.getGamepads?navigator.getGamepads():navigator.webkitGetGamepads?navigator.webkitGetGamepads():null)?0:-1}function registerFocusEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.focusEvent)JSEvents.focusEvent=_malloc(256);var focusEventHandlerFunc=function(ev){var e=ev||event;var nodeName=JSEvents.getNodeNameForTarget(e.target);var id=e.target.id?e.target.id:"";var focusEvent=JSEvents.focusEvent;stringToUTF8(nodeName,focusEvent+0,128);stringToUTF8(id,focusEvent+128,128);if(function(a1,a2,a3){return 
dynCall_iiii.apply(null,[callbackfunc,a1,a2,a3])}(eventTypeId,focusEvent,userData))e.preventDefault()};var eventHandler={target:findEventTarget(target),eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:focusEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}function _emscripten_set_blur_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerFocusEventCallback(target,userData,useCapture,callbackfunc,12,"blur",targetThread);return 0}function _emscripten_set_focus_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerFocusEventCallback(target,userData,useCapture,callbackfunc,13,"focus",targetThread);return 0}function registerFullscreenChangeEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.fullscreenChangeEvent)JSEvents.fullscreenChangeEvent=_malloc(280);var fullscreenChangeEventhandlerFunc=function(ev){var e=ev||event;var fullscreenChangeEvent=JSEvents.fullscreenChangeEvent;fillFullscreenChangeEventData(fullscreenChangeEvent);if(function(a1,a2,a3){return dynCall_iiii.apply(null,[callbackfunc,a1,a2,a3])}(eventTypeId,fullscreenChangeEvent,userData))e.preventDefault()};var eventHandler={target:target,eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:fullscreenChangeEventhandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}function _emscripten_set_fullscreenchange_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){if(!JSEvents.fullscreenEnabled())return-1;target=findEventTarget(target);if(!target)return-4;registerFullscreenChangeEventCallback(target,userData,useCapture,callbackfunc,19,"fullscreenchange",targetThread);registerFullscreenChangeEventCallback(target,userData,useCapture,callbackfunc,19,"webkitfullscreenchange",targetThread);return 0}function registerGamepadEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.gamepadEvent)JSEvents.gamepadEvent=_malloc(1432);var gamepadEventHandlerFunc=function(ev){var e=ev||event;var gamepadEvent=JSEvents.gamepadEvent;fillGamepadEventData(gamepadEvent,e["gamepad"]);if(function(a1,a2,a3){return dynCall_iiii.apply(null,[callbackfunc,a1,a2,a3])}(eventTypeId,gamepadEvent,userData))e.preventDefault()};var eventHandler={target:findEventTarget(target),allowsDeferredCalls:true,eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:gamepadEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}function _emscripten_set_gamepadconnected_callback_on_thread(userData,useCapture,callbackfunc,targetThread){if(!navigator.getGamepads&&!navigator.webkitGetGamepads)return-1;registerGamepadEventCallback(2,userData,useCapture,callbackfunc,26,"gamepadconnected",targetThread);return 0}function _emscripten_set_gamepaddisconnected_callback_on_thread(userData,useCapture,callbackfunc,targetThread){if(!navigator.getGamepads&&!navigator.webkitGetGamepads)return-1;registerGamepadEventCallback(2,userData,useCapture,callbackfunc,27,"gamepaddisconnected",targetThread);return 0}function _emscripten_set_interval(cb,msecs,userData){return setInterval(function(){callUserCallback(function(){(function(a1){dynCall_vi.apply(null,[cb,a1])})(userData)})},msecs)}function registerKeyEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.keyEvent)JSEvents.keyEvent=_malloc(176);var 
keyEventHandlerFunc=function(e){var keyEventData=JSEvents.keyEvent;HEAPF64[keyEventData>>3]=e.timeStamp;var idx=keyEventData>>2;HEAP32[idx+2]=e.location;HEAP32[idx+3]=e.ctrlKey;HEAP32[idx+4]=e.shiftKey;HEAP32[idx+5]=e.altKey;HEAP32[idx+6]=e.metaKey;HEAP32[idx+7]=e.repeat;HEAP32[idx+8]=e.charCode;HEAP32[idx+9]=e.keyCode;HEAP32[idx+10]=e.which;stringToUTF8(e.key||"",keyEventData+44,32);stringToUTF8(e.code||"",keyEventData+76,32);stringToUTF8(e.char||"",keyEventData+108,32);stringToUTF8(e.locale||"",keyEventData+140,32);if(function(a1,a2,a3){return dynCall_iiii.apply(null,[callbackfunc,a1,a2,a3])}(eventTypeId,keyEventData,userData))e.preventDefault()};var eventHandler={target:findEventTarget(target),allowsDeferredCalls:true,eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:keyEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}function _emscripten_set_keydown_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerKeyEventCallback(target,userData,useCapture,callbackfunc,2,"keydown",targetThread);return 0}function _emscripten_set_keypress_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerKeyEventCallback(target,userData,useCapture,callbackfunc,1,"keypress",targetThread);return 0}function _emscripten_set_keyup_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerKeyEventCallback(target,userData,useCapture,callbackfunc,3,"keyup",targetThread);return 0}function _emscripten_set_main_loop(func,fps,simulateInfiniteLoop){var browserIterationFunc=function(){dynCall_v.call(null,func)};setMainLoop(browserIterationFunc,fps,simulateInfiniteLoop)}function fillMouseEventData(eventStruct,e,target){HEAPF64[eventStruct>>3]=e.timeStamp;var idx=eventStruct>>2;HEAP32[idx+2]=e.screenX;HEAP32[idx+3]=e.screenY;HEAP32[idx+4]=e.clientX;HEAP32[idx+5]=e.clientY;HEAP32[idx+6]=e.ctrlKey;HEAP32[idx+7]=e.shiftKey;HEAP32[idx+8]=e.altKey;HEAP32[idx+9]=e.metaKey;HEAP16[idx*2+20]=e.button;HEAP16[idx*2+21]=e.buttons;HEAP32[idx+11]=e["movementX"];HEAP32[idx+12]=e["movementY"];var rect=getBoundingClientRect(target);HEAP32[idx+13]=e.clientX-rect.left;HEAP32[idx+14]=e.clientY-rect.top}function registerMouseEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.mouseEvent)JSEvents.mouseEvent=_malloc(72);target=findEventTarget(target);var mouseEventHandlerFunc=function(ev){var e=ev||event;fillMouseEventData(JSEvents.mouseEvent,e,target);if(function(a1,a2,a3){return dynCall_iiii.apply(null,[callbackfunc,a1,a2,a3])}(eventTypeId,JSEvents.mouseEvent,userData))e.preventDefault()};var eventHandler={target:target,allowsDeferredCalls:eventTypeString!="mousemove"&&eventTypeString!="mouseenter"&&eventTypeString!="mouseleave",eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:mouseEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}function _emscripten_set_mousedown_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerMouseEventCallback(target,userData,useCapture,callbackfunc,5,"mousedown",targetThread);return 0}function _emscripten_set_mousemove_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerMouseEventCallback(target,userData,useCapture,callbackfunc,8,"mousemove",targetThread);return 0}function 
_emscripten_set_mouseup_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerMouseEventCallback(target,userData,useCapture,callbackfunc,6,"mouseup",targetThread);return 0}function registerTouchEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.touchEvent)JSEvents.touchEvent=_malloc(1696);target=findEventTarget(target);var touchEventHandlerFunc=function(e){var t,touches={},et=e.touches;for(var i=0;i>2;HEAP32[idx+3]=e.ctrlKey;HEAP32[idx+4]=e.shiftKey;HEAP32[idx+5]=e.altKey;HEAP32[idx+6]=e.metaKey;idx+=7;var targetRect=getBoundingClientRect(target);var numTouches=0;for(var i in touches){var t=touches[i];HEAP32[idx+0]=t.identifier;HEAP32[idx+1]=t.screenX;HEAP32[idx+2]=t.screenY;HEAP32[idx+3]=t.clientX;HEAP32[idx+4]=t.clientY;HEAP32[idx+5]=t.pageX;HEAP32[idx+6]=t.pageY;HEAP32[idx+7]=t.isChanged;HEAP32[idx+8]=t.onTarget;HEAP32[idx+9]=t.clientX-targetRect.left;HEAP32[idx+10]=t.clientY-targetRect.top;idx+=13;if(++numTouches>31){break}}HEAP32[touchEvent+8>>2]=numTouches;if(function(a1,a2,a3){return dynCall_iiii.apply(null,[callbackfunc,a1,a2,a3])}(eventTypeId,touchEvent,userData))e.preventDefault()};var eventHandler={target:target,allowsDeferredCalls:eventTypeString=="touchstart"||eventTypeString=="touchend",eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:touchEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}function _emscripten_set_touchcancel_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerTouchEventCallback(target,userData,useCapture,callbackfunc,25,"touchcancel",targetThread);return 0}function _emscripten_set_touchend_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerTouchEventCallback(target,userData,useCapture,callbackfunc,23,"touchend",targetThread);return 0}function _emscripten_set_touchmove_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerTouchEventCallback(target,userData,useCapture,callbackfunc,24,"touchmove",targetThread);return 0}function _emscripten_set_touchstart_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerTouchEventCallback(target,userData,useCapture,callbackfunc,22,"touchstart",targetThread);return 0}function registerWheelEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.wheelEvent)JSEvents.wheelEvent=_malloc(104);var wheelHandlerFunc=function(ev){var e=ev||event;var wheelEvent=JSEvents.wheelEvent;fillMouseEventData(wheelEvent,e,target);HEAPF64[wheelEvent+72>>3]=e["deltaX"];HEAPF64[wheelEvent+80>>3]=e["deltaY"];HEAPF64[wheelEvent+88>>3]=e["deltaZ"];HEAP32[wheelEvent+96>>2]=e["deltaMode"];if(function(a1,a2,a3){return dynCall_iiii.apply(null,[callbackfunc,a1,a2,a3])}(eventTypeId,wheelEvent,userData))e.preventDefault()};var eventHandler={target:target,allowsDeferredCalls:true,eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:wheelHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}function _emscripten_set_wheel_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){target=findEventTarget(target);if(typeof target.onwheel!="undefined"){registerWheelEventCallback(target,userData,useCapture,callbackfunc,9,"wheel",targetThread);return 0}else{return-1}}function __webgl_enable_ANGLE_instanced_arrays(ctx){var 
ext=ctx.getExtension("ANGLE_instanced_arrays");if(ext){ctx["vertexAttribDivisor"]=function(index,divisor){ext["vertexAttribDivisorANGLE"](index,divisor)};ctx["drawArraysInstanced"]=function(mode,first,count,primcount){ext["drawArraysInstancedANGLE"](mode,first,count,primcount)};ctx["drawElementsInstanced"]=function(mode,count,type,indices,primcount){ext["drawElementsInstancedANGLE"](mode,count,type,indices,primcount)};return 1}}function __webgl_enable_OES_vertex_array_object(ctx){var ext=ctx.getExtension("OES_vertex_array_object");if(ext){ctx["createVertexArray"]=function(){return ext["createVertexArrayOES"]()};ctx["deleteVertexArray"]=function(vao){ext["deleteVertexArrayOES"](vao)};ctx["bindVertexArray"]=function(vao){ext["bindVertexArrayOES"](vao)};ctx["isVertexArray"]=function(vao){return ext["isVertexArrayOES"](vao)};return 1}}function __webgl_enable_WEBGL_draw_buffers(ctx){var ext=ctx.getExtension("WEBGL_draw_buffers");if(ext){ctx["drawBuffers"]=function(n,bufs){ext["drawBuffersWEBGL"](n,bufs)};return 1}}function __webgl_enable_WEBGL_draw_instanced_base_vertex_base_instance(ctx){return!!(ctx.dibvbi=ctx.getExtension("WEBGL_draw_instanced_base_vertex_base_instance"))}function __webgl_enable_WEBGL_multi_draw_instanced_base_vertex_base_instance(ctx){return!!(ctx.mdibvbi=ctx.getExtension("WEBGL_multi_draw_instanced_base_vertex_base_instance"))}function __webgl_enable_WEBGL_multi_draw(ctx){return!!(ctx.multiDrawWebgl=ctx.getExtension("WEBGL_multi_draw"))}var GL={counter:1,buffers:[],mappedBuffers:{},programs:[],framebuffers:[],renderbuffers:[],textures:[],shaders:[],vaos:[],contexts:[],offscreenCanvases:{},queries:[],samplers:[],transformFeedbacks:[],syncs:[],byteSizeByTypeRoot:5120,byteSizeByType:[1,1,2,2,4,4,4,2,3,4,8],stringCache:{},stringiCache:{},unpackAlignment:4,recordError:function recordError(errorCode){if(!GL.lastError){GL.lastError=errorCode}},getNewId:function(table){var ret=GL.counter++;for(var i=table.length;i>1;var quadIndexes=new Uint16Array(numIndexes);var i=0,v=0;while(1){quadIndexes[i++]=v;if(i>=numIndexes)break;quadIndexes[i++]=v+1;if(i>=numIndexes)break;quadIndexes[i++]=v+2;if(i>=numIndexes)break;quadIndexes[i++]=v;if(i>=numIndexes)break;quadIndexes[i++]=v+2;if(i>=numIndexes)break;quadIndexes[i++]=v+3;if(i>=numIndexes)break;v+=4}context.GLctx.bufferData(34963,quadIndexes,35044);context.GLctx.bindBuffer(34963,null)}},getTempVertexBuffer:function getTempVertexBuffer(sizeBytes){var idx=GL.log2ceilLookup(sizeBytes);var ringbuffer=GL.currentContext.tempVertexBuffers1[idx];var nextFreeBufferIndex=GL.currentContext.tempVertexBufferCounters1[idx];GL.currentContext.tempVertexBufferCounters1[idx]=GL.currentContext.tempVertexBufferCounters1[idx]+1&GL.numTempVertexBuffersPerSize-1;var vbo=ringbuffer[nextFreeBufferIndex];if(vbo){return vbo}var prevVBO=GLctx.getParameter(34964);ringbuffer[nextFreeBufferIndex]=GLctx.createBuffer();GLctx.bindBuffer(34962,ringbuffer[nextFreeBufferIndex]);GLctx.bufferData(34962,1<>2]:-1;source+=UTF8ToString(HEAP32[string+i*4>>2],len<0?undefined:len)}return source},calcBufLength:function calcBufLength(size,type,stride,count){if(stride>0){return count*stride}var typeSize=GL.byteSizeByType[type-GL.byteSizeByTypeRoot];return size*typeSize*count},usedTempBuffers:[],preDrawHandleClientVertexAttribBindings:function preDrawHandleClientVertexAttribBindings(count){GL.resetBufferBinding=false;for(var i=0;i1?canvas.getContext("webgl2",webGLContextAttributes):canvas.getContext("webgl",webGLContextAttributes);if(!ctx)return 0;var 
handle=GL.registerContext(ctx,webGLContextAttributes);return handle},registerContext:function(ctx,webGLContextAttributes){var handle=GL.getNewId(GL.contexts);var context={handle:handle,attributes:webGLContextAttributes,version:webGLContextAttributes.majorVersion,GLctx:ctx};if(ctx.canvas)ctx.canvas.GLctxObject=context;GL.contexts[handle]=context;if(typeof webGLContextAttributes.enableExtensionsByDefault=="undefined"||webGLContextAttributes.enableExtensionsByDefault){GL.initExtensions(context)}context.maxVertexAttribs=context.GLctx.getParameter(34921);context.clientBuffers=[];for(var i=0;i=2){GLctx.disjointTimerQueryExt=GLctx.getExtension("EXT_disjoint_timer_query_webgl2")}if(context.version<2||!GLctx.disjointTimerQueryExt){GLctx.disjointTimerQueryExt=GLctx.getExtension("EXT_disjoint_timer_query")}__webgl_enable_WEBGL_multi_draw(GLctx);var exts=GLctx.getSupportedExtensions()||[];exts.forEach(function(ext){if(!ext.includes("lose_context")&&!ext.includes("debug")){GLctx.getExtension(ext)}})}};var __emscripten_webgl_power_preferences=["default","low-power","high-performance"];function _emscripten_webgl_do_create_context(target,attributes){var a=attributes>>2;var powerPreference=HEAP32[a+(24>>2)];var contextAttributes={"alpha":!!HEAP32[a+(0>>2)],"depth":!!HEAP32[a+(4>>2)],"stencil":!!HEAP32[a+(8>>2)],"antialias":!!HEAP32[a+(12>>2)],"premultipliedAlpha":!!HEAP32[a+(16>>2)],"preserveDrawingBuffer":!!HEAP32[a+(20>>2)],"powerPreference":__emscripten_webgl_power_preferences[powerPreference],"failIfMajorPerformanceCaveat":!!HEAP32[a+(28>>2)],majorVersion:HEAP32[a+(32>>2)],minorVersion:HEAP32[a+(36>>2)],enableExtensionsByDefault:HEAP32[a+(40>>2)],explicitSwapControl:HEAP32[a+(44>>2)],proxyContextToMainThread:HEAP32[a+(48>>2)],renderViaOffscreenBackBuffer:HEAP32[a+(52>>2)]};var canvas=findCanvasEventTarget(target);if(!canvas){return 0}if(contextAttributes.explicitSwapControl){return 0}var contextHandle=GL.createContext(canvas,contextAttributes);return contextHandle}function _emscripten_webgl_create_context(a0,a1){return _emscripten_webgl_do_create_context(a0,a1)}function _emscripten_webgl_destroy_context(contextHandle){if(GL.currentContext==contextHandle)GL.currentContext=0;GL.deleteContext(contextHandle)}function _emscripten_webgl_enable_extension(contextHandle,extension){var context=GL.getContext(contextHandle);var extString=UTF8ToString(extension);if(extString.startsWith("GL_"))extString=extString.substr(3);if(extString=="ANGLE_instanced_arrays")__webgl_enable_ANGLE_instanced_arrays(GLctx);if(extString=="OES_vertex_array_object")__webgl_enable_OES_vertex_array_object(GLctx);if(extString=="WEBGL_draw_buffers")__webgl_enable_WEBGL_draw_buffers(GLctx);if(extString=="WEBGL_draw_instanced_base_vertex_base_instance")__webgl_enable_WEBGL_draw_instanced_base_vertex_base_instance(GLctx);if(extString=="WEBGL_multi_draw_instanced_base_vertex_base_instance")__webgl_enable_WEBGL_multi_draw_instanced_base_vertex_base_instance(GLctx);if(extString=="WEBGL_multi_draw")__webgl_enable_WEBGL_multi_draw(GLctx);var ext=context.GLctx.getExtension(extString);return!!ext}function _emscripten_webgl_do_get_current_context(){return GL.currentContext?GL.currentContext.handle:0}function _emscripten_webgl_get_current_context(){return _emscripten_webgl_do_get_current_context()}function _emscripten_webgl_init_context_attributes(attributes){var a=attributes>>2;for(var i=0;i<56>>2;++i){HEAP32[a+i]=0}HEAP32[a+(0>>2)]=HEAP32[a+(4>>2)]=HEAP32[a+(12>>2)]=HEAP32[a+(16>>2)]=HEAP32[a+(32>>2)]=HEAP32[a+(40>>2)]=1}function 
_emscripten_webgl_make_context_current(contextHandle){var success=GL.makeContextCurrent(contextHandle);return success?0:-5}var ENV={};function getExecutableName(){return thisProgram||"./this.program"}function getEnvStrings(){if(!getEnvStrings.strings){var lang=(typeof navigator=="object"&&navigator.languages&&navigator.languages[0]||"C").replace("-","_")+".UTF-8";var env={"USER":"web_user","LOGNAME":"web_user","PATH":"/","PWD":"/","HOME":"/home/web_user","LANG":lang,"_":getExecutableName()};for(var x in ENV){if(ENV[x]===undefined)delete env[x];else env[x]=ENV[x]}var strings=[];for(var x in env){strings.push(x+"="+env[x])}getEnvStrings.strings=strings}return getEnvStrings.strings}function _environ_get(__environ,environ_buf){var bufSize=0;getEnvStrings().forEach(function(string,i){var ptr=environ_buf+bufSize;HEAP32[__environ+i*4>>2]=ptr;writeAsciiToMemory(string,ptr);bufSize+=string.length+1});return 0}function _environ_sizes_get(penviron_count,penviron_buf_size){var strings=getEnvStrings();HEAP32[penviron_count>>2]=strings.length;var bufSize=0;strings.forEach(function(string){bufSize+=string.length+1});HEAP32[penviron_buf_size>>2]=bufSize;return 0}function _fd_close(fd){try{var stream=SYSCALLS.getStreamFromFD(fd);FS.close(stream);return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return e.errno}}function _fd_fdstat_get(fd,pbuf){try{var stream=SYSCALLS.getStreamFromFD(fd);var type=stream.tty?2:FS.isDir(stream.mode)?3:FS.isLink(stream.mode)?7:4;HEAP8[pbuf>>0]=type;return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return e.errno}}function _fd_read(fd,iov,iovcnt,pnum){try{var stream=SYSCALLS.getStreamFromFD(fd);var num=SYSCALLS.doReadv(stream,iov,iovcnt);HEAP32[pnum>>2]=num;return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return e.errno}}function _fd_seek(fd,offset_low,offset_high,whence,newOffset){try{var stream=SYSCALLS.getStreamFromFD(fd);var HIGH_OFFSET=4294967296;var offset=offset_high*HIGH_OFFSET+(offset_low>>>0);var DOUBLE_LIMIT=9007199254740992;if(offset<=-DOUBLE_LIMIT||offset>=DOUBLE_LIMIT){return-61}FS.llseek(stream,offset,whence);tempI64=[stream.position>>>0,(tempDouble=stream.position,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[newOffset>>2]=tempI64[0],HEAP32[newOffset+4>>2]=tempI64[1];if(stream.getdents&&offset===0&&whence===0)stream.getdents=null;return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return e.errno}}function _fd_write(fd,iov,iovcnt,pnum){try{var stream=SYSCALLS.getStreamFromFD(fd);var num=SYSCALLS.doWritev(stream,iov,iovcnt);HEAP32[pnum>>2]=num;return 0}catch(e){if(typeof FS=="undefined"||!(e instanceof FS.ErrnoError))throw e;return e.errno}}function _getTempRet0(){return getTempRet0()}function _getaddrinfo(node,service,hint,out){var addr=0;var port=0;var flags=0;var family=0;var type=0;var proto=0;var ai;function allocaddrinfo(family,type,proto,canon,addr,port){var sa,salen,ai;var errno;salen=family===10?28:16;addr=family===10?inetNtop6(addr):inetNtop4(addr);sa=_malloc(salen);errno=writeSockaddr(sa,family,addr,port);assert(!errno);ai=_malloc(32);HEAP32[ai+4>>2]=family;HEAP32[ai+8>>2]=type;HEAP32[ai+12>>2]=proto;HEAP32[ai+24>>2]=canon;HEAP32[ai+20>>2]=sa;if(family===10){HEAP32[ai+16>>2]=28}else{HEAP32[ai+16>>2]=16}HEAP32[ai+28>>2]=0;return 
ai}if(hint){flags=HEAP32[hint>>2];family=HEAP32[hint+4>>2];type=HEAP32[hint+8>>2];proto=HEAP32[hint+12>>2]}if(type&&!proto){proto=type===2?17:6}if(!type&&proto){type=proto===17?2:1}if(proto===0){proto=6}if(type===0){type=1}if(!node&&!service){return-2}if(flags&~(1|2|4|1024|8|16|32)){return-1}if(hint!==0&&HEAP32[hint>>2]&2&&!node){return-1}if(flags&32){return-2}if(type!==0&&type!==1&&type!==2){return-7}if(family!==0&&family!==2&&family!==10){return-6}if(service){service=UTF8ToString(service);port=parseInt(service,10);if(isNaN(port)){if(flags&1024){return-2}return-8}}if(!node){if(family===0){family=2}if((flags&1)===0){if(family===2){addr=_htonl(2130706433)}else{addr=[0,0,0,1]}}ai=allocaddrinfo(family,type,proto,null,addr,port);HEAP32[out>>2]=ai;return 0}node=UTF8ToString(node);addr=inetPton4(node);if(addr!==null){if(family===0||family===2){family=2}else if(family===10&&flags&8){addr=[0,0,_htonl(65535),addr];family=10}else{return-2}}else{addr=inetPton6(node);if(addr!==null){if(family===0||family===10){family=10}else{return-2}}}if(addr!=null){ai=allocaddrinfo(family,type,proto,node,addr,port);HEAP32[out>>2]=ai;return 0}if(flags&4){return-2}node=DNS.lookup_name(node);addr=inetPton4(node);if(family===0){family=2}else if(family===10){addr=[0,0,_htonl(65535),addr]}ai=allocaddrinfo(family,type,proto,null,addr,port);HEAP32[out>>2]=ai;return 0}function getHostByName(name){var ret=_malloc(20);var nameBuf=_malloc(name.length+1);stringToUTF8(name,nameBuf,name.length+1);HEAP32[ret>>2]=nameBuf;var aliasesBuf=_malloc(4);HEAP32[aliasesBuf>>2]=0;HEAP32[ret+4>>2]=aliasesBuf;var afinet=2;HEAP32[ret+8>>2]=afinet;HEAP32[ret+12>>2]=4;var addrListBuf=_malloc(12);HEAP32[addrListBuf>>2]=addrListBuf+8;HEAP32[addrListBuf+4>>2]=0;HEAP32[addrListBuf+8>>2]=inetPton4(DNS.lookup_name(name));HEAP32[ret+16>>2]=addrListBuf;return ret}function _gethostbyaddr(addr,addrlen,type){if(type!==2){setErrNo(5);return null}addr=HEAP32[addr>>2];var host=inetNtop4(addr);var lookup=DNS.lookup_addr(host);if(lookup){host=lookup}return getHostByName(host)}function _gethostbyname(name){return getHostByName(UTF8ToString(name))}function _getnameinfo(sa,salen,node,nodelen,serv,servlen,flags){var info=readSockaddr(sa,salen);if(info.errno){return-6}var port=info.port;var addr=info.addr;var overflowed=false;if(node&&nodelen){var lookup;if(flags&1||!(lookup=DNS.lookup_addr(addr))){if(flags&8){return-2}}else{addr=lookup}var numBytesWrittenExclNull=stringToUTF8(addr,node,nodelen);if(numBytesWrittenExclNull+1>=nodelen){overflowed=true}}if(serv&&servlen){port=""+port;var numBytesWrittenExclNull=stringToUTF8(port,serv,servlen);if(numBytesWrittenExclNull+1>=servlen){overflowed=true}}if(overflowed){return-12}return 0}function _glActiveTexture(x0){GLctx["activeTexture"](x0)}function _glAttachShader(program,shader){program=GL.programs[program];shader=GL.shaders[shader];program[shader.shaderType]=shader;GLctx.attachShader(program,shader)}function _glBeginQuery(target,id){GLctx["beginQuery"](target,GL.queries[id])}function _glBindAttribLocation(program,index,name){GLctx.bindAttribLocation(GL.programs[program],index,UTF8ToString(name))}function _glBindBuffer(target,buffer){if(target==34962){GLctx.currentArrayBufferBinding=buffer}else if(target==34963){GLctx.currentElementArrayBufferBinding=buffer}if(target==35051){GLctx.currentPixelPackBufferBinding=buffer}else if(target==35052){GLctx.currentPixelUnpackBufferBinding=buffer}GLctx.bindBuffer(target,GL.buffers[buffer])}function 
_glBindBufferBase(target,index,buffer){GLctx["bindBufferBase"](target,index,GL.buffers[buffer])}function _glBindBufferRange(target,index,buffer,offset,ptrsize){GLctx["bindBufferRange"](target,index,GL.buffers[buffer],offset,ptrsize)}function _glBindFramebuffer(target,framebuffer){GLctx.bindFramebuffer(target,GL.framebuffers[framebuffer])}function _glBindRenderbuffer(target,renderbuffer){GLctx.bindRenderbuffer(target,GL.renderbuffers[renderbuffer])}function _glBindSampler(unit,sampler){GLctx["bindSampler"](unit,GL.samplers[sampler])}function _glBindTexture(target,texture){GLctx.bindTexture(target,GL.textures[texture])}function _glBindVertexArray(vao){GLctx["bindVertexArray"](GL.vaos[vao]);var ibo=GLctx.getParameter(34965);GLctx.currentElementArrayBufferBinding=ibo?ibo.name|0:0}function _glBlendEquation(x0){GLctx["blendEquation"](x0)}function _glBlendEquationSeparate(x0,x1){GLctx["blendEquationSeparate"](x0,x1)}function _glBlendFuncSeparate(x0,x1,x2,x3){GLctx["blendFuncSeparate"](x0,x1,x2,x3)}function _glBlitFramebuffer(x0,x1,x2,x3,x4,x5,x6,x7,x8,x9){GLctx["blitFramebuffer"](x0,x1,x2,x3,x4,x5,x6,x7,x8,x9)}function _glBufferData(target,size,data,usage){if(GL.currentContext.version>=2){if(data){GLctx.bufferData(target,HEAPU8,usage,data,size)}else{GLctx.bufferData(target,size,usage)}}else{GLctx.bufferData(target,data?HEAPU8.subarray(data,data+size):size,usage)}}function _glBufferSubData(target,offset,size,data){if(GL.currentContext.version>=2){GLctx.bufferSubData(target,offset,HEAPU8,data,size);return}GLctx.bufferSubData(target,offset,HEAPU8.subarray(data,data+size))}function _glCheckFramebufferStatus(x0){return GLctx["checkFramebufferStatus"](x0)}function _glClear(x0){GLctx["clear"](x0)}function _glClearBufferfi(x0,x1,x2,x3){GLctx["clearBufferfi"](x0,x1,x2,x3)}function _glClearBufferfv(buffer,drawbuffer,value){GLctx["clearBufferfv"](buffer,drawbuffer,HEAPF32,value>>2)}function _glClearBufferuiv(buffer,drawbuffer,value){GLctx["clearBufferuiv"](buffer,drawbuffer,HEAPU32,value>>2)}function _glClearColor(x0,x1,x2,x3){GLctx["clearColor"](x0,x1,x2,x3)}function _glClearDepthf(x0){GLctx["clearDepth"](x0)}function _glClearStencil(x0){GLctx["clearStencil"](x0)}function _glClientWaitSync(sync,flags,timeoutLo,timeoutHi){return GLctx.clientWaitSync(GL.syncs[sync],flags,convertI32PairToI53(timeoutLo,timeoutHi))}function _glColorMask(red,green,blue,alpha){GLctx.colorMask(!!red,!!green,!!blue,!!alpha)}function _glCompileShader(shader){GLctx.compileShader(GL.shaders[shader])}function _glCompressedTexImage2D(target,level,internalFormat,width,height,border,imageSize,data){if(GL.currentContext.version>=2){if(GLctx.currentPixelUnpackBufferBinding){GLctx["compressedTexImage2D"](target,level,internalFormat,width,height,border,imageSize,data)}else{GLctx["compressedTexImage2D"](target,level,internalFormat,width,height,border,HEAPU8,data,imageSize)}return}GLctx["compressedTexImage2D"](target,level,internalFormat,width,height,border,data?HEAPU8.subarray(data,data+imageSize):null)}function _glCompressedTexImage3D(target,level,internalFormat,width,height,depth,border,imageSize,data){if(GLctx.currentPixelUnpackBufferBinding){GLctx["compressedTexImage3D"](target,level,internalFormat,width,height,depth,border,imageSize,data)}else{GLctx["compressedTexImage3D"](target,level,internalFormat,width,height,depth,border,HEAPU8,data,imageSize)}}function 
_glCompressedTexSubImage2D(target,level,xoffset,yoffset,width,height,format,imageSize,data){if(GL.currentContext.version>=2){if(GLctx.currentPixelUnpackBufferBinding){GLctx["compressedTexSubImage2D"](target,level,xoffset,yoffset,width,height,format,imageSize,data)}else{GLctx["compressedTexSubImage2D"](target,level,xoffset,yoffset,width,height,format,HEAPU8,data,imageSize)}return}GLctx["compressedTexSubImage2D"](target,level,xoffset,yoffset,width,height,format,data?HEAPU8.subarray(data,data+imageSize):null)}function _glCompressedTexSubImage3D(target,level,xoffset,yoffset,zoffset,width,height,depth,format,imageSize,data){if(GLctx.currentPixelUnpackBufferBinding){GLctx["compressedTexSubImage3D"](target,level,xoffset,yoffset,zoffset,width,height,depth,format,imageSize,data)}else{GLctx["compressedTexSubImage3D"](target,level,xoffset,yoffset,zoffset,width,height,depth,format,HEAPU8,data,imageSize)}}function _glCopyBufferSubData(x0,x1,x2,x3,x4){GLctx["copyBufferSubData"](x0,x1,x2,x3,x4)}function _glCopyTexImage2D(x0,x1,x2,x3,x4,x5,x6,x7){GLctx["copyTexImage2D"](x0,x1,x2,x3,x4,x5,x6,x7)}function _glCopyTexSubImage2D(x0,x1,x2,x3,x4,x5,x6,x7){GLctx["copyTexSubImage2D"](x0,x1,x2,x3,x4,x5,x6,x7)}function _glCreateProgram(){var id=GL.getNewId(GL.programs);var program=GLctx.createProgram();program.name=id;program.maxUniformLength=program.maxAttributeLength=program.maxUniformBlockNameLength=0;program.uniformIdCounter=1;GL.programs[id]=program;return id}function _glCreateShader(shaderType){var id=GL.getNewId(GL.shaders);GL.shaders[id]=GLctx.createShader(shaderType);GL.shaders[id].shaderType=shaderType&1?"vs":"fs";return id}function _glCullFace(x0){GLctx["cullFace"](x0)}function _glDeleteBuffers(n,buffers){for(var i=0;i>2];var buffer=GL.buffers[id];if(!buffer)continue;GLctx.deleteBuffer(buffer);buffer.name=0;GL.buffers[id]=null;if(id==GLctx.currentArrayBufferBinding)GLctx.currentArrayBufferBinding=0;if(id==GLctx.currentElementArrayBufferBinding)GLctx.currentElementArrayBufferBinding=0;if(id==GLctx.currentPixelPackBufferBinding)GLctx.currentPixelPackBufferBinding=0;if(id==GLctx.currentPixelUnpackBufferBinding)GLctx.currentPixelUnpackBufferBinding=0}}function _glDeleteFramebuffers(n,framebuffers){for(var i=0;i>2];var framebuffer=GL.framebuffers[id];if(!framebuffer)continue;GLctx.deleteFramebuffer(framebuffer);framebuffer.name=0;GL.framebuffers[id]=null}}function _glDeleteProgram(id){if(!id)return;var program=GL.programs[id];if(!program){GL.recordError(1281);return}GLctx.deleteProgram(program);program.name=0;GL.programs[id]=null}function _glDeleteQueries(n,ids){for(var i=0;i>2];var query=GL.queries[id];if(!query)continue;GLctx["deleteQuery"](query);GL.queries[id]=null}}function _glDeleteRenderbuffers(n,renderbuffers){for(var i=0;i>2];var renderbuffer=GL.renderbuffers[id];if(!renderbuffer)continue;GLctx.deleteRenderbuffer(renderbuffer);renderbuffer.name=0;GL.renderbuffers[id]=null}}function _glDeleteSamplers(n,samplers){for(var i=0;i>2];var sampler=GL.samplers[id];if(!sampler)continue;GLctx["deleteSampler"](sampler);sampler.name=0;GL.samplers[id]=null}}function _glDeleteShader(id){if(!id)return;var shader=GL.shaders[id];if(!shader){GL.recordError(1281);return}GLctx.deleteShader(shader);GL.shaders[id]=null}function _glDeleteSync(id){if(!id)return;var sync=GL.syncs[id];if(!sync){GL.recordError(1281);return}GLctx.deleteSync(sync);sync.name=0;GL.syncs[id]=null}function _glDeleteTextures(n,textures){for(var i=0;i>2];var 
texture=GL.textures[id];if(!texture)continue;GLctx.deleteTexture(texture);texture.name=0;GL.textures[id]=null}}function _glDeleteVertexArrays(n,vaos){for(var i=0;i>2];GLctx["deleteVertexArray"](GL.vaos[id]);GL.vaos[id]=null}}function _glDepthFunc(x0){GLctx["depthFunc"](x0)}function _glDepthMask(flag){GLctx.depthMask(!!flag)}function _glDetachShader(program,shader){GLctx.detachShader(GL.programs[program],GL.shaders[shader])}function _glDisable(x0){GLctx["disable"](x0)}function _glDisableVertexAttribArray(index){var cb=GL.currentContext.clientBuffers[index];cb.enabled=false;GLctx.disableVertexAttribArray(index)}function _glDrawArrays(mode,first,count){GL.preDrawHandleClientVertexAttribBindings(first+count);GLctx.drawArrays(mode,first,count);GL.postDrawHandleClientVertexAttribBindings()}function _glDrawArraysInstanced(mode,first,count,primcount){GLctx["drawArraysInstanced"](mode,first,count,primcount)}var tempFixedLengthArray=[];function _glDrawBuffers(n,bufs){var bufArray=tempFixedLengthArray[n];for(var i=0;i>2]}GLctx["drawBuffers"](bufArray)}function _glDrawElements(mode,count,type,indices){var buf;if(!GLctx.currentElementArrayBufferBinding){var size=GL.calcBufLength(1,type,0,count);buf=GL.getTempIndexBuffer(size);GLctx.bindBuffer(34963,buf);GLctx.bufferSubData(34963,0,HEAPU8.subarray(indices,indices+size));indices=0}GL.preDrawHandleClientVertexAttribBindings(count);GLctx.drawElements(mode,count,type,indices);GL.postDrawHandleClientVertexAttribBindings(count);if(!GLctx.currentElementArrayBufferBinding){GLctx.bindBuffer(34963,null)}}function _glDrawElementsInstanced(mode,count,type,indices,primcount){GLctx["drawElementsInstanced"](mode,count,type,indices,primcount)}function _glEnable(x0){GLctx["enable"](x0)}function _glEnableVertexAttribArray(index){var cb=GL.currentContext.clientBuffers[index];cb.enabled=true;GLctx.enableVertexAttribArray(index)}function _glEndQuery(x0){GLctx["endQuery"](x0)}function _glFenceSync(condition,flags){var sync=GLctx.fenceSync(condition,flags);if(sync){var id=GL.getNewId(GL.syncs);sync.name=id;GL.syncs[id]=sync;return id}else{return 0}}function _glFinish(){GLctx["finish"]()}function _glFlush(){GLctx["flush"]()}function emscriptenWebGLGetBufferBinding(target){switch(target){case 34962:target=34964;break;case 34963:target=34965;break;case 35051:target=35053;break;case 35052:target=35055;break;case 35982:target=35983;break;case 36662:target=36662;break;case 36663:target=36663;break;case 35345:target=35368;break}var buffer=GLctx.getParameter(target);if(buffer)return buffer.name|0;else return 0}function emscriptenWebGLValidateMapBufferTarget(target){switch(target){case 34962:case 34963:case 36662:case 36663:case 35051:case 35052:case 35882:case 35982:case 35345:return true;default:return false}}function _glFlushMappedBufferRange(target,offset,length){if(!emscriptenWebGLValidateMapBufferTarget(target)){GL.recordError(1280);err("GL_INVALID_ENUM in glFlushMappedBufferRange");return}var mapping=GL.mappedBuffers[emscriptenWebGLGetBufferBinding(target)];if(!mapping){GL.recordError(1282);err("buffer was never mapped in glFlushMappedBufferRange");return}if(!(mapping.access&16)){GL.recordError(1282);err("buffer was not mapped with GL_MAP_FLUSH_EXPLICIT_BIT in glFlushMappedBufferRange");return}if(offset<0||length<0||offset+length>mapping.length){GL.recordError(1281);err("invalid range in glFlushMappedBufferRange");return}GLctx.bufferSubData(target,mapping.offset,HEAPU8.subarray(mapping.mem+offset,mapping.mem+offset+length))}function 
_glFramebufferRenderbuffer(target,attachment,renderbuffertarget,renderbuffer){GLctx.framebufferRenderbuffer(target,attachment,renderbuffertarget,GL.renderbuffers[renderbuffer])}function _glFramebufferTexture2D(target,attachment,textarget,texture,level){GLctx.framebufferTexture2D(target,attachment,textarget,GL.textures[texture],level)}function _glFramebufferTextureLayer(target,attachment,texture,level,layer){GLctx.framebufferTextureLayer(target,attachment,GL.textures[texture],level,layer)}function _glFrontFace(x0){GLctx["frontFace"](x0)}function __glGenObject(n,buffers,createFunction,objectTable){for(var i=0;i>2]=id}}function _glGenBuffers(n,buffers){__glGenObject(n,buffers,"createBuffer",GL.buffers)}function _glGenFramebuffers(n,ids){__glGenObject(n,ids,"createFramebuffer",GL.framebuffers)}function _glGenQueries(n,ids){__glGenObject(n,ids,"createQuery",GL.queries)}function _glGenRenderbuffers(n,renderbuffers){__glGenObject(n,renderbuffers,"createRenderbuffer",GL.renderbuffers)}function _glGenSamplers(n,samplers){__glGenObject(n,samplers,"createSampler",GL.samplers)}function _glGenTextures(n,textures){__glGenObject(n,textures,"createTexture",GL.textures)}function _glGenVertexArrays(n,arrays){__glGenObject(n,arrays,"createVertexArray",GL.vaos)}function _glGenerateMipmap(x0){GLctx["generateMipmap"](x0)}function __glGetActiveAttribOrUniform(funcName,program,index,bufSize,length,size,type,name){program=GL.programs[program];var info=GLctx[funcName](program,index);if(info){var numBytesWrittenExclNull=name&&stringToUTF8(info.name,name,bufSize);if(length)HEAP32[length>>2]=numBytesWrittenExclNull;if(size)HEAP32[size>>2]=info.size;if(type)HEAP32[type>>2]=info.type}}function _glGetActiveAttrib(program,index,bufSize,length,size,type,name){__glGetActiveAttribOrUniform("getActiveAttrib",program,index,bufSize,length,size,type,name)}function _glGetActiveUniform(program,index,bufSize,length,size,type,name){__glGetActiveAttribOrUniform("getActiveUniform",program,index,bufSize,length,size,type,name)}function _glGetActiveUniformBlockName(program,uniformBlockIndex,bufSize,length,uniformBlockName){program=GL.programs[program];var result=GLctx["getActiveUniformBlockName"](program,uniformBlockIndex);if(!result)return;if(uniformBlockName&&bufSize>0){var numBytesWrittenExclNull=stringToUTF8(result,uniformBlockName,bufSize);if(length)HEAP32[length>>2]=numBytesWrittenExclNull}else{if(length)HEAP32[length>>2]=0}}function _glGetActiveUniformBlockiv(program,uniformBlockIndex,pname,params){if(!params){GL.recordError(1281);return}program=GL.programs[program];if(pname==35393){var name=GLctx["getActiveUniformBlockName"](program,uniformBlockIndex);HEAP32[params>>2]=name.length+1;return}var result=GLctx["getActiveUniformBlockParameter"](program,uniformBlockIndex,pname);if(result===null)return;if(pname==35395){for(var i=0;i>2]=result[i]}}else{HEAP32[params>>2]=result}}function _glGetActiveUniformsiv(program,uniformCount,uniformIndices,pname,params){if(!params){GL.recordError(1281);return}if(uniformCount>0&&uniformIndices==0){GL.recordError(1281);return}program=GL.programs[program];var ids=[];for(var i=0;i>2])}var result=GLctx["getActiveUniforms"](program,ids,pname);if(!result)return;var len=result.length;for(var i=0;i>2]=result[i]}}function _glGetAttribLocation(program,name){return GLctx.getAttribLocation(GL.programs[program],UTF8ToString(name))}function _glGetBufferSubData(target,offset,size,data){if(!data){GL.recordError(1281);return}GLctx["getBufferSubData"](target,offset,HEAPU8,data,size)}function _glGetError(){var 
error=GLctx.getError()||GL.lastError;GL.lastError=0;return error}function _glGetFramebufferAttachmentParameteriv(target,attachment,pname,params){var result=GLctx.getFramebufferAttachmentParameter(target,attachment,pname);if(result instanceof WebGLRenderbuffer||result instanceof WebGLTexture){result=result.name|0}HEAP32[params>>2]=result}function writeI53ToI64(ptr,num){HEAPU32[ptr>>2]=num;HEAPU32[ptr+4>>2]=(num-HEAPU32[ptr>>2])/4294967296}function emscriptenWebGLGetIndexed(target,index,data,type){if(!data){GL.recordError(1281);return}var result=GLctx["getIndexedParameter"](target,index);var ret;switch(typeof result){case"boolean":ret=result?1:0;break;case"number":ret=result;break;case"object":if(result===null){switch(target){case 35983:case 35368:ret=0;break;default:{GL.recordError(1280);return}}}else if(result instanceof WebGLBuffer){ret=result.name|0}else{GL.recordError(1280);return}break;default:GL.recordError(1280);return}switch(type){case 1:writeI53ToI64(data,ret);break;case 0:HEAP32[data>>2]=ret;break;case 2:HEAPF32[data>>2]=ret;break;case 4:HEAP8[data>>0]=ret?1:0;break;default:throw"internal emscriptenWebGLGetIndexed() error, bad type: "+type}}function _glGetIntegeri_v(target,index,data){emscriptenWebGLGetIndexed(target,index,data,0)}function emscriptenWebGLGet(name_,p,type){if(!p){GL.recordError(1281);return}var ret=undefined;switch(name_){case 36346:ret=1;break;case 36344:if(type!=0&&type!=1){GL.recordError(1280)}return;case 34814:case 36345:ret=0;break;case 34466:var formats=GLctx.getParameter(34467);ret=formats?formats.length:0;break;case 33390:ret=1048576;break;case 33309:if(GL.currentContext.version<2){GL.recordError(1282);return}var exts=GLctx.getSupportedExtensions()||[];ret=2*exts.length;break;case 33307:case 33308:if(GL.currentContext.version<2){GL.recordError(1280);return}ret=name_==33307?3:0;break}if(ret===undefined){var result=GLctx.getParameter(name_);switch(typeof result){case"number":ret=result;break;case"boolean":ret=result?1:0;break;case"string":GL.recordError(1280);return;case"object":if(result===null){switch(name_){case 34964:case 35725:case 34965:case 36006:case 36007:case 32873:case 34229:case 36662:case 36663:case 35053:case 35055:case 36010:case 35097:case 35869:case 32874:case 36389:case 35983:case 35368:case 34068:{ret=0;break}default:{GL.recordError(1280);return}}}else if(result instanceof Float32Array||result instanceof Uint32Array||result instanceof Int32Array||result instanceof Array){for(var i=0;i>2]=result[i];break;case 2:HEAPF32[p+i*4>>2]=result[i];break;case 4:HEAP8[p+i>>0]=result[i]?1:0;break}}return}else{try{ret=result.name|0}catch(e){GL.recordError(1280);err("GL_INVALID_ENUM in glGet"+type+"v: Unknown object returned from WebGL getParameter("+name_+")! 
(error: "+e+")");return}}break;default:GL.recordError(1280);err("GL_INVALID_ENUM in glGet"+type+"v: Native code calling glGet"+type+"v("+name_+") and it returns "+result+" of type "+typeof result+"!");return}}switch(type){case 1:writeI53ToI64(p,ret);break;case 0:HEAP32[p>>2]=ret;break;case 2:HEAPF32[p>>2]=ret;break;case 4:HEAP8[p>>0]=ret?1:0;break}}function _glGetIntegerv(name_,p){emscriptenWebGLGet(name_,p,0)}function _glGetInternalformativ(target,internalformat,pname,bufSize,params){if(bufSize<0){GL.recordError(1281);return}if(!params){GL.recordError(1281);return}var ret=GLctx["getInternalformatParameter"](target,internalformat,pname);if(ret===null)return;for(var i=0;i>2]=ret[i]}}function _glGetProgramBinary(program,bufSize,length,binaryFormat,binary){GL.recordError(1282)}function _glGetProgramInfoLog(program,maxLength,length,infoLog){var log=GLctx.getProgramInfoLog(GL.programs[program]);if(log===null)log="(unknown error)";var numBytesWrittenExclNull=maxLength>0&&infoLog?stringToUTF8(log,infoLog,maxLength):0;if(length)HEAP32[length>>2]=numBytesWrittenExclNull}function _glGetProgramiv(program,pname,p){if(!p){GL.recordError(1281);return}if(program>=GL.counter){GL.recordError(1281);return}program=GL.programs[program];if(pname==35716){var log=GLctx.getProgramInfoLog(program);if(log===null)log="(unknown error)";HEAP32[p>>2]=log.length+1}else if(pname==35719){if(!program.maxUniformLength){for(var i=0;i>2]=program.maxUniformLength}else if(pname==35722){if(!program.maxAttributeLength){for(var i=0;i>2]=program.maxAttributeLength}else if(pname==35381){if(!program.maxUniformBlockNameLength){for(var i=0;i>2]=program.maxUniformBlockNameLength}else{HEAP32[p>>2]=GLctx.getProgramParameter(program,pname)}}function _glGetQueryObjectuiv(id,pname,params){if(!params){GL.recordError(1281);return}var query=GL.queries[id];var param=GLctx["getQueryParameter"](query,pname);var ret;if(typeof param=="boolean"){ret=param?1:0}else{ret=param}HEAP32[params>>2]=ret}function _glGetQueryiv(target,pname,params){if(!params){GL.recordError(1281);return}HEAP32[params>>2]=GLctx["getQuery"](target,pname)}function _glGetRenderbufferParameteriv(target,pname,params){if(!params){GL.recordError(1281);return}HEAP32[params>>2]=GLctx.getRenderbufferParameter(target,pname)}function _glGetShaderInfoLog(shader,maxLength,length,infoLog){var log=GLctx.getShaderInfoLog(GL.shaders[shader]);if(log===null)log="(unknown error)";var numBytesWrittenExclNull=maxLength>0&&infoLog?stringToUTF8(log,infoLog,maxLength):0;if(length)HEAP32[length>>2]=numBytesWrittenExclNull}function _glGetShaderPrecisionFormat(shaderType,precisionType,range,precision){var result=GLctx.getShaderPrecisionFormat(shaderType,precisionType);HEAP32[range>>2]=result.rangeMin;HEAP32[range+4>>2]=result.rangeMax;HEAP32[precision>>2]=result.precision}function _glGetShaderSource(shader,bufSize,length,source){var result=GLctx.getShaderSource(GL.shaders[shader]);if(!result)return;var numBytesWrittenExclNull=bufSize>0&&source?stringToUTF8(result,source,bufSize):0;if(length)HEAP32[length>>2]=numBytesWrittenExclNull}function _glGetShaderiv(shader,pname,p){if(!p){GL.recordError(1281);return}if(pname==35716){var log=GLctx.getShaderInfoLog(GL.shaders[shader]);if(log===null)log="(unknown error)";var logLength=log?log.length+1:0;HEAP32[p>>2]=logLength}else if(pname==35720){var source=GLctx.getShaderSource(GL.shaders[shader]);var sourceLength=source?source.length+1:0;HEAP32[p>>2]=sourceLength}else{HEAP32[p>>2]=GLctx.getShaderParameter(GL.shaders[shader],pname)}}function _glGetString(name_){var 
ret=GL.stringCache[name_];if(!ret){switch(name_){case 7939:var exts=GLctx.getSupportedExtensions()||[];exts=exts.concat(exts.map(function(e){return"GL_"+e}));ret=stringToNewUTF8(exts.join(" "));break;case 7936:case 7937:case 37445:case 37446:var s=GLctx.getParameter(name_);if(!s){GL.recordError(1280)}ret=s&&stringToNewUTF8(s);break;case 7938:var glVersion=GLctx.getParameter(7938);if(GL.currentContext.version>=2)glVersion="OpenGL ES 3.0 ("+glVersion+")";else{glVersion="OpenGL ES 2.0 ("+glVersion+")"}ret=stringToNewUTF8(glVersion);break;case 35724:var glslVersion=GLctx.getParameter(35724);var ver_re=/^WebGL GLSL ES ([0-9]\.[0-9][0-9]?)(?:$| .*)/;var ver_num=glslVersion.match(ver_re);if(ver_num!==null){if(ver_num[1].length==3)ver_num[1]=ver_num[1]+"0";glslVersion="OpenGL ES GLSL ES "+ver_num[1]+" ("+glslVersion+")"}ret=stringToNewUTF8(glslVersion);break;default:GL.recordError(1280)}GL.stringCache[name_]=ret}return ret}function _glGetStringi(name,index){if(GL.currentContext.version<2){GL.recordError(1282);return 0}var stringiCache=GL.stringiCache[name];if(stringiCache){if(index<0||index>=stringiCache.length){GL.recordError(1281);return 0}return stringiCache[index]}switch(name){case 7939:var exts=GLctx.getSupportedExtensions()||[];exts=exts.concat(exts.map(function(e){return"GL_"+e}));exts=exts.map(function(e){return stringToNewUTF8(e)});stringiCache=GL.stringiCache[name]=exts;if(index<0||index>=stringiCache.length){GL.recordError(1281);return 0}return stringiCache[index];default:GL.recordError(1280);return 0}}function _glGetTexParameteriv(target,pname,params){if(!params){GL.recordError(1281);return}HEAP32[params>>2]=GLctx.getTexParameter(target,pname)}function _glGetUniformBlockIndex(program,uniformBlockName){return GLctx["getUniformBlockIndex"](GL.programs[program],UTF8ToString(uniformBlockName))}function _glGetUniformIndices(program,uniformCount,uniformNames,uniformIndices){if(!uniformIndices){GL.recordError(1281);return}if(uniformCount>0&&(uniformNames==0||uniformIndices==0)){GL.recordError(1281);return}program=GL.programs[program];var names=[];for(var i=0;i>2]));var result=GLctx["getUniformIndices"](program,names);if(!result)return;var len=result.length;for(var i=0;i>2]=result[i]}}function webglGetLeftBracePos(name){return name.slice(-1)=="]"&&name.lastIndexOf("[")}function webglPrepareUniformLocationsBeforeFirstUse(program){var uniformLocsById=program.uniformLocsById,uniformSizeAndIdsByName=program.uniformSizeAndIdsByName,i,j;if(!uniformLocsById){program.uniformLocsById=uniformLocsById={};program.uniformArrayNamesById={};for(i=0;i0?nm.slice(0,lb):nm;var id=uniformSizeAndIdsByName[arrayName]?uniformSizeAndIdsByName[arrayName][1]:program.uniformIdCounter;program.uniformIdCounter=Math.max(id+sz,program.uniformIdCounter);uniformSizeAndIdsByName[arrayName]=[sz,id];for(j=0;j0){arrayIndex=jstoi_q(name.slice(leftBrace+1))>>>0;uniformBaseName=name.slice(0,leftBrace)}var sizeAndId=program.uniformSizeAndIdsByName[uniformBaseName];if(sizeAndId&&arrayIndex0?"["+webglLoc+"]":""))}return webglLoc}else{GL.recordError(1282)}}function emscriptenWebGLGetUniform(program,location,params,type){if(!params){GL.recordError(1281);return}program=GL.programs[program];webglPrepareUniformLocationsBeforeFirstUse(program);var data=GLctx.getUniform(program,webglGetUniformLocation(location));if(typeof data=="number"||typeof data=="boolean"){switch(type){case 0:HEAP32[params>>2]=data;break;case 2:HEAPF32[params>>2]=data;break}}else{for(var i=0;i>2]=data[i];break;case 2:HEAPF32[params+i*4>>2]=data[i];break}}}}function 
_glGetUniformiv(program,location,params){emscriptenWebGLGetUniform(program,location,params,0)}function emscriptenWebGLGetVertexAttrib(index,pname,params,type){if(!params){GL.recordError(1281);return}if(GL.currentContext.clientBuffers[index].enabled){err("glGetVertexAttrib*v on client-side array: not supported, bad data returned")}var data=GLctx.getVertexAttrib(index,pname);if(pname==34975){HEAP32[params>>2]=data&&data["name"]}else if(typeof data=="number"||typeof data=="boolean"){switch(type){case 0:HEAP32[params>>2]=data;break;case 2:HEAPF32[params>>2]=data;break;case 5:HEAP32[params>>2]=Math.fround(data);break}}else{for(var i=0;i>2]=data[i];break;case 2:HEAPF32[params+i*4>>2]=data[i];break;case 5:HEAP32[params+i*4>>2]=Math.fround(data[i]);break}}}}function _glGetVertexAttribiv(index,pname,params){emscriptenWebGLGetVertexAttrib(index,pname,params,5)}function _glInvalidateFramebuffer(target,numAttachments,attachments){var list=tempFixedLengthArray[numAttachments];for(var i=0;i>2]}GLctx["invalidateFramebuffer"](target,list)}function _glIsEnabled(x0){return GLctx["isEnabled"](x0)}function _glIsVertexArray(array){var vao=GL.vaos[array];if(!vao)return 0;return GLctx["isVertexArray"](vao)}function _glLinkProgram(program){program=GL.programs[program];GLctx.linkProgram(program);program.uniformLocsById=0;program.uniformSizeAndIdsByName={};[program["vs"],program["fs"]].forEach(function(s){Object.keys(s.explicitUniformLocations).forEach(function(shaderLocation){var loc=s.explicitUniformLocations[shaderLocation];program.uniformSizeAndIdsByName[shaderLocation]=[1,loc];program.uniformIdCounter=Math.max(program.uniformIdCounter,loc+1)})});function copyKeys(dst,src){Object.keys(src).forEach(function(key){dst[key]=src[key]})}program.explicitUniformBindings={};program.explicitSamplerBindings={};[program["vs"],program["fs"]].forEach(function(s){copyKeys(program.explicitUniformBindings,s.explicitUniformBindings);copyKeys(program.explicitSamplerBindings,s.explicitSamplerBindings)});program.explicitProgramBindingsApplied=0}function _glMapBufferRange(target,offset,length,access){if(access!=26&&access!=10){err("glMapBufferRange is only supported when access is MAP_WRITE|INVALIDATE_BUFFER");return 0}if(!emscriptenWebGLValidateMapBufferTarget(target)){GL.recordError(1280);err("GL_INVALID_ENUM in glMapBufferRange");return 0}var mem=_malloc(length);if(!mem)return 0;GL.mappedBuffers[emscriptenWebGLGetBufferBinding(target)]={offset:offset,length:length,mem:mem,access:access};return mem}function _glPixelStorei(pname,param){if(pname==3317){GL.unpackAlignment=param}GLctx.pixelStorei(pname,param)}function _glPolygonOffset(x0,x1){GLctx["polygonOffset"](x0,x1)}function _glProgramBinary(program,binaryFormat,binary,length){GL.recordError(1280)}function _glProgramParameteri(program,pname,value){GL.recordError(1280)}function _glReadBuffer(x0){GLctx["readBuffer"](x0)}function computeUnpackAlignedImageSize(width,height,sizePerPixel,alignment){function roundedToNextMultipleOf(x,y){return x+y-1&-y}var plainRowSize=width*sizePerPixel;var alignedRowSize=roundedToNextMultipleOf(plainRowSize,alignment);return height*alignedRowSize}function __colorChannelsInGlTextureFormat(format){var colorChannels={5:3,6:4,8:2,29502:3,29504:4,26917:2,26918:2,29846:3,29847:4};return colorChannels[format-6402]||1}function heapObjectForWebGLType(type){type-=5120;if(type==0)return HEAP8;if(type==1)return HEAPU8;if(type==2)return HEAP16;if(type==4)return HEAP32;if(type==6)return HEAPF32;if(type==5||type==28922||type==28520||type==30779||type==30782)return 
HEAPU32;return HEAPU16}function heapAccessShiftForWebGLHeap(heap){return 31-Math.clz32(heap.BYTES_PER_ELEMENT)}function emscriptenWebGLGetTexPixelData(type,format,width,height,pixels,internalFormat){var heap=heapObjectForWebGLType(type);var shift=heapAccessShiftForWebGLHeap(heap);var byteSize=1<>shift,pixels+bytes>>shift)}function _glReadPixels(x,y,width,height,format,type,pixels){if(GL.currentContext.version>=2){if(GLctx.currentPixelPackBufferBinding){GLctx.readPixels(x,y,width,height,format,type,pixels)}else{var heap=heapObjectForWebGLType(type);GLctx.readPixels(x,y,width,height,format,type,heap,pixels>>heapAccessShiftForWebGLHeap(heap))}return}var pixelData=emscriptenWebGLGetTexPixelData(type,format,width,height,pixels,format);if(!pixelData){GL.recordError(1280);return}GLctx.readPixels(x,y,width,height,format,type,pixelData)}function _glRenderbufferStorage(x0,x1,x2,x3){GLctx["renderbufferStorage"](x0,x1,x2,x3)}function _glRenderbufferStorageMultisample(x0,x1,x2,x3,x4){GLctx["renderbufferStorageMultisample"](x0,x1,x2,x3,x4)}function _glSamplerParameteri(sampler,pname,param){GLctx["samplerParameteri"](GL.samplers[sampler],pname,param)}function _glScissor(x0,x1,x2,x3){GLctx["scissor"](x0,x1,x2,x3)}function find_closing_parens_index(arr,i,opening="(",closing=")"){for(var nesting=0;i{return defs[args[0]]?1:0});function isWhitespace(str,i){return!(str.charCodeAt(i)>32)}function nextWhitespace(str,i){while(!isWhitespace(str,i))++i;return i}function classifyChar(str,idx){var cc=str.charCodeAt(idx);if(cc>32){if(cc<48)return 1;if(cc<58)return 2;if(cc<65)return 1;if(cc<91||cc==95)return 3;if(cc<97)return 1;if(cc<123)return 3;return 1}return cc<33?0:4}function tokenize(exprString,keepWhitespace){var out=[],len=exprString.length;for(var i=0;i<=len;++i){var kind=classifyChar(exprString,i);if(kind==2||kind==3){for(var j=i+1;j<=len;++j){var kind2=classifyChar(exprString,j);if(kind2!=kind&&(kind2!=2||kind!=3)){out.push(exprString.substring(i,j));i=j-1;break}}}else if(kind==1){var op2=exprString.substr(i,2);if(["<=",">=","==","!=","&&","||"].includes(op2)){out.push(op2);++i}else{out.push(exprString[i])}}}return out}function expandMacros(str,lineStart,lineEnd){if(lineEnd===undefined)lineEnd=str.length;var len=str.length;var out="";for(var i=lineStart;i1||typeof tokens[0]!="function"){tokens=function(tokens){var i,j,p,operatorAndPriority=-2;for(j=0;j",">=","==","!=","&&","||","("].indexOf(tokens[j]))>operatorAndPriority){i=j;operatorAndPriority=p}}if(operatorAndPriority==13){var j=find_closing_parens_index(tokens,i);if(j){tokens.splice(i,j+1-i,buildExprTree(tokens.slice(i+1,j)));return tokens}}if(operatorAndPriority==4){i=tokens.lastIndexOf("!");var innerExpr=buildExprTree(tokens.slice(i+1,i+2));tokens.splice(i,2,function(){return!innerExpr()});return tokens}if(operatorAndPriority>=0){var left=buildExprTree(tokens.slice(0,i));var right=buildExprTree(tokens.slice(i+1));switch(tokens[i]){case"&&":return[function(){return left()&&right()}];case"||":return[function(){return left()||right()}];case"==":return[function(){return left()==right()}];case"!=":return[function(){return left()!=right()}];case"<":return[function(){return left()":return[function(){return left()>right()}];case">=":return[function(){return left()>=right()}];case"+":return[function(){return left()+right()}];case"-":return[function(){return left()-right()}];case"*":return[function(){return left()*right()}];case"/":return[function(){return Math.floor(left()/right())}]}}var num=jstoi_q(tokens[i]);return[function(){return num}]}(tokens)}return 
tokens[0]}for(;i0){var macroEnd=expression.indexOf(")",macroStart);let params=expression.substring(macroStart+1,macroEnd).split(",").map(x=>x.trim());let value=tokenize(expression.substring(macroEnd+1).trim());defs[expression.substring(0,macroStart)]=(args=>{var ret="";value.forEach(x=>{var argIndex=params.indexOf(x);ret+=argIndex>=0?args[argIndex]:x});return ret})}else{let value=expandMacros(expression.substring(firstWs+1).trim(),0);defs[expression.substring(0,firstWs)]=(()=>value)}}break;case"undef":if(thisLineIsInActivePreprocessingBlock)delete defs[expression];break;default:if(directive!="version"&&directive!="pragma"&&directive!="extension"){}out+=expandMacros(code,lineStart,i)+"\n"}}return out}function remove_cpp_comments_in_shaders(code){var i=0,out="",ch,next,len=code.length;for(;i1,"GL_ES":()=>1,"__VERSION__":()=>source.includes("#version 300")?300:100});var regex=/layout\s*\(\s*location\s*=\s*(-?\d+)\s*\)\s*(uniform\s+((lowp|mediump|highp)\s+)?\w+\s+(\w+))/g,explicitUniformLocations={},match;while(match=regex.exec(source)){explicitUniformLocations[match[5]]=jstoi_q(match[1]);if(!(explicitUniformLocations[match[5]]>=0&&explicitUniformLocations[match[5]]<1048576)){err('Specified an out of range layout(location=x) directive "'+explicitUniformLocations[match[5]]+'"! ('+match[0]+")");GL.recordError(1281);return}}source=source.replace(regex,"$2");GL.shaders[shader].explicitUniformLocations=explicitUniformLocations;var bindingRegex=/layout\s*\(.*?binding\s*=\s*(-?\d+).*?\)\s*uniform\s+(\w+)\s+(\w+)?/g,samplerBindings={},uniformBindings={},bindingMatch;while(bindingMatch=bindingRegex.exec(source)){var arrayLength=1;for(var i=bindingMatch.index;i=0&&binding+arrayLength<=numBindingPoints)){err('Specified an out of range layout(binding=x) directive "'+binding+'"! ('+bindingMatch[0]+"). 
Valid range is [0, "+numBindingPoints+"-1]");GL.recordError(1281);return}}source=source.replace(/layout\s*\(.*?binding\s*=\s*([-\d]+).*?\)/g,"");source=source.replace(/(layout\s*\((.*?)),\s*binding\s*=\s*([-\d]+)\)/g,"$1)");source=source.replace(/layout\s*\(\s*binding\s*=\s*([-\d]+)\s*,(.*?)\)/g,"layout($2)");GL.shaders[shader].explicitSamplerBindings=samplerBindings;GL.shaders[shader].explicitUniformBindings=uniformBindings;GLctx.shaderSource(GL.shaders[shader],source)}function _glStencilFuncSeparate(x0,x1,x2,x3){GLctx["stencilFuncSeparate"](x0,x1,x2,x3)}function _glStencilMask(x0){GLctx["stencilMask"](x0)}function _glStencilOpSeparate(x0,x1,x2,x3){GLctx["stencilOpSeparate"](x0,x1,x2,x3)}function _glTexImage2D(target,level,internalFormat,width,height,border,format,type,pixels){if(GL.currentContext.version>=2){if(GLctx.currentPixelUnpackBufferBinding){GLctx.texImage2D(target,level,internalFormat,width,height,border,format,type,pixels)}else if(pixels){var heap=heapObjectForWebGLType(type);GLctx.texImage2D(target,level,internalFormat,width,height,border,format,type,heap,pixels>>heapAccessShiftForWebGLHeap(heap))}else{GLctx.texImage2D(target,level,internalFormat,width,height,border,format,type,null)}return}GLctx.texImage2D(target,level,internalFormat,width,height,border,format,type,pixels?emscriptenWebGLGetTexPixelData(type,format,width,height,pixels,internalFormat):null)}function _glTexImage3D(target,level,internalFormat,width,height,depth,border,format,type,pixels){if(GLctx.currentPixelUnpackBufferBinding){GLctx["texImage3D"](target,level,internalFormat,width,height,depth,border,format,type,pixels)}else if(pixels){var heap=heapObjectForWebGLType(type);GLctx["texImage3D"](target,level,internalFormat,width,height,depth,border,format,type,heap,pixels>>heapAccessShiftForWebGLHeap(heap))}else{GLctx["texImage3D"](target,level,internalFormat,width,height,depth,border,format,type,null)}}function _glTexParameterf(x0,x1,x2){GLctx["texParameterf"](x0,x1,x2)}function _glTexParameteri(x0,x1,x2){GLctx["texParameteri"](x0,x1,x2)}function _glTexParameteriv(target,pname,params){var param=HEAP32[params>>2];GLctx.texParameteri(target,pname,param)}function _glTexStorage2D(x0,x1,x2,x3,x4){GLctx["texStorage2D"](x0,x1,x2,x3,x4)}function _glTexStorage3D(x0,x1,x2,x3,x4,x5){GLctx["texStorage3D"](x0,x1,x2,x3,x4,x5)}function _glTexSubImage2D(target,level,xoffset,yoffset,width,height,format,type,pixels){if(GL.currentContext.version>=2){if(GLctx.currentPixelUnpackBufferBinding){GLctx.texSubImage2D(target,level,xoffset,yoffset,width,height,format,type,pixels)}else if(pixels){var heap=heapObjectForWebGLType(type);GLctx.texSubImage2D(target,level,xoffset,yoffset,width,height,format,type,heap,pixels>>heapAccessShiftForWebGLHeap(heap))}else{GLctx.texSubImage2D(target,level,xoffset,yoffset,width,height,format,type,null)}return}var pixelData=null;if(pixels)pixelData=emscriptenWebGLGetTexPixelData(type,format,width,height,pixels,0);GLctx.texSubImage2D(target,level,xoffset,yoffset,width,height,format,type,pixelData)}function _glTexSubImage3D(target,level,xoffset,yoffset,zoffset,width,height,depth,format,type,pixels){if(GLctx.currentPixelUnpackBufferBinding){GLctx["texSubImage3D"](target,level,xoffset,yoffset,zoffset,width,height,depth,format,type,pixels)}else if(pixels){var 
heap=heapObjectForWebGLType(type);GLctx["texSubImage3D"](target,level,xoffset,yoffset,zoffset,width,height,depth,format,type,heap,pixels>>heapAccessShiftForWebGLHeap(heap))}else{GLctx["texSubImage3D"](target,level,xoffset,yoffset,zoffset,width,height,depth,format,type,null)}}var miniTempWebGLFloatBuffers=[];function _glUniform1fv(location,count,value){if(GL.currentContext.version>=2){GLctx.uniform1fv(webglGetUniformLocation(location),HEAPF32,value>>2,count);return}if(count<=288){var view=miniTempWebGLFloatBuffers[count-1];for(var i=0;i>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*4>>2)}GLctx.uniform1fv(webglGetUniformLocation(location),view)}function _glUniform1i(location,v0){GLctx.uniform1i(webglGetUniformLocation(location),v0)}var __miniTempWebGLIntBuffers=[];function _glUniform1iv(location,count,value){if(GL.currentContext.version>=2){GLctx.uniform1iv(webglGetUniformLocation(location),HEAP32,value>>2,count);return}if(count<=288){var view=__miniTempWebGLIntBuffers[count-1];for(var i=0;i>2]}}else{var view=HEAP32.subarray(value>>2,value+count*4>>2)}GLctx.uniform1iv(webglGetUniformLocation(location),view)}function _glUniform1uiv(location,count,value){GLctx.uniform1uiv(webglGetUniformLocation(location),HEAPU32,value>>2,count)}function _glUniform2fv(location,count,value){if(GL.currentContext.version>=2){GLctx.uniform2fv(webglGetUniformLocation(location),HEAPF32,value>>2,count*2);return}if(count<=144){var view=miniTempWebGLFloatBuffers[2*count-1];for(var i=0;i<2*count;i+=2){view[i]=HEAPF32[value+4*i>>2];view[i+1]=HEAPF32[value+(4*i+4)>>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*8>>2)}GLctx.uniform2fv(webglGetUniformLocation(location),view)}function _glUniform2iv(location,count,value){if(GL.currentContext.version>=2){GLctx.uniform2iv(webglGetUniformLocation(location),HEAP32,value>>2,count*2);return}if(count<=144){var view=__miniTempWebGLIntBuffers[2*count-1];for(var i=0;i<2*count;i+=2){view[i]=HEAP32[value+4*i>>2];view[i+1]=HEAP32[value+(4*i+4)>>2]}}else{var view=HEAP32.subarray(value>>2,value+count*8>>2)}GLctx.uniform2iv(webglGetUniformLocation(location),view)}function _glUniform2uiv(location,count,value){GLctx.uniform2uiv(webglGetUniformLocation(location),HEAPU32,value>>2,count*2)}function _glUniform3fv(location,count,value){if(GL.currentContext.version>=2){GLctx.uniform3fv(webglGetUniformLocation(location),HEAPF32,value>>2,count*3);return}if(count<=96){var view=miniTempWebGLFloatBuffers[3*count-1];for(var i=0;i<3*count;i+=3){view[i]=HEAPF32[value+4*i>>2];view[i+1]=HEAPF32[value+(4*i+4)>>2];view[i+2]=HEAPF32[value+(4*i+8)>>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*12>>2)}GLctx.uniform3fv(webglGetUniformLocation(location),view)}function _glUniform3iv(location,count,value){if(GL.currentContext.version>=2){GLctx.uniform3iv(webglGetUniformLocation(location),HEAP32,value>>2,count*3);return}if(count<=96){var view=__miniTempWebGLIntBuffers[3*count-1];for(var i=0;i<3*count;i+=3){view[i]=HEAP32[value+4*i>>2];view[i+1]=HEAP32[value+(4*i+4)>>2];view[i+2]=HEAP32[value+(4*i+8)>>2]}}else{var view=HEAP32.subarray(value>>2,value+count*12>>2)}GLctx.uniform3iv(webglGetUniformLocation(location),view)}function _glUniform3uiv(location,count,value){GLctx.uniform3uiv(webglGetUniformLocation(location),HEAPU32,value>>2,count*3)}function _glUniform4fv(location,count,value){if(GL.currentContext.version>=2){GLctx.uniform4fv(webglGetUniformLocation(location),HEAPF32,value>>2,count*4);return}if(count<=72){var view=miniTempWebGLFloatBuffers[4*count-1];var heap=HEAPF32;value>>=2;for(var 
i=0;i<4*count;i+=4){var dst=value+i;view[i]=heap[dst];view[i+1]=heap[dst+1];view[i+2]=heap[dst+2];view[i+3]=heap[dst+3]}}else{var view=HEAPF32.subarray(value>>2,value+count*16>>2)}GLctx.uniform4fv(webglGetUniformLocation(location),view)}function _glUniform4iv(location,count,value){if(GL.currentContext.version>=2){GLctx.uniform4iv(webglGetUniformLocation(location),HEAP32,value>>2,count*4);return}if(count<=72){var view=__miniTempWebGLIntBuffers[4*count-1];for(var i=0;i<4*count;i+=4){view[i]=HEAP32[value+4*i>>2];view[i+1]=HEAP32[value+(4*i+4)>>2];view[i+2]=HEAP32[value+(4*i+8)>>2];view[i+3]=HEAP32[value+(4*i+12)>>2]}}else{var view=HEAP32.subarray(value>>2,value+count*16>>2)}GLctx.uniform4iv(webglGetUniformLocation(location),view)}function _glUniform4uiv(location,count,value){GLctx.uniform4uiv(webglGetUniformLocation(location),HEAPU32,value>>2,count*4)}function _glUniformBlockBinding(program,uniformBlockIndex,uniformBlockBinding){program=GL.programs[program];GLctx["uniformBlockBinding"](program,uniformBlockIndex,uniformBlockBinding)}function _glUniformMatrix3fv(location,count,transpose,value){if(GL.currentContext.version>=2){GLctx.uniformMatrix3fv(webglGetUniformLocation(location),!!transpose,HEAPF32,value>>2,count*9);return}if(count<=32){var view=miniTempWebGLFloatBuffers[9*count-1];for(var i=0;i<9*count;i+=9){view[i]=HEAPF32[value+4*i>>2];view[i+1]=HEAPF32[value+(4*i+4)>>2];view[i+2]=HEAPF32[value+(4*i+8)>>2];view[i+3]=HEAPF32[value+(4*i+12)>>2];view[i+4]=HEAPF32[value+(4*i+16)>>2];view[i+5]=HEAPF32[value+(4*i+20)>>2];view[i+6]=HEAPF32[value+(4*i+24)>>2];view[i+7]=HEAPF32[value+(4*i+28)>>2];view[i+8]=HEAPF32[value+(4*i+32)>>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*36>>2)}GLctx.uniformMatrix3fv(webglGetUniformLocation(location),!!transpose,view)}function _glUniformMatrix4fv(location,count,transpose,value){if(GL.currentContext.version>=2){GLctx.uniformMatrix4fv(webglGetUniformLocation(location),!!transpose,HEAPF32,value>>2,count*16);return}if(count<=18){var view=miniTempWebGLFloatBuffers[16*count-1];var heap=HEAPF32;value>>=2;for(var i=0;i<16*count;i+=16){var dst=value+i;view[i]=heap[dst];view[i+1]=heap[dst+1];view[i+2]=heap[dst+2];view[i+3]=heap[dst+3];view[i+4]=heap[dst+4];view[i+5]=heap[dst+5];view[i+6]=heap[dst+6];view[i+7]=heap[dst+7];view[i+8]=heap[dst+8];view[i+9]=heap[dst+9];view[i+10]=heap[dst+10];view[i+11]=heap[dst+11];view[i+12]=heap[dst+12];view[i+13]=heap[dst+13];view[i+14]=heap[dst+14];view[i+15]=heap[dst+15]}}else{var view=HEAPF32.subarray(value>>2,value+count*64>>2)}GLctx.uniformMatrix4fv(webglGetUniformLocation(location),!!transpose,view)}function _glUnmapBuffer(target){if(!emscriptenWebGLValidateMapBufferTarget(target)){GL.recordError(1280);err("GL_INVALID_ENUM in glUnmapBuffer");return 0}var buffer=emscriptenWebGLGetBufferBinding(target);var mapping=GL.mappedBuffers[buffer];if(!mapping){GL.recordError(1282);err("buffer was never mapped in glUnmapBuffer");return 0}GL.mappedBuffers[buffer]=null;if(!(mapping.access&16))if(GL.currentContext.version>=2){GLctx.bufferSubData(target,mapping.offset,HEAPU8,mapping.mem,mapping.length)}else{GLctx.bufferSubData(target,mapping.offset,HEAPU8.subarray(mapping.mem,mapping.mem+mapping.length))}_free(mapping.mem);return 1}function webglApplyExplicitProgramBindings(){var p=GLctx.currentProgram;if(!p.explicitProgramBindingsApplied){if(GL.currentContext.version>=2){Object.keys(p.explicitUniformBindings).forEach(function(ubo){var bindings=p.explicitUniformBindings[ubo];for(var 
i=0;i1?"["+i+"]":""));GLctx.uniformBlockBinding(p,blockIndex,bindings[0]+i)}})}Object.keys(p.explicitSamplerBindings).forEach(function(sampler){var bindings=p.explicitSamplerBindings[sampler];for(var i=0;i>2],HEAPF32[v+4>>2],HEAPF32[v+8>>2],HEAPF32[v+12>>2])}function _glVertexAttribIPointer(index,size,type,stride,ptr){var cb=GL.currentContext.clientBuffers[index];if(!GLctx.currentArrayBufferBinding){cb.size=size;cb.type=type;cb.normalized=false;cb.stride=stride;cb.ptr=ptr;cb.clientside=true;cb.vertexAttribPointerAdaptor=function(index,size,type,normalized,stride,ptr){this.vertexAttribIPointer(index,size,type,stride,ptr)};return}cb.clientside=false;GLctx["vertexAttribIPointer"](index,size,type,stride,ptr)}function _glVertexAttribPointer(index,size,type,normalized,stride,ptr){var cb=GL.currentContext.clientBuffers[index];if(!GLctx.currentArrayBufferBinding){cb.size=size;cb.type=type;cb.normalized=normalized;cb.stride=stride;cb.ptr=ptr;cb.clientside=true;cb.vertexAttribPointerAdaptor=function(index,size,type,normalized,stride,ptr){this.vertexAttribPointer(index,size,type,normalized,stride,ptr)};return}cb.clientside=false;GLctx.vertexAttribPointer(index,size,type,!!normalized,stride,ptr)}function _glViewport(x0,x1,x2,x3){GLctx["viewport"](x0,x1,x2,x3)}function _llvm_eh_typeid_for(type){return type}function _setTempRet0(val){setTempRet0(val)}function __isLeapYear(year){return year%4===0&&(year%100!==0||year%400===0)}function __arraySum(array,index){var sum=0;for(var i=0;i<=index;sum+=array[i++]){}return sum}var __MONTH_DAYS_LEAP=[31,29,31,30,31,30,31,31,30,31,30,31];var __MONTH_DAYS_REGULAR=[31,28,31,30,31,30,31,31,30,31,30,31];function __addDays(date,days){var newDate=new Date(date.getTime());while(days>0){var leap=__isLeapYear(newDate.getFullYear());var currentMonth=newDate.getMonth();var daysInCurrentMonth=(leap?__MONTH_DAYS_LEAP:__MONTH_DAYS_REGULAR)[currentMonth];if(days>daysInCurrentMonth-newDate.getDate()){days-=daysInCurrentMonth-newDate.getDate()+1;newDate.setDate(1);if(currentMonth<11){newDate.setMonth(currentMonth+1)}else{newDate.setMonth(0);newDate.setFullYear(newDate.getFullYear()+1)}}else{newDate.setDate(newDate.getDate()+days);return newDate}}return newDate}function _strftime(s,maxsize,format,tm){var tm_zone=HEAP32[tm+40>>2];var date={tm_sec:HEAP32[tm>>2],tm_min:HEAP32[tm+4>>2],tm_hour:HEAP32[tm+8>>2],tm_mday:HEAP32[tm+12>>2],tm_mon:HEAP32[tm+16>>2],tm_year:HEAP32[tm+20>>2],tm_wday:HEAP32[tm+24>>2],tm_yday:HEAP32[tm+28>>2],tm_isdst:HEAP32[tm+32>>2],tm_gmtoff:HEAP32[tm+36>>2],tm_zone:tm_zone?UTF8ToString(tm_zone):""};var pattern=UTF8ToString(format);var EXPANSION_RULES_1={"%c":"%a %b %d %H:%M:%S %Y","%D":"%m/%d/%y","%F":"%Y-%m-%d","%h":"%b","%r":"%I:%M:%S %p","%R":"%H:%M","%T":"%H:%M:%S","%x":"%m/%d/%y","%X":"%H:%M:%S","%Ec":"%c","%EC":"%C","%Ex":"%m/%d/%y","%EX":"%H:%M:%S","%Ey":"%y","%EY":"%Y","%Od":"%d","%Oe":"%e","%OH":"%H","%OI":"%I","%Om":"%m","%OM":"%M","%OS":"%S","%Ou":"%u","%OU":"%U","%OV":"%V","%Ow":"%w","%OW":"%W","%Oy":"%y"};for(var rule in EXPANSION_RULES_1){pattern=pattern.replace(new RegExp(rule,"g"),EXPANSION_RULES_1[rule])}var WEEKDAYS=["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"];var MONTHS=["January","February","March","April","May","June","July","August","September","October","November","December"];function leadingSomething(value,digits,character){var str=typeof value=="number"?value.toString():value||"";while(str.length0?1:0}var 
compare;if((compare=sgn(date1.getFullYear()-date2.getFullYear()))===0){if((compare=sgn(date1.getMonth()-date2.getMonth()))===0){compare=sgn(date1.getDate()-date2.getDate())}}return compare}function getFirstWeekStartDate(janFourth){switch(janFourth.getDay()){case 0:return new Date(janFourth.getFullYear()-1,11,29);case 1:return janFourth;case 2:return new Date(janFourth.getFullYear(),0,3);case 3:return new Date(janFourth.getFullYear(),0,2);case 4:return new Date(janFourth.getFullYear(),0,1);case 5:return new Date(janFourth.getFullYear()-1,11,31);case 6:return new Date(janFourth.getFullYear()-1,11,30)}}function getWeekBasedYear(date){var thisDate=__addDays(new Date(date.tm_year+1900,0,1),date.tm_yday);var janFourthThisYear=new Date(thisDate.getFullYear(),0,4);var janFourthNextYear=new Date(thisDate.getFullYear()+1,0,4);var firstWeekStartThisYear=getFirstWeekStartDate(janFourthThisYear);var firstWeekStartNextYear=getFirstWeekStartDate(janFourthNextYear);if(compareByDay(firstWeekStartThisYear,thisDate)<=0){if(compareByDay(firstWeekStartNextYear,thisDate)<=0){return thisDate.getFullYear()+1}else{return thisDate.getFullYear()}}else{return thisDate.getFullYear()-1}}var EXPANSION_RULES_2={"%a":function(date){return WEEKDAYS[date.tm_wday].substring(0,3)},"%A":function(date){return WEEKDAYS[date.tm_wday]},"%b":function(date){return MONTHS[date.tm_mon].substring(0,3)},"%B":function(date){return MONTHS[date.tm_mon]},"%C":function(date){var year=date.tm_year+1900;return leadingNulls(year/100|0,2)},"%d":function(date){return leadingNulls(date.tm_mday,2)},"%e":function(date){return leadingSomething(date.tm_mday,2," ")},"%g":function(date){return getWeekBasedYear(date).toString().substring(2)},"%G":function(date){return getWeekBasedYear(date)},"%H":function(date){return leadingNulls(date.tm_hour,2)},"%I":function(date){var twelveHour=date.tm_hour;if(twelveHour==0)twelveHour=12;else if(twelveHour>12)twelveHour-=12;return leadingNulls(twelveHour,2)},"%j":function(date){return leadingNulls(date.tm_mday+__arraySum(__isLeapYear(date.tm_year+1900)?__MONTH_DAYS_LEAP:__MONTH_DAYS_REGULAR,date.tm_mon-1),3)},"%m":function(date){return leadingNulls(date.tm_mon+1,2)},"%M":function(date){return leadingNulls(date.tm_min,2)},"%n":function(){return"\n"},"%p":function(date){if(date.tm_hour>=0&&date.tm_hour<12){return"AM"}else{return"PM"}},"%S":function(date){return leadingNulls(date.tm_sec,2)},"%t":function(){return"\t"},"%u":function(date){return date.tm_wday||7},"%U":function(date){var days=date.tm_yday+7-date.tm_wday;return leadingNulls(Math.floor(days/7),2)},"%V":function(date){var val=Math.floor((date.tm_yday+7-(date.tm_wday+6)%7)/7);if((date.tm_wday+371-date.tm_yday-2)%7<=2){val++}if(!val){val=52;var dec31=(date.tm_wday+7-date.tm_yday-1)%7;if(dec31==4||dec31==5&&__isLeapYear(date.tm_year%400-1)){val++}}else if(val==53){var jan1=(date.tm_wday+371-date.tm_yday)%7;if(jan1!=4&&(jan1!=3||!__isLeapYear(date.tm_year)))val=1}return leadingNulls(val,2)},"%w":function(date){return date.tm_wday},"%W":function(date){var days=date.tm_yday+7-(date.tm_wday+6)%7;return leadingNulls(Math.floor(days/7),2)},"%y":function(date){return(date.tm_year+1900).toString().substring(2)},"%Y":function(date){return date.tm_year+1900},"%z":function(date){var off=date.tm_gmtoff;var ahead=off>=0;off=Math.abs(off)/60;off=off/60*100+off%60;return(ahead?"+":"-")+String("0000"+off).slice(-4)},"%Z":function(date){return date.tm_zone},"%%":function(){return"%"}};pattern=pattern.replace(/%%/g,"\0\0");for(var rule in 
EXPANSION_RULES_2){if(pattern.includes(rule)){pattern=pattern.replace(new RegExp(rule,"g"),EXPANSION_RULES_2[rule](date))}}pattern=pattern.replace(/\0\0/g,"%");var bytes=intArrayFromString(pattern,false);if(bytes.length>maxsize){return 0}writeArrayToMemory(bytes,s);return bytes.length-1}var FSNode=function(parent,name,mode,rdev){if(!parent){parent=this}this.parent=parent;this.mount=parent.mount;this.mounted=null;this.id=FS.nextInode++;this.name=name;this.mode=mode;this.node_ops={};this.stream_ops={};this.rdev=rdev};var readMode=292|73;var writeMode=146;Object.defineProperties(FSNode.prototype,{read:{get:function(){return(this.mode&readMode)===readMode},set:function(val){val?this.mode|=readMode:this.mode&=~readMode}},write:{get:function(){return(this.mode&writeMode)===writeMode},set:function(val){val?this.mode|=writeMode:this.mode&=~writeMode}},isFolder:{get:function(){return FS.isDir(this.mode)}},isDevice:{get:function(){return FS.isChrdev(this.mode)}}});FS.FSNode=FSNode;FS.staticInit();Module["FS_createPath"]=FS.createPath;Module["FS_createDataFile"]=FS.createDataFile;Module["requestFullscreen"]=function Module_requestFullscreen(lockPointer,resizeCanvas){Browser.requestFullscreen(lockPointer,resizeCanvas)};Module["requestAnimationFrame"]=function Module_requestAnimationFrame(func){Browser.requestAnimationFrame(func)};Module["setCanvasSize"]=function Module_setCanvasSize(width,height,noUpdates){Browser.setCanvasSize(width,height,noUpdates)};Module["pauseMainLoop"]=function Module_pauseMainLoop(){Browser.mainLoop.pause()};Module["resumeMainLoop"]=function Module_resumeMainLoop(){Browser.mainLoop.resume()};Module["getUserMedia"]=function Module_getUserMedia(){Browser.getUserMedia()};Module["createContext"]=function Module_createContext(canvas,useWebGL,setInModule,webGLContextAttributes){return Browser.createContext(canvas,useWebGL,setInModule,webGLContextAttributes)};var GLctx;for(var i=0;i<32;++i)tempFixedLengthArray.push(new Array(i));var miniTempWebGLFloatBuffersStorage=new Float32Array(288);for(var i=0;i<288;++i){miniTempWebGLFloatBuffers[i]=miniTempWebGLFloatBuffersStorage.subarray(0,i+1)}var __miniTempWebGLIntBuffersStorage=new Int32Array(288);for(var i=0;i<288;++i){__miniTempWebGLIntBuffers[i]=__miniTempWebGLIntBuffersStorage.subarray(0,i+1)}var ASSERTIONS=false;function intArrayFromString(stringy,dontAddNull,length){var len=length>0?length:lengthBytesUTF8(stringy)+1;var u8array=new Array(len);var numBytesWritten=stringToUTF8Array(stringy,u8array,0,u8array.length);if(dontAddNull)u8array.length=numBytesWritten;return u8array}var 
asmLibraryArg={"GetJSMemoryInfo":_GetJSMemoryInfo,"JS_Accelerometer_IsRunning":_JS_Accelerometer_IsRunning,"JS_Accelerometer_Start":_JS_Accelerometer_Start,"JS_Accelerometer_Stop":_JS_Accelerometer_Stop,"JS_Cursor_SetImage":_JS_Cursor_SetImage,"JS_Cursor_SetShow":_JS_Cursor_SetShow,"JS_DOM_MapViewportCoordinateToElementLocalCoordinate":_JS_DOM_MapViewportCoordinateToElementLocalCoordinate,"JS_DOM_UnityCanvasSelector":_JS_DOM_UnityCanvasSelector,"JS_Eval_OpenURL":_JS_Eval_OpenURL,"JS_FileSystem_Initialize":_JS_FileSystem_Initialize,"JS_FileSystem_Sync":_JS_FileSystem_Sync,"JS_GravitySensor_IsRunning":_JS_GravitySensor_IsRunning,"JS_GravitySensor_Start":_JS_GravitySensor_Start,"JS_GravitySensor_Stop":_JS_GravitySensor_Stop,"JS_GuardAgainstJsExceptions":_JS_GuardAgainstJsExceptions,"JS_Gyroscope_IsRunning":_JS_Gyroscope_IsRunning,"JS_Gyroscope_Start":_JS_Gyroscope_Start,"JS_Gyroscope_Stop":_JS_Gyroscope_Stop,"JS_LinearAccelerationSensor_IsRunning":_JS_LinearAccelerationSensor_IsRunning,"JS_LinearAccelerationSensor_Start":_JS_LinearAccelerationSensor_Start,"JS_LinearAccelerationSensor_Stop":_JS_LinearAccelerationSensor_Stop,"JS_Log_Dump":_JS_Log_Dump,"JS_Log_StackTrace":_JS_Log_StackTrace,"JS_MobileKeybard_GetIgnoreBlurEvent":_JS_MobileKeybard_GetIgnoreBlurEvent,"JS_MobileKeyboard_GetKeyboardStatus":_JS_MobileKeyboard_GetKeyboardStatus,"JS_MobileKeyboard_GetText":_JS_MobileKeyboard_GetText,"JS_MobileKeyboard_GetTextSelection":_JS_MobileKeyboard_GetTextSelection,"JS_MobileKeyboard_Hide":_JS_MobileKeyboard_Hide,"JS_MobileKeyboard_SetCharacterLimit":_JS_MobileKeyboard_SetCharacterLimit,"JS_MobileKeyboard_SetText":_JS_MobileKeyboard_SetText,"JS_MobileKeyboard_SetTextSelection":_JS_MobileKeyboard_SetTextSelection,"JS_MobileKeyboard_Show":_JS_MobileKeyboard_Show,"JS_OrientationSensor_IsRunning":_JS_OrientationSensor_IsRunning,"JS_OrientationSensor_Start":_JS_OrientationSensor_Start,"JS_OrientationSensor_Stop":_JS_OrientationSensor_Stop,"JS_RequestDeviceSensorPermissionsOnTouch":_JS_RequestDeviceSensorPermissionsOnTouch,"JS_RunQuitCallbacks":_JS_RunQuitCallbacks,"JS_ScreenOrientation_DeInit":_JS_ScreenOrientation_DeInit,"JS_ScreenOrientation_Init":_JS_ScreenOrientation_Init,"JS_ScreenOrientation_Lock":_JS_ScreenOrientation_Lock,"JS_Sound_Create_Channel":_JS_Sound_Create_Channel,"JS_Sound_GetLength":_JS_Sound_GetLength,"JS_Sound_GetLoadState":_JS_Sound_GetLoadState,"JS_Sound_Init":_JS_Sound_Init,"JS_Sound_Load":_JS_Sound_Load,"JS_Sound_Load_PCM":_JS_Sound_Load_PCM,"JS_Sound_Play":_JS_Sound_Play,"JS_Sound_ReleaseInstance":_JS_Sound_ReleaseInstance,"JS_Sound_ResumeIfNeeded":_JS_Sound_ResumeIfNeeded,"JS_Sound_Set3D":_JS_Sound_Set3D,"JS_Sound_SetListenerOrientation":_JS_Sound_SetListenerOrientation,"JS_Sound_SetListenerPosition":_JS_Sound_SetListenerPosition,"JS_Sound_SetLoop":_JS_Sound_SetLoop,"JS_Sound_SetLoopPoints":_JS_Sound_SetLoopPoints,"JS_Sound_SetPaused":_JS_Sound_SetPaused,"JS_Sound_SetPitch":_JS_Sound_SetPitch,"JS_Sound_SetPosition":_JS_Sound_SetPosition,"JS_Sound_SetVolume":_JS_Sound_SetVolume,"JS_Sound_Stop":_JS_Sound_Stop,"JS_SystemInfo_GetBrowserName":_JS_SystemInfo_GetBrowserName,"JS_SystemInfo_GetBrowserVersionString":_JS_SystemInfo_GetBrowserVersionString,"JS_SystemInfo_GetCanvasClientSize":_JS_SystemInfo_GetCanvasClientSize,"JS_SystemInfo_GetDocumentURL":_JS_SystemInfo_GetDocumentURL,"JS_SystemInfo_GetGPUInfo":_JS_SystemInfo_GetGPUInfo,"JS_SystemInfo_GetLanguage":_JS_SystemInfo_GetLanguage,"JS_SystemInfo_GetMatchWebGLToCanvasSize":_JS_SystemInfo_GetMatchWebGLToCanvasSize,"JS_SystemInfo_G
etMemory":_JS_SystemInfo_GetMemory,"JS_SystemInfo_GetOS":_JS_SystemInfo_GetOS,"JS_SystemInfo_GetPreferredDevicePixelRatio":_JS_SystemInfo_GetPreferredDevicePixelRatio,"JS_SystemInfo_GetScreenSize":_JS_SystemInfo_GetScreenSize,"JS_SystemInfo_GetStreamingAssetsURL":_JS_SystemInfo_GetStreamingAssetsURL,"JS_SystemInfo_HasAstcHdr":_JS_SystemInfo_HasAstcHdr,"JS_SystemInfo_HasCursorLock":_JS_SystemInfo_HasCursorLock,"JS_SystemInfo_HasFullscreen":_JS_SystemInfo_HasFullscreen,"JS_SystemInfo_HasWebGL":_JS_SystemInfo_HasWebGL,"JS_UnityEngineShouldQuit":_JS_UnityEngineShouldQuit,"JS_WebRequest_Abort":_JS_WebRequest_Abort,"JS_WebRequest_Create":_JS_WebRequest_Create,"JS_WebRequest_GetResponseMetaData":_JS_WebRequest_GetResponseMetaData,"JS_WebRequest_GetResponseMetaDataLengths":_JS_WebRequest_GetResponseMetaDataLengths,"JS_WebRequest_Release":_JS_WebRequest_Release,"JS_WebRequest_Send":_JS_WebRequest_Send,"JS_WebRequest_SetRedirectLimit":_JS_WebRequest_SetRedirectLimit,"JS_WebRequest_SetRequestHeader":_JS_WebRequest_SetRequestHeader,"JS_WebRequest_SetTimeout":_JS_WebRequest_SetTimeout,"__cxa_allocate_exception":___cxa_allocate_exception,"__cxa_begin_catch":___cxa_begin_catch,"__cxa_end_catch":___cxa_end_catch,"__cxa_find_matching_catch_2":___cxa_find_matching_catch_2,"__cxa_find_matching_catch_3":___cxa_find_matching_catch_3,"__cxa_find_matching_catch_4":___cxa_find_matching_catch_4,"__cxa_free_exception":___cxa_free_exception,"__cxa_rethrow":___cxa_rethrow,"__cxa_throw":___cxa_throw,"__resumeException":___resumeException,"__syscall__newselect":___syscall__newselect,"__syscall_accept4":___syscall_accept4,"__syscall_bind":___syscall_bind,"__syscall_chmod":___syscall_chmod,"__syscall_connect":___syscall_connect,"__syscall_dup3":___syscall_dup3,"__syscall_faccessat":___syscall_faccessat,"__syscall_fcntl64":___syscall_fcntl64,"__syscall_fstat64":___syscall_fstat64,"__syscall_ftruncate64":___syscall_ftruncate64,"__syscall_getcwd":___syscall_getcwd,"__syscall_getdents64":___syscall_getdents64,"__syscall_getpeername":___syscall_getpeername,"__syscall_getsockname":___syscall_getsockname,"__syscall_getsockopt":___syscall_getsockopt,"__syscall_ioctl":___syscall_ioctl,"__syscall_listen":___syscall_listen,"__syscall_lstat64":___syscall_lstat64,"__syscall_mkdir":___syscall_mkdir,"__syscall_newfstatat":___syscall_newfstatat,"__syscall_openat":___syscall_openat,"__syscall_pipe":___syscall_pipe,"__syscall_poll":___syscall_poll,"__syscall_readlinkat":___syscall_readlinkat,"__syscall_recvfrom":___syscall_recvfrom,"__syscall_recvmsg":___syscall_recvmsg,"__syscall_renameat":___syscall_renameat,"__syscall_rmdir":___syscall_rmdir,"__syscall_sendmsg":___syscall_sendmsg,"__syscall_sendto":___syscall_sendto,"__syscall_socket":___syscall_socket,"__syscall_stat64":___syscall_stat64,"__syscall_statfs64":___syscall_statfs64,"__syscall_truncate64":___syscall_truncate64,"__syscall_unlinkat":___syscall_unlinkat,"__syscall_utimensat":___syscall_utimensat,"_dlopen_js":__dlopen_js,"_dlsym_js":__dlsym_js,"_emscripten_date_now":__emscripten_date_now,"_emscripten_get_now_is_monotonic":__emscripten_get_now_is_monotonic,"_emscripten_throw_longjmp":__emscripten_throw_longjmp,"_gmtime_js":__gmtime_js,"_localtime_js":__localtime_js,"_mktime_js":__mktime_js,"_mmap_js":__mmap_js,"_munmap_js":__munmap_js,"_tzset_js":__tzset_js,"abort":_abort,"emscripten_asm_const_int_sync_on_main_thread":_emscripten_asm_const_int_sync_on_main_thread,"emscripten_cancel_main_loop":_emscripten_cancel_main_loop,"emscripten_clear_interval":_emscripten_clear_interval,"em
scripten_exit_fullscreen":_emscripten_exit_fullscreen,"emscripten_exit_pointerlock":_emscripten_exit_pointerlock,"emscripten_get_canvas_element_size":_emscripten_get_canvas_element_size,"emscripten_get_fullscreen_status":_emscripten_get_fullscreen_status,"emscripten_get_gamepad_status":_emscripten_get_gamepad_status,"emscripten_get_heap_max":_emscripten_get_heap_max,"emscripten_get_now":_emscripten_get_now,"emscripten_get_now_res":_emscripten_get_now_res,"emscripten_get_num_gamepads":_emscripten_get_num_gamepads,"emscripten_html5_remove_all_event_listeners":_emscripten_html5_remove_all_event_listeners,"emscripten_is_webgl_context_lost":_emscripten_is_webgl_context_lost,"emscripten_log":_emscripten_log,"emscripten_memcpy_big":_emscripten_memcpy_big,"emscripten_request_fullscreen":_emscripten_request_fullscreen,"emscripten_request_pointerlock":_emscripten_request_pointerlock,"emscripten_resize_heap":_emscripten_resize_heap,"emscripten_sample_gamepad_data":_emscripten_sample_gamepad_data,"emscripten_set_blur_callback_on_thread":_emscripten_set_blur_callback_on_thread,"emscripten_set_canvas_element_size":_emscripten_set_canvas_element_size,"emscripten_set_focus_callback_on_thread":_emscripten_set_focus_callback_on_thread,"emscripten_set_fullscreenchange_callback_on_thread":_emscripten_set_fullscreenchange_callback_on_thread,"emscripten_set_gamepadconnected_callback_on_thread":_emscripten_set_gamepadconnected_callback_on_thread,"emscripten_set_gamepaddisconnected_callback_on_thread":_emscripten_set_gamepaddisconnected_callback_on_thread,"emscripten_set_interval":_emscripten_set_interval,"emscripten_set_keydown_callback_on_thread":_emscripten_set_keydown_callback_on_thread,"emscripten_set_keypress_callback_on_thread":_emscripten_set_keypress_callback_on_thread,"emscripten_set_keyup_callback_on_thread":_emscripten_set_keyup_callback_on_thread,"emscripten_set_main_loop":_emscripten_set_main_loop,"emscripten_set_main_loop_timing":_emscripten_set_main_loop_timing,"emscripten_set_mousedown_callback_on_thread":_emscripten_set_mousedown_callback_on_thread,"emscripten_set_mousemove_callback_on_thread":_emscripten_set_mousemove_callback_on_thread,"emscripten_set_mouseup_callback_on_thread":_emscripten_set_mouseup_callback_on_thread,"emscripten_set_touchcancel_callback_on_thread":_emscripten_set_touchcancel_callback_on_thread,"emscripten_set_touchend_callback_on_thread":_emscripten_set_touchend_callback_on_thread,"emscripten_set_touchmove_callback_on_thread":_emscripten_set_touchmove_callback_on_thread,"emscripten_set_touchstart_callback_on_thread":_emscripten_set_touchstart_callback_on_thread,"emscripten_set_wheel_callback_on_thread":_emscripten_set_wheel_callback_on_thread,"emscripten_webgl_create_context":_emscripten_webgl_create_context,"emscripten_webgl_destroy_context":_emscripten_webgl_destroy_context,"emscripten_webgl_enable_extension":_emscripten_webgl_enable_extension,"emscripten_webgl_get_current_context":_emscripten_webgl_get_current_context,"emscripten_webgl_init_context_attributes":_emscripten_webgl_init_context_attributes,"emscripten_webgl_make_context_current":_emscripten_webgl_make_context_current,"environ_get":_environ_get,"environ_sizes_get":_environ_sizes_get,"exit":_exit,"fd_close":_fd_close,"fd_fdstat_get":_fd_fdstat_get,"fd_read":_fd_read,"fd_seek":_fd_seek,"fd_write":_fd_write,"getTempRet0":_getTempRet0,"getaddrinfo":_getaddrinfo,"gethostbyaddr":_gethostbyaddr,"gethostbyname":_gethostbyname,"getnameinfo":_getnameinfo,"glActiveTexture":_glActiveTexture,"glAttachShader":_glAttachShade
r,"glBeginQuery":_glBeginQuery,"glBindAttribLocation":_glBindAttribLocation,"glBindBuffer":_glBindBuffer,"glBindBufferBase":_glBindBufferBase,"glBindBufferRange":_glBindBufferRange,"glBindFramebuffer":_glBindFramebuffer,"glBindRenderbuffer":_glBindRenderbuffer,"glBindSampler":_glBindSampler,"glBindTexture":_glBindTexture,"glBindVertexArray":_glBindVertexArray,"glBlendEquation":_glBlendEquation,"glBlendEquationSeparate":_glBlendEquationSeparate,"glBlendFuncSeparate":_glBlendFuncSeparate,"glBlitFramebuffer":_glBlitFramebuffer,"glBufferData":_glBufferData,"glBufferSubData":_glBufferSubData,"glCheckFramebufferStatus":_glCheckFramebufferStatus,"glClear":_glClear,"glClearBufferfi":_glClearBufferfi,"glClearBufferfv":_glClearBufferfv,"glClearBufferuiv":_glClearBufferuiv,"glClearColor":_glClearColor,"glClearDepthf":_glClearDepthf,"glClearStencil":_glClearStencil,"glClientWaitSync":_glClientWaitSync,"glColorMask":_glColorMask,"glCompileShader":_glCompileShader,"glCompressedTexImage2D":_glCompressedTexImage2D,"glCompressedTexImage3D":_glCompressedTexImage3D,"glCompressedTexSubImage2D":_glCompressedTexSubImage2D,"glCompressedTexSubImage3D":_glCompressedTexSubImage3D,"glCopyBufferSubData":_glCopyBufferSubData,"glCopyTexImage2D":_glCopyTexImage2D,"glCopyTexSubImage2D":_glCopyTexSubImage2D,"glCreateProgram":_glCreateProgram,"glCreateShader":_glCreateShader,"glCullFace":_glCullFace,"glDeleteBuffers":_glDeleteBuffers,"glDeleteFramebuffers":_glDeleteFramebuffers,"glDeleteProgram":_glDeleteProgram,"glDeleteQueries":_glDeleteQueries,"glDeleteRenderbuffers":_glDeleteRenderbuffers,"glDeleteSamplers":_glDeleteSamplers,"glDeleteShader":_glDeleteShader,"glDeleteSync":_glDeleteSync,"glDeleteTextures":_glDeleteTextures,"glDeleteVertexArrays":_glDeleteVertexArrays,"glDepthFunc":_glDepthFunc,"glDepthMask":_glDepthMask,"glDetachShader":_glDetachShader,"glDisable":_glDisable,"glDisableVertexAttribArray":_glDisableVertexAttribArray,"glDrawArrays":_glDrawArrays,"glDrawArraysInstanced":_glDrawArraysInstanced,"glDrawBuffers":_glDrawBuffers,"glDrawElements":_glDrawElements,"glDrawElementsInstanced":_glDrawElementsInstanced,"glEnable":_glEnable,"glEnableVertexAttribArray":_glEnableVertexAttribArray,"glEndQuery":_glEndQuery,"glFenceSync":_glFenceSync,"glFinish":_glFinish,"glFlush":_glFlush,"glFlushMappedBufferRange":_glFlushMappedBufferRange,"glFramebufferRenderbuffer":_glFramebufferRenderbuffer,"glFramebufferTexture2D":_glFramebufferTexture2D,"glFramebufferTextureLayer":_glFramebufferTextureLayer,"glFrontFace":_glFrontFace,"glGenBuffers":_glGenBuffers,"glGenFramebuffers":_glGenFramebuffers,"glGenQueries":_glGenQueries,"glGenRenderbuffers":_glGenRenderbuffers,"glGenSamplers":_glGenSamplers,"glGenTextures":_glGenTextures,"glGenVertexArrays":_glGenVertexArrays,"glGenerateMipmap":_glGenerateMipmap,"glGetActiveAttrib":_glGetActiveAttrib,"glGetActiveUniform":_glGetActiveUniform,"glGetActiveUniformBlockName":_glGetActiveUniformBlockName,"glGetActiveUniformBlockiv":_glGetActiveUniformBlockiv,"glGetActiveUniformsiv":_glGetActiveUniformsiv,"glGetAttribLocation":_glGetAttribLocation,"glGetBufferSubData":_glGetBufferSubData,"glGetError":_glGetError,"glGetFramebufferAttachmentParameteriv":_glGetFramebufferAttachmentParameteriv,"glGetIntegeri_v":_glGetIntegeri_v,"glGetIntegerv":_glGetIntegerv,"glGetInternalformativ":_glGetInternalformativ,"glGetProgramBinary":_glGetProgramBinary,"glGetProgramInfoLog":_glGetProgramInfoLog,"glGetProgramiv":_glGetProgramiv,"glGetQueryObjectuiv":_glGetQueryObjectuiv,"glGetQueryiv":_glGetQueryiv,"glGetRenderbuff
erParameteriv":_glGetRenderbufferParameteriv,"glGetShaderInfoLog":_glGetShaderInfoLog,"glGetShaderPrecisionFormat":_glGetShaderPrecisionFormat,"glGetShaderSource":_glGetShaderSource,"glGetShaderiv":_glGetShaderiv,"glGetString":_glGetString,"glGetStringi":_glGetStringi,"glGetTexParameteriv":_glGetTexParameteriv,"glGetUniformBlockIndex":_glGetUniformBlockIndex,"glGetUniformIndices":_glGetUniformIndices,"glGetUniformLocation":_glGetUniformLocation,"glGetUniformiv":_glGetUniformiv,"glGetVertexAttribiv":_glGetVertexAttribiv,"glInvalidateFramebuffer":_glInvalidateFramebuffer,"glIsEnabled":_glIsEnabled,"glIsVertexArray":_glIsVertexArray,"glLinkProgram":_glLinkProgram,"glMapBufferRange":_glMapBufferRange,"glPixelStorei":_glPixelStorei,"glPolygonOffset":_glPolygonOffset,"glProgramBinary":_glProgramBinary,"glProgramParameteri":_glProgramParameteri,"glReadBuffer":_glReadBuffer,"glReadPixels":_glReadPixels,"glRenderbufferStorage":_glRenderbufferStorage,"glRenderbufferStorageMultisample":_glRenderbufferStorageMultisample,"glSamplerParameteri":_glSamplerParameteri,"glScissor":_glScissor,"glShaderSource":_glShaderSource,"glStencilFuncSeparate":_glStencilFuncSeparate,"glStencilMask":_glStencilMask,"glStencilOpSeparate":_glStencilOpSeparate,"glTexImage2D":_glTexImage2D,"glTexImage3D":_glTexImage3D,"glTexParameterf":_glTexParameterf,"glTexParameteri":_glTexParameteri,"glTexParameteriv":_glTexParameteriv,"glTexStorage2D":_glTexStorage2D,"glTexStorage3D":_glTexStorage3D,"glTexSubImage2D":_glTexSubImage2D,"glTexSubImage3D":_glTexSubImage3D,"glUniform1fv":_glUniform1fv,"glUniform1i":_glUniform1i,"glUniform1iv":_glUniform1iv,"glUniform1uiv":_glUniform1uiv,"glUniform2fv":_glUniform2fv,"glUniform2iv":_glUniform2iv,"glUniform2uiv":_glUniform2uiv,"glUniform3fv":_glUniform3fv,"glUniform3iv":_glUniform3iv,"glUniform3uiv":_glUniform3uiv,"glUniform4fv":_glUniform4fv,"glUniform4iv":_glUniform4iv,"glUniform4uiv":_glUniform4uiv,"glUniformBlockBinding":_glUniformBlockBinding,"glUniformMatrix3fv":_glUniformMatrix3fv,"glUniformMatrix4fv":_glUniformMatrix4fv,"glUnmapBuffer":_glUnmapBuffer,"glUseProgram":_glUseProgram,"glValidateProgram":_glValidateProgram,"glVertexAttrib4f":_glVertexAttrib4f,"glVertexAttrib4fv":_glVertexAttrib4fv,"glVertexAttribIPointer":_glVertexAttribIPointer,"glVertexAttribPointer":_glVertexAttribPointer,"glViewport":_glViewport,"invoke_dddi":invoke_dddi,"invoke_ddiii":invoke_ddiii,"invoke_dii":invoke_dii,"invoke_diii":invoke_diii,"invoke_diiii":invoke_diiii,"invoke_dji":invoke_dji,"invoke_fffi":invoke_fffi,"invoke_fi":invoke_fi,"invoke_fii":invoke_fii,"invoke_fiii":invoke_fiii,"invoke_i":invoke_i,"invoke_idi":invoke_idi,"invoke_ifi":invoke_ifi,"invoke_ii":invoke_ii,"invoke_iidi":invoke_iidi,"invoke_iifi":invoke_iifi,"invoke_iii":invoke_iii,"invoke_iiifi":invoke_iiifi,"invoke_iiifii":invoke_iiifii,"invoke_iiii":invoke_iiii,"invoke_iiiidii":invoke_iiiidii,"invoke_iiiifii":invoke_iiiifii,"invoke_iiiii":invoke_iiiii,"invoke_iiiiii":invoke_iiiiii,"invoke_iiiiiii":invoke_iiiiiii,"invoke_iiiiiiii":invoke_iiiiiiii,"invoke_iiiiiiiii":invoke_iiiiiiiii,"invoke_iiiiiiiiifi":invoke_iiiiiiiiifi,"invoke_iiiiiiiiii":invoke_iiiiiiiiii,"invoke_iiiiiiiiiii":invoke_iiiiiiiiiii,"invoke_iiiiiiiiiiii":invoke_iiiiiiiiiiii,"invoke_iiiiiiiiiji":invoke_iiiiiiiiiji,"invoke_iiiiij":invoke_iiiiij,"invoke_iiiijii":invoke_iiiijii,"invoke_iiiijjii":invoke_iiiijjii,"invoke_iiijiii":invoke_iiijiii,"invoke_iij":invoke_iij,"invoke_iiji":invoke_iiji,"invoke_iijii":invoke_iijii,"invoke_iijiii":invoke_iijiii,"invoke_iijiiiiii":invoke_iijiiiiii,"i
nvoke_iijji":invoke_iijji,"invoke_iijjiiiiii":invoke_iijjiiiiii,"invoke_iji":invoke_iji,"invoke_ijji":invoke_ijji,"invoke_j":invoke_j,"invoke_jdi":invoke_jdi,"invoke_ji":invoke_ji,"invoke_jidi":invoke_jidi,"invoke_jii":invoke_jii,"invoke_jiidi":invoke_jiidi,"invoke_jiii":invoke_jiii,"invoke_jiiii":invoke_jiiii,"invoke_jiiiii":invoke_jiiiii,"invoke_jiiiiiiiiii":invoke_jiiiiiiiiii,"invoke_jiji":invoke_jiji,"invoke_jijii":invoke_jijii,"invoke_jji":invoke_jji,"invoke_jjii":invoke_jjii,"invoke_jjji":invoke_jjji,"invoke_v":invoke_v,"invoke_vi":invoke_vi,"invoke_vidd":invoke_vidd,"invoke_vidi":invoke_vidi,"invoke_viffi":invoke_viffi,"invoke_vifi":invoke_vifi,"invoke_vifii":invoke_vifii,"invoke_vii":invoke_vii,"invoke_viidi":invoke_viidi,"invoke_viiffi":invoke_viiffi,"invoke_viifi":invoke_viifi,"invoke_viifii":invoke_viifii,"invoke_viii":invoke_viii,"invoke_viiifi":invoke_viiifi,"invoke_viiii":invoke_viiii,"invoke_viiiifi":invoke_viiiifi,"invoke_viiiii":invoke_viiiii,"invoke_viiiiii":invoke_viiiiii,"invoke_viiiiiii":invoke_viiiiiii,"invoke_viiiiiiii":invoke_viiiiiiii,"invoke_viiiiiiiii":invoke_viiiiiiiii,"invoke_viiiiiiiiii":invoke_viiiiiiiiii,"invoke_viiiiiiiiiiii":invoke_viiiiiiiiiiii,"invoke_viiiiiiiiiiiii":invoke_viiiiiiiiiiiii,"invoke_viiiiiiiiiiiiiii":invoke_viiiiiiiiiiiiiii,"invoke_viiiji":invoke_viiiji,"invoke_viiji":invoke_viiji,"invoke_viijii":invoke_viijii,"invoke_viijiiijiiii":invoke_viijiiijiiii,"invoke_viji":invoke_viji,"invoke_vijii":invoke_vijii,"invoke_vijiii":invoke_vijiii,"invoke_vijjji":invoke_vijjji,"invoke_vji":invoke_vji,"invoke_vjiiiii":invoke_vjiiiii,"invoke_vjjjiiii":invoke_vjjjiiii,"llvm_eh_typeid_for":_llvm_eh_typeid_for,"setTempRet0":_setTempRet0,"strftime":_strftime};var asm=createWasm();var ___wasm_call_ctors=Module["___wasm_call_ctors"]=function(){return(___wasm_call_ctors=Module["___wasm_call_ctors"]=Module["asm"]["__wasm_call_ctors"]).apply(null,arguments)};var _getMemInfo=Module["_getMemInfo"]=function(){return(_getMemInfo=Module["_getMemInfo"]=Module["asm"]["getMemInfo"]).apply(null,arguments)};var _SendMessageFloat=Module["_SendMessageFloat"]=function(){return(_SendMessageFloat=Module["_SendMessageFloat"]=Module["asm"]["SendMessageFloat"]).apply(null,arguments)};var _SendMessageString=Module["_SendMessageString"]=function(){return(_SendMessageString=Module["_SendMessageString"]=Module["asm"]["SendMessageString"]).apply(null,arguments)};var _SendMessage=Module["_SendMessage"]=function(){return(_SendMessage=Module["_SendMessage"]=Module["asm"]["SendMessage"]).apply(null,arguments)};var _SetFullscreen=Module["_SetFullscreen"]=function(){return(_SetFullscreen=Module["_SetFullscreen"]=Module["asm"]["SetFullscreen"]).apply(null,arguments)};var _main=Module["_main"]=function(){return(_main=Module["_main"]=Module["asm"]["main"]).apply(null,arguments)};var ___errno_location=Module["___errno_location"]=function(){return(___errno_location=Module["___errno_location"]=Module["asm"]["__errno_location"]).apply(null,arguments)};var ___dl_seterr=Module["___dl_seterr"]=function(){return(___dl_seterr=Module["___dl_seterr"]=Module["asm"]["__dl_seterr"]).apply(null,arguments)};var _htonl=Module["_htonl"]=function(){return(_htonl=Module["_htonl"]=Module["asm"]["htonl"]).apply(null,arguments)};var _htons=Module["_htons"]=function(){return(_htons=Module["_htons"]=Module["asm"]["htons"]).apply(null,arguments)};var _ntohs=Module["_ntohs"]=function(){return(_ntohs=Module["_ntohs"]=Module["asm"]["ntohs"]).apply(null,arguments)};var 
_strlen=Module["_strlen"]=function(){return(_strlen=Module["_strlen"]=Module["asm"]["strlen"]).apply(null,arguments)};var _malloc=Module["_malloc"]=function(){return(_malloc=Module["_malloc"]=Module["asm"]["malloc"]).apply(null,arguments)};var _free=Module["_free"]=function(){return(_free=Module["_free"]=Module["asm"]["free"]).apply(null,arguments)};var _emscripten_builtin_memalign=Module["_emscripten_builtin_memalign"]=function(){return(_emscripten_builtin_memalign=Module["_emscripten_builtin_memalign"]=Module["asm"]["emscripten_builtin_memalign"]).apply(null,arguments)};var _setThrew=Module["_setThrew"]=function(){return(_setThrew=Module["_setThrew"]=Module["asm"]["setThrew"]).apply(null,arguments)};var _saveSetjmp=Module["_saveSetjmp"]=function(){return(_saveSetjmp=Module["_saveSetjmp"]=Module["asm"]["saveSetjmp"]).apply(null,arguments)};var stackSave=Module["stackSave"]=function(){return(stackSave=Module["stackSave"]=Module["asm"]["stackSave"]).apply(null,arguments)};var stackRestore=Module["stackRestore"]=function(){return(stackRestore=Module["stackRestore"]=Module["asm"]["stackRestore"]).apply(null,arguments)};var stackAlloc=Module["stackAlloc"]=function(){return(stackAlloc=Module["stackAlloc"]=Module["asm"]["stackAlloc"]).apply(null,arguments)};var ___cxa_can_catch=Module["___cxa_can_catch"]=function(){return(___cxa_can_catch=Module["___cxa_can_catch"]=Module["asm"]["__cxa_can_catch"]).apply(null,arguments)};var ___cxa_is_pointer_type=Module["___cxa_is_pointer_type"]=function(){return(___cxa_is_pointer_type=Module["___cxa_is_pointer_type"]=Module["asm"]["__cxa_is_pointer_type"]).apply(null,arguments)};var dynCall_iidiiii=Module["dynCall_iidiiii"]=function(){return(dynCall_iidiiii=Module["dynCall_iidiiii"]=Module["asm"]["dynCall_iidiiii"]).apply(null,arguments)};var dynCall_vii=Module["dynCall_vii"]=function(){return(dynCall_vii=Module["dynCall_vii"]=Module["asm"]["dynCall_vii"]).apply(null,arguments)};var dynCall_iiii=Module["dynCall_iiii"]=function(){return(dynCall_iiii=Module["dynCall_iiii"]=Module["asm"]["dynCall_iiii"]).apply(null,arguments)};var dynCall_iii=Module["dynCall_iii"]=function(){return(dynCall_iii=Module["dynCall_iii"]=Module["asm"]["dynCall_iii"]).apply(null,arguments)};var dynCall_ii=Module["dynCall_ii"]=function(){return(dynCall_ii=Module["dynCall_ii"]=Module["asm"]["dynCall_ii"]).apply(null,arguments)};var dynCall_jiji=Module["dynCall_jiji"]=function(){return(dynCall_jiji=Module["dynCall_jiji"]=Module["asm"]["dynCall_jiji"]).apply(null,arguments)};var dynCall_vi=Module["dynCall_vi"]=function(){return(dynCall_vi=Module["dynCall_vi"]=Module["asm"]["dynCall_vi"]).apply(null,arguments)};var dynCall_viii=Module["dynCall_viii"]=function(){return(dynCall_viii=Module["dynCall_viii"]=Module["asm"]["dynCall_viii"]).apply(null,arguments)};var dynCall_iiiii=Module["dynCall_iiiii"]=function(){return(dynCall_iiiii=Module["dynCall_iiiii"]=Module["asm"]["dynCall_iiiii"]).apply(null,arguments)};var dynCall_v=Module["dynCall_v"]=function(){return(dynCall_v=Module["dynCall_v"]=Module["asm"]["dynCall_v"]).apply(null,arguments)};var dynCall_i=Module["dynCall_i"]=function(){return(dynCall_i=Module["dynCall_i"]=Module["asm"]["dynCall_i"]).apply(null,arguments)};var dynCall_viiiiii=Module["dynCall_viiiiii"]=function(){return(dynCall_viiiiii=Module["dynCall_viiiiii"]=Module["asm"]["dynCall_viiiiii"]).apply(null,arguments)};var dynCall_viiiii=Module["dynCall_viiiii"]=function(){return(dynCall_viiiii=Module["dynCall_viiiii"]=Module["asm"]["dynCall_viiiii"]).apply(null,arguments)};var 
dynCall_viiii=Module["dynCall_viiii"]=function(){return(dynCall_viiii=Module["dynCall_viiii"]=Module["asm"]["dynCall_viiii"]).apply(null,arguments)};var dynCall_iiiiii=Module["dynCall_iiiiii"]=function(){return(dynCall_iiiiii=Module["dynCall_iiiiii"]=Module["asm"]["dynCall_iiiiii"]).apply(null,arguments)};var dynCall_iiiiiiii=Module["dynCall_iiiiiiii"]=function(){return(dynCall_iiiiiiii=Module["dynCall_iiiiiiii"]=Module["asm"]["dynCall_iiiiiiii"]).apply(null,arguments)};var dynCall_iiijiii=Module["dynCall_iiijiii"]=function(){return(dynCall_iiijiii=Module["dynCall_iiijiii"]=Module["asm"]["dynCall_iiijiii"]).apply(null,arguments)};var dynCall_iij=Module["dynCall_iij"]=function(){return(dynCall_iij=Module["dynCall_iij"]=Module["asm"]["dynCall_iij"]).apply(null,arguments)};var dynCall_iiiiiii=Module["dynCall_iiiiiii"]=function(){return(dynCall_iiiiiii=Module["dynCall_iiiiiii"]=Module["asm"]["dynCall_iiiiiii"]).apply(null,arguments)};var dynCall_jii=Module["dynCall_jii"]=function(){return(dynCall_jii=Module["dynCall_jii"]=Module["asm"]["dynCall_jii"]).apply(null,arguments)};var dynCall_iiifii=Module["dynCall_iiifii"]=function(){return(dynCall_iiifii=Module["dynCall_iiifii"]=Module["asm"]["dynCall_iiifii"]).apply(null,arguments)};var dynCall_viifi=Module["dynCall_viifi"]=function(){return(dynCall_viifi=Module["dynCall_viifi"]=Module["asm"]["dynCall_viifi"]).apply(null,arguments)};var dynCall_iijiii=Module["dynCall_iijiii"]=function(){return(dynCall_iijiii=Module["dynCall_iijiii"]=Module["asm"]["dynCall_iijiii"]).apply(null,arguments)};var dynCall_vijii=Module["dynCall_vijii"]=function(){return(dynCall_vijii=Module["dynCall_vijii"]=Module["asm"]["dynCall_vijii"]).apply(null,arguments)};var dynCall_iiiijii=Module["dynCall_iiiijii"]=function(){return(dynCall_iiiijii=Module["dynCall_iiiijii"]=Module["asm"]["dynCall_iiiijii"]).apply(null,arguments)};var dynCall_viji=Module["dynCall_viji"]=function(){return(dynCall_viji=Module["dynCall_viji"]=Module["asm"]["dynCall_viji"]).apply(null,arguments)};var dynCall_viiji=Module["dynCall_viiji"]=function(){return(dynCall_viiji=Module["dynCall_viiji"]=Module["asm"]["dynCall_viiji"]).apply(null,arguments)};var dynCall_vidi=Module["dynCall_vidi"]=function(){return(dynCall_vidi=Module["dynCall_vidi"]=Module["asm"]["dynCall_vidi"]).apply(null,arguments)};var dynCall_viidi=Module["dynCall_viidi"]=function(){return(dynCall_viidi=Module["dynCall_viidi"]=Module["asm"]["dynCall_viidi"]).apply(null,arguments)};var dynCall_viiiiiii=Module["dynCall_viiiiiii"]=function(){return(dynCall_viiiiiii=Module["dynCall_viiiiiii"]=Module["asm"]["dynCall_viiiiiii"]).apply(null,arguments)};var dynCall_viiffi=Module["dynCall_viiffi"]=function(){return(dynCall_viiffi=Module["dynCall_viiffi"]=Module["asm"]["dynCall_viiffi"]).apply(null,arguments)};var dynCall_fiii=Module["dynCall_fiii"]=function(){return(dynCall_fiii=Module["dynCall_fiii"]=Module["asm"]["dynCall_fiii"]).apply(null,arguments)};var dynCall_diidi=Module["dynCall_diidi"]=function(){return(dynCall_diidi=Module["dynCall_diidi"]=Module["asm"]["dynCall_diidi"]).apply(null,arguments)};var dynCall_jiiji=Module["dynCall_jiiji"]=function(){return(dynCall_jiiji=Module["dynCall_jiiji"]=Module["asm"]["dynCall_jiiji"]).apply(null,arguments)};var dynCall_fiifi=Module["dynCall_fiifi"]=function(){return(dynCall_fiifi=Module["dynCall_fiifi"]=Module["asm"]["dynCall_fiifi"]).apply(null,arguments)};var 
dynCall_iiffi=Module["dynCall_iiffi"]=function(){return(dynCall_iiffi=Module["dynCall_iiffi"]=Module["asm"]["dynCall_iiffi"]).apply(null,arguments)};var dynCall_iiiifi=Module["dynCall_iiiifi"]=function(){return(dynCall_iiiifi=Module["dynCall_iiiifi"]=Module["asm"]["dynCall_iiiifi"]).apply(null,arguments)};var dynCall_viiiiiiii=Module["dynCall_viiiiiiii"]=function(){return(dynCall_viiiiiiii=Module["dynCall_viiiiiiii"]=Module["asm"]["dynCall_viiiiiiii"]).apply(null,arguments)};var dynCall_viiiiiiiii=Module["dynCall_viiiiiiiii"]=function(){return(dynCall_viiiiiiiii=Module["dynCall_viiiiiiiii"]=Module["asm"]["dynCall_viiiiiiiii"]).apply(null,arguments)};var dynCall_viiiiiiiiii=Module["dynCall_viiiiiiiiii"]=function(){return(dynCall_viiiiiiiiii=Module["dynCall_viiiiiiiiii"]=Module["asm"]["dynCall_viiiiiiiiii"]).apply(null,arguments)};var dynCall_viiiiiiiiiii=Module["dynCall_viiiiiiiiiii"]=function(){return(dynCall_viiiiiiiiiii=Module["dynCall_viiiiiiiiiii"]=Module["asm"]["dynCall_viiiiiiiiiii"]).apply(null,arguments)};var dynCall_ji=Module["dynCall_ji"]=function(){return(dynCall_ji=Module["dynCall_ji"]=Module["asm"]["dynCall_ji"]).apply(null,arguments)};var dynCall_jjji=Module["dynCall_jjji"]=function(){return(dynCall_jjji=Module["dynCall_jjji"]=Module["asm"]["dynCall_jjji"]).apply(null,arguments)};var dynCall_dii=Module["dynCall_dii"]=function(){return(dynCall_dii=Module["dynCall_dii"]=Module["asm"]["dynCall_dii"]).apply(null,arguments)};var dynCall_viijiiijiiii=Module["dynCall_viijiiijiiii"]=function(){return(dynCall_viijiiijiiii=Module["dynCall_viijiiijiiii"]=Module["asm"]["dynCall_viijiiijiiii"]).apply(null,arguments)};var dynCall_vifi=Module["dynCall_vifi"]=function(){return(dynCall_vifi=Module["dynCall_vifi"]=Module["asm"]["dynCall_vifi"]).apply(null,arguments)};var dynCall_iifi=Module["dynCall_iifi"]=function(){return(dynCall_iifi=Module["dynCall_iifi"]=Module["asm"]["dynCall_iifi"]).apply(null,arguments)};var dynCall_fffi=Module["dynCall_fffi"]=function(){return(dynCall_fffi=Module["dynCall_fffi"]=Module["asm"]["dynCall_fffi"]).apply(null,arguments)};var dynCall_ijji=Module["dynCall_ijji"]=function(){return(dynCall_ijji=Module["dynCall_ijji"]=Module["asm"]["dynCall_ijji"]).apply(null,arguments)};var dynCall_jji=Module["dynCall_jji"]=function(){return(dynCall_jji=Module["dynCall_jji"]=Module["asm"]["dynCall_jji"]).apply(null,arguments)};var dynCall_dddi=Module["dynCall_dddi"]=function(){return(dynCall_dddi=Module["dynCall_dddi"]=Module["asm"]["dynCall_dddi"]).apply(null,arguments)};var dynCall_jiii=Module["dynCall_jiii"]=function(){return(dynCall_jiii=Module["dynCall_jiii"]=Module["asm"]["dynCall_jiii"]).apply(null,arguments)};var dynCall_diii=Module["dynCall_diii"]=function(){return(dynCall_diii=Module["dynCall_diii"]=Module["asm"]["dynCall_diii"]).apply(null,arguments)};var dynCall_iidi=Module["dynCall_iidi"]=function(){return(dynCall_iidi=Module["dynCall_iidi"]=Module["asm"]["dynCall_iidi"]).apply(null,arguments)};var dynCall_jiiii=Module["dynCall_jiiii"]=function(){return(dynCall_jiiii=Module["dynCall_jiiii"]=Module["asm"]["dynCall_jiiii"]).apply(null,arguments)};var dynCall_diiii=Module["dynCall_diiii"]=function(){return(dynCall_diiii=Module["dynCall_diiii"]=Module["asm"]["dynCall_diiii"]).apply(null,arguments)};var dynCall_viiiji=Module["dynCall_viiiji"]=function(){return(dynCall_viiiji=Module["dynCall_viiiji"]=Module["asm"]["dynCall_viiiji"]).apply(null,arguments)};var 
dynCall_iiiiiiiii=Module["dynCall_iiiiiiiii"]=function(){return(dynCall_iiiiiiiii=Module["dynCall_iiiiiiiii"]=Module["asm"]["dynCall_iiiiiiiii"]).apply(null,arguments)};var dynCall_jdi=Module["dynCall_jdi"]=function(){return(dynCall_jdi=Module["dynCall_jdi"]=Module["asm"]["dynCall_jdi"]).apply(null,arguments)};var dynCall_vijjji=Module["dynCall_vijjji"]=function(){return(dynCall_vijjji=Module["dynCall_vijjji"]=Module["asm"]["dynCall_vijjji"]).apply(null,arguments)};var dynCall_iiiiij=Module["dynCall_iiiiij"]=function(){return(dynCall_iiiiij=Module["dynCall_iiiiij"]=Module["asm"]["dynCall_iiiiij"]).apply(null,arguments)};var dynCall_iiiiiiiiii=Module["dynCall_iiiiiiiiii"]=function(){return(dynCall_iiiiiiiiii=Module["dynCall_iiiiiiiiii"]=Module["asm"]["dynCall_iiiiiiiiii"]).apply(null,arguments)};var dynCall_iiji=Module["dynCall_iiji"]=function(){return(dynCall_iiji=Module["dynCall_iiji"]=Module["asm"]["dynCall_iiji"]).apply(null,arguments)};var dynCall_jjii=Module["dynCall_jjii"]=function(){return(dynCall_jjii=Module["dynCall_jjii"]=Module["asm"]["dynCall_jjii"]).apply(null,arguments)};var dynCall_dji=Module["dynCall_dji"]=function(){return(dynCall_dji=Module["dynCall_dji"]=Module["asm"]["dynCall_dji"]).apply(null,arguments)};var dynCall_idi=Module["dynCall_idi"]=function(){return(dynCall_idi=Module["dynCall_idi"]=Module["asm"]["dynCall_idi"]).apply(null,arguments)};var dynCall_iji=Module["dynCall_iji"]=function(){return(dynCall_iji=Module["dynCall_iji"]=Module["asm"]["dynCall_iji"]).apply(null,arguments)};var dynCall_viifii=Module["dynCall_viifii"]=function(){return(dynCall_viifii=Module["dynCall_viifii"]=Module["asm"]["dynCall_viifii"]).apply(null,arguments)};var dynCall_fiiffi=Module["dynCall_fiiffi"]=function(){return(dynCall_fiiffi=Module["dynCall_fiiffi"]=Module["asm"]["dynCall_fiiffi"]).apply(null,arguments)};var dynCall_viiififii=Module["dynCall_viiififii"]=function(){return(dynCall_viiififii=Module["dynCall_viiififii"]=Module["asm"]["dynCall_viiififii"]).apply(null,arguments)};var dynCall_fi=Module["dynCall_fi"]=function(){return(dynCall_fi=Module["dynCall_fi"]=Module["asm"]["dynCall_fi"]).apply(null,arguments)};var dynCall_iiifi=Module["dynCall_iiifi"]=function(){return(dynCall_iiifi=Module["dynCall_iiifi"]=Module["asm"]["dynCall_iiifi"]).apply(null,arguments)};var dynCall_viiiifi=Module["dynCall_viiiifi"]=function(){return(dynCall_viiiifi=Module["dynCall_viiiifi"]=Module["asm"]["dynCall_viiiifi"]).apply(null,arguments)};var dynCall_fii=Module["dynCall_fii"]=function(){return(dynCall_fii=Module["dynCall_fii"]=Module["asm"]["dynCall_fii"]).apply(null,arguments)};var dynCall_iiiiiiiiiiii=Module["dynCall_iiiiiiiiiiii"]=function(){return(dynCall_iiiiiiiiiiii=Module["dynCall_iiiiiiiiiiii"]=Module["asm"]["dynCall_iiiiiiiiiiii"]).apply(null,arguments)};var dynCall_iiiifii=Module["dynCall_iiiifii"]=function(){return(dynCall_iiiifii=Module["dynCall_iiiifii"]=Module["asm"]["dynCall_iiiifii"]).apply(null,arguments)};var dynCall_viiiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiiii"]=function(){return(dynCall_viiiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiiii"]=Module["asm"]["dynCall_viiiiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_viiiifii=Module["dynCall_viiiifii"]=function(){return(dynCall_viiiifii=Module["dynCall_viiiifii"]=Module["asm"]["dynCall_viiiifii"]).apply(null,arguments)};var dynCall_viiiiiiiiiiii=Module["dynCall_viiiiiiiiiiii"]=function(){return(dynCall_viiiiiiiiiiii=Module["dynCall_viiiiiiiiiiii"]=Module["asm"]["dynCall_viiiiiiiiiiii"]).apply(null,arguments)};var 
dynCall_viiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiii"]=function(){return(dynCall_viiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiii"]=Module["asm"]["dynCall_viiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_viiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiii"]=function(){return(dynCall_viiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiii"]=Module["asm"]["dynCall_viiiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_viiiiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiiiii"]=function(){return(dynCall_viiiiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiiiii"]=Module["asm"]["dynCall_viiiiiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_viiiiiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiiiiii"]=function(){return(dynCall_viiiiiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiiiiii"]=Module["asm"]["dynCall_viiiiiiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_viiiiiiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiiiiiii"]=function(){return(dynCall_viiiiiiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiiiiiii"]=Module["asm"]["dynCall_viiiiiiiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_jijii=Module["dynCall_jijii"]=function(){return(dynCall_jijii=Module["dynCall_jijii"]=Module["asm"]["dynCall_jijii"]).apply(null,arguments)};var dynCall_iiijii=Module["dynCall_iiijii"]=function(){return(dynCall_iiijii=Module["dynCall_iiijii"]=Module["asm"]["dynCall_iiijii"]).apply(null,arguments)};var dynCall_iijiiii=Module["dynCall_iijiiii"]=function(){return(dynCall_iijiiii=Module["dynCall_iijiiii"]=Module["asm"]["dynCall_iijiiii"]).apply(null,arguments)};var dynCall_jijiii=Module["dynCall_jijiii"]=function(){return(dynCall_jijiii=Module["dynCall_jijiii"]=Module["asm"]["dynCall_jijiii"]).apply(null,arguments)};var dynCall_viijii=Module["dynCall_viijii"]=function(){return(dynCall_viijii=Module["dynCall_viijii"]=Module["asm"]["dynCall_viijii"]).apply(null,arguments)};var dynCall_iijiiiiii=Module["dynCall_iijiiiiii"]=function(){return(dynCall_iijiiiiii=Module["dynCall_iijiiiiii"]=Module["asm"]["dynCall_iijiiiiii"]).apply(null,arguments)};var dynCall_iijjiiiiii=Module["dynCall_iijjiiiiii"]=function(){return(dynCall_iijjiiiiii=Module["dynCall_iijjiiiiii"]=Module["asm"]["dynCall_iijjiiiiii"]).apply(null,arguments)};var dynCall_iiiijjii=Module["dynCall_iiiijjii"]=function(){return(dynCall_iiiijjii=Module["dynCall_iiiijjii"]=Module["asm"]["dynCall_iiiijjii"]).apply(null,arguments)};var dynCall_iijii=Module["dynCall_iijii"]=function(){return(dynCall_iijii=Module["dynCall_iijii"]=Module["asm"]["dynCall_iijii"]).apply(null,arguments)};var dynCall_j=Module["dynCall_j"]=function(){return(dynCall_j=Module["dynCall_j"]=Module["asm"]["dynCall_j"]).apply(null,arguments)};var dynCall_iiiiiiiiiji=Module["dynCall_iiiiiiiiiji"]=function(){return(dynCall_iiiiiiiiiji=Module["dynCall_iiiiiiiiiji"]=Module["asm"]["dynCall_iiiiiiiiiji"]).apply(null,arguments)};var dynCall_vji=Module["dynCall_vji"]=function(){return(dynCall_vji=Module["dynCall_vji"]=Module["asm"]["dynCall_vji"]).apply(null,arguments)};var dynCall_ifi=Module["dynCall_ifi"]=function(){return(dynCall_ifi=Module["dynCall_ifi"]=Module["asm"]["dynCall_ifi"]).apply(null,arguments)};var dynCall_vifii=Module["dynCall_vifii"]=function(){return(dynCall_vifii=Module["dynCall_vifii"]=Module["asm"]["dynCall_vifii"]).apply(null,arguments)};var dynCall_iiiidii=Module["dynCall_iiiidii"]=function(){return(dynCall_iiiidii=Module["dynCall_iiiidii"]=Module["asm"]["dynCall_iiiidii"]).apply(null,arguments)};var 
dynCall_iijji=Module["dynCall_iijji"]=function(){return(dynCall_iijji=Module["dynCall_iijji"]=Module["asm"]["dynCall_iijji"]).apply(null,arguments)};var dynCall_iiddi=Module["dynCall_iiddi"]=function(){return(dynCall_iiddi=Module["dynCall_iiddi"]=Module["asm"]["dynCall_iiddi"]).apply(null,arguments)};var dynCall_iiiiji=Module["dynCall_iiiiji"]=function(){return(dynCall_iiiiji=Module["dynCall_iiiiji"]=Module["asm"]["dynCall_iiiiji"]).apply(null,arguments)};var dynCall_jidi=Module["dynCall_jidi"]=function(){return(dynCall_jidi=Module["dynCall_jidi"]=Module["asm"]["dynCall_jidi"]).apply(null,arguments)};var dynCall_ddiii=Module["dynCall_ddiii"]=function(){return(dynCall_ddiii=Module["dynCall_ddiii"]=Module["asm"]["dynCall_ddiii"]).apply(null,arguments)};var dynCall_viiiiiiiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiiiiiiii"]=function(){return(dynCall_viiiiiiiiiiiiiiiiiii=Module["dynCall_viiiiiiiiiiiiiiiiiii"]=Module["asm"]["dynCall_viiiiiiiiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_didi=Module["dynCall_didi"]=function(){return(dynCall_didi=Module["dynCall_didi"]=Module["asm"]["dynCall_didi"]).apply(null,arguments)};var dynCall_fifi=Module["dynCall_fifi"]=function(){return(dynCall_fifi=Module["dynCall_fifi"]=Module["asm"]["dynCall_fifi"]).apply(null,arguments)};var dynCall_vijiii=Module["dynCall_vijiii"]=function(){return(dynCall_vijiii=Module["dynCall_vijiii"]=Module["asm"]["dynCall_vijiii"]).apply(null,arguments)};var dynCall_vjjjiiii=Module["dynCall_vjjjiiii"]=function(){return(dynCall_vjjjiiii=Module["dynCall_vjjjiiii"]=Module["asm"]["dynCall_vjjjiiii"]).apply(null,arguments)};var dynCall_vjiiiii=Module["dynCall_vjiiiii"]=function(){return(dynCall_vjiiiii=Module["dynCall_vjiiiii"]=Module["asm"]["dynCall_vjiiiii"]).apply(null,arguments)};var dynCall_jiiiii=Module["dynCall_jiiiii"]=function(){return(dynCall_jiiiii=Module["dynCall_jiiiii"]=Module["asm"]["dynCall_jiiiii"]).apply(null,arguments)};var dynCall_viffi=Module["dynCall_viffi"]=function(){return(dynCall_viffi=Module["dynCall_viffi"]=Module["asm"]["dynCall_viffi"]).apply(null,arguments)};var dynCall_viiifi=Module["dynCall_viiifi"]=function(){return(dynCall_viiifi=Module["dynCall_viiifi"]=Module["asm"]["dynCall_viiifi"]).apply(null,arguments)};var dynCall_iiiiiiiiifi=Module["dynCall_iiiiiiiiifi"]=function(){return(dynCall_iiiiiiiiifi=Module["dynCall_iiiiiiiiifi"]=Module["asm"]["dynCall_iiiiiiiiifi"]).apply(null,arguments)};var dynCall_iiiiiiiifi=Module["dynCall_iiiiiiiifi"]=function(){return(dynCall_iiiiiiiifi=Module["dynCall_iiiiiiiifi"]=Module["asm"]["dynCall_iiiiiiiifi"]).apply(null,arguments)};var dynCall_ifiiii=Module["dynCall_ifiiii"]=function(){return(dynCall_ifiiii=Module["dynCall_ifiiii"]=Module["asm"]["dynCall_ifiiii"]).apply(null,arguments)};var dynCall_idiiiii=Module["dynCall_idiiiii"]=function(){return(dynCall_idiiiii=Module["dynCall_idiiiii"]=Module["asm"]["dynCall_idiiiii"]).apply(null,arguments)};var dynCall_idiiii=Module["dynCall_idiiii"]=function(){return(dynCall_idiiii=Module["dynCall_idiiii"]=Module["asm"]["dynCall_idiiii"]).apply(null,arguments)};var dynCall_idii=Module["dynCall_idii"]=function(){return(dynCall_idii=Module["dynCall_idii"]=Module["asm"]["dynCall_idii"]).apply(null,arguments)};var dynCall_vijiiii=Module["dynCall_vijiiii"]=function(){return(dynCall_vijiiii=Module["dynCall_vijiiii"]=Module["asm"]["dynCall_vijiiii"]).apply(null,arguments)};var 
dynCall_iiijiiii=Module["dynCall_iiijiiii"]=function(){return(dynCall_iiijiiii=Module["dynCall_iiijiiii"]=Module["asm"]["dynCall_iiijiiii"]).apply(null,arguments)};var dynCall_iiiji=Module["dynCall_iiiji"]=function(){return(dynCall_iiiji=Module["dynCall_iiiji"]=Module["asm"]["dynCall_iiiji"]).apply(null,arguments)};var dynCall_vjiiii=Module["dynCall_vjiiii"]=function(){return(dynCall_vjiiii=Module["dynCall_vjiiii"]=Module["asm"]["dynCall_vjiiii"]).apply(null,arguments)};var dynCall_iddi=Module["dynCall_iddi"]=function(){return(dynCall_iddi=Module["dynCall_iddi"]=Module["asm"]["dynCall_iddi"]).apply(null,arguments)};var dynCall_iiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiii"]=function(){return(dynCall_iiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiii"]=Module["asm"]["dynCall_iiiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_iiiiiiiiiii=Module["dynCall_iiiiiiiiiii"]=function(){return(dynCall_iiiiiiiiiii=Module["dynCall_iiiiiiiiiii"]=Module["asm"]["dynCall_iiiiiiiiiii"]).apply(null,arguments)};var dynCall_iiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiii"]=function(){return(dynCall_iiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiii"]=Module["asm"]["dynCall_iiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_viiijii=Module["dynCall_viiijii"]=function(){return(dynCall_viiijii=Module["dynCall_viiijii"]=Module["asm"]["dynCall_viiijii"]).apply(null,arguments)};var dynCall_viijiii=Module["dynCall_viijiii"]=function(){return(dynCall_viijiii=Module["dynCall_viijiii"]=Module["asm"]["dynCall_viijiii"]).apply(null,arguments)};var dynCall_ijii=Module["dynCall_ijii"]=function(){return(dynCall_ijii=Module["dynCall_ijii"]=Module["asm"]["dynCall_ijii"]).apply(null,arguments)};var dynCall_iiiiiji=Module["dynCall_iiiiiji"]=function(){return(dynCall_iiiiiji=Module["dynCall_iiiiiji"]=Module["asm"]["dynCall_iiiiiji"]).apply(null,arguments)};var dynCall_ijjiiii=Module["dynCall_ijjiiii"]=function(){return(dynCall_ijjiiii=Module["dynCall_ijjiiii"]=Module["asm"]["dynCall_ijjiiii"]).apply(null,arguments)};var dynCall_vdiiiii=Module["dynCall_vdiiiii"]=function(){return(dynCall_vdiiiii=Module["dynCall_vdiiiii"]=Module["asm"]["dynCall_vdiiiii"]).apply(null,arguments)};var dynCall_diiji=Module["dynCall_diiji"]=function(){return(dynCall_diiji=Module["dynCall_diiji"]=Module["asm"]["dynCall_diiji"]).apply(null,arguments)};var dynCall_vjiiiiiiii=Module["dynCall_vjiiiiiiii"]=function(){return(dynCall_vjiiiiiiii=Module["dynCall_vjiiiiiiii"]=Module["asm"]["dynCall_vjiiiiiiii"]).apply(null,arguments)};var dynCall_vjiiiiiii=Module["dynCall_vjiiiiiii"]=function(){return(dynCall_vjiiiiiii=Module["dynCall_vjiiiiiii"]=Module["asm"]["dynCall_vjiiiiiii"]).apply(null,arguments)};var dynCall_ijiiii=Module["dynCall_ijiiii"]=function(){return(dynCall_ijiiii=Module["dynCall_ijiiii"]=Module["asm"]["dynCall_ijiiii"]).apply(null,arguments)};var dynCall_iidii=Module["dynCall_iidii"]=function(){return(dynCall_iidii=Module["dynCall_iidii"]=Module["asm"]["dynCall_iidii"]).apply(null,arguments)};var dynCall_iifii=Module["dynCall_iifii"]=function(){return(dynCall_iifii=Module["dynCall_iifii"]=Module["asm"]["dynCall_iifii"]).apply(null,arguments)};var dynCall_iidiii=Module["dynCall_iidiii"]=function(){return(dynCall_iidiii=Module["dynCall_iidiii"]=Module["asm"]["dynCall_iidiii"]).apply(null,arguments)};var dynCall_diji=Module["dynCall_diji"]=function(){return(dynCall_diji=Module["dynCall_diji"]=Module["asm"]["dynCall_diji"]).apply(null,arguments)};var 
dynCall_fidi=Module["dynCall_fidi"]=function(){return(dynCall_fidi=Module["dynCall_fidi"]=Module["asm"]["dynCall_fidi"]).apply(null,arguments)};var dynCall_ijjiii=Module["dynCall_ijjiii"]=function(){return(dynCall_ijjiii=Module["dynCall_ijjiii"]=Module["asm"]["dynCall_ijjiii"]).apply(null,arguments)};var dynCall_viffffi=Module["dynCall_viffffi"]=function(){return(dynCall_viffffi=Module["dynCall_viffffi"]=Module["asm"]["dynCall_viffffi"]).apply(null,arguments)};var dynCall_diiiii=Module["dynCall_diiiii"]=function(){return(dynCall_diiiii=Module["dynCall_diiiii"]=Module["asm"]["dynCall_diiiii"]).apply(null,arguments)};var dynCall_vijji=Module["dynCall_vijji"]=function(){return(dynCall_vijji=Module["dynCall_vijji"]=Module["asm"]["dynCall_vijji"]).apply(null,arguments)};var dynCall_vfffi=Module["dynCall_vfffi"]=function(){return(dynCall_vfffi=Module["dynCall_vfffi"]=Module["asm"]["dynCall_vfffi"]).apply(null,arguments)};var dynCall_vffi=Module["dynCall_vffi"]=function(){return(dynCall_vffi=Module["dynCall_vffi"]=Module["asm"]["dynCall_vffi"]).apply(null,arguments)};var dynCall_vffffi=Module["dynCall_vffffi"]=function(){return(dynCall_vffffi=Module["dynCall_vffffi"]=Module["asm"]["dynCall_vffffi"]).apply(null,arguments)};var dynCall_viiiiffi=Module["dynCall_viiiiffi"]=function(){return(dynCall_viiiiffi=Module["dynCall_viiiiffi"]=Module["asm"]["dynCall_viiiiffi"]).apply(null,arguments)};var dynCall_viiiffii=Module["dynCall_viiiffii"]=function(){return(dynCall_viiiffii=Module["dynCall_viiiffii"]=Module["asm"]["dynCall_viiiffii"]).apply(null,arguments)};var dynCall_vifffi=Module["dynCall_vifffi"]=function(){return(dynCall_vifffi=Module["dynCall_vifffi"]=Module["asm"]["dynCall_vifffi"]).apply(null,arguments)};var dynCall_viffffffi=Module["dynCall_viffffffi"]=function(){return(dynCall_viffffffi=Module["dynCall_viffffffi"]=Module["asm"]["dynCall_viffffffi"]).apply(null,arguments)};var dynCall_fiiii=Module["dynCall_fiiii"]=function(){return(dynCall_fiiii=Module["dynCall_fiiii"]=Module["asm"]["dynCall_fiiii"]).apply(null,arguments)};var dynCall_vffffffii=Module["dynCall_vffffffii"]=function(){return(dynCall_vffffffii=Module["dynCall_vffffffii"]=Module["asm"]["dynCall_vffffffii"]).apply(null,arguments)};var dynCall_vfiii=Module["dynCall_vfiii"]=function(){return(dynCall_vfiii=Module["dynCall_vfiii"]=Module["asm"]["dynCall_vfiii"]).apply(null,arguments)};var dynCall_ffi=Module["dynCall_ffi"]=function(){return(dynCall_ffi=Module["dynCall_ffi"]=Module["asm"]["dynCall_ffi"]).apply(null,arguments)};var dynCall_ffffi=Module["dynCall_ffffi"]=function(){return(dynCall_ffffi=Module["dynCall_ffffi"]=Module["asm"]["dynCall_ffffi"]).apply(null,arguments)};var dynCall_iffi=Module["dynCall_iffi"]=function(){return(dynCall_iffi=Module["dynCall_iffi"]=Module["asm"]["dynCall_iffi"]).apply(null,arguments)};var dynCall_fffifffi=Module["dynCall_fffifffi"]=function(){return(dynCall_fffifffi=Module["dynCall_fffifffi"]=Module["asm"]["dynCall_fffifffi"]).apply(null,arguments)};var dynCall_fdi=Module["dynCall_fdi"]=function(){return(dynCall_fdi=Module["dynCall_fdi"]=Module["asm"]["dynCall_fdi"]).apply(null,arguments)};var dynCall_ddi=Module["dynCall_ddi"]=function(){return(dynCall_ddi=Module["dynCall_ddi"]=Module["asm"]["dynCall_ddi"]).apply(null,arguments)};var dynCall_vfii=Module["dynCall_vfii"]=function(){return(dynCall_vfii=Module["dynCall_vfii"]=Module["asm"]["dynCall_vfii"]).apply(null,arguments)};var 
dynCall_ddddi=Module["dynCall_ddddi"]=function(){return(dynCall_ddddi=Module["dynCall_ddddi"]=Module["asm"]["dynCall_ddddi"]).apply(null,arguments)};var dynCall_jjjji=Module["dynCall_jjjji"]=function(){return(dynCall_jjjji=Module["dynCall_jjjji"]=Module["asm"]["dynCall_jjjji"]).apply(null,arguments)};var dynCall_vijjii=Module["dynCall_vijjii"]=function(){return(dynCall_vijjii=Module["dynCall_vijjii"]=Module["asm"]["dynCall_vijjii"]).apply(null,arguments)};var dynCall_viiifii=Module["dynCall_viiifii"]=function(){return(dynCall_viiifii=Module["dynCall_viiifii"]=Module["asm"]["dynCall_viiifii"]).apply(null,arguments)};var dynCall_viiiiiiiijijiii=Module["dynCall_viiiiiiiijijiii"]=function(){return(dynCall_viiiiiiiijijiii=Module["dynCall_viiiiiiiijijiii"]=Module["asm"]["dynCall_viiiiiiiijijiii"]).apply(null,arguments)};var dynCall_viiiififfi=Module["dynCall_viiiififfi"]=function(){return(dynCall_viiiififfi=Module["dynCall_viiiififfi"]=Module["asm"]["dynCall_viiiififfi"]).apply(null,arguments)};var dynCall_viiiifiifi=Module["dynCall_viiiifiifi"]=function(){return(dynCall_viiiifiifi=Module["dynCall_viiiifiifi"]=Module["asm"]["dynCall_viiiifiifi"]).apply(null,arguments)};var dynCall_viiiifiiii=Module["dynCall_viiiifiiii"]=function(){return(dynCall_viiiifiiii=Module["dynCall_viiiifiiii"]=Module["asm"]["dynCall_viiiifiiii"]).apply(null,arguments)};var dynCall_viiiifiiiii=Module["dynCall_viiiifiiiii"]=function(){return(dynCall_viiiifiiiii=Module["dynCall_viiiifiiiii"]=Module["asm"]["dynCall_viiiifiiiii"]).apply(null,arguments)};var dynCall_viiiifiiiiiiii=Module["dynCall_viiiifiiiiiiii"]=function(){return(dynCall_viiiifiiiiiiii=Module["dynCall_viiiifiiiiiiii"]=Module["asm"]["dynCall_viiiifiiiiiiii"]).apply(null,arguments)};var dynCall_iiifiii=Module["dynCall_iiifiii"]=function(){return(dynCall_iiifiii=Module["dynCall_iiifiii"]=Module["asm"]["dynCall_iiifiii"]).apply(null,arguments)};var dynCall_viiiiiffii=Module["dynCall_viiiiiffii"]=function(){return(dynCall_viiiiiffii=Module["dynCall_viiiiiffii"]=Module["asm"]["dynCall_viiiiiffii"]).apply(null,arguments)};var dynCall_viffffii=Module["dynCall_viffffii"]=function(){return(dynCall_viffffii=Module["dynCall_viffffii"]=Module["asm"]["dynCall_viffffii"]).apply(null,arguments)};var dynCall_iiiifiii=Module["dynCall_iiiifiii"]=function(){return(dynCall_iiiifiii=Module["dynCall_iiiifiii"]=Module["asm"]["dynCall_iiiifiii"]).apply(null,arguments)};var dynCall_iifiii=Module["dynCall_iifiii"]=function(){return(dynCall_iifiii=Module["dynCall_iifiii"]=Module["asm"]["dynCall_iifiii"]).apply(null,arguments)};var dynCall_iifiiii=Module["dynCall_iifiiii"]=function(){return(dynCall_iifiiii=Module["dynCall_iifiiii"]=Module["asm"]["dynCall_iifiiii"]).apply(null,arguments)};var dynCall_iiiiifiii=Module["dynCall_iiiiifiii"]=function(){return(dynCall_iiiiifiii=Module["dynCall_iiiiifiii"]=Module["asm"]["dynCall_iiiiifiii"]).apply(null,arguments)};var dynCall_iiifiiii=Module["dynCall_iiifiiii"]=function(){return(dynCall_iiifiiii=Module["dynCall_iiifiiii"]=Module["asm"]["dynCall_iiifiiii"]).apply(null,arguments)};var dynCall_vifffffi=Module["dynCall_vifffffi"]=function(){return(dynCall_vifffffi=Module["dynCall_vifffffi"]=Module["asm"]["dynCall_vifffffi"]).apply(null,arguments)};var dynCall_viiiiifi=Module["dynCall_viiiiifi"]=function(){return(dynCall_viiiiifi=Module["dynCall_viiiiifi"]=Module["asm"]["dynCall_viiiiifi"]).apply(null,arguments)};var 
dynCall_viffiiii=Module["dynCall_viffiiii"]=function(){return(dynCall_viffiiii=Module["dynCall_viffiiii"]=Module["asm"]["dynCall_viffiiii"]).apply(null,arguments)};var dynCall_viiiffffiiii=Module["dynCall_viiiffffiiii"]=function(){return(dynCall_viiiffffiiii=Module["dynCall_viiiffffiiii"]=Module["asm"]["dynCall_viiiffffiiii"]).apply(null,arguments)};var dynCall_viifffffffiiiii=Module["dynCall_viifffffffiiiii"]=function(){return(dynCall_viifffffffiiiii=Module["dynCall_viifffffffiiiii"]=Module["asm"]["dynCall_viifffffffiiiii"]).apply(null,arguments)};var dynCall_fiiiii=Module["dynCall_fiiiii"]=function(){return(dynCall_fiiiii=Module["dynCall_fiiiii"]=Module["asm"]["dynCall_fiiiii"]).apply(null,arguments)};var dynCall_iiiiiiffiiiiiiiiiffffiiii=Module["dynCall_iiiiiiffiiiiiiiiiffffiiii"]=function(){return(dynCall_iiiiiiffiiiiiiiiiffffiiii=Module["dynCall_iiiiiiffiiiiiiiiiffffiiii"]=Module["asm"]["dynCall_iiiiiiffiiiiiiiiiffffiiii"]).apply(null,arguments)};var dynCall_iiiiiiffiiiiiiiiiiiiiii=Module["dynCall_iiiiiiffiiiiiiiiiiiiiii"]=function(){return(dynCall_iiiiiiffiiiiiiiiiiiiiii=Module["dynCall_iiiiiiffiiiiiiiiiiiiiii"]=Module["asm"]["dynCall_iiiiiiffiiiiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_viffii=Module["dynCall_viffii"]=function(){return(dynCall_viffii=Module["dynCall_viffii"]=Module["asm"]["dynCall_viffii"]).apply(null,arguments)};var dynCall_vififiii=Module["dynCall_vififiii"]=function(){return(dynCall_vififiii=Module["dynCall_vififiii"]=Module["asm"]["dynCall_vififiii"]).apply(null,arguments)};var dynCall_viififii=Module["dynCall_viififii"]=function(){return(dynCall_viififii=Module["dynCall_viififii"]=Module["asm"]["dynCall_viififii"]).apply(null,arguments)};var dynCall_fiffi=Module["dynCall_fiffi"]=function(){return(dynCall_fiffi=Module["dynCall_fiffi"]=Module["asm"]["dynCall_fiffi"]).apply(null,arguments)};var dynCall_viijji=Module["dynCall_viijji"]=function(){return(dynCall_viijji=Module["dynCall_viijji"]=Module["asm"]["dynCall_viijji"]).apply(null,arguments)};var dynCall_viiidi=Module["dynCall_viiidi"]=function(){return(dynCall_viiidi=Module["dynCall_viiidi"]=Module["asm"]["dynCall_viiidi"]).apply(null,arguments)};var dynCall_jijji=Module["dynCall_jijji"]=function(){return(dynCall_jijji=Module["dynCall_jijji"]=Module["asm"]["dynCall_jijji"]).apply(null,arguments)};var dynCall_viiffffi=Module["dynCall_viiffffi"]=function(){return(dynCall_viiffffi=Module["dynCall_viiffffi"]=Module["asm"]["dynCall_viiffffi"]).apply(null,arguments)};var dynCall_fifffi=Module["dynCall_fifffi"]=function(){return(dynCall_fifffi=Module["dynCall_fifffi"]=Module["asm"]["dynCall_fifffi"]).apply(null,arguments)};var dynCall_ifffi=Module["dynCall_ifffi"]=function(){return(dynCall_ifffi=Module["dynCall_ifffi"]=Module["asm"]["dynCall_ifffi"]).apply(null,arguments)};var dynCall_viffiii=Module["dynCall_viffiii"]=function(){return(dynCall_viffiii=Module["dynCall_viffiii"]=Module["asm"]["dynCall_viffiii"]).apply(null,arguments)};var dynCall_viffifi=Module["dynCall_viffifi"]=function(){return(dynCall_viffifi=Module["dynCall_viffifi"]=Module["asm"]["dynCall_viffifi"]).apply(null,arguments)};var dynCall_fiffffi=Module["dynCall_fiffffi"]=function(){return(dynCall_fiffffi=Module["dynCall_fiffffi"]=Module["asm"]["dynCall_fiffffi"]).apply(null,arguments)};var dynCall_fffffffi=Module["dynCall_fffffffi"]=function(){return(dynCall_fffffffi=Module["dynCall_fffffffi"]=Module["asm"]["dynCall_fffffffi"]).apply(null,arguments)};var 
dynCall_viiffifi=Module["dynCall_viiffifi"]=function(){return(dynCall_viiffifi=Module["dynCall_viiffifi"]=Module["asm"]["dynCall_viiffifi"]).apply(null,arguments)};var dynCall_viiiffiiiiiiiii=Module["dynCall_viiiffiiiiiiiii"]=function(){return(dynCall_viiiffiiiiiiiii=Module["dynCall_viiiffiiiiiiiii"]=Module["asm"]["dynCall_viiiffiiiiiiiii"]).apply(null,arguments)};var dynCall_viiiffiiiiii=Module["dynCall_viiiffiiiiii"]=function(){return(dynCall_viiiffiiiiii=Module["dynCall_viiiffiiiiii"]=Module["asm"]["dynCall_viiiffiiiiii"]).apply(null,arguments)};var dynCall_viiffiiiiiiiiii=Module["dynCall_viiffiiiiiiiiii"]=function(){return(dynCall_viiffiiiiiiiiii=Module["dynCall_viiffiiiiiiiiii"]=Module["asm"]["dynCall_viiffiiiiiiiiii"]).apply(null,arguments)};var dynCall_viiffiiiiiii=Module["dynCall_viiffiiiiiii"]=function(){return(dynCall_viiffiiiiiii=Module["dynCall_viiffiiiiiii"]=Module["asm"]["dynCall_viiffiiiiiii"]).apply(null,arguments)};var dynCall_iiiffiiii=Module["dynCall_iiiffiiii"]=function(){return(dynCall_iiiffiiii=Module["dynCall_iiiffiiii"]=Module["asm"]["dynCall_iiiffiiii"]).apply(null,arguments)};var dynCall_fffffi=Module["dynCall_fffffi"]=function(){return(dynCall_fffffi=Module["dynCall_fffffi"]=Module["asm"]["dynCall_fffffi"]).apply(null,arguments)};var dynCall_iiiiffiiii=Module["dynCall_iiiiffiiii"]=function(){return(dynCall_iiiiffiiii=Module["dynCall_iiiiffiiii"]=Module["asm"]["dynCall_iiiiffiiii"]).apply(null,arguments)};var dynCall_fiiiffi=Module["dynCall_fiiiffi"]=function(){return(dynCall_fiiiffi=Module["dynCall_fiiiffi"]=Module["asm"]["dynCall_fiiiffi"]).apply(null,arguments)};var dynCall_vjii=Module["dynCall_vjii"]=function(){return(dynCall_vjii=Module["dynCall_vjii"]=Module["asm"]["dynCall_vjii"]).apply(null,arguments)};var dynCall_viiiiiiiijiiii=Module["dynCall_viiiiiiiijiiii"]=function(){return(dynCall_viiiiiiiijiiii=Module["dynCall_viiiiiiiijiiii"]=Module["asm"]["dynCall_viiiiiiiijiiii"]).apply(null,arguments)};var dynCall_viiiiiifiiiiii=Module["dynCall_viiiiiifiiiiii"]=function(){return(dynCall_viiiiiifiiiiii=Module["dynCall_viiiiiifiiiiii"]=Module["asm"]["dynCall_viiiiiifiiiiii"]).apply(null,arguments)};var dynCall_viffffiii=Module["dynCall_viffffiii"]=function(){return(dynCall_viffffiii=Module["dynCall_viffffiii"]=Module["asm"]["dynCall_viffffiii"]).apply(null,arguments)};var dynCall_viifiii=Module["dynCall_viifiii"]=function(){return(dynCall_viifiii=Module["dynCall_viifiii"]=Module["asm"]["dynCall_viifiii"]).apply(null,arguments)};var dynCall_vifiiiiii=Module["dynCall_vifiiiiii"]=function(){return(dynCall_vifiiiiii=Module["dynCall_vifiiiiii"]=Module["asm"]["dynCall_vifiiiiii"]).apply(null,arguments)};var dynCall_ffii=Module["dynCall_ffii"]=function(){return(dynCall_ffii=Module["dynCall_ffii"]=Module["asm"]["dynCall_ffii"]).apply(null,arguments)};var dynCall_viifiiii=Module["dynCall_viifiiii"]=function(){return(dynCall_viifiiii=Module["dynCall_viifiiii"]=Module["asm"]["dynCall_viifiiii"]).apply(null,arguments)};var dynCall_fifii=Module["dynCall_fifii"]=function(){return(dynCall_fifii=Module["dynCall_fifii"]=Module["asm"]["dynCall_fifii"]).apply(null,arguments)};var dynCall_vifffii=Module["dynCall_vifffii"]=function(){return(dynCall_vifffii=Module["dynCall_vifffii"]=Module["asm"]["dynCall_vifffii"]).apply(null,arguments)};var dynCall_viiiffi=Module["dynCall_viiiffi"]=function(){return(dynCall_viiiffi=Module["dynCall_viiiffi"]=Module["asm"]["dynCall_viiiffi"]).apply(null,arguments)};var 
dynCall_viiifffi=Module["dynCall_viiifffi"]=function(){return(dynCall_viiifffi=Module["dynCall_viiifffi"]=Module["asm"]["dynCall_viiifffi"]).apply(null,arguments)};var dynCall_fiifii=Module["dynCall_fiifii"]=function(){return(dynCall_fiifii=Module["dynCall_fiifii"]=Module["asm"]["dynCall_fiifii"]).apply(null,arguments)};var dynCall_iiiifiiii=Module["dynCall_iiiifiiii"]=function(){return(dynCall_iiiifiiii=Module["dynCall_iiiifiiii"]=Module["asm"]["dynCall_iiiifiiii"]).apply(null,arguments)};var dynCall_viiiiiffi=Module["dynCall_viiiiiffi"]=function(){return(dynCall_viiiiiffi=Module["dynCall_viiiiiffi"]=Module["asm"]["dynCall_viiiiiffi"]).apply(null,arguments)};var dynCall_iifffi=Module["dynCall_iifffi"]=function(){return(dynCall_iifffi=Module["dynCall_iifffi"]=Module["asm"]["dynCall_iifffi"]).apply(null,arguments)};var dynCall_viijjii=Module["dynCall_viijjii"]=function(){return(dynCall_viijjii=Module["dynCall_viijjii"]=Module["asm"]["dynCall_viijjii"]).apply(null,arguments)};var dynCall_viiiiifii=Module["dynCall_viiiiifii"]=function(){return(dynCall_viiiiifii=Module["dynCall_viiiiifii"]=Module["asm"]["dynCall_viiiiifii"]).apply(null,arguments)};var dynCall_viiiffffi=Module["dynCall_viiiffffi"]=function(){return(dynCall_viiiffffi=Module["dynCall_viiiffffi"]=Module["asm"]["dynCall_viiiffffi"]).apply(null,arguments)};var dynCall_vidii=Module["dynCall_vidii"]=function(){return(dynCall_vidii=Module["dynCall_vidii"]=Module["asm"]["dynCall_vidii"]).apply(null,arguments)};var dynCall_vijiiiiiiii=Module["dynCall_vijiiiiiiii"]=function(){return(dynCall_vijiiiiiiii=Module["dynCall_vijiiiiiiii"]=Module["asm"]["dynCall_vijiiiiiiii"]).apply(null,arguments)};var dynCall_vijiiiijjjjji=Module["dynCall_vijiiiijjjjji"]=function(){return(dynCall_vijiiiijjjjji=Module["dynCall_vijiiiijjjjji"]=Module["asm"]["dynCall_vijiiiijjjjji"]).apply(null,arguments)};var dynCall_jiidi=Module["dynCall_jiidi"]=function(){return(dynCall_jiidi=Module["dynCall_jiidi"]=Module["asm"]["dynCall_jiidi"]).apply(null,arguments)};var dynCall_viidii=Module["dynCall_viidii"]=function(){return(dynCall_viidii=Module["dynCall_viidii"]=Module["asm"]["dynCall_viidii"]).apply(null,arguments)};var dynCall_viiidiii=Module["dynCall_viiidiii"]=function(){return(dynCall_viiidiii=Module["dynCall_viiidiii"]=Module["asm"]["dynCall_viiidiii"]).apply(null,arguments)};var dynCall_vijiiiiiii=Module["dynCall_vijiiiiiii"]=function(){return(dynCall_vijiiiiiii=Module["dynCall_vijiiiiiii"]=Module["asm"]["dynCall_vijiiiiiii"]).apply(null,arguments)};var dynCall_jjiiii=Module["dynCall_jjiiii"]=function(){return(dynCall_jjiiii=Module["dynCall_jjiiii"]=Module["asm"]["dynCall_jjiiii"]).apply(null,arguments)};var dynCall_jjiiiii=Module["dynCall_jjiiiii"]=function(){return(dynCall_jjiiiii=Module["dynCall_jjiiiii"]=Module["asm"]["dynCall_jjiiiii"]).apply(null,arguments)};var dynCall_viijiiiiii=Module["dynCall_viijiiiiii"]=function(){return(dynCall_viijiiiiii=Module["dynCall_viijiiiiii"]=Module["asm"]["dynCall_viijiiiiii"]).apply(null,arguments)};var dynCall_jijjji=Module["dynCall_jijjji"]=function(){return(dynCall_jijjji=Module["dynCall_jijjji"]=Module["asm"]["dynCall_jijjji"]).apply(null,arguments)};var dynCall_jijjjii=Module["dynCall_jijjjii"]=function(){return(dynCall_jijjjii=Module["dynCall_jijjjii"]=Module["asm"]["dynCall_jijjjii"]).apply(null,arguments)};var dynCall_jjiii=Module["dynCall_jjiii"]=function(){return(dynCall_jjiii=Module["dynCall_jjiii"]=Module["asm"]["dynCall_jjiii"]).apply(null,arguments)};var 
dynCall_ijijiiiii=Module["dynCall_ijijiiiii"]=function(){return(dynCall_ijijiiiii=Module["dynCall_ijijiiiii"]=Module["asm"]["dynCall_ijijiiiii"]).apply(null,arguments)};var dynCall_ijjjiii=Module["dynCall_ijjjiii"]=function(){return(dynCall_ijjjiii=Module["dynCall_ijjjiii"]=Module["asm"]["dynCall_ijjjiii"]).apply(null,arguments)};var dynCall_ijiii=Module["dynCall_ijiii"]=function(){return(dynCall_ijiii=Module["dynCall_ijiii"]=Module["asm"]["dynCall_ijiii"]).apply(null,arguments)};var dynCall_vijjjiijii=Module["dynCall_vijjjiijii"]=function(){return(dynCall_vijjjiijii=Module["dynCall_vijjjiijii"]=Module["asm"]["dynCall_vijjjiijii"]).apply(null,arguments)};var dynCall_ijjjiijii=Module["dynCall_ijjjiijii"]=function(){return(dynCall_ijjjiijii=Module["dynCall_ijjjiijii"]=Module["asm"]["dynCall_ijjjiijii"]).apply(null,arguments)};var dynCall_vijiiiiii=Module["dynCall_vijiiiiii"]=function(){return(dynCall_vijiiiiii=Module["dynCall_vijiiiiii"]=Module["asm"]["dynCall_vijiiiiii"]).apply(null,arguments)};var dynCall_jfi=Module["dynCall_jfi"]=function(){return(dynCall_jfi=Module["dynCall_jfi"]=Module["asm"]["dynCall_jfi"]).apply(null,arguments)};var dynCall_fji=Module["dynCall_fji"]=function(){return(dynCall_fji=Module["dynCall_fji"]=Module["asm"]["dynCall_fji"]).apply(null,arguments)};var dynCall_dfi=Module["dynCall_dfi"]=function(){return(dynCall_dfi=Module["dynCall_dfi"]=Module["asm"]["dynCall_dfi"]).apply(null,arguments)};var dynCall_jidii=Module["dynCall_jidii"]=function(){return(dynCall_jidii=Module["dynCall_jidii"]=Module["asm"]["dynCall_jidii"]).apply(null,arguments)};var dynCall_viiiiiiiji=Module["dynCall_viiiiiiiji"]=function(){return(dynCall_viiiiiiiji=Module["dynCall_viiiiiiiji"]=Module["asm"]["dynCall_viiiiiiiji"]).apply(null,arguments)};var dynCall_viiiiiiiiji=Module["dynCall_viiiiiiiiji"]=function(){return(dynCall_viiiiiiiiji=Module["dynCall_viiiiiiiiji"]=Module["asm"]["dynCall_viiiiiiiiji"]).apply(null,arguments)};var dynCall_viiiiiiiiiji=Module["dynCall_viiiiiiiiiji"]=function(){return(dynCall_viiiiiiiiiji=Module["dynCall_viiiiiiiiiji"]=Module["asm"]["dynCall_viiiiiiiiiji"]).apply(null,arguments)};var dynCall_ijiijii=Module["dynCall_ijiijii"]=function(){return(dynCall_ijiijii=Module["dynCall_ijiijii"]=Module["asm"]["dynCall_ijiijii"]).apply(null,arguments)};var dynCall_vjjiiiii=Module["dynCall_vjjiiiii"]=function(){return(dynCall_vjjiiiii=Module["dynCall_vjjiiiii"]=Module["asm"]["dynCall_vjjiiiii"]).apply(null,arguments)};var dynCall_vjjii=Module["dynCall_vjjii"]=function(){return(dynCall_vjjii=Module["dynCall_vjjii"]=Module["asm"]["dynCall_vjjii"]).apply(null,arguments)};var dynCall_ijiiji=Module["dynCall_ijiiji"]=function(){return(dynCall_ijiiji=Module["dynCall_ijiiji"]=Module["asm"]["dynCall_ijiiji"]).apply(null,arguments)};var dynCall_ijiiiii=Module["dynCall_ijiiiii"]=function(){return(dynCall_ijiiiii=Module["dynCall_ijiiiii"]=Module["asm"]["dynCall_ijiiiii"]).apply(null,arguments)};var dynCall_ijiiiiji=Module["dynCall_ijiiiiji"]=function(){return(dynCall_ijiiiiji=Module["dynCall_ijiiiiji"]=Module["asm"]["dynCall_ijiiiiji"]).apply(null,arguments)};var dynCall_jiiiiii=Module["dynCall_jiiiiii"]=function(){return(dynCall_jiiiiii=Module["dynCall_jiiiiii"]=Module["asm"]["dynCall_jiiiiii"]).apply(null,arguments)};var dynCall_ddii=Module["dynCall_ddii"]=function(){return(dynCall_ddii=Module["dynCall_ddii"]=Module["asm"]["dynCall_ddii"]).apply(null,arguments)};var 
dynCall_idiii=Module["dynCall_idiii"]=function(){return(dynCall_idiii=Module["dynCall_idiii"]=Module["asm"]["dynCall_idiii"]).apply(null,arguments)};var dynCall_ifiii=Module["dynCall_ifiii"]=function(){return(dynCall_ifiii=Module["dynCall_ifiii"]=Module["asm"]["dynCall_ifiii"]).apply(null,arguments)};var dynCall_ifiiiii=Module["dynCall_ifiiiii"]=function(){return(dynCall_ifiiiii=Module["dynCall_ifiiiii"]=Module["asm"]["dynCall_ifiiiii"]).apply(null,arguments)};var dynCall_jjjii=Module["dynCall_jjjii"]=function(){return(dynCall_jjjii=Module["dynCall_jjjii"]=Module["asm"]["dynCall_jjjii"]).apply(null,arguments)};var dynCall_vdiii=Module["dynCall_vdiii"]=function(){return(dynCall_vdiii=Module["dynCall_vdiii"]=Module["asm"]["dynCall_vdiii"]).apply(null,arguments)};var dynCall_jdii=Module["dynCall_jdii"]=function(){return(dynCall_jdii=Module["dynCall_jdii"]=Module["asm"]["dynCall_jdii"]).apply(null,arguments)};var dynCall_vijijji=Module["dynCall_vijijji"]=function(){return(dynCall_vijijji=Module["dynCall_vijijji"]=Module["asm"]["dynCall_vijijji"]).apply(null,arguments)};var dynCall_iijjji=Module["dynCall_iijjji"]=function(){return(dynCall_iijjji=Module["dynCall_iijjji"]=Module["asm"]["dynCall_iijjji"]).apply(null,arguments)};var dynCall_viijjji=Module["dynCall_viijjji"]=function(){return(dynCall_viijjji=Module["dynCall_viijjji"]=Module["asm"]["dynCall_viijjji"]).apply(null,arguments)};var dynCall_vdii=Module["dynCall_vdii"]=function(){return(dynCall_vdii=Module["dynCall_vdii"]=Module["asm"]["dynCall_vdii"]).apply(null,arguments)};var dynCall_diddi=Module["dynCall_diddi"]=function(){return(dynCall_diddi=Module["dynCall_diddi"]=Module["asm"]["dynCall_diddi"]).apply(null,arguments)};var dynCall_viiiijii=Module["dynCall_viiiijii"]=function(){return(dynCall_viiiijii=Module["dynCall_viiiijii"]=Module["asm"]["dynCall_viiiijii"]).apply(null,arguments)};var dynCall_viiijji=Module["dynCall_viiijji"]=function(){return(dynCall_viiijji=Module["dynCall_viiijji"]=Module["asm"]["dynCall_viiijji"]).apply(null,arguments)};var dynCall_iijjii=Module["dynCall_iijjii"]=function(){return(dynCall_iijjii=Module["dynCall_iijjii"]=Module["asm"]["dynCall_iijjii"]).apply(null,arguments)};var dynCall_viijijii=Module["dynCall_viijijii"]=function(){return(dynCall_viijijii=Module["dynCall_viijijii"]=Module["asm"]["dynCall_viijijii"]).apply(null,arguments)};var dynCall_viijijiii=Module["dynCall_viijijiii"]=function(){return(dynCall_viijijiii=Module["dynCall_viijijiii"]=Module["asm"]["dynCall_viijijiii"]).apply(null,arguments)};var dynCall_vijiji=Module["dynCall_vijiji"]=function(){return(dynCall_vijiji=Module["dynCall_vijiji"]=Module["asm"]["dynCall_vijiji"]).apply(null,arguments)};var dynCall_viijiijiii=Module["dynCall_viijiijiii"]=function(){return(dynCall_viijiijiii=Module["dynCall_viijiijiii"]=Module["asm"]["dynCall_viijiijiii"]).apply(null,arguments)};var dynCall_viiiijiiii=Module["dynCall_viiiijiiii"]=function(){return(dynCall_viiiijiiii=Module["dynCall_viiiijiiii"]=Module["asm"]["dynCall_viiiijiiii"]).apply(null,arguments)};var dynCall_di=Module["dynCall_di"]=function(){return(dynCall_di=Module["dynCall_di"]=Module["asm"]["dynCall_di"]).apply(null,arguments)};var dynCall_jiiiiiiiii=Module["dynCall_jiiiiiiiii"]=function(){return(dynCall_jiiiiiiiii=Module["dynCall_jiiiiiiiii"]=Module["asm"]["dynCall_jiiiiiiiii"]).apply(null,arguments)};var dynCall_jiiiiiiiiii=Module["dynCall_jiiiiiiiiii"]=function(){return(dynCall_jiiiiiiiiii=Module["dynCall_jiiiiiiiiii"]=Module["asm"]["dynCall_jiiiiiiiiii"]).apply(null,arguments)};var 
dynCall_iiiiijii=Module["dynCall_iiiiijii"]=function(){return(dynCall_iiiiijii=Module["dynCall_iiiiijii"]=Module["asm"]["dynCall_iiiiijii"]).apply(null,arguments)};var dynCall_iiiiidii=Module["dynCall_iiiiidii"]=function(){return(dynCall_iiiiidii=Module["dynCall_iiiiidii"]=Module["asm"]["dynCall_iiiiidii"]).apply(null,arguments)};var dynCall_iiiiifii=Module["dynCall_iiiiifii"]=function(){return(dynCall_iiiiifii=Module["dynCall_iiiiifii"]=Module["asm"]["dynCall_iiiiifii"]).apply(null,arguments)};var dynCall_iiidiii=Module["dynCall_iiidiii"]=function(){return(dynCall_iiidiii=Module["dynCall_iiidiii"]=Module["asm"]["dynCall_iiidiii"]).apply(null,arguments)};var dynCall_viifffiii=Module["dynCall_viifffiii"]=function(){return(dynCall_viifffiii=Module["dynCall_viifffiii"]=Module["asm"]["dynCall_viifffiii"]).apply(null,arguments)};var dynCall_iiiiffiiiji=Module["dynCall_iiiiffiiiji"]=function(){return(dynCall_iiiiffiiiji=Module["dynCall_iiiiffiiiji"]=Module["asm"]["dynCall_iiiiffiiiji"]).apply(null,arguments)};var dynCall_jiiiiiii=Module["dynCall_jiiiiiii"]=function(){return(dynCall_jiiiiiii=Module["dynCall_jiiiiiii"]=Module["asm"]["dynCall_jiiiiiii"]).apply(null,arguments)};var dynCall_iiiiffiiiii=Module["dynCall_iiiiffiiiii"]=function(){return(dynCall_iiiiffiiiii=Module["dynCall_iiiiffiiiii"]=Module["asm"]["dynCall_iiiiffiiiii"]).apply(null,arguments)};var dynCall_diiiidi=Module["dynCall_diiiidi"]=function(){return(dynCall_diiiidi=Module["dynCall_diiiidi"]=Module["asm"]["dynCall_diiiidi"]).apply(null,arguments)};var dynCall_jiiiiji=Module["dynCall_jiiiiji"]=function(){return(dynCall_jiiiiji=Module["dynCall_jiiiiji"]=Module["asm"]["dynCall_jiiiiji"]).apply(null,arguments)};var dynCall_fiiiifi=Module["dynCall_fiiiifi"]=function(){return(dynCall_fiiiifi=Module["dynCall_fiiiifi"]=Module["asm"]["dynCall_fiiiifi"]).apply(null,arguments)};var dynCall_iiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiii"]=function(){return(dynCall_iiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiii"]=Module["asm"]["dynCall_iiiiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_iiiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiiii"]=function(){return(dynCall_iiiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiiii"]=Module["asm"]["dynCall_iiiiiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_iiiiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiiiii"]=function(){return(dynCall_iiiiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiiiii"]=Module["asm"]["dynCall_iiiiiiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_iiiiiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiiiiii"]=function(){return(dynCall_iiiiiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiiiiii"]=Module["asm"]["dynCall_iiiiiiiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_iiiiiiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiiiiiii"]=function(){return(dynCall_iiiiiiiiiiiiiiiiiii=Module["dynCall_iiiiiiiiiiiiiiiiiii"]=Module["asm"]["dynCall_iiiiiiiiiiiiiiiiiii"]).apply(null,arguments)};var dynCall_iiidi=Module["dynCall_iiidi"]=function(){return(dynCall_iiidi=Module["dynCall_iiidi"]=Module["asm"]["dynCall_iiidi"]).apply(null,arguments)};var dynCall_iiijjii=Module["dynCall_iiijjii"]=function(){return(dynCall_iiijjii=Module["dynCall_iiijjii"]=Module["asm"]["dynCall_iiijjii"]).apply(null,arguments)};var dynCall_ijiiiiii=Module["dynCall_ijiiiiii"]=function(){return(dynCall_ijiiiiii=Module["dynCall_ijiiiiii"]=Module["asm"]["dynCall_ijiiiiii"]).apply(null,arguments)};var 
dynCall_ijjiiiiii=Module["dynCall_ijjiiiiii"]=function(){return(dynCall_ijjiiiiii=Module["dynCall_ijjiiiiii"]=Module["asm"]["dynCall_ijjiiiiii"]).apply(null,arguments)};var dynCall_vdi=Module["dynCall_vdi"]=function(){return(dynCall_vdi=Module["dynCall_vdi"]=Module["asm"]["dynCall_vdi"]).apply(null,arguments)};var dynCall_vfi=Module["dynCall_vfi"]=function(){return(dynCall_vfi=Module["dynCall_vfi"]=Module["asm"]["dynCall_vfi"]).apply(null,arguments)};var dynCall_fff=Module["dynCall_fff"]=function(){return(dynCall_fff=Module["dynCall_fff"]=Module["asm"]["dynCall_fff"]).apply(null,arguments)};var dynCall_vif=Module["dynCall_vif"]=function(){return(dynCall_vif=Module["dynCall_vif"]=Module["asm"]["dynCall_vif"]).apply(null,arguments)};var dynCall_viif=Module["dynCall_viif"]=function(){return(dynCall_viif=Module["dynCall_viif"]=Module["asm"]["dynCall_viif"]).apply(null,arguments)};var dynCall_ijj=Module["dynCall_ijj"]=function(){return(dynCall_ijj=Module["dynCall_ijj"]=Module["asm"]["dynCall_ijj"]).apply(null,arguments)};var dynCall_vjji=Module["dynCall_vjji"]=function(){return(dynCall_vjji=Module["dynCall_vjji"]=Module["asm"]["dynCall_vjji"]).apply(null,arguments)};var dynCall_viffff=Module["dynCall_viffff"]=function(){return(dynCall_viffff=Module["dynCall_viffff"]=Module["asm"]["dynCall_viffff"]).apply(null,arguments)};var dynCall_vid=Module["dynCall_vid"]=function(){return(dynCall_vid=Module["dynCall_vid"]=Module["asm"]["dynCall_vid"]).apply(null,arguments)};var dynCall_viiiiif=Module["dynCall_viiiiif"]=function(){return(dynCall_viiiiif=Module["dynCall_viiiiif"]=Module["asm"]["dynCall_viiiiif"]).apply(null,arguments)};var dynCall_viiiif=Module["dynCall_viiiif"]=function(){return(dynCall_viiiif=Module["dynCall_viiiif"]=Module["asm"]["dynCall_viiiif"]).apply(null,arguments)};var dynCall_viiiiiif=Module["dynCall_viiiiiif"]=function(){return(dynCall_viiiiiif=Module["dynCall_viiiiiif"]=Module["asm"]["dynCall_viiiiiif"]).apply(null,arguments)};var dynCall_iiiijiii=Module["dynCall_iiiijiii"]=function(){return(dynCall_iiiijiii=Module["dynCall_iiiijiii"]=Module["asm"]["dynCall_iiiijiii"]).apply(null,arguments)};var dynCall_iiiij=Module["dynCall_iiiij"]=function(){return(dynCall_iiiij=Module["dynCall_iiiij"]=Module["asm"]["dynCall_iiiij"]).apply(null,arguments)};var dynCall_iiif=Module["dynCall_iiif"]=function(){return(dynCall_iiif=Module["dynCall_iiif"]=Module["asm"]["dynCall_iiif"]).apply(null,arguments)};var dynCall_fif=Module["dynCall_fif"]=function(){return(dynCall_fif=Module["dynCall_fif"]=Module["asm"]["dynCall_fif"]).apply(null,arguments)};var dynCall_iiiiiifff=Module["dynCall_iiiiiifff"]=function(){return(dynCall_iiiiiifff=Module["dynCall_iiiiiifff"]=Module["asm"]["dynCall_iiiiiifff"]).apply(null,arguments)};var dynCall_iiiiiifiif=Module["dynCall_iiiiiifiif"]=function(){return(dynCall_iiiiiifiif=Module["dynCall_iiiiiifiif"]=Module["asm"]["dynCall_iiiiiifiif"]).apply(null,arguments)};var dynCall_iiiiiifiii=Module["dynCall_iiiiiifiii"]=function(){return(dynCall_iiiiiifiii=Module["dynCall_iiiiiifiii"]=Module["asm"]["dynCall_iiiiiifiii"]).apply(null,arguments)};var dynCall_iiiiiiifiif=Module["dynCall_iiiiiiifiif"]=function(){return(dynCall_iiiiiiifiif=Module["dynCall_iiiiiiifiif"]=Module["asm"]["dynCall_iiiiiiifiif"]).apply(null,arguments)};var dynCall_fiff=Module["dynCall_fiff"]=function(){return(dynCall_fiff=Module["dynCall_fiff"]=Module["asm"]["dynCall_fiff"]).apply(null,arguments)};var 
dynCall_fiiiiiifiifif=Module["dynCall_fiiiiiifiifif"]=function(){return(dynCall_fiiiiiifiifif=Module["dynCall_fiiiiiifiifif"]=Module["asm"]["dynCall_fiiiiiifiifif"]).apply(null,arguments)};var dynCall_fiiiiiifiiiif=Module["dynCall_fiiiiiifiiiif"]=function(){return(dynCall_fiiiiiifiiiif=Module["dynCall_fiiiiiifiiiif"]=Module["asm"]["dynCall_fiiiiiifiiiif"]).apply(null,arguments)};var dynCall_vifiiii=Module["dynCall_vifiiii"]=function(){return(dynCall_vifiiii=Module["dynCall_vifiiii"]=Module["asm"]["dynCall_vifiiii"]).apply(null,arguments)};var dynCall_iifiiiijii=Module["dynCall_iifiiiijii"]=function(){return(dynCall_iifiiiijii=Module["dynCall_iifiiiijii"]=Module["asm"]["dynCall_iifiiiijii"]).apply(null,arguments)};var dynCall_vifif=Module["dynCall_vifif"]=function(){return(dynCall_vifif=Module["dynCall_vifif"]=Module["asm"]["dynCall_vifif"]).apply(null,arguments)};var dynCall_vifijii=Module["dynCall_vifijii"]=function(){return(dynCall_vifijii=Module["dynCall_vifijii"]=Module["asm"]["dynCall_vifijii"]).apply(null,arguments)};var dynCall_iiiifffiii=Module["dynCall_iiiifffiii"]=function(){return(dynCall_iiiifffiii=Module["dynCall_iiiifffiii"]=Module["asm"]["dynCall_iiiifffiii"]).apply(null,arguments)};var dynCall_iiiifffffi=Module["dynCall_iiiifffffi"]=function(){return(dynCall_iiiifffffi=Module["dynCall_iiiifffffi"]=Module["asm"]["dynCall_iiiifffffi"]).apply(null,arguments)};var dynCall_viffiiiif=Module["dynCall_viffiiiif"]=function(){return(dynCall_viffiiiif=Module["dynCall_viffiiiif"]=Module["asm"]["dynCall_viffiiiif"]).apply(null,arguments)};var dynCall_viffiifffffiii=Module["dynCall_viffiifffffiii"]=function(){return(dynCall_viffiifffffiii=Module["dynCall_viffiifffffiii"]=Module["asm"]["dynCall_viffiifffffiii"]).apply(null,arguments)};var dynCall_viffffiifffiiiiif=Module["dynCall_viffffiifffiiiiif"]=function(){return(dynCall_viffffiifffiiiiif=Module["dynCall_viffffiifffiiiiif"]=Module["asm"]["dynCall_viffffiifffiiiiif"]).apply(null,arguments)};var dynCall_iiiifffffii=Module["dynCall_iiiifffffii"]=function(){return(dynCall_iiiifffffii=Module["dynCall_iiiifffffii"]=Module["asm"]["dynCall_iiiifffffii"]).apply(null,arguments)};var dynCall_viiiiiiiiiiifii=Module["dynCall_viiiiiiiiiiifii"]=function(){return(dynCall_viiiiiiiiiiifii=Module["dynCall_viiiiiiiiiiifii"]=Module["asm"]["dynCall_viiiiiiiiiiifii"]).apply(null,arguments)};var dynCall_viff=Module["dynCall_viff"]=function(){return(dynCall_viff=Module["dynCall_viff"]=Module["asm"]["dynCall_viff"]).apply(null,arguments)};var dynCall_iiiifiiiii=Module["dynCall_iiiifiiiii"]=function(){return(dynCall_iiiifiiiii=Module["dynCall_iiiifiiiii"]=Module["asm"]["dynCall_iiiifiiiii"]).apply(null,arguments)};var dynCall_iiiiifiiiiif=Module["dynCall_iiiiifiiiiif"]=function(){return(dynCall_iiiiifiiiiif=Module["dynCall_iiiiifiiiiif"]=Module["asm"]["dynCall_iiiiifiiiiif"]).apply(null,arguments)};var dynCall_viiff=Module["dynCall_viiff"]=function(){return(dynCall_viiff=Module["dynCall_viiff"]=Module["asm"]["dynCall_viiff"]).apply(null,arguments)};var dynCall_viifffi=Module["dynCall_viifffi"]=function(){return(dynCall_viifffi=Module["dynCall_viifffi"]=Module["asm"]["dynCall_viifffi"]).apply(null,arguments)};var dynCall_viiifiiiii=Module["dynCall_viiifiiiii"]=function(){return(dynCall_viiifiiiii=Module["dynCall_viiifiiiii"]=Module["asm"]["dynCall_viiifiiiii"]).apply(null,arguments)};var 
dynCall_viiiifiiiiif=Module["dynCall_viiiifiiiiif"]=function(){return(dynCall_viiiifiiiiif=Module["dynCall_viiiifiiiiif"]=Module["asm"]["dynCall_viiiifiiiiif"]).apply(null,arguments)};var dynCall_iifff=Module["dynCall_iifff"]=function(){return(dynCall_iifff=Module["dynCall_iifff"]=Module["asm"]["dynCall_iifff"]).apply(null,arguments)};var dynCall_viiifiii=Module["dynCall_viiifiii"]=function(){return(dynCall_viiifiii=Module["dynCall_viiifiii"]=Module["asm"]["dynCall_viiifiii"]).apply(null,arguments)};var dynCall_iif=Module["dynCall_iif"]=function(){return(dynCall_iif=Module["dynCall_iif"]=Module["asm"]["dynCall_iif"]).apply(null,arguments)};var dynCall_viij=Module["dynCall_viij"]=function(){return(dynCall_viij=Module["dynCall_viij"]=Module["asm"]["dynCall_viij"]).apply(null,arguments)};var dynCall_viijijj=Module["dynCall_viijijj"]=function(){return(dynCall_viijijj=Module["dynCall_viijijj"]=Module["asm"]["dynCall_viijijj"]).apply(null,arguments)};var dynCall_viijj=Module["dynCall_viijj"]=function(){return(dynCall_viijj=Module["dynCall_viijj"]=Module["asm"]["dynCall_viijj"]).apply(null,arguments)};var dynCall_viiiij=Module["dynCall_viiiij"]=function(){return(dynCall_viiiij=Module["dynCall_viiiij"]=Module["asm"]["dynCall_viiiij"]).apply(null,arguments)};var dynCall_iiijji=Module["dynCall_iiijji"]=function(){return(dynCall_iiijji=Module["dynCall_iiijji"]=Module["asm"]["dynCall_iiijji"]).apply(null,arguments)};var dynCall_ijjiiiii=Module["dynCall_ijjiiiii"]=function(){return(dynCall_ijjiiiii=Module["dynCall_ijjiiiii"]=Module["asm"]["dynCall_ijjiiiii"]).apply(null,arguments)};var dynCall_vidd=Module["dynCall_vidd"]=function(){return(dynCall_vidd=Module["dynCall_vidd"]=Module["asm"]["dynCall_vidd"]).apply(null,arguments)};var dynCall_iiiiiifffiiifiii=Module["dynCall_iiiiiifffiiifiii"]=function(){return(dynCall_iiiiiifffiiifiii=Module["dynCall_iiiiiifffiiifiii"]=Module["asm"]["dynCall_iiiiiifffiiifiii"]).apply(null,arguments)};var dynCall_viid=Module["dynCall_viid"]=function(){return(dynCall_viid=Module["dynCall_viid"]=Module["asm"]["dynCall_viid"]).apply(null,arguments)};var dynCall_viiif=Module["dynCall_viiif"]=function(){return(dynCall_viiif=Module["dynCall_viiif"]=Module["asm"]["dynCall_viiif"]).apply(null,arguments)};var dynCall_iiiiiff=Module["dynCall_iiiiiff"]=function(){return(dynCall_iiiiiff=Module["dynCall_iiiiiff"]=Module["asm"]["dynCall_iiiiiff"]).apply(null,arguments)};var dynCall_iiij=Module["dynCall_iiij"]=function(){return(dynCall_iiij=Module["dynCall_iiij"]=Module["asm"]["dynCall_iiij"]).apply(null,arguments)};var dynCall_vf=Module["dynCall_vf"]=function(){return(dynCall_vf=Module["dynCall_vf"]=Module["asm"]["dynCall_vf"]).apply(null,arguments)};var dynCall_vffff=Module["dynCall_vffff"]=function(){return(dynCall_vffff=Module["dynCall_vffff"]=Module["asm"]["dynCall_vffff"]).apply(null,arguments)};var dynCall_vff=Module["dynCall_vff"]=function(){return(dynCall_vff=Module["dynCall_vff"]=Module["asm"]["dynCall_vff"]).apply(null,arguments)};var dynCall_vifff=Module["dynCall_vifff"]=function(){return(dynCall_vifff=Module["dynCall_vifff"]=Module["asm"]["dynCall_vifff"]).apply(null,arguments)};var dynCall_viifff=Module["dynCall_viifff"]=function(){return(dynCall_viifff=Module["dynCall_viifff"]=Module["asm"]["dynCall_viifff"]).apply(null,arguments)};var dynCall_vij=Module["dynCall_vij"]=function(){return(dynCall_vij=Module["dynCall_vij"]=Module["asm"]["dynCall_vij"]).apply(null,arguments)};var 
dynCall_ij=Module["dynCall_ij"]=function(){return(dynCall_ij=Module["dynCall_ij"]=Module["asm"]["dynCall_ij"]).apply(null,arguments)};var dynCall_f=Module["dynCall_f"]=function(){return(dynCall_f=Module["dynCall_f"]=Module["asm"]["dynCall_f"]).apply(null,arguments)};var dynCall_vfff=Module["dynCall_vfff"]=function(){return(dynCall_vfff=Module["dynCall_vfff"]=Module["asm"]["dynCall_vfff"]).apply(null,arguments)};var dynCall_vffffffi=Module["dynCall_vffffffi"]=function(){return(dynCall_vffffffi=Module["dynCall_vffffffi"]=Module["asm"]["dynCall_vffffffi"]).apply(null,arguments)};var dynCall_ff=Module["dynCall_ff"]=function(){return(dynCall_ff=Module["dynCall_ff"]=Module["asm"]["dynCall_ff"]).apply(null,arguments)};var dynCall_iiiiiiffiiiiiiiiiffffiii=Module["dynCall_iiiiiiffiiiiiiiiiffffiii"]=function(){return(dynCall_iiiiiiffiiiiiiiiiffffiii=Module["dynCall_iiiiiiffiiiiiiiiiffffiii"]=Module["asm"]["dynCall_iiiiiiffiiiiiiiiiffffiii"]).apply(null,arguments)};var dynCall_viififi=Module["dynCall_viififi"]=function(){return(dynCall_viififi=Module["dynCall_viififi"]=Module["asm"]["dynCall_viififi"]).apply(null,arguments)};var dynCall_if=Module["dynCall_if"]=function(){return(dynCall_if=Module["dynCall_if"]=Module["asm"]["dynCall_if"]).apply(null,arguments)};var dynCall_viiffiiiiiiiii=Module["dynCall_viiffiiiiiiiii"]=function(){return(dynCall_viiffiiiiiiiii=Module["dynCall_viiffiiiiiiiii"]=Module["asm"]["dynCall_viiffiiiiiiiii"]).apply(null,arguments)};var dynCall_viiffiiiiii=Module["dynCall_viiffiiiiii"]=function(){return(dynCall_viiffiiiiii=Module["dynCall_viiffiiiiii"]=Module["asm"]["dynCall_viiffiiiiii"]).apply(null,arguments)};var dynCall_viiiiiiiijiii=Module["dynCall_viiiiiiiijiii"]=function(){return(dynCall_viiiiiiiijiii=Module["dynCall_viiiiiiiijiii"]=Module["asm"]["dynCall_viiiiiiiijiii"]).apply(null,arguments)};function invoke_ii(index,a1){var sp=stackSave();try{return dynCall_ii(index,a1)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_v(index){var sp=stackSave();try{dynCall_v(index)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_vii(index,a1,a2){var sp=stackSave();try{dynCall_vii(index,a1,a2)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iii(index,a1,a2){var sp=stackSave();try{return dynCall_iii(index,a1,a2)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_vi(index,a1){var sp=stackSave();try{dynCall_vi(index,a1)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiii(index,a1,a2,a3,a4){var sp=stackSave();try{return dynCall_iiiii(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiii(index,a1,a2,a3){var sp=stackSave();try{return dynCall_iiii(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiiii(index,a1,a2,a3,a4,a5){var sp=stackSave();try{return dynCall_iiiiii(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viii(index,a1,a2,a3){var sp=stackSave();try{dynCall_viii(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_i(index){var sp=stackSave();try{return dynCall_i(index)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiii(index,a1,a2,a3,a4){var sp=stackSave();try{dynCall_viiii(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiiiii(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{return 
dynCall_iiiiiii(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiiiiii(index,a1,a2,a3,a4,a5,a6,a7){var sp=stackSave();try{return dynCall_iiiiiiii(index,a1,a2,a3,a4,a5,a6,a7)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10){var sp=stackSave();try{return dynCall_iiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_fiii(index,a1,a2,a3){var sp=stackSave();try{return dynCall_fiii(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_diii(index,a1,a2,a3){var sp=stackSave();try{return dynCall_diii(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiiiiii(index,a1,a2,a3,a4,a5,a6,a7){var sp=stackSave();try{dynCall_viiiiiii(index,a1,a2,a3,a4,a5,a6,a7)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11){var sp=stackSave();try{return dynCall_iiiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10){var sp=stackSave();try{dynCall_viiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiiiiiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13,a14,a15){var sp=stackSave();try{dynCall_viiiiiiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13,a14,a15)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiiiii(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{dynCall_viiiiii(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiiii(index,a1,a2,a3,a4,a5){var sp=stackSave();try{dynCall_viiiii(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8){var sp=stackSave();try{dynCall_viiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8){var sp=stackSave();try{return dynCall_iiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_ddiii(index,a1,a2,a3,a4){var sp=stackSave();try{return dynCall_ddiii(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiifii(index,a1,a2,a3,a4,a5){var sp=stackSave();try{return dynCall_iiifii(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viifi(index,a1,a2,a3,a4){var sp=stackSave();try{dynCall_viifi(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viffi(index,a1,a2,a3,a4){var sp=stackSave();try{dynCall_viffi(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_vidi(index,a1,a2,a3){var sp=stackSave();try{dynCall_vidi(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viidi(index,a1,a2,a3,a4){var sp=stackSave();try{dynCall_viidi(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_dii(index,a1,a2){var sp=stackSave();try{return dynCall_dii(index,a1,a2)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function 
invoke_viiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9){var sp=stackSave();try{dynCall_viiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiffi(index,a1,a2,a3,a4,a5){var sp=stackSave();try{dynCall_viiffi(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_vifi(index,a1,a2,a3){var sp=stackSave();try{dynCall_vifi(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_fii(index,a1,a2){var sp=stackSave();try{return dynCall_fii(index,a1,a2)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iifi(index,a1,a2,a3){var sp=stackSave();try{return dynCall_iifi(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_fffi(index,a1,a2,a3){var sp=stackSave();try{return dynCall_fffi(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_dddi(index,a1,a2,a3){var sp=stackSave();try{return dynCall_dddi(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iidi(index,a1,a2,a3){var sp=stackSave();try{return dynCall_iidi(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_diiii(index,a1,a2,a3,a4){var sp=stackSave();try{return dynCall_diiii(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9){var sp=stackSave();try{return dynCall_iiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_idi(index,a1,a2){var sp=stackSave();try{return dynCall_idi(index,a1,a2)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viifii(index,a1,a2,a3,a4,a5){var sp=stackSave();try{dynCall_viifii(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_fi(index,a1){var sp=stackSave();try{return dynCall_fi(index,a1)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiifi(index,a1,a2,a3,a4){var sp=stackSave();try{return dynCall_iiifi(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiiifi(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{dynCall_viiiifi(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiifii(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{return dynCall_iiiifii(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_ifi(index,a1,a2){var sp=stackSave();try{return dynCall_ifi(index,a1,a2)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12){var sp=stackSave();try{dynCall_viiiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_vifii(index,a1,a2,a3,a4){var sp=stackSave();try{dynCall_vifii(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiidii(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{return dynCall_iiiidii(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiifi(index,a1,a2,a3,a4,a5){var sp=stackSave();try{dynCall_viiifi(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function 
invoke_viiiiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13){var sp=stackSave();try{dynCall_viiiiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiiiiiiifi(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10){var sp=stackSave();try{return dynCall_iiiiiiiiifi(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_vidd(index,a1,a2,a3){var sp=stackSave();try{dynCall_vidd(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iij(index,a1,a2,a3){var sp=stackSave();try{return dynCall_iij(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiijiii(index,a1,a2,a3,a4,a5,a6,a7){var sp=stackSave();try{return dynCall_iiijiii(index,a1,a2,a3,a4,a5,a6,a7)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_jii(index,a1,a2){var sp=stackSave();try{return dynCall_jii(index,a1,a2)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_jiiii(index,a1,a2,a3,a4){var sp=stackSave();try{return dynCall_jiiii(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_ji(index,a1){var sp=stackSave();try{return dynCall_ji(index,a1)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viijii(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{dynCall_viijii(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiiij(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{return dynCall_iiiiij(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_j(index){var sp=stackSave();try{return dynCall_j(index)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iijiii(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{return dynCall_iijiii(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_jiii(index,a1,a2,a3){var sp=stackSave();try{return dynCall_jiii(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_vijii(index,a1,a2,a3,a4,a5){var sp=stackSave();try{dynCall_vijii(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viji(index,a1,a2,a3,a4){var sp=stackSave();try{dynCall_viji(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiji(index,a1,a2,a3,a4,a5){var sp=stackSave();try{dynCall_viiji(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiijii(index,a1,a2,a3,a4,a5,a6,a7){var sp=stackSave();try{return dynCall_iiiijii(index,a1,a2,a3,a4,a5,a6,a7)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_jijii(index,a1,a2,a3,a4,a5){var sp=stackSave();try{return dynCall_jijii(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_jjji(index,a1,a2,a3,a4,a5){var sp=stackSave();try{return dynCall_jjji(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viijiiijiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13){var sp=stackSave();try{dynCall_viijiiijiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_ijji(index,a1,a2,a3,a4,a5){var sp=stackSave();try{return 
dynCall_ijji(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_jji(index,a1,a2,a3){var sp=stackSave();try{return dynCall_jji(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_viiiji(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{dynCall_viiiji(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_jdi(index,a1,a2){var sp=stackSave();try{return dynCall_jdi(index,a1,a2)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_vijjji(index,a1,a2,a3,a4,a5,a6,a7,a8){var sp=stackSave();try{dynCall_vijjji(index,a1,a2,a3,a4,a5,a6,a7,a8)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiji(index,a1,a2,a3,a4){var sp=stackSave();try{return dynCall_iiji(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_jjii(index,a1,a2,a3,a4){var sp=stackSave();try{return dynCall_jjii(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_dji(index,a1,a2,a3){var sp=stackSave();try{return dynCall_dji(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iji(index,a1,a2,a3){var sp=stackSave();try{return dynCall_iji(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_jiidi(index,a1,a2,a3,a4){var sp=stackSave();try{return dynCall_jiidi(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iijii(index,a1,a2,a3,a4,a5){var sp=stackSave();try{return dynCall_iijii(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiijjii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9){var sp=stackSave();try{return dynCall_iiiijjii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iijjiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11){var sp=stackSave();try{return dynCall_iijjiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iijiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9){var sp=stackSave();try{return dynCall_iijiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iiiiiiiiiji(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11){var sp=stackSave();try{return dynCall_iiiiiiiiiji(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_vji(index,a1,a2,a3){var sp=stackSave();try{dynCall_vji(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_jiji(index,a1,a2,a3,a4){var sp=stackSave();try{return dynCall_jiji(index,a1,a2,a3,a4)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_iijji(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{return dynCall_iijji(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_jidi(index,a1,a2,a3){var sp=stackSave();try{return dynCall_jidi(index,a1,a2,a3)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_vijiii(index,a1,a2,a3,a4,a5,a6){var sp=stackSave();try{dynCall_vijiii(index,a1,a2,a3,a4,a5,a6)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_vjjjiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10){var 
sp=stackSave();try{dynCall_vjjjiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_vjiiiii(index,a1,a2,a3,a4,a5,a6,a7){var sp=stackSave();try{dynCall_vjiiiii(index,a1,a2,a3,a4,a5,a6,a7)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_jiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10){var sp=stackSave();try{return dynCall_jiiiiiiiiii(index,a1,a2,a3,a4,a5,a6,a7,a8,a9,a10)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}function invoke_jiiiii(index,a1,a2,a3,a4,a5){var sp=stackSave();try{return dynCall_jiiiii(index,a1,a2,a3,a4,a5)}catch(e){stackRestore(sp);if(e!==e+0)throw e;_setThrew(1,0)}}Module["ccall"]=ccall;Module["cwrap"]=cwrap;Module["stackTrace"]=stackTrace;Module["addRunDependency"]=addRunDependency;Module["removeRunDependency"]=removeRunDependency;Module["FS_createPath"]=FS.createPath;Module["FS_createDataFile"]=FS.createDataFile;Module["stackTrace"]=stackTrace;var calledRun;function ExitStatus(status){this.name="ExitStatus";this.message="Program terminated with exit("+status+")";this.status=status}var calledMain=false;dependenciesFulfilled=function runCaller(){if(!calledRun)run();if(!calledRun)dependenciesFulfilled=runCaller};function callMain(args){var entryFunction=Module["_main"];args=args||[];var argc=args.length+1;var argv=stackAlloc((argc+1)*4);HEAP32[argv>>2]=allocateUTF8OnStack(thisProgram);for(var i=1;i>2)+i]=allocateUTF8OnStack(args[i-1])}HEAP32[(argv>>2)+argc]=0;try{var ret=entryFunction(argc,argv);exit(ret,true);return ret}catch(e){return handleException(e)}finally{calledMain=true}}function run(args){args=args||arguments_;if(runDependencies>0){return}preRun();if(runDependencies>0){return}function doRun(){if(calledRun)return;calledRun=true;Module["calledRun"]=true;if(ABORT)return;initRuntime();preMain();readyPromiseResolve(Module);if(Module["onRuntimeInitialized"])Module["onRuntimeInitialized"]();if(shouldRunNow)callMain(args);postRun()}if(Module["setStatus"]){Module["setStatus"]("Running...");setTimeout(function(){setTimeout(function(){Module["setStatus"]("")},1);doRun()},1)}else{doRun()}}Module["run"]=run;function exit(status,implicit){EXITSTATUS=status;procExit(status)}function procExit(code){EXITSTATUS=code;if(!keepRuntimeAlive()){if(Module["onExit"])Module["onExit"](code);ABORT=true}quit_(code,new ExitStatus(code))}if(Module["preInit"]){if(typeof Module["preInit"]=="function")Module["preInit"]=[Module["preInit"]];while(Module["preInit"].length>0){Module["preInit"].pop()()}}var shouldRunNow=true;if(Module["noInitialRun"])shouldRunNow=false;run(); - - - return unityFramework.ready -} -); -})(); -if (typeof exports === 'object' && typeof module === 'object') - module.exports = unityFramework; -else if (typeof define === 'function' && define['amd']) - define([], function() { return unityFramework; }); -else if (typeof exports === 'object') - exports["unityFramework"] = unityFramework; diff --git a/spaces/DiffusionArtco/AnimeTop50/app.py b/spaces/DiffusionArtco/AnimeTop50/app.py deleted file mode 100644 index 14a95e466bb8504f17080cb514f34114e9606f3c..0000000000000000000000000000000000000000 --- a/spaces/DiffusionArtco/AnimeTop50/app.py +++ /dev/null @@ -1,37 +0,0 @@ -import gradio as gr -import requests -from PIL import Image -from io import BytesIO -import base64 - -api_url = "https://5cb20b40-572c-426f-9466-995256f9b6eb.id.repl.co/generate_image" - -def generate_image(model="Abyss OrangeMix", prompt="", seed=0, negative_prompt="", sampler="k_dpmpp_2s_a", 
steps=50): - data = "?model=" + model + "&prompt=" + prompt + "&seed=" + str(seed) + "&negative_prompt=" + negative_prompt + "&sampler=" + sampler + "&steps=" + str(steps) - response = requests.post(api_url + data, timeout=400) - if response.status_code == 200: - img_base64 = response.json()["url"] - img_bytes = base64.b64decode(img_base64) - img = Image.open(BytesIO(img_bytes)) - return img - else: - return None - -inputs = [ - gr.inputs.Dropdown(['Abyss OrangeMix', 'AbyssOrangeMix-AfterDark','Anime Pencil Diffusion', 'Anygen', 'Anything Diffusion', 'Anything v3', 'anything_v4_inpainting', 'Arcane Diffusion', 'BPModel', 'Counterfeit', 'Cyberpunk Anime Diffusion', 'CyriousMix', 'DGSpitzer Art Diffusion', 'Dreamshaper', 'DucHaiten Classic Anime', 'Eimis Anime Diffusion', 'Ghibli Diffusion', 'GuoFeng', 'Hentai Diffusion', 'Kenshi', 'Midjourney Diffusion', 'NeverEnding Dream', 'Openniji', 'Pastel Mix', 'Protogen Anime', 'Rev Animated'], label="Model", default="Abyss OrangeMix"), - gr.inputs.Textbox(label="Prompt"), - gr.inputs.Number(label="Seed", default=0), - gr.inputs.Textbox(label="Negative Prompt", default=""), - gr.inputs.Dropdown(["k_lms", "k_heun", "k_euler", "k_euler_a", "k_dpm_2", "k_dpm_2_a", "DDIM", "k_dpm_fast", "k_dpm_adaptive", "k_dpmpp_2m", "k_dpmpp_2s_a", "k_dpmpp_sde"], label="Sampler", default="k_dpmpp_2s_a"), - gr.inputs.Number(label="Steps", default=50) -] - -outputs = gr.outputs.Image(label="Generated Image", type="pil") - -interface = gr.Interface(generate_image, inputs, outputs, title="", - description="
    ", - examples=[]) - -interface.launch() - - diff --git a/spaces/Dinoking/Guccio-AI-Designer/netdissect/segmodel/resnext.py b/spaces/Dinoking/Guccio-AI-Designer/netdissect/segmodel/resnext.py deleted file mode 100644 index cdbb7461a6c8eb126717967cdca5d5ce392aecea..0000000000000000000000000000000000000000 --- a/spaces/Dinoking/Guccio-AI-Designer/netdissect/segmodel/resnext.py +++ /dev/null @@ -1,182 +0,0 @@ -import os -import sys -import torch -import torch.nn as nn -import math -try: - from lib.nn import SynchronizedBatchNorm2d -except ImportError: - from torch.nn import BatchNorm2d as SynchronizedBatchNorm2d -try: - from urllib import urlretrieve -except ImportError: - from urllib.request import urlretrieve - - -__all__ = ['ResNeXt', 'resnext101'] # support resnext 101 - - -model_urls = { - #'resnext50': 'http://sceneparsing.csail.mit.edu/model/pretrained_resnet/resnext50-imagenet.pth', - 'resnext101': 'http://sceneparsing.csail.mit.edu/model/pretrained_resnet/resnext101-imagenet.pth' -} - - -def conv3x3(in_planes, out_planes, stride=1): - "3x3 convolution with padding" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, - padding=1, bias=False) - - -class GroupBottleneck(nn.Module): - expansion = 2 - - def __init__(self, inplanes, planes, stride=1, groups=1, downsample=None): - super(GroupBottleneck, self).__init__() - self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False) - self.bn1 = SynchronizedBatchNorm2d(planes) - self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, - padding=1, groups=groups, bias=False) - self.bn2 = SynchronizedBatchNorm2d(planes) - self.conv3 = nn.Conv2d(planes, planes * 2, kernel_size=1, bias=False) - self.bn3 = SynchronizedBatchNorm2d(planes * 2) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class ResNeXt(nn.Module): - - def __init__(self, block, layers, groups=32, num_classes=1000): - self.inplanes = 128 - super(ResNeXt, self).__init__() - self.conv1 = conv3x3(3, 64, stride=2) - self.bn1 = SynchronizedBatchNorm2d(64) - self.relu1 = nn.ReLU(inplace=True) - self.conv2 = conv3x3(64, 64) - self.bn2 = SynchronizedBatchNorm2d(64) - self.relu2 = nn.ReLU(inplace=True) - self.conv3 = conv3x3(64, 128) - self.bn3 = SynchronizedBatchNorm2d(128) - self.relu3 = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - - self.layer1 = self._make_layer(block, 128, layers[0], groups=groups) - self.layer2 = self._make_layer(block, 256, layers[1], stride=2, groups=groups) - self.layer3 = self._make_layer(block, 512, layers[2], stride=2, groups=groups) - self.layer4 = self._make_layer(block, 1024, layers[3], stride=2, groups=groups) - self.avgpool = nn.AvgPool2d(7, stride=1) - self.fc = nn.Linear(1024 * block.expansion, num_classes) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels // m.groups - m.weight.data.normal_(0, math.sqrt(2. 
/ n)) - elif isinstance(m, SynchronizedBatchNorm2d): - m.weight.data.fill_(1) - m.bias.data.zero_() - - def _make_layer(self, block, planes, blocks, stride=1, groups=1): - downsample = None - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d(self.inplanes, planes * block.expansion, - kernel_size=1, stride=stride, bias=False), - SynchronizedBatchNorm2d(planes * block.expansion), - ) - - layers = [] - layers.append(block(self.inplanes, planes, stride, groups, downsample)) - self.inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append(block(self.inplanes, planes, groups=groups)) - - return nn.Sequential(*layers) - - def forward(self, x): - x = self.relu1(self.bn1(self.conv1(x))) - x = self.relu2(self.bn2(self.conv2(x))) - x = self.relu3(self.bn3(self.conv3(x))) - x = self.maxpool(x) - - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - - x = self.avgpool(x) - x = x.view(x.size(0), -1) - x = self.fc(x) - - return x - - -''' -def resnext50(pretrained=False, **kwargs): - """Constructs a ResNet-50 model. - - Args: - pretrained (bool): If True, returns a model pre-trained on Places - """ - model = ResNeXt(GroupBottleneck, [3, 4, 6, 3], **kwargs) - if pretrained: - model.load_state_dict(load_url(model_urls['resnext50']), strict=False) - return model -''' - - -def resnext101(pretrained=False, **kwargs): - """Constructs a ResNet-101 model. - - Args: - pretrained (bool): If True, returns a model pre-trained on Places - """ - model = ResNeXt(GroupBottleneck, [3, 4, 23, 3], **kwargs) - if pretrained: - model.load_state_dict(load_url(model_urls['resnext101']), strict=False) - return model - - -# def resnext152(pretrained=False, **kwargs): -# """Constructs a ResNeXt-152 model. -# -# Args: -# pretrained (bool): If True, returns a model pre-trained on Places -# """ -# model = ResNeXt(GroupBottleneck, [3, 8, 36, 3], **kwargs) -# if pretrained: -# model.load_state_dict(load_url(model_urls['resnext152'])) -# return model - - -def load_url(url, model_dir='./pretrained', map_location=None): - if not os.path.exists(model_dir): - os.makedirs(model_dir) - filename = url.split('/')[-1] - cached_file = os.path.join(model_dir, filename) - if not os.path.exists(cached_file): - sys.stderr.write('Downloading: "{}" to {}\n'.format(url, cached_file)) - urlretrieve(url, cached_file) - return torch.load(cached_file, map_location=map_location) diff --git a/spaces/Dorado607/ChuanhuChatGPT/modules/models/configuration_moss.py b/spaces/Dorado607/ChuanhuChatGPT/modules/models/configuration_moss.py deleted file mode 100644 index 9bad4396ecea6578c1628732d0ef077d8964d45d..0000000000000000000000000000000000000000 --- a/spaces/Dorado607/ChuanhuChatGPT/modules/models/configuration_moss.py +++ /dev/null @@ -1,118 +0,0 @@ -""" Moss model configuration""" - -from transformers.utils import logging -from transformers.configuration_utils import PretrainedConfig - - -logger = logging.get_logger(__name__) - - -class MossConfig(PretrainedConfig): - r""" - This is the configuration class to store the configuration of a [`MossModel`]. It is used to instantiate a - Moss model according to the specified arguments, defining the model architecture. Instantiating a configuration - with the defaults will yield a similar configuration to that of the Moss - [fnlp/moss-moon-003-base](https://huggingface.co/fnlp/moss-moon-003-base) architecture. 
Configuration objects - inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from - [`PretrainedConfig`] for more information. - - Args: - vocab_size (`int`, *optional*, defaults to 107008): - Vocabulary size of the Moss model. Defines the number of different tokens that can be represented by the - `inputs_ids` passed when calling [`MossModel`]. - n_positions (`int`, *optional*, defaults to 2048): - The maximum sequence length that this model might ever be used with. Typically set this to something large - just in case (e.g., 512 or 1024 or 2048). - n_embd (`int`, *optional*, defaults to 4096): - Dimensionality of the embeddings and hidden states. - n_layer (`int`, *optional*, defaults to 28): - Number of hidden layers in the Transformer encoder. - n_head (`int`, *optional*, defaults to 16): - Number of attention heads for each attention layer in the Transformer encoder. - rotary_dim (`int`, *optional*, defaults to 64): - Number of dimensions in the embedding that Rotary Position Embedding is applied to. - n_inner (`int`, *optional*, defaults to None): - Dimensionality of the inner feed-forward layers. `None` will set it to 4 times n_embd - activation_function (`str`, *optional*, defaults to `"gelu_new"`): - Activation function, to be selected in the list `["relu", "silu", "gelu", "tanh", "gelu_new"]`. - resid_pdrop (`float`, *optional*, defaults to 0.1): - The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. - embd_pdrop (`int`, *optional*, defaults to 0.1): - The dropout ratio for the embeddings. - attn_pdrop (`float`, *optional*, defaults to 0.1): - The dropout ratio for the attention. - layer_norm_epsilon (`float`, *optional*, defaults to 1e-5): - The epsilon to use in the layer normalization layers. - initializer_range (`float`, *optional*, defaults to 0.02): - The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - use_cache (`bool`, *optional*, defaults to `True`): - Whether or not the model should return the last key/values attentions (not used by all models). 
- - Example: - - ```python - >>> from modeling_moss import MossModel - >>> from configuration_moss import MossConfig - - >>> # Initializing a moss-moon-003-base configuration - >>> configuration = MossConfig() - - >>> # Initializing a model (with random weights) from the configuration - >>> model = MossModel(configuration) - - >>> # Accessing the model configuration - >>> configuration = model.config - ```""" - - model_type = "moss" - attribute_map = { - "max_position_embeddings": "n_positions", - "hidden_size": "n_embd", - "num_attention_heads": "n_head", - "num_hidden_layers": "n_layer", - } - - def __init__( - self, - vocab_size=107008, - n_positions=2048, - n_ctx=2048, - n_embd=4096, - n_layer=28, - n_head=16, - rotary_dim=64, - n_inner=None, - activation_function="gelu_new", - resid_pdrop=0.0, - embd_pdrop=0.0, - attn_pdrop=0.0, - layer_norm_epsilon=1e-5, - initializer_range=0.02, - use_cache=True, - bos_token_id=106028, - eos_token_id=106068, - tie_word_embeddings=False, - **kwargs, - ): - self.vocab_size = vocab_size - self.n_ctx = n_ctx - self.n_positions = n_positions - self.n_embd = n_embd - self.n_layer = n_layer - self.n_head = n_head - self.n_inner = n_inner - self.rotary_dim = rotary_dim - self.activation_function = activation_function - self.resid_pdrop = resid_pdrop - self.embd_pdrop = embd_pdrop - self.attn_pdrop = attn_pdrop - self.layer_norm_epsilon = layer_norm_epsilon - self.initializer_range = initializer_range - self.use_cache = use_cache - - self.bos_token_id = bos_token_id - self.eos_token_id = eos_token_id - - super().__init__( - bos_token_id=bos_token_id, eos_token_id=eos_token_id, tie_word_embeddings=tie_word_embeddings, **kwargs - ) diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/bg_white.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/bg_white.py deleted file mode 100644 index 9dc181fecbd30b0fcd08cdea87930bd09f4c51fc..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/stylegan_human/bg_white.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -import os -import click -import cv2 -import numpy as np - - -def bg_white(seg, raw, blur_level=3, gaussian=81): - seg = cv2.blur(seg, (blur_level, blur_level)) - - empty = np.ones_like(seg) - seg_bg = (empty - seg) * 255 - seg_bg = cv2.GaussianBlur(seg_bg, (gaussian, gaussian), 0) - - background_mask = cv2.cvtColor( - 255 - cv2.cvtColor(seg, cv2.COLOR_BGR2GRAY), cv2.COLOR_GRAY2BGR) - masked_fg = (raw * (1 / 255)) * (seg * (1 / 255)) - masked_bg = (seg_bg * (1 / 255)) * (background_mask * (1 / 255)) - - frame = np.uint8(cv2.add(masked_bg, masked_fg)*255) - - return frame - - -""" -To turn background into white. 
- -Examples: - -\b -python bg_white.py --raw_img_dir=./SHHQ-1.0/no_segment/ --raw_seg_dir=./SHHQ-1.0/segments/ \\ - --outdir=./SHHQ-1.0/bg_white/ -""" - - -@click.command() -@click.pass_context -@click.option('--raw_img_dir', default="./SHHQ-1.0/no_segment/", help='folder of raw image', required=True) -@click.option('--raw_seg_dir', default='./SHHQ-1.0/segments/', help='folder of segmentation masks', required=True) -@click.option('--outdir', help='Where to save the output images', default="./SHHQ-1.0/bg_white/", type=str, required=True, metavar='DIR') -def main( - ctx: click.Context, - raw_img_dir: str, - raw_seg_dir: str, - outdir: str): - os.makedirs(outdir, exist_ok=True) - files = os.listdir(raw_img_dir) - for file in files: - print(file) - raw = cv2.imread(os.path.join(raw_img_dir, file)) - seg = cv2.imread(os.path.join(raw_seg_dir, file)) - assert raw is not None - assert seg is not None - white_frame = bg_white(seg, raw) - cv2.imwrite(os.path.join(outdir, file), white_frame) - - -if __name__ == "__main__": - main() diff --git a/spaces/EPFL-VILAB/MultiMAE/utils/transforms.py b/spaces/EPFL-VILAB/MultiMAE/utils/transforms.py deleted file mode 100644 index 4a4c651e3b537396fe85143809c09d00984c244b..0000000000000000000000000000000000000000 --- a/spaces/EPFL-VILAB/MultiMAE/utils/transforms.py +++ /dev/null @@ -1,163 +0,0 @@ -# -------------------------------------------------------- -# Based on timm and MAE-priv code bases -# https://github.com/rwightman/pytorch-image-models/tree/master/timm -# https://github.com/BUPT-PRIV/MAE-priv -# -------------------------------------------------------- - -import math -import random -import warnings - -import numpy as np -import torch -import torchvision.transforms.functional as F -from PIL import Image - - -class ToNumpy: - - def __call__(self, pil_img): - np_img = np.array(pil_img, dtype=np.uint8) - if np_img.ndim < 3: - np_img = np.expand_dims(np_img, axis=-1) - np_img = np.rollaxis(np_img, 2) # HWC to CHW - return np_img - - -class ToTensor: - - def __init__(self, dtype=torch.float32): - self.dtype = dtype - - def __call__(self, pil_img): - np_img = np.array(pil_img, dtype=np.uint8) - if np_img.ndim < 3: - np_img = np.expand_dims(np_img, axis=-1) - np_img = np.rollaxis(np_img, 2) # HWC to CHW - return torch.from_numpy(np_img).to(dtype=self.dtype) - - -_pil_interpolation_to_str = { - Image.NEAREST: 'PIL.Image.NEAREST', - Image.BILINEAR: 'PIL.Image.BILINEAR', - Image.BICUBIC: 'PIL.Image.BICUBIC', - Image.LANCZOS: 'PIL.Image.LANCZOS', - Image.HAMMING: 'PIL.Image.HAMMING', - Image.BOX: 'PIL.Image.BOX', -} - - -def _pil_interp(method): - if method == 'bicubic': - return Image.BICUBIC - elif method == 'lanczos': - return Image.LANCZOS - elif method == 'hamming': - return Image.HAMMING - else: - # default bilinear, do we want to allow nearest? - return Image.BILINEAR - - -_RANDOM_INTERPOLATION = (Image.BILINEAR, Image.BICUBIC) - - -class RandomResizedCropAndInterpolation: - """Crop the given PIL Image to random size and aspect ratio with random interpolation. - - A crop of random size (default: of 0.08 to 1.0) of the original size and a random - aspect ratio (default: of 3/4 to 4/3) of the original aspect ratio is made. This crop - is finally resized to given size. - This is popularly used to train the Inception networks. 
- - Args: - size: expected output size of each edge - scale: range of size of the origin size cropped - ratio: range of aspect ratio of the origin aspect ratio cropped - interpolation: Default: PIL.Image.BILINEAR - """ - - def __init__(self, size, scale=(0.08, 1.0), ratio=(3. / 4., 4. / 3.), - interpolation='bilinear'): - if isinstance(size, (list, tuple)): - self.size = tuple(size) - else: - self.size = (size, size) - if (scale[0] > scale[1]) or (ratio[0] > ratio[1]): - warnings.warn("range should be of kind (min, max)") - - if interpolation == 'random': - self.interpolation = _RANDOM_INTERPOLATION - else: - self.interpolation = _pil_interp(interpolation) - self.scale = scale - self.ratio = ratio - - @staticmethod - def get_params(img, scale, ratio): - """Get parameters for ``crop`` for a random sized crop. - - Args: - img (PIL Image): Image to be cropped. - scale (tuple): range of size of the origin size cropped - ratio (tuple): range of aspect ratio of the origin aspect ratio cropped - - Returns: - tuple: params (i, j, h, w) to be passed to ``crop`` for a random - sized crop. - """ - area = img.size[0] * img.size[1] - - for attempt in range(10): - target_area = random.uniform(*scale) * area - log_ratio = (math.log(ratio[0]), math.log(ratio[1])) - aspect_ratio = math.exp(random.uniform(*log_ratio)) - - w = int(round(math.sqrt(target_area * aspect_ratio))) - h = int(round(math.sqrt(target_area / aspect_ratio))) - - if w <= img.size[0] and h <= img.size[1]: - i = random.randint(0, img.size[1] - h) - j = random.randint(0, img.size[0] - w) - return i, j, h, w - - # Fallback to central crop - in_ratio = img.size[0] / img.size[1] - if in_ratio < min(ratio): - w = img.size[0] - h = int(round(w / min(ratio))) - elif in_ratio > max(ratio): - h = img.size[1] - w = int(round(h * max(ratio))) - else: # whole image - w = img.size[0] - h = img.size[1] - i = (img.size[1] - h) // 2 - j = (img.size[0] - w) // 2 - return i, j, h, w - - def __call__(self, img): - """ - Args: - img (PIL Image): Image to be cropped and resized. - - Returns: - PIL Image: Randomly cropped and resized image. 
- """ - i, j, h, w = self.get_params(img, self.scale, self.ratio) - if isinstance(self.interpolation, (tuple, list)): - interpolation = random.choice(self.interpolation) - else: - interpolation = self.interpolation - return F.resized_crop(img, i, j, h, w, self.size, interpolation) - - def __repr__(self): - if isinstance(self.interpolation, (tuple, list)): - interpolate_str = ' '.join([_pil_interpolation_to_str[x] for x in self.interpolation]) - else: - interpolate_str = _pil_interpolation_to_str[self.interpolation] - format_string = self.__class__.__name__ + '(size={0}'.format(self.size) - format_string += ', scale={0}'.format(tuple(round(s, 4) for s in self.scale)) - format_string += ', ratio={0}'.format(tuple(round(r, 4) for r in self.ratio)) - format_string += ', interpolation={0})'.format(interpolate_str) - return format_string diff --git a/spaces/Edisonymy/buy-or-rent/src/config.py b/spaces/Edisonymy/buy-or-rent/src/config.py deleted file mode 100644 index 40842ebe42a8674a0721bdd7576901b9e2384d3b..0000000000000000000000000000000000000000 --- a/spaces/Edisonymy/buy-or-rent/src/config.py +++ /dev/null @@ -1,23 +0,0 @@ -n_samples = 1000 -n_bins = 30 -button_css = """ - - """ diff --git a/spaces/Edisonymy/buy-or-rent/src/utils/finance.py b/spaces/Edisonymy/buy-or-rent/src/utils/finance.py deleted file mode 100644 index e819ca881e6503b46be7a8b85b69fce0a6d6d4ee..0000000000000000000000000000000000000000 --- a/spaces/Edisonymy/buy-or-rent/src/utils/finance.py +++ /dev/null @@ -1,34 +0,0 @@ -def get_stamp_duty_next_home(HOUSE_PRICE): - if HOUSE_PRICE <=250000: - return 0 - elif HOUSE_PRICE <=925000: - return (HOUSE_PRICE-250000) * 0.05 - elif HOUSE_PRICE <=1500000: - return (HOUSE_PRICE-925000) * 0.10 + (925000-250000) * 0.05 - else: - return (HOUSE_PRICE-1500000) * 0.12 + (925000-250000) * 0.05 + (1500000-925000) * 0.10 - - -def annuity_pv(payment, discount_rate, n_periods, growth_rate): - # implements present value of annuity formula - pv = payment * (1- (1+growth_rate)**n_periods*(1+discount_rate)**(-1*n_periods)) / (discount_rate-growth_rate) - return pv - - -def annuity_fv(payment, discount_rate, n_periods, growth_rate, adjust_for_inflation = 0): - # implements future value of annuity formula - fv = payment * ((1+discount_rate)**n_periods - (1+growth_rate)**n_periods) / (discount_rate-growth_rate) - return fv / float(1+adjust_for_inflation)**(n_periods) - - -def annuity_payment(pv, discount_rate, n_periods, growth_rate): - # get payment per period for an annuity - return pv* (discount_rate - growth_rate) / (1- (1+growth_rate)**n_periods * (1+discount_rate)**(-1*n_periods)) - - -def pv_future_payment(payment, discount_rate, n_periods): - return payment/(1+discount_rate)**(n_periods) - - -def fv_present_payment(payment, discount_rate, n_periods, adjust_for_inflation = 0): - return payment*(1+discount_rate)**(n_periods) / float(1+adjust_for_inflation)**(n_periods) diff --git a/spaces/Edward-Ji/essentials-of-microeconomics/.github/ISSUE_TEMPLATE/bug_report.md b/spaces/Edward-Ji/essentials-of-microeconomics/.github/ISSUE_TEMPLATE/bug_report.md deleted file mode 100644 index dd84ea7824f11be1eeda22377549cbc1aec7f980..0000000000000000000000000000000000000000 --- a/spaces/Edward-Ji/essentials-of-microeconomics/.github/ISSUE_TEMPLATE/bug_report.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -name: Bug report -about: Create a report to help us improve -title: '' -labels: '' -assignees: '' - ---- - -**Describe the bug** -A clear and concise description of what the bug is. 
- -**To Reproduce** -Steps to reproduce the behavior: -1. Go to '...' -2. Click on '....' -3. Scroll down to '....' -4. See error - -**Expected behavior** -A clear and concise description of what you expected to happen. - -**Screenshots** -If applicable, add screenshots to help explain your problem. - -**Desktop (please complete the following information):** - - OS: [e.g. iOS] - - Browser [e.g. chrome, safari] - - Version [e.g. 22] - -**Smartphone (please complete the following information):** - - Device: [e.g. iPhone6] - - OS: [e.g. iOS8.1] - - Browser [e.g. stock browser, safari] - - Version [e.g. 22] - -**Additional context** -Add any other context about the problem here. diff --git a/spaces/EronSamez/RVC_HFmeu/utils/backups.py b/spaces/EronSamez/RVC_HFmeu/utils/backups.py deleted file mode 100644 index b814f8184792e80e2324685436053d61487110b1..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/utils/backups.py +++ /dev/null @@ -1,141 +0,0 @@ -import os -import shutil -import hashlib -import time -import base64 - - - - -LOGS_FOLDER = '/content/Applio-RVC-Fork/logs' -WEIGHTS_FOLDER = '/content/Applio-RVC-Fork/weights' -GOOGLE_DRIVE_PATH = '/content/drive/MyDrive/RVC_Backup' - -def import_google_drive_backup(): - print("Importing Google Drive backup...") - weights_exist = False - for root, dirs, files in os.walk(GOOGLE_DRIVE_PATH): - for filename in files: - filepath = os.path.join(root, filename) - if os.path.isfile(filepath) and not filepath.startswith(os.path.join(GOOGLE_DRIVE_PATH, 'weights')): - backup_filepath = os.path.join(LOGS_FOLDER, os.path.relpath(filepath, GOOGLE_DRIVE_PATH)) - backup_folderpath = os.path.dirname(backup_filepath) - if not os.path.exists(backup_folderpath): - os.makedirs(backup_folderpath) - print(f'Created backup folder: {backup_folderpath}', flush=True) - shutil.copy2(filepath, backup_filepath) # copy file with metadata - print(f'Imported file from Google Drive backup: {filename}') - elif filepath.startswith(os.path.join(GOOGLE_DRIVE_PATH, 'weights')) and filename.endswith('.pth'): - weights_exist = True - weights_filepath = os.path.join(WEIGHTS_FOLDER, os.path.relpath(filepath, os.path.join(GOOGLE_DRIVE_PATH, 'weights'))) - weights_folderpath = os.path.dirname(weights_filepath) - if not os.path.exists(weights_folderpath): - os.makedirs(weights_folderpath) - print(f'Created weights folder: {weights_folderpath}', flush=True) - shutil.copy2(filepath, weights_filepath) # copy file with metadata - print(f'Imported file from weights: {filename}') - if weights_exist: - print("Copied weights from Google Drive backup to local weights folder.") - else: - print("No weights found in Google Drive backup.") - print("Google Drive backup import completed.") - -def get_md5_hash(file_path): - hash_md5 = hashlib.md5() - with open(file_path, "rb") as f: - for chunk in iter(lambda: f.read(4096), b""): - hash_md5.update(chunk) - return hash_md5.hexdigest() - -def copy_weights_folder_to_drive(): - destination_folder = os.path.join(GOOGLE_DRIVE_PATH, 'weights') - try: - if not os.path.exists(destination_folder): - os.makedirs(destination_folder) - - num_copied = 0 - for filename in os.listdir(WEIGHTS_FOLDER): - if filename.endswith('.pth'): - source_file = os.path.join(WEIGHTS_FOLDER, filename) - destination_file = os.path.join(destination_folder, filename) - if not os.path.exists(destination_file): - shutil.copy2(source_file, destination_file) - num_copied += 1 - print(f"Copied {filename} to Google Drive!") - - if num_copied == 0: - print("No new finished 
models found for copying.") - else: - print(f"Finished copying {num_copied} files to Google Drive!") - - except Exception as e: - print(f"An error occurred while copying weights: {str(e)}") - # You can log the error or take appropriate actions here. - -def backup_files(): - print("\nStarting backup loop...") - last_backup_timestamps_path = os.path.join(LOGS_FOLDER, 'last_backup_timestamps.txt') - fully_updated = False # boolean to track if all files are up to date - - while True: - try: - updated = False # flag to check if any files were updated - last_backup_timestamps = {} - - try: - with open(last_backup_timestamps_path, 'r') as f: - last_backup_timestamps = dict(line.strip().split(':') for line in f) - except FileNotFoundError: - pass # File does not exist yet, which is fine - - for root, dirs, files in os.walk(LOGS_FOLDER): - for filename in files: - if filename != 'last_backup_timestamps.txt': - filepath = os.path.join(root, filename) - if os.path.isfile(filepath): - backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER)) - backup_folderpath = os.path.dirname(backup_filepath) - if not os.path.exists(backup_folderpath): - os.makedirs(backup_folderpath) - print(f'Created backup folder: {backup_folderpath}', flush=True) - # check if file has changed since last backup - last_backup_timestamp = last_backup_timestamps.get(filepath) - current_timestamp = os.path.getmtime(filepath) - if last_backup_timestamp is None or float(last_backup_timestamp) < current_timestamp: - shutil.copy2(filepath, backup_filepath) # copy file with metadata - last_backup_timestamps[filepath] = str(current_timestamp) # update last backup timestamp - if last_backup_timestamp is None: - print(f'Backed up file: {filename}') - else: - print(f'Updating backed up file: {filename}') - updated = True - fully_updated = False # if a file is updated, all files are not up to date - - # check if any files were deleted in Colab and delete them from the backup drive - for filepath in list(last_backup_timestamps.keys()): - if not os.path.exists(filepath): - backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER)) - if os.path.exists(backup_filepath): - os.remove(backup_filepath) - print(f'Deleted file: {filepath}') - del last_backup_timestamps[filepath] - updated = True - fully_updated = False # if a file is deleted, all files are not up to date - - if not updated and not fully_updated: - print("Files are up to date.") - fully_updated = True # if all files are up to date, set the boolean to True - copy_weights_folder_to_drive() - sleep_time = 15 - else: - sleep_time = 0.1 - - with open(last_backup_timestamps_path, 'w') as f: - for filepath, timestamp in last_backup_timestamps.items(): - f.write(f'{filepath}:{timestamp}\n') - - time.sleep(sleep_time) # wait for 15 seconds before checking again, or 0.1s if not fully up to date to speed up backups - - except Exception as e: - print(f"An error occurred: {str(e)}") - # You can log the error or take appropriate actions here. 
diff --git a/spaces/Faridmaruf/rvc-genshin-v2/app.py b/spaces/Faridmaruf/rvc-genshin-v2/app.py deleted file mode 100644 index b545c33df5f8714308d872bc6cab208485e14e6b..0000000000000000000000000000000000000000 --- a/spaces/Faridmaruf/rvc-genshin-v2/app.py +++ /dev/null @@ -1,516 +0,0 @@ -import os -import glob -import json -import traceback -import logging -import gradio as gr -import numpy as np -import librosa -import torch -import asyncio -import edge_tts -import yt_dlp -import ffmpeg -import subprocess -import sys -import io -import wave -from datetime import datetime -from fairseq import checkpoint_utils -from lib.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) -from vc_infer_pipeline import VC -from config import Config -config = Config() -logging.getLogger("numba").setLevel(logging.WARNING) -limitation = os.getenv("SYSTEM") == "spaces" - -audio_mode = [] -f0method_mode = [] -f0method_info = "" -if limitation is True: - audio_mode = ["Upload audio", "TTS Audio"] - f0method_mode = ["pm", "harvest"] - f0method_info = "PM is fast, Harvest is good but extremely slow. (Default: PM)" -else: - audio_mode = ["Input path", "Upload audio", "Youtube", "TTS Audio"] - f0method_mode = ["pm", "harvest", "crepe"] - f0method_info = "PM is fast, Harvest is good but extremely slow, and Crepe effect is good but requires GPU (Default: PM)" - -def create_vc_fn(model_name, tgt_sr, net_g, vc, if_f0, version, file_index): - def vc_fn( - vc_audio_mode, - vc_input, - vc_upload, - tts_text, - tts_voice, - f0_up_key, - f0_method, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - ): - try: - print(f"Converting using {model_name}...") - if vc_audio_mode == "Input path" or "Youtube" and vc_input != "": - audio, sr = librosa.load(vc_input, sr=16000, mono=True) - elif vc_audio_mode == "Upload audio": - if vc_upload is None: - return "You need to upload an audio", None - sampling_rate, audio = vc_upload - duration = audio.shape[0] / sampling_rate - if duration > 20 and limitation: - return "Please upload an audio file that is less than 20 seconds. 
If you need to generate a longer audio file, please use Colab.", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - elif vc_audio_mode == "TTS Audio": - if len(tts_text) > 100 and limitation: - return "Text is too long", None - if tts_text is None or tts_voice is None: - return "You need to enter text and select a voice", None - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3")) - audio, sr = librosa.load("tts.mp3", sr=16000, mono=True) - vc_input = "tts.mp3" - times = [0, 0, 0] - f0_up_key = int(f0_up_key) - audio_opt = vc.pipeline( - hubert_model, - net_g, - 0, - audio, - vc_input, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - f0_file=None, - ) - info = f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s" - print(f"{model_name} | {info}") - return info, (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - return info, None - return vc_fn - -def load_model(): - categories = [] - with open("weights/folder_info.json", "r", encoding="utf-8") as f: - folder_info = json.load(f) - for category_name, category_info in folder_info.items(): - if not category_info['enable']: - continue - category_title = category_info['title'] - category_folder = category_info['folder_path'] - description = category_info['description'] - models = [] - with open(f"weights/{category_folder}/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for character_name, info in models_info.items(): - if not info['enable']: - continue - model_title = info['title'] - model_name = info['model_path'] - model_author = info.get("author", None) - model_cover = f"weights/{category_folder}/{character_name}/{info['cover']}" - model_index = f"weights/{category_folder}/{character_name}/{info['feature_retrieval_library']}" - cpt = torch.load(f"weights/{category_folder}/{character_name}/{model_name}", map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - model_version = "V1" - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - model_version = "V2" - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) - net_g.eval().to(config.device) - if config.is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, config) - print(f"Model loaded: {character_name} / {info['feature_retrieval_library']} | ({model_version})") - models.append((character_name, model_title, model_author, model_cover, model_version, create_vc_fn(model_name, tgt_sr, net_g, vc, if_f0, version, model_index))) - categories.append([category_title, category_folder, description, models]) - return categories - -def cut_vocal_and_inst(url, audio_provider, split_model): - if url != "": - if not os.path.exists("dl_audio"): - os.mkdir("dl_audio") 
- if audio_provider == "Youtube": - ydl_opts = { - 'noplaylist': True, - 'format': 'bestaudio/best', - 'postprocessors': [{ - 'key': 'FFmpegExtractAudio', - 'preferredcodec': 'wav', - }], - "outtmpl": 'dl_audio/youtube_audio', - } - with yt_dlp.YoutubeDL(ydl_opts) as ydl: - ydl.download([url]) - audio_path = "dl_audio/youtube_audio.wav" - if split_model == "htdemucs": - command = f"demucs --two-stems=vocals {audio_path} -o output" - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return "output/htdemucs/youtube_audio/vocals.wav", "output/htdemucs/youtube_audio/no_vocals.wav", audio_path, "output/htdemucs/youtube_audio/vocals.wav" - else: - command = f"demucs --two-stems=vocals -n mdx_extra_q {audio_path} -o output" - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return "output/mdx_extra_q/youtube_audio/vocals.wav", "output/mdx_extra_q/youtube_audio/no_vocals.wav", audio_path, "output/mdx_extra_q/youtube_audio/vocals.wav" - else: - raise gr.Error("URL Required!") - return None, None, None, None - -def combine_vocal_and_inst(audio_data, audio_volume, split_model): - if not os.path.exists("output/result"): - os.mkdir("output/result") - vocal_path = "output/result/output.wav" - output_path = "output/result/combine.mp3" - if split_model == "htdemucs": - inst_path = "output/htdemucs/youtube_audio/no_vocals.wav" - else: - inst_path = "output/mdx_extra_q/youtube_audio/no_vocals.wav" - with wave.open(vocal_path, "w") as wave_file: - wave_file.setnchannels(1) - wave_file.setsampwidth(2) - wave_file.setframerate(audio_data[0]) - wave_file.writeframes(audio_data[1].tobytes()) - command = f'ffmpeg -y -i {inst_path} -i {vocal_path} -filter_complex [1:a]volume={audio_volume}dB[v];[0:a][v]amix=inputs=2:duration=longest -b:a 320k -c:a libmp3lame {output_path}' - result = subprocess.run(command.split(), stdout=subprocess.PIPE) - print(result.stdout.decode()) - return output_path - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(config.device) - if config.is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - -def change_audio_mode(vc_audio_mode): - if vc_audio_mode == "Input path": - return ( - # Input & Upload - gr.Textbox.update(visible=True), - gr.Checkbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "Upload audio": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=True), - gr.Audio.update(visible=True), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - 
gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "Youtube": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=True), - gr.Textbox.update(visible=True), - gr.Dropdown.update(visible=True), - gr.Button.update(visible=True), - gr.Audio.update(visible=True), - gr.Audio.update(visible=True), - gr.Audio.update(visible=True), - gr.Slider.update(visible=True), - gr.Audio.update(visible=True), - gr.Button.update(visible=True), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - elif vc_audio_mode == "TTS Audio": - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=False), - gr.Audio.update(visible=False), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=True), - gr.Dropdown.update(visible=True) - ) - else: - return ( - # Input & Upload - gr.Textbox.update(visible=False), - gr.Checkbox.update(visible=True), - gr.Audio.update(visible=True), - # Youtube - gr.Dropdown.update(visible=False), - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False), - gr.Button.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Audio.update(visible=False), - gr.Slider.update(visible=False), - gr.Audio.update(visible=False), - gr.Button.update(visible=False), - # TTS - gr.Textbox.update(visible=False), - gr.Dropdown.update(visible=False) - ) - -def use_microphone(microphone): - if microphone == True: - return gr.Audio.update(source="microphone") - else: - return gr.Audio.update(source="upload") - -if __name__ == '__main__': - load_hubert() - categories = load_model() - tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices()) - voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list] - with gr.Blocks() as app: - gr.Markdown( - "
<div align='center'>\n\n"+ - "# Multi Model RVC Inference\n\n"+ - "[![Repository](https://img.shields.io/badge/Github-Multi%20Model%20RVC%20Inference-blue?style=for-the-badge&logo=github)](https://github.com/ArkanDash/Multi-Model-RVC-Inference)\n\n"+ - "</div>
    " - ) - for (folder_title, folder, description, models) in categories: - with gr.TabItem(folder_title): - if description: - gr.Markdown(f"###
    {description}") - with gr.Tabs(): - if not models: - gr.Markdown("#
    No Model Loaded.") - gr.Markdown("##
    Please add model or fix your model path.") - continue - for (name, title, author, cover, model_version, vc_fn) in models: - with gr.TabItem(name): - with gr.Row(): - gr.Markdown( - '
<div align="center">' - f'<div>{title}</div>\n'+ - f'<div>RVC {model_version} Model</div>\n'+ - (f'<div>Model author: {author}</div>' if author else "")+ - (f'' if cover else "")+ - '</div>
    ' - ) - with gr.Row(): - with gr.Column(): - vc_audio_mode = gr.Dropdown(label="Input voice", choices=audio_mode, allow_custom_value=False, value="Upload audio") - # Input - vc_input = gr.Textbox(label="Input audio path", visible=False) - # Upload - vc_microphone_mode = gr.Checkbox(label="Use Microphone", value=False, visible=True, interactive=True) - vc_upload = gr.Audio(label="Upload audio file", source="upload", visible=True, interactive=True) - # Youtube - vc_download_audio = gr.Dropdown(label="Provider", choices=["Youtube"], allow_custom_value=False, visible=False, value="Youtube", info="Select provider (Default: Youtube)") - vc_link = gr.Textbox(label="Youtube URL", visible=False, info="Example: https://www.youtube.com/watch?v=Nc0sB1Bmf-A", placeholder="https://www.youtube.com/watch?v=...") - vc_split_model = gr.Dropdown(label="Splitter Model", choices=["htdemucs", "mdx_extra_q"], allow_custom_value=False, visible=False, value="htdemucs", info="Select the splitter model (Default: htdemucs)") - vc_split = gr.Button("Split Audio", variant="primary", visible=False) - vc_vocal_preview = gr.Audio(label="Vocal Preview", visible=False) - vc_inst_preview = gr.Audio(label="Instrumental Preview", visible=False) - vc_audio_preview = gr.Audio(label="Audio Preview", visible=False) - # TTS - tts_text = gr.Textbox(visible=False, label="TTS text", info="Text to speech input") - tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female") - with gr.Column(): - vc_transform0 = gr.Number(label="Transpose", value=0, info='Type "12" to change from male to female voice. Type "-12" to change female to male voice') - f0method0 = gr.Radio( - label="Pitch extraction algorithm", - info=f0method_info, - choices=f0method_mode, - value="pm", - interactive=True - ) - index_rate1 = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - info="(Default: 0.7)", - value=0.7, - interactive=True, - ) - filter_radius0 = gr.Slider( - minimum=0, - maximum=7, - label="Apply Median Filtering", - info="The value represents the filter radius and can reduce breathiness.", - value=3, - step=1, - interactive=True, - ) - resample_sr0 = gr.Slider( - minimum=0, - maximum=48000, - label="Resample the output audio", - info="Resample the output audio in post-processing to the final sample rate. Set to 0 for no resampling", - value=0, - step=1, - interactive=True, - ) - rms_mix_rate0 = gr.Slider( - minimum=0, - maximum=1, - label="Volume Envelope", - info="Use the volume envelope of the input to replace or mix with the volume envelope of the output. The closer the ratio is to 1, the more the output envelope is used", - value=1, - interactive=True, - ) - protect0 = gr.Slider( - minimum=0, - maximum=0.5, - label="Voice Protection", - info="Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music. Set to 0.5 to disable. 
Decrease the value to increase protection, but it may reduce indexing accuracy", - value=0.5, - step=0.01, - interactive=True, - ) - with gr.Column(): - vc_log = gr.Textbox(label="Output Information", interactive=False) - vc_output = gr.Audio(label="Output Audio", interactive=False) - vc_convert = gr.Button("Convert", variant="primary") - vc_volume = gr.Slider( - minimum=0, - maximum=10, - label="Vocal volume", - value=4, - interactive=True, - step=1, - info="Adjust vocal volume (Default: 4}", - visible=False - ) - vc_combined_output = gr.Audio(label="Output Combined Audio", visible=False) - vc_combine = gr.Button("Combine",variant="primary", visible=False) - vc_convert.click( - fn=vc_fn, - inputs=[ - vc_audio_mode, - vc_input, - vc_upload, - tts_text, - tts_voice, - vc_transform0, - f0method0, - index_rate1, - filter_radius0, - resample_sr0, - rms_mix_rate0, - protect0, - ], - outputs=[vc_log ,vc_output] - ) - vc_split.click( - fn=cut_vocal_and_inst, - inputs=[vc_link, vc_download_audio, vc_split_model], - outputs=[vc_vocal_preview, vc_inst_preview, vc_audio_preview, vc_input] - ) - vc_combine.click( - fn=combine_vocal_and_inst, - inputs=[vc_output, vc_volume, vc_split_model], - outputs=[vc_combined_output] - ) - vc_microphone_mode.change( - fn=use_microphone, - inputs=vc_microphone_mode, - outputs=vc_upload - ) - vc_audio_mode.change( - fn=change_audio_mode, - inputs=[vc_audio_mode], - outputs=[ - vc_input, - vc_microphone_mode, - vc_upload, - vc_download_audio, - vc_link, - vc_split_model, - vc_split, - vc_vocal_preview, - vc_inst_preview, - vc_audio_preview, - vc_volume, - vc_combined_output, - vc_combine, - tts_text, - tts_voice - ] - ) - app.queue(concurrency_count=1, max_size=20, api_open=config.api).launch(share=config.colab) \ No newline at end of file diff --git a/spaces/Faridmaruf/rvc-genshin-v2/lib/infer_pack/modules.py b/spaces/Faridmaruf/rvc-genshin-v2/lib/infer_pack/modules.py deleted file mode 100644 index c83289df7c79a4810dacd15c050148544ba0b6a9..0000000000000000000000000000000000000000 --- a/spaces/Faridmaruf/rvc-genshin-v2/lib/infer_pack/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from lib.infer_pack.transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - 
n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = 
torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/logger/saver.py b/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/logger/saver.py deleted file mode 100644 index ef78b52b6bcd32106f962b731d3784d72d5f0cce..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/so-vits-svc-models-ba/diffusion/logger/saver.py +++ /dev/null @@ -1,150 +0,0 @@ -''' -author: wayn391@mastertones -''' - -import os -import json -import time -import yaml -import datetime -import torch -import matplotlib.pyplot as plt -from . import utils -from torch.utils.tensorboard import SummaryWriter - -class Saver(object): - def __init__( - self, - args, - initial_global_step=-1): - - self.expdir = args.env.expdir - self.sample_rate = args.data.sampling_rate - - # cold start - self.global_step = initial_global_step - self.init_time = time.time() - self.last_time = time.time() - - # makedirs - os.makedirs(self.expdir, exist_ok=True) - - # path - self.path_log_info = os.path.join(self.expdir, 'log_info.txt') - - # ckpt - os.makedirs(self.expdir, exist_ok=True) - - # writer - self.writer = SummaryWriter(os.path.join(self.expdir, 'logs')) - - # save config - path_config = os.path.join(self.expdir, 'config.yaml') - with open(path_config, "w") as out_config: - yaml.dump(dict(args), out_config) - - - def log_info(self, msg): - '''log method''' - if isinstance(msg, dict): - msg_list = [] - for k, v in msg.items(): - tmp_str = '' - if isinstance(v, int): - tmp_str = '{}: {:,}'.format(k, v) - else: - tmp_str = '{}: {}'.format(k, v) - - msg_list.append(tmp_str) - msg_str = '\n'.join(msg_list) - else: - msg_str = msg - - # dsplay - print(msg_str) - - # save - with open(self.path_log_info, 'a') as fp: - fp.write(msg_str+'\n') - - def log_value(self, dict): - for k, v in dict.items(): - self.writer.add_scalar(k, v, self.global_step) - - def log_spec(self, name, spec, spec_out, vmin=-14, vmax=3.5): - spec_cat = torch.cat([(spec_out - spec).abs() + vmin, spec, spec_out], -1) - spec = spec_cat[0] - if isinstance(spec, torch.Tensor): - spec = spec.cpu().numpy() - fig = plt.figure(figsize=(12, 9)) - plt.pcolor(spec.T, vmin=vmin, vmax=vmax) - plt.tight_layout() - self.writer.add_figure(name, fig, self.global_step) - - def log_audio(self, dict): - for k, v in dict.items(): - self.writer.add_audio(k, v, global_step=self.global_step, sample_rate=self.sample_rate) - - def get_interval_time(self, update=True): - cur_time = time.time() - time_interval = cur_time - self.last_time - if update: - self.last_time = cur_time - return time_interval - - def get_total_time(self, to_str=True): - total_time = time.time() - self.init_time - if to_str: - total_time = str(datetime.timedelta( - seconds=total_time))[:-5] - return total_time - - def save_model( - self, - model, - optimizer, - name='model', - postfix='', - to_json=False): - # path - if postfix: - postfix = '_' + postfix - path_pt = os.path.join( - self.expdir , name+postfix+'.pt') - - 
# check - print(' [*] model checkpoint saved: {}'.format(path_pt)) - - # save - if optimizer is not None: - torch.save({ - 'global_step': self.global_step, - 'model': model.state_dict(), - 'optimizer': optimizer.state_dict()}, path_pt) - else: - torch.save({ - 'global_step': self.global_step, - 'model': model.state_dict()}, path_pt) - - # to json - if to_json: - path_json = os.path.join( - self.expdir , name+'.json') - utils.to_json(path_params, path_json) - - def delete_model(self, name='model', postfix=''): - # path - if postfix: - postfix = '_' + postfix - path_pt = os.path.join( - self.expdir , name+postfix+'.pt') - - # delete - if os.path.exists(path_pt): - os.remove(path_pt) - print(' [*] model checkpoint deleted: {}'.format(path_pt)) - - def global_step_increment(self): - self.global_step += 1 - - diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/fast_rcnn/fast_rcnn_r101_fpn_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/fast_rcnn/fast_rcnn_r101_fpn_1x_coco.py deleted file mode 100644 index 9a76b3997fbbed5883adde2122dc17ee2262fa80..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/fast_rcnn/fast_rcnn_r101_fpn_1x_coco.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './fast_rcnn_r50_fpn_1x_coco.py' -model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101)) diff --git a/spaces/HaHaBill/LandShapes-Antarctica/interactive.py b/spaces/HaHaBill/LandShapes-Antarctica/interactive.py deleted file mode 100644 index f2a95cf96173424b4939d0aa53b89e0460e2bc27..0000000000000000000000000000000000000000 --- a/spaces/HaHaBill/LandShapes-Antarctica/interactive.py +++ /dev/null @@ -1,655 +0,0 @@ -# Copyright 2020 Erik Härkönen. All rights reserved. -# This file is licensed to you under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. You may obtain a copy -# of the License at http://www.apache.org/licenses/LICENSE-2.0 - -# Unless required by applicable law or agreed to in writing, software distributed under -# the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR REPRESENTATIONS -# OF ANY KIND, either express or implied. See the License for the specific language -# governing permissions and limitations under the License. - -# An interactive glumpy (OpenGL) + tkinter viewer for interacting with principal components. -# Requires OpenGL and CUDA support for rendering. 
- -import torch -import numpy as np -import tkinter as tk -from tkinter import ttk -from types import SimpleNamespace -import matplotlib.pyplot as plt -from pathlib import Path -from os import makedirs -from models import get_instrumented_model -from config import Config -from decomposition import get_or_compute -from torch.nn.functional import interpolate -from TkTorchWindow import TorchImageView -from functools import partial -from platform import system -from PIL import Image -from utils import pad_frames, prettify_name -import pickle - -# For platform specific UI tweaks -is_windows = 'Windows' in system() -is_linux = 'Linux' in system() -is_mac = 'Darwin' in system() - -# Read input parameters -args = Config().from_args() - -# Don't bother without GPU -assert torch.cuda.is_available(), 'Interactive mode requires CUDA' - -# Use syntax from paper -def get_edit_name(idx, s, e, name=None): - return 'E({comp}, {edit_range}){edit_name}'.format( - comp = idx, - edit_range = f'{s}-{e}' if e > s else s, - edit_name = f': {name}' if name else '' - ) - -# Load or compute PCA basis vectors -def load_components(class_name, inst): - global components, state, use_named_latents - - config = args.from_dict({ 'output_class': class_name }) - dump_name = get_or_compute(config, inst) - data = np.load(dump_name, allow_pickle=False) - X_comp = data['act_comp'] - X_mean = data['act_mean'] - X_stdev = data['act_stdev'] - Z_comp = data['lat_comp'] - Z_mean = data['lat_mean'] - Z_stdev = data['lat_stdev'] - random_stdev_act = np.mean(data['random_stdevs']) - n_comp = X_comp.shape[0] - data.close() - - # Transfer to GPU - components = SimpleNamespace( - X_comp = torch.from_numpy(X_comp).cuda().float(), - X_mean = torch.from_numpy(X_mean).cuda().float(), - X_stdev = torch.from_numpy(X_stdev).cuda().float(), - Z_comp = torch.from_numpy(Z_comp).cuda().float(), - Z_stdev = torch.from_numpy(Z_stdev).cuda().float(), - Z_mean = torch.from_numpy(Z_mean).cuda().float(), - names = [f'Component {i}' for i in range(n_comp)], - latent_types = [model.latent_space_name()]*n_comp, - ranges = [(0, model.get_max_latents())]*n_comp, - ) - - state.component_class = class_name # invalidates cache - use_named_latents = False - print('Loaded components for', class_name, 'from', dump_name) - -# Load previously exported named components from -# directory specified with '--inputs=path/to/comp' -def load_named_components(path, class_name): - global components, state, use_named_latents - - import glob - matches = glob.glob(f'{path}/*.pkl') - - selected = [] - for dump_path in matches: - with open(dump_path, 'rb') as f: - data = pickle.load(f) - if data['model_name'] != model_name or data['output_class'] != class_name: - continue - - if data['latent_space'] != model.latent_space_name(): - print('Skipping', dump_path, '(wrong latent space)') - continue - - selected.append(data) - print('Using', dump_path) - - if len(selected) == 0: - raise RuntimeError('No valid components in given path.') - - comp_dict = { k : [] for k in ['X_comp', 'Z_comp', 'X_stdev', 'Z_stdev', 'names', 'types', 'layer_names', 'ranges', 'latent_types'] } - components = SimpleNamespace(**comp_dict) - - for d in selected: - s = d['edit_start'] - e = d['edit_end'] - title = get_edit_name(d['component_index'], s, e - 1, d['name']) # show inclusive - components.X_comp.append(torch.from_numpy(d['act_comp']).cuda()) - components.Z_comp.append(torch.from_numpy(d['lat_comp']).cuda()) - components.X_stdev.append(d['act_stdev']) - components.Z_stdev.append(d['lat_stdev']) - 
components.names.append(title) - components.types.append(d['edit_type']) - components.layer_names.append(d['decomposition']['layer']) # only for act - components.ranges.append((s, e)) - components.latent_types.append(d['latent_space']) # W or Z - - use_named_latents = True - print('Loaded named components') - -def setup_model(): - global model, inst, layer_name, model_name, feat_shape, args, class_name - - model_name = args.model - layer_name = args.layer - class_name = args.output_class - - # Speed up pytorch - torch.autograd.set_grad_enabled(False) - torch.backends.cudnn.benchmark = True - - # Load model - inst = get_instrumented_model(model_name, class_name, layer_name, torch.device('cuda'), use_w=args.use_w) - model = inst.model - - feat_shape = inst.feature_shape[layer_name] - sample_dims = np.prod(feat_shape) - - # Initialize - if args.inputs: - load_named_components(args.inputs, class_name) - else: - load_components(class_name, inst) - -# Project tensor 'X' onto orthonormal basis 'comp', return coordinates -def project_ortho(X, comp): - N = comp.shape[0] - coords = (comp.reshape(N, -1) * X.reshape(-1)).sum(dim=1) - return coords.reshape([N]+[1]*X.ndim) - -def zero_sliders(): - for v in ui_state.sliders: - v.set(0.0) - -def reset_sliders(zero_on_failure=True): - global ui_state - - mode = ui_state.mode.get() - - # Not orthogonal: need to solve least-norm problem - # Not batch size 1: one set of sliders not enough - # Not principal components: unsupported format - is_ortho = not (mode == 'latent' and model.latent_space_name() == 'Z') - is_single = state.z.shape[0] == 1 - is_pcs = not use_named_latents - - state.lat_slider_offset = 0 - state.act_slider_offset = 0 - - enabled = False - if not (enabled and is_ortho and is_single and is_pcs): - if zero_on_failure: - zero_sliders() - return - - if mode == 'activation': - val = state.base_act - mean = components.X_mean - comp = components.X_comp - stdev = components.X_stdev - else: - val = state.z - mean = components.Z_mean - comp = components.Z_comp - stdev = components.Z_stdev - - n_sliders = len(ui_state.sliders) - coords = project_ortho(val - mean, comp) - offset = torch.sum(coords[:n_sliders] * comp[:n_sliders], dim=0) - scaled_coords = (coords.view(-1) / stdev).detach().cpu().numpy() - - # Part representable by sliders - if mode == 'activation': - state.act_slider_offset = offset - else: - state.lat_slider_offset = offset - - for i in range(n_sliders): - ui_state.sliders[i].set(round(scaled_coords[i], ndigits=1)) - -def setup_ui(): - global root, toolbar, ui_state, app, canvas - - root = tk.Tk() - scale = 1.0 - app = TorchImageView(root, width=int(scale*1024), height=int(scale*1024), show_fps=False) - app.pack(fill=tk.BOTH, expand=tk.YES) - root.protocol("WM_DELETE_WINDOW", shutdown) - root.title('GANspace') - - toolbar = tk.Toplevel(root) - toolbar.protocol("WM_DELETE_WINDOW", shutdown) - toolbar.geometry("215x800+0+0") - toolbar.title('') - - N_COMPONENTS = min(70, len(components.names)) - ui_state = SimpleNamespace( - sliders = [tk.DoubleVar(value=0.0) for _ in range(N_COMPONENTS)], - scales = [], - truncation = tk.DoubleVar(value=0.9), - outclass = tk.StringVar(value=class_name), - random_seed = tk.StringVar(value='0'), - mode = tk.StringVar(value='latent'), - batch_size = tk.IntVar(value=1), # how many images to show in window - edit_layer_start = tk.IntVar(value=0), - edit_layer_end = tk.IntVar(value=model.get_max_latents() - 1), - slider_max_val = 10.0 - ) - - # Z vs activation mode button - #tk.Radiobutton(toolbar, 
text=f"Latent ({model.latent_space_name()})", variable=ui_state.mode, command=reset_sliders, value='latent').pack(fill="x") - #tk.Radiobutton(toolbar, text="Activation", variable=ui_state.mode, command=reset_sliders, value='activation').pack(fill="x") - - # Choose range where latents are modified - def set_min(val): - ui_state.edit_layer_start.set(min(int(val), ui_state.edit_layer_end.get())) - def set_max(val): - ui_state.edit_layer_end.set(max(int(val), ui_state.edit_layer_start.get())) - max_latent_idx = model.get_max_latents() - 1 - - if not use_named_latents: - slider_min = tk.Scale(toolbar, command=set_min, variable=ui_state.edit_layer_start, - label='Layer start', from_=0, to=max_latent_idx, orient=tk.HORIZONTAL).pack(fill="x") - slider_max = tk.Scale(toolbar, command=set_max, variable=ui_state.edit_layer_end, - label='Layer end', from_=0, to=max_latent_idx, orient=tk.HORIZONTAL).pack(fill="x") - - # Scrollable list of components - outer_frame = tk.Frame(toolbar, borderwidth=2, relief=tk.SUNKEN) - canvas = tk.Canvas(outer_frame, highlightthickness=0, borderwidth=0) - frame = tk.Frame(canvas) - vsb = tk.Scrollbar(outer_frame, orient="vertical", command=canvas.yview) - canvas.configure(yscrollcommand=vsb.set) - - vsb.pack(side="right", fill="y") - canvas.pack(side="left", fill="both", expand=True) - canvas.create_window((4,4), window=frame, anchor="nw") - - def onCanvasConfigure(event): - canvas.itemconfigure("all", width=event.width) - canvas.configure(scrollregion=canvas.bbox("all")) - canvas.bind("", onCanvasConfigure) - - def on_scroll(event): - delta = 1 if (event.num == 5 or event.delta < 0) else -1 - canvas.yview_scroll(delta, "units") - - canvas.bind_all("", on_scroll) - canvas.bind_all("", on_scroll) - canvas.bind_all("", on_scroll) - canvas.bind_all("", lambda event : handle_keypress(event.keysym_num)) - - # Sliders and buttons - for i in range(N_COMPONENTS): - inner = tk.Frame(frame, borderwidth=1, background="#aaaaaa") - scale = tk.Scale(inner, variable=ui_state.sliders[i], from_=-ui_state.slider_max_val, - to=ui_state.slider_max_val, resolution=0.1, orient=tk.HORIZONTAL, label=components.names[i]) - scale.pack(fill=tk.X, side=tk.LEFT, expand=True) - ui_state.scales.append(scale) # for changing label later - if not use_named_latents: - tk.Button(inner, text=f"Save", command=partial(export_direction, i, inner)).pack(fill=tk.Y, side=tk.RIGHT) - inner.pack(fill=tk.X) - - outer_frame.pack(fill="both", expand=True, pady=0) - - tk.Button(toolbar, text="Reset", command=reset_sliders).pack(anchor=tk.CENTER, fill=tk.X, padx=4, pady=4) - - tk.Scale(toolbar, variable=ui_state.truncation, from_=0.01, to=1.0, - resolution=0.01, orient=tk.HORIZONTAL, label='Truncation').pack(fill="x") - - tk.Scale(toolbar, variable=ui_state.batch_size, from_=1, to=9, - resolution=1, orient=tk.HORIZONTAL, label='Batch size').pack(fill="x") - - # Output class - frame = tk.Frame(toolbar) - tk.Label(frame, text="Class name").pack(fill="x", side="left") - tk.Entry(frame, textvariable=ui_state.outclass).pack(fill="x", side="right", expand=True, padx=5) - frame.pack(fill=tk.X, pady=3) - - # Random seed - def update_seed(): - seed_str = ui_state.random_seed.get() - if seed_str.isdigit(): - resample_latent(int(seed_str)) - frame = tk.Frame(toolbar) - tk.Label(frame, text="Seed").pack(fill="x", side="left") - tk.Entry(frame, textvariable=ui_state.random_seed, width=12).pack(fill="x", side="left", expand=True, padx=2) - tk.Button(frame, text="Update", command=update_seed).pack(fill="y", side="right", padx=3) - 
frame.pack(fill=tk.X, pady=3) - - # Get new latent or new components - tk.Button(toolbar, text="Resample latent", command=partial(resample_latent, None, False)).pack(anchor=tk.CENTER, fill=tk.X, padx=4, pady=4) - #tk.Button(toolbar, text="Recompute", command=recompute_components).pack(anchor=tk.CENTER, fill=tk.X) - -# App state -state = SimpleNamespace( - z=None, # current latent(s) - lat_slider_offset = 0, # part of lat that is explained by sliders - act_slider_offset = 0, # part of act that is explained by sliders - component_class=None, # name of current PCs' image class - seed=0, # Latent z_i generated by seed+i - base_act = None, # activation of considered layer given z -) - -def resample_latent(seed=None, only_style=False): - class_name = ui_state.outclass.get() - if class_name.isnumeric(): - class_name = int(class_name) - - if hasattr(model, 'is_valid_class'): - if not model.is_valid_class(class_name): - return - - model.set_output_class(class_name) - - B = ui_state.batch_size.get() - state.seed = np.random.randint(np.iinfo(np.int32).max - B) if seed is None else seed - ui_state.random_seed.set(str(state.seed)) - - # Use consecutive seeds along batch dimension (for easier reproducibility) - trunc = ui_state.truncation.get() - latents = [model.sample_latent(1, seed=state.seed + i, truncation=trunc) for i in range(B)] - - state.z = torch.cat(latents).clone().detach() # make leaf node - assert state.z.is_leaf, 'Latent is not leaf node!' - - if hasattr(model, 'truncation'): - model.truncation = ui_state.truncation.get() - print(f'Seeds: {state.seed} -> {state.seed + B - 1}' if B > 1 else f'Seed: {state.seed}') - - torch.manual_seed(state.seed) - model.partial_forward(state.z, layer_name) - state.base_act = inst.retained_features()[layer_name] - - reset_sliders(zero_on_failure=False) - - # Remove focus from text entry - canvas.focus_set() - -# Used to recompute after changing class of conditional model -def recompute_components(): - class_name = ui_state.outclass.get() - if class_name.isnumeric(): - class_name = int(class_name) - - if hasattr(model, 'is_valid_class'): - if not model.is_valid_class(class_name): - return - - if hasattr(model, 'set_output_class'): - model.set_output_class(class_name) - - load_components(class_name, inst) - -# Used to detect parameter changes for lazy recomputation -class ParamCache(): - def update(self, **kwargs): - dirty = False - for argname, val in kwargs.items(): - # Check pointer, then value - current = getattr(self, argname, 0) - if current is not val and pickle.dumps(current) != pickle.dumps(val): - setattr(self, argname, val) - dirty = True - return dirty - -cache = ParamCache() - -def l2norm(t): - return torch.norm(t.view(t.shape[0], -1), p=2, dim=1, keepdim=True) - -def apply_edit(z0, delta): - return z0 + delta - -def reposition_toolbar(): - size, X, Y = root.winfo_geometry().split('+') - W, H = size.split('x') - toolbar_W = toolbar.winfo_geometry().split('x')[0] - offset_y = -30 if is_linux else 0 # window title bar - toolbar.geometry(f'{toolbar_W}x{H}+{int(X)-int(toolbar_W)}+{int(Y)+offset_y}') - toolbar.update() - -def on_draw(): - global img - - n_comp = len(ui_state.sliders) - slider_vals = np.array([s.get() for s in ui_state.sliders], dtype=np.float32) - - # Run model sparingly - mode = ui_state.mode.get() - latent_start = ui_state.edit_layer_start.get() - latent_end = ui_state.edit_layer_end.get() + 1 # save as exclusive, show as inclusive - - if cache.update(coords=slider_vals, comp=state.component_class, mode=mode, z=state.z, 
s=latent_start, e=latent_end): - with torch.no_grad(): - z_base = state.z - state.lat_slider_offset - z_deltas = [0.0]*model.get_max_latents() - z_delta_global = 0.0 - - n_comp = slider_vals.size - act_deltas = {} - - if torch.is_tensor(state.act_slider_offset): - act_deltas[layer_name] = -state.act_slider_offset - - for space in components.latent_types: - assert space == model.latent_space_name(), \ - 'Cannot mix latent spaces (for now)' - - for c in range(n_comp): - coord = slider_vals[c] - if coord == 0: - continue - - edit_mode = components.types[c] if use_named_latents else mode - - # Activation offset - if edit_mode in ['activation', 'both']: - delta = components.X_comp[c] * components.X_stdev[c] * coord - name = components.layer_names[c] if use_named_latents else layer_name - act_deltas[name] = act_deltas.get(name, 0.0) + delta - - # Latent offset - if edit_mode in ['latent', 'both']: - delta = components.Z_comp[c] * components.Z_stdev[c] * coord - edit_range = components.ranges[c] if use_named_latents else (latent_start, latent_end) - full_range = (edit_range == (0, model.get_max_latents())) - - # Single or multiple offsets? - if full_range: - z_delta_global = z_delta_global + delta - else: - for l in range(*edit_range): - z_deltas[l] = z_deltas[l] + delta - - # Apply activation deltas - inst.remove_edits() - for layer, delta in act_deltas.items(): - inst.edit_layer(layer, offset=delta) - - # Evaluate - has_offsets = any(torch.is_tensor(t) for t in z_deltas) - z_final = apply_edit(z_base, z_delta_global) - if has_offsets: - z_final = [apply_edit(z_final, d) for d in z_deltas] - img = model.forward(z_final).clamp(0.0, 1.0) - - app.draw(img) - -# Save necessary data to disk for later loading -def export_direction(idx, button_frame): - name = tk.StringVar(value='') - num_strips = tk.IntVar(value=0) - strip_width = tk.IntVar(value=5) - - slider_values = np.array([s.get() for s in ui_state.sliders]) - slider_value = slider_values[idx] - if (slider_values != 0).sum() > 1: - print('Please modify only one slider') - return - elif slider_value == 0: - print('Modify selected slider to set usable range (currently 0)') - return - - popup = tk.Toplevel(root) - popup.geometry("200x200+0+0") - tk.Label(popup, text="Edit name").pack() - tk.Entry(popup, textvariable=name).pack(pady=5) - # tk.Scale(popup, from_=0, to=30, variable=num_strips, - # resolution=1, orient=tk.HORIZONTAL, length=200, label='Image strips to export').pack() - # tk.Scale(popup, from_=3, to=15, variable=strip_width, - # resolution=1, orient=tk.HORIZONTAL, length=200, label='Image strip width').pack() - tk.Button(popup, text='OK', command=popup.quit).pack() - - canceled = False - def on_close(): - nonlocal canceled - canceled = True - popup.quit() - - popup.protocol("WM_DELETE_WINDOW", on_close) - x = button_frame.winfo_rootx() - y = button_frame.winfo_rooty() - w = int(button_frame.winfo_geometry().split('x')[0]) - popup.geometry('%dx%d+%d+%d' % (180, 90, x + w, y)) - popup.mainloop() - popup.destroy() - - # Update slider name - label = get_edit_name(idx, ui_state.edit_layer_start.get(), - ui_state.edit_layer_end.get(), name.get()) - ui_state.scales[idx].config(label=label) - - if canceled: - return - - params = { - 'name': name.get(), - 'sigma_range': slider_value, - 'component_index': idx, - 'act_comp': components.X_comp[idx].detach().cpu().numpy(), - 'lat_comp': components.Z_comp[idx].detach().cpu().numpy(), # either Z or W - 'latent_space': model.latent_space_name(), - 'act_stdev': components.X_stdev[idx].item(), - 
'lat_stdev': components.Z_stdev[idx].item(), - 'model_name': model_name, - 'output_class': ui_state.outclass.get(), # applied onto - 'decomposition': { - 'name': args.estimator, - 'components': args.components, - 'samples': args.n, - 'layer': args.layer, - 'class_name': state.component_class # computed from - }, - 'edit_type': ui_state.mode.get(), - 'truncation': ui_state.truncation.get(), - 'edit_start': ui_state.edit_layer_start.get(), - 'edit_end': ui_state.edit_layer_end.get() + 1, # show as inclusive, save as exclusive - 'example_seed': state.seed, - } - - edit_mode_str = params['edit_type'] - if edit_mode_str == 'latent': - edit_mode_str = model.latent_space_name().lower() - - comp_class = state.component_class - appl_class = params['output_class'] - if comp_class != appl_class: - comp_class = f'{comp_class}_onto_{appl_class}' - - file_ident = "{model}-{name}-{cls}-{est}-{mode}-{layer}-comp{idx}-range{start}-{end}".format( - model=model_name, - name=prettify_name(params['name']), - cls=comp_class, - est=args.estimator, - mode=edit_mode_str, - layer=args.layer, - idx=idx, - start=params['edit_start'], - end=params['edit_end'], - ) - - out_dir = Path(__file__).parent / 'out' / 'directions' - makedirs(out_dir / file_ident, exist_ok=True) - - with open(out_dir / f"{file_ident}.pkl", 'wb') as outfile: - pickle.dump(params, outfile) - - print(f'Direction "{name.get()}" saved as "{file_ident}.pkl"') - - batch_size = ui_state.batch_size.get() - len_padded = ((num_strips.get() - 1) // batch_size + 1) * batch_size - orig_seed = state.seed - - reset_sliders() - - # Limit max resolution - max_H = 512 - ratio = min(1.0, max_H / inst.output_shape[2]) - - strips = [[] for _ in range(len_padded)] - for b in range(0, len_padded, batch_size): - # Resample - resample_latent((orig_seed + b) % np.iinfo(np.int32).max) - - sigmas = np.linspace(slider_value, -slider_value, strip_width.get(), dtype=np.float32) - for sid, sigma in enumerate(sigmas): - ui_state.sliders[idx].set(sigma) - - # Advance and show results on screen - on_draw() - root.update() - app.update() - - batch_res = (255*img).byte().permute(0, 2, 3, 1).detach().cpu().numpy() - - for i, data in enumerate(batch_res): - # Save individual - name_nodots = file_ident.replace('.', '_') - outname = out_dir / file_ident / f"{name_nodots}_ex{b+i}_{sid}.png" - im = Image.fromarray(data) - im = im.resize((int(ratio*im.size[0]), int(ratio*im.size[1])), Image.ANTIALIAS) - im.save(outname) - strips[b+i].append(data) - - for i, strip in enumerate(strips[:num_strips.get()]): - print(f'Saving strip {i + 1}/{num_strips.get()}', end='\r', flush=True) - data = np.hstack(pad_frames(strip)) - im = Image.fromarray(data) - im = im.resize((int(ratio*im.size[0]), int(ratio*im.size[1])), Image.ANTIALIAS) - im.save(out_dir / file_ident / f"{file_ident}_ex{i}.png") - - # Reset to original state - resample_latent(orig_seed) - ui_state.sliders[idx].set(slider_value) - - -# Shared by glumpy and tkinter -def handle_keypress(code): - if code == 65307: # ESC - shutdown() - elif code == 65360: # HOME - reset_sliders() - elif code == 114: # R - pass #reset_sliders() - -def shutdown(): - global pending_close - pending_close = True - -def on_key_release(symbol, modifiers): - handle_keypress(symbol) - -if __name__=='__main__': - setup_model() - setup_ui() - resample_latent() - - pending_close = False - while not pending_close: - root.update() - app.update() - on_draw() - reposition_toolbar() - - root.destroy() \ No newline at end of file diff --git 
a/spaces/HaMerL/ChaosinChat/assets/Kelpy-Codos.js b/spaces/HaMerL/ChaosinChat/assets/Kelpy-Codos.js deleted file mode 100644 index d6873aec8e11d20d320ae41e1d743b047abc6fd0..0000000000000000000000000000000000000000 --- a/spaces/HaMerL/ChaosinChat/assets/Kelpy-Codos.js +++ /dev/null @@ -1,75 +0,0 @@ -// ==UserScript== -// @name Kelpy Codos -// @version 1.0.5 -// @author Keldos; https://keldos.me/ -// @description Add copy button to PRE tags before CODE tag, for ChaosinChatGPT especially. -// Based on ChaosinChatGPT version: ac04408 (2023-3-22) -// @license GPL-3.0 -// @grant none -// ==/UserScript== - -(function () { - 'use strict'; - - function addCopyButton(pre) { - var code = pre.querySelector('code'); - if (!code) { - return; // 如果没有找到 元素,则不添加按钮 - } - var firstChild = code.firstChild; - if (!firstChild) { - return; // 如果 元素没有子节点,则不添加按钮 - } - var button = document.createElement('button'); - button.textContent = '\uD83D\uDCCE'; // 使用 📎 符号作为“复制”按钮的文本 - button.style.position = 'relative'; - button.style.float = 'right'; - button.style.fontSize = '1em'; // 可选:调整按钮大小 - button.style.background = 'none'; // 可选:去掉背景颜色 - button.style.border = 'none'; // 可选:去掉边框 - button.style.cursor = 'pointer'; // 可选:显示指针样式 - button.addEventListener('click', function () { - var range = document.createRange(); - range.selectNodeContents(code); - range.setStartBefore(firstChild); // 将范围设置为第一个子节点之前 - var selection = window.getSelection(); - selection.removeAllRanges(); - selection.addRange(range); - - try { - var success = document.execCommand('copy'); - if (success) { - button.textContent = '\u2714'; - setTimeout(function () { - button.textContent = '\uD83D\uDCCE'; // 恢复按钮为“复制” - }, 2000); - } else { - button.textContent = '\u2716'; - } - } catch (e) { - console.error(e); - button.textContent = '\u2716'; - } - - selection.removeAllRanges(); - }); - code.insertBefore(button, firstChild); // 将按钮插入到第一个子元素之前 - } - - function handleNewElements(mutationsList, observer) { - for (var mutation of mutationsList) { - if (mutation.type === 'childList') { - for (var node of mutation.addedNodes) { - if (node.nodeName === 'PRE') { - addCopyButton(node); - } - } - } - } - } - - var observer = new MutationObserver(handleNewElements); - observer.observe(document.documentElement, { childList: true, subtree: true }); - - document.querySelectorAll('pre').forEach(addCopyButton); -})(); diff --git a/spaces/Hahsgsgsy/teston/app.py b/spaces/Hahsgsgsy/teston/app.py deleted file mode 100644 index b82d0e452ab7fab54c0ede2398bbcaa2f184cee2..0000000000000000000000000000000000000000 --- a/spaces/Hahsgsgsy/teston/app.py +++ /dev/null @@ -1,303 +0,0 @@ -from client import ClientRubika -from rich import print as prints -import asyncio - -auth = ['atjqbjkkroxsrtgssoovtmwoujiwbeev', -'munbkdlfwwygqjfgyepmxesekanxzxhw', -'qgvofrsyxjiijxjzagtdqknsgpqgcecd', -'ymwjyfdjgxtztzltkqhgysrjtxszpcfp', -'xphggmignuuzavartjgnhfjeybxtkape', -'swlgxogibxepwfakgoxntdpvswfrelfu', -'eclvsfpekmfrybfmcvsowdsywsczfgsg', -'dcxfviqcbbdsgiaddvosaqeplwsfmvfi', -'saiyinuwpgjvpztjdjrqayizpamiibgm', -'huwglyreybcraxvswimpttwdcmkooqrl', -'furrahbfdhnvymiodupfpqdqncdzzyoz', -'obkyyueklrlkvvjqiamqxjlejbciznra', -'espxetcfbtmabmiirvthahjunioculvl', -'ubcjgcxkipwbbiuenvfyybkaulxhksrq', -'helewpwywplttjrpelmiwznzyrxbzcaj', -'qbitpkompdohybmllbtdiwbarqwoozsb', -'seafttyxjppydogfxfuwevhjugurcwvc', -'utczbcjsytrriksjmdyopochoubcxxnf', -'cvfryrneoydhiveqggrthibrlmutfixf', -'pviqxirsgwpvoveouwmthjqendscjfmx', -'asifcwxcwwkmnkokvjjwabecsxpowpfm', -'pqprqmxvhobjyoasdwshkaasildehgob', 
-'xmepxwebnumzsbmlqtaogpjhnlpbrttz', -'cyaixbkmrmdaaqqycfgubhwbpyehyqkd', -'tuffefiwpuvwpgnrahcoathpfuyhtdil', -'znzdbfledcjdnbohfsvrqrwwstuosqbk', -'excecetesbknynleiqrwqqjxewmypmyo', -'vhxxgclipwbxhffelpkpwtlrynjemgir', -'jlivbmkbshkrluxcltujlnedytmbyxyx', -'yjbeboliuhdallpibnvfpivgdnqcfoka', -'svifiydupgzbnjteswfrrikafkimwdlx', -'eyfkgqtoxutdlsktprwvhfkhlamibfsp', -'tylddnxjuqyxcihdrialcifwuawkcsar', -'kyhszzrmjnblobvmrjenujrithxumaeo', -'qrderomlhpmynazcitfpijxbqouakpai', -'iulddzteapsqfmwyfaskunzgugollqdx', -'zxlouvuojbpxkkoadbkecrnltzumldim', -'seyswyvrhbukdxaikxyqmuhvqmnaxhzm', -'biblcviczavylxacwrwjofowsdemndyh', -'zlzzwpjfbjoccjxzzpjkcxvzjdijezsm', -'txrxpaieehgmijvqgoadivvbcotyniky', -'isjsdndlixzilmojvwnrdvlyizquahbh', -'cxklrrcxudcosgtecaghptydxvemlryh', -'ukwuczeupbvhbhljfvwlshxsmdgqmjbw', -'opbrlrabpsyrwslbolaaehlwkjmwjgnr', -'ubhmqrkxmzhnunlinylcipnhhcyxxbqn', -'amgmtbltspsuueigjufvrffjgopfzlef', -'gccwrkuzwbnwgkvltgibnemmphpmjrou', -'ndpyfgakusdwmsshezovmpdfvsjygmjt', -'euhpemguyawhldzcwjgwujkwxnyvufxs','bkuafkwpaxkqerdheznoojkaxofzfset', -'etoealirqvmvmtkuaftyfxqpiyepievm', -'hlbxgylcnmoztbvplmxwpdgwxbdscswj', -'nwsxvyaslgdjscpcygztnqsflqgpvrkh', -'imltgtwjxdtxomughysevktnbcvzcnct', -'esayqrdyeezdrpwwxtlrcuzbnjmcqgzk', -'epnfrywianospdfqtncyckccbdhaypgr', -'pbuzmrhsbqrscscejxefallaliailozj', -'hpvbanqtoaczqdbpqliklmikeacsmuep', -'zjommzqvlnamydjiuhnzjcanrbyfjkiw', -'sgoqmdwaqcufwwbbrplvdtdtbwnqahao', -'lwunthwrnpqvqodqaswrydkadmxfaohr', -'ovubuwleutzueowjohubzogepxifwvxo', -'pbtizwttympidcodcxjkmdmeugkqcfjz', -'fwxizwqyeepdaylggbtkfzwxkyqonpqv', -'tjeyvmnvxmyfbjxvvdzeauekswclqvtp', -'pzqpweakoqyqhzktggmumokkrdkpzivy', -'uwgapxdtudvpfaftbayjgkdkoxdvcqzu', -'oqabvutkgsnnohagtuadbfmviwhtrtbe', -'pdtpmdbmgbukwysdpfufmeyxsxywmfxu', -'iiffyfknhfffyujduwzhpwckftkloiao', -'ugczigwjbotfzfctgabkfgfrgrcswsbu', -'lqtfzxoaxaoyfyzefgtlimweijxjluar', -'judyzonyddpgmqwiowumeavwybzcgrcy', -'mpfuyrcjtkgoigedxhsoagnwykbwkhcl', -'kybfjgmiofzegdijatdlheedcrztwngr', -'ddslcvmwkhqvdvfvtibeyueagdjbxbed', -'qiwfdahzwpvabvqkbbfhprqxgjgwxips', -'yswtrlctmxvwvigrbghqurrgbjediild', -'uiajhxloaqtkmricvttgjmnngxebfsta', -'cflknzrexhdnwgjhbstdsigbvxdkcdmt', -'ufahumldfpqcrwcoeftrgrsctmhbuktg', -'rpzzgexsacsahxghsrjxwtahxfqfdkfq', -'pagcebydqoduolwywarildggspvulxxd', -'utkqvrizyduofmrtosmwtcjunqoyepaf', -'douizujslqpphutwwsodrgqkhcxbevto', -'pcxokskafecwswsjnfnqjwajuyoxurmf', -'fdvnvfdklndbvseqjiuatokcottayhtl', -'qdnzwsypkmvjqcbnccklovjrjhwscjqq', -'podlwexqlwotiszylsnoesqpazxhwlqz', -'mdikclqlkclffybdyybyxrrgblpvilvr', -'qbsjktbbkkgmdiyxvvnwqvbptlavmzpr', -'yghyfgnvousfybeibakgvyvchtfjswxt', -'avbktcykbwzvwfiazjmvizlkotgnzkso', -'azzpnkqlhsuvzejqrxvguqedniryzazd', -'pfkydacqfajpevwlgnxzaihxtgsuoueh', -'rtwmvvzhpegcsqpsnlwdcyasdudmrblk', -'pykehzypwxxjuzuhyyhayrdakfxgaxjh', -'maozeldtpbenrkuebmanedgxczdhzowj', -'eqzqylubbaksqxwfwlizqwkltdqlnvvc','syqlsligtpbyscwiybhucqtznapqdpkv', -'cqyzuzhvbxafokjltayiawncwwjnupxq', -'xwcrlgswifeplrqdbyjpoqekjahimmta', -'yyjbrwmcefxlsybzdhrcqdijtjqxyhuh', -'jvrjoyiuwmxsbnyipyoiixngehessukc', -'hfxjgaindpelogagdjeafcoqnufzprrs', -'dbbcnxagafcwftjwdztemyfdwudgqnbv', -'qukyzqqmnuwepsbnxjrzthhjclmuusbl', -'vsstbkgzapnrxjlekcqgfzlofiqvnect', -'pdainianysarwfkjpnmxihfscrztkccm', -'wcifumalbttkrcxqtbfauxpaxopxxdrv', -'ejozzrwsmzbmlpyqugwjvtdhbqpiahhc', -'imftdkywerrorckkmvleaiablkiyvqns', -'mavzsixnoagwropoafgtycgjixitukwv', -'nzgpjavyunppfufzzcgefwspqzfotbxl', -'xfcxmgsqqwlpztedeqnzgtslfaagscau', -'drwsqrpxupwfpzwnjydpzpfxzmiuhbsf', -'bwvqpulqecwssayotmovyppqmkuhshya', 
-'uhuejjalacoqvkicjjschfeeoyetdbsz', -'iipppiofkgpmxemrtpjkkehbilpwtcek', -'mkkwvmnlnhwjeekhertkxysbfsdwuejd', -'jlyirgetdomvvifvxykmvehclxycnzmt', -'txofoiqnvehnlvibpqaiorxzuqukmtub', -'arqwpvtbnmdojnobivdyfkfxtrlbovrf', -'pzszsvassgnkbffmsbtsqfyqkdshihxf', -'qycuszizclhajpnjenrfavtwsdabpmvx', -'emndiddrqnrgysigshegncpylagspqqa', -'qkejmoscleklrretexmufeznpqvjqjjx', -'lmrbxwllmaaysblpylvisugopxwooqyl', -'ucrmxjcegbvaquhfiyjpfqdwamscpber', -'xholbwbbgxyxlrvudteqblumchrmptod', -'zerfnehemgwkqtzlrqecgmjiceqdvdwz', -'nietaietdfojgkbvxfxvyhirtlcrcniw','rrjyvifegykeittrgfbaxuhjsyhdpqnb', -'ltjnhkoqwhfokrxlcwctuzciegffeftl', -'kceddjdfqdhfveaxqajlayqpazthzwvv', -'zoomigswnxkknotosmewlypbjunybsex', -'yfjnjypdszusbdmkpmdnfogxwqjvtwmp', -'hsjhdzcwhoiamltcuceksvoswkeuxyyk', -'ghfhyegoptbxajkbgjpiadvcxvxrfdkq', -'oiyxkdjddaobicoetugkmerqjilmkbmk', -'bgywiarfyerbytjrnfhtwgeyxucwtjkp', -'vvhrrwxnpcqiwqkooorgcjkfezydidps', -'zvniapuxoheejhqdumaeqcjbpowxkdvt', -'prgfzjhmudrvgfkwmurszjiqnrtwyrkd', -'eydsbgcdxlaqhierhohtpfrdyhewfwuk', -'anfevydpnlbqzapohrfvqlyrvigvjrvu', -'hoshqqeegfqnzxmyajmcurscwgfqwlhc', -'unzgbbcxmmshtehoovttihotcwezuwqa', -'jxsachfghjkcmrfbnoxgenvllpdxagxg', -'eoxiolfivrexzisdxymflenbiaozazto', -'ybaebiplohynkzehgtnqvbdqybwvdlfs', -'fkalnkojsxsaxicdrdqlvmnerrmjbvcn', -'nbudmhmppgwnacipfhlzebjsxatezowt', -'vfezequaexscjgzdwxbfdbegrmwsnzjh', -'zkbegazamxvmhyjrivmcelqjyjyagjku', -'offqrwawbsnwempjdisinxkgcxcfrkwi', -'pddfmjxvcualgbzkjsawpgzwsocpyhpn', -'wuorznovyekfgqhckjsnxaixbusbjkil', -'dposmavopcylpqybsoyjwnowtmezgjqm', -'zluhouiitprzjhszdfxpnjucreqffwrp', -'swffretdtzlkarjamklsuqpwoiupbonq', -'ryfqzdjncjmcnrtoucskjcuhkxmlpinw', -'cuqthfprxvleaasuhcbdzjastbbpezci', -'wylrhxasmhglnbdofrjnqbhjbgqjyusp', -'uvrgehomkzkxvecuzthvnaqjkkipdqft', -'irnsoorxkjjxnvtmxkpukiyxeurmfiki', -'hfqkerflyisgnoexwntnaptiwibixzpr', -'zlhaqvndlfdxixjyekiabvaixdqkgdjl', -'nngcqdvzgayxzsqbrneaxtlgrtklfqne', -'euwchfomxdaydowcgicasxitefbkyymw', -'wnpdrybrmheqdjlvniriiehjuqbpxryf', -'nraxtwkofonorfjczkqnpyzvkidnqlzr', -'ddbcaikpdeqjwltubfjudtocsvmrjhuo', -'mwdtbydzwifwtxgmvieohliovkdcekmn', -'gbypnodibkzewnazdxvuufsjlpprriak', -'dagliziwqcpsmeiairbbduwxokkybohu', -'ddnkiyrkkzdxdmqzdgrcpfeimioitzbn', -'gkzpreyhrfgiifybiiugardzlyhqnotf', -'qwfaifbrohntoddccbkkolzebefpprxr', -'wrjksrsvzpygfbfjmbsxqfuhenpleanj', -'ccjipaettmkqilerjtxrujymyaegikzv', -'xrkuhvqnjxqiklmszidvondfttjkdhko','pwflxrtofkzjtketqaemovkzfmhzdkym', -'jfhemwlmuifnvwciqebhzwjeulpxzbqe', -'kodhqxlwcycnrtsyarihpeuzewcuojgz', -'lyhlrhpwwguybtvqvgjmmfgffqzgmpdv', -'nqtuuwxvvscxbshdunwxcumfvballsyr', -'cnlaovjutuqoknzdrmazjhkltwguxiai', -'hnuiemidfruxajexkenkotratwdwafmv', -'dednhewvqrgcnuacknlhsatrfgchpgah', -'nrjcgvpubetvnxibsufcluvsevqtvszv', -'qgjpiiowthdpnmckuimpdrqvgbagwfob', -'xsbpcfguxymlnidykgmakgkouopahuux', -'gxdmmerfasglcnffezbrupbragihwnqu', -'lbiruhaqmowrutcirvdwzxcbuaezjwev', -'znebvraljlbakkvavrsuzenltsdkufio', -'zcarbchdehvcginpkfjcrewdsrsdavrx', -'iudasxsvtiommpbfqfffgbjwbranlwys', -'azlhiwysbdljknmczwgnebcsnooifncs', -'ctbfzxthsygxvygglvtogvbbvokkhcjm', -'thupbmgaznagatlryaopfcoaganbuobx', -'hhzzcongglxqwgxmnrxqameykfcbvrbg', -'fypssnnwbnrkrsgpisulowgpdirselcj', -'jsypsnsiaxhhljxtbnqubgpqrerdszzy', -'obguhrfrdgxfsyrmkgenlksiarsfvnku', -'vkjmkghtodxuutirexpudyzpfgfrsydn', -'twnruwxvhruplwnzmmdfjcbluxcgkeih', -'gljovwnttigksbgxfbdsjavveqpmkhsm', -'fjkxztjyttcnvulegtfretanqttccxcb', -'ngsuwcpqyzjxpxmorzdipqhlddwwjhgz', -'bxtjvhzdhzeqmlqeqprzgveavigcoggu', -'hzmlirhzaagwdvvpzrlsgwgrsingeiei', -'bnvloctykvidqtqrkuolrgfbgmuoxstg', 
-'aqgygvzdpfhjqifppxtsjgglkojvsgkt', -'uqdfgbsfvtvebvkkygjgdebxqvgrathw', -'vnlprrbnegqcetuopcacvniiqbyslexs', -'oclsgxtcjhkvzwczgkjcpgrvrpygaqxn', -'uvdostudztexiricjuibdwomnehwbica', -'inslxhmkwdwweosayolvqatvqhouzhjd', -'uakmhhtwlivvpttfsigruyswzwauuayk', -'evprnlwazhdbxtwkjdpwdbdooeinnxbe', -'yvswazrrrskvcjjklcasourlfmceovta', -'kyaukaclujxslnlxcaulhyifztxpvbui', -'wmxiodsrsympgsdntjmwwdmvnakawxyu', -'uraihmwqqddztttffffzzoypeasatavv', -'yholopbqxbxlilepkoxaboqmqkxvjldx', -'nmcugznktzobujcckgasgawceemkzvkv', -'cqsxgprzfkvpjkdymgqtsjvqcdhdkncv', -'auioypitkdexssmdykxivqelacofjmlj', -'lohfielmwqqnmtbtncqyzciozchxmqqy', -'pmzlevlpaxlcltfvqsiwmncellqkisyy', -'tfrqswqecnpnciltgrsaqtxmpcueysgi','trimqqayiizwoqvxyufsnbawpywlzaeg', -'ynfxeufgfyrzfnttbjsqocgcgjxmddeg', -'zbrbzfqikzuncbcqddzzeikxctjmsuvi', -'fpudvoioujhwhltffeggbzjopspgusuq', -'wrgkzjpeflpsrvugwockchtnmacetpqp', -'mooxdxjzrzjarfuserdqnzewnvhrkhki', -'pwbnyxntdogqxwqqngwgyneerengnmxg', -'ngcyrifvkdjvvcuflgcbvbrsikgfzdtw', -'bhbidymctoqelcvzeyfrcoplexjnkpdh', -'znqqynizgwvduwgfucnuexcgrvkttrgd', -'bisvumatpwfvoympeofsmsgauabdlsls', -'zifashukopettkayevpkxvtiofjodcqb', -'dqumtiszzrxblzhfjjqkyujkpynljfut', -'qlazmojeuqjzoxijkituoqaehkdgmyor', -'qgpdhfmdhpmmjszzgffrnwmghqsbntvb', -'pafecsigugdvomncvhbvhnwgkesblnhl', -'apobeydwhsfojlzwbmininsekkshzvzx', -'yvzsucrxedsbwxqndefhwnmgtstobgrn', -'avvngtsmuzwtfujrqmkpbunfqnvfkowd', -'njgmexxhxyliwkchkqhnvfruoguewvls', -'ykeucajipkxilgsjolymliaaxawbqoyx', -'mxeoluvdsmlaeqefvwwpliqdxjephlgo', -'xilvxqngcxothxpzbnajiwhzebfpblfh', -'mpfhsmdqzpsgvrxktocxhppszqxdqdju', -'zqhrabnwitfcjrzstlxfothhuggbvegj', -'seemxufphxoefhznhewegzxowsmzsnga', -'vbbsgoferlvhbtruklzlvmfmkusaflrw', -'ocryetqtqixnndgenhdpscqorafvlidp', -'gwnulgsbqqdedkzwchtrihpgwdkjdlrg', -'gguiuopvvokqyklkxqogawegcocapdjw', -'tehecaxgjtytgthjztueeympoxsveovm', -'mpqppckjywertolycwpwblolinrvbpus', -'jzbnwobxrumiavemtelyaninkkwcraqy', -'zaxbsllwgaxnquvtmdjwnpxuioxgdzjw', -'gsjrxgawizxbwhcxesvqzjnptjkpnpkm', -'xfjdepmmmbpalaiiuryypskugmsofpqs', -'xjowzsvnbnejliqenrqnhuzmsgqboxoi', -'ywgeufujshygtimstzqkdsgmmtjskcpa', -'fsoixdthatddsqpovblzhdvnckwlhyxh', -'awikcdblxdosnqcrmbkvuehqlsjloyau', -'vkdqqanatqmwgunviptxodqwbysovsal', -'fyesccgnckmcviesdtnmronlzqxsryir', -'qsescpvrdnsvoqylqfqadfnpsjivqfua', -'maymyxleoqjquauxacpgeykjeoqflgns', -'rsqfcbydvbvrumbhwhyhwwhtzscoivqq', -'bttvswrcjidqqrguwjpiodkxiwitylan', -'oyosvjstwobpdjajxmphjhnmvephhejv', -'aymejiaepqrastcptjxesfgaedkchlac', -'ldqjbyvjipynekiwnsoofnswasicwwiy', -'lhdwkgizdrxmourluakkcqnarzaqwtqc'] - -bot = ClientRubika('CipherX') # چیزی نزارید داخلش - -async def main(): - for authX in auth: - try: - status = bot.online(authX) - prints(f""" -=========================\n -{status}\n -AUTH => "{authX}"\n - -""") - except: - pass - -asyncio.run(main()) - - - - diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/gottbert/README.md b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/gottbert/README.md deleted file mode 100644 index 1d58feb279a4a50222290546c3bb285d3cea98e6..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/gottbert/README.md +++ /dev/null @@ -1,64 +0,0 @@ -# GottBERT: a pure German language model - -## Introduction - -[GottBERT](http://arxiv.org/abs/2012.02110) is a pretrained language model trained on 145GB of German text based on RoBERTa. 
- -## Example usage - -### fairseq -##### Load GottBERT from torch.hub (PyTorch >= 1.1): -```python -import torch -gottbert = torch.hub.load('pytorch/fairseq', 'gottbert-base') -gottbert.eval() # disable dropout (or leave in train mode to finetune) -``` - -##### Load GottBERT (for PyTorch 1.0 or custom models): -```python -# Download gottbert model -wget https://dl.gottbert.de/fairseq/models/gottbert-base.tar.gz -tar -xzvf gottbert.tar.gz - -# Load the model in fairseq -from fairseq.models.roberta import GottbertModel -gottbert = GottbertModel.from_pretrained('/path/to/gottbert') -gottbert.eval() # disable dropout (or leave in train mode to finetune) -``` - -##### Filling masks: -```python -masked_line = 'Gott ist ! :)' -gottbert.fill_mask(masked_line, topk=3) -# [('Gott ist gut ! :)', 0.3642110526561737, ' gut'), -# ('Gott ist überall ! :)', 0.06009674072265625, ' überall'), -# ('Gott ist großartig ! :)', 0.0370681993663311, ' großartig')] -``` - -##### Extract features from GottBERT - -```python -# Extract the last layer's features -line = "Der erste Schluck aus dem Becher der Naturwissenschaft macht atheistisch , aber auf dem Grunde des Bechers wartet Gott !" -tokens = gottbert.encode(line) -last_layer_features = gottbert.extract_features(tokens) -assert last_layer_features.size() == torch.Size([1, 27, 768]) - -# Extract all layer's features (layer 0 is the embedding layer) -all_layers = gottbert.extract_features(tokens, return_all_hiddens=True) -assert len(all_layers) == 13 -assert torch.all(all_layers[-1] == last_layer_features) -``` -## Citation -If you use our work, please cite: - -```bibtex -@misc{scheible2020gottbert, - title={GottBERT: a pure German Language Model}, - author={Raphael Scheible and Fabian Thomczyk and Patric Tippmann and Victor Jaravine and Martin Boeker}, - year={2020}, - eprint={2012.02110}, - archivePrefix={arXiv}, - primaryClass={cs.CL} -} -``` diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/quantization/scalar/utils.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/quantization/scalar/utils.py deleted file mode 100644 index 2ec6af3fcb09ccaf853be15a84ed8181f9e2f546..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/quantization/scalar/utils.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -from operator import attrgetter - -import torch.distributed as dist -import torch.nn as nn - -from ..pq.utils import attrsetter, get_layers -from .modules import ActivationQuantizer, IntConv2d, IntEmbedding, IntLinear - - -MAPPING = {nn.Linear: IntLinear, nn.Embedding: IntEmbedding, nn.Conv2d: IntConv2d} - - -def quantize_model_(model, p=0.2, bits=8, update_step=3000, method="histogram", remove_weights=False): - """ - Replaces all modules with their scalar quantized counterpart and - registers hooks to quantize the post-ativations of those modules. 
- - Args: - - model: a nn.Module - - p: amount of noise (0 for no noise, 1 to quantize all the weights/activations) - - bits: number of bits - - update_step: update quantization parameters every update_step steps - """ - # quantize all layers - # remove weights indicates whether the weights extension should be removed, in addition to - # weight_orig and weight extension on names - quantized_layers = get_layers(model, "(.*?)", remove_weights=remove_weights) - - for layer in quantized_layers: - - # book-keeping - is_master_process = (not dist.is_initialized()) or ( - dist.is_initialized() and dist.get_rank() == 0 - ) - - # recover module - module = attrgetter(layer)(model) - if is_master_process: - logging.info( - f"Quantizing layer {layer} with bits={bits} and QuantNoise={p}" - ) - - # quantization params - q_params = { - "p": p, - "update_step": update_step, - "bits": bits, - "method": method, - "counter": 0, - } - - # instantiate the quantized counterpart - if isinstance(module, tuple(MAPPING.keys())): - QuantizedModule = MAPPING[module.__class__] - quantized_module = QuantizedModule.__new__(QuantizedModule) - params = module.__dict__ - params.update(q_params) - quantized_module.__dict__.update(params) - - else: - if is_master_process: - logging.info(f"Module {module} not yet supported for quantization") - continue - - # activation quantization - a_q = ActivationQuantizer(quantized_module, p=0, bits=bits, method=method) - - # replace layer by its quantized counterpart - attrsetter(layer)(model, quantized_module) - - # return name of quantized layers - return quantized_layers diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/test_ema.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/test_ema.py deleted file mode 100644 index 88ea65a434e49775d40f2b08ce6df0f8d9929c18..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/test_ema.py +++ /dev/null @@ -1,199 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import unittest -from copy import deepcopy -from dataclasses import dataclass -from typing import Optional - -import torch -from fairseq.models.ema import EMA - - -class DummyModule(torch.nn.Module): - def __init__(self) -> None: - """LightningModule for testing purposes - - Args: - epoch_min_loss_override (int, optional): Pass in an epoch that will be set to the minimum - validation loss for testing purposes (zero based). If None this is ignored. Defaults to None. 
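For orientation, here is a minimal sketch of how the `quantize_model_` helper above might be invoked. Only the function signature is taken from the file; the toy model and the hyper-parameter values are illustrative assumptions.

```python
# Hypothetical usage sketch for quantize_model_ (the toy model and the
# hyper-parameter values are made up; only the signature comes from the
# file above).
import torch.nn as nn

from fairseq.modules.quantization.scalar.utils import quantize_model_


class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(1000, 64)  # would be swapped for IntEmbedding
        self.proj = nn.Linear(64, 16)        # would be swapped for IntLinear

    def forward(self, x):
        return self.proj(self.embed(x))


model = TinyModel()

# Replace supported modules in-place and register activation quantizers:
# p=0.2 applies quantization noise to ~20% of weights/activations, using
# 8-bit histogram quantization whose parameters refresh every 3000 updates.
quantized_layers = quantize_model_(
    model, p=0.2, bits=8, update_step=3000, method="histogram"
)
print(quantized_layers)  # names of the layers that were replaced
```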
- """ - super().__init__() - self.layer = torch.nn.Linear(in_features=32, out_features=2) - self.another_layer = torch.nn.Linear(in_features=2, out_features=2) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.layer(x) - return self.another_layer(x) - - -@dataclass -class EMAConfig(object): - ema_decay: float = 0.99 - ema_start_update: int = 0 - ema_fp32: bool = False - ema_seed_model: Optional[str] = None - - -class TestEMAGPU(unittest.TestCase): - def assertTorchAllClose(self, x, y, atol=1e-8, rtol=1e-5, msg=None): - diff = x.float() - y.float() - diff_norm = torch.norm(diff) - other_norm = torch.norm(y.float()) - - if msg is None: - msg = "|input - other| > {} + {} * |other|".format( - atol, rtol - ) - - self.assertLessEqual( - diff_norm, - atol + rtol * other_norm, - msg=msg, - ) - - def test_ema(self): - model = DummyModule() - optimizer = torch.optim.SGD(model.parameters(), lr=0.01) - state = deepcopy(model.state_dict()) - config = EMAConfig() - ema = EMA(model, config) - - # set decay - ema._set_decay(config.ema_decay) - self.assertEqual(ema.get_decay(), config.ema_decay) - - # get model - self.assertEqual(ema.get_model(), ema.model) - - # Since fp32 params is not used, it should be of size 0 - self.assertEqual(len(ema.fp32_params), 0) - - # EMA step - x = torch.randn(32) - y = model(x) - loss = y.sum() - loss.backward() - optimizer.step() - - ema.step(model) - - ema_state_dict = ema.get_model().state_dict() - - for key, param in model.state_dict().items(): - prev_param = state[key] - ema_param = ema_state_dict[key] - - if "version" in key: - # Do not decay a model.version pytorch param - continue - self.assertTorchAllClose( - ema_param, - config.ema_decay * prev_param + (1 - config.ema_decay) * param, - ) - - # Since fp32 params is not used, it should be of size 0 - self.assertEqual(len(ema.fp32_params), 0) - - # Load EMA into model - model2 = DummyModule() - ema.reverse(model2) - - for key, param in model2.state_dict().items(): - ema_param = ema_state_dict[key] - self.assertTrue( - torch.allclose(ema_param, param) - ) - - def test_ema_fp32(self): - model = DummyModule().half() - optimizer = torch.optim.SGD(model.parameters(), lr=0.01) - state = deepcopy(model.state_dict()) - config = EMAConfig(ema_fp32=True) - ema = EMA(model, config) - - x = torch.randn(32) - y = model(x.half()) - loss = y.sum() - loss.backward() - optimizer.step() - - ema.step(model) - - for key, param in model.state_dict().items(): - prev_param = state[key] - ema_param = ema.get_model().state_dict()[key] - - if "version" in key: - # Do not decay a model.version pytorch param - continue - self.assertIn(key, ema.fp32_params) - - # EMA update is done in fp32, and hence the EMA param must be - # closer to the EMA update done in fp32 than in fp16. 
- self.assertLessEqual( - torch.norm( - ema_param.float() - - (config.ema_decay * prev_param.float() + (1 - config.ema_decay) * param.float()).half().float() - ), - torch.norm( - ema_param.float() - - (config.ema_decay * prev_param + (1 - config.ema_decay) * param).float() - ), - ) - self.assertTorchAllClose( - ema_param, - (config.ema_decay * prev_param.float() + (1 - config.ema_decay) * param.float()).half(), - ) - - def test_ema_fp16(self): - model = DummyModule().half() - optimizer = torch.optim.SGD(model.parameters(), lr=0.01) - state = deepcopy(model.state_dict()) - config = EMAConfig(ema_fp32=False) - ema = EMA(model, config) - - # Since fp32 params is not used, it should be of size 0 - self.assertEqual(len(ema.fp32_params), 0) - - x = torch.randn(32) - y = model(x.half()) - loss = y.sum() - loss.backward() - optimizer.step() - - ema.step(model) - - for key, param in model.state_dict().items(): - prev_param = state[key] - ema_param = ema.get_model().state_dict()[key] - - if "version" in key: - # Do not decay a model.version pytorch param - continue - - # EMA update is done in fp16, and hence the EMA param must be - # closer to the EMA update done in fp16 than in fp32. - self.assertLessEqual( - torch.norm( - ema_param.float() - - (config.ema_decay * prev_param + (1 - config.ema_decay) * param).float() - ), - torch.norm( - ema_param.float() - - (config.ema_decay * prev_param.float() + (1 - config.ema_decay) * param.float()).half().float() - ), - ) - self.assertTorchAllClose( - ema_param, - config.ema_decay * prev_param + (1 - config.ema_decay) * param, - ) - - # Since fp32 params is not used, it should be of size 0 - self.assertEqual(len(ema.fp32_params), 0) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/HawkEye098432/Vocals_seperator/README.md b/spaces/HawkEye098432/Vocals_seperator/README.md deleted file mode 100644 index 6eaae12be7a215a065884e90bc0a9bd9d2f9e962..0000000000000000000000000000000000000000 --- a/spaces/HawkEye098432/Vocals_seperator/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Vocals Seperator -emoji: 🏢 -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/HgMenon/Transcribe_V0.2/tests/vad_test.py b/spaces/HgMenon/Transcribe_V0.2/tests/vad_test.py deleted file mode 100644 index b465d8a380f9316a6830d9aac320c85f22aba0a0..0000000000000000000000000000000000000000 --- a/spaces/HgMenon/Transcribe_V0.2/tests/vad_test.py +++ /dev/null @@ -1,66 +0,0 @@ -import pprint -import unittest -import numpy as np -import sys - -sys.path.append('../whisper-webui') - -from src.vad import AbstractTranscription, TranscriptionConfig, VadSileroTranscription - -class TestVad(unittest.TestCase): - def __init__(self, *args, **kwargs): - super(TestVad, self).__init__(*args, **kwargs) - self.transcribe_calls = [] - - def test_transcript(self): - mock = MockVadTranscription() - - self.transcribe_calls.clear() - result = mock.transcribe("mock", lambda segment : self.transcribe_segments(segment)) - - self.assertListEqual(self.transcribe_calls, [ - [30, 30], - [100, 100] - ]) - - self.assertListEqual(result['segments'], - [{'end': 50.0, 'start': 40.0, 'text': 'Hello world '}, - {'end': 120.0, 'start': 110.0, 'text': 'Hello world '}] - ) - - def transcribe_segments(self, segment): - self.transcribe_calls.append(segment.tolist()) - - # Dummy text - return { - 'text': "Hello world ", - 'segments': [ - { 
- "start": 10.0, - "end": 20.0, - "text": "Hello world " - } - ], - 'language': "" - } - -class MockVadTranscription(AbstractTranscription): - def __init__(self): - super().__init__() - - def get_audio_segment(self, str, start_time: str = None, duration: str = None): - start_time_seconds = float(start_time.removesuffix("s")) - duration_seconds = float(duration.removesuffix("s")) - - # For mocking, this just returns a simple numppy array - return np.array([start_time_seconds, duration_seconds], dtype=np.float64) - - def get_transcribe_timestamps(self, audio: str, config: TranscriptionConfig, start_time: float, duration: float): - result = [] - - result.append( { 'start': 30, 'end': 60 } ) - result.append( { 'start': 100, 'end': 200 } ) - return result - -if __name__ == '__main__': - unittest.main() \ No newline at end of file diff --git a/spaces/Hina4867/bingo/src/components/chat-message.tsx b/spaces/Hina4867/bingo/src/components/chat-message.tsx deleted file mode 100644 index bf272d8d7005cfd06c53bd213e09ea217e803549..0000000000000000000000000000000000000000 --- a/spaces/Hina4867/bingo/src/components/chat-message.tsx +++ /dev/null @@ -1,93 +0,0 @@ -import remarkGfm from 'remark-gfm' -import remarkMath from 'remark-math' -import supersub from 'remark-supersub' -import remarkBreaks from 'remark-breaks' -import { cn } from '@/lib/utils' -import { CodeBlock } from '@/components/ui/codeblock' -import { MemoizedReactMarkdown } from '@/components/markdown' -import { LearnMore } from './learn-more' -import { ChatMessageModel } from '@/lib/bots/bing/types' -import { useEffect } from 'react' -import { TurnCounter } from './turn-counter' - -export interface ChatMessageProps { - message: ChatMessageModel -} - -export function ChatMessage({ message, ...props }: ChatMessageProps) { - useEffect(() => { - if (document.body.scrollHeight - window.innerHeight - window.scrollY - 200 < 0) { - window.scrollBy(0, 200) - } - }, [message.text]) - - return message.text ? ( -
    -
    - {obj.alt} - } - } catch (e) { - } - return {obj.alt} - }, - p({ children }) { - return

    {children}

    - }, - code({ node, inline, className, children, ...props }) { - if (children.length) { - if (children[0] == '▍') { - return ( - - ) - } - - children[0] = (children[0] as string).replace('`▍`', '▍') - } - - const match = /language-(\w+)/.exec(className || '') - - if (inline) { - return ( - - {children} - - ) - } - - return ( - - ) - } - }} - > - {message.text} -
    -
    -
    - {message.author === 'bot' && } - {message.author === 'bot' && } -
    -
    - ) : null -} diff --git a/spaces/HugoDzz/super-godot-galaxy/build/_app/immutable/chunks/index.9af7eb9c.js b/spaces/HugoDzz/super-godot-galaxy/build/_app/immutable/chunks/index.9af7eb9c.js deleted file mode 100644 index 54d2defbd6cd6ff3ddacac40fe1b6f76dc16a477..0000000000000000000000000000000000000000 --- a/spaces/HugoDzz/super-godot-galaxy/build/_app/immutable/chunks/index.9af7eb9c.js +++ /dev/null @@ -1 +0,0 @@ -function y(){}function F(t,e){for(const n in e)t[n]=e[n];return t}function B(t){return t()}function q(){return Object.create(null)}function b(t){t.forEach(B)}function k(t){return typeof t=="function"}function st(t,e){return t!=t?e==e:t!==e||t&&typeof t=="object"||typeof t=="function"}let $;function ft(t,e){return $||($=document.createElement("a")),$.href=e,t===$.href}function H(t){return Object.keys(t).length===0}function I(t,...e){if(t==null)return y;const n=t.subscribe(...e);return n.unsubscribe?()=>n.unsubscribe():n}function dt(t,e,n){t.$$.on_destroy.push(I(e,n))}function _t(t,e,n,r){if(t){const i=D(t,e,n,r);return t[0](i)}}function D(t,e,n,r){return t[1]&&r?F(n.ctx.slice(),t[1](r(e))):n.ctx}function ht(t,e,n,r){if(t[2]&&r){const i=t[2](r(n));if(e.dirty===void 0)return i;if(typeof i=="object"){const a=[],l=Math.max(e.dirty.length,i.length);for(let o=0;o32){const e=[],n=t.ctx.length/32;for(let r=0;r>1);n(i)<=r?t=i+1:e=i}return t}function R(t){if(t.hydrate_init)return;t.hydrate_init=!0;let e=t.childNodes;if(t.nodeName==="HEAD"){const c=[];for(let u=0;u0&&e[n[i]].claim_order<=u?i+1:Q(1,i,x=>e[n[x]].claim_order,u))-1;r[c]=n[f]+1;const s=f+1;n[s]=c,i=Math.max(s,i)}const a=[],l=[];let o=e.length-1;for(let c=n[i]+1;c!=0;c=r[c-1]){for(a.push(e[c-1]);o>=c;o--)l.push(e[o]);o--}for(;o>=0;o--)l.push(e[o]);a.reverse(),l.sort((c,u)=>c.claim_order-u.claim_order);for(let c=0,u=0;c=a[u].claim_order;)u++;const f=ut.removeEventListener(e,n,r)}function vt(t){return function(e){return e.preventDefault(),t.call(this,e)}}function Et(t,e,n){n==null?t.removeAttribute(e):t.getAttribute(e)!==n&&t.setAttribute(e,n)}function Y(t){return Array.from(t.childNodes)}function Z(t){t.claim_info===void 0&&(t.claim_info={last_index:0,total_claimed:0})}function L(t,e,n,r,i=!1){Z(t);const a=(()=>{for(let l=t.claim_info.last_index;l=0;l--){const o=t[l];if(e(o)){const c=n(o);return c===void 0?t.splice(l,1):t[l]=c,i?c===void 0&&t.claim_info.last_index--:t.claim_info.last_index=l,o}}return r()})();return a.claim_order=t.claim_info.total_claimed,t.claim_info.total_claimed+=1,a}function tt(t,e,n,r){return L(t,i=>i.nodeName===e,i=>{const a=[];for(let l=0;li.removeAttribute(l))},()=>r(e))}function Nt(t,e,n){return tt(t,e,n,X)}function et(t,e){return L(t,n=>n.nodeType===3,n=>{const r=""+e;if(n.data.startsWith(r)){if(n.data.length!==r.length)return n.splitText(r.length)}else n.data=r},()=>A(e),!0)}function St(t){return et(t," ")}function kt(t,e){e=""+e,t.data!==e&&(t.data=e)}function At(t,e,n,r){n==null?t.style.removeProperty(e):t.style.setProperty(e,n,r?"important":"")}function nt(t,e,{bubbles:n=!1,cancelable:r=!1}={}){const i=document.createEvent("CustomEvent");return i.initCustomEvent(t,n,r,e),i}function Mt(t,e){return new t(e)}let g;function p(t){g=t}function M(){if(!g)throw new Error("Function called outside component initialization");return g}function jt(t){M().$$.on_mount.push(t)}function Ct(t){M().$$.after_update.push(t)}function qt(){const t=M();return(e,n,{cancelable:r=!1}={})=>{const i=t.$$.callbacks[e];if(i){const a=nt(e,n,{cancelable:r});return 
i.slice().forEach(l=>{l.call(t,a)}),!a.defaultPrevented}return!0}}const h=[],O=[];let m=[];const T=[],P=Promise.resolve();let N=!1;function W(){N||(N=!0,P.then(z))}function Ot(){return W(),P}function S(t){m.push(t)}const E=new Set;let _=0;function z(){if(_!==0)return;const t=g;do{try{for(;_t.indexOf(r)===-1?e.push(r):n.push(r)),n.forEach(r=>r()),m=e}const w=new Set;let d;function Tt(){d={r:0,c:[],p:d}}function Bt(){d.r||b(d.c),d=d.p}function lt(t,e){t&&t.i&&(w.delete(t),t.i(e))}function Dt(t,e,n,r){if(t&&t.o){if(w.has(t))return;w.add(t),d.c.push(()=>{w.delete(t),r&&(n&&t.d(1),r())}),t.o(e)}else r&&r()}const ct=["allowfullscreen","allowpaymentrequest","async","autofocus","autoplay","checked","controls","default","defer","disabled","formnovalidate","hidden","inert","ismap","loop","multiple","muted","nomodule","novalidate","open","playsinline","readonly","required","reversed","selected"];[...ct];function Lt(t){t&&t.c()}function Pt(t,e){t&&t.l(e)}function ut(t,e,n,r){const{fragment:i,after_update:a}=t.$$;i&&i.m(e,n),r||S(()=>{const l=t.$$.on_mount.map(B).filter(k);t.$$.on_destroy?t.$$.on_destroy.push(...l):b(l),t.$$.on_mount=[]}),a.forEach(S)}function ot(t,e){const n=t.$$;n.fragment!==null&&(it(n.after_update),b(n.on_destroy),n.fragment&&n.fragment.d(e),n.on_destroy=n.fragment=null,n.ctx=[])}function at(t,e){t.$$.dirty[0]===-1&&(h.push(t),W(),t.$$.dirty.fill(0)),t.$$.dirty[e/31|0]|=1<{const C=j.length?j[0]:x;return u.ctx&&i(u.ctx[s],u.ctx[s]=C)&&(!u.skip_bound&&u.bound[s]&&u.bound[s](C),f&&at(t,s)),x}):[],u.update(),f=!0,b(u.before_update),u.fragment=r?r(u.ctx):!1,e.target){if(e.hydrate){J();const s=Y(e.target);u.fragment&&u.fragment.l(s),s.forEach(V)}else u.fragment&&u.fragment.c();e.intro&<(t.$$.fragment),ut(t,e.target,e.anchor,e.customElement),K(),z()}p(c)}class zt{$destroy(){ot(this,1),this.$destroy=y}$on(e,n){if(!k(n))return y;const r=this.$$.callbacks[e]||(this.$$.callbacks[e]=[]);return r.push(n),()=>{const i=r.indexOf(n);i!==-1&&r.splice(i,1)}}$set(e){this.$$set&&!H(e)&&(this.$$.skip_bound=!0,this.$$set(e),this.$$.skip_bound=!1)}}export{ut as A,ot as B,_t as C,mt as D,pt as E,ht as F,U as G,y as H,dt as I,yt as J,qt as K,ft as L,bt as M,wt as N,vt as O,zt as S,xt as a,gt as b,St as c,Dt as d,$t as e,Bt as f,lt as g,V as h,Wt as i,Ct as j,X as k,Nt as l,Y as m,Et as n,jt as o,At as p,A as q,et as r,st as s,Ot as t,kt as u,Tt as v,O as w,Mt as x,Lt as y,Pt as z}; diff --git a/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/models/GroundingDINO/backbone/__init__.py b/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/models/GroundingDINO/backbone/__init__.py deleted file mode 100644 index 76e4b272b479a26c63d120c818c140870cd8c287..0000000000000000000000000000000000000000 --- a/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/models/GroundingDINO/backbone/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .backbone import build_backbone diff --git a/spaces/Illumotion/Koboldcpp/examples/server/server.cpp b/spaces/Illumotion/Koboldcpp/examples/server/server.cpp deleted file mode 100644 index c53a64867336f9433417d215871f194fe2df4bd4..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/examples/server/server.cpp +++ /dev/null @@ -1,1769 +0,0 @@ -#include "common.h" -#include "llama.h" -#include "build-info.h" -#include "grammar-parser.h" - -#ifndef NDEBUG -// crash the server in debug mode, otherwise send an http 500 error -#define CPPHTTPLIB_NO_EXCEPTIONS 1 -#endif - -#include "httplib.h" -#include "json.hpp" - -// auto generated files 
(update with ./deps.sh) -#include "index.html.hpp" -#include "index.js.hpp" -#include "completion.js.hpp" -#include "json-schema-to-grammar.mjs.hpp" - -#include - -#ifndef SERVER_VERBOSE -#define SERVER_VERBOSE 1 -#endif - -using namespace httplib; -using json = nlohmann::json; - -struct server_params -{ - std::string hostname = "127.0.0.1"; - std::string public_path = "examples/server/public"; - int32_t port = 8080; - int32_t read_timeout = 600; - int32_t write_timeout = 600; -}; - -// completion token output with probabilities -struct completion_token_output -{ - struct token_prob - { - llama_token tok; - float prob; - }; - - std::vector probs; - llama_token tok; -}; - -static size_t common_part(const std::vector &a, const std::vector &b) -{ - size_t i; - for (i = 0; i < a.size() && i < b.size() && a[i] == b[i]; i++) - { - } - return i; -} - -enum stop_type -{ - STOP_FULL, - STOP_PARTIAL, -}; - -static bool ends_with(const std::string &str, const std::string &suffix) -{ - return str.size() >= suffix.size() && - 0 == str.compare(str.size() - suffix.size(), suffix.size(), suffix); -} - -static size_t find_partial_stop_string(const std::string &stop, - const std::string &text) -{ - if (!text.empty() && !stop.empty()) - { - const char text_last_char = text.back(); - for (int64_t char_index = stop.size() - 1; char_index >= 0; char_index--) - { - if (stop[char_index] == text_last_char) - { - const std::string current_partial = stop.substr(0, char_index + 1); - if (ends_with(text, current_partial)) - { - return text.size() - char_index - 1; - } - } - } - } - return std::string::npos; -} - -template -static std::string tokens_to_str(llama_context *ctx, Iter begin, Iter end) -{ - std::string ret; - for (; begin != end; ++begin) - { - ret += llama_token_to_piece(ctx, *begin); - } - return ret; -} - -static void server_log(const char *level, const char *function, int line, - const char *message, const nlohmann::ordered_json &extra) -{ - nlohmann::ordered_json log{ - {"timestamp", time(nullptr)}, - {"level", level}, - {"function", function}, - {"line", line}, - {"message", message}, - }; - - if (!extra.empty()) - { - log.merge_patch(extra); - } - - const std::string str = log.dump(-1, ' ', false, json::error_handler_t::replace); - printf("%.*s\n", (int)str.size(), str.data()); - fflush(stdout); -} - -// format incomplete utf-8 multibyte character for output -static std::string tokens_to_output_formatted_string(const llama_context *ctx, const llama_token token) -{ - std::string out = token == -1 ? 
"" : llama_token_to_piece(ctx, token); - // if the size is 1 and first bit is 1, meaning it's a partial character - // (size > 1 meaning it's already a known token) - if (out.size() == 1 && (out[0] & 0x80) == 0x80) - { - std::stringstream ss; - ss << std::hex << (out[0] & 0xff); - std::string res(ss.str()); - out = "byte: \\x" + res; - } - return out; -} - -// convert a vector of completion_token_output to json -static json probs_vector_to_json(const llama_context *ctx, const std::vector & probs) -{ - json out = json::array(); - for (const auto &prob : probs) - { - json probs_for_token = json::array(); - for (const auto &p : prob.probs) - { - std::string tok_str = tokens_to_output_formatted_string(ctx, p.tok); - probs_for_token.push_back(json{ - {"tok_str", tok_str}, - {"prob", p.prob}, - }); - } - std::string tok_str = tokens_to_output_formatted_string(ctx, prob.tok); - out.push_back(json{ - {"content", tok_str}, - {"probs", probs_for_token}, - }); - } - return out; -} - -static bool server_verbose = false; - -#if SERVER_VERBOSE != 1 -#define LOG_VERBOSE(MSG, ...) -#else -#define LOG_VERBOSE(MSG, ...) \ - do \ - { \ - if (server_verbose) \ - { \ - server_log("VERBOSE", __func__, __LINE__, MSG, __VA_ARGS__); \ - } \ - } while (0) -#endif - -#define LOG_ERROR(MSG, ...) server_log("ERROR", __func__, __LINE__, MSG, __VA_ARGS__) -#define LOG_WARNING(MSG, ...) server_log("WARNING", __func__, __LINE__, MSG, __VA_ARGS__) -#define LOG_INFO(MSG, ...) server_log("INFO", __func__, __LINE__, MSG, __VA_ARGS__) - -struct llama_server_context -{ - bool stream = false; - bool has_next_token = false; - std::string generated_text; - std::vector generated_token_probs; - - size_t num_prompt_tokens = 0; - size_t num_tokens_predicted = 0; - size_t n_past = 0; - size_t n_remain = 0; - - json prompt; - std::vector embd; - std::vector last_n_tokens; - - llama_model *model = nullptr; - llama_context *ctx = nullptr; - gpt_params params; - int n_ctx; - - grammar_parser::parse_state parsed_grammar; - llama_grammar *grammar = nullptr; - - bool truncated = false; - bool stopped_eos = false; - bool stopped_word = false; - bool stopped_limit = false; - std::string stopping_word; - int32_t multibyte_pending = 0; - - std::mutex mutex; - - std::unique_lock lock() - { - return std::unique_lock(mutex); - } - - ~llama_server_context() - { - if (ctx) - { - llama_free(ctx); - ctx = nullptr; - } - if (model) - { - llama_free_model(model); - model = nullptr; - } - } - - void rewind() - { - params.antiprompt.clear(); - params.grammar.clear(); - num_prompt_tokens = 0; - num_tokens_predicted = 0; - generated_text = ""; - generated_text.reserve(n_ctx); - generated_token_probs.clear(); - truncated = false; - stopped_eos = false; - stopped_word = false; - stopped_limit = false; - stopping_word = ""; - multibyte_pending = 0; - n_remain = 0; - n_past = 0; - - if (grammar != nullptr) { - llama_grammar_free(grammar); - grammar = nullptr; - } - } - - bool loadModel(const gpt_params ¶ms_) - { - params = params_; - std::tie(model, ctx) = llama_init_from_gpt_params(params); - if (model == nullptr) - { - LOG_ERROR("unable to load model", {{"model", params_.model}}); - return false; - } - n_ctx = llama_n_ctx(ctx); - last_n_tokens.resize(n_ctx); - std::fill(last_n_tokens.begin(), last_n_tokens.end(), 0); - return true; - } - - std::vector tokenize(const json & json_prompt, bool add_bos) const - { - // If `add_bos` is true, we only add BOS, when json_prompt is a string, - // or the first element of the json_prompt array is a string. 
- std::vector prompt_tokens; - - if (json_prompt.is_array()) - { - bool first = true; - for (const auto& p : json_prompt) - { - if (p.is_string()) - { - auto s = p.template get(); - std::vector p; - if (first) - { - p = ::llama_tokenize(ctx, s, add_bos); - first = false; - } - else - { - p = ::llama_tokenize(ctx, s, false); - } - prompt_tokens.insert(prompt_tokens.end(), p.begin(), p.end()); - } - else - { - if (first) - { - first = false; - } - prompt_tokens.push_back(p.template get()); - } - } - } - else - { - auto s = json_prompt.template get(); - prompt_tokens = ::llama_tokenize(ctx, s, add_bos); - } - - return prompt_tokens; - } - - bool loadGrammar() - { - if (!params.grammar.empty()) { - parsed_grammar = grammar_parser::parse(params.grammar.c_str()); - // will be empty (default) if there are parse errors - if (parsed_grammar.rules.empty()) { - LOG_ERROR("grammar parse error", {{"grammar", params.grammar}}); - return false; - } - grammar_parser::print_grammar(stderr, parsed_grammar); - - { - auto it = params.logit_bias.find(llama_token_eos(ctx)); - if (it != params.logit_bias.end() && it->second == -INFINITY) { - LOG_WARNING("EOS token is disabled, which will cause most grammars to fail", {}); - } - } - - std::vector grammar_rules(parsed_grammar.c_rules()); - grammar = llama_grammar_init( - grammar_rules.data(), grammar_rules.size(), parsed_grammar.symbol_ids.at("root")); - } - return true; - } - - void loadInfill() - { - auto prefix_tokens = tokenize(params.input_prefix, true); // always add BOS - auto suffix_tokens = tokenize(params.input_suffix, true); // always add BOS - prefix_tokens.insert(prefix_tokens.begin(), llama_token_prefix(ctx)); - prefix_tokens.insert(prefix_tokens.end(), llama_token_suffix(ctx)); - prefix_tokens.insert(prefix_tokens.end(), suffix_tokens.begin(), suffix_tokens.end()); - prefix_tokens.push_back(llama_token_middle(ctx)); - auto prompt_tokens = prefix_tokens; - - num_prompt_tokens = prompt_tokens.size(); - - if (params.n_keep < 0) - { - params.n_keep = (int)num_prompt_tokens; - } - params.n_keep = std::min(params.n_ctx - 4, params.n_keep); - - // if input prompt is too big, truncate like normal - if (num_prompt_tokens >= (size_t)params.n_ctx) - { - printf("Input prompt is too big, truncating. Can only take %d tokens but got %zu\n", params.n_ctx, num_prompt_tokens); - // todo we probably want to cut from both sides - const int n_left = (params.n_ctx - params.n_keep) / 2; - std::vector new_tokens(prompt_tokens.begin(), prompt_tokens.begin() + params.n_keep); - const int erased_blocks = (num_prompt_tokens - params.n_keep - n_left - 1) / n_left; - new_tokens.insert(new_tokens.end(), prompt_tokens.begin() + params.n_keep + erased_blocks * n_left, prompt_tokens.end()); - std::copy(prompt_tokens.end() - params.n_ctx, prompt_tokens.end(), last_n_tokens.begin()); - - LOG_VERBOSE("input truncated", { - {"n_ctx", params.n_ctx}, - {"n_keep", params.n_keep}, - {"n_left", n_left}, - {"new_tokens", tokens_to_str(ctx, new_tokens.cbegin(), new_tokens.cend())}, - }); - - truncated = true; - prompt_tokens = new_tokens; - } - else - { - const size_t ps = num_prompt_tokens; - std::fill(last_n_tokens.begin(), last_n_tokens.end() - ps, 0); - std::copy(prompt_tokens.begin(), prompt_tokens.end(), last_n_tokens.end() - ps); - } - - // compare the evaluated prompt with the new prompt - n_past = common_part(embd, prompt_tokens); - embd = prompt_tokens; - if (n_past == num_prompt_tokens) - { - // we have to evaluate at least 1 token to generate logits. 
- printf("we have to evaluate at least 1 token to generate logits\n"); - n_past--; - } - - LOG_VERBOSE("prompt ingested", { - {"n_past", n_past}, - {"cached", tokens_to_str(ctx, embd.cbegin(), embd.cbegin() + n_past)}, - {"to_eval", tokens_to_str(ctx, embd.cbegin() + n_past, embd.cend())}, - }); - - has_next_token = true; - } - void loadPrompt() - { - auto prompt_tokens = tokenize(prompt, true); // always add BOS - - num_prompt_tokens = prompt_tokens.size(); - - if (params.n_keep < 0) - { - params.n_keep = (int)num_prompt_tokens; - } - params.n_keep = std::min(n_ctx - 4, params.n_keep); - - // if input prompt is too big, truncate like normal - if (num_prompt_tokens >= (size_t)n_ctx) - { - const int n_left = (n_ctx - params.n_keep) / 2; - std::vector new_tokens(prompt_tokens.begin(), prompt_tokens.begin() + params.n_keep); - const int erased_blocks = (num_prompt_tokens - params.n_keep - n_left - 1) / n_left; - new_tokens.insert(new_tokens.end(), prompt_tokens.begin() + params.n_keep + erased_blocks * n_left, prompt_tokens.end()); - std::copy(prompt_tokens.end() - n_ctx, prompt_tokens.end(), last_n_tokens.begin()); - - LOG_VERBOSE("input truncated", { - {"n_ctx", n_ctx}, - {"n_keep", params.n_keep}, - {"n_left", n_left}, - {"new_tokens", tokens_to_str(ctx, new_tokens.cbegin(), new_tokens.cend())}, - }); - - truncated = true; - prompt_tokens = new_tokens; - } - else - { - const size_t ps = num_prompt_tokens; - std::fill(last_n_tokens.begin(), last_n_tokens.end() - ps, 0); - std::copy(prompt_tokens.begin(), prompt_tokens.end(), last_n_tokens.end() - ps); - } - - // compare the evaluated prompt with the new prompt - n_past = common_part(embd, prompt_tokens); - - // since #3228 we now have to manually manage the KV cache - llama_kv_cache_seq_rm(ctx, 0, n_past, -1); - - embd = prompt_tokens; - if (n_past == num_prompt_tokens) - { - // we have to evaluate at least 1 token to generate logits. 
- n_past--; - } - - LOG_VERBOSE("prompt ingested", { - {"n_past", n_past}, - {"cached", tokens_to_str(ctx, embd.cbegin(), embd.cbegin() + n_past)}, - {"to_eval", tokens_to_str(ctx, embd.cbegin() + n_past, embd.cend())}, - }); - - has_next_token = true; - } - - void beginCompletion() - { - // number of tokens to keep when resetting context - n_remain = params.n_predict; - llama_set_rng_seed(ctx, params.seed); - } - - completion_token_output nextToken() - { - completion_token_output result; - result.tok = -1; - - if (embd.size() >= (size_t)n_ctx) - { - // Shift context - - const int n_left = n_past - params.n_keep - 1; - const int n_discard = n_left/2; - - llama_kv_cache_seq_rm (ctx, 0, params.n_keep + 1 , params.n_keep + n_discard + 1); - llama_kv_cache_seq_shift(ctx, 0, params.n_keep + 1 + n_discard, n_past, -n_discard); - - for (size_t i = params.n_keep + 1 + n_discard; i < embd.size(); i++) - { - embd[i - n_discard] = embd[i]; - } - embd.resize(embd.size() - n_discard); - - n_past -= n_discard; - - truncated = true; - LOG_VERBOSE("input truncated", { - {"n_ctx", n_ctx}, - {"n_keep", params.n_keep}, - {"n_left", n_left}, - }); - } - - bool tg = true; - while (n_past < embd.size()) - { - int n_eval = (int)embd.size() - n_past; - tg = n_eval == 1; - if (n_eval > params.n_batch) - { - n_eval = params.n_batch; - } - - if (llama_decode(ctx, llama_batch_get_one(&embd[n_past], n_eval, n_past, 0))) - { - LOG_ERROR("failed to eval", { - {"n_eval", n_eval}, - {"n_past", n_past}, - {"embd", tokens_to_str(ctx, embd.cbegin() + n_past, embd.cend())}, - }); - has_next_token = false; - return result; - } - n_past += n_eval; - } - - if (params.n_predict == 0) - { - has_next_token = false; - result.tok = llama_token_eos(ctx); - return result; - } - - { - // out of user input, sample next token - std::vector candidates; - candidates.reserve(llama_n_vocab(model)); - - result.tok = llama_sample_token(ctx, NULL, grammar, params, last_n_tokens, candidates); - - llama_token_data_array candidates_p = { candidates.data(), candidates.size(), false }; - - const int32_t n_probs = params.n_probs; - if (params.temp <= 0 && n_probs > 0) - { - // For llama_sample_token_greedy we need to sort candidates - llama_sample_softmax(ctx, &candidates_p); - } - - for (size_t i = 0; i < std::min(candidates_p.size, (size_t)n_probs); ++i) - { - result.probs.push_back({candidates_p.data[i].id, candidates_p.data[i].p}); - } - - last_n_tokens.erase(last_n_tokens.begin()); - last_n_tokens.push_back(result.tok); - if (tg) { - num_tokens_predicted++; - } - } - - // add it to the context - embd.push_back(result.tok); - // decrement remaining sampling budget - --n_remain; - - if (!embd.empty() && embd.back() == llama_token_eos(ctx)) - { - // stopping_word = llama_token_to_piece(ctx, embd.back()); - has_next_token = false; - stopped_eos = true; - LOG_VERBOSE("eos token found", {}); - return result; - } - - has_next_token = params.n_predict == -1 || n_remain != 0; - return result; - } - - size_t findStoppingStrings(const std::string &text, const size_t last_token_size, - const stop_type type) - { - size_t stop_pos = std::string::npos; - for (const std::string &word : params.antiprompt) - { - size_t pos; - if (type == STOP_FULL) - { - const size_t tmp = word.size() + last_token_size; - const size_t from_pos = text.size() > tmp ? 
text.size() - tmp : 0; - pos = text.find(word, from_pos); - } - else - { - pos = find_partial_stop_string(word, text); - } - if (pos != std::string::npos && - (stop_pos == std::string::npos || pos < stop_pos)) - { - if (type == STOP_FULL) - { - stopping_word = word; - stopped_word = true; - has_next_token = false; - } - stop_pos = pos; - } - } - return stop_pos; - } - - completion_token_output doCompletion() - { - auto token_with_probs = nextToken(); - - const std::string token_text = token_with_probs.tok == -1 ? "" : llama_token_to_piece(ctx, token_with_probs.tok); - generated_text += token_text; - - if (params.n_probs > 0) - { - generated_token_probs.push_back(token_with_probs); - } - - if (multibyte_pending > 0) - { - multibyte_pending -= token_text.size(); - } - else if (token_text.size() == 1) - { - const char c = token_text[0]; - // 2-byte characters: 110xxxxx 10xxxxxx - if ((c & 0xE0) == 0xC0) - { - multibyte_pending = 1; - // 3-byte characters: 1110xxxx 10xxxxxx 10xxxxxx - } - else if ((c & 0xF0) == 0xE0) - { - multibyte_pending = 2; - // 4-byte characters: 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx - } - else if ((c & 0xF8) == 0xF0) - { - multibyte_pending = 3; - } - else - { - multibyte_pending = 0; - } - } - - if (multibyte_pending > 0 && !has_next_token) - { - has_next_token = true; - n_remain++; - } - - if (!has_next_token && n_remain == 0) - { - stopped_limit = true; - } - - LOG_VERBOSE("next token", { - {"token", token_with_probs.tok}, - {"token_text", tokens_to_output_formatted_string(ctx, token_with_probs.tok)}, - {"has_next_token", has_next_token}, - {"n_remain", n_remain}, - {"num_tokens_predicted", num_tokens_predicted}, - {"stopped_eos", stopped_eos}, - {"stopped_word", stopped_word}, - {"stopped_limit", stopped_limit}, - {"stopping_word", stopping_word}, - }); - - return token_with_probs; - } - - std::vector getEmbedding() - { - static const int n_embd = llama_n_embd(model); - if (!params.embedding) - { - LOG_WARNING("embedding disabled", { - {"params.embedding", params.embedding}, - }); - return std::vector(n_embd, 0.0f); - } - const float *data = llama_get_embeddings(ctx); - std::vector embedding(data, data + n_embd); - return embedding; - } -}; - -static void server_print_usage(const char *argv0, const gpt_params ¶ms, - const server_params &sparams) -{ - printf("usage: %s [options]\n", argv0); - printf("\n"); - printf("options:\n"); - printf(" -h, --help show this help message and exit\n"); - printf(" -v, --verbose verbose output (default: %s)\n", server_verbose ? 
"enabled" : "disabled"); - printf(" -t N, --threads N number of threads to use during computation (default: %d)\n", params.n_threads); - printf(" -c N, --ctx-size N size of the prompt context (default: %d)\n", params.n_ctx); - printf(" --rope-freq-base N RoPE base frequency (default: loaded from model)\n"); - printf(" --rope-freq-scale N RoPE frequency scaling factor (default: loaded from model)\n"); - printf(" -b N, --batch-size N batch size for prompt processing (default: %d)\n", params.n_batch); - printf(" --memory-f32 use f32 instead of f16 for memory key+value (default: disabled)\n"); - printf(" not recommended: doubles context memory required and no measurable increase in quality\n"); - if (llama_mlock_supported()) - { - printf(" --mlock force system to keep model in RAM rather than swapping or compressing\n"); - } - if (llama_mmap_supported()) - { - printf(" --no-mmap do not memory-map model (slower load but may reduce pageouts if not using mlock)\n"); - } - printf(" --numa attempt optimizations that help on some NUMA systems\n"); -#ifdef LLAMA_SUPPORTS_GPU_OFFLOAD - printf(" -ngl N, --n-gpu-layers N\n"); - printf(" number of layers to store in VRAM\n"); - printf(" -ts SPLIT --tensor-split SPLIT\n"); - printf(" how to split tensors across multiple GPUs, comma-separated list of proportions, e.g. 3,1\n"); - printf(" -mg i, --main-gpu i the GPU to use for scratch and small tensors\n"); - printf(" -nommq, --no-mul-mat-q\n"); - printf(" use cuBLAS instead of custom mul_mat_q CUDA kernels.\n"); - printf(" Not recommended since this is both slower and uses more VRAM.\n"); -#endif - printf(" -m FNAME, --model FNAME\n"); - printf(" model path (default: %s)\n", params.model.c_str()); - printf(" -a ALIAS, --alias ALIAS\n"); - printf(" set an alias for the model, will be added as `model` field in completion response\n"); - printf(" --lora FNAME apply LoRA adapter (implies --no-mmap)\n"); - printf(" --lora-base FNAME optional model to use as a base for the layers modified by the LoRA adapter\n"); - printf(" --host ip address to listen (default (default: %s)\n", sparams.hostname.c_str()); - printf(" --port PORT port to listen (default (default: %d)\n", sparams.port); - printf(" --path PUBLIC_PATH path from which to serve static files (default %s)\n", sparams.public_path.c_str()); - printf(" -to N, --timeout N server read/write timeout in seconds (default: %d)\n", sparams.read_timeout); - printf(" --embedding enable embedding vector output (default: %s)\n", params.embedding ? 
"enabled" : "disabled"); - printf("\n"); -} - -static void server_params_parse(int argc, char **argv, server_params &sparams, - gpt_params ¶ms) -{ - gpt_params default_params; - server_params default_sparams; - std::string arg; - bool invalid_param = false; - - for (int i = 1; i < argc; i++) - { - arg = argv[i]; - if (arg == "--port") - { - if (++i >= argc) - { - invalid_param = true; - break; - } - sparams.port = std::stoi(argv[i]); - } - else if (arg == "--host") - { - if (++i >= argc) - { - invalid_param = true; - break; - } - sparams.hostname = argv[i]; - } - else if (arg == "--path") - { - if (++i >= argc) - { - invalid_param = true; - break; - } - sparams.public_path = argv[i]; - } - else if (arg == "--timeout" || arg == "-to") - { - if (++i >= argc) - { - invalid_param = true; - break; - } - sparams.read_timeout = std::stoi(argv[i]); - sparams.write_timeout = std::stoi(argv[i]); - } - else if (arg == "-m" || arg == "--model") - { - if (++i >= argc) - { - invalid_param = true; - break; - } - params.model = argv[i]; - } - else if (arg == "-a" || arg == "--alias") - { - if (++i >= argc) - { - invalid_param = true; - break; - } - params.model_alias = argv[i]; - } - else if (arg == "-h" || arg == "--help") - { - server_print_usage(argv[0], default_params, default_sparams); - exit(0); - } - else if (arg == "-c" || arg == "--ctx-size" || arg == "--ctx_size") - { - if (++i >= argc) - { - invalid_param = true; - break; - } - params.n_ctx = std::stoi(argv[i]); - } - else if (arg == "--rope-freq-base") - { - if (++i >= argc) - { - invalid_param = true; - break; - } - params.rope_freq_base = std::stof(argv[i]); - } - else if (arg == "--rope-freq-scale") - { - if (++i >= argc) - { - invalid_param = true; - break; - } - params.rope_freq_scale = std::stof(argv[i]); - } - else if (arg == "--memory-f32" || arg == "--memory_f32") - { - params.memory_f16 = false; - } - else if (arg == "--threads" || arg == "-t") - { - if (++i >= argc) - { - invalid_param = true; - break; - } - params.n_threads = std::stoi(argv[i]); - } - else if (arg == "-b" || arg == "--batch-size") - { - if (++i >= argc) - { - invalid_param = true; - break; - } - params.n_batch = std::stoi(argv[i]); - params.n_batch = std::min(512, params.n_batch); - } - else if (arg == "--gpu-layers" || arg == "-ngl" || arg == "--n-gpu-layers") - { - if (++i >= argc) - { - invalid_param = true; - break; - } -#ifdef LLAMA_SUPPORTS_GPU_OFFLOAD - params.n_gpu_layers = std::stoi(argv[i]); -#else - LOG_WARNING("Not compiled with GPU offload support, --n-gpu-layers option will be ignored. " - "See main README.md for information on enabling GPU BLAS support", - {{"n_gpu_layers", params.n_gpu_layers}}); -#endif - } - else if (arg == "--tensor-split" || arg == "-ts") - { - if (++i >= argc) - { - invalid_param = true; - break; - } -#ifdef GGML_USE_CUBLAS - std::string arg_next = argv[i]; - - // split string by , and / - const std::regex regex{R"([,/]+)"}; - std::sregex_token_iterator it{arg_next.begin(), arg_next.end(), regex, -1}; - std::vector split_arg{it, {}}; - GGML_ASSERT(split_arg.size() <= LLAMA_MAX_DEVICES); - - for (size_t i_device = 0; i_device < LLAMA_MAX_DEVICES; ++i_device) - { - if (i_device < split_arg.size()) - { - params.tensor_split[i_device] = std::stof(split_arg[i_device]); - } - else - { - params.tensor_split[i_device] = 0.0f; - } - } -#else - LOG_WARNING("llama.cpp was compiled without cuBLAS. 
It is not possible to set a tensor split.\n", {}); -#endif // GGML_USE_CUBLAS - } - else if (arg == "--no-mul-mat-q" || arg == "-nommq") - { -#ifdef GGML_USE_CUBLAS - params.mul_mat_q = false; -#else - LOG_WARNING("warning: llama.cpp was compiled without cuBLAS. Disabling mul_mat_q kernels has no effect.\n", {}); -#endif // GGML_USE_CUBLAS - } - else if (arg == "--main-gpu" || arg == "-mg") - { - if (++i >= argc) - { - invalid_param = true; - break; - } -#ifdef GGML_USE_CUBLAS - params.main_gpu = std::stoi(argv[i]); -#else - LOG_WARNING("llama.cpp was compiled without cuBLAS. It is not possible to set a main GPU.", {}); -#endif - } - else if (arg == "--lora") - { - if (++i >= argc) - { - invalid_param = true; - break; - } - params.lora_adapter.push_back(std::make_tuple(argv[i], 1.0f)); - params.use_mmap = false; - } - else if (arg == "--lora-scaled") - { - if (++i >= argc) - { - invalid_param = true; - break; - } - const char * lora_adapter = argv[i]; - if (++i >= argc) - { - invalid_param = true; - break; - } - params.lora_adapter.push_back(std::make_tuple(lora_adapter, std::stof(argv[i]))); - params.use_mmap = false; - } - else if (arg == "--lora-base") - { - if (++i >= argc) - { - invalid_param = true; - break; - } - params.lora_base = argv[i]; - } - else if (arg == "-v" || arg == "--verbose") - { -#if SERVER_VERBOSE != 1 - LOG_WARNING("server.cpp is not built with verbose logging.", {}); -#else - server_verbose = true; -#endif - } - else if (arg == "--mlock") - { - params.use_mlock = true; - } - else if (arg == "--no-mmap") - { - params.use_mmap = false; - } - else if (arg == "--numa") - { - params.numa = true; - } - else if (arg == "--embedding") - { - params.embedding = true; - } - else - { - fprintf(stderr, "error: unknown argument: %s\n", arg.c_str()); - server_print_usage(argv[0], default_params, default_sparams); - exit(1); - } - } - - if (invalid_param) - { - fprintf(stderr, "error: invalid parameter for argument: %s\n", arg.c_str()); - server_print_usage(argv[0], default_params, default_sparams); - exit(1); - } -} - -static json format_generation_settings(llama_server_context &llama) -{ - const auto eos_bias = llama.params.logit_bias.find(llama_token_eos(llama.ctx)); - const bool ignore_eos = eos_bias != llama.params.logit_bias.end() && - eos_bias->second < 0.0f && std::isinf(eos_bias->second); - - return json{ - {"n_ctx", llama.n_ctx}, - {"model", llama.params.model_alias}, - {"seed", llama.params.seed}, - {"temp", llama.params.temp}, - {"top_k", llama.params.top_k}, - {"top_p", llama.params.top_p}, - {"tfs_z", llama.params.tfs_z}, - {"typical_p", llama.params.typical_p}, - {"repeat_last_n", llama.params.repeat_last_n}, - {"repeat_penalty", llama.params.repeat_penalty}, - {"presence_penalty", llama.params.presence_penalty}, - {"frequency_penalty", llama.params.frequency_penalty}, - {"mirostat", llama.params.mirostat}, - {"mirostat_tau", llama.params.mirostat_tau}, - {"mirostat_eta", llama.params.mirostat_eta}, - {"penalize_nl", llama.params.penalize_nl}, - {"stop", llama.params.antiprompt}, - {"n_predict", llama.params.n_predict}, - {"n_keep", llama.params.n_keep}, - {"ignore_eos", ignore_eos}, - {"stream", llama.stream}, - {"logit_bias", llama.params.logit_bias}, - {"n_probs", llama.params.n_probs}, - {"grammar", llama.params.grammar}, - }; -} - -static json format_embedding_response(llama_server_context &llama) -{ - return json{ - {"embedding", llama.getEmbedding()}, - }; -} - -static json format_timings(llama_server_context &llama) -{ - const auto timings = 
llama_get_timings(llama.ctx); - - return json{ - {"prompt_n", timings.n_p_eval}, - {"prompt_ms", timings.t_p_eval_ms}, - {"prompt_per_token_ms", timings.t_p_eval_ms / timings.n_p_eval}, - {"prompt_per_second", 1e3 / timings.t_p_eval_ms * timings.n_p_eval}, - - {"predicted_n", timings.n_eval}, - {"predicted_ms", timings.t_eval_ms}, - {"predicted_per_token_ms", timings.t_eval_ms / timings.n_eval}, - {"predicted_per_second", 1e3 / timings.t_eval_ms * timings.n_eval}, - }; -} - -static json format_final_response(llama_server_context &llama, const std::string &content, const std::vector &probs) -{ - - json res = json{ - {"content", content}, - {"stop", true}, - {"model", llama.params.model_alias}, - {"tokens_predicted", llama.num_tokens_predicted}, - {"tokens_evaluated", llama.num_prompt_tokens}, - {"generation_settings", format_generation_settings(llama)}, - {"prompt", llama.prompt}, - {"truncated", llama.truncated}, - {"stopped_eos", llama.stopped_eos}, - {"stopped_word", llama.stopped_word}, - {"stopped_limit", llama.stopped_limit}, - {"stopping_word", llama.stopping_word}, - {"tokens_cached", llama.n_past}, - {"timings", format_timings(llama)}, - }; - - if (llama.params.n_probs > 0) - { - res["completion_probabilities"] = probs_vector_to_json(llama.ctx, probs); - } - - return res; -} - -static json format_partial_response( - llama_server_context &llama, const std::string &content, const std::vector &probs -) { - json res = json{ - {"content", content}, - {"stop", false}, - }; - - if (llama.params.n_probs > 0) - { - res["completion_probabilities"] = probs_vector_to_json(llama.ctx, probs); - } - - return res; -} - -static json format_tokenizer_response(const std::vector &tokens) -{ - return json{ - {"tokens", tokens}}; -} - -static json format_detokenized_response(std::string content) -{ - return json{ - {"content", content}}; -} - -template -static T json_value(const json &body, const std::string &key, const T &default_value) -{ - // Fallback null to default value - return body.contains(key) && !body.at(key).is_null() - ? 
body.value(key, default_value) - : default_value; -} - -static void parse_options_completion(const json &body, llama_server_context &llama) -{ - gpt_params default_params; - - llama.stream = json_value(body, "stream", false); - llama.params.n_predict = json_value(body, "n_predict", default_params.n_predict); - llama.params.top_k = json_value(body, "top_k", default_params.top_k); - llama.params.top_p = json_value(body, "top_p", default_params.top_p); - llama.params.tfs_z = json_value(body, "tfs_z", default_params.tfs_z); - llama.params.typical_p = json_value(body, "typical_p", default_params.typical_p); - llama.params.repeat_last_n = json_value(body, "repeat_last_n", default_params.repeat_last_n); - llama.params.temp = json_value(body, "temperature", default_params.temp); - llama.params.repeat_penalty = json_value(body, "repeat_penalty", default_params.repeat_penalty); - llama.params.presence_penalty = json_value(body, "presence_penalty", default_params.presence_penalty); - llama.params.frequency_penalty = json_value(body, "frequency_penalty", default_params.frequency_penalty); - llama.params.mirostat = json_value(body, "mirostat", default_params.mirostat); - llama.params.mirostat_tau = json_value(body, "mirostat_tau", default_params.mirostat_tau); - llama.params.mirostat_eta = json_value(body, "mirostat_eta", default_params.mirostat_eta); - llama.params.penalize_nl = json_value(body, "penalize_nl", default_params.penalize_nl); - llama.params.n_keep = json_value(body, "n_keep", default_params.n_keep); - llama.params.seed = json_value(body, "seed", default_params.seed); - llama.params.grammar = json_value(body, "grammar", default_params.grammar); - llama.params.n_probs = json_value(body, "n_probs", default_params.n_probs); - - if (body.count("prompt") != 0) - { - llama.prompt = body["prompt"]; - } - else - { - llama.prompt = ""; - } - - llama.params.logit_bias.clear(); - if (json_value(body, "ignore_eos", false)) - { - llama.params.logit_bias[llama_token_eos(llama.ctx)] = -INFINITY; - } - - const auto &logit_bias = body.find("logit_bias"); - if (logit_bias != body.end() && logit_bias->is_array()) - { - const int n_vocab = llama_n_vocab(llama.model); - for (const auto &el : *logit_bias) - { - if (el.is_array() && el.size() == 2 && el[0].is_number_integer()) - { - llama_token tok = el[0].get(); - if (tok >= 0 && tok < n_vocab) - { - if (el[1].is_number()) - { - llama.params.logit_bias[tok] = el[1].get(); - } - else if (el[1].is_boolean() && !el[1].get()) - { - llama.params.logit_bias[tok] = -INFINITY; - } - } - } - } - } - - llama.params.antiprompt.clear(); - const auto &stop = body.find("stop"); - if (stop != body.end() && stop->is_array()) - { - for (const auto &word : *stop) - { - if (!word.empty()) - { - llama.params.antiprompt.push_back(word); - } - } - } - - LOG_VERBOSE("completion parameters parsed", format_generation_settings(llama)); -} - -static void parse_options_infill(const json &body, llama_server_context &llama) -{ - if (body.count("input_prefix") != 0) - { - llama.params.input_prefix = body["input_prefix"]; - } - else - { - llama.params.input_prefix = ""; - } - if (body.count("input_suffix") != 0) - { - llama.params.input_suffix = body["input_suffix"]; - } - else - { - llama.params.input_suffix = ""; - } - parse_options_completion(body, llama); -} - -static void log_server_request(const Request &req, const Response &res) -{ - LOG_INFO("request", { - {"remote_addr", req.remote_addr}, - {"remote_port", req.remote_port}, - {"status", res.status}, - {"method", req.method}, - 
{"path", req.path}, - {"params", req.params}, - }); - - LOG_VERBOSE("request", { - {"request", req.body}, - {"response", res.body}, - }); -} - -static bool is_at_eob(llama_server_context &server_context, const llama_token *tokens, const size_t n_tokens) { - return n_tokens && tokens[n_tokens-1] == llama_token_eos(server_context.ctx); -} - -// Function matching type llama_beam_search_callback_fn_t. -// Custom callback example is called each time the beams lengths increase: -// * Show progress by printing ',' following by number of convergent beam tokens if any. -// * When all beams converge to a common prefix, they are made available in beams_state.beams[0]. -// This is also called when the stop condition is met. -// Collect tokens into std::vector response which is pointed to by callback_data. -static void beam_search_callback(void *callback_data, llama_beams_state beams_state) { - auto & llama = *static_cast(callback_data); - // Mark beams as EOS as needed. - for (size_t i = 0 ; i < beams_state.n_beams ; ++i) { - llama_beam_view& beam_view = beams_state.beam_views[i]; - if (!beam_view.eob && is_at_eob(llama, beam_view.tokens, beam_view.n_tokens)) { - beam_view.eob = true; - } - } - printf(","); // Show progress - if (const size_t n = beams_state.common_prefix_length) { - llama.generated_token_probs.resize(llama.generated_token_probs.size() + n); - assert(0u < beams_state.n_beams); - const llama_token * tokens = beams_state.beam_views[0].tokens; - const auto map = [](llama_token tok) { return completion_token_output{{},tok}; }; - std::transform(tokens, tokens + n, llama.generated_token_probs.end() - n, map); - printf("%zu", n); - } - fflush(stdout); -#if 0 // DEBUG: print current beams for this iteration - std::cout << "\n\nCurrent beams:\n"; - for (size_t i=0 ; i < beams_state.n_beams ; ++i) { - std::cout << "beams["<(&index_html), index_html_len, "text/html"); - return false; }); - - // this is only called if no index.js is found in the public --path - svr.Get("/index.js", [](const Request &, Response &res) - { - res.set_content(reinterpret_cast(&index_js), index_js_len, "text/javascript"); - return false; }); - - // this is only called if no index.html is found in the public --path - svr.Get("/completion.js", [](const Request &, Response &res) - { - res.set_content(reinterpret_cast(&completion_js), completion_js_len, "application/javascript"); - return false; }); - - // this is only called if no index.html is found in the public --path - svr.Get("/json-schema-to-grammar.mjs", [](const Request &, Response &res) - { - res.set_content(reinterpret_cast(&json_schema_to_grammar_mjs), json_schema_to_grammar_mjs_len, "application/javascript"); - return false; }); - - svr.Post("/completion", [&llama](const Request &req, Response &res) - { - auto lock = llama.lock(); - - llama.rewind(); - - llama_reset_timings(llama.ctx); - - parse_options_completion(json::parse(req.body), llama); - - if (!llama.loadGrammar()) - { - res.status = 400; - return; - } - - llama.loadPrompt(); - llama.beginCompletion(); - - if (!llama.stream) { - if (llama.params.n_beams) { - // Fill llama.generated_token_probs vector with final beam. - llama_beam_search(llama.ctx, beam_search_callback, &llama, llama.params.n_beams, - llama.n_past, llama.n_remain); - // Translate llama.generated_token_probs to llama.generated_text. 
- append_to_generated_text_from_generated_token_probs(llama); - } else { - size_t stop_pos = std::string::npos; - - while (llama.has_next_token) { - const completion_token_output token_with_probs = llama.doCompletion(); - const std::string token_text = token_with_probs.tok == -1 ? "" : llama_token_to_piece(llama.ctx, token_with_probs.tok); - - stop_pos = llama.findStoppingStrings(llama.generated_text, - token_text.size(), STOP_FULL); - } - - if (stop_pos == std::string::npos) { - stop_pos = llama.findStoppingStrings(llama.generated_text, 0, STOP_PARTIAL); - } - if (stop_pos != std::string::npos) { - llama.generated_text.erase(llama.generated_text.begin() + stop_pos, - llama.generated_text.end()); - } - } - - auto probs = llama.generated_token_probs; - if (llama.params.n_probs > 0 && llama.stopped_word) { - const std::vector stop_word_toks = llama_tokenize(llama.ctx, llama.stopping_word, false); - probs = std::vector(llama.generated_token_probs.begin(), llama.generated_token_probs.end() - stop_word_toks.size()); - } - - const json data = format_final_response(llama, llama.generated_text, probs); - - llama_print_timings(llama.ctx); - - res.set_content(data.dump(-1, ' ', false, json::error_handler_t::replace), - "application/json"); - } else { - const auto chunked_content_provider = [&](size_t, DataSink & sink) { - size_t sent_count = 0; - size_t sent_token_probs_index = 0; - - while (llama.has_next_token) { - const completion_token_output token_with_probs = llama.doCompletion(); - if (token_with_probs.tok == -1 || llama.multibyte_pending > 0) { - continue; - } - const std::string token_text = llama_token_to_piece(llama.ctx, token_with_probs.tok); - - size_t pos = std::min(sent_count, llama.generated_text.size()); - - const std::string str_test = llama.generated_text.substr(pos); - bool is_stop_full = false; - size_t stop_pos = - llama.findStoppingStrings(str_test, token_text.size(), STOP_FULL); - if (stop_pos != std::string::npos) { - is_stop_full = true; - llama.generated_text.erase( - llama.generated_text.begin() + pos + stop_pos, - llama.generated_text.end()); - pos = std::min(sent_count, llama.generated_text.size()); - } else { - is_stop_full = false; - stop_pos = llama.findStoppingStrings(str_test, token_text.size(), - STOP_PARTIAL); - } - - if ( - stop_pos == std::string::npos || - // Send rest of the text if we are at the end of the generation - (!llama.has_next_token && !is_stop_full && stop_pos > 0) - ) { - const std::string to_send = llama.generated_text.substr(pos, std::string::npos); - - sent_count += to_send.size(); - - std::vector probs_output = {}; - - if (llama.params.n_probs > 0) { - const std::vector to_send_toks = llama_tokenize(llama.ctx, to_send, false); - size_t probs_pos = std::min(sent_token_probs_index, llama.generated_token_probs.size()); - size_t probs_stop_pos = std::min(sent_token_probs_index + to_send_toks.size(), llama.generated_token_probs.size()); - if (probs_pos < probs_stop_pos) { - probs_output = std::vector(llama.generated_token_probs.begin() + probs_pos, llama.generated_token_probs.begin() + probs_stop_pos); - } - sent_token_probs_index = probs_stop_pos; - } - - const json data = format_partial_response(llama, to_send, probs_output); - - const std::string str = - "data: " + - data.dump(-1, ' ', false, json::error_handler_t::replace) + - "\n\n"; - - LOG_VERBOSE("data stream", { - { "to_send", str } - }); - - if (!sink.write(str.data(), str.size())) { - LOG_VERBOSE("stream closed", {}); - llama_print_timings(llama.ctx); - return false; - } - } - - if 
(!llama.has_next_token) { - // Generation is done, send extra information. - const json data = format_final_response( - llama, - "", - std::vector(llama.generated_token_probs.begin(), llama.generated_token_probs.begin() + sent_token_probs_index) - ); - - const std::string str = - "data: " + - data.dump(-1, ' ', false, json::error_handler_t::replace) + - "\n\n"; - - LOG_VERBOSE("data stream", { - { "to_send", str } - }); - - if (!sink.write(str.data(), str.size())) { - LOG_VERBOSE("stream closed", {}); - llama_print_timings(llama.ctx); - return false; - } - } - } - - llama_print_timings(llama.ctx); - sink.done(); - return true; - }; - const auto on_complete = [&](bool) { - llama.mutex.unlock(); - }; - lock.release(); - res.set_chunked_content_provider("text/event-stream", chunked_content_provider, on_complete); - } }); - - svr.Post("/infill", [&llama](const Request &req, Response &res) - { - auto lock = llama.lock(); - - llama.rewind(); - - llama_reset_timings(llama.ctx); - - parse_options_infill(json::parse(req.body), llama); - - if (!llama.loadGrammar()) - { - res.status = 400; - return; - } - llama.loadInfill(); - llama.beginCompletion(); - const auto chunked_content_provider = [&](size_t, DataSink & sink) { - size_t sent_count = 0; - size_t sent_token_probs_index = 0; - - while (llama.has_next_token) { - const completion_token_output token_with_probs = llama.doCompletion(); - if (token_with_probs.tok == -1 || llama.multibyte_pending > 0) { - continue; - } - const std::string token_text = llama_token_to_piece(llama.ctx, token_with_probs.tok); - - size_t pos = std::min(sent_count, llama.generated_text.size()); - - const std::string str_test = llama.generated_text.substr(pos); - bool is_stop_full = false; - size_t stop_pos = - llama.findStoppingStrings(str_test, token_text.size(), STOP_FULL); - if (stop_pos != std::string::npos) { - is_stop_full = true; - llama.generated_text.erase( - llama.generated_text.begin() + pos + stop_pos, - llama.generated_text.end()); - pos = std::min(sent_count, llama.generated_text.size()); - } else { - is_stop_full = false; - stop_pos = llama.findStoppingStrings(str_test, token_text.size(), - STOP_PARTIAL); - } - - if ( - stop_pos == std::string::npos || - // Send rest of the text if we are at the end of the generation - (!llama.has_next_token && !is_stop_full && stop_pos > 0) - ) { - const std::string to_send = llama.generated_text.substr(pos, std::string::npos); - - sent_count += to_send.size(); - - std::vector probs_output = {}; - - if (llama.params.n_probs > 0) { - const std::vector to_send_toks = llama_tokenize(llama.ctx, to_send, false); - size_t probs_pos = std::min(sent_token_probs_index, llama.generated_token_probs.size()); - size_t probs_stop_pos = std::min(sent_token_probs_index + to_send_toks.size(), llama.generated_token_probs.size()); - if (probs_pos < probs_stop_pos) { - probs_output = std::vector(llama.generated_token_probs.begin() + probs_pos, llama.generated_token_probs.begin() + probs_stop_pos); - } - sent_token_probs_index = probs_stop_pos; - } - - const json data = format_partial_response(llama, to_send, probs_output); - - const std::string str = - "data: " + - data.dump(-1, ' ', false, json::error_handler_t::replace) + - "\n\n"; - - LOG_VERBOSE("data stream", { - { "to_send", str } - }); - - if (!sink.write(str.data(), str.size())) { - LOG_VERBOSE("stream closed", {}); - llama_print_timings(llama.ctx); - return false; - } - } - - if (!llama.has_next_token) { - // Generation is done, send extra information. 
- const json data = format_final_response( - llama, - "", - std::vector(llama.generated_token_probs.begin(), llama.generated_token_probs.begin() + sent_token_probs_index) - ); - - const std::string str = - "data: " + - data.dump(-1, ' ', false, json::error_handler_t::replace) + - "\n\n"; - - LOG_VERBOSE("data stream", { - { "to_send", str } - }); - - if (!sink.write(str.data(), str.size())) { - LOG_VERBOSE("stream closed", {}); - llama_print_timings(llama.ctx); - return false; - } - } - } - - llama_print_timings(llama.ctx); - sink.done(); - return true; - }; - const auto on_complete = [&](bool) { - llama.mutex.unlock(); - }; - lock.release(); - res.set_chunked_content_provider("text/event-stream", chunked_content_provider, on_complete); - }); - - svr.Get("/model.json", [&llama](const Request &, Response &res) - { - const json data = format_generation_settings(llama); - return res.set_content(data.dump(), "application/json"); }); - - svr.Options(R"(/.*)", [](const Request &, Response &res) - { return res.set_content("", "application/json"); }); - - svr.Post("/tokenize", [&llama](const Request &req, Response &res) - { - auto lock = llama.lock(); - - const json body = json::parse(req.body); - std::vector tokens; - if (body.count("content") != 0) - { - tokens = llama.tokenize(body["content"], false); - } - const json data = format_tokenizer_response(tokens); - return res.set_content(data.dump(), "application/json"); }); - - svr.Post("/detokenize", [&llama](const Request &req, Response &res) - { - auto lock = llama.lock(); - - const json body = json::parse(req.body); - std::string content; - if (body.count("tokens") != 0) - { - const std::vector tokens = body["tokens"]; - content = tokens_to_str(llama.ctx, tokens.cbegin(), tokens.cend()); - } - - const json data = format_detokenized_response(content); - return res.set_content(data.dump(), "application/json"); }); - - svr.Post("/embedding", [&llama](const Request &req, Response &res) - { - auto lock = llama.lock(); - - const json body = json::parse(req.body); - - llama.rewind(); - llama_reset_timings(llama.ctx); - if (body.count("content") != 0) - { - llama.prompt = body["content"]; - } - else - { - llama.prompt = ""; - } - llama.params.n_predict = 0; - llama.loadPrompt(); - llama.beginCompletion(); - llama.doCompletion(); - - const json data = format_embedding_response(llama); - return res.set_content(data.dump(), "application/json"); }); - - svr.set_logger(log_server_request); - - svr.set_exception_handler([](const Request &, Response &res, std::exception_ptr ep) - { - const char fmt[] = "500 Internal Server Error\n%s"; - char buf[BUFSIZ]; - try { - std::rethrow_exception(std::move(ep)); - } catch (std::exception & e) { - snprintf(buf, sizeof(buf), fmt, e.what()); - } catch (...) 
{ - snprintf(buf, sizeof(buf), fmt, "Unknown Exception"); - } - res.set_content(buf, "text/plain"); - res.status = 500; }); - - svr.set_error_handler([](const Request &, Response &res) - { - if (res.status == 400) { - res.set_content("Invalid request", "text/plain"); - } else if (res.status != 500) { - res.set_content("File Not Found", "text/plain"); - res.status = 404; - } }); - - // set timeouts and change hostname and port - svr.set_read_timeout(sparams.read_timeout); - svr.set_write_timeout(sparams.write_timeout); - - if (!svr.bind_to_port(sparams.hostname, sparams.port)) - { - fprintf(stderr, "\ncouldn't bind to server socket: hostname=%s port=%d\n\n", sparams.hostname.c_str(), sparams.port); - return 1; - } - - // Set the base directory for serving static files - svr.set_base_dir(sparams.public_path); - - // to make it ctrl+clickable: - printf("\nllama server listening at http://%s:%d\n\n", sparams.hostname.c_str(), sparams.port); - - LOG_INFO("HTTP server listening", { - {"hostname", sparams.hostname}, - {"port", sparams.port}, - }); - - if (!svr.listen_after_bind()) - { - return 1; - } - - if (llama.grammar != nullptr) { - llama_grammar_free(llama.grammar); - } - llama_backend_free(); - - return 0; -} diff --git a/spaces/Intoval/privateChatGPT/readme/README_ja.md b/spaces/Intoval/privateChatGPT/readme/README_ja.md deleted file mode 100644 index 5f4eb5afc65eea8afba736b5590dece058cb6b91..0000000000000000000000000000000000000000 --- a/spaces/Intoval/privateChatGPT/readme/README_ja.md +++ /dev/null @@ -1,126 +0,0 @@ -
    - - 简体中文 | English | 日本語 -
    - -

    川虎 Chat 🐯 Chuanhu Chat

    -
    - - Logo - - -

    -

A lightweight and user-friendly Web-UI for LLMs such as ChatGPT/ChatGLM/LLaMA

    -

    - - Tests Passing - - - GitHub Contributors - - - GitHub pull requests - -

- Streaming output / Unlimited conversations / Chat history / Preset prompts / Chat with files
- Web search / LaTeX rendering / Table rendering / Code highlighting
- Auto dark mode / Adaptive web interface / WeChat-like theme
- Multi-parameter tuning / Multiple API-Key support / Multi-user support
- GPT-4 support / Local deployment of LLMs.
-

- Video Tutorial
- ·
- 2.0 Introduction
- ·
- 3.0 Introduction & Tutorial
- ||
- Online Trial
- ·
- One-Click Deploy
-

    -

    - Animation Demo -

    -

    -
-
-
-## Usage Tips
-
-- You can use a system prompt to control ChatGPT more precisely.
-- To use a prompt template, select a prompt template collection, then pick a specific prompt from the dropdown menu. If the answer is unsatisfactory, retry with the `🔄再生成` (regenerate) button.
-- To insert a line break in the input box, press Shift + Enter.
-- To quickly cycle through your input history, press the ↑ key in the input box.
-- To deploy the program on a server, change the last line of the program to `demo.launch(server_name="0.0.0.0", server_port=)`.
-- To get a public share link, change the last line of the program to `demo.launch(share=True)`. Note that the program must be running for the public link to be reachable.
-- When using it on Hugging Face Spaces: for faster and safer use, we recommend that you **Duplicate Space** and run the program in your own Space.
-
-## Installation
-
-```shell
-git clone https://github.com/GaiZhenbiao/ChuanhuChatGPT.git
-cd ChuanhuChatGPT
-pip install -r requirements.txt
-```
-
-Next, copy `config_example.json`, rename it to `config.json`, and fill in your API-Key and other settings in that file.
-
-```shell
-python ChuanhuChatbot.py
-```
-
-A browser window will open and you can chat with ChatGPT.
-
-> **Note**
->
-> See the [wiki page](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程) for detailed instructions.
-
-## Troubleshooting
-
-If you run into a problem, it is best to first pull the latest changes of this project manually. The steps are:
-
-1. Click `Download ZIP` on the web page to download the latest code archive, or
-   ```shell
-   git pull https://github.com/GaiZhenbiao/ChuanhuChatGPT.git main -f
-   ```
-2. Try reinstalling the dependencies, since new ones may have been introduced.
-   ```
-   pip install -r requirements.txt
-   ```
-3. Update Gradio
-   ```
-   pip install gradio --upgrade --force-reinstall
-   ```
-
-In general, following these steps will resolve most problems.
-
-If the problem still persists, please refer to this page: [Frequently Asked Questions (FAQ)](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题)
-
-That page lists almost every possible problem along with its solution. Please read it carefully.
-
-## More Information
-
-For more details, please visit the [wiki](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki):
-
-- [How to contribute a translation](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/Localization)
-- [How to make a contribution](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/贡献指南)
-- [How to cite the project](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用许可#如何引用该项目)
-- [Project changelog](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/更新日志)
-- [Project license](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用许可)
-
-## Starchart
-
-[![Star History Chart](https://api.star-history.com/svg?repos=GaiZhenbiao/ChuanhuChatGPT&type=Date)](https://star-history.com/#GaiZhenbiao/ChuanhuChatGPT&Date)
-
-## Contributors
-
-
-
-
-## Sponsor
-
-🐯 If this project has been helpful to you, feel free to buy me a Coke or a coffee~
-
-Buy Me A Coffee
-
-image
diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/schedulers/scheduling_sde_ve_flax.py b/spaces/Jackflack09/diffuse-custom/diffusers/schedulers/scheduling_sde_ve_flax.py
deleted file mode 100644
index d1f762bc90c471d6bbc7f33e5854d014b1e25667..0000000000000000000000000000000000000000
--- a/spaces/Jackflack09/diffuse-custom/diffusers/schedulers/scheduling_sde_ve_flax.py
+++ /dev/null
@@ -1,276 +0,0 @@
-# Copyright 2022 Google Brain and The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
- -# DISCLAIMER: This file is strongly influenced by https://github.com/yang-song/score_sde_pytorch - -from dataclasses import dataclass -from typing import Optional, Tuple, Union - -import flax -import jax.numpy as jnp -from jax import random - -from ..configuration_utils import ConfigMixin, register_to_config -from .scheduling_utils_flax import FlaxSchedulerMixin, FlaxSchedulerOutput, broadcast_to_shape_from_left - - -@flax.struct.dataclass -class ScoreSdeVeSchedulerState: - # setable values - timesteps: Optional[jnp.ndarray] = None - discrete_sigmas: Optional[jnp.ndarray] = None - sigmas: Optional[jnp.ndarray] = None - - @classmethod - def create(cls): - return cls() - - -@dataclass -class FlaxSdeVeOutput(FlaxSchedulerOutput): - """ - Output class for the ScoreSdeVeScheduler's step function output. - - Args: - state (`ScoreSdeVeSchedulerState`): - prev_sample (`jnp.ndarray` of shape `(batch_size, num_channels, height, width)` for images): - Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the - denoising loop. - prev_sample_mean (`jnp.ndarray` of shape `(batch_size, num_channels, height, width)` for images): - Mean averaged `prev_sample`. Same as `prev_sample`, only mean-averaged over previous timesteps. - """ - - state: ScoreSdeVeSchedulerState - prev_sample: jnp.ndarray - prev_sample_mean: Optional[jnp.ndarray] = None - - -class FlaxScoreSdeVeScheduler(FlaxSchedulerMixin, ConfigMixin): - """ - The variance exploding stochastic differential equation (SDE) scheduler. - - For more information, see the original paper: https://arxiv.org/abs/2011.13456 - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. - [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - Args: - num_train_timesteps (`int`): number of diffusion steps used to train the model. - snr (`float`): - coefficient weighting the step from the model_output sample (from the network) to the random noise. - sigma_min (`float`): - initial noise scale for sigma sequence in sampling procedure. The minimum sigma should mirror the - distribution of the data. - sigma_max (`float`): maximum value used for the range of continuous timesteps passed into the model. - sampling_eps (`float`): the end value of sampling, where timesteps decrease progressively from 1 to - epsilon. - correct_steps (`int`): number of correction steps performed on a produced sample. - """ - - @property - def has_state(self): - return True - - @register_to_config - def __init__( - self, - num_train_timesteps: int = 2000, - snr: float = 0.15, - sigma_min: float = 0.01, - sigma_max: float = 1348.0, - sampling_eps: float = 1e-5, - correct_steps: int = 1, - ): - pass - - def create_state(self): - state = ScoreSdeVeSchedulerState.create() - return self.set_sigmas( - state, - self.config.num_train_timesteps, - self.config.sigma_min, - self.config.sigma_max, - self.config.sampling_eps, - ) - - def set_timesteps( - self, state: ScoreSdeVeSchedulerState, num_inference_steps: int, shape: Tuple = (), sampling_eps: float = None - ) -> ScoreSdeVeSchedulerState: - """ - Sets the continuous timesteps used for the diffusion chain. Supporting function to be run before inference. 
- - Args: - state (`ScoreSdeVeSchedulerState`): the `FlaxScoreSdeVeScheduler` state data class instance. - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. - sampling_eps (`float`, optional): final timestep value (overrides value given at Scheduler instantiation). - - """ - sampling_eps = sampling_eps if sampling_eps is not None else self.config.sampling_eps - - timesteps = jnp.linspace(1, sampling_eps, num_inference_steps) - return state.replace(timesteps=timesteps) - - def set_sigmas( - self, - state: ScoreSdeVeSchedulerState, - num_inference_steps: int, - sigma_min: float = None, - sigma_max: float = None, - sampling_eps: float = None, - ) -> ScoreSdeVeSchedulerState: - """ - Sets the noise scales used for the diffusion chain. Supporting function to be run before inference. - - The sigmas control the weight of the `drift` and `diffusion` components of sample update. - - Args: - state (`ScoreSdeVeSchedulerState`): the `FlaxScoreSdeVeScheduler` state data class instance. - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. - sigma_min (`float`, optional): - initial noise scale value (overrides value given at Scheduler instantiation). - sigma_max (`float`, optional): final noise scale value (overrides value given at Scheduler instantiation). - sampling_eps (`float`, optional): final timestep value (overrides value given at Scheduler instantiation). - """ - sigma_min = sigma_min if sigma_min is not None else self.config.sigma_min - sigma_max = sigma_max if sigma_max is not None else self.config.sigma_max - sampling_eps = sampling_eps if sampling_eps is not None else self.config.sampling_eps - if state.timesteps is None: - state = self.set_timesteps(state, num_inference_steps, sampling_eps) - - discrete_sigmas = jnp.exp(jnp.linspace(jnp.log(sigma_min), jnp.log(sigma_max), num_inference_steps)) - sigmas = jnp.array([sigma_min * (sigma_max / sigma_min) ** t for t in state.timesteps]) - - return state.replace(discrete_sigmas=discrete_sigmas, sigmas=sigmas) - - def get_adjacent_sigma(self, state, timesteps, t): - return jnp.where(timesteps == 0, jnp.zeros_like(t), state.discrete_sigmas[timesteps - 1]) - - def step_pred( - self, - state: ScoreSdeVeSchedulerState, - model_output: jnp.ndarray, - timestep: int, - sample: jnp.ndarray, - key: random.KeyArray, - return_dict: bool = True, - ) -> Union[FlaxSdeVeOutput, Tuple]: - """ - Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion - process from the learned model outputs (most often the predicted noise). - - Args: - state (`ScoreSdeVeSchedulerState`): the `FlaxScoreSdeVeScheduler` state data class instance. - model_output (`jnp.ndarray`): direct output from learned diffusion model. - timestep (`int`): current discrete timestep in the diffusion chain. - sample (`jnp.ndarray`): - current instance of sample being created by diffusion process. - generator: random number generator. - return_dict (`bool`): option for returning tuple rather than FlaxSdeVeOutput class - - Returns: - [`FlaxSdeVeOutput`] or `tuple`: [`FlaxSdeVeOutput`] if `return_dict` is True, otherwise a `tuple`. When - returning a tuple, the first element is the sample tensor. 
- - """ - if state.timesteps is None: - raise ValueError( - "`state.timesteps` is not set, you need to run 'set_timesteps' after creating the scheduler" - ) - - timestep = timestep * jnp.ones( - sample.shape[0], - ) - timesteps = (timestep * (len(state.timesteps) - 1)).long() - - sigma = state.discrete_sigmas[timesteps] - adjacent_sigma = self.get_adjacent_sigma(state, timesteps, timestep) - drift = jnp.zeros_like(sample) - diffusion = (sigma**2 - adjacent_sigma**2) ** 0.5 - - # equation 6 in the paper: the model_output modeled by the network is grad_x log pt(x) - # also equation 47 shows the analog from SDE models to ancestral sampling methods - diffusion = diffusion.flatten() - diffusion = broadcast_to_shape_from_left(diffusion, sample.shape) - drift = drift - diffusion**2 * model_output - - # equation 6: sample noise for the diffusion term of - key = random.split(key, num=1) - noise = random.normal(key=key, shape=sample.shape) - prev_sample_mean = sample - drift # subtract because `dt` is a small negative timestep - # TODO is the variable diffusion the correct scaling term for the noise? - prev_sample = prev_sample_mean + diffusion * noise # add impact of diffusion field g - - if not return_dict: - return (prev_sample, prev_sample_mean, state) - - return FlaxSdeVeOutput(prev_sample=prev_sample, prev_sample_mean=prev_sample_mean, state=state) - - def step_correct( - self, - state: ScoreSdeVeSchedulerState, - model_output: jnp.ndarray, - sample: jnp.ndarray, - key: random.KeyArray, - return_dict: bool = True, - ) -> Union[FlaxSdeVeOutput, Tuple]: - """ - Correct the predicted sample based on the output model_output of the network. This is often run repeatedly - after making the prediction for the previous timestep. - - Args: - state (`ScoreSdeVeSchedulerState`): the `FlaxScoreSdeVeScheduler` state data class instance. - model_output (`jnp.ndarray`): direct output from learned diffusion model. - sample (`jnp.ndarray`): - current instance of sample being created by diffusion process. - generator: random number generator. - return_dict (`bool`): option for returning tuple rather than FlaxSdeVeOutput class - - Returns: - [`FlaxSdeVeOutput`] or `tuple`: [`FlaxSdeVeOutput`] if `return_dict` is True, otherwise a `tuple`. When - returning a tuple, the first element is the sample tensor. - - """ - if state.timesteps is None: - raise ValueError( - "`state.timesteps` is not set, you need to run 'set_timesteps' after creating the scheduler" - ) - - # For small batch sizes, the paper "suggest replacing norm(z) with sqrt(d), where d is the dim. 
of z" - # sample noise for correction - key = random.split(key, num=1) - noise = random.normal(key=key, shape=sample.shape) - - # compute step size from the model_output, the noise, and the snr - grad_norm = jnp.linalg.norm(model_output) - noise_norm = jnp.linalg.norm(noise) - step_size = (self.config.snr * noise_norm / grad_norm) ** 2 * 2 - step_size = step_size * jnp.ones(sample.shape[0]) - - # compute corrected sample: model_output term and noise term - step_size = step_size.flatten() - step_size = broadcast_to_shape_from_left(step_size, sample.shape) - prev_sample_mean = sample + step_size * model_output - prev_sample = prev_sample_mean + ((step_size * 2) ** 0.5) * noise - - if not return_dict: - return (prev_sample, state) - - return FlaxSdeVeOutput(prev_sample=prev_sample, state=state) - - def __len__(self): - return self.config.num_train_timesteps diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Conversion/README.md b/spaces/Kevin676/ChatGPT-with-Voice-Conversion/README.md deleted file mode 100644 index c5570d0e1bce5f6912b630ce30bbb352670fc539..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Conversion/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Voice Conversion Yourtts -emoji: 😻 -colorFrom: yellow -colorTo: green -sdk: gradio -sdk_version: 3.25.0 -app_file: app.py -pinned: false -license: unknown -duplicated_from: ramkamal2000/voice-conversion-yourtts ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Knowles-Lab/tiger/tiger.md b/spaces/Knowles-Lab/tiger/tiger.md deleted file mode 100644 index 9b770a376f08c69426d5585e79a157aab7f01a8b..0000000000000000000000000000000000000000 --- a/spaces/Knowles-Lab/tiger/tiger.md +++ /dev/null @@ -1,46 +0,0 @@ -## TIGER Online Tool for Cas13 Efficacy Prediction - -Welcome to TIGER! -This online tool accompanies our recent study from the labs of [David Knowles](https://daklab.github.io/) and [Neville Sanjana](http://sanjanalab.org/). -TIGER's ability to make accurate on- and off-target predictions enables users to 1) design highly effective gRNAs and 2) precisely modulate transcript expression by engineered gRNA-target mismatches. - -If you use the TIGER Online Tool in your study, please consider citing: -> **[Prediction of on-target and off-target activity of CRISPR–Cas13d guide RNAs using deep learning](http://sanjanalab.org/reprints/WesselsStirn_NBT_2023.pdf).** Wessels, H.-H.\*, Stirn, A.\*, Méndez-Mancilla, A., Kim, E. J., Hart, S. K., Knowles, D. A.#, & Sanjana, N. E.# *Nature Biotechnology* (2023). [https://doi.org/10.1038/s41587-023-01830-8](https://doi.org/10.1038/s41587-023-01830-8) - -Please note that this precompiled, online tool differs from the manuscript slightly. -First, this version of TIGER predicts using just target and guide sequence (see [Figure 3c](http://sanjanalab.org/reprints/WesselsStirn_NBT_2023.pdf)). Second, we map TIGER's predictions to the unit interval (0,1) to make estimates more interpretable: A `Guide Score` close to 1 corresponds to high gRNA activity (i.e. desirable for on-target guides). -A `Guide Score` near 0 denotes no/minimal activity (i.e. desirable for predicted off-targets to minimize the activity of these gRNAs on unintended targets). -This transformation is monotonic and therefore preserves Spearman, AUROC, and AUPRC performance. -These estimates (transformations of log-fold-change predictions from TIGER) appear in the `Guide Score` column of this online tool’s output. 
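To make the `Guide Score` idea concrete, here is a minimal sketch of one way a monotonic squashing of predicted log-fold-change into (0,1) could look. The logistic form, the `midpoint`/`scale` parameters, and the sign convention (more negative predicted log-fold-change = stronger knockdown = higher score) are illustrative assumptions for this sketch only, not the calibration used by the published TIGER model.

```python
import numpy as np

def guide_score(lfc_pred, midpoint=0.0, scale=1.0):
    """Map predicted log-fold-change to a (0,1) guide score.

    Illustrative only: the logistic shape and the `midpoint`/`scale`
    parameters are assumptions, not TIGER's actual transformation.
    More negative log-fold-change (stronger knockdown) maps closer to 1.
    """
    lfc = np.asarray(lfc_pred, dtype=float)
    return 1.0 / (1.0 + np.exp((lfc - midpoint) / scale))

# Hypothetical predictions for three guides: strong, moderate, and weak knockdown.
print(guide_score([-2.0, -0.5, 0.2]))  # approx. [0.88, 0.62, 0.45]
```

Because any strictly monotonic map preserves rank order, Spearman correlation, AUROC, and AUPRC computed on such scores match those computed on the raw log-fold-change predictions, which is the property the paragraph above relies on.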
- -### Using the TIGER Online Tool - -The tool supports two methods for transcript entry: -1) Manual entry of a single transcript -2) Uploading a FASTA file that can contain one or more transcripts. Each transcript **must** have a unique ID. - -The tool has three run modes: -1) Report all on-target gRNAs for each provided transcript. -2) Report the top 10 most active, on-target gRNAs for each provided transcript. This mode allows for the optional identification of off-target effects. For off-target avoidance, please note that a higher `Guide Score` (closer to 1) corresponds to *more* likely off-target effects. -3) Report the top 10 most active, on-target gRNAs for each provided transcript and their titration candidates (all possible single mismatches). A higher `Guide Score` (closer to 1) corresponds to greater transcript knockdown. - -The tool uses Gencode v19 (protein-coding and non-coding RNAs) to identify potential off-target transcripts. -Due to computational limitations, the online tool only supports off-target predictions for the top 10 most active, on-target gRNAs per transcript. - -### Future Development Plans - -- Off-target scanning speed improvements -- Off-target scanning for titration (engineered mismatch) mode -- Allow users to select more than the top ten guides per transcript -- Incorporate non-scalar features (target accessibility, hybridization energies, etc...) - -To report bugs or to request additional features, please click the "Community" button in the top right corner of this screen and start a new discussion. -Alternatively, please email [Andrew Stirn](mailto:andrew.stirn@cs.columbia.edu). - -#### Version -You are using version 2.0 of this tool. -All hugging face versions are marked with a `vX.x` tag. -The code used to train this model can be found [here](https://github.com/daklab/tiger)--specifically, please see `tiger_trainer.py` therein. -This GitHub repository has matching `vX.x` tags. -We will increment the major number when a change causes a difference in predictions (e.g. retraining the model). -We will otherwise increment the minor number (e.g. changes to the user interface, speed improvements, etc...). diff --git a/spaces/Komeng/Stock_Prediction/setup.sh b/spaces/Komeng/Stock_Prediction/setup.sh deleted file mode 100644 index c8650a8b74a58d9a5f53b185fd711c5668e1cd52..0000000000000000000000000000000000000000 --- a/spaces/Komeng/Stock_Prediction/setup.sh +++ /dev/null @@ -1,13 +0,0 @@ -mkdir -p ~/.streamlit/ - -echo "\ -[general]\n\ -email = \"your-email@domain.com\"\n\ -" > ~/.streamlit/credentials.toml - -echo "\ -[server]\n\ -headless = true\n\ -enableCORS=false\n\ -port = $PORT\n\ -" > ~/.streamlit/config.toml \ No newline at end of file diff --git a/spaces/KyanChen/RSPrompter/mmdet/datasets/xml_style.py b/spaces/KyanChen/RSPrompter/mmdet/datasets/xml_style.py deleted file mode 100644 index f5a6d8ca9b933d45af71c8b020aab5b6459cd3c4..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/datasets/xml_style.py +++ /dev/null @@ -1,186 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import xml.etree.ElementTree as ET -from typing import List, Optional, Union - -import mmcv -from mmengine.fileio import get, get_local_path, list_from_file - -from mmdet.registry import DATASETS -from .base_det_dataset import BaseDetDataset - - -@DATASETS.register_module() -class XMLDataset(BaseDetDataset): - """XML dataset for detection. - - Args: - img_subdir (str): Subdir where images are stored. Default: JPEGImages. 
- ann_subdir (str): Subdir where annotations are. Default: Annotations. - backend_args (dict, optional): Arguments to instantiate the - corresponding backend. Defaults to None. - """ - - def __init__(self, - img_subdir: str = 'JPEGImages', - ann_subdir: str = 'Annotations', - **kwargs) -> None: - self.img_subdir = img_subdir - self.ann_subdir = ann_subdir - super().__init__(**kwargs) - - @property - def sub_data_root(self) -> str: - """Return the sub data root.""" - return self.data_prefix.get('sub_data_root', '') - - def load_data_list(self) -> List[dict]: - """Load annotation from XML style ann_file. - - Returns: - list[dict]: Annotation info from XML file. - """ - assert self._metainfo.get('classes', None) is not None, \ - '`classes` in `XMLDataset` can not be None.' - self.cat2label = { - cat: i - for i, cat in enumerate(self._metainfo['classes']) - } - - data_list = [] - img_ids = list_from_file(self.ann_file, backend_args=self.backend_args) - for img_id in img_ids: - file_name = osp.join(self.img_subdir, f'{img_id}.jpg') - xml_path = osp.join(self.sub_data_root, self.ann_subdir, - f'{img_id}.xml') - - raw_img_info = {} - raw_img_info['img_id'] = img_id - raw_img_info['file_name'] = file_name - raw_img_info['xml_path'] = xml_path - - parsed_data_info = self.parse_data_info(raw_img_info) - data_list.append(parsed_data_info) - return data_list - - @property - def bbox_min_size(self) -> Optional[str]: - """Return the minimum size of bounding boxes in the images.""" - if self.filter_cfg is not None: - return self.filter_cfg.get('bbox_min_size', None) - else: - return None - - def parse_data_info(self, img_info: dict) -> Union[dict, List[dict]]: - """Parse raw annotation to target format. - - Args: - img_info (dict): Raw image information, usually it includes - `img_id`, `file_name`, and `xml_path`. - - Returns: - Union[dict, List[dict]]: Parsed annotation. - """ - data_info = {} - img_path = osp.join(self.sub_data_root, img_info['file_name']) - data_info['img_path'] = img_path - data_info['img_id'] = img_info['img_id'] - data_info['xml_path'] = img_info['xml_path'] - - # deal with xml file - with get_local_path( - img_info['xml_path'], - backend_args=self.backend_args) as local_path: - raw_ann_info = ET.parse(local_path) - root = raw_ann_info.getroot() - size = root.find('size') - if size is not None: - width = int(size.find('width').text) - height = int(size.find('height').text) - else: - img_bytes = get(img_path, backend_args=self.backend_args) - img = mmcv.imfrombytes(img_bytes, backend='cv2') - height, width = img.shape[:2] - del img, img_bytes - - data_info['height'] = height - data_info['width'] = width - - data_info['instances'] = self._parse_instance_info( - raw_ann_info, minus_one=True) - - return data_info - - def _parse_instance_info(self, - raw_ann_info: ET, - minus_one: bool = True) -> List[dict]: - """parse instance information. - - Args: - raw_ann_info (ElementTree): ElementTree object. - minus_one (bool): Whether to subtract 1 from the coordinates. - Defaults to True. - - Returns: - List[dict]: List of instances. 
- """ - instances = [] - for obj in raw_ann_info.findall('object'): - instance = {} - name = obj.find('name').text - if name not in self._metainfo['classes']: - continue - difficult = obj.find('difficult') - difficult = 0 if difficult is None else int(difficult.text) - bnd_box = obj.find('bndbox') - bbox = [ - int(float(bnd_box.find('xmin').text)), - int(float(bnd_box.find('ymin').text)), - int(float(bnd_box.find('xmax').text)), - int(float(bnd_box.find('ymax').text)) - ] - - # VOC needs to subtract 1 from the coordinates - if minus_one: - bbox = [x - 1 for x in bbox] - - ignore = False - if self.bbox_min_size is not None: - assert not self.test_mode - w = bbox[2] - bbox[0] - h = bbox[3] - bbox[1] - if w < self.bbox_min_size or h < self.bbox_min_size: - ignore = True - if difficult or ignore: - instance['ignore_flag'] = 1 - else: - instance['ignore_flag'] = 0 - instance['bbox'] = bbox - instance['bbox_label'] = self.cat2label[name] - instances.append(instance) - return instances - - def filter_data(self) -> List[dict]: - """Filter annotations according to filter_cfg. - - Returns: - List[dict]: Filtered results. - """ - if self.test_mode: - return self.data_list - - filter_empty_gt = self.filter_cfg.get('filter_empty_gt', False) \ - if self.filter_cfg is not None else False - min_size = self.filter_cfg.get('min_size', 0) \ - if self.filter_cfg is not None else 0 - - valid_data_infos = [] - for i, data_info in enumerate(self.data_list): - width = data_info['width'] - height = data_info['height'] - if filter_empty_gt and len(data_info['instances']) == 0: - continue - if min(width, height) >= min_size: - valid_data_infos.append(data_info) - - return valid_data_infos diff --git a/spaces/L1211/New_space1/app.py b/spaces/L1211/New_space1/app.py deleted file mode 100644 index 259ca2394a705d1efff05832e2ea5a302a31f01d..0000000000000000000000000000000000000000 --- a/spaces/L1211/New_space1/app.py +++ /dev/null @@ -1,11 +0,0 @@ -import gradio as gr -from gradio.mix import Parallel - -title="My First Text Generator" -description="Input text." 
- -model1=gr.Interface.load("huggingface/EleutherAI/gpt-j-6B") -model2=gr.Interface.load("huggingface/gpt2") -model3=gr.Interface.load("huggingface/EleutherAI/gpt-neo-125M") - -gr.Parallel(model1 , model2 , model3 , title=title, description=description).launch() \ No newline at end of file diff --git a/spaces/Lamai/LAMAIGPT/autogpt/__init__.py b/spaces/Lamai/LAMAIGPT/autogpt/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/LaynzKunz/AI-Cover-Gen-Web-Ui/src/vc_infer_pipeline.py b/spaces/LaynzKunz/AI-Cover-Gen-Web-Ui/src/vc_infer_pipeline.py deleted file mode 100644 index 25f873e1e210879e085afd073306d796bf5114ea..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/AI-Cover-Gen-Web-Ui/src/vc_infer_pipeline.py +++ /dev/null @@ -1,653 +0,0 @@ -from functools import lru_cache -from time import time as ttime - -import faiss -import librosa -import numpy as np -import os -import parselmouth -import pyworld -import sys -import torch -import torch.nn.functional as F -import torchcrepe -import traceback -from scipy import signal -from torch import Tensor - -BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) -now_dir = os.path.join(BASE_DIR, 'src') -sys.path.append(now_dir) - -bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000) - -input_audio_path2wav = {} - - -@lru_cache -def cache_harvest_f0(input_audio_path, fs, f0max, f0min, frame_period): - audio = input_audio_path2wav[input_audio_path] - f0, t = pyworld.harvest( - audio, - fs=fs, - f0_ceil=f0max, - f0_floor=f0min, - frame_period=frame_period, - ) - f0 = pyworld.stonemask(audio, f0, t, fs) - return f0 - - -def change_rms(data1, sr1, data2, sr2, rate): # 1是输入音频,2是输出音频,rate是2的占比 - # print(data1.max(),data2.max()) - rms1 = librosa.feature.rms( - y=data1, frame_length=sr1 // 2 * 2, hop_length=sr1 // 2 - ) # 每半秒一个点 - rms2 = librosa.feature.rms(y=data2, frame_length=sr2 // 2 * 2, hop_length=sr2 // 2) - rms1 = torch.from_numpy(rms1) - rms1 = F.interpolate( - rms1.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.from_numpy(rms2) - rms2 = F.interpolate( - rms2.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.max(rms2, torch.zeros_like(rms2) + 1e-6) - data2 *= ( - torch.pow(rms1, torch.tensor(1 - rate)) - * torch.pow(rms2, torch.tensor(rate - 1)) - ).numpy() - return data2 - - -class VC(object): - def __init__(self, tgt_sr, config): - self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = ( - config.x_pad, - config.x_query, - config.x_center, - config.x_max, - config.is_half, - ) - self.sr = 16000 # hubert输入采样率 - self.window = 160 # 每帧点数 - self.t_pad = self.sr * self.x_pad # 每条前后pad时间 - self.t_pad_tgt = tgt_sr * self.x_pad - self.t_pad2 = self.t_pad * 2 - self.t_query = self.sr * self.x_query # 查询切点前后查询时间 - self.t_center = self.sr * self.x_center # 查询切点位置 - self.t_max = self.sr * self.x_max # 免查询时长阈值 - self.device = config.device - - # Fork Feature: Get the best torch device to use for f0 algorithms that require a torch device. Will return the type (torch.device) - def get_optimal_torch_device(self, index: int = 0) -> torch.device: - # Get cuda device - if torch.cuda.is_available(): - return torch.device( - f"cuda:{index % torch.cuda.device_count()}" - ) # Very fast - elif torch.backends.mps.is_available(): - return torch.device("mps") - # Insert an else here to grab "xla" devices if available. TO DO later. 
Requires the torch_xla.core.xla_model library - # Else wise return the "cpu" as a torch device, - return torch.device("cpu") - - # Fork Feature: Compute f0 with the crepe method - def get_f0_crepe_computation( - self, - x, - f0_min, - f0_max, - p_len, - hop_length=160, # 512 before. Hop length changes the speed that the voice jumps to a different dramatic pitch. Lower hop lengths means more pitch accuracy but longer inference time. - model="full", # Either use crepe-tiny "tiny" or crepe "full". Default is full - ): - x = x.astype( - np.float32 - ) # fixes the F.conv2D exception. We needed to convert double to float. - x /= np.quantile(np.abs(x), 0.999) - torch_device = self.get_optimal_torch_device() - audio = torch.from_numpy(x).to(torch_device, copy=True) - audio = torch.unsqueeze(audio, dim=0) - if audio.ndim == 2 and audio.shape[0] > 1: - audio = torch.mean(audio, dim=0, keepdim=True).detach() - audio = audio.detach() - print("Initiating prediction with a crepe_hop_length of: " + str(hop_length)) - pitch: Tensor = torchcrepe.predict( - audio, - self.sr, - hop_length, - f0_min, - f0_max, - model, - batch_size=hop_length * 2, - device=torch_device, - pad=True, - ) - p_len = p_len or x.shape[0] // hop_length - # Resize the pitch for final f0 - source = np.array(pitch.squeeze(0).cpu().float().numpy()) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * p_len, len(source)) / p_len, - np.arange(0, len(source)), - source, - ) - f0 = np.nan_to_num(target) - return f0 # Resized f0 - - def get_f0_official_crepe_computation( - self, - x, - f0_min, - f0_max, - model="full", - ): - # Pick a batch size that doesn't cause memory errors on your gpu - batch_size = 512 - # Compute pitch using first gpu - audio = torch.tensor(np.copy(x))[None].float() - f0, pd = torchcrepe.predict( - audio, - self.sr, - self.window, - f0_min, - f0_max, - model, - batch_size=batch_size, - device=self.device, - return_periodicity=True, - ) - pd = torchcrepe.filter.median(pd, 3) - f0 = torchcrepe.filter.mean(f0, 3) - f0[pd < 0.1] = 0 - f0 = f0[0].cpu().numpy() - return f0 - - # Fork Feature: Compute pYIN f0 method - def get_f0_pyin_computation(self, x, f0_min, f0_max): - y, sr = librosa.load("saudio/Sidney.wav", self.sr, mono=True) - f0, _, _ = librosa.pyin(y, sr=self.sr, fmin=f0_min, fmax=f0_max) - f0 = f0[1:] # Get rid of extra first frame - return f0 - - # Fork Feature: Acquire median hybrid f0 estimation calculation - def get_f0_hybrid_computation( - self, - methods_str, - input_audio_path, - x, - f0_min, - f0_max, - p_len, - filter_radius, - crepe_hop_length, - time_step, - ): - # Get various f0 methods from input to use in the computation stack - s = methods_str - s = s.split("hybrid")[1] - s = s.replace("[", "").replace("]", "") - methods = s.split("+") - f0_computation_stack = [] - - print("Calculating f0 pitch estimations for methods: %s" % str(methods)) - x = x.astype(np.float32) - x /= np.quantile(np.abs(x), 0.999) - # Get f0 calculations for all methods specified - for method in methods: - f0 = None - if method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif method == "crepe": - f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max) - f0 
= f0[1:] # Get rid of extra first frame - elif method == "crepe-tiny": - f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max, "tiny") - f0 = f0[1:] # Get rid of extra first frame - elif method == "mangio-crepe": - f0 = self.get_f0_crepe_computation( - x, f0_min, f0_max, p_len, crepe_hop_length - ) - elif method == "mangio-crepe-tiny": - f0 = self.get_f0_crepe_computation( - x, f0_min, f0_max, p_len, crepe_hop_length, "tiny" - ) - elif method == "harvest": - f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10) - if filter_radius > 2: - f0 = signal.medfilt(f0, 3) - f0 = f0[1:] # Get rid of first frame. - elif method == "dio": # Potentially buggy? - f0, t = pyworld.dio( - x.astype(np.double), - fs=self.sr, - f0_ceil=f0_max, - f0_floor=f0_min, - frame_period=10, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr) - f0 = signal.medfilt(f0, 3) - f0 = f0[1:] - # elif method == "pyin": Not Working just yet - # f0 = self.get_f0_pyin_computation(x, f0_min, f0_max) - # Push method to the stack - f0_computation_stack.append(f0) - - for fc in f0_computation_stack: - print(len(fc)) - - print("Calculating hybrid median f0 from the stack of: %s" % str(methods)) - f0_median_hybrid = None - if len(f0_computation_stack) == 1: - f0_median_hybrid = f0_computation_stack[0] - else: - f0_median_hybrid = np.nanmedian(f0_computation_stack, axis=0) - return f0_median_hybrid - - def get_f0( - self, - input_audio_path, - x, - p_len, - f0_up_key, - f0_method, - filter_radius, - crepe_hop_length, - inp_f0=None, - ): - global input_audio_path2wav - time_step = self.window / self.sr * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - if f0_method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif f0_method == "harvest": - input_audio_path2wav[input_audio_path] = x.astype(np.double) - f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10) - if filter_radius > 2: - f0 = signal.medfilt(f0, 3) - elif f0_method == "dio": # Potentially Buggy? 
- f0, t = pyworld.dio( - x.astype(np.double), - fs=self.sr, - f0_ceil=f0_max, - f0_floor=f0_min, - frame_period=10, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr) - f0 = signal.medfilt(f0, 3) - elif f0_method == "crepe": - f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max) - elif f0_method == "crepe-tiny": - f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max, "tiny") - elif f0_method == "mangio-crepe": - f0 = self.get_f0_crepe_computation( - x, f0_min, f0_max, p_len, crepe_hop_length - ) - elif f0_method == "mangio-crepe-tiny": - f0 = self.get_f0_crepe_computation( - x, f0_min, f0_max, p_len, crepe_hop_length, "tiny" - ) - elif f0_method == "rmvpe": - if hasattr(self, "model_rmvpe") == False: - from rmvpe import RMVPE - - self.model_rmvpe = RMVPE( - os.path.join(BASE_DIR, 'rvc_models', 'rmvpe.pt'), is_half=self.is_half, device=self.device - ) - f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03) - - elif "hybrid" in f0_method: - # Perform hybrid median pitch estimation - input_audio_path2wav[input_audio_path] = x.astype(np.double) - f0 = self.get_f0_hybrid_computation( - f0_method, - input_audio_path, - x, - f0_min, - f0_max, - p_len, - filter_radius, - crepe_hop_length, - time_step, - ) - - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0] - f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[ - :shape - ] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - - return f0_coarse, f0bak # 1-0 - - def vc( - self, - model, - net_g, - sid, - audio0, - pitch, - pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - ): # ,file_index,file_big_npy - feats = torch.from_numpy(audio0) - if self.is_half: - feats = feats.half() - else: - feats = feats.float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False) - - inputs = { - "source": feats.to(self.device), - "padding_mask": padding_mask, - "output_layer": 9 if version == "v1" else 12, - } - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) if version == "v1" else logits[0] - if protect < 0.5 and pitch != None and pitchf != None: - feats0 = feats.clone() - if ( - isinstance(index, type(None)) == False - and isinstance(big_npy, type(None)) == False - and index_rate != 0 - ): - npy = feats[0].cpu().numpy() - if self.is_half: - npy = npy.astype("float32") - - # _, I = index.search(npy, 1) - # npy = big_npy[I.squeeze()] - - score, ix = index.search(npy, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1) - - if self.is_half: - npy = npy.astype("float16") - feats = ( - 
torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate - + (1 - index_rate) * feats - ) - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - if protect < 0.5 and pitch != None and pitchf != None: - feats0 = F.interpolate(feats0.permute(0, 2, 1), scale_factor=2).permute( - 0, 2, 1 - ) - t1 = ttime() - p_len = audio0.shape[0] // self.window - if feats.shape[1] < p_len: - p_len = feats.shape[1] - if pitch != None and pitchf != None: - pitch = pitch[:, :p_len] - pitchf = pitchf[:, :p_len] - - if protect < 0.5 and pitch != None and pitchf != None: - pitchff = pitchf.clone() - pitchff[pitchf > 0] = 1 - pitchff[pitchf < 1] = protect - pitchff = pitchff.unsqueeze(-1) - feats = feats * pitchff + feats0 * (1 - pitchff) - feats = feats.to(feats0.dtype) - p_len = torch.tensor([p_len], device=self.device).long() - with torch.no_grad(): - if pitch != None and pitchf != None: - audio1 = ( - (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]) - .data.cpu() - .float() - .numpy() - ) - else: - audio1 = ( - (net_g.infer(feats, p_len, sid)[0][0, 0]).data.cpu().float().numpy() - ) - del feats, p_len, padding_mask - if torch.cuda.is_available(): - torch.cuda.empty_cache() - t2 = ttime() - times[0] += t1 - t0 - times[2] += t2 - t1 - return audio1 - - def pipeline( - self, - model, - net_g, - sid, - audio, - input_audio_path, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - crepe_hop_length, - f0_file=None, - ): - if ( - file_index != "" - # and file_big_npy != "" - # and os.path.exists(file_big_npy) == True - and os.path.exists(file_index) == True - and index_rate != 0 - ): - try: - index = faiss.read_index(file_index) - # big_npy = np.load(file_big_npy) - big_npy = index.reconstruct_n(0, index.ntotal) - except: - traceback.print_exc() - index = big_npy = None - else: - index = big_npy = None - audio = signal.filtfilt(bh, ah, audio) - audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect") - opt_ts = [] - if audio_pad.shape[0] > self.t_max: - audio_sum = np.zeros_like(audio) - for i in range(self.window): - audio_sum += audio_pad[i : i - self.window] - for t in range(self.t_center, audio.shape[0], self.t_center): - opt_ts.append( - t - - self.t_query - + np.where( - np.abs(audio_sum[t - self.t_query : t + self.t_query]) - == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min() - )[0][0] - ) - s = 0 - audio_opt = [] - t = None - t1 = ttime() - audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect") - p_len = audio_pad.shape[0] // self.window - inp_f0 = None - if hasattr(f0_file, "name") == True: - try: - with open(f0_file.name, "r") as f: - lines = f.read().strip("\n").split("\n") - inp_f0 = [] - for line in lines: - inp_f0.append([float(i) for i in line.split(",")]) - inp_f0 = np.array(inp_f0, dtype="float32") - except: - traceback.print_exc() - sid = torch.tensor(sid, device=self.device).unsqueeze(0).long() - pitch, pitchf = None, None - if if_f0 == 1: - pitch, pitchf = self.get_f0( - input_audio_path, - audio_pad, - p_len, - f0_up_key, - f0_method, - filter_radius, - crepe_hop_length, - inp_f0, - ) - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - if self.device == "mps": - pitchf = pitchf.astype(np.float32) - pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long() - pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float() - t2 = ttime() - times[1] += t2 - t1 - for t in 
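# --- Illustrative aside (not part of the original file): the index lookup above blends
# retrieved feature vectors into the extracted ones using inverse-squared-distance weights
# over the k=8 nearest neighbours. A numpy-only sketch with stand-in arrays in place of
# faiss (score/ix play the role of index.search's distances and ids):
import numpy as np

rng = np.random.default_rng(0)
big_npy = rng.standard_normal((100, 256)).astype(np.float32)   # stand-in for index.reconstruct_n(0, ntotal)
feats   = rng.standard_normal((10, 256)).astype(np.float32)    # frame-level features
score   = rng.random((10, 8)).astype(np.float32) + 1e-3        # stand-in distances
ix      = rng.integers(0, 100, size=(10, 8))                   # stand-in neighbour ids

weight = np.square(1 / score)                                  # closer neighbours weigh more
weight /= weight.sum(axis=1, keepdims=True)
retrieved = np.sum(big_npy[ix] * weight[..., None], axis=1)

index_rate = 0.75
blended = retrieved * index_rate + (1 - index_rate) * feats
assert blended.shape == feats.shape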
opt_ts: - t = t // self.window * self.window - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - pitch[:, s // self.window : (t + self.t_pad2) // self.window], - pitchf[:, s // self.window : (t + self.t_pad2) // self.window], - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - s = t - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - pitch[:, t // self.window :] if t is not None else pitch, - pitchf[:, t // self.window :] if t is not None else pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - audio_opt = np.concatenate(audio_opt) - if rms_mix_rate != 1: - audio_opt = change_rms(audio, 16000, audio_opt, tgt_sr, rms_mix_rate) - if resample_sr >= 16000 and tgt_sr != resample_sr: - audio_opt = librosa.resample( - audio_opt, orig_sr=tgt_sr, target_sr=resample_sr - ) - audio_max = np.abs(audio_opt).max() / 0.99 - max_int16 = 32768 - if audio_max > 1: - max_int16 /= audio_max - audio_opt = (audio_opt * max_int16).astype(np.int16) - del pitch, pitchf, sid - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return audio_opt diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/julius/utils.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/julius/utils.py deleted file mode 100644 index 944b973ad1a38700c1ba98ab7306c233cb87868d..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/uvr5_pack/julius/utils.py +++ /dev/null @@ -1,101 +0,0 @@ -# File under the MIT license, see https://github.com/adefossez/julius/LICENSE for details. -# Author: adefossez, 2020 -""" -Non signal processing related utilities. -""" - -import inspect -import typing as tp -import sys -import time - - -def simple_repr(obj, attrs: tp.Optional[tp.Sequence[str]] = None, - overrides: dict = {}): - """ - Return a simple representation string for `obj`. - If `attrs` is not None, it should be a list of attributes to include. - """ - params = inspect.signature(obj.__class__).parameters - attrs_repr = [] - if attrs is None: - attrs = list(params.keys()) - for attr in attrs: - display = False - if attr in overrides: - value = overrides[attr] - elif hasattr(obj, attr): - value = getattr(obj, attr) - else: - continue - if attr in params: - param = params[attr] - if param.default is inspect._empty or value != param.default: # type: ignore - display = True - else: - display = True - - if display: - attrs_repr.append(f"{attr}={value}") - return f"{obj.__class__.__name__}({','.join(attrs_repr)})" - - -class MarkdownTable: - """ - Simple MarkdownTable generator. The column titles should be large enough - for the lines content. This will right align everything. - - >>> import io # we use io purely for test purposes, default is sys.stdout. 
- >>> file = io.StringIO() - >>> table = MarkdownTable(["Item Name", "Price"], file=file) - >>> table.header(); table.line(["Honey", "5"]); table.line(["Car", "5,000"]) - >>> print(file.getvalue().strip()) # Strip for test purposes - | Item Name | Price | - |-----------|-------| - | Honey | 5 | - | Car | 5,000 | - """ - def __init__(self, columns, file=sys.stdout): - self.columns = columns - self.file = file - - def _writeln(self, line): - self.file.write("|" + "|".join(line) + "|\n") - - def header(self): - self._writeln(f" {col} " for col in self.columns) - self._writeln("-" * (len(col) + 2) for col in self.columns) - - def line(self, line): - out = [] - for val, col in zip(line, self.columns): - val = format(val, '>' + str(len(col))) - out.append(" " + val + " ") - self._writeln(out) - - -class Chrono: - """ - Measures ellapsed time, calling `torch.cuda.synchronize` if necessary. - `Chrono` instances can be used as context managers (e.g. with `with`). - Upon exit of the block, you can access the duration of the block in seconds - with the `duration` attribute. - - >>> with Chrono() as chrono: - ... _ = sum(range(10_000)) - ... - >>> print(chrono.duration < 10) # Should be true unless on a really slow computer. - True - """ - def __init__(self): - self.duration = None - - def __enter__(self): - self._begin = time.time() - return self - - def __exit__(self, exc_type, exc_value, exc_tracebck): - import torch - if torch.cuda.is_available(): - torch.cuda.synchronize() - self.duration = time.time() - self._begin diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/modules/train/train.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/modules/train/train.py deleted file mode 100644 index 3e47dd7471a248db88506c5a5400e3a790c1426a..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/modules/train/train.py +++ /dev/null @@ -1,723 +0,0 @@ -import os -import sys -import logging - -logger = logging.getLogger(__name__) - -now_dir = os.getcwd() -sys.path.append(os.path.join(now_dir)) - -import datetime - -from lib.infer.infer_libs.train import utils - -hps = utils.get_hparams() -os.environ["CUDA_VISIBLE_DEVICES"] = hps.gpus.replace("-", ",") -n_gpus = len(hps.gpus.split("-")) -from random import randint, shuffle - -import torch -try: - import intel_extension_for_pytorch as ipex # pylint: disable=import-error, unused-import - if torch.xpu.is_available(): - from lib.infer.modules.ipex import ipex_init - from lib.infer.modules.ipex.gradscaler import gradscaler_init - from torch.xpu.amp import autocast - GradScaler = gradscaler_init() - ipex_init() - else: - from torch.cuda.amp import GradScaler, autocast -except Exception: - from torch.cuda.amp import GradScaler, autocast - -torch.backends.cudnn.deterministic = False -torch.backends.cudnn.benchmark = False -from time import sleep -from time import time as ttime - -import torch.distributed as dist -import torch.multiprocessing as mp - -from torch.nn import functional as F -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter - -from lib.infer.infer_libs.infer_pack import commons -from lib.infer.infer_libs.train.data_utils import ( - DistributedBucketSampler, - TextAudioCollate, - TextAudioCollateMultiNSFsid, - TextAudioLoader, - TextAudioLoaderMultiNSFsid, -) - -if hps.version == "v1": - from lib.infer.infer_libs.infer_pack.models import MultiPeriodDiscriminator - from 
lib.infer.infer_libs.infer_pack.models import SynthesizerTrnMs256NSFsid as RVC_Model_f0 - from lib.infer.infer_libs.infer_pack.models import ( - SynthesizerTrnMs256NSFsid_nono as RVC_Model_nof0, - ) -else: - from lib.infer.infer_libs.infer_pack.models import ( - SynthesizerTrnMs768NSFsid as RVC_Model_f0, - SynthesizerTrnMs768NSFsid_nono as RVC_Model_nof0, - MultiPeriodDiscriminatorV2 as MultiPeriodDiscriminator, - ) - -from lib.infer.infer_libs.train.losses import ( - discriminator_loss, - feature_loss, - generator_loss, - kl_loss, -) -from lib.infer.infer_libs.train.mel_processing import mel_spectrogram_torch, spec_to_mel_torch -from lib.infer.infer_libs.train.process_ckpt import savee - -global_step = 0 -import csv - -class EpochRecorder: - def __init__(self): - self.last_time = ttime() - - def record(self): - now_time = ttime() - elapsed_time = now_time - self.last_time - self.last_time = now_time - elapsed_time_str = str(datetime.timedelta(seconds=elapsed_time)) - current_time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S") - return f"[{current_time}] | ({elapsed_time_str})" - -def reset_stop_flag(): - with open("lib/csvdb/stop.csv", "w+", newline="") as STOPCSVwrite: - csv_writer = csv.writer(STOPCSVwrite, delimiter=",") - csv_writer.writerow(["False"]) - -def create_model(hps, model_f0, model_nof0): - filter_length_adjusted = hps.data.filter_length // 2 + 1 - segment_size_adjusted = hps.train.segment_size // hps.data.hop_length - is_half = hps.train.fp16_run - sr = hps.sample_rate - - model = model_f0 if hps.if_f0 == 1 else model_nof0 - - return model( - filter_length_adjusted, - segment_size_adjusted, - **hps.model, - is_half=is_half, - sr=sr - ) - -def move_model_to_cuda_if_available(model, rank): - if torch.cuda.is_available(): - return model.cuda(rank) - else: - return model - -def create_optimizer(model, hps): - return torch.optim.AdamW( - model.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps, - ) - -def create_ddp_model(model, rank): - if torch.cuda.is_available(): - return DDP(model, device_ids=[rank]) - else: - return DDP(model) - -def create_dataset(hps, if_f0=True): - return TextAudioLoaderMultiNSFsid(hps.data.training_files, hps.data) if if_f0 else TextAudioLoader(hps.data.training_files, hps.data) - -def create_sampler(dataset, batch_size, n_gpus, rank): - return DistributedBucketSampler( - dataset, - batch_size * n_gpus, - # [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1200,1400], # 16s - [100, 200, 300, 400, 500, 600, 700, 800, 900], # 16s - num_replicas=n_gpus, - rank=rank, - shuffle=True, - ) - -def set_collate_fn(if_f0=True): - return TextAudioCollateMultiNSFsid() if if_f0 else TextAudioCollate() - - -def main(): - n_gpus = torch.cuda.device_count() - - if torch.cuda.is_available() == False and torch.backends.mps.is_available() == True: - n_gpus = 1 - if n_gpus < 1: - # patch to unblock people without gpus. there is probably a better way. 
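# --- Illustrative aside (not part of the original file): reset_stop_flag above writes a
# single boolean row to lib/csvdb/stop.csv, and train_and_evaluate later reads it back to
# decide whether to stop. A self-contained sketch of that round trip, using a temp path
# instead of the repository's csv location:
import csv, os, tempfile

flag_path = os.path.join(tempfile.gettempdir(), "stop.csv")

def write_stop_flag(value: bool):
    with open(flag_path, "w+", newline="") as f:
        csv.writer(f, delimiter=",").writerow([str(value)])

def read_stop_flag() -> bool:
    try:
        with open(flag_path, "r") as f:
            first = next(csv.reader(f), [None])[0]
            return (first or "").lower() == "true"
    except FileNotFoundError:
        return False

write_stop_flag(False)
assert read_stop_flag() is False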
- logger.warn("NO GPU DETECTED: falling back to CPU - this may take a while") - n_gpus = 1 - os.environ["MASTER_ADDR"] = "localhost" - os.environ["MASTER_PORT"] = str(randint(20000, 55555)) - children = [] - for i in range(n_gpus): - subproc = mp.Process( - target=run, - args=( - i, - n_gpus, - hps, - ), - ) - children.append(subproc) - subproc.start() - - for i in range(n_gpus): - children[i].join() - - -def run(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - # utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - - dist.init_process_group( - backend="gloo", init_method="env://", world_size=n_gpus, rank=rank - ) - torch.manual_seed(hps.train.seed) - if torch.cuda.is_available(): - torch.cuda.set_device(rank) - - if hps.if_f0 == 1: - train_dataset = TextAudioLoaderMultiNSFsid(hps.data.training_files, hps.data) - else: - train_dataset = TextAudioLoader(hps.data.training_files, hps.data) - train_sampler = DistributedBucketSampler( - train_dataset, - hps.train.batch_size * n_gpus, - # [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1200,1400], # 16s - [100, 200, 300, 400, 500, 600, 700, 800, 900], # 16s - num_replicas=n_gpus, - rank=rank, - shuffle=True, - ) - # It is possible that dataloader's workers are out of shared memory. Please try to raise your shared memory limit. - # num_workers=8 -> num_workers=4 - if hps.if_f0 == 1: - collate_fn = TextAudioCollateMultiNSFsid() - else: - collate_fn = TextAudioCollate() - train_loader = DataLoader( - train_dataset, - num_workers=4, - shuffle=False, - pin_memory=True, - collate_fn=collate_fn, - batch_sampler=train_sampler, - persistent_workers=True, - prefetch_factor=8, - ) - if hps.if_f0 == 1: - net_g = RVC_Model_f0( - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model, - is_half=hps.train.fp16_run, - sr=hps.sample_rate, - ) - else: - net_g = RVC_Model_nof0( - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model, - is_half=hps.train.fp16_run, - ) - if torch.cuda.is_available(): - net_g = net_g.cuda(rank) - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm) - if torch.cuda.is_available(): - net_d = net_d.cuda(rank) - optim_g = torch.optim.AdamW( - net_g.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps, - ) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps, - ) - # net_g = DDP(net_g, device_ids=[rank], find_unused_parameters=True) - # net_d = DDP(net_d, device_ids=[rank], find_unused_parameters=True) - if hasattr(torch, "xpu") and torch.xpu.is_available(): - pass - elif torch.cuda.is_available(): - net_g = DDP(net_g, device_ids=[rank]) - net_d = DDP(net_d, device_ids=[rank]) - else: - net_g = DDP(net_g) - net_d = DDP(net_d) - - try: # 如果能加载自动resume - _, _, _, epoch_str = utils.load_checkpoint( - utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, optim_d - ) # D多半加载没事 - if rank == 0: - logger.info("loaded D") - # _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, optim_g,load_opt=0) - _, _, _, epoch_str = utils.load_checkpoint( - utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, optim_g - ) - global_step = (epoch_str - 1) * len(train_loader) - # epoch_str = 1 - # global_step = 0 - 
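# --- Illustrative aside (not part of the original file): utils.latest_checkpoint_path is
# defined in lib.infer.infer_libs.train.utils and is not shown here; a rough equivalent of
# "pick the newest G_*.pth / D_*.pth by step number" might look like this:
import glob, os, re

def latest_checkpoint(model_dir, pattern="G_*.pth"):
    paths = glob.glob(os.path.join(model_dir, pattern))
    if not paths:
        return None
    # file names embed the global step, e.g. G_2333333.pth; take the largest one
    return max(paths, key=lambda p: int(re.findall(r"\d+", os.path.basename(p))[-1]))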
except: # 如果首次不能加载,加载pretrain - os.system('cls' if os.name == 'nt' else 'clear') - epoch_str = 1 - global_step = 0 - if hps.pretrainG != "": - if rank == 0: - logger.info("Loaded pretrained %s" % (hps.pretrainG)) - if hasattr(net_g, "module"): - logger.info( - net_g.module.load_state_dict( - torch.load(hps.pretrainG, map_location="cpu")["model"] - ) - ) ##测试不加载优化器 - else: - logger.info( - net_g.load_state_dict( - torch.load(hps.pretrainG, map_location="cpu")["model"] - ) - ) ##测试不加载优化器 - if hps.pretrainD != "": - if rank == 0: - logger.info("Loaded pretrained %s" % (hps.pretrainD)) - if hasattr(net_d, "module"): - logger.info( - net_d.module.load_state_dict( - torch.load(hps.pretrainD, map_location="cpu")["model"] - ) - ) - else: - logger.info( - net_d.load_state_dict( - torch.load(hps.pretrainD, map_location="cpu")["model"] - ) - ) - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR( - optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2 - ) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR( - optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2 - ) - - scaler = GradScaler(enabled=hps.train.fp16_run) - - cache = [] - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank == 0: - train_and_evaluate( - rank, - epoch, - hps, - [net_g, net_d], - [optim_g, optim_d], - [scheduler_g, scheduler_d], - scaler, - [train_loader, None], - logger, - [writer, writer_eval], - cache, - ) - else: - train_and_evaluate( - rank, - epoch, - hps, - [net_g, net_d], - [optim_g, optim_d], - [scheduler_g, scheduler_d], - scaler, - [train_loader, None], - None, - None, - cache, - ) - scheduler_g.step() - scheduler_d.step() - - -def train_and_evaluate( - rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers, cache -): - net_g, net_d = nets - optim_g, optim_d = optims - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - - # Prepare data iterator - if hps.if_cache_data_in_gpu == True: - # Use Cache - data_iterator = cache - if cache == []: - # Make new cache - for batch_idx, info in enumerate(train_loader): - # Unpack - if hps.if_f0 == 1: - ( - phone, - phone_lengths, - pitch, - pitchf, - spec, - spec_lengths, - wave, - wave_lengths, - sid, - ) = info - else: - ( - phone, - phone_lengths, - spec, - spec_lengths, - wave, - wave_lengths, - sid, - ) = info - # Load on CUDA - if torch.cuda.is_available(): - phone = phone.cuda(rank, non_blocking=True) - phone_lengths = phone_lengths.cuda(rank, non_blocking=True) - if hps.if_f0 == 1: - pitch = pitch.cuda(rank, non_blocking=True) - pitchf = pitchf.cuda(rank, non_blocking=True) - sid = sid.cuda(rank, non_blocking=True) - spec = spec.cuda(rank, non_blocking=True) - spec_lengths = spec_lengths.cuda(rank, non_blocking=True) - wave = wave.cuda(rank, non_blocking=True) - wave_lengths = wave_lengths.cuda(rank, non_blocking=True) - # Cache on list - if hps.if_f0 == 1: - cache.append( - ( - batch_idx, - ( - phone, - phone_lengths, - pitch, - pitchf, - spec, - spec_lengths, - wave, - wave_lengths, - sid, - ), - ) - ) - else: - cache.append( - ( - batch_idx, - ( - phone, - phone_lengths, - spec, - spec_lengths, - wave, - wave_lengths, - sid, - ), - ) - ) - else: - # Load shuffled cache - shuffle(cache) - else: - # Loader - data_iterator = enumerate(train_loader) - - # Run steps - epoch_recorder = EpochRecorder() - for batch_idx, info in data_iterator: - # Data - ## Unpack - if hps.if_f0 
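# --- Illustrative aside (not part of the original file): the pretrained-weight loading
# above repeatedly checks hasattr(net, "module") because DDP wraps the real model. A tiny
# sketch of that unwrap pattern on a plain Linear layer:
import torch

def unwrap(net):
    # DDP exposes the inner model as .module; checkpoints target the inner state dict
    return net.module if hasattr(net, "module") else net

lin = torch.nn.Linear(4, 4)
state = {"model": unwrap(lin).state_dict()}           # save side
print(unwrap(lin).load_state_dict(state["model"]))    # load side -> <All keys matched successfully>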
== 1: - ( - phone, - phone_lengths, - pitch, - pitchf, - spec, - spec_lengths, - wave, - wave_lengths, - sid, - ) = info - else: - phone, phone_lengths, spec, spec_lengths, wave, wave_lengths, sid = info - ## Load on CUDA - if (hps.if_cache_data_in_gpu == False) and torch.cuda.is_available(): - phone = phone.cuda(rank, non_blocking=True) - phone_lengths = phone_lengths.cuda(rank, non_blocking=True) - if hps.if_f0 == 1: - pitch = pitch.cuda(rank, non_blocking=True) - pitchf = pitchf.cuda(rank, non_blocking=True) - sid = sid.cuda(rank, non_blocking=True) - spec = spec.cuda(rank, non_blocking=True) - spec_lengths = spec_lengths.cuda(rank, non_blocking=True) - wave = wave.cuda(rank, non_blocking=True) - # wave_lengths = wave_lengths.cuda(rank, non_blocking=True) - - # Calculate - with autocast(enabled=hps.train.fp16_run): - if hps.if_f0 == 1: - ( - y_hat, - ids_slice, - x_mask, - z_mask, - (z, z_p, m_p, logs_p, m_q, logs_q), - ) = net_g(phone, phone_lengths, pitch, pitchf, spec, spec_lengths, sid) - else: - ( - y_hat, - ids_slice, - x_mask, - z_mask, - (z, z_p, m_p, logs_p, m_q, logs_q), - ) = net_g(phone, phone_lengths, spec, spec_lengths, sid) - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax, - ) - y_mel = commons.slice_segments( - mel, ids_slice, hps.train.segment_size // hps.data.hop_length - ) - with autocast(enabled=False): - y_hat_mel = mel_spectrogram_torch( - y_hat.float().squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax, - ) - if hps.train.fp16_run == True: - y_hat_mel = y_hat_mel.float() - wave = commons.slice_segments( - wave, ids_slice * hps.data.hop_length, hps.train.segment_size - ) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(wave, y_hat.detach()) - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss( - y_d_hat_r, y_d_hat_g - ) - optim_d.zero_grad() - scaler.scale(loss_disc).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(wave, y_hat) - with autocast(enabled=False): - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_kl - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank == 0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]["lr"] - logger.info( - "Train Epoch: {} [{:.0f}%]".format( - epoch, 100.0 * batch_idx / len(train_loader) - ) - ) - # Amor For Tensorboard display - if loss_mel > 75: - loss_mel = 75 - if loss_kl > 9: - loss_kl = 9 - - logger.info([global_step, lr]) - logger.info( - f"[loss_disc={loss_disc:.3f}] | [loss_gen={loss_gen:.3f}] | [loss_fm={loss_fm:.3f}] | [loss_mel={loss_mel:.3f}] | [loss_kl={loss_kl:.3f}]" - ) - scalar_dict = { - "loss/g/total": loss_gen_all, - "loss/d/total": loss_disc, - "learning_rate": lr, - "grad_norm_d": grad_norm_d, - "grad_norm_g": grad_norm_g, - } - 
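# --- Illustrative aside (not part of the original file): the discriminator/generator updates
# above follow the standard GradScaler recipe: scale -> backward -> unscale_ -> clip -> step
# -> update. A minimal single-step sketch on a toy model (commons.clip_grad_value_ is replaced
# here by torch's built-in clipping):
import torch
from torch.cuda.amp import GradScaler, autocast

device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"

model = torch.nn.Linear(16, 1).to(device)
optim = torch.optim.AdamW(model.parameters(), lr=1e-3)
scaler = GradScaler(enabled=use_amp)

x = torch.randn(8, 16, device=device)
y = torch.randn(8, 1, device=device)

with autocast(enabled=use_amp):
    loss = torch.nn.functional.mse_loss(model(x), y)

optim.zero_grad()
scaler.scale(loss).backward()   # scale the loss so fp16 gradients do not underflow
scaler.unscale_(optim)          # unscale before clipping so the threshold is in real units
torch.nn.utils.clip_grad_value_(model.parameters(), 1.0)
scaler.step(optim)              # skipped automatically if inf/nan gradients were detected
scaler.update()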
scalar_dict.update( - { - "loss/g/fm": loss_fm, - "loss/g/mel": loss_mel, - "loss/g/kl": loss_kl, - } - ) - - scalar_dict.update( - {"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)} - ) - scalar_dict.update( - {"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)} - ) - scalar_dict.update( - {"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)} - ) - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy( - y_mel[0].data.cpu().numpy() - ), - "slice/mel_gen": utils.plot_spectrogram_to_numpy( - y_hat_mel[0].data.cpu().numpy() - ), - "all/mel": utils.plot_spectrogram_to_numpy( - mel[0].data.cpu().numpy() - ), - } - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict, - ) - global_step += 1 - # /Run steps - - if epoch % hps.save_every_epoch == 0 and rank == 0: - if hps.if_latest == 0: - utils.save_checkpoint( - net_g, - optim_g, - hps.train.learning_rate, - epoch, - os.path.join(hps.model_dir, "G_{}.pth".format(global_step)), - ) - utils.save_checkpoint( - net_d, - optim_d, - hps.train.learning_rate, - epoch, - os.path.join(hps.model_dir, "D_{}.pth".format(global_step)), - ) - else: - utils.save_checkpoint( - net_g, - optim_g, - hps.train.learning_rate, - epoch, - os.path.join(hps.model_dir, "G_{}.pth".format(2333333)), - ) - utils.save_checkpoint( - net_d, - optim_d, - hps.train.learning_rate, - epoch, - os.path.join(hps.model_dir, "D_{}.pth".format(2333333)), - ) - if rank == 0 and hps.save_every_weights == "1": - if hasattr(net_g, "module"): - ckpt = net_g.module.state_dict() - else: - ckpt = net_g.state_dict() - logger.info( - "saving ckpt %s_e%s:%s" - % ( - hps.name, - epoch, - savee( - ckpt, - hps.sample_rate, - hps.if_f0, - hps.name + "_e%s_s%s" % (epoch, global_step), - epoch, - hps.version, - hps, - ), - ) - ) - - stopbtn = False - try: - with open("lib/csvdb/stop.csv", 'r') as csv_file: - stopbtn_str = next(csv.reader(csv_file), [None])[0] - if stopbtn_str is not None: stopbtn = stopbtn_str.lower() == 'true' - except (ValueError, TypeError, FileNotFoundError, IndexError) as e: - print(f"Handling exception: {e}") - stopbtn = False - - if stopbtn: - logger.info("Stop Button was pressed. The program is closed.") - ckpt = net_g.module.state_dict() if hasattr(net_g, "module") else net_g.state_dict() - logger.info( - "saving final ckpt:%s" - % ( - savee( - ckpt, hps.sample_rate, hps.if_f0, hps.name, epoch, hps.version, hps - ) - ) - ) - sleep(1) - reset_stop_flag() - os._exit(2333333) - - if rank == 0: - logger.info("Epoch: {} {}".format(epoch, epoch_recorder.record())) - if epoch >= hps.total_epoch and rank == 0: - logger.info("Training successfully completed, closing the program...") - - if hasattr(net_g, "module"): - ckpt = net_g.module.state_dict() - else: - ckpt = net_g.state_dict() - logger.info( - "Saving final ckpt... 
%s" - % ( - savee( - ckpt, hps.sample_rate, hps.if_f0, hps.name, epoch, hps.version, hps - ) - ) - ) - sleep(1) - os._exit(2333333) - - -if __name__ == "__main__": - torch.multiprocessing.set_start_method("spawn") - main() diff --git a/spaces/Legal-ease/legal-ease/base/constants.py b/spaces/Legal-ease/legal-ease/base/constants.py deleted file mode 100644 index a1b86540e62993e5fdf4e017e2b014552dda3fe9..0000000000000000000000000000000000000000 --- a/spaces/Legal-ease/legal-ease/base/constants.py +++ /dev/null @@ -1,26 +0,0 @@ -# name of cohere's summarization model -SUMMARIZATION_MODEL = "summarize-xlarge" - -# path of the csv file containing the example legal documents -EXAMPLES_FILE_PATH = "base/examples.csv" - -# whether to use multilingual embeddings to represent the documents or not -USE_MULTILINGUAL_EMBEDDING = True - -# name of cohere's multilingual embedding model -MULTILINGUAL_EMBEDDING_MODEL = "multilingual-22-12" - -# name of cohere's default embedding model -ENGLISH_EMBEDDING_MODEL = "large" - -# The name with which you want to create a collection in Qdrant -CREATE_QDRANT_COLLECTION_NAME = "covid19" - -# name of cohere's model which will be used for generating the translation of an input document -TEXT_GENERATION_MODEL = "command-xlarge-nightly" - -# whether the search results obtained via document search module should be translated into the language which was used by the user to type their `search query`. -TRANSLATE_BASED_ON_USER_QUERY = False - -# If you have multiple collections inside your Qdrant DB, make sure the value of this variable is set to the name of the collection on which you want to enable search. -SEARCH_QDRANT_COLLECTION_NAME = "covid19" diff --git a/spaces/Lewislou/Lewislou-cell-seg-sribd/stardist_pkg/models/model2d.py b/spaces/Lewislou/Lewislou-cell-seg-sribd/stardist_pkg/models/model2d.py deleted file mode 100644 index cc3e99c07e9e45b83511a98f0d0fc6f4b6a970ea..0000000000000000000000000000000000000000 --- a/spaces/Lewislou/Lewislou-cell-seg-sribd/stardist_pkg/models/model2d.py +++ /dev/null @@ -1,570 +0,0 @@ -from __future__ import print_function, unicode_literals, absolute_import, division - -import numpy as np -import warnings -import math -from tqdm import tqdm - -from csbdeep.models import BaseConfig -from csbdeep.internals.blocks import unet_block -from csbdeep.utils import _raise, backend_channels_last, axes_check_and_normalize, axes_dict -from csbdeep.utils.tf import keras_import, IS_TF_1, CARETensorBoard, CARETensorBoardImage -from skimage.segmentation import clear_border -from skimage.measure import regionprops -from scipy.ndimage import zoom -from distutils.version import LooseVersion - -keras = keras_import() -K = keras_import('backend') -Input, Conv2D, MaxPooling2D = keras_import('layers', 'Input', 'Conv2D', 'MaxPooling2D') -Model = keras_import('models', 'Model') - -from .base import StarDistBase, StarDistDataBase, _tf_version_at_least -from ..sample_patches import sample_patches -from ..utils import edt_prob, _normalize_grid, mask_to_categorical -from ..geometry import star_dist, dist_to_coord, polygons_to_label -from ..nms import non_maximum_suppression, non_maximum_suppression_sparse - - -class StarDistData2D(StarDistDataBase): - - def __init__(self, X, Y, batch_size, n_rays, length, - n_classes=None, classes=None, - patch_size=(256,256), b=32, grid=(1,1), shape_completion=False, augmenter=None, foreground_prob=0, **kwargs): - - super().__init__(X=X, Y=Y, n_rays=n_rays, grid=grid, - n_classes=n_classes, classes=classes, - batch_size=batch_size, 
patch_size=patch_size, length=length, - augmenter=augmenter, foreground_prob=foreground_prob, **kwargs) - - self.shape_completion = bool(shape_completion) - if self.shape_completion and b > 0: - self.b = slice(b,-b),slice(b,-b) - else: - self.b = slice(None),slice(None) - - self.sd_mode = 'opencl' if self.use_gpu else 'cpp' - - - def __getitem__(self, i): - idx = self.batch(i) - arrays = [sample_patches((self.Y[k],) + self.channels_as_tuple(self.X[k]), - patch_size=self.patch_size, n_samples=1, - valid_inds=self.get_valid_inds(k)) for k in idx] - - if self.n_channel is None: - X, Y = list(zip(*[(x[0][self.b],y[0]) for y,x in arrays])) - else: - X, Y = list(zip(*[(np.stack([_x[0] for _x in x],axis=-1)[self.b], y[0]) for y,*x in arrays])) - - X, Y = tuple(zip(*tuple(self.augmenter(_x, _y) for _x, _y in zip(X,Y)))) - - - prob = np.stack([edt_prob(lbl[self.b][self.ss_grid[1:3]]) for lbl in Y]) - # prob = np.stack([edt_prob(lbl[self.b]) for lbl in Y]) - # prob = prob[self.ss_grid] - - if self.shape_completion: - Y_cleared = [clear_border(lbl) for lbl in Y] - _dist = np.stack([star_dist(lbl,self.n_rays,mode=self.sd_mode)[self.b+(slice(None),)] for lbl in Y_cleared]) - dist = _dist[self.ss_grid] - dist_mask = np.stack([edt_prob(lbl[self.b][self.ss_grid[1:3]]) for lbl in Y_cleared]) - else: - # directly subsample with grid - dist = np.stack([star_dist(lbl,self.n_rays,mode=self.sd_mode, grid=self.grid) for lbl in Y]) - dist_mask = prob - - X = np.stack(X) - if X.ndim == 3: # input image has no channel axis - X = np.expand_dims(X,-1) - prob = np.expand_dims(prob,-1) - dist_mask = np.expand_dims(dist_mask,-1) - - # subsample wth given grid - # dist_mask = dist_mask[self.ss_grid] - # prob = prob[self.ss_grid] - - # append dist_mask to dist as additional channel - # dist_and_mask = np.concatenate([dist,dist_mask],axis=-1) - # faster than concatenate - dist_and_mask = np.empty(dist.shape[:-1]+(self.n_rays+1,), np.float32) - dist_and_mask[...,:-1] = dist - dist_and_mask[...,-1:] = dist_mask - - - if self.n_classes is None: - return [X], [prob,dist_and_mask] - else: - prob_class = np.stack(tuple((mask_to_categorical(y, self.n_classes, self.classes[k]) for y,k in zip(Y, idx)))) - - # TODO: investigate downsampling via simple indexing vs. using 'zoom' - # prob_class = prob_class[self.ss_grid] - # 'zoom' might lead to better registered maps (especially if upscaled later) - prob_class = zoom(prob_class, (1,)+tuple(1/g for g in self.grid)+(1,), order=0) - - return [X], [prob,dist_and_mask, prob_class] - - - -class Config2D(BaseConfig): - """Configuration for a :class:`StarDist2D` model. - - Parameters - ---------- - axes : str or None - Axes of the input images. - n_rays : int - Number of radial directions for the star-convex polygon. - Recommended to use a power of 2 (default: 32). - n_channel_in : int - Number of channels of given input image (default: 1). - grid : (int,int) - Subsampling factors (must be powers of 2) for each of the axes. - Model will predict on a subsampled grid for increased efficiency and larger field of view. - n_classes : None or int - Number of object classes to use for multi-class predection (use None to disable) - backbone : str - Name of the neural network architecture to be used as backbone. - kwargs : dict - Overwrite (or add) configuration attributes (see below). - - - Attributes - ---------- - unet_n_depth : int - Number of U-Net resolution levels (down/up-sampling layers). - unet_kernel_size : (int,int) - Convolution kernel size for all (U-Net) convolution layers. 
- unet_n_filter_base : int - Number of convolution kernels (feature channels) for first U-Net layer. - Doubled after each down-sampling layer. - unet_pool : (int,int) - Maxpooling size for all (U-Net) convolution layers. - net_conv_after_unet : int - Number of filters of the extra convolution layer after U-Net (0 to disable). - unet_* : * - Additional parameters for U-net backbone. - train_shape_completion : bool - Train model to predict complete shapes for partially visible objects at image boundary. - train_completion_crop : int - If 'train_shape_completion' is set to True, specify number of pixels to crop at boundary of training patches. - Should be chosen based on (largest) object sizes. - train_patch_size : (int,int) - Size of patches to be cropped from provided training images. - train_background_reg : float - Regularizer to encourage distance predictions on background regions to be 0. - train_foreground_only : float - Fraction (0..1) of patches that will only be sampled from regions that contain foreground pixels. - train_sample_cache : bool - Activate caching of valid patch regions for all training images (disable to save memory for large datasets) - train_dist_loss : str - Training loss for star-convex polygon distances ('mse' or 'mae'). - train_loss_weights : tuple of float - Weights for losses relating to (probability, distance) - train_epochs : int - Number of training epochs. - train_steps_per_epoch : int - Number of parameter update steps per epoch. - train_learning_rate : float - Learning rate for training. - train_batch_size : int - Batch size for training. - train_n_val_patches : int - Number of patches to be extracted from validation images (``None`` = one patch per image). - train_tensorboard : bool - Enable TensorBoard for monitoring training progress. - train_reduce_lr : dict - Parameter :class:`dict` of ReduceLROnPlateau_ callback; set to ``None`` to disable. - use_gpu : bool - Indicate that the data generator should use OpenCL to do computations on the GPU. - - .. _ReduceLROnPlateau: https://keras.io/api/callbacks/reduce_lr_on_plateau/ - """ - - def __init__(self, axes='YX', n_rays=32, n_channel_in=1, grid=(1,1), n_classes=None, backbone='unet', **kwargs): - """See class docstring.""" - - super().__init__(axes=axes, n_channel_in=n_channel_in, n_channel_out=1+n_rays) - - # directly set by parameters - self.n_rays = int(n_rays) - self.grid = _normalize_grid(grid,2) - self.backbone = str(backbone).lower() - self.n_classes = None if n_classes is None else int(n_classes) - - # default config (can be overwritten by kwargs below) - if self.backbone == 'unet': - self.unet_n_depth = 3 - self.unet_kernel_size = 3,3 - self.unet_n_filter_base = 32 - self.unet_n_conv_per_depth = 2 - self.unet_pool = 2,2 - self.unet_activation = 'relu' - self.unet_last_activation = 'relu' - self.unet_batch_norm = False - self.unet_dropout = 0.0 - self.unet_prefix = '' - self.net_conv_after_unet = 128 - else: - # TODO: resnet backbone for 2D model? - raise ValueError("backbone '%s' not supported." 
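# --- Illustrative usage note (not part of the original file): per the docstring above, any
# of the documented unet_* / train_* attributes can be overridden through **kwargs once the
# class is defined, assuming the csbdeep/stardist dependencies imported at the top of this
# module are installed:
conf = Config2D(n_rays=32, grid=(2, 2), n_channel_in=1,
                train_patch_size=(256, 256), train_batch_size=4)
print(conf.unet_n_depth, conf.train_loss_weights)   # untouched attributes keep their defaults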
% self.backbone) - - # net_mask_shape not needed but kept for legacy reasons - if backend_channels_last(): - self.net_input_shape = None,None,self.n_channel_in - self.net_mask_shape = None,None,1 - else: - self.net_input_shape = self.n_channel_in,None,None - self.net_mask_shape = 1,None,None - - self.train_shape_completion = False - self.train_completion_crop = 32 - self.train_patch_size = 256,256 - self.train_background_reg = 1e-4 - self.train_foreground_only = 0.9 - self.train_sample_cache = True - - self.train_dist_loss = 'mae' - self.train_loss_weights = (1,0.2) if self.n_classes is None else (1,0.2,1) - self.train_class_weights = (1,1) if self.n_classes is None else (1,)*(self.n_classes+1) - self.train_epochs = 400 - self.train_steps_per_epoch = 100 - self.train_learning_rate = 0.0003 - self.train_batch_size = 4 - self.train_n_val_patches = None - self.train_tensorboard = True - # the parameter 'min_delta' was called 'epsilon' for keras<=2.1.5 - min_delta_key = 'epsilon' if LooseVersion(keras.__version__)<=LooseVersion('2.1.5') else 'min_delta' - self.train_reduce_lr = {'factor': 0.5, 'patience': 40, min_delta_key: 0} - - self.use_gpu = False - - # remove derived attributes that shouldn't be overwritten - for k in ('n_dim', 'n_channel_out'): - try: del kwargs[k] - except KeyError: pass - - self.update_parameters(False, **kwargs) - - # FIXME: put into is_valid() - if not len(self.train_loss_weights) == (2 if self.n_classes is None else 3): - raise ValueError(f"train_loss_weights {self.train_loss_weights} not compatible with n_classes ({self.n_classes}): must be 3 weights if n_classes is not None, otherwise 2") - - if not len(self.train_class_weights) == (2 if self.n_classes is None else self.n_classes+1): - raise ValueError(f"train_class_weights {self.train_class_weights} not compatible with n_classes ({self.n_classes}): must be 'n_classes + 1' weights if n_classes is not None, otherwise 2") - - - -class StarDist2D(StarDistBase): - """StarDist2D model. - - Parameters - ---------- - config : :class:`Config` or None - Will be saved to disk as JSON (``config.json``). - If set to ``None``, will be loaded from disk (must exist). - name : str or None - Model name. Uses a timestamp if set to ``None`` (default). - basedir : str - Directory that contains (or will contain) a folder with the given model name. - - Raises - ------ - FileNotFoundError - If ``config=None`` and config cannot be loaded from disk. - ValueError - Illegal arguments, including invalid configuration. - - Attributes - ---------- - config : :class:`Config` - Configuration, as provided during instantiation. - keras_model : `Keras model `_ - Keras neural network model. - name : str - Model name. - logdir : :class:`pathlib.Path` - Path to model folder (which stores configuration, weights, etc.) 
- """ - - def __init__(self, config=Config2D(), name=None, basedir='.'): - """See class docstring.""" - super().__init__(config, name=name, basedir=basedir) - - - def _build(self): - self.config.backbone == 'unet' or _raise(NotImplementedError()) - unet_kwargs = {k[len('unet_'):]:v for (k,v) in vars(self.config).items() if k.startswith('unet_')} - - input_img = Input(self.config.net_input_shape, name='input') - - # maxpool input image to grid size - pooled = np.array([1,1]) - pooled_img = input_img - while tuple(pooled) != tuple(self.config.grid): - pool = 1 + (np.asarray(self.config.grid) > pooled) - pooled *= pool - for _ in range(self.config.unet_n_conv_per_depth): - pooled_img = Conv2D(self.config.unet_n_filter_base, self.config.unet_kernel_size, - padding='same', activation=self.config.unet_activation)(pooled_img) - pooled_img = MaxPooling2D(pool)(pooled_img) - - unet_base = unet_block(**unet_kwargs)(pooled_img) - - if self.config.net_conv_after_unet > 0: - unet = Conv2D(self.config.net_conv_after_unet, self.config.unet_kernel_size, - name='features', padding='same', activation=self.config.unet_activation)(unet_base) - else: - unet = unet_base - - output_prob = Conv2D( 1, (1,1), name='prob', padding='same', activation='sigmoid')(unet) - output_dist = Conv2D(self.config.n_rays, (1,1), name='dist', padding='same', activation='linear')(unet) - - # attach extra classification head when self.n_classes is given - if self._is_multiclass(): - if self.config.net_conv_after_unet > 0: - unet_class = Conv2D(self.config.net_conv_after_unet, self.config.unet_kernel_size, - name='features_class', padding='same', activation=self.config.unet_activation)(unet_base) - else: - unet_class = unet_base - - output_prob_class = Conv2D(self.config.n_classes+1, (1,1), name='prob_class', padding='same', activation='softmax')(unet_class) - return Model([input_img], [output_prob,output_dist,output_prob_class]) - else: - return Model([input_img], [output_prob,output_dist]) - - - def train(self, X, Y, validation_data, classes='auto', augmenter=None, seed=None, epochs=None, steps_per_epoch=None, workers=1): - """Train the neural network with the given data. - - Parameters - ---------- - X : tuple, list, `numpy.ndarray`, `keras.utils.Sequence` - Input images - Y : tuple, list, `numpy.ndarray`, `keras.utils.Sequence` - Label masks - classes (optional): 'auto' or iterable of same length as X - label id -> class id mapping for each label mask of Y if multiclass prediction is activated (n_classes > 0) - list of dicts with label id -> class id (1,...,n_classes) - 'auto' -> all objects will be assigned to the first non-background class, - or will be ignored if config.n_classes is None - validation_data : tuple(:class:`numpy.ndarray`, :class:`numpy.ndarray`) or triple (if multiclass) - Tuple (triple if multiclass) of X,Y,[classes] validation data. - augmenter : None or callable - Function with expected signature ``xt, yt = augmenter(x, y)`` - that takes in a single pair of input/label image (x,y) and returns - the transformed images (xt, yt) for the purpose of data augmentation - during training. Not applied to validation images. - Example: - def simple_augmenter(x,y): - x = x + 0.05*np.random.normal(0,1,x.shape) - return x,y - seed : int - Convenience to set ``np.random.seed(seed)``. (To obtain reproducible validation patches, etc.) - epochs : int - Optional argument to use instead of the value from ``config``. - steps_per_epoch : int - Optional argument to use instead of the value from ``config``. 
- - Returns - ------- - ``History`` object - See `Keras training history `_. - - """ - if seed is not None: - # https://keras.io/getting-started/faq/#how-can-i-obtain-reproducible-results-using-keras-during-development - np.random.seed(seed) - if epochs is None: - epochs = self.config.train_epochs - if steps_per_epoch is None: - steps_per_epoch = self.config.train_steps_per_epoch - - classes = self._parse_classes_arg(classes, len(X)) - - if not self._is_multiclass() and classes is not None: - warnings.warn("Ignoring given classes as n_classes is set to None") - - isinstance(validation_data,(list,tuple)) or _raise(ValueError()) - if self._is_multiclass() and len(validation_data) == 2: - validation_data = tuple(validation_data) + ('auto',) - ((len(validation_data) == (3 if self._is_multiclass() else 2)) - or _raise(ValueError(f'len(validation_data) = {len(validation_data)}, but should be {3 if self._is_multiclass() else 2}'))) - - patch_size = self.config.train_patch_size - axes = self.config.axes.replace('C','') - b = self.config.train_completion_crop if self.config.train_shape_completion else 0 - div_by = self._axes_div_by(axes) - [(p-2*b) % d == 0 or _raise(ValueError( - "'train_patch_size' - 2*'train_completion_crop' must be divisible by {d} along axis '{a}'".format(a=a,d=d) if self.config.train_shape_completion else - "'train_patch_size' must be divisible by {d} along axis '{a}'".format(a=a,d=d) - )) for p,d,a in zip(patch_size,div_by,axes)] - - if not self._model_prepared: - self.prepare_for_training() - - data_kwargs = dict ( - n_rays = self.config.n_rays, - patch_size = self.config.train_patch_size, - grid = self.config.grid, - shape_completion = self.config.train_shape_completion, - b = self.config.train_completion_crop, - use_gpu = self.config.use_gpu, - foreground_prob = self.config.train_foreground_only, - n_classes = self.config.n_classes, - sample_ind_cache = self.config.train_sample_cache, - ) - - # generate validation data and store in numpy arrays - n_data_val = len(validation_data[0]) - classes_val = self._parse_classes_arg(validation_data[2], n_data_val) if self._is_multiclass() else None - n_take = self.config.train_n_val_patches if self.config.train_n_val_patches is not None else n_data_val - _data_val = StarDistData2D(validation_data[0],validation_data[1], classes=classes_val, batch_size=n_take, length=1, **data_kwargs) - data_val = _data_val[0] - - # expose data generator as member for general diagnostics - self.data_train = StarDistData2D(X, Y, classes=classes, batch_size=self.config.train_batch_size, - augmenter=augmenter, length=epochs*steps_per_epoch, **data_kwargs) - - if self.config.train_tensorboard: - # show dist for three rays - _n = min(3, self.config.n_rays) - channel = axes_dict(self.config.axes)['C'] - output_slices = [[slice(None)]*4,[slice(None)]*4] - output_slices[1][1+channel] = slice(0,(self.config.n_rays//_n)*_n, self.config.n_rays//_n) - if self._is_multiclass(): - _n = min(3, self.config.n_classes) - output_slices += [[slice(None)]*4] - output_slices[2][1+channel] = slice(1,1+(self.config.n_classes//_n)*_n, self.config.n_classes//_n) - - if IS_TF_1: - for cb in self.callbacks: - if isinstance(cb,CARETensorBoard): - cb.output_slices = output_slices - # target image for dist includes dist_mask and thus has more channels than dist output - cb.output_target_shapes = [None,[None]*4,None] - cb.output_target_shapes[1][1+channel] = data_val[1][1].shape[1+channel] - elif self.basedir is not None and not any(isinstance(cb,CARETensorBoardImage) for cb in 
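# --- Illustrative aside (not part of the original file): one possible `augmenter` matching
# the signature documented in train() above (xt, yt = augmenter(x, y)); Gaussian noise on the
# image only, a random X-axis flip applied to image and label mask together:
import numpy as np

def simple_augmenter(x, y):
    x = x + 0.05 * np.random.normal(0, 1, x.shape)
    if np.random.rand() < 0.5:
        x, y = np.flip(x, axis=1), np.flip(y, axis=1)
    return x, y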
self.callbacks): - self.callbacks.append(CARETensorBoardImage(model=self.keras_model, data=data_val, log_dir=str(self.logdir/'logs'/'images'), - n_images=3, prob_out=False, output_slices=output_slices)) - - fit = self.keras_model.fit_generator if IS_TF_1 else self.keras_model.fit - history = fit(iter(self.data_train), validation_data=data_val, - epochs=epochs, steps_per_epoch=steps_per_epoch, - workers=workers, use_multiprocessing=workers>1, - callbacks=self.callbacks, verbose=1, - # set validation batchsize to training batchsize (only works for tf >= 2.2) - **(dict(validation_batch_size = self.config.train_batch_size) if _tf_version_at_least("2.2.0") else {})) - self._training_finished() - - return history - - - # def _instances_from_prediction_old(self, img_shape, prob, dist,points = None, prob_class = None, prob_thresh=None, nms_thresh=None, overlap_label = None, **nms_kwargs): - # from stardist.geometry.geom2d import _polygons_to_label_old, _dist_to_coord_old - # from stardist.nms import _non_maximum_suppression_old - - # if prob_thresh is None: prob_thresh = self.thresholds.prob - # if nms_thresh is None: nms_thresh = self.thresholds.nms - # if overlap_label is not None: raise NotImplementedError("overlap_label not supported for 2D yet!") - - # coord = _dist_to_coord_old(dist, grid=self.config.grid) - # inds = _non_maximum_suppression_old(coord, prob, grid=self.config.grid, - # prob_thresh=prob_thresh, nms_thresh=nms_thresh, **nms_kwargs) - # labels = _polygons_to_label_old(coord, prob, inds, shape=img_shape) - # # sort 'inds' such that ids in 'labels' map to entries in polygon dictionary entries - # inds = inds[np.argsort(prob[inds[:,0],inds[:,1]])] - # # adjust for grid - # points = inds*np.array(self.config.grid) - - # res_dict = dict(coord=coord[inds[:,0],inds[:,1]], points=points, prob=prob[inds[:,0],inds[:,1]]) - - # if prob_class is not None: - # prob_class = np.asarray(prob_class) - # res_dict.update(dict(class_prob = prob_class)) - - # return labels, res_dict - - - def _instances_from_prediction(self, img_shape, prob, dist, points=None, prob_class=None, prob_thresh=None, nms_thresh=None, overlap_label=None, return_labels=True, scale=None, **nms_kwargs): - """ - if points is None -> dense prediction - if points is not None -> sparse prediction - - if prob_class is None -> single class prediction - if prob_class is not None -> multi class prediction - """ - if prob_thresh is None: prob_thresh = self.thresholds.prob - if nms_thresh is None: nms_thresh = self.thresholds.nms - if overlap_label is not None: raise NotImplementedError("overlap_label not supported for 2D yet!") - - # sparse prediction - if points is not None: - points, probi, disti, indsi = non_maximum_suppression_sparse(dist, prob, points, nms_thresh=nms_thresh, **nms_kwargs) - if prob_class is not None: - prob_class = prob_class[indsi] - - # dense prediction - else: - points, probi, disti = non_maximum_suppression(dist, prob, grid=self.config.grid, - prob_thresh=prob_thresh, nms_thresh=nms_thresh, **nms_kwargs) - if prob_class is not None: - inds = tuple(p//g for p,g in zip(points.T, self.config.grid)) - prob_class = prob_class[inds] - - if scale is not None: - # need to undo the scaling given by the scale dict, e.g. scale = dict(X=0.5,Y=0.5): - # 1. re-scale points (origins of polygons) - # 2. 
re-scale coordinates (computed from distances) of (zero-origin) polygons - if not (isinstance(scale,dict) and 'X' in scale and 'Y' in scale): - raise ValueError("scale must be a dictionary with entries for 'X' and 'Y'") - rescale = (1/scale['Y'],1/scale['X']) - points = points * np.array(rescale).reshape(1,2) - else: - rescale = (1,1) - - if return_labels: - labels = polygons_to_label(disti, points, prob=probi, shape=img_shape, scale_dist=rescale) - else: - labels = None - - coord = dist_to_coord(disti, points, scale_dist=rescale) - res_dict = dict(coord=coord, points=points, prob=probi) - - # multi class prediction - if prob_class is not None: - prob_class = np.asarray(prob_class) - class_id = np.argmax(prob_class, axis=-1) - res_dict.update(dict(class_prob=prob_class, class_id=class_id)) - - return labels, res_dict - - - def _axes_div_by(self, query_axes): - self.config.backbone == 'unet' or _raise(NotImplementedError()) - query_axes = axes_check_and_normalize(query_axes) - assert len(self.config.unet_pool) == len(self.config.grid) - div_by = dict(zip( - self.config.axes.replace('C',''), - tuple(p**self.config.unet_n_depth * g for p,g in zip(self.config.unet_pool,self.config.grid)) - )) - return tuple(div_by.get(a,1) for a in query_axes) - - - # def _axes_tile_overlap(self, query_axes): - # self.config.backbone == 'unet' or _raise(NotImplementedError()) - # query_axes = axes_check_and_normalize(query_axes) - # assert len(self.config.unet_pool) == len(self.config.grid) == len(self.config.unet_kernel_size) - # # TODO: compute this properly when any value of grid > 1 - # # all(g==1 for g in self.config.grid) or warnings.warn('FIXME') - # overlap = dict(zip( - # self.config.axes.replace('C',''), - # tuple(tile_overlap(self.config.unet_n_depth + int(np.log2(g)), k, p) - # for p,k,g in zip(self.config.unet_pool,self.config.unet_kernel_size,self.config.grid)) - # )) - # return tuple(overlap.get(a,0) for a in query_axes) - - - @property - def _config_class(self): - return Config2D diff --git a/spaces/Lightxr/sd-diffusers-webui/modules/lora.py b/spaces/Lightxr/sd-diffusers-webui/modules/lora.py deleted file mode 100644 index 3b84192f4417e4b65fd3c63b61396591bd7bbc59..0000000000000000000000000000000000000000 --- a/spaces/Lightxr/sd-diffusers-webui/modules/lora.py +++ /dev/null @@ -1,183 +0,0 @@ -# LoRA network module -# reference: -# https://github.com/microsoft/LoRA/blob/main/loralib/layers.py -# https://github.com/cloneofsimo/lora/blob/master/lora_diffusion/lora.py -# https://github.com/bmaltais/kohya_ss/blob/master/networks/lora.py#L48 - -import math -import os -import torch -import modules.safe as _ -from safetensors.torch import load_file - - -class LoRAModule(torch.nn.Module): - """ - replaces forward method of the original Linear, instead of replacing the original Linear module. 
- """ - - def __init__( - self, - lora_name, - org_module: torch.nn.Module, - multiplier=1.0, - lora_dim=4, - alpha=1, - ): - """if alpha == 0 or None, alpha is rank (no scaling).""" - super().__init__() - self.lora_name = lora_name - self.lora_dim = lora_dim - - if org_module.__class__.__name__ == "Conv2d": - in_dim = org_module.in_channels - out_dim = org_module.out_channels - self.lora_down = torch.nn.Conv2d(in_dim, lora_dim, (1, 1), bias=False) - self.lora_up = torch.nn.Conv2d(lora_dim, out_dim, (1, 1), bias=False) - else: - in_dim = org_module.in_features - out_dim = org_module.out_features - self.lora_down = torch.nn.Linear(in_dim, lora_dim, bias=False) - self.lora_up = torch.nn.Linear(lora_dim, out_dim, bias=False) - - if type(alpha) == torch.Tensor: - alpha = alpha.detach().float().numpy() # without casting, bf16 causes error - - alpha = lora_dim if alpha is None or alpha == 0 else alpha - self.scale = alpha / self.lora_dim - self.register_buffer("alpha", torch.tensor(alpha)) # 定数として扱える - - # same as microsoft's - torch.nn.init.kaiming_uniform_(self.lora_down.weight, a=math.sqrt(5)) - torch.nn.init.zeros_(self.lora_up.weight) - - self.multiplier = multiplier - self.org_module = org_module # remove in applying - self.enable = False - - def resize(self, rank, alpha, multiplier): - self.alpha = torch.tensor(alpha) - self.multiplier = multiplier - self.scale = alpha / rank - if self.lora_down.__class__.__name__ == "Conv2d": - in_dim = self.lora_down.in_channels - out_dim = self.lora_up.out_channels - self.lora_down = torch.nn.Conv2d(in_dim, rank, (1, 1), bias=False) - self.lora_up = torch.nn.Conv2d(rank, out_dim, (1, 1), bias=False) - else: - in_dim = self.lora_down.in_features - out_dim = self.lora_up.out_features - self.lora_down = torch.nn.Linear(in_dim, rank, bias=False) - self.lora_up = torch.nn.Linear(rank, out_dim, bias=False) - - def apply(self): - if hasattr(self, "org_module"): - self.org_forward = self.org_module.forward - self.org_module.forward = self.forward - del self.org_module - - def forward(self, x): - if self.enable: - return ( - self.org_forward(x) - + self.lora_up(self.lora_down(x)) * self.multiplier * self.scale - ) - return self.org_forward(x) - - -class LoRANetwork(torch.nn.Module): - UNET_TARGET_REPLACE_MODULE = ["Transformer2DModel", "Attention"] - TEXT_ENCODER_TARGET_REPLACE_MODULE = ["CLIPAttention", "CLIPMLP"] - LORA_PREFIX_UNET = "lora_unet" - LORA_PREFIX_TEXT_ENCODER = "lora_te" - - def __init__(self, text_encoder, unet, multiplier=1.0, lora_dim=4, alpha=1) -> None: - super().__init__() - self.multiplier = multiplier - self.lora_dim = lora_dim - self.alpha = alpha - - # create module instances - def create_modules(prefix, root_module: torch.nn.Module, target_replace_modules): - loras = [] - for name, module in root_module.named_modules(): - if module.__class__.__name__ in target_replace_modules: - for child_name, child_module in module.named_modules(): - if child_module.__class__.__name__ == "Linear" or (child_module.__class__.__name__ == "Conv2d" and child_module.kernel_size == (1, 1)): - lora_name = prefix + "." + name + "." 
+ child_name - lora_name = lora_name.replace(".", "_") - lora = LoRAModule(lora_name, child_module, self.multiplier, self.lora_dim, self.alpha,) - loras.append(lora) - return loras - - if isinstance(text_encoder, list): - self.text_encoder_loras = text_encoder - else: - self.text_encoder_loras = create_modules(LoRANetwork.LORA_PREFIX_TEXT_ENCODER, text_encoder, LoRANetwork.TEXT_ENCODER_TARGET_REPLACE_MODULE) - print(f"Create LoRA for Text Encoder: {len(self.text_encoder_loras)} modules.") - - self.unet_loras = create_modules(LoRANetwork.LORA_PREFIX_UNET, unet, LoRANetwork.UNET_TARGET_REPLACE_MODULE) - print(f"Create LoRA for U-Net: {len(self.unet_loras)} modules.") - - self.weights_sd = None - - # assertion - names = set() - for lora in self.text_encoder_loras + self.unet_loras: - assert (lora.lora_name not in names), f"duplicated lora name: {lora.lora_name}" - names.add(lora.lora_name) - - lora.apply() - self.add_module(lora.lora_name, lora) - - def reset(self): - for lora in self.text_encoder_loras + self.unet_loras: - lora.enable = False - - def load(self, file, scale): - - weights = None - if os.path.splitext(file)[1] == ".safetensors": - weights = load_file(file) - else: - weights = torch.load(file, map_location="cpu") - - if not weights: - return - - network_alpha = None - network_dim = None - for key, value in weights.items(): - if network_alpha is None and "alpha" in key: - network_alpha = value - if network_dim is None and "lora_down" in key and len(value.size()) == 2: - network_dim = value.size()[0] - - if network_alpha is None: - network_alpha = network_dim - - weights_has_text_encoder = weights_has_unet = False - weights_to_modify = [] - - for key in weights.keys(): - if key.startswith(LoRANetwork.LORA_PREFIX_TEXT_ENCODER): - weights_has_text_encoder = True - - if key.startswith(LoRANetwork.LORA_PREFIX_UNET): - weights_has_unet = True - - if weights_has_text_encoder: - weights_to_modify += self.text_encoder_loras - - if weights_has_unet: - weights_to_modify += self.unet_loras - - for lora in self.text_encoder_loras + self.unet_loras: - lora.resize(network_dim, network_alpha, scale) - if lora in weights_to_modify: - lora.enable = True - - info = self.load_state_dict(weights, False) - if len(info.unexpected_keys) > 0: - print(f"Weights are loaded. Unexpected keys={info.unexpected_keys}") - \ No newline at end of file diff --git a/spaces/LittleYuan/My-Real-Bot/README.md b/spaces/LittleYuan/My-Real-Bot/README.md deleted file mode 100644 index 6639839eff504a79bc26a4525c93b52d0d4aa6f3..0000000000000000000000000000000000000000 --- a/spaces/LittleYuan/My-Real-Bot/README.md +++ /dev/null @@ -1,36 +0,0 @@ ---- -title: Real ESRGAN -emoji: 🏃 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.1.7 -app_file: app.py -pinned: false ---- - -### 这个是fork的`https://huggingface.co/spaces/akhaliq/Real-ESRGAN`,目的是搭建个人API来使用,因为我不是很确定这个平台是否收费,会不会对原作者产生经济上的损失。补充:原仓库采用BSD-3-Clause开源协议 - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. 
- -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/Lynx1221/rvc-test1/infer_pack/modules.py b/spaces/Lynx1221/rvc-test1/infer_pack/modules.py deleted file mode 100644 index 960481cedad9a6106f2bf0b9e86e82b120f7b33f..0000000000000000000000000000000000000000 --- a/spaces/Lynx1221/rvc-test1/infer_pack/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from infer_pack.transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = 
self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - 
kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, 
g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/MAEBA96/SUMMARISER96/README.md b/spaces/MAEBA96/SUMMARISER96/README.md deleted file mode 100644 index 242f541eeba0ce934b5ab63393302911acab44bd..0000000000000000000000000000000000000000 --- a/spaces/MAEBA96/SUMMARISER96/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: SUMMARISER96 -emoji: 🐢 -colorFrom: pink -colorTo: red -sdk: gradio -sdk_version: 3.41.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MB311/Wordle_Performance_Checker/README.md b/spaces/MB311/Wordle_Performance_Checker/README.md deleted file mode 100644 index 0561df56614f4c63e1a962e094cd4d42ba7e4934..0000000000000000000000000000000000000000 --- a/spaces/MB311/Wordle_Performance_Checker/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Wordle Performance Checker -emoji: 👀 -colorFrom: blue -colorTo: gray -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/parallel/_functions.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/parallel/_functions.py deleted file mode 100644 index 9b5a8a44483ab991411d07122b22a1d027e4be8e..0000000000000000000000000000000000000000 --- 
a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmcv/parallel/_functions.py +++ /dev/null @@ -1,79 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch.nn.parallel._functions import _get_stream - - -def scatter(input, devices, streams=None): - """Scatters tensor across multiple GPUs.""" - if streams is None: - streams = [None] * len(devices) - - if isinstance(input, list): - chunk_size = (len(input) - 1) // len(devices) + 1 - outputs = [ - scatter(input[i], [devices[i // chunk_size]], - [streams[i // chunk_size]]) for i in range(len(input)) - ] - return outputs - elif isinstance(input, torch.Tensor): - output = input.contiguous() - # TODO: copy to a pinned buffer first (if copying from CPU) - stream = streams[0] if output.numel() > 0 else None - if devices != [-1]: - with torch.cuda.device(devices[0]), torch.cuda.stream(stream): - output = output.cuda(devices[0], non_blocking=True) - else: - # unsqueeze the first dimension thus the tensor's shape is the - # same as those scattered with GPU. - output = output.unsqueeze(0) - return output - else: - raise Exception(f'Unknown type {type(input)}.') - - -def synchronize_stream(output, devices, streams): - if isinstance(output, list): - chunk_size = len(output) // len(devices) - for i in range(len(devices)): - for j in range(chunk_size): - synchronize_stream(output[i * chunk_size + j], [devices[i]], - [streams[i]]) - elif isinstance(output, torch.Tensor): - if output.numel() != 0: - with torch.cuda.device(devices[0]): - main_stream = torch.cuda.current_stream() - main_stream.wait_stream(streams[0]) - output.record_stream(main_stream) - else: - raise Exception(f'Unknown type {type(output)}.') - - -def get_input_device(input): - if isinstance(input, list): - for item in input: - input_device = get_input_device(item) - if input_device != -1: - return input_device - return -1 - elif isinstance(input, torch.Tensor): - return input.get_device() if input.is_cuda else -1 - else: - raise Exception(f'Unknown type {type(input)}.') - - -class Scatter: - - @staticmethod - def forward(target_gpus, input): - input_device = get_input_device(input) - streams = None - if input_device == -1 and target_gpus != [-1]: - # Perform CPU to GPU copies in a background stream - streams = [_get_stream(device) for device in target_gpus] - - outputs = scatter(input, target_gpus, streams) - # Synchronize with the copy stream - if streams is not None: - synchronize_stream(outputs, target_gpus, streams) - - return tuple(outputs) diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/image_degradation/bsrgan.py b/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/image_degradation/bsrgan.py deleted file mode 100644 index 32ef56169978e550090261cddbcf5eb611a6173b..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/ldm/modules/image_degradation/bsrgan.py +++ /dev/null @@ -1,730 +0,0 @@ -# -*- coding: utf-8 -*- -""" -# -------------------------------------------- -# Super-Resolution -# -------------------------------------------- -# -# Kai Zhang (cskaizhang@gmail.com) -# https://github.com/cszn -# From 2019/03--2021/08 -# -------------------------------------------- -""" - -import numpy as np -import cv2 -import torch - -from functools import partial -import random -from scipy import ndimage -import scipy -import scipy.stats as ss -from scipy.interpolate import interp2d -from scipy.linalg import orth -import albumentations - -import ldm.modules.image_degradation.utils_image as util - - -def modcrop_np(img, sf): - ''' - Args: - 
img: numpy image, WxH or WxHxC - sf: scale factor - Return: - cropped image - ''' - w, h = img.shape[:2] - im = np.copy(img) - return im[:w - w % sf, :h - h % sf, ...] - - -""" -# -------------------------------------------- -# anisotropic Gaussian kernels -# -------------------------------------------- -""" - - -def analytic_kernel(k): - """Calculate the X4 kernel from the X2 kernel (for proof see appendix in paper)""" - k_size = k.shape[0] - # Calculate the big kernels size - big_k = np.zeros((3 * k_size - 2, 3 * k_size - 2)) - # Loop over the small kernel to fill the big one - for r in range(k_size): - for c in range(k_size): - big_k[2 * r:2 * r + k_size, 2 * c:2 * c + k_size] += k[r, c] * k - # Crop the edges of the big kernel to ignore very small values and increase run time of SR - crop = k_size // 2 - cropped_big_k = big_k[crop:-crop, crop:-crop] - # Normalize to 1 - return cropped_big_k / cropped_big_k.sum() - - -def anisotropic_Gaussian(ksize=15, theta=np.pi, l1=6, l2=6): - """ generate an anisotropic Gaussian kernel - Args: - ksize : e.g., 15, kernel size - theta : [0, pi], rotation angle range - l1 : [0.1,50], scaling of eigenvalues - l2 : [0.1,l1], scaling of eigenvalues - If l1 = l2, will get an isotropic Gaussian kernel. - Returns: - k : kernel - """ - - v = np.dot(np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]), np.array([1., 0.])) - V = np.array([[v[0], v[1]], [v[1], -v[0]]]) - D = np.array([[l1, 0], [0, l2]]) - Sigma = np.dot(np.dot(V, D), np.linalg.inv(V)) - k = gm_blur_kernel(mean=[0, 0], cov=Sigma, size=ksize) - - return k - - -def gm_blur_kernel(mean, cov, size=15): - center = size / 2.0 + 0.5 - k = np.zeros([size, size]) - for y in range(size): - for x in range(size): - cy = y - center + 1 - cx = x - center + 1 - k[y, x] = ss.multivariate_normal.pdf([cx, cy], mean=mean, cov=cov) - - k = k / np.sum(k) - return k - - -def shift_pixel(x, sf, upper_left=True): - """shift pixel for super-resolution with different scale factors - Args: - x: WxHxC or WxH - sf: scale factor - upper_left: shift direction - """ - h, w = x.shape[:2] - shift = (sf - 1) * 0.5 - xv, yv = np.arange(0, w, 1.0), np.arange(0, h, 1.0) - if upper_left: - x1 = xv + shift - y1 = yv + shift - else: - x1 = xv - shift - y1 = yv - shift - - x1 = np.clip(x1, 0, w - 1) - y1 = np.clip(y1, 0, h - 1) - - if x.ndim == 2: - x = interp2d(xv, yv, x)(x1, y1) - if x.ndim == 3: - for i in range(x.shape[-1]): - x[:, :, i] = interp2d(xv, yv, x[:, :, i])(x1, y1) - - return x - - -def blur(x, k): - ''' - x: image, NxcxHxW - k: kernel, Nx1xhxw - ''' - n, c = x.shape[:2] - p1, p2 = (k.shape[-2] - 1) // 2, (k.shape[-1] - 1) // 2 - x = torch.nn.functional.pad(x, pad=(p1, p2, p1, p2), mode='replicate') - k = k.repeat(1, c, 1, 1) - k = k.view(-1, 1, k.shape[2], k.shape[3]) - x = x.view(1, -1, x.shape[2], x.shape[3]) - x = torch.nn.functional.conv2d(x, k, bias=None, stride=1, padding=0, groups=n * c) - x = x.view(n, c, x.shape[2], x.shape[3]) - - return x - - -def gen_kernel(k_size=np.array([15, 15]), scale_factor=np.array([4, 4]), min_var=0.6, max_var=10., noise_level=0): - """" - # modified version of https://github.com/assafshocher/BlindSR_dataset_generator - # Kai Zhang - # min_var = 0.175 * sf # variance of the gaussian kernel will be sampled between min_var and max_var - # max_var = 2.5 * sf - """ - # Set random eigen-vals (lambdas) and angle (theta) for COV matrix - lambda_1 = min_var + np.random.rand() * (max_var - min_var) - lambda_2 = min_var + np.random.rand() * (max_var - min_var) - theta = 
np.random.rand() * np.pi # random theta - noise = -noise_level + np.random.rand(*k_size) * noise_level * 2 - - # Set COV matrix using Lambdas and Theta - LAMBDA = np.diag([lambda_1, lambda_2]) - Q = np.array([[np.cos(theta), -np.sin(theta)], - [np.sin(theta), np.cos(theta)]]) - SIGMA = Q @ LAMBDA @ Q.T - INV_SIGMA = np.linalg.inv(SIGMA)[None, None, :, :] - - # Set expectation position (shifting kernel for aligned image) - MU = k_size // 2 - 0.5 * (scale_factor - 1) # - 0.5 * (scale_factor - k_size % 2) - MU = MU[None, None, :, None] - - # Create meshgrid for Gaussian - [X, Y] = np.meshgrid(range(k_size[0]), range(k_size[1])) - Z = np.stack([X, Y], 2)[:, :, :, None] - - # Calcualte Gaussian for every pixel of the kernel - ZZ = Z - MU - ZZ_t = ZZ.transpose(0, 1, 3, 2) - raw_kernel = np.exp(-0.5 * np.squeeze(ZZ_t @ INV_SIGMA @ ZZ)) * (1 + noise) - - # shift the kernel so it will be centered - # raw_kernel_centered = kernel_shift(raw_kernel, scale_factor) - - # Normalize the kernel and return - # kernel = raw_kernel_centered / np.sum(raw_kernel_centered) - kernel = raw_kernel / np.sum(raw_kernel) - return kernel - - -def fspecial_gaussian(hsize, sigma): - hsize = [hsize, hsize] - siz = [(hsize[0] - 1.0) / 2.0, (hsize[1] - 1.0) / 2.0] - std = sigma - [x, y] = np.meshgrid(np.arange(-siz[1], siz[1] + 1), np.arange(-siz[0], siz[0] + 1)) - arg = -(x * x + y * y) / (2 * std * std) - h = np.exp(arg) - h[h < scipy.finfo(float).eps * h.max()] = 0 - sumh = h.sum() - if sumh != 0: - h = h / sumh - return h - - -def fspecial_laplacian(alpha): - alpha = max([0, min([alpha, 1])]) - h1 = alpha / (alpha + 1) - h2 = (1 - alpha) / (alpha + 1) - h = [[h1, h2, h1], [h2, -4 / (alpha + 1), h2], [h1, h2, h1]] - h = np.array(h) - return h - - -def fspecial(filter_type, *args, **kwargs): - ''' - python code from: - https://github.com/ronaldosena/imagens-medicas-2/blob/40171a6c259edec7827a6693a93955de2bd39e76/Aulas/aula_2_-_uniform_filter/matlab_fspecial.py - ''' - if filter_type == 'gaussian': - return fspecial_gaussian(*args, **kwargs) - if filter_type == 'laplacian': - return fspecial_laplacian(*args, **kwargs) - - -""" -# -------------------------------------------- -# degradation models -# -------------------------------------------- -""" - - -def bicubic_degradation(x, sf=3): - ''' - Args: - x: HxWxC image, [0, 1] - sf: down-scale factor - Return: - bicubicly downsampled LR image - ''' - x = util.imresize_np(x, scale=1 / sf) - return x - - -def srmd_degradation(x, k, sf=3): - ''' blur + bicubic downsampling - Args: - x: HxWxC image, [0, 1] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - Reference: - @inproceedings{zhang2018learning, - title={Learning a single convolutional super-resolution network for multiple degradations}, - author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - pages={3262--3271}, - year={2018} - } - ''' - x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') # 'nearest' | 'mirror' - x = bicubic_degradation(x, sf=sf) - return x - - -def dpsr_degradation(x, k, sf=3): - ''' bicubic downsampling + blur - Args: - x: HxWxC image, [0, 1] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - Reference: - @inproceedings{zhang2019deep, - title={Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels}, - author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - pages={1671--1681}, - 
year={2019} - } - ''' - x = bicubic_degradation(x, sf=sf) - x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') - return x - - -def classical_degradation(x, k, sf=3): - ''' blur + downsampling - Args: - x: HxWxC image, [0, 1]/[0, 255] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - ''' - x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') - # x = filters.correlate(x, np.expand_dims(np.flip(k), axis=2)) - st = 0 - return x[st::sf, st::sf, ...] - - -def add_sharpening(img, weight=0.5, radius=50, threshold=10): - """USM sharpening. borrowed from real-ESRGAN - Input image: I; Blurry image: B. - 1. K = I + weight * (I - B) - 2. Mask = 1 if abs(I - B) > threshold, else: 0 - 3. Blur mask: - 4. Out = Mask * K + (1 - Mask) * I - Args: - img (Numpy array): Input image, HWC, BGR; float32, [0, 1]. - weight (float): Sharp weight. Default: 1. - radius (float): Kernel size of Gaussian blur. Default: 50. - threshold (int): - """ - if radius % 2 == 0: - radius += 1 - blur = cv2.GaussianBlur(img, (radius, radius), 0) - residual = img - blur - mask = np.abs(residual) * 255 > threshold - mask = mask.astype('float32') - soft_mask = cv2.GaussianBlur(mask, (radius, radius), 0) - - K = img + weight * residual - K = np.clip(K, 0, 1) - return soft_mask * K + (1 - soft_mask) * img - - -def add_blur(img, sf=4): - wd2 = 4.0 + sf - wd = 2.0 + 0.2 * sf - if random.random() < 0.5: - l1 = wd2 * random.random() - l2 = wd2 * random.random() - k = anisotropic_Gaussian(ksize=2 * random.randint(2, 11) + 3, theta=random.random() * np.pi, l1=l1, l2=l2) - else: - k = fspecial('gaussian', 2 * random.randint(2, 11) + 3, wd * random.random()) - img = ndimage.filters.convolve(img, np.expand_dims(k, axis=2), mode='mirror') - - return img - - -def add_resize(img, sf=4): - rnum = np.random.rand() - if rnum > 0.8: # up - sf1 = random.uniform(1, 2) - elif rnum < 0.7: # down - sf1 = random.uniform(0.5 / sf, 1) - else: - sf1 = 1.0 - img = cv2.resize(img, (int(sf1 * img.shape[1]), int(sf1 * img.shape[0])), interpolation=random.choice([1, 2, 3])) - img = np.clip(img, 0.0, 1.0) - - return img - - -# def add_Gaussian_noise(img, noise_level1=2, noise_level2=25): -# noise_level = random.randint(noise_level1, noise_level2) -# rnum = np.random.rand() -# if rnum > 0.6: # add color Gaussian noise -# img += np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) -# elif rnum < 0.4: # add grayscale Gaussian noise -# img += np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) -# else: # add noise -# L = noise_level2 / 255. -# D = np.diag(np.random.rand(3)) -# U = orth(np.random.rand(3, 3)) -# conv = np.dot(np.dot(np.transpose(U), D), U) -# img += np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) -# img = np.clip(img, 0.0, 1.0) -# return img - -def add_Gaussian_noise(img, noise_level1=2, noise_level2=25): - noise_level = random.randint(noise_level1, noise_level2) - rnum = np.random.rand() - if rnum > 0.6: # add color Gaussian noise - img = img + np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) - elif rnum < 0.4: # add grayscale Gaussian noise - img = img + np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) - else: # add noise - L = noise_level2 / 255. 
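# The next lines build U^T D U (random diagonal D, orthonormal U), a random positive semi-definite covariance; scaled by L**2 it drives one 3-vector sample per pixel, i.e. channel-correlated ("colored") Gaussian noise.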
- D = np.diag(np.random.rand(3)) - U = orth(np.random.rand(3, 3)) - conv = np.dot(np.dot(np.transpose(U), D), U) - img = img + np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) - img = np.clip(img, 0.0, 1.0) - return img - - -def add_speckle_noise(img, noise_level1=2, noise_level2=25): - noise_level = random.randint(noise_level1, noise_level2) - img = np.clip(img, 0.0, 1.0) - rnum = random.random() - if rnum > 0.6: - img += img * np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) - elif rnum < 0.4: - img += img * np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) - else: - L = noise_level2 / 255. - D = np.diag(np.random.rand(3)) - U = orth(np.random.rand(3, 3)) - conv = np.dot(np.dot(np.transpose(U), D), U) - img += img * np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) - img = np.clip(img, 0.0, 1.0) - return img - - -def add_Poisson_noise(img): - img = np.clip((img * 255.0).round(), 0, 255) / 255. - vals = 10 ** (2 * random.random() + 2.0) # [2, 4] - if random.random() < 0.5: - img = np.random.poisson(img * vals).astype(np.float32) / vals - else: - img_gray = np.dot(img[..., :3], [0.299, 0.587, 0.114]) - img_gray = np.clip((img_gray * 255.0).round(), 0, 255) / 255. - noise_gray = np.random.poisson(img_gray * vals).astype(np.float32) / vals - img_gray - img += noise_gray[:, :, np.newaxis] - img = np.clip(img, 0.0, 1.0) - return img - - -def add_JPEG_noise(img): - quality_factor = random.randint(30, 95) - img = cv2.cvtColor(util.single2uint(img), cv2.COLOR_RGB2BGR) - result, encimg = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), quality_factor]) - img = cv2.imdecode(encimg, 1) - img = cv2.cvtColor(util.uint2single(img), cv2.COLOR_BGR2RGB) - return img - - -def random_crop(lq, hq, sf=4, lq_patchsize=64): - h, w = lq.shape[:2] - rnd_h = random.randint(0, h - lq_patchsize) - rnd_w = random.randint(0, w - lq_patchsize) - lq = lq[rnd_h:rnd_h + lq_patchsize, rnd_w:rnd_w + lq_patchsize, :] - - rnd_h_H, rnd_w_H = int(rnd_h * sf), int(rnd_w * sf) - hq = hq[rnd_h_H:rnd_h_H + lq_patchsize * sf, rnd_w_H:rnd_w_H + lq_patchsize * sf, :] - return lq, hq - - -def degradation_bsrgan(img, sf=4, lq_patchsize=72, isp_model=None): - """ - This is the degradation model of BSRGAN from the paper - "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution" - ---------- - img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizexsf) - sf: scale factor - isp_model: camera ISP model - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25 - sf_ori = sf - - h1, w1 = img.shape[:2] - img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] 
# mod crop - h, w = img.shape[:2] - - if h < lq_patchsize * sf or w < lq_patchsize * sf: - raise ValueError(f'img size ({h1}X{w1}) is too small!') - - hq = img.copy() - - if sf == 4 and random.random() < scale2_prob: # downsample1 - if np.random.rand() < 0.5: - img = cv2.resize(img, (int(1 / 2 * img.shape[1]), int(1 / 2 * img.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - img = util.imresize_np(img, 1 / 2, True) - img = np.clip(img, 0.0, 1.0) - sf = 2 - - shuffle_order = random.sample(range(7), 7) - idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3) - if idx1 > idx2: # keep downsample3 last - shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1] - - for i in shuffle_order: - - if i == 0: - img = add_blur(img, sf=sf) - - elif i == 1: - img = add_blur(img, sf=sf) - - elif i == 2: - a, b = img.shape[1], img.shape[0] - # downsample2 - if random.random() < 0.75: - sf1 = random.uniform(1, 2 * sf) - img = cv2.resize(img, (int(1 / sf1 * img.shape[1]), int(1 / sf1 * img.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf)) - k_shifted = shift_pixel(k, sf) - k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel - img = ndimage.filters.convolve(img, np.expand_dims(k_shifted, axis=2), mode='mirror') - img = img[0::sf, 0::sf, ...] # nearest downsampling - img = np.clip(img, 0.0, 1.0) - - elif i == 3: - # downsample3 - img = cv2.resize(img, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3])) - img = np.clip(img, 0.0, 1.0) - - elif i == 4: - # add Gaussian noise - img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25) - - elif i == 5: - # add JPEG noise - if random.random() < jpeg_prob: - img = add_JPEG_noise(img) - - elif i == 6: - # add processed camera sensor noise - if random.random() < isp_prob and isp_model is not None: - with torch.no_grad(): - img, hq = isp_model.forward(img.copy(), hq) - - # add final JPEG compression noise - img = add_JPEG_noise(img) - - # random crop - img, hq = random_crop(img, hq, sf_ori, lq_patchsize) - - return img, hq - - -# todo no isp_model? -def degradation_bsrgan_variant(image, sf=4, isp_model=None): - """ - This is the degradation model of BSRGAN from the paper - "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution" - ---------- - sf: scale factor - isp_model: camera ISP model - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - image = util.uint2single(image) - isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25 - sf_ori = sf - - h1, w1 = image.shape[:2] - image = image.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] 
# mod crop - h, w = image.shape[:2] - - hq = image.copy() - - if sf == 4 and random.random() < scale2_prob: # downsample1 - if np.random.rand() < 0.5: - image = cv2.resize(image, (int(1 / 2 * image.shape[1]), int(1 / 2 * image.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - image = util.imresize_np(image, 1 / 2, True) - image = np.clip(image, 0.0, 1.0) - sf = 2 - - shuffle_order = random.sample(range(7), 7) - idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3) - if idx1 > idx2: # keep downsample3 last - shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1] - - for i in shuffle_order: - - if i == 0: - image = add_blur(image, sf=sf) - - elif i == 1: - image = add_blur(image, sf=sf) - - elif i == 2: - a, b = image.shape[1], image.shape[0] - # downsample2 - if random.random() < 0.75: - sf1 = random.uniform(1, 2 * sf) - image = cv2.resize(image, (int(1 / sf1 * image.shape[1]), int(1 / sf1 * image.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf)) - k_shifted = shift_pixel(k, sf) - k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel - image = ndimage.filters.convolve(image, np.expand_dims(k_shifted, axis=2), mode='mirror') - image = image[0::sf, 0::sf, ...] # nearest downsampling - image = np.clip(image, 0.0, 1.0) - - elif i == 3: - # downsample3 - image = cv2.resize(image, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3])) - image = np.clip(image, 0.0, 1.0) - - elif i == 4: - # add Gaussian noise - image = add_Gaussian_noise(image, noise_level1=2, noise_level2=25) - - elif i == 5: - # add JPEG noise - if random.random() < jpeg_prob: - image = add_JPEG_noise(image) - - # elif i == 6: - # # add processed camera sensor noise - # if random.random() < isp_prob and isp_model is not None: - # with torch.no_grad(): - # img, hq = isp_model.forward(img.copy(), hq) - - # add final JPEG compression noise - image = add_JPEG_noise(image) - image = util.single2uint(image) - example = {"image":image} - return example - - -# TODO incase there is a pickle error one needs to replace a += x with a = a + x in add_speckle_noise etc... -def degradation_bsrgan_plus(img, sf=4, shuffle_prob=0.5, use_sharp=True, lq_patchsize=64, isp_model=None): - """ - This is an extended degradation model by combining - the degradation models of BSRGAN and Real-ESRGAN - ---------- - img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizexsf) - sf: scale factor - use_shuffle: the degradation shuffle - use_sharp: sharpening the img - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - - h1, w1 = img.shape[:2] - img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] 
# mod crop - h, w = img.shape[:2] - - if h < lq_patchsize * sf or w < lq_patchsize * sf: - raise ValueError(f'img size ({h1}X{w1}) is too small!') - - if use_sharp: - img = add_sharpening(img) - hq = img.copy() - - if random.random() < shuffle_prob: - shuffle_order = random.sample(range(13), 13) - else: - shuffle_order = list(range(13)) - # local shuffle for noise, JPEG is always the last one - shuffle_order[2:6] = random.sample(shuffle_order[2:6], len(range(2, 6))) - shuffle_order[9:13] = random.sample(shuffle_order[9:13], len(range(9, 13))) - - poisson_prob, speckle_prob, isp_prob = 0.1, 0.1, 0.1 - - for i in shuffle_order: - if i == 0: - img = add_blur(img, sf=sf) - elif i == 1: - img = add_resize(img, sf=sf) - elif i == 2: - img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25) - elif i == 3: - if random.random() < poisson_prob: - img = add_Poisson_noise(img) - elif i == 4: - if random.random() < speckle_prob: - img = add_speckle_noise(img) - elif i == 5: - if random.random() < isp_prob and isp_model is not None: - with torch.no_grad(): - img, hq = isp_model.forward(img.copy(), hq) - elif i == 6: - img = add_JPEG_noise(img) - elif i == 7: - img = add_blur(img, sf=sf) - elif i == 8: - img = add_resize(img, sf=sf) - elif i == 9: - img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25) - elif i == 10: - if random.random() < poisson_prob: - img = add_Poisson_noise(img) - elif i == 11: - if random.random() < speckle_prob: - img = add_speckle_noise(img) - elif i == 12: - if random.random() < isp_prob and isp_model is not None: - with torch.no_grad(): - img, hq = isp_model.forward(img.copy(), hq) - else: - print('check the shuffle!') - - # resize to desired size - img = cv2.resize(img, (int(1 / sf * hq.shape[1]), int(1 / sf * hq.shape[0])), - interpolation=random.choice([1, 2, 3])) - - # add final JPEG compression noise - img = add_JPEG_noise(img) - - # random crop - img, hq = random_crop(img, hq, sf, lq_patchsize) - - return img, hq - - -if __name__ == '__main__': - print("hey") - img = util.imread_uint('utils/test.png', 3) - print(img) - img = util.uint2single(img) - print(img) - img = img[:448, :448] - h = img.shape[0] // 4 - print("resizing to", h) - sf = 4 - deg_fn = partial(degradation_bsrgan_variant, sf=sf) - for i in range(20): - print(i) - img_lq = deg_fn(img) - print(img_lq) - img_lq_bicubic = albumentations.SmallestMaxSize(max_size=h, interpolation=cv2.INTER_CUBIC)(image=img)["image"] - print(img_lq.shape) - print("bicubic", img_lq_bicubic.shape) - print(img_hq.shape) - lq_nearest = cv2.resize(util.single2uint(img_lq), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])), - interpolation=0) - lq_bicubic_nearest = cv2.resize(util.single2uint(img_lq_bicubic), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])), - interpolation=0) - img_concat = np.concatenate([lq_bicubic_nearest, lq_nearest, util.single2uint(img_hq)], axis=1) - util.imsave(img_concat, str(i) + '.png') - - diff --git a/spaces/MercurialAi/OncoMedleyMini/OncoMedley/tools/tumor_loc_x.py b/spaces/MercurialAi/OncoMedleyMini/OncoMedley/tools/tumor_loc_x.py deleted file mode 100644 index 366775e71a203a76000de08c82b18235443d99b5..0000000000000000000000000000000000000000 --- a/spaces/MercurialAi/OncoMedleyMini/OncoMedley/tools/tumor_loc_x.py +++ /dev/null @@ -1,84 +0,0 @@ -from pydantic import BaseModel, Field -from langchain.tools import BaseTool -from typing import Type -import os -import torch -from sklearn.preprocessing import MinMaxScaler - -from OncoMedley.src.import_DICOM import import_DICOM -from 
OncoMedley.src.four_dim_normalization import FourDimNormalization -from OncoMedley.GMIC.constants import PERCENT_T_DICT, TOP_K_DICT -from OncoMedley.GMIC.gmic3d import GMIC3D - -min_max_scaler = MinMaxScaler() - -class TumorLocXInput(BaseModel): - """Inputs for the prediction of the x-coordinates of a tumor within a recontructed patient volume""" - patient_id: int = Field(description="the ID of the patient") - -class TumorLocXTool(BaseTool): - name="Tumor_loc_x" - description="predicts the x-coordinate of the tumor for a breast cancer patient from their reconstructed MRI imagery, given their patient ID" - args_schema: Type[BaseModel] = TumorLocXInput - patient_dir = "data/patient_imagery" - model_path = "data/torch_image_only_model_state_tumor_loc_x.pth" - - def _run(self, patient_id: int) -> float: - - # 1. Load the corresponding volume for the patient ID provided - patients = os.listdir(self.patient_dir) - id_dir = "" - for p in patients: - if str(patient_id) in p: - id_dir = p - break - - full_dir = os.path.join(self.patient_dir, id_dir) - - p_volume = import_DICOM(full_dir) - p_volume = FourDimNormalization(p_volume) - p_volume = torch.from_numpy(p_volume) - p_volume = torch.reshape(p_volume, (1, p_volume.shape[-1], p_volume.shape[0], p_volume.shape[1], p_volume.shape[2])) - - # 2. Load and use the tumor X loc model - model_index = "1" - parameters = { - "device_type": "cpu", - "gpu_number": 2, - # model related hyper-parameters - "K": TOP_K_DICT[str(model_index)], - "percent_t": PERCENT_T_DICT[str(model_index)], - "crop_shape": (100, 100), - "post_processing_dim":256, - "num_classes":1, - "use_v1_global":False, - "half": False, - "norm_class": 'group', # GroupNorm in GlobalNetwork - "num_groups": 8, # GroupNorm in GlobalNetwork - "saliency_nonlinearity": 'tanh_relu', - } - GMIC_model = GMIC3D(parameters) - - state = torch.load(self.model_path, map_location='cpu') - - # remove module from state keys to transfer from DataParallel - stateKeys = list(state.keys()) - i = 0 - for key in stateKeys: - if 'module.' 
in key: - key = key.replace('module.', '') - stateKeys[i] = key - i = i + 1 - - state = dict(zip(stateKeys, list(state.values()))) - - GMIC_model.load_state_dict(state) - - with torch.no_grad(): - pred = float(GMIC_model(p_volume)) - - return pred - - async def _arun(self, patient_ID: int) -> float: - raise NotImplementedError("the x-coordinate tumor location tool does not support async") - \ No newline at end of file diff --git a/spaces/Mississippiexhib/theintuitiveye-HARDblend/README.md b/spaces/Mississippiexhib/theintuitiveye-HARDblend/README.md deleted file mode 100644 index 8b38b39a0b47b89bec1028ec45a0b6daf558ecc6..0000000000000000000000000000000000000000 --- a/spaces/Mississippiexhib/theintuitiveye-HARDblend/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Theintuitiveye HARDblend -emoji: 🏢 -colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/apps/__init__.py b/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/apps/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Monteg/anything-v3.0/utils.py b/spaces/Monteg/anything-v3.0/utils.py deleted file mode 100644 index ff1c065d186347ca51b47d010a697dbe1814695c..0000000000000000000000000000000000000000 --- a/spaces/Monteg/anything-v3.0/utils.py +++ /dev/null @@ -1,6 +0,0 @@ -def is_google_colab(): - try: - import google.colab - return True - except: - return False \ No newline at end of file diff --git a/spaces/Mozira/voice-models/infer_pack/commons.py b/spaces/Mozira/voice-models/infer_pack/commons.py deleted file mode 100644 index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000 --- a/spaces/Mozira/voice-models/infer_pack/commons.py +++ /dev/null @@ -1,166 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, 
segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/NCTCMumbai/NCTC/models/official/recommendation/README.md b/spaces/NCTCMumbai/NCTC/models/official/recommendation/README.md 
deleted file mode 100644 index 441bc128681c3189b53f7909b22c70fccf564414..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/recommendation/README.md +++ /dev/null @@ -1,72 +0,0 @@ -# Recommendation Model -## Overview -This is an implementation of the Neural Collaborative Filtering (NCF) framework with Neural Matrix Factorization (NeuMF) model as described in the [Neural Collaborative Filtering](https://arxiv.org/abs/1708.05031) paper. Current implementation is based on the code from the authors' [NCF code](https://github.com/hexiangnan/neural_collaborative_filtering) and the Stanford implementation in the [MLPerf Repo](https://github.com/mlperf/reference/tree/master/recommendation/pytorch). - -NCF is a general framework for collaborative filtering of recommendations in which a neural network architecture is used to model user-item interactions. Unlike traditional models, NCF does not resort to Matrix Factorization (MF) with an inner product on latent features of users and items. It replaces the inner product with a multi-layer perceptron that can learn an arbitrary function from data. - -Two instantiations of NCF are Generalized Matrix Factorization (GMF) and Multi-Layer Perceptron (MLP). GMF applies a linear kernel to model the latent feature interactions, and and MLP uses a nonlinear kernel to learn the interaction function from data. NeuMF is a fused model of GMF and MLP to better model the complex user-item interactions, and unifies the strengths of linearity of MF and non-linearity of MLP for modeling the user-item latent structures. NeuMF allows GMF and MLP to learn separate embeddings, and combines the two models by concatenating their last hidden layer. [neumf_model.py](neumf_model.py) defines the architecture details. - -Some abbreviations used the code base include: - - NCF: Neural Collaborative Filtering - - NeuMF: Neural Matrix Factorization - - GMF: Generalized Matrix Factorization - - MLP: Multi-Layer Perceptron - - HR: Hit Ratio (HR) - - NDCG: Normalized Discounted Cumulative Gain - - ml-1m: MovieLens 1 million dataset - - ml-20m: MovieLens 20 million dataset - -## Dataset -The [MovieLens datasets](http://files.grouplens.org/datasets/movielens/) are used for model training and evaluation. Specifically, we use two datasets: **ml-1m** (short for MovieLens 1 million) and **ml-20m** (short for MovieLens 20 million). - -### ml-1m -ml-1m dataset contains 1,000,209 anonymous ratings of approximately 3,706 movies made by 6,040 users who joined MovieLens in 2000. All ratings are contained in the file "ratings.dat" without header row, and are in the following format: -``` - UserID::MovieID::Rating::Timestamp -``` - - UserIDs range between 1 and 6040. - - MovieIDs range between 1 and 3952. - - Ratings are made on a 5-star scale (whole-star ratings only). - -### ml-20m -ml-20m dataset contains 20,000,263 ratings of 26,744 movies by 138493 users. All ratings are contained in the file "ratings.csv". Each line of this file after the header row represents one rating of one movie by one user, and has the following format: -``` -userId,movieId,rating,timestamp -``` - - The lines within this file are ordered first by userId, then, within user, by movieId. - - Ratings are made on a 5-star scale, with half-star increments (0.5 stars - 5.0 stars). - -In both datasets, the timestamp is represented in seconds since midnight Coordinated Universal Time (UTC) of January 1, 1970. Each user has at least 20 ratings. 
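The two ratings formats above differ only in delimiter and header row. As a rough, illustrative sketch (not the actual `movielens.py` logic; the column names, relative file paths, and use of pandas here are assumptions), coercing both into one common frame could look like this:
```
import pandas as pd

# Illustrative common schema; the real preprocessing may use different names.
COLUMNS = ["user_id", "item_id", "rating", "timestamp"]

def load_ml_1m(path="ml-1m/ratings.dat"):
    # ml-1m: "UserID::MovieID::Rating::Timestamp" with no header row.
    return pd.read_csv(path, sep="::", header=None, names=COLUMNS, engine="python")

def load_ml_20m(path="ml-20m/ratings.csv"):
    # ml-20m: "userId,movieId,rating,timestamp" with a header row.
    return pd.read_csv(path).rename(columns={"userId": "user_id", "movieId": "item_id"})

ratings = load_ml_1m()  # or load_ml_20m(); both yield the same four columns
print(ratings.sort_values(["user_id", "timestamp"]).head())
```
Either loader ends up with the same four columns, which is essentially the common format referred to in the next section.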
- -## Running Code - -### Download and preprocess dataset -To download the dataset, please install Pandas package first. Then issue the following command: -``` -python movielens.py -``` -Arguments: - * `--data_dir`: Directory where to download and save the preprocessed data. By default, it is `/tmp/movielens-data/`. - * `--dataset`: The dataset name to be downloaded and preprocessed. By default, it is `ml-1m`. - -Use the `--help` or `-h` flag to get a full list of possible arguments. - -Note the ml-20m dataset is large (the rating file is ~500 MB), and it may take several minutes (~2 mins) for data preprocessing. -Both the ml-1m and ml-20m datasets will be coerced into a common format when downloaded. - -### Train and evaluate model - -[ncf_keras_main.py](ncf_keras_main.py) is the Keras trainer that supports -features in TF 2.x. Users can train the model on both GPU and TPU. - -To train and evaluate the model, issue the following command: -``` -python ncf_keras_main.py -``` -Arguments: - * `--model_dir`: Directory to save model training checkpoints. By default, it is `/tmp/ncf/`. - * `--data_dir`: This should be set to the same directory given to the `data_download`'s `data_dir` argument. - * `--dataset`: The dataset name to be downloaded and preprocessed. By default, it is `ml-1m`. - * `--num_gpus`: The number of GPUs used for training/evaluation of the model. Use CPU if this flag is 0. By default, it is 1. - -There are other arguments about models and training process. Refer to the [Flags package](https://abseil.io/docs/python/guides/flags) documentation or use the `--helpfull` flag to get a full list of possible arguments with detailed descriptions. diff --git a/spaces/NCTCMumbai/NCTC/models/official/staging/training/controller.py b/spaces/NCTCMumbai/NCTC/models/official/staging/training/controller.py deleted file mode 100644 index a07be66329ad49ba07dff300d66f153552e1c78f..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/staging/training/controller.py +++ /dev/null @@ -1,337 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== -"""A light weight utilities to train TF2 models.""" - -from __future__ import absolute_import -from __future__ import division -# from __future__ import google_type_annotations -from __future__ import print_function - -import time - -from absl import logging - -import tensorflow.compat.v2 as tf -from typing import Callable, Dict, Optional, Text - -from official.staging.training import utils - - -class Controller(object): - """Class that facilitates training and evaluation of models.""" - - def __init__( - self, - strategy: Optional[tf.distribute.Strategy] = None, - train_fn: Optional[Callable[[tf.Tensor], - Optional[Dict[Text, tf.Tensor]]]] = None, - eval_fn: Optional[Callable[[tf.Tensor], - Optional[Dict[Text, tf.Tensor]]]] = None, - global_step: Optional[tf.Variable] = None, - # Train related - train_steps: Optional[int] = None, - steps_per_loop: Optional[int] = None, - summary_dir: Optional[Text] = None, - checkpoint_manager: Optional[tf.train.CheckpointManager] = None, - # summary related - summary_interval: Optional[int] = None, - # Evaluation related - eval_summary_dir: Optional[Text] = None, - eval_steps: Optional[int] = None, - eval_interval: Optional[int] = None): - """Constructs a `Controller` instance. - - Args: - strategy: An instance of `tf.distribute.Strategy`. - train_fn: A callable defined as `def train_fn(num_steps)`, which - `num_steps` indicates the number of steps to run for each loop. - eval_fn: A callable defined as `def eval_fn(num_steps)`, which `num_steps` - indicates the number of steps for one evaluation. - global_step: An integer `tf.Variable` indicating the global training step - number. Usually this can be obtained from `iterations` property of the - model's optimizer (e.g. `self.optimizer.iterations`), or users can - create their own global step variable as well. If the users create their - own global step variable, it is recommended to create the `tf.Variable` - inside strategy scope, and with - `aggregation=tf.VariableAggregation.ONLY_FIRST_REPLICA`. - train_steps: The total (maximum) number of training steps to perform. - steps_per_loop: The number of steps to run in each "inner loop" of - training (passed to the `num_steps` parameter of `train_fn`). - summary_dir: The directory to restore and write checkpoints and summaries. - If None, it will be set to `checkpoint_manager.directory`. - checkpoint_manager: An instance of `tf.train.CheckpointManager`. - summary_interval: Step interval for training summaries. Note that this - argument only applies to the summaries outside the training loop. If the - value is None, then training summaries are not enabled. - eval_summary_dir: The directory to write eval summaries. If None, it will - be set to `summary_dir`. - eval_steps: Number of steps to run evaluation. - eval_interval: Step interval for evaluation. If None, will skip evaluation - in the middle of training. Note that evaluation only happens outside the - training loop, which the loop iteration is specify by `steps_per_loop` - parameter. - - Raises: - ValueError: If both `train_fn` and `eval_fn` are None. - ValueError: If `train_fn` is not None and `train_steps` is None. - ValueError: If `steps_per_loop` is None when `train_fn` is provided. - ValueError: If `steps_per_loop` is not a positive integer. 
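    Example (an illustrative sketch, not part of this module: `train_step_fn`,
    `eval_step_fn`, and `optimizer` stand in for a user-defined per-loop
    training function, an evaluation function, and the model's optimizer):

      strategy = tf.distribute.MirroredStrategy()
      controller = Controller(
          strategy=strategy,
          train_fn=train_step_fn,
          eval_fn=eval_step_fn,
          global_step=optimizer.iterations,
          train_steps=10000,
          steps_per_loop=100,
          summary_dir="/tmp/training_summaries",
          eval_steps=50,
          eval_interval=1000)
      controller.train(evaluate=True)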
- """ - if train_fn is None and eval_fn is None: - raise ValueError("`train_fn` and `eval_fn` should not both be None") - - # TODO(rxsang): Support training until exhaustion by passing - # `train_steps=-1`. Currently it cannot be supported with a host training - # loop because break statements are not supported with distributed dataset. - if train_fn is not None: - if train_steps is None: - raise ValueError("`train_steps` is required when `train_fn` is " - "provided.") - if steps_per_loop is None: - raise ValueError("`steps_per_loop` is required when `train_fn is " - "provided.") - if not isinstance(steps_per_loop, int) or steps_per_loop < 1: - raise ValueError("`steps_per_loop` should be a positive integer") - if summary_interval is not None and summary_interval <= 0: - raise ValueError("`summary_interval` should be larger than 0") - - self.strategy = strategy or tf.distribute.get_strategy() - - self.train_fn = train_fn - self.eval_fn = eval_fn - self.global_step = global_step - self.checkpoint_manager = checkpoint_manager - - if self.train_fn is not None: - self.train_steps = train_steps - self.steps_per_loop = steps_per_loop - if summary_dir: - self.summary_dir = summary_dir - elif checkpoint_manager: - self.summary_dir = checkpoint_manager.directory - else: - self.summary_dir = None - - self.summary_interval = summary_interval - if self.summary_dir and self.summary_interval: - summary_writer = tf.summary.create_file_writer(self.summary_dir) - else: - summary_writer = None - # TODO(rxsang): Consider pass SummaryManager directly into Controller for - # maximum customizability. - self.summary_manager = utils.SummaryManager( - summary_writer, - tf.summary.scalar, - global_step=self.global_step, - summary_interval=self.summary_interval) - - if self.eval_fn is not None: - eval_summary_dir = eval_summary_dir or self.summary_dir - eval_summary_writer = tf.summary.create_file_writer( - eval_summary_dir) if eval_summary_dir else None - self.eval_summary_manager = utils.SummaryManager( - eval_summary_writer, tf.summary.scalar, global_step=self.global_step) - - self.eval_steps = eval_steps - self.eval_interval = eval_interval - - # Creates and initializes the interval triggers. - self.eval_trigger = utils.IntervalTrigger(self.eval_interval, - self.global_step.numpy()) # pytype: disable=attribute-error - - if self.global_step: - tf.summary.experimental.set_step(self.global_step) - - # Restores the model if needed. - if self.checkpoint_manager is not None: - model_restored = self._restore_model() - if not model_restored and self.checkpoint_manager.checkpoint_interval: - # If the model is not restored from a checkpoint, save an initial - # checkpoint. - ckpt_path = self.checkpoint_manager.save( - checkpoint_number=self.global_step) - logging.info("Saved checkpoins in %s", ckpt_path) - - def _restore_model(self, checkpoint_path=None): - """Restore or initialize the model. - - Args: - checkpoint_path: An optional string indicates the checkpoint path to - restore. If None, will restore from `self.checkpoint_manager`. - - Returns: - True if the latest checkpoint is found or restored. Otherwise False. - """ - with self.strategy.scope(): - # Checkpoint restoring should be inside scope. 
b/139450638 - if checkpoint_path is not None: - self.checkpoint_manager.checkpoint.restore(checkpoint_path) - return True - return self.checkpoint_manager.restore_or_initialize() - - def _evaluate_once(self, current_step): - """Runs the evaluation once.""" - logging.info("Start evaluation at step: %s", current_step) - - with self.eval_summary_manager.summary_writer.as_default(): - eval_outputs = self.eval_fn(self.eval_steps) - - if eval_outputs: - eval_outputs = tf.nest.map_structure(lambda x: x.numpy(), eval_outputs) - - info = "step: {} evaluation metric: {}".format( - current_step, eval_outputs) - self._log_info(info) - - self.eval_summary_manager.write_summaries(eval_outputs) - self.eval_summary_manager.flush() - - def _maybe_save_checkpoints(self, current_step, force_trigger=False): - if self.checkpoint_manager and self.checkpoint_manager.checkpoint_interval: - ckpt_path = self.checkpoint_manager.save( - checkpoint_number=current_step, check_interval=not force_trigger) - if ckpt_path is not None: - logging.info("Saved checkpoins in %s", ckpt_path) - - def _maybe_evaluate(self, current_step, force_trigger=False): - if self.eval_trigger(current_step, force_trigger): - self._evaluate_once(current_step) - - def _log_info(self, message): - """Logs `message` to the `info` log, and also prints to stdout.""" - logging.info(message) - print(message) - - def train(self, evaluate=True): - """Runs the training, with optional evaluation. - - This handles evaluation, gathering summaries, and saving checkpoints. - - Args: - evaluate: A boolean indicates whether to perform evaluation during - training. - - Raises: - RuntimeError: If `global_step` is not updated correctly in `train_fn`. - """ - if self.train_fn is None: - raise ValueError("`self.train_fn` is required when calling `train` " - "method.") - if self.global_step is None: - raise ValueError("`self.global_step` is required when calling `train` " - "method.") - if evaluate and self.eval_fn is None: - raise ValueError("`self.eval_fn` is required when calling `train` method " - "with `evaluate=True`") - - step_timer = _StepTimer(self.global_step) - current_step = self.global_step.numpy() - logging.info("Train at step %s of %s", current_step, self.train_steps) - while current_step < self.train_steps: - # Calculates steps to run for the next train loop. - steps_per_loop = min(self.train_steps - current_step, self.steps_per_loop) - logging.info("Entering training loop with %s steps, at step %s of %s", - steps_per_loop, current_step, self.train_steps) - current_step += steps_per_loop - steps_per_loop = tf.convert_to_tensor(steps_per_loop, dtype=tf.int32) - - with self.summary_manager.summary_writer.as_default(): - train_outputs = self.train_fn(steps_per_loop) - - # Updates and verifies the current step after a training loop finishes. - if current_step != self.global_step.numpy(): - raise RuntimeError("`self.train_fn` is not updating `global_step` " - "correctly, expected: %s, actual: %s" % - (current_step, self.global_step.numpy())) - - # Print information like metrics and steps_per_second after a training - # loop. 
- if train_outputs: - train_outputs = tf.nest.map_structure( - lambda x: x.numpy(), train_outputs) - steps_per_second = step_timer.steps_per_second() - info = "step: {} steps_per_second: {:.2f} {}".format( - current_step, steps_per_second, train_outputs) - self._log_info(info) - - train_outputs = train_outputs or {} - train_outputs["steps_per_second"] = steps_per_second - self.summary_manager.write_summaries(train_outputs) - - self._maybe_save_checkpoints(current_step) - - if evaluate: - self._maybe_evaluate(current_step) - - self.summary_manager.write_summaries(train_outputs, always_write=True) - self.summary_manager.flush() - self._maybe_save_checkpoints(current_step, force_trigger=True) - if evaluate: - self._maybe_evaluate(current_step, force_trigger=True) - - def evaluate(self, continuous=False, timeout_fn=None): - """Runs the evaluation. - - Args: - continuous: If `True`, will continously monitor the checkpoint directory - to evaluate on the latest checkpoint. If `False`, will do the evaluation - once. - timeout_fn: Optional callable to call after a timeout. If the function - returns True, then it means that no new checkpoints will be generated - and the iterator will exit. - - Raises: - ValueError: If no checkpoint found in `self.checkpoint_manager.directory`. - """ - if self.eval_fn is None: - raise ValueError("`self.eval_fn` should not be None to call " - "`evaluate()` method.") - - if not continuous and timeout_fn is not None: - raise ValueError("`timeout_fn` can be only passed when `continuous` is " - "True") - - if continuous: - for checkpoint_path in tf.train.checkpoints_iterator( - self.checkpoint_manager.directory, timeout_fn=timeout_fn): - self._restore_model(checkpoint_path) - self._evaluate_once(self.global_step.numpy()) - return - - latest_checkpoint = self.checkpoint_manager.latest_checkpoint - if not latest_checkpoint: - raise ValueError("no checkpoint found in dir %s" % - self.checkpoint_manager.directory) - self._restore_model() - self._evaluate_once(self.global_step.numpy()) - - -class _StepTimer(object): - """Utility class for measuring steps/second.""" - - def __init__(self, step): - self.step = step - self.start() - - def start(self): - self.last_iteration = self.step.numpy() - self.last_time = time.time() - - def steps_per_second(self, restart=True): - value = ((self.step.numpy() - self.last_iteration) / - (time.time() - self.last_time)) - if restart: - self.start() - return value diff --git a/spaces/NMEX/rvc-hoyogame-v2/lib/infer_pack/modules/F0Predictor/__init__.py b/spaces/NMEX/rvc-hoyogame-v2/lib/infer_pack/modules/F0Predictor/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/NN520/AI/src/components/theme-toggle.tsx b/spaces/NN520/AI/src/components/theme-toggle.tsx deleted file mode 100644 index 67d3f1a2c163ccbeb52c40a7e42f107190237154..0000000000000000000000000000000000000000 --- a/spaces/NN520/AI/src/components/theme-toggle.tsx +++ /dev/null @@ -1,31 +0,0 @@ -'use client' - -import * as React from 'react' -import { useTheme } from 'next-themes' - -import { Button } from '@/components/ui/button' -import { IconMoon, IconSun } from '@/components/ui/icons' - -export function ThemeToggle() { - const { setTheme, theme } = useTheme() - const [_, startTransition] = React.useTransition() - - return ( - - ) -} diff --git a/spaces/Nikhil0987/omm/app.py b/spaces/Nikhil0987/omm/app.py deleted file mode 100644 index 
4ef1233bf555d0f4d8ad2155ea375edfe09759c6..0000000000000000000000000000000000000000 --- a/spaces/Nikhil0987/omm/app.py +++ /dev/null @@ -1,106 +0,0 @@ -import streamlit as st -from home import dashboard -from streamlit_option_menu import option_menu -import pymongo -# from dotenv import load_dotenv -import os -import re - - -# load_dotenv() - -from pymongo.mongo_client import MongoClient - -# uri = os.environ["MONGO_CONNECTION_STRING"] -uri = "mongodb+srv://cluster0.j2p0gjo.mongodb.net/?authSource=%24external&authMechanism=MONGODB-X509&retryWrites=true&w=majority" - -# Create a new client and connect to the server -client = MongoClient(uri, tlsCertificateKeyFile="cert.pem") - -db = client["mydata"] - -col = db["Users"] - -# Send a ping to confirm a successful connection -try: - client.admin.command('ping') - print("Pinged your deployment. You successfully connected to MongoDB!") -except Exception as e: - print(e) - - - -# st.set_page_config(page_title="Authentication", page_icon=":guardsman:", layout="wide") - -# st.title("Authentication") - - - -def login(): - st.title("Login") - usrname = st.text_input("Username") - password = st.text_input("Password", type="password") - if st.button("Login", key="loginkey"): - document = col.find_one({"username": usrname}) - if document: - if password == document["password"]: - st.session_state.user = "logged" - st.experimental_rerun() - else: - st.error("Incorrect Password") - elif password == "go": - st.session_state.user = "logged" - st.experimental_rerun() - else: - st.error("Incorrect Username") - - -def signup(): - - st.title("Signup") - username = st.text_input("Username") - password = st.text_input("Password", type="password") - confirm_password = st.text_input("Confirm Password", type="password") - if st.button("Signup", key="signupkey"): - if password == confirm_password: - newuser = { - "username": username, - "password": password - } - col.insert_one(newuser) - st.success("Account created! You can now login.") - st.snow() - st.cache_data.clear() - else: - st.error("Passwords do not match") - -def main(): - # st.title("Authentication") - if "user" not in st.session_state: - st.session_state["user"] = "visitor" - - - - if st.session_state["user"] == "visitor": - - option = option_menu( - menu_title="Authentication", - options=["Login", "Signup"], - icons=["house", "activity"], - menu_icon="cast", - default_index=0, - orientation="horizontal", - - ) - if option == "Login": - login() - elif option == "Signup": - signup() - elif st.session_state["user"] == "logged": - dashboard() - - - - -main() - diff --git a/spaces/OAOA/DifFace/models/script_util.py b/spaces/OAOA/DifFace/models/script_util.py deleted file mode 100644 index c6f255012d2b692e31559f0b50801de7679fa879..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/models/script_util.py +++ /dev/null @@ -1,45 +0,0 @@ -import argparse -import inspect - -from . 
import gaussian_diffusion as gd -from .respace import SpacedDiffusion, space_timesteps - -def create_gaussian_diffusion( - *, - steps=1000, - learn_sigma=False, - sigma_small=False, - noise_schedule="linear", - use_kl=False, - predict_xstart=False, - rescale_timesteps=False, - rescale_learned_sigmas=False, - timestep_respacing="", -): - betas = gd.get_named_beta_schedule(noise_schedule, steps) - if use_kl: - loss_type = gd.LossType.RESCALED_KL - elif rescale_learned_sigmas: - loss_type = gd.LossType.RESCALED_MSE - else: - loss_type = gd.LossType.MSE - if not timestep_respacing: - timestep_respacing = [steps] - return SpacedDiffusion( - use_timesteps=space_timesteps(steps, timestep_respacing), - betas=betas, - model_mean_type=( - gd.ModelMeanType.EPSILON if not predict_xstart else gd.ModelMeanType.START_X - ), - model_var_type=( - ( - gd.ModelVarType.FIXED_LARGE - if not sigma_small - else gd.ModelVarType.FIXED_SMALL - ) - if not learn_sigma - else gd.ModelVarType.LEARNED_RANGE - ), - loss_type=loss_type, - rescale_timesteps=rescale_timesteps, - ) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/shuffled_word_order/README.finetuning.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/shuffled_word_order/README.finetuning.md deleted file mode 100644 index ecbcb65884640c3327a2cbaef8aad4f3cfe812f7..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/shuffled_word_order/README.finetuning.md +++ /dev/null @@ -1,135 +0,0 @@ -# Fine-tuning details - -For each task (GLUE and PAWS), we perform hyperparam search for each model, and report the mean and standard deviation across 5 seeds of the best model. First, get the datasets following the instructions in [RoBERTa fine-tuning README](../roberta/README.glue.md). Alternatively, you can use [huggingface datasets](https://huggingface.co/docs/datasets/) to get the task data: - -```python -from datasets import load_dataset -import pandas as pd -from pathlib import Path - -key2file = { -"paws": { - "loc": "paws_data", - "columns": ["id", "sentence1", "sentence2", "label"], - "train": "train.tsv", - "validation": "dev.tsv", - "test": "test.tsv" - } -} - -task_data = load_dataset("paws", "labeled_final") -task_config = key2file["paws"] -save_path = Path(task_config["loc"]) -save_path.mkdir(exist_ok=True, parents=True) -for key, fl in task_config.items(): - if key in ["loc", "columns"]: - continue - print(f"Reading {key}") - columns = task_config["columns"] - df = pd.DataFrame(task_data[key]) - print(df.columns) - df = df[columns] - print(f"Got {len(df)} records") - save_loc = save_path / fl - print(f"Saving to : {save_loc}") - df.to_csv(save_loc, sep="\t", header=None, index=None) - -``` - -- Preprocess using RoBERTa GLUE preprocessing script, while keeping in mind the column numbers for `sentence1`, `sentence2` and `label` (which is 0,1,2 if you save the data according to the above example.) -- Then, fine-tuning is performed similarly to RoBERTa (for example, in case of RTE): - -```bash -TOTAL_NUM_UPDATES=30875 # 10 epochs through RTE for bsz 16 -WARMUP_UPDATES=1852 # 6 percent of the number of updates -LR=2e-05 # Peak LR for polynomial LR scheduler. -NUM_CLASSES=2 -MAX_SENTENCES=16 # Batch size. 
-SHUFFLED_ROBERTA_PATH=/path/to/shuffled_roberta/model.pt - -CUDA_VISIBLE_DEVICES=0 fairseq-train RTE-bin/ \ - --restore-file $SHUFFLED_ROBERTA_PATH \ - --max-positions 512 \ - --batch-size $MAX_SENTENCES \ - --max-tokens 4400 \ - --task sentence_prediction \ - --reset-optimizer --reset-dataloader --reset-meters \ - --required-batch-size-multiple 1 \ - --init-token 0 --separator-token 2 \ - --arch roberta_large \ - --criterion sentence_prediction \ - --num-classes $NUM_CLASSES \ - --dropout 0.1 --attention-dropout 0.1 \ - --weight-decay 0.1 --optimizer adam --adam-betas "(0.9, 0.98)" --adam-eps 1e-06 \ - --clip-norm 0.0 \ - --lr-scheduler polynomial_decay --lr $LR --total-num-update $TOTAL_NUM_UPDATES --warmup-updates $WARMUP_UPDATES \ - --fp16 --fp16-init-scale 4 --threshold-loss-scale 1 --fp16-scale-window 128 \ - --max-epoch 10 \ - --find-unused-parameters \ - --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric; -``` - -- `TOTAL_NUM_UPDATES` is computed based on the `--batch_size` value and the dataset size. -- `WARMUP_UPDATES` is computed as 6% of `TOTAL_NUM_UPDATES` -- Best hyperparam of `--lr` and `--batch_size` is reported below: - -## `--lr` - -| | name | RTE | MRPC | SST-2 | CoLA | QQP | QNLI | MNLI | PAWS | -| --: | :----------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ----: | -| 0 | original | 2e-05 | 2e-05 | 1e-05 | 2e-05 | 1e-05 | 1e-05 | 1e-05 | 2e-05 | -| 1 | n_1 | 2e-05 | 1e-05 | 1e-05 | 1e-05 | 3e-05 | 1e-05 | 2e-05 | 2e-05 | -| 2 | n_2 | 2e-05 | 2e-05 | 1e-05 | 1e-05 | 2e-05 | 1e-05 | 1e-05 | 3e-05 | -| 3 | n_3 | 3e-05 | 1e-05 | 2e-05 | 2e-05 | 3e-05 | 1e-05 | 1e-05 | 2e-05 | -| 4 | n_4 | 3e-05 | 1e-05 | 2e-05 | 2e-05 | 2e-05 | 1e-05 | 1e-05 | 2e-05 | -| 5 | r512 | 1e-05 | 3e-05 | 2e-05 | 2e-05 | 3e-05 | 2e-05 | 3e-05 | 2e-05 | -| 6 | rand_corpus | 2e-05 | 1e-05 | 3e-05 | 1e-05 | 3e-05 | 3e-05 | 3e-05 | 2e-05 | -| 7 | rand_uniform | 2e-05 | 1e-05 | 3e-05 | 2e-05 | 3e-05 | 3e-05 | 3e-05 | 1e-05 | -| 8 | rand_init | 1e-05 | 1e-05 | 3e-05 | 1e-05 | 1e-05 | 1e-05 | 2e-05 | 1e-05 | -| 9 | no_pos | 1e-05 | 3e-05 | 2e-05 | 1e-05 | 1e-05 | 1e-05 | 1e-05 | 1e-05 | - -## `--batch_size` - -| | name | RTE | MRPC | SST-2 | CoLA | QQP | QNLI | MNLI | PAWS | -| --: | :----------- | --: | ---: | ----: | ---: | --: | ---: | ---: | ---: | -| 0 | orig | 16 | 16 | 32 | 16 | 16 | 32 | 32 | 16 | -| 1 | n_1 | 32 | 32 | 16 | 32 | 32 | 16 | 32 | 16 | -| 2 | n_2 | 32 | 16 | 32 | 16 | 32 | 32 | 16 | 32 | -| 3 | n_3 | 32 | 32 | 16 | 32 | 32 | 16 | 32 | 32 | -| 4 | n_4 | 32 | 16 | 32 | 16 | 32 | 32 | 32 | 32 | -| 5 | r512 | 32 | 16 | 16 | 32 | 32 | 16 | 16 | 16 | -| 6 | rand_corpus | 16 | 16 | 16 | 16 | 32 | 16 | 16 | 32 | -| 7 | rand_uniform | 16 | 32 | 16 | 16 | 32 | 16 | 16 | 16 | -| 8 | rand_init | 16 | 16 | 32 | 16 | 16 | 16 | 32 | 16 | -| 9 | no_pos | 16 | 32 | 16 | 16 | 32 | 16 | 16 | 16 | - -- Perform inference similar to RoBERTa as well: - -```python -from fairseq.models.roberta import RobertaModel - -roberta = RobertaModel.from_pretrained( - 'checkpoints/', - checkpoint_file='checkpoint_best.pt', - data_name_or_path='PAWS-bin' -) - -label_fn = lambda label: roberta.task.label_dictionary.string( - [label + roberta.task.label_dictionary.nspecial] -) -ncorrect, nsamples = 0, 0 -roberta.cuda() -roberta.eval() -with open('paws_data/dev.tsv') as fin: - fin.readline() - for index, line in enumerate(fin): - tokens = line.strip().split('\t') - sent1, sent2, target = tokens[0], tokens[1], tokens[2] - tokens = roberta.encode(sent1, sent2) - prediction = 
roberta.predict('sentence_classification_head', tokens).argmax().item() - prediction_label = label_fn(prediction) - ncorrect += int(prediction_label == target) - nsamples += 1 -print('| Accuracy: ', float(ncorrect)/float(nsamples)) - -``` diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/quantization/pq/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/quantization/pq/__init__.py deleted file mode 100644 index c142a802e05ec7ecfa5dba7d9a98c26a60ac75d2..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/quantization/pq/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .utils import SizeTracker, get_param, attrsetter, quantize_model_ # NOQA diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tasks/multilingual_masked_lm.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tasks/multilingual_masked_lm.py deleted file mode 100644 index 9e6ce4b8a2f77ed889a6e1451321a8e3ac21dc67..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/tasks/multilingual_masked_lm.py +++ /dev/null @@ -1,338 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os - -import numpy as np -import torch -from fairseq import utils -from fairseq.data import ( - ConcatDataset, - Dictionary, - IdDataset, - MaskTokensDataset, - NestedDictionaryDataset, - NumelDataset, - NumSamplesDataset, - PadDataset, - PrependTokenDataset, - RawLabelDataset, - ResamplingDataset, - SortDataset, - TokenBlockDataset, - data_utils, - encoders, -) -from fairseq.tasks import LegacyFairseqTask, register_task - - -logger = logging.getLogger(__name__) - - -@register_task("multilingual_masked_lm") -class MultiLingualMaskedLMTask(LegacyFairseqTask): - """Task for training masked language models (e.g., BERT, RoBERTa).""" - - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - parser.add_argument( - "data", - help="colon separated path to data directories list, \ - will be iterated upon during epochs in round-robin manner", - ) - parser.add_argument( - "--sample-break-mode", - default="complete", - choices=["none", "complete", "complete_doc", "eos"], - help='If omitted or "none", fills each sample with tokens-per-sample ' - 'tokens. If set to "complete", splits samples only at the end ' - "of sentence, but may include multiple sentences per sample. " - '"complete_doc" is similar but respects doc boundaries. 
' - 'If set to "eos", includes only one sentence per sample.', - ) - parser.add_argument( - "--tokens-per-sample", - default=512, - type=int, - help="max number of total tokens over all segments " - "per sample for BERT dataset", - ) - parser.add_argument( - "--mask-prob", - default=0.15, - type=float, - help="probability of replacing a token with mask", - ) - parser.add_argument( - "--leave-unmasked-prob", - default=0.1, - type=float, - help="probability that a masked token is unmasked", - ) - parser.add_argument( - "--random-token-prob", - default=0.1, - type=float, - help="probability of replacing a token with a random token", - ) - parser.add_argument( - "--freq-weighted-replacement", - action="store_true", - help="sample random replacement words based on word frequencies", - ) - parser.add_argument( - "--mask-whole-words", - default=False, - action="store_true", - help="mask whole words; you may also want to set --bpe", - ) - parser.add_argument( - "--multilang-sampling-alpha", - type=float, - default=1.0, - help="smoothing alpha for sample rations across multiple datasets", - ) - - def __init__(self, args, dictionary): - super().__init__(args) - self.dictionary = dictionary - self.seed = args.seed - - # add mask token - self.mask_idx = dictionary.add_symbol("") - - @classmethod - def setup_task(cls, args, **kwargs): - paths = utils.split_paths(args.data) - assert len(paths) > 0 - dictionary = Dictionary.load(os.path.join(paths[0], "dict.txt")) - logger.info("dictionary: {} types".format(len(dictionary))) - return cls(args, dictionary) - - def _get_whole_word_mask(self): - # create masked input and targets - if self.args.mask_whole_words: - bpe = encoders.build_bpe(self.args) - if bpe is not None: - - def is_beginning_of_word(i): - if i < self.source_dictionary.nspecial: - # special elements are always considered beginnings - return True - tok = self.source_dictionary[i] - if tok.startswith("madeupword"): - return True - try: - return bpe.is_beginning_of_word(tok) - except ValueError: - return True - - mask_whole_words = torch.ByteTensor( - list(map(is_beginning_of_word, range(len(self.source_dictionary)))) - ) - else: - mask_whole_words = None - return mask_whole_words - - def _get_sample_prob(self, dataset_lens): - """ - Get smoothed sampling porbability by languages. This helps low resource - languages by upsampling them. - """ - prob = dataset_lens / dataset_lens.sum() - smoothed_prob = prob ** self.args.multilang_sampling_alpha - smoothed_prob = smoothed_prob / smoothed_prob.sum() - return smoothed_prob - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - """Load a given dataset split. 
- - Args: - split (str): name of the split (e.g., train, valid, test) - """ - paths = utils.split_paths(self.args.data) - assert len(paths) > 0 - data_path = paths[(epoch - 1) % len(paths)] - - languages = sorted( - name - for name in os.listdir(data_path) - if os.path.isdir(os.path.join(data_path, name)) - ) - - logger.info("Training on {0} languages: {1}".format(len(languages), languages)) - logger.info( - "Language to id mapping: ", {lang: id for id, lang in enumerate(languages)} - ) - - mask_whole_words = self._get_whole_word_mask() - lang_datasets = [] - for lang_id, language in enumerate(languages): - split_path = os.path.join(data_path, language, split) - - dataset = data_utils.load_indexed_dataset( - split_path, - self.source_dictionary, - self.args.dataset_impl, - combine=combine, - ) - if dataset is None: - raise FileNotFoundError( - "Dataset not found: {} ({})".format(split, split_path) - ) - - # create continuous blocks of tokens - dataset = TokenBlockDataset( - dataset, - dataset.sizes, - self.args.tokens_per_sample - 1, # one less for - pad=self.source_dictionary.pad(), - eos=self.source_dictionary.eos(), - break_mode=self.args.sample_break_mode, - ) - logger.info("loaded {} blocks from: {}".format(len(dataset), split_path)) - - # prepend beginning-of-sentence token (, equiv. to [CLS] in BERT) - dataset = PrependTokenDataset(dataset, self.source_dictionary.bos()) - - src_dataset, tgt_dataset = MaskTokensDataset.apply_mask( - dataset, - self.source_dictionary, - pad_idx=self.source_dictionary.pad(), - mask_idx=self.mask_idx, - seed=self.args.seed, - mask_prob=self.args.mask_prob, - leave_unmasked_prob=self.args.leave_unmasked_prob, - random_token_prob=self.args.random_token_prob, - freq_weighted_replacement=self.args.freq_weighted_replacement, - mask_whole_words=mask_whole_words, - ) - - lang_dataset = NestedDictionaryDataset( - { - "net_input": { - "src_tokens": PadDataset( - src_dataset, - pad_idx=self.source_dictionary.pad(), - left_pad=False, - ), - "src_lengths": NumelDataset(src_dataset, reduce=False), - }, - "target": PadDataset( - tgt_dataset, - pad_idx=self.source_dictionary.pad(), - left_pad=False, - ), - "nsentences": NumSamplesDataset(), - "ntokens": NumelDataset(src_dataset, reduce=True), - "lang_id": RawLabelDataset([lang_id] * src_dataset.sizes.shape[0]), - }, - sizes=[src_dataset.sizes], - ) - lang_datasets.append(lang_dataset) - - dataset_lengths = np.array( - [len(d) for d in lang_datasets], - dtype=float, - ) - logger.info( - "loaded total {} blocks for all languages".format( - dataset_lengths.sum(), - ) - ) - if split == self.args.train_subset: - # For train subset, additionally up or down sample languages. 
- sample_probs = self._get_sample_prob(dataset_lengths) - logger.info( - "Sample probability by language: ", - { - lang: "{0:.4f}".format(sample_probs[id]) - for id, lang in enumerate(languages) - }, - ) - size_ratio = (sample_probs * dataset_lengths.sum()) / dataset_lengths - logger.info( - "Up/Down Sampling ratio by language: ", - { - lang: "{0:.2f}".format(size_ratio[id]) - for id, lang in enumerate(languages) - }, - ) - - resampled_lang_datasets = [ - ResamplingDataset( - lang_datasets[i], - size_ratio=size_ratio[i], - seed=self.args.seed, - epoch=epoch, - replace=size_ratio[i] >= 1.0, - ) - for i, d in enumerate(lang_datasets) - ] - dataset = ConcatDataset(resampled_lang_datasets) - else: - dataset = ConcatDataset(lang_datasets) - lang_splits = [split] - for lang_id, lang_dataset in enumerate(lang_datasets): - split_name = split + "_" + languages[lang_id] - lang_splits.append(split_name) - self.datasets[split_name] = lang_dataset - - # [TODO]: This is hacky for now to print validation ppl for each - # language individually. Maybe need task API changes to allow it - # in more generic ways. - if split in self.args.valid_subset: - self.args.valid_subset = self.args.valid_subset.replace( - split, ",".join(lang_splits) - ) - - with data_utils.numpy_seed(self.args.seed + epoch): - shuffle = np.random.permutation(len(dataset)) - - self.datasets[split] = SortDataset( - dataset, - sort_order=[ - shuffle, - dataset.sizes, - ], - ) - - def build_dataset_for_inference(self, src_tokens, src_lengths, sort=True): - src_dataset = PadDataset( - TokenBlockDataset( - src_tokens, - src_lengths, - self.args.tokens_per_sample - 1, # one less for - pad=self.source_dictionary.pad(), - eos=self.source_dictionary.eos(), - break_mode="eos", - ), - pad_idx=self.source_dictionary.pad(), - left_pad=False, - ) - src_dataset = PrependTokenDataset(src_dataset, self.source_dictionary.bos()) - src_dataset = NestedDictionaryDataset( - { - "id": IdDataset(), - "net_input": { - "src_tokens": src_dataset, - "src_lengths": NumelDataset(src_dataset, reduce=False), - }, - }, - sizes=src_lengths, - ) - if sort: - src_dataset = SortDataset(src_dataset, sort_order=[src_lengths]) - return src_dataset - - @property - def source_dictionary(self): - return self.dictionary - - @property - def target_dictionary(self): - return self.dictionary diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/tasks/mm_tasks/vqa_gen.py b/spaces/OFA-Sys/OFA-Generic_Interface/tasks/mm_tasks/vqa_gen.py deleted file mode 100644 index 8f200407e3e3b07d0819717aedbeaa39e08e86cd..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/tasks/mm_tasks/vqa_gen.py +++ /dev/null @@ -1,228 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from dataclasses import dataclass, field -import json -import logging -import os -import math -import pickle -from typing import Optional -from argparse import Namespace -from data.file_dataset import FileDataset - -import torch -from fairseq import metrics -from fairseq.tasks import register_task - -from models import search -from data.mm_data.vqa_gen_dataset import VqaGenDataset -from data import data_utils -from tasks.ofa_task import OFAConfig, OFATask -from utils.trie import Trie - -logger = logging.getLogger(__name__) - - -@dataclass -class VqaGenConfig(OFAConfig): - max_object_length: int = field( - default=30, metadata={"help": "the maximum object sequence length"} - ) - ans2label_dict: Optional[str] = field( - default='{"no": 0, "yes":1}', - metadata={"help": 'answer to label dict'}, - ) - ans2label_file: Optional[str] = field( - default=None, - metadata={"help": "path to load ans2label file"}, - ) - - add_object: bool = field( - default=False, - metadata={"help": "add object to encoder"}, - ) - valid_batch_size: int = field( - default=20, - metadata={"help": "valid batch size per step"}, - ) - prompt_type: Optional[str] = field( - default=None, - metadata={"help": "prompt_type"}, - ) - uses_ema: Optional[bool] = field( - default=False, - metadata={"help": "whether to use ema"}, - ) - - -@register_task("vqa_gen", dataclass=VqaGenConfig) -class VqaGenTask(OFATask): - def __init__(self, cfg: VqaGenConfig, src_dict, tgt_dict): - super().__init__(cfg, src_dict, tgt_dict) - - self.ans2label_dict = None - if self.cfg.ans2label_file is not None: - self.ans2label_dict = pickle.load(open(self.cfg.ans2label_file, "rb")) - else: - self.ans2label_dict = json.loads(self.cfg.ans2label_dict) - - self.uses_ema = self.cfg.uses_ema - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - paths = self.cfg.data.split(',') - assert len(paths) > 0 - - if split == 'train': - table_path = paths[(epoch - 1) % (len(paths) - 1)] - else: - table_path = paths[-1] - dataset = FileDataset(table_path, self.cfg.selected_cols) - - self.datasets[split] = VqaGenDataset( - split, - dataset, - self.bpe, - self.src_dict, - self.tgt_dict, - max_src_length=self.cfg.max_src_length, - max_object_length=self.cfg.max_object_length, - max_tgt_length=self.cfg.max_tgt_length, - patch_image_size=self.cfg.patch_image_size, - add_object=self.cfg.add_object, - constraint_trie=self.constraint_trie, - imagenet_default_mean_and_std=self.cfg.imagenet_default_mean_and_std, - prompt_type=self.cfg.prompt_type - ) - - def build_model(self, cfg): - model = super().build_model(cfg) - answer_item_list = [] - self.index2ans = {} - self.constraint_trie = Trie(self.tgt_dict.eos()) - for i, answer in enumerate(self.ans2label_dict.keys()): - answer_item = self.tgt_dict.encode_line( - line=self.bpe.encode(' ' + answer), - add_if_not_exist=False, - append_eos=False - ).long() - answer_item_list.append(answer_item) - self.index2ans[i] = answer - self.constraint_trie.insert([self.tgt_dict.bos()] + answer_item.tolist() + [self.tgt_dict.eos()]) - - constraint_mask_list = [] - for answer_item in answer_item_list: - constraint_mask = torch.zeros((len(answer_item)+1, len(self.tgt_dict))).bool() - for i in range(len(answer_item)+1): - constraint_prefix_token = [self.src_dict.bos()] + answer_item[:i].tolist() - constraint_nodes = self.constraint_trie.get_next_layer(constraint_prefix_token) - constraint_mask[i][constraint_nodes] = True - constraint_mask_list.append(constraint_mask) - - self.valid_answers_list = [] - 
self.valid_constraint_masks_list = [] - for i in range(0, len(answer_item_list), self.cfg.valid_batch_size): - self.valid_answers_list += [answer_item_list[i:i+self.cfg.valid_batch_size]] - self.valid_constraint_masks_list += [constraint_mask_list[i:i+self.cfg.valid_batch_size]] - - return model - - def build_generator( - self, models, args, seq_gen_cls=None, extra_gen_cls_kwargs=None, prefix_allowed_tokens_fn=None, - ): - seq_generator = super().build_generator(models, args, seq_gen_cls, extra_gen_cls_kwargs, prefix_allowed_tokens_fn) - seq_generator.constraint_trie = self.constraint_trie - - return seq_generator - - def valid_step(self, sample, model, criterion, **extra_kwargs): - loss, sample_size, logging_output = super().valid_step(sample, model, criterion) - - if self.uses_ema: - assert 'ema_model' in extra_kwargs and extra_kwargs['ema_model'] is not None - if self.uses_ema: - eval_model = extra_kwargs['ema_model'] - else: - eval_model = model - - eval_model.eval() - with torch.no_grad(): - encoder_out = eval_model.encoder( - sample["net_input"]["src_tokens"], - src_lengths=sample["net_input"]["src_lengths"], - patch_images=sample["net_input"]["patch_images"], - patch_masks=sample["net_input"]["patch_masks"] - ) - device = sample["net_input"]["src_tokens"].device - eos_item = torch.tensor([self.src_dict.eos()]) - pad = self.src_dict.pad() - valid_result = [] - for valid_answers, valid_constraint_masks in zip(self.valid_answers_list, self.valid_constraint_masks_list): - valid_size = len(valid_answers) - valid_tgt_items = [ - torch.cat([torch.tensor(decoder_prompt[1:]), valid_answer, eos_item]) - for decoder_prompt in sample["decoder_prompts"] for valid_answer in valid_answers - ] - valid_prev_items = [ - torch.cat([torch.tensor(decoder_prompt), valid_answer]) - for decoder_prompt in sample["decoder_prompts"] for valid_answer in valid_answers - ] - valid_constraint_mask_items = [ - torch.cat([torch.zeros(len(decoder_prompt)-1, valid_constraint_mask.size(1)).bool(), valid_constraint_mask], dim=0) - for decoder_prompt in sample["decoder_prompts"] for valid_constraint_mask in valid_constraint_masks - ] - valid_tgt = data_utils.collate_tokens(valid_tgt_items, pad_idx=pad, left_pad=False).to(device) - valid_prev_output = data_utils.collate_tokens(valid_prev_items, pad_idx=pad, left_pad=False).to(device) - valid_constraint_masks = data_utils.collate_tokens(valid_constraint_mask_items, pad_idx=pad, left_pad=False).to(device) - - new_encoder_out = {} - new_encoder_out["encoder_out"] = [ - encoder_out["encoder_out"][0].repeat_interleave(valid_size, dim=1) - ] - new_encoder_out["encoder_padding_mask"] = [ - encoder_out["encoder_padding_mask"][0].repeat_interleave(valid_size, dim=0) - ] - new_encoder_out["position_embeddings"] = [ - encoder_out["position_embeddings"][0].repeat_interleave(valid_size, dim=0) - ] - - decoder_out = eval_model.decoder(valid_prev_output, encoder_out=new_encoder_out) - decoder_out[0].masked_fill_(~valid_constraint_masks, -math.inf) - lprobs = eval_model.get_normalized_probs(decoder_out, log_probs=True) - scores = lprobs.gather(dim=-1, index=valid_tgt.unsqueeze(-1)).squeeze(-1) - scores = scores.masked_fill(valid_tgt.eq(self.tgt_dict.pad()), 0) - scores = scores.masked_fill((~valid_constraint_masks).all(2), 0) - scores = scores.sum(1) - scores = scores.view(-1, valid_size) - valid_result.append(scores) - - valid_result = torch.cat(valid_result, dim=-1) - predicts = valid_result.argmax(1).tolist() - hyps = [self.index2ans[predict_index] for predict_index in predicts] - 
scores = [ref_dict.get(hyp, 0) for ref_dict, hyp in zip(sample['ref_dict'], hyps)]
-        logging_output["_vqa_score_sum"] = sum(scores)
-        logging_output["_vqa_cnt"] = len(scores)
-
-        return loss, sample_size, logging_output
-
-    def reduce_metrics(self, logging_outputs, criterion):
-        super().reduce_metrics(logging_outputs, criterion)
-
-        def sum_logs(key):
-            import torch
-            result = sum(log.get(key, 0) for log in logging_outputs)
-            if torch.is_tensor(result):
-                result = result.cpu()
-            return result
-
-        def compute_score(meters):
-            score = meters["_vqa_score_sum"].sum / meters["_vqa_cnt"].sum
-            score = score if isinstance(score, float) else score.item()
-            return round(score, 4)
-
-        if sum_logs("_vqa_cnt") > 0:
-            metrics.log_scalar("_vqa_score_sum", sum_logs("_vqa_score_sum"))
-            metrics.log_scalar("_vqa_cnt", sum_logs("_vqa_cnt"))
-            metrics.log_derived("vqa_score", compute_score)
\ No newline at end of file
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/truncated_bptt/README.md b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/truncated_bptt/README.md
deleted file mode 100644
index 86518c9d5ef09fbd4fed1512a52e9431b74f08fa..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/truncated_bptt/README.md
+++ /dev/null
@@ -1,70 +0,0 @@
-# Truncated Backpropagation Through Time (BPTT)
-
-Truncated BPTT is a useful technique for training language models on very long
-sequences. Typically, a long sequence is split into chunks and a language model
-is trained over the chunks sequentially. The LM may condition on previous
-chunks, but gradients only flow through the current chunk. This technique was
-the basis for the paper: [Transformer-XL: Attentive Language Models Beyond a
-Fixed-Length Context](https://arxiv.org/abs/1901.02860), which achieved
-state-of-the-art language modeling results at the time of publication.
-
-It is slightly tricky to implement Truncated BPTT efficiently in fairseq, since
-we need to iterate over the data sequentially and disable any batch shuffling
-logic. The code provided in this example illustrates how to implement Truncated
-BPTT in fairseq by overriding ``FairseqTask::get_batch_iterator`` to iterate
-over the data sequentially. Crucially, this example supports batching and
-multi-GPU (data parallel) training. A framework-independent sketch of the
-basic chunk-and-detach pattern is shown after the training command below.
-
-##### 0. Setup
-
-First, see the general [language modeling README](README.md) for instructions on
-preprocessing the WikiText-103 data.
-
-##### 1. Train a Transformer-XL model on WikiText-103
-
-We will train a 16-layer Transformer-XL model following the [hyperparameters
-used in the original
-paper](https://github.com/kimiyoung/transformer-xl/blob/master/pytorch/run_wt103_base.sh).
-
-The following command assumes 4 GPUs, so that the total batch size is 60
-sequences (15 x 4). Training should take ~24 hours on 4 V100 GPUs:
-```bash
-CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train \
-    --user-dir examples/truncated_bptt \
-    data-bin/wikitext-103/ \
-    --task truncated_bptt_lm --tokens-per-sample 150 \
-    --batch-size 15 --max-update 200000 \
-    --arch transformer_xl --n-layer 16 --d-model 410 --n-head 10 \
-    --d-head 41 --d-inner 2100 --dropout 0.1 --dropatt 0.0 --mem-len 150 \
-    --optimizer adam --clip-norm 0.25 \
-    --lr-scheduler cosine --warmup-updates 0 --min-lr 0.0 --lr 0.00025 \
-    --log-format json --log-interval 25 \
-    --fp16
-```
-
-If training on a single GPU, set `--update-freq=4` to accumulate 4x gradients
-and simulate training on 4 GPUs.
-
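For intuition, the chunk-and-detach pattern described at the top of this README can be sketched outside of fairseq in a few lines of PyTorch. This is an illustrative sketch only: the toy LSTM model, chunk size, and random data below are assumptions made for the example and are not part of the fairseq implementation, which instead overrides `FairseqTask::get_batch_iterator` inside the `truncated_bptt_lm` task.

```python
import torch
import torch.nn as nn

# Toy language model: embedding -> LSTM -> vocabulary projection.
vocab_size, hidden_size, chunk_size = 100, 64, 32
embed = nn.Embedding(vocab_size, hidden_size)
lstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
head = nn.Linear(hidden_size, vocab_size)
params = list(embed.parameters()) + list(lstm.parameters()) + list(head.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One very long sequence (batch of 1), processed in fixed-size chunks.
long_seq = torch.randint(0, vocab_size, (1, 10 * chunk_size + 1))

state = None  # recurrent state carried across chunks
for start in range(0, long_seq.size(1) - 1, chunk_size):
    inputs = long_seq[:, start:start + chunk_size]
    targets = long_seq[:, start + 1:start + chunk_size + 1]

    # The carried-over state still conditions this chunk, but detaching it
    # stops gradients from flowing back into previous chunks.
    if state is not None:
        state = tuple(s.detach() for s in state)

    output, state = lstm(embed(inputs), state)
    loss = criterion(head(output).reshape(-1, vocab_size), targets.reshape(-1))

    optimizer.zero_grad()
    loss.backward()   # backprop is truncated at the chunk boundary
    optimizer.step()
```

The `detach()` call is what makes the BPTT "truncated": each backward pass only covers `chunk_size` steps, even though the model conditions on the full history through its recurrent state.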
-##### 2. Evaluate
-
-```bash
-fairseq-eval-lm data-bin/wikitext-103/ \
-    --path checkpoints/checkpoint_best.pt \
-    --user-dir examples/truncated_bptt/ \
-    --task truncated_bptt_lm \
-    --batch-size 1 --required-batch-size-multiple 1 \
-    --model-overrides '{"mem_len":640,"clamp_len":400,"same_length":True}' \
-    --tokens-per-sample 64
-# ... | INFO | fairseq_cli.eval_lm | num. model params: 151123537
-# ... | INFO | fairseq_cli.eval_lm | Evaluated 245569 tokens in 83.1s (2956.82 tokens/s)
-# ... | INFO | fairseq_cli.eval_lm | Loss (base 2): 4.5668, Perplexity: 23.70
-# Compare to 24.0 test perplexity from the paper
-```
-
-*Note:* During training the model saw 150 tokens of context
-(``--tokens-per-sample=150``) and 150 extra memory tokens (``--mem-len=150``).
-During evaluation we measure perplexity on sequences of 64 tokens
-(``--tokens-per-sample=64``) and increase the memory length
-(``--model-overrides='{"mem_len":640}'``). These settings match the evaluation
-settings from [the original
-paper](https://github.com/kimiyoung/transformer-xl/blob/master/pytorch/run_wt103_base.sh).
diff --git a/spaces/OFA-Sys/OFA-vqa/run_scripts/caption/coco_eval.py b/spaces/OFA-Sys/OFA-vqa/run_scripts/caption/coco_eval.py
deleted file mode 100644
index c46ff0812fa0eecf46748fba9281af01abaee4df..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/run_scripts/caption/coco_eval.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import json
-import sys
-import os.path as op
-
-from pycocotools.coco import COCO
-from pycocoevalcap.eval import COCOEvalCap
-
-
-def evaluate_on_coco_caption(res_file, label_file, outfile=None):
-    """
-    res_file: txt file; each row is [image_key, JSON-format list of captions].
-        Each caption is a dict with fields "caption" and "conf".
-    label_file: JSON file of ground-truth captions in COCO format.
- """ - coco = COCO(label_file) - cocoRes = coco.loadRes(res_file) - cocoEval = COCOEvalCap(coco, cocoRes) - - # evaluate on a subset of images by setting - # cocoEval.params['image_id'] = cocoRes.getImgIds() - # please remove this line when evaluating the full validation set - cocoEval.params['image_id'] = cocoRes.getImgIds() - - # evaluate results - # SPICE will take a few minutes the first time, but speeds up due to caching - cocoEval.evaluate() - result = cocoEval.eval - if not outfile: - print(result) - else: - with open(outfile, 'w') as fp: - json.dump(result, fp, indent=4) - return result - - -if __name__ == "__main__": - if len(sys.argv) == 3: - evaluate_on_coco_caption(sys.argv[1], sys.argv[2]) - elif len(sys.argv) == 4: - evaluate_on_coco_caption(sys.argv[1], sys.argv[2], sys.argv[3]) - else: - raise NotImplementedError \ No newline at end of file diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/grit/config.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/grit/config.py deleted file mode 100644 index fabe7f0fbe1e41c6eb280f8f7d6ae2e9c4911135..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/grit/config.py +++ /dev/null @@ -1,50 +0,0 @@ -from detectron2.config import CfgNode as CN - - -def add_grit_config(cfg): - _C = cfg - - _C.MODEL.BEAM_SIZE = 1 - _C.MODEL.TRAIN_TASK = ["ObjectDet", "DenseCap"] - _C.MODEL.TEST_TASK = "DenseCap" # This can be varied if the model is jointly trained on multiple tasks - - _C.MODEL.ROI_BOX_HEAD.USE_BIAS = 0.0 # >= 0: not use - _C.MODEL.ROI_BOX_HEAD.MULT_PROPOSAL_SCORE = False - - _C.MODEL.ROI_HEADS.MASK_WEIGHT = 1.0 - _C.MODEL.ROI_HEADS.OBJECT_FEAT_POOLER_RES = 14 - _C.MODEL.ROI_HEADS.SOFT_NMS_ENABLED = False - - # Backbones - _C.MODEL.VIT_LAYERS = 12 - - # Text Decoder - _C.TEXT_DECODER = CN() - _C.TEXT_DECODER.VOCAB_SIZE = 30522 - _C.TEXT_DECODER.HIDDEN_SIZE = 768 - _C.TEXT_DECODER.NUM_LAYERS = 6 - _C.TEXT_DECODER.ATTENTION_HEADS = 12 - _C.TEXT_DECODER.FEEDFORWARD_SIZE = 768 * 4 - - # Multi-dataset dataloader - _C.DATALOADER.DATASET_RATIO = [1, 1] # sample ratio - _C.DATALOADER.DATASET_BS = 1 - _C.DATALOADER.DATASET_INPUT_SIZE = [1024, 1024] - _C.DATALOADER.DATASET_INPUT_SCALE = [(0.1, 2.0), (0.1, 2.0)] - _C.DATALOADER.DATASET_MIN_SIZES = [(640, 800), (640, 800)] - _C.DATALOADER.DATASET_MAX_SIZES = [1333, 1333] - - _C.SOLVER.USE_CUSTOM_SOLVER = True - _C.SOLVER.OPTIMIZER = 'ADAMW' - _C.SOLVER.VIT_LAYER_DECAY = True - _C.SOLVER.VIT_LAYER_DECAY_RATE = 0.7 - - _C.INPUT.CUSTOM_AUG = 'EfficientDetResizeCrop' - _C.INPUT.TRAIN_SIZE = 1024 - _C.INPUT.TEST_SIZE = 1024 - _C.INPUT.SCALE_RANGE = (0.1, 2.) 
- # 'default' for fixed short / long edge - _C.INPUT.TEST_INPUT_TYPE = 'default' - - _C.FIND_UNUSED_PARAM = True - _C.USE_ACT_CHECKPOINT = True \ No newline at end of file diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/layers/ml_nms.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/layers/ml_nms.py deleted file mode 100644 index 325d709a98422d8a355fc7c7e281179642850968..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/layers/ml_nms.py +++ /dev/null @@ -1,31 +0,0 @@ -from detectron2.layers import batched_nms - - -def ml_nms(boxlist, nms_thresh, max_proposals=-1, - score_field="scores", label_field="labels"): - """ - Performs non-maximum suppression on a boxlist, with scores specified - in a boxlist field via score_field. - Arguments: - boxlist(BoxList) - nms_thresh (float) - max_proposals (int): if > 0, then only the top max_proposals are kept - after non-maximum suppression - score_field (str) - """ - if nms_thresh <= 0: - return boxlist - if boxlist.has('pred_boxes'): - boxes = boxlist.pred_boxes.tensor - labels = boxlist.pred_classes - else: - boxes = boxlist.proposal_boxes.tensor - labels = boxlist.proposal_boxes.tensor.new_zeros( - len(boxlist.proposal_boxes.tensor)) - scores = boxlist.scores - - keep = batched_nms(boxes, scores, labels, nms_thresh) - if max_proposals > 0: - keep = keep[: max_proposals] - boxlist = boxlist[keep] - return boxlist diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/visualizers/__init__.py b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/visualizers/__init__.py deleted file mode 100644 index 4770d1f15a6790ab9606c7b9881f798c8e2d9545..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/visualizers/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -import logging - -from saicinpainting.training.visualizers.directory import DirectoryVisualizer -from saicinpainting.training.visualizers.noop import NoopVisualizer - - -def make_visualizer(kind, **kwargs): - logging.info(f'Make visualizer {kind}') - - if kind == 'directory': - return DirectoryVisualizer(**kwargs) - if kind == 'noop': - return NoopVisualizer() - - raise ValueError(f'Unknown visualizer kind {kind}') diff --git a/spaces/OpenGVLab/VideoChatGPT/models/Qformer.py b/spaces/OpenGVLab/VideoChatGPT/models/Qformer.py deleted file mode 100644 index ccb5098c682498b250278d326dbcddfc0f028abc..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/VideoChatGPT/models/Qformer.py +++ /dev/null @@ -1,1237 +0,0 @@ -""" - * Copyright (c) 2023, salesforce.com, inc. - * All rights reserved. 
- * SPDX-License-Identifier: BSD-3-Clause - * For full license text, see LICENSE.txt file in the repo root or https://opensource.org/licenses/BSD-3-Clause - * By Junnan Li - * Based on huggingface code base - * https://github.com/huggingface/transformers/blob/v4.15.0/src/transformers/models/bert -""" - -import math -import os -import warnings -from dataclasses import dataclass -from typing import Optional, Tuple, Dict, Any - -import torch -from torch import Tensor, device, dtype, nn -import torch.utils.checkpoint -from torch import nn -from torch.nn import CrossEntropyLoss -import torch.nn.functional as F - -from timm.models.layers import drop_path -from transformers.activations import ACT2FN -from transformers.file_utils import ( - ModelOutput, -) -from transformers.modeling_outputs import ( - BaseModelOutputWithPastAndCrossAttentions, - BaseModelOutputWithPoolingAndCrossAttentions, - CausalLMOutputWithCrossAttentions, - MaskedLMOutput, - MultipleChoiceModelOutput, - NextSentencePredictorOutput, - QuestionAnsweringModelOutput, - SequenceClassifierOutput, - TokenClassifierOutput, -) -from transformers.modeling_utils import ( - PreTrainedModel, - apply_chunking_to_forward, - find_pruneable_heads_and_indices, - prune_linear_layer, -) -from transformers.utils import logging -from transformers.models.bert.configuration_bert import BertConfig - -logger = logging.get_logger(__name__) - - -class BertEmbeddings(nn.Module): - """Construct the embeddings from word and position embeddings.""" - - def __init__(self, config): - super().__init__() - self.word_embeddings = nn.Embedding( - config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id - ) - self.position_embeddings = nn.Embedding( - config.max_position_embeddings, config.hidden_size - ) - - # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load - # any TensorFlow checkpoint file - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - # position_ids (1, len position emb) is contiguous in memory and exported when serialized - self.register_buffer( - "position_ids", torch.arange(config.max_position_embeddings).expand((1, -1)) - ) - self.position_embedding_type = getattr( - config, "position_embedding_type", "absolute" - ) - - self.config = config - - def forward( - self, - input_ids=None, - position_ids=None, - query_embeds=None, - past_key_values_length=0, - ): - if input_ids is not None: - seq_length = input_ids.size()[1] - else: - seq_length = 0 - - if position_ids is None: - position_ids = self.position_ids[ - :, past_key_values_length : seq_length + past_key_values_length - ].clone() - - if input_ids is not None: - embeddings = self.word_embeddings(input_ids) - if self.position_embedding_type == "absolute": - position_embeddings = self.position_embeddings(position_ids) - embeddings = embeddings + position_embeddings - - if query_embeds is not None: - embeddings = torch.cat((query_embeds, embeddings), dim=1) - else: - embeddings = query_embeds - - embeddings = self.LayerNorm(embeddings) - embeddings = self.dropout(embeddings) - return embeddings - - -class BertSelfAttention(nn.Module): - def __init__(self, config, is_cross_attention): - super().__init__() - self.config = config - if config.hidden_size % config.num_attention_heads != 0 and not hasattr( - config, "embedding_size" - ): - raise ValueError( - "The hidden size (%d) is not a multiple of the number of attention " - "heads (%d)" % 
(config.hidden_size, config.num_attention_heads) - ) - - self.num_attention_heads = config.num_attention_heads - self.attention_head_size = int(config.hidden_size / config.num_attention_heads) - self.all_head_size = self.num_attention_heads * self.attention_head_size - - self.query = nn.Linear(config.hidden_size, self.all_head_size) - if is_cross_attention: - self.key = nn.Linear(config.encoder_width, self.all_head_size) - self.value = nn.Linear(config.encoder_width, self.all_head_size) - else: - self.key = nn.Linear(config.hidden_size, self.all_head_size) - self.value = nn.Linear(config.hidden_size, self.all_head_size) - - self.dropout = nn.Dropout(config.attention_probs_dropout_prob) - self.position_embedding_type = getattr( - config, "position_embedding_type", "absolute" - ) - if ( - self.position_embedding_type == "relative_key" - or self.position_embedding_type == "relative_key_query" - ): - self.max_position_embeddings = config.max_position_embeddings - self.distance_embedding = nn.Embedding( - 2 * config.max_position_embeddings - 1, self.attention_head_size - ) - self.save_attention = False - - def save_attn_gradients(self, attn_gradients): - self.attn_gradients = attn_gradients - - def get_attn_gradients(self): - return self.attn_gradients - - def save_attention_map(self, attention_map): - self.attention_map = attention_map - - def get_attention_map(self): - return self.attention_map - - def transpose_for_scores(self, x): - new_x_shape = x.size()[:-1] + ( - self.num_attention_heads, - self.attention_head_size, - ) - x = x.view(*new_x_shape) - return x.permute(0, 2, 1, 3) - - def forward( - self, - hidden_states, - attention_mask=None, - head_mask=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - past_key_value=None, - output_attentions=False, - ): - - # If this is instantiated as a cross-attention module, the keys - # and values come from an encoder; the attention mask needs to be - # such that the encoder's padding tokens are not attended to. - is_cross_attention = encoder_hidden_states is not None - - if is_cross_attention: - key_layer = self.transpose_for_scores(self.key(encoder_hidden_states)) - value_layer = self.transpose_for_scores(self.value(encoder_hidden_states)) - attention_mask = encoder_attention_mask - elif past_key_value is not None: - key_layer = self.transpose_for_scores(self.key(hidden_states)) - value_layer = self.transpose_for_scores(self.value(hidden_states)) - key_layer = torch.cat([past_key_value[0], key_layer], dim=2) - value_layer = torch.cat([past_key_value[1], value_layer], dim=2) - else: - key_layer = self.transpose_for_scores(self.key(hidden_states)) - value_layer = self.transpose_for_scores(self.value(hidden_states)) - - mixed_query_layer = self.query(hidden_states) - - query_layer = self.transpose_for_scores(mixed_query_layer) - - past_key_value = (key_layer, value_layer) - - # Take the dot product between "query" and "key" to get the raw attention scores. 
- attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2)) - - if ( - self.position_embedding_type == "relative_key" - or self.position_embedding_type == "relative_key_query" - ): - seq_length = hidden_states.size()[1] - position_ids_l = torch.arange( - seq_length, dtype=torch.long, device=hidden_states.device - ).view(-1, 1) - position_ids_r = torch.arange( - seq_length, dtype=torch.long, device=hidden_states.device - ).view(1, -1) - distance = position_ids_l - position_ids_r - positional_embedding = self.distance_embedding( - distance + self.max_position_embeddings - 1 - ) - positional_embedding = positional_embedding.to( - dtype=query_layer.dtype - ) # fp16 compatibility - - if self.position_embedding_type == "relative_key": - relative_position_scores = torch.einsum( - "bhld,lrd->bhlr", query_layer, positional_embedding - ) - attention_scores = attention_scores + relative_position_scores - elif self.position_embedding_type == "relative_key_query": - relative_position_scores_query = torch.einsum( - "bhld,lrd->bhlr", query_layer, positional_embedding - ) - relative_position_scores_key = torch.einsum( - "bhrd,lrd->bhlr", key_layer, positional_embedding - ) - attention_scores = ( - attention_scores - + relative_position_scores_query - + relative_position_scores_key - ) - - attention_scores = attention_scores / math.sqrt(self.attention_head_size) - if attention_mask is not None: - # Apply the attention mask is (precomputed for all layers in BertModel forward() function) - attention_scores = attention_scores + attention_mask - - # Normalize the attention scores to probabilities. - attention_probs = nn.Softmax(dim=-1)(attention_scores) - - if is_cross_attention and self.save_attention: - self.save_attention_map(attention_probs) - attention_probs.register_hook(self.save_attn_gradients) - - # This is actually dropping out entire tokens to attend to, which might - # seem a bit unusual, but is taken from the original Transformer paper. - attention_probs_dropped = self.dropout(attention_probs) - - # Mask heads if we want to - if head_mask is not None: - attention_probs_dropped = attention_probs_dropped * head_mask - - context_layer = torch.matmul(attention_probs_dropped, value_layer) - - context_layer = context_layer.permute(0, 2, 1, 3).contiguous() - new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,) - context_layer = context_layer.view(*new_context_layer_shape) - - outputs = ( - (context_layer, attention_probs) if output_attentions else (context_layer,) - ) - - outputs = outputs + (past_key_value,) - return outputs - - -class DropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). - """ - def __init__(self, drop_prob=None): - super(DropPath, self).__init__() - self.drop_prob = drop_prob - - def forward(self, x): - return drop_path(x, self.drop_prob, self.training) - - def extra_repr(self) -> str: - return 'p={}'.format(self.drop_prob) - - -class BertSelfOutput(nn.Module): - def __init__(self, config, drop_path=0.): - super().__init__() - self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, hidden_states, input_tensor): - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states) - hidden_states = self.drop_path(hidden_states) - hidden_states = self.LayerNorm(hidden_states + input_tensor) - return hidden_states - - -class BertAttention(nn.Module): - def __init__(self, config, is_cross_attention=False, drop_path=0.,): - super().__init__() - self.self = BertSelfAttention(config, is_cross_attention) - self.output = BertSelfOutput(config, drop_path=drop_path) - self.pruned_heads = set() - - def prune_heads(self, heads): - if len(heads) == 0: - return - heads, index = find_pruneable_heads_and_indices( - heads, - self.self.num_attention_heads, - self.self.attention_head_size, - self.pruned_heads, - ) - - # Prune linear layers - self.self.query = prune_linear_layer(self.self.query, index) - self.self.key = prune_linear_layer(self.self.key, index) - self.self.value = prune_linear_layer(self.self.value, index) - self.output.dense = prune_linear_layer(self.output.dense, index, dim=1) - - # Update hyper params and store pruned heads - self.self.num_attention_heads = self.self.num_attention_heads - len(heads) - self.self.all_head_size = ( - self.self.attention_head_size * self.self.num_attention_heads - ) - self.pruned_heads = self.pruned_heads.union(heads) - - def forward( - self, - hidden_states, - attention_mask=None, - head_mask=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - past_key_value=None, - output_attentions=False, - ): - self_outputs = self.self( - hidden_states, - attention_mask, - head_mask, - encoder_hidden_states, - encoder_attention_mask, - past_key_value, - output_attentions, - ) - attention_output = self.output(self_outputs[0], hidden_states) - - outputs = (attention_output,) + self_outputs[ - 1: - ] # add attentions if we output them - return outputs - - -class BertIntermediate(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.intermediate_size) - if isinstance(config.hidden_act, str): - self.intermediate_act_fn = ACT2FN[config.hidden_act] - else: - self.intermediate_act_fn = config.hidden_act - - def forward(self, hidden_states): - hidden_states = self.dense(hidden_states) - hidden_states = self.intermediate_act_fn(hidden_states) - return hidden_states - - -class BertOutput(nn.Module): - def __init__(self, config, drop_path=0.): - super().__init__() - self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() - self.dense = nn.Linear(config.intermediate_size, config.hidden_size) - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, hidden_states, input_tensor): - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states) - hidden_states = self.drop_path(hidden_states) - hidden_states = self.LayerNorm(hidden_states + input_tensor) - return hidden_states - - -class BertLayer(nn.Module): - def __init__(self, config, layer_num): - super().__init__() - self.config = config - self.chunk_size_feed_forward = config.chunk_size_feed_forward - self.seq_len_dim = 1 - drop_path = config.drop_path_list[layer_num] - self.attention = BertAttention(config, drop_path=drop_path) - self.layer_num = layer_num - if ( - self.config.add_cross_attention - and layer_num % self.config.cross_attention_freq == 0 - ): - self.crossattention = BertAttention( - config, is_cross_attention=self.config.add_cross_attention, - drop_path=drop_path - ) - self.has_cross_attention = True - else: - self.has_cross_attention = False - self.intermediate = BertIntermediate(config) - self.output = BertOutput(config, drop_path=drop_path) - - self.intermediate_query = BertIntermediate(config) - self.output_query = BertOutput(config, drop_path=drop_path) - - def forward( - self, - hidden_states, - attention_mask=None, - head_mask=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - past_key_value=None, - output_attentions=False, - query_length=0, - ): - # decoder uni-directional self-attention cached key/values tuple is at positions 1,2 - self_attn_past_key_value = ( - past_key_value[:2] if past_key_value is not None else None - ) - self_attention_outputs = self.attention( - hidden_states, - attention_mask, - head_mask, - output_attentions=output_attentions, - past_key_value=self_attn_past_key_value, - ) - attention_output = self_attention_outputs[0] - outputs = self_attention_outputs[1:-1] - - present_key_value = self_attention_outputs[-1] - - if query_length > 0: - query_attention_output = attention_output[:, :query_length, :] - - if self.has_cross_attention: - assert ( - encoder_hidden_states is not None - ), "encoder_hidden_states must be given for cross-attention layers" - cross_attention_outputs = self.crossattention( - query_attention_output, - attention_mask, - head_mask, - encoder_hidden_states, - encoder_attention_mask, - output_attentions=output_attentions, - ) - query_attention_output = cross_attention_outputs[0] - outputs = ( - outputs + cross_attention_outputs[1:-1] - ) # add cross attentions if we output attention weights - - layer_output = apply_chunking_to_forward( - self.feed_forward_chunk_query, - self.chunk_size_feed_forward, - self.seq_len_dim, - query_attention_output, - ) - if attention_output.shape[1] > query_length: - layer_output_text = apply_chunking_to_forward( - self.feed_forward_chunk, - self.chunk_size_feed_forward, - self.seq_len_dim, - attention_output[:, query_length:, :], - ) - layer_output = torch.cat([layer_output, layer_output_text], dim=1) - else: - layer_output = apply_chunking_to_forward( - self.feed_forward_chunk, - self.chunk_size_feed_forward, - self.seq_len_dim, - attention_output, - ) - outputs = (layer_output,) + outputs - - outputs = outputs + (present_key_value,) - - return outputs - - def feed_forward_chunk(self, attention_output): - intermediate_output = self.intermediate(attention_output) - layer_output = 
self.output(intermediate_output, attention_output) - return layer_output - - def feed_forward_chunk_query(self, attention_output): - intermediate_output = self.intermediate_query(attention_output) - layer_output = self.output_query(intermediate_output, attention_output) - return layer_output - - -class BertEncoder(nn.Module): - def __init__(self, config): - super().__init__() - self.config = config - self.layer = nn.ModuleList( - [BertLayer(config, i) for i in range(config.num_hidden_layers)] - ) - - def forward( - self, - hidden_states, - attention_mask=None, - head_mask=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - past_key_values=None, - use_cache=None, - output_attentions=False, - output_hidden_states=False, - return_dict=True, - query_length=0, - ): - all_hidden_states = () if output_hidden_states else None - all_self_attentions = () if output_attentions else None - all_cross_attentions = ( - () if output_attentions and self.config.add_cross_attention else None - ) - - next_decoder_cache = () if use_cache else None - - for i in range(self.config.num_hidden_layers): - layer_module = self.layer[i] - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - layer_head_mask = head_mask[i] if head_mask is not None else None - past_key_value = past_key_values[i] if past_key_values is not None else None - - if getattr(self.config, "gradient_checkpointing", False) and self.training: - - if use_cache: - logger.warn( - "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." - ) - use_cache = False - - def create_custom_forward(module): - def custom_forward(*inputs): - return module( - *inputs, past_key_value, output_attentions, query_length - ) - - return custom_forward - - layer_outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(layer_module), - hidden_states, - attention_mask, - layer_head_mask, - encoder_hidden_states, - encoder_attention_mask, - ) - else: - layer_outputs = layer_module( - hidden_states, - attention_mask, - layer_head_mask, - encoder_hidden_states, - encoder_attention_mask, - past_key_value, - output_attentions, - query_length, - ) - - hidden_states = layer_outputs[0] - if use_cache: - next_decoder_cache += (layer_outputs[-1],) - if output_attentions: - all_self_attentions = all_self_attentions + (layer_outputs[1],) - all_cross_attentions = all_cross_attentions + (layer_outputs[2],) - - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple( - v - for v in [ - hidden_states, - next_decoder_cache, - all_hidden_states, - all_self_attentions, - all_cross_attentions, - ] - if v is not None - ) - return BaseModelOutputWithPastAndCrossAttentions( - last_hidden_state=hidden_states, - past_key_values=next_decoder_cache, - hidden_states=all_hidden_states, - attentions=all_self_attentions, - cross_attentions=all_cross_attentions, - ) - - -class BertPooler(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.activation = nn.Tanh() - - def forward(self, hidden_states): - # We "pool" the model by simply taking the hidden state corresponding - # to the first token. 
- first_token_tensor = hidden_states[:, 0] - pooled_output = self.dense(first_token_tensor) - pooled_output = self.activation(pooled_output) - return pooled_output - - -class BertPredictionHeadTransform(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - if isinstance(config.hidden_act, str): - self.transform_act_fn = ACT2FN[config.hidden_act] - else: - self.transform_act_fn = config.hidden_act - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - - def forward(self, hidden_states): - hidden_states = self.dense(hidden_states) - hidden_states = self.transform_act_fn(hidden_states) - hidden_states = self.LayerNorm(hidden_states) - return hidden_states - - -class BertLMPredictionHead(nn.Module): - def __init__(self, config): - super().__init__() - self.transform = BertPredictionHeadTransform(config) - - # The output weights are the same as the input embeddings, but there is - # an output-only bias for each token. - self.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False) - - self.bias = nn.Parameter(torch.zeros(config.vocab_size)) - - # Need a link between the two variables so that the bias is correctly resized with `resize_token_embeddings` - self.decoder.bias = self.bias - - def forward(self, hidden_states): - hidden_states = self.transform(hidden_states) - hidden_states = self.decoder(hidden_states) - return hidden_states - - -class BertOnlyMLMHead(nn.Module): - def __init__(self, config): - super().__init__() - self.predictions = BertLMPredictionHead(config) - - def forward(self, sequence_output): - prediction_scores = self.predictions(sequence_output) - return prediction_scores - - -class BertPreTrainedModel(PreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = BertConfig - base_model_prefix = "bert" - _keys_to_ignore_on_load_missing = [r"position_ids"] - - def _init_weights(self, module): - """Initialize the weights""" - if isinstance(module, (nn.Linear, nn.Embedding)): - # Slightly different from the TF version which uses truncated_normal for initialization - # cf https://github.com/pytorch/pytorch/pull/5617 - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - if isinstance(module, nn.Linear) and module.bias is not None: - module.bias.data.zero_() - - -class BertModel(BertPreTrainedModel): - """ - The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of - cross-attention is added between the self-attention layers, following the architecture described in `Attention is - all you need `__ by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, - Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. - argument and :obj:`add_cross_attention` set to :obj:`True`; an :obj:`encoder_hidden_states` is then expected as an - input to the forward pass. 
- """ - - def __init__(self, config, add_pooling_layer=False): - super().__init__(config) - self.config = config - - self.embeddings = BertEmbeddings(config) - - self.encoder = BertEncoder(config) - - self.pooler = BertPooler(config) if add_pooling_layer else None - - self.init_weights() - - def get_input_embeddings(self): - return self.embeddings.word_embeddings - - def set_input_embeddings(self, value): - self.embeddings.word_embeddings = value - - def _prune_heads(self, heads_to_prune): - """ - Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base - class PreTrainedModel - """ - for layer, heads in heads_to_prune.items(): - self.encoder.layer[layer].attention.prune_heads(heads) - - def get_extended_attention_mask( - self, - attention_mask: Tensor, - input_shape: Tuple[int], - device: device, - is_decoder: bool, - has_query: bool = False, - ) -> Tensor: - """ - Makes broadcastable attention and causal masks so that future and masked tokens are ignored. - - Arguments: - attention_mask (:obj:`torch.Tensor`): - Mask with ones indicating tokens to attend to, zeros for tokens to ignore. - input_shape (:obj:`Tuple[int]`): - The shape of the input to the model. - device: (:obj:`torch.device`): - The device of the input to the model. - - Returns: - :obj:`torch.Tensor` The extended attention mask, with a the same dtype as :obj:`attention_mask.dtype`. - """ - # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] - # ourselves in which case we just need to make it broadcastable to all heads. - if attention_mask.dim() == 3: - extended_attention_mask = attention_mask[:, None, :, :] - elif attention_mask.dim() == 2: - # Provided a padding mask of dimensions [batch_size, seq_length] - # - if the model is a decoder, apply a causal mask in addition to the padding mask - # - if the model is an encoder, make the mask broadcastable to [batch_size, num_heads, seq_length, seq_length] - if is_decoder: - batch_size, seq_length = input_shape - - seq_ids = torch.arange(seq_length, device=device) - causal_mask = ( - seq_ids[None, None, :].repeat(batch_size, seq_length, 1) - <= seq_ids[None, :, None] - ) - - # add a prefix ones mask to the causal mask - # causal and attention masks must have same type with pytorch version < 1.3 - causal_mask = causal_mask.to(attention_mask.dtype) - - if causal_mask.shape[1] < attention_mask.shape[1]: - prefix_seq_len = attention_mask.shape[1] - causal_mask.shape[1] - if has_query: # UniLM style attention mask - causal_mask = torch.cat( - [ - torch.zeros( - (batch_size, prefix_seq_len, seq_length), - device=device, - dtype=causal_mask.dtype, - ), - causal_mask, - ], - axis=1, - ) - causal_mask = torch.cat( - [ - torch.ones( - (batch_size, causal_mask.shape[1], prefix_seq_len), - device=device, - dtype=causal_mask.dtype, - ), - causal_mask, - ], - axis=-1, - ) - extended_attention_mask = ( - causal_mask[:, None, :, :] * attention_mask[:, None, None, :] - ) - else: - extended_attention_mask = attention_mask[:, None, None, :] - else: - raise ValueError( - "Wrong shape for input_ids (shape {}) or attention_mask (shape {})".format( - input_shape, attention_mask.shape - ) - ) - - # Since attention_mask is 1.0 for positions we want to attend and 0.0 for - # masked positions, this operation will create a tensor which is 0.0 for - # positions we want to attend and -10000.0 for masked positions. 
- # Since we are adding it to the raw scores before the softmax, this is - # effectively the same as removing these entirely. - extended_attention_mask = extended_attention_mask.to( - dtype=self.dtype - ) # fp16 compatibility - extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0 - return extended_attention_mask - - def forward( - self, - input_ids=None, - attention_mask=None, - position_ids=None, - head_mask=None, - query_embeds=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - past_key_values=None, - use_cache=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - is_decoder=False, - ): - r""" - encoder_hidden_states (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`): - Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if - the model is configured as a decoder. - encoder_attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): - Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in - the cross-attention if the model is configured as a decoder. Mask values selected in ``[0, 1]``: - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - past_key_values (:obj:`tuple(tuple(torch.FloatTensor))` of length :obj:`config.n_layers` with each tuple having 4 tensors of shape :obj:`(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): - Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. - If :obj:`past_key_values` are used, the user can optionally input only the last :obj:`decoder_input_ids` - (those that don't have their past key value states given to this model) of shape :obj:`(batch_size, 1)` - instead of all :obj:`decoder_input_ids` of shape :obj:`(batch_size, sequence_length)`. - use_cache (:obj:`bool`, `optional`): - If set to :obj:`True`, :obj:`past_key_values` key value states are returned and can be used to speed up - decoding (see :obj:`past_key_values`). - """ - output_attentions = ( - output_attentions - if output_attentions is not None - else self.config.output_attentions - ) - output_hidden_states = ( - output_hidden_states - if output_hidden_states is not None - else self.config.output_hidden_states - ) - return_dict = ( - return_dict if return_dict is not None else self.config.use_return_dict - ) - - # use_cache = use_cache if use_cache is not None else self.config.use_cache - - if input_ids is None: - assert ( - query_embeds is not None - ), "You have to specify query_embeds when input_ids is None" - - # past_key_values_length - past_key_values_length = ( - past_key_values[0][0].shape[2] - self.config.query_length - if past_key_values is not None - else 0 - ) - - query_length = query_embeds.shape[1] if query_embeds is not None else 0 - - embedding_output = self.embeddings( - input_ids=input_ids, - position_ids=position_ids, - query_embeds=query_embeds, - past_key_values_length=past_key_values_length, - ) - - input_shape = embedding_output.size()[:-1] - batch_size, seq_length = input_shape - device = embedding_output.device - - if attention_mask is None: - attention_mask = torch.ones( - ((batch_size, seq_length + past_key_values_length)), device=device - ) - - # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] - # ourselves in which case we just need to make it broadcastable to all heads. 
- if is_decoder: - extended_attention_mask = self.get_extended_attention_mask( - attention_mask, - input_ids.shape, - device, - is_decoder, - has_query=(query_embeds is not None), - ) - else: - extended_attention_mask = self.get_extended_attention_mask( - attention_mask, input_shape, device, is_decoder - ) - - # If a 2D or 3D attention mask is provided for the cross-attention - # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length] - if encoder_hidden_states is not None: - if type(encoder_hidden_states) == list: - encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states[ - 0 - ].size() - else: - ( - encoder_batch_size, - encoder_sequence_length, - _, - ) = encoder_hidden_states.size() - encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length) - - if type(encoder_attention_mask) == list: - encoder_extended_attention_mask = [ - self.invert_attention_mask(mask) for mask in encoder_attention_mask - ] - elif encoder_attention_mask is None: - encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device) - encoder_extended_attention_mask = self.invert_attention_mask( - encoder_attention_mask - ) - else: - encoder_extended_attention_mask = self.invert_attention_mask( - encoder_attention_mask - ) - else: - encoder_extended_attention_mask = None - - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - - encoder_outputs = self.encoder( - embedding_output, - attention_mask=extended_attention_mask, - head_mask=head_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_extended_attention_mask, - past_key_values=past_key_values, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - query_length=query_length, - ) - sequence_output = encoder_outputs[0] - pooled_output = ( - self.pooler(sequence_output) if self.pooler is not None else None - ) - - if not return_dict: - return (sequence_output, pooled_output) + encoder_outputs[1:] - - return BaseModelOutputWithPoolingAndCrossAttentions( - last_hidden_state=sequence_output, - pooler_output=pooled_output, - past_key_values=encoder_outputs.past_key_values, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - cross_attentions=encoder_outputs.cross_attentions, - ) - - -class BertLMHeadModel(BertPreTrainedModel): - - _keys_to_ignore_on_load_unexpected = [r"pooler"] - _keys_to_ignore_on_load_missing = [r"position_ids", r"predictions.decoder.bias"] - - def __init__(self, config): - super().__init__(config) - - self.bert = BertModel(config, add_pooling_layer=False) - self.cls = BertOnlyMLMHead(config) - - self.init_weights() - - def get_output_embeddings(self): - return self.cls.predictions.decoder - - def set_output_embeddings(self, new_embeddings): - self.cls.predictions.decoder = new_embeddings - - def forward( - self, - input_ids=None, - attention_mask=None, - position_ids=None, - head_mask=None, - query_embeds=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - labels=None, - past_key_values=None, - use_cache=True, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - 
return_logits=False, - is_decoder=True, - reduction="mean", - ): - r""" - encoder_hidden_states (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`): - Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if - the model is configured as a decoder. - encoder_attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): - Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in - the cross-attention if the model is configured as a decoder. Mask values selected in ``[0, 1]``: - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): - Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in - ``[-100, 0, ..., config.vocab_size]`` (see ``input_ids`` docstring) Tokens with indices set to ``-100`` are - ignored (masked), the loss is only computed for the tokens with labels n ``[0, ..., config.vocab_size]`` - past_key_values (:obj:`tuple(tuple(torch.FloatTensor))` of length :obj:`config.n_layers` with each tuple having 4 tensors of shape :obj:`(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): - Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. - If :obj:`past_key_values` are used, the user can optionally input only the last :obj:`decoder_input_ids` - (those that don't have their past key value states given to this model) of shape :obj:`(batch_size, 1)` - instead of all :obj:`decoder_input_ids` of shape :obj:`(batch_size, sequence_length)`. - use_cache (:obj:`bool`, `optional`): - If set to :obj:`True`, :obj:`past_key_values` key value states are returned and can be used to speed up - decoding (see :obj:`past_key_values`). 
- Returns: - Example:: - >>> from transformers import BertTokenizer, BertLMHeadModel, BertConfig - >>> import torch - >>> tokenizer = BertTokenizer.from_pretrained('bert-base-cased') - >>> config = BertConfig.from_pretrained("bert-base-cased") - >>> model = BertLMHeadModel.from_pretrained('bert-base-cased', config=config) - >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") - >>> outputs = model(**inputs) - >>> prediction_logits = outputs.logits - """ - return_dict = ( - return_dict if return_dict is not None else self.config.use_return_dict - ) - if labels is not None: - use_cache = False - if past_key_values is not None: - query_embeds = None - - outputs = self.bert( - input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - head_mask=head_mask, - query_embeds=query_embeds, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - past_key_values=past_key_values, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - is_decoder=is_decoder, - ) - - sequence_output = outputs[0] - if query_embeds is not None: - sequence_output = outputs[0][:, query_embeds.shape[1] :, :] - - prediction_scores = self.cls(sequence_output) - - if return_logits: - return prediction_scores[:, :-1, :].contiguous() - - lm_loss = None - if labels is not None: - # we are doing next-token prediction; shift prediction scores and input ids by one - shifted_prediction_scores = prediction_scores[:, :-1, :].contiguous() - labels = labels[:, 1:].contiguous() - loss_fct = CrossEntropyLoss(reduction=reduction, label_smoothing=0.1) - lm_loss = loss_fct( - shifted_prediction_scores.view(-1, self.config.vocab_size), - labels.view(-1), - ) - if reduction == "none": - lm_loss = lm_loss.view(prediction_scores.size(0), -1).sum(1) - - if not return_dict: - output = (prediction_scores,) + outputs[2:] - return ((lm_loss,) + output) if lm_loss is not None else output - - return CausalLMOutputWithCrossAttentions( - loss=lm_loss, - logits=prediction_scores, - past_key_values=outputs.past_key_values, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - cross_attentions=outputs.cross_attentions, - ) - - def prepare_inputs_for_generation( - self, input_ids, query_embeds, past=None, attention_mask=None, **model_kwargs - ): - # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly - if attention_mask is None: - attention_mask = input_ids.new_ones(input_ids.shape) - query_mask = input_ids.new_ones(query_embeds.shape[:-1]) - attention_mask = torch.cat([query_mask, attention_mask], dim=-1) - - # cut decoder_input_ids if past is used - if past is not None: - input_ids = input_ids[:, -1:] - - return { - "input_ids": input_ids, - "query_embeds": query_embeds, - "attention_mask": attention_mask, - "past_key_values": past, - "encoder_hidden_states": model_kwargs.get("encoder_hidden_states", None), - "encoder_attention_mask": model_kwargs.get("encoder_attention_mask", None), - "is_decoder": True, - } - - def _reorder_cache(self, past, beam_idx): - reordered_past = () - for layer_past in past: - reordered_past += ( - tuple( - past_state.index_select(0, beam_idx) for past_state in layer_past - ), - ) - return reordered_past - - -class BertForMaskedLM(BertPreTrainedModel): - - _keys_to_ignore_on_load_unexpected = [r"pooler"] - _keys_to_ignore_on_load_missing = [r"position_ids", r"predictions.decoder.bias"] - - def __init__(self, 
config): - super().__init__(config) - - self.bert = BertModel(config, add_pooling_layer=False) - self.cls = BertOnlyMLMHead(config) - - self.init_weights() - - def get_output_embeddings(self): - return self.cls.predictions.decoder - - def set_output_embeddings(self, new_embeddings): - self.cls.predictions.decoder = new_embeddings - - def forward( - self, - input_ids=None, - attention_mask=None, - position_ids=None, - head_mask=None, - query_embeds=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - labels=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - return_logits=False, - is_decoder=False, - ): - r""" - labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): - Labels for computing the masked language modeling loss. Indices should be in ``[-100, 0, ..., - config.vocab_size]`` (see ``input_ids`` docstring) Tokens with indices set to ``-100`` are ignored - (masked), the loss is only computed for the tokens with labels in ``[0, ..., config.vocab_size]`` - """ - - return_dict = ( - return_dict if return_dict is not None else self.config.use_return_dict - ) - - outputs = self.bert( - input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - head_mask=head_mask, - query_embeds=query_embeds, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - is_decoder=is_decoder, - ) - - if query_embeds is not None: - sequence_output = outputs[0][:, query_embeds.shape[1] :, :] - prediction_scores = self.cls(sequence_output) - - if return_logits: - return prediction_scores - - masked_lm_loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() # -100 index = padding token - masked_lm_loss = loss_fct( - prediction_scores.view(-1, self.config.vocab_size), labels.view(-1) - ) - - if not return_dict: - output = (prediction_scores,) + outputs[2:] - return ( - ((masked_lm_loss,) + output) if masked_lm_loss is not None else output - ) - - return MaskedLMOutput( - loss=masked_lm_loss, - logits=prediction_scores, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) diff --git a/spaces/Owechada/roopfaceswapr/README.md b/spaces/Owechada/roopfaceswapr/README.md deleted file mode 100644 index 8765ab0b78d11834fa64339bc2aacf743657ea64..0000000000000000000000000000000000000000 --- a/spaces/Owechada/roopfaceswapr/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Roop -emoji: 📈 -colorFrom: gray -colorTo: pink -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: agpl-3.0 -duplicated_from: ezioruan/roop ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/PaulEdwards/StarWords/README.md b/spaces/PaulEdwards/StarWords/README.md deleted file mode 100644 index 46be08eecaa577856d6b5062464a7ebe90043313..0000000000000000000000000000000000000000 --- a/spaces/PaulEdwards/StarWords/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: StarWords -emoji: 🌖 -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.0.24 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - - -https://huggingface.co/spaces/PaulEdwards/StarWords \ No newline at end of file diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/__init__.py 
b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/distlib/database.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/distlib/database.py deleted file mode 100644 index 5db5d7f507c1d150e6b36f236df7ee61c0f65581..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/distlib/database.py +++ /dev/null @@ -1,1350 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright (C) 2012-2017 The Python Software Foundation. -# See LICENSE.txt and CONTRIBUTORS.txt. -# -"""PEP 376 implementation.""" - -from __future__ import unicode_literals - -import base64 -import codecs -import contextlib -import hashlib -import logging -import os -import posixpath -import sys -import zipimport - -from . import DistlibException, resources -from .compat import StringIO -from .version import get_scheme, UnsupportedVersionError -from .metadata import (Metadata, METADATA_FILENAME, WHEEL_METADATA_FILENAME, - LEGACY_METADATA_FILENAME) -from .util import (parse_requirement, cached_property, parse_name_and_version, - read_exports, write_exports, CSVReader, CSVWriter) - - -__all__ = ['Distribution', 'BaseInstalledDistribution', - 'InstalledDistribution', 'EggInfoDistribution', - 'DistributionPath'] - - -logger = logging.getLogger(__name__) - -EXPORTS_FILENAME = 'pydist-exports.json' -COMMANDS_FILENAME = 'pydist-commands.json' - -DIST_FILES = ('INSTALLER', METADATA_FILENAME, 'RECORD', 'REQUESTED', - 'RESOURCES', EXPORTS_FILENAME, 'SHARED') - -DISTINFO_EXT = '.dist-info' - - -class _Cache(object): - """ - A simple cache mapping names and .dist-info paths to distributions - """ - def __init__(self): - """ - Initialise an instance. There is normally one for each DistributionPath. - """ - self.name = {} - self.path = {} - self.generated = False - - def clear(self): - """ - Clear the cache, setting it to its initial state. - """ - self.name.clear() - self.path.clear() - self.generated = False - - def add(self, dist): - """ - Add a distribution to the cache. - :param dist: The distribution to add. - """ - if dist.path not in self.path: - self.path[dist.path] = dist - self.name.setdefault(dist.key, []).append(dist) - - -class DistributionPath(object): - """ - Represents a set of distributions installed on a path (typically sys.path). - """ - def __init__(self, path=None, include_egg=False): - """ - Create an instance from a path, optionally including legacy (distutils/ - setuptools/distribute) distributions. - :param path: The path to use, as a list of directories. If not specified, - sys.path is used. - :param include_egg: If True, this instance will look for and return legacy - distributions as well as those based on PEP 376. - """ - if path is None: - path = sys.path - self.path = path - self._include_dist = True - self._include_egg = include_egg - - self._cache = _Cache() - self._cache_egg = _Cache() - self._cache_enabled = True - self._scheme = get_scheme('default') - - def _get_cache_enabled(self): - return self._cache_enabled - - def _set_cache_enabled(self, value): - self._cache_enabled = value - - cache_enabled = property(_get_cache_enabled, _set_cache_enabled) - - def clear_cache(self): - """ - Clears the internal cache. 
- """ - self._cache.clear() - self._cache_egg.clear() - - - def _yield_distributions(self): - """ - Yield .dist-info and/or .egg(-info) distributions. - """ - # We need to check if we've seen some resources already, because on - # some Linux systems (e.g. some Debian/Ubuntu variants) there are - # symlinks which alias other files in the environment. - seen = set() - for path in self.path: - finder = resources.finder_for_path(path) - if finder is None: - continue - r = finder.find('') - if not r or not r.is_container: - continue - rset = sorted(r.resources) - for entry in rset: - r = finder.find(entry) - if not r or r.path in seen: - continue - try: - if self._include_dist and entry.endswith(DISTINFO_EXT): - possible_filenames = [METADATA_FILENAME, - WHEEL_METADATA_FILENAME, - LEGACY_METADATA_FILENAME] - for metadata_filename in possible_filenames: - metadata_path = posixpath.join(entry, metadata_filename) - pydist = finder.find(metadata_path) - if pydist: - break - else: - continue - - with contextlib.closing(pydist.as_stream()) as stream: - metadata = Metadata(fileobj=stream, scheme='legacy') - logger.debug('Found %s', r.path) - seen.add(r.path) - yield new_dist_class(r.path, metadata=metadata, - env=self) - elif self._include_egg and entry.endswith(('.egg-info', - '.egg')): - logger.debug('Found %s', r.path) - seen.add(r.path) - yield old_dist_class(r.path, self) - except Exception as e: - msg = 'Unable to read distribution at %s, perhaps due to bad metadata: %s' - logger.warning(msg, r.path, e) - import warnings - warnings.warn(msg % (r.path, e), stacklevel=2) - - def _generate_cache(self): - """ - Scan the path for distributions and populate the cache with - those that are found. - """ - gen_dist = not self._cache.generated - gen_egg = self._include_egg and not self._cache_egg.generated - if gen_dist or gen_egg: - for dist in self._yield_distributions(): - if isinstance(dist, InstalledDistribution): - self._cache.add(dist) - else: - self._cache_egg.add(dist) - - if gen_dist: - self._cache.generated = True - if gen_egg: - self._cache_egg.generated = True - - @classmethod - def distinfo_dirname(cls, name, version): - """ - The *name* and *version* parameters are converted into their - filename-escaped form, i.e. any ``'-'`` characters are replaced - with ``'_'`` other than the one in ``'dist-info'`` and the one - separating the name from the version number. - - :parameter name: is converted to a standard distribution name by replacing - any runs of non- alphanumeric characters with a single - ``'-'``. - :type name: string - :parameter version: is converted to a standard version string. Spaces - become dots, and all other non-alphanumeric characters - (except dots) become dashes, with runs of multiple - dashes condensed to a single dash. - :type version: string - :returns: directory name - :rtype: string""" - name = name.replace('-', '_') - return '-'.join([name, version]) + DISTINFO_EXT - - def get_distributions(self): - """ - Provides an iterator that looks for distributions and returns - :class:`InstalledDistribution` or - :class:`EggInfoDistribution` instances for each one of them. 
- - :rtype: iterator of :class:`InstalledDistribution` and - :class:`EggInfoDistribution` instances - """ - if not self._cache_enabled: - for dist in self._yield_distributions(): - yield dist - else: - self._generate_cache() - - for dist in self._cache.path.values(): - yield dist - - if self._include_egg: - for dist in self._cache_egg.path.values(): - yield dist - - def get_distribution(self, name): - """ - Looks for a named distribution on the path. - - This function only returns the first result found, as no more than one - value is expected. If nothing is found, ``None`` is returned. - - :rtype: :class:`InstalledDistribution`, :class:`EggInfoDistribution` - or ``None`` - """ - result = None - name = name.lower() - if not self._cache_enabled: - for dist in self._yield_distributions(): - if dist.key == name: - result = dist - break - else: - self._generate_cache() - - if name in self._cache.name: - result = self._cache.name[name][0] - elif self._include_egg and name in self._cache_egg.name: - result = self._cache_egg.name[name][0] - return result - - def provides_distribution(self, name, version=None): - """ - Iterates over all distributions to find which distributions provide *name*. - If a *version* is provided, it will be used to filter the results. - - This function only returns the first result found, since no more than - one values are expected. If the directory is not found, returns ``None``. - - :parameter version: a version specifier that indicates the version - required, conforming to the format in ``PEP-345`` - - :type name: string - :type version: string - """ - matcher = None - if version is not None: - try: - matcher = self._scheme.matcher('%s (%s)' % (name, version)) - except ValueError: - raise DistlibException('invalid name or version: %r, %r' % - (name, version)) - - for dist in self.get_distributions(): - # We hit a problem on Travis where enum34 was installed and doesn't - # have a provides attribute ... - if not hasattr(dist, 'provides'): - logger.debug('No "provides": %s', dist) - else: - provided = dist.provides - - for p in provided: - p_name, p_ver = parse_name_and_version(p) - if matcher is None: - if p_name == name: - yield dist - break - else: - if p_name == name and matcher.match(p_ver): - yield dist - break - - def get_file_path(self, name, relative_path): - """ - Return the path to a resource file. - """ - dist = self.get_distribution(name) - if dist is None: - raise LookupError('no distribution named %r found' % name) - return dist.get_resource_path(relative_path) - - def get_exported_entries(self, category, name=None): - """ - Return all of the exported entries in a particular category. - - :param category: The category to search for entries. - :param name: If specified, only entries with that name are returned. - """ - for dist in self.get_distributions(): - r = dist.exports - if category in r: - d = r[category] - if name is not None: - if name in d: - yield d[name] - else: - for v in d.values(): - yield v - - -class Distribution(object): - """ - A base class for distributions, whether installed or from indexes. - Either way, it must have some metadata, so that's all that's needed - for construction. - """ - - build_time_dependency = False - """ - Set to True if it's known to be only a build-time dependency (i.e. - not needed after installation). 
- """ - - requested = False - """A boolean that indicates whether the ``REQUESTED`` metadata file is - present (in other words, whether the package was installed by user - request or it was installed as a dependency).""" - - def __init__(self, metadata): - """ - Initialise an instance. - :param metadata: The instance of :class:`Metadata` describing this - distribution. - """ - self.metadata = metadata - self.name = metadata.name - self.key = self.name.lower() # for case-insensitive comparisons - self.version = metadata.version - self.locator = None - self.digest = None - self.extras = None # additional features requested - self.context = None # environment marker overrides - self.download_urls = set() - self.digests = {} - - @property - def source_url(self): - """ - The source archive download URL for this distribution. - """ - return self.metadata.source_url - - download_url = source_url # Backward compatibility - - @property - def name_and_version(self): - """ - A utility property which displays the name and version in parentheses. - """ - return '%s (%s)' % (self.name, self.version) - - @property - def provides(self): - """ - A set of distribution names and versions provided by this distribution. - :return: A set of "name (version)" strings. - """ - plist = self.metadata.provides - s = '%s (%s)' % (self.name, self.version) - if s not in plist: - plist.append(s) - return plist - - def _get_requirements(self, req_attr): - md = self.metadata - reqts = getattr(md, req_attr) - logger.debug('%s: got requirements %r from metadata: %r', self.name, req_attr, - reqts) - return set(md.get_requirements(reqts, extras=self.extras, - env=self.context)) - - @property - def run_requires(self): - return self._get_requirements('run_requires') - - @property - def meta_requires(self): - return self._get_requirements('meta_requires') - - @property - def build_requires(self): - return self._get_requirements('build_requires') - - @property - def test_requires(self): - return self._get_requirements('test_requires') - - @property - def dev_requires(self): - return self._get_requirements('dev_requires') - - def matches_requirement(self, req): - """ - Say if this instance matches (fulfills) a requirement. - :param req: The requirement to match. - :rtype req: str - :return: True if it matches, else False. - """ - # Requirement may contain extras - parse to lose those - # from what's passed to the matcher - r = parse_requirement(req) - scheme = get_scheme(self.metadata.scheme) - try: - matcher = scheme.matcher(r.requirement) - except UnsupportedVersionError: - # XXX compat-mode if cannot read the version - logger.warning('could not read version %r - using name only', - req) - name = req.split()[0] - matcher = scheme.matcher(name) - - name = matcher.key # case-insensitive - - result = False - for p in self.provides: - p_name, p_ver = parse_name_and_version(p) - if p_name != name: - continue - try: - result = matcher.match(p_ver) - break - except UnsupportedVersionError: - pass - return result - - def __repr__(self): - """ - Return a textual representation of this instance, - """ - if self.source_url: - suffix = ' [%s]' % self.source_url - else: - suffix = '' - return '' % (self.name, self.version, suffix) - - def __eq__(self, other): - """ - See if this distribution is the same as another. - :param other: The distribution to compare with. To be equal to one - another. distributions must have the same type, name, - version and source_url. - :return: True if it is the same, else False. 
- """ - if type(other) is not type(self): - result = False - else: - result = (self.name == other.name and - self.version == other.version and - self.source_url == other.source_url) - return result - - def __hash__(self): - """ - Compute hash in a way which matches the equality test. - """ - return hash(self.name) + hash(self.version) + hash(self.source_url) - - -class BaseInstalledDistribution(Distribution): - """ - This is the base class for installed distributions (whether PEP 376 or - legacy). - """ - - hasher = None - - def __init__(self, metadata, path, env=None): - """ - Initialise an instance. - :param metadata: An instance of :class:`Metadata` which describes the - distribution. This will normally have been initialised - from a metadata file in the ``path``. - :param path: The path of the ``.dist-info`` or ``.egg-info`` - directory for the distribution. - :param env: This is normally the :class:`DistributionPath` - instance where this distribution was found. - """ - super(BaseInstalledDistribution, self).__init__(metadata) - self.path = path - self.dist_path = env - - def get_hash(self, data, hasher=None): - """ - Get the hash of some data, using a particular hash algorithm, if - specified. - - :param data: The data to be hashed. - :type data: bytes - :param hasher: The name of a hash implementation, supported by hashlib, - or ``None``. Examples of valid values are ``'sha1'``, - ``'sha224'``, ``'sha384'``, '``sha256'``, ``'md5'`` and - ``'sha512'``. If no hasher is specified, the ``hasher`` - attribute of the :class:`InstalledDistribution` instance - is used. If the hasher is determined to be ``None``, MD5 - is used as the hashing algorithm. - :returns: The hash of the data. If a hasher was explicitly specified, - the returned hash will be prefixed with the specified hasher - followed by '='. - :rtype: str - """ - if hasher is None: - hasher = self.hasher - if hasher is None: - hasher = hashlib.md5 - prefix = '' - else: - hasher = getattr(hashlib, hasher) - prefix = '%s=' % self.hasher - digest = hasher(data).digest() - digest = base64.urlsafe_b64encode(digest).rstrip(b'=').decode('ascii') - return '%s%s' % (prefix, digest) - - -class InstalledDistribution(BaseInstalledDistribution): - """ - Created with the *path* of the ``.dist-info`` directory provided to the - constructor. It reads the metadata contained in ``pydist.json`` when it is - instantiated., or uses a passed in Metadata instance (useful for when - dry-run mode is being used). 
- """ - - hasher = 'sha256' - - def __init__(self, path, metadata=None, env=None): - self.modules = [] - self.finder = finder = resources.finder_for_path(path) - if finder is None: - raise ValueError('finder unavailable for %s' % path) - if env and env._cache_enabled and path in env._cache.path: - metadata = env._cache.path[path].metadata - elif metadata is None: - r = finder.find(METADATA_FILENAME) - # Temporary - for Wheel 0.23 support - if r is None: - r = finder.find(WHEEL_METADATA_FILENAME) - # Temporary - for legacy support - if r is None: - r = finder.find(LEGACY_METADATA_FILENAME) - if r is None: - raise ValueError('no %s found in %s' % (METADATA_FILENAME, - path)) - with contextlib.closing(r.as_stream()) as stream: - metadata = Metadata(fileobj=stream, scheme='legacy') - - super(InstalledDistribution, self).__init__(metadata, path, env) - - if env and env._cache_enabled: - env._cache.add(self) - - r = finder.find('REQUESTED') - self.requested = r is not None - p = os.path.join(path, 'top_level.txt') - if os.path.exists(p): - with open(p, 'rb') as f: - data = f.read().decode('utf-8') - self.modules = data.splitlines() - - def __repr__(self): - return '' % ( - self.name, self.version, self.path) - - def __str__(self): - return "%s %s" % (self.name, self.version) - - def _get_records(self): - """ - Get the list of installed files for the distribution - :return: A list of tuples of path, hash and size. Note that hash and - size might be ``None`` for some entries. The path is exactly - as stored in the file (which is as in PEP 376). - """ - results = [] - r = self.get_distinfo_resource('RECORD') - with contextlib.closing(r.as_stream()) as stream: - with CSVReader(stream=stream) as record_reader: - # Base location is parent dir of .dist-info dir - #base_location = os.path.dirname(self.path) - #base_location = os.path.abspath(base_location) - for row in record_reader: - missing = [None for i in range(len(row), 3)] - path, checksum, size = row + missing - #if not os.path.isabs(path): - # path = path.replace('/', os.sep) - # path = os.path.join(base_location, path) - results.append((path, checksum, size)) - return results - - @cached_property - def exports(self): - """ - Return the information exported by this distribution. - :return: A dictionary of exports, mapping an export category to a dict - of :class:`ExportEntry` instances describing the individual - export entries, and keyed by name. - """ - result = {} - r = self.get_distinfo_resource(EXPORTS_FILENAME) - if r: - result = self.read_exports() - return result - - def read_exports(self): - """ - Read exports data from a file in .ini format. - - :return: A dictionary of exports, mapping an export category to a list - of :class:`ExportEntry` instances describing the individual - export entries. - """ - result = {} - r = self.get_distinfo_resource(EXPORTS_FILENAME) - if r: - with contextlib.closing(r.as_stream()) as stream: - result = read_exports(stream) - return result - - def write_exports(self, exports): - """ - Write a dictionary of exports to a file in .ini format. - :param exports: A dictionary of exports, mapping an export category to - a list of :class:`ExportEntry` instances describing the - individual export entries. - """ - rf = self.get_distinfo_file(EXPORTS_FILENAME) - with open(rf, 'w') as f: - write_exports(exports, f) - - def get_resource_path(self, relative_path): - """ - NOTE: This API may change in the future. - - Return the absolute path to a resource file with the given relative - path. 
- - :param relative_path: The path, relative to .dist-info, of the resource - of interest. - :return: The absolute path where the resource is to be found. - """ - r = self.get_distinfo_resource('RESOURCES') - with contextlib.closing(r.as_stream()) as stream: - with CSVReader(stream=stream) as resources_reader: - for relative, destination in resources_reader: - if relative == relative_path: - return destination - raise KeyError('no resource file with relative path %r ' - 'is installed' % relative_path) - - def list_installed_files(self): - """ - Iterates over the ``RECORD`` entries and returns a tuple - ``(path, hash, size)`` for each line. - - :returns: iterator of (path, hash, size) - """ - for result in self._get_records(): - yield result - - def write_installed_files(self, paths, prefix, dry_run=False): - """ - Writes the ``RECORD`` file, using the ``paths`` iterable passed in. Any - existing ``RECORD`` file is silently overwritten. - - prefix is used to determine when to write absolute paths. - """ - prefix = os.path.join(prefix, '') - base = os.path.dirname(self.path) - base_under_prefix = base.startswith(prefix) - base = os.path.join(base, '') - record_path = self.get_distinfo_file('RECORD') - logger.info('creating %s', record_path) - if dry_run: - return None - with CSVWriter(record_path) as writer: - for path in paths: - if os.path.isdir(path) or path.endswith(('.pyc', '.pyo')): - # do not put size and hash, as in PEP-376 - hash_value = size = '' - else: - size = '%d' % os.path.getsize(path) - with open(path, 'rb') as fp: - hash_value = self.get_hash(fp.read()) - if path.startswith(base) or (base_under_prefix and - path.startswith(prefix)): - path = os.path.relpath(path, base) - writer.writerow((path, hash_value, size)) - - # add the RECORD file itself - if record_path.startswith(base): - record_path = os.path.relpath(record_path, base) - writer.writerow((record_path, '', '')) - return record_path - - def check_installed_files(self): - """ - Checks that the hashes and sizes of the files in ``RECORD`` are - matched by the files themselves. Returns a (possibly empty) list of - mismatches. Each entry in the mismatch list will be a tuple consisting - of the path, 'exists', 'size' or 'hash' according to what didn't match - (existence is checked first, then size, then hash), the expected - value and the actual value. - """ - mismatches = [] - base = os.path.dirname(self.path) - record_path = self.get_distinfo_file('RECORD') - for path, hash_value, size in self.list_installed_files(): - if not os.path.isabs(path): - path = os.path.join(base, path) - if path == record_path: - continue - if not os.path.exists(path): - mismatches.append((path, 'exists', True, False)) - elif os.path.isfile(path): - actual_size = str(os.path.getsize(path)) - if size and actual_size != size: - mismatches.append((path, 'size', size, actual_size)) - elif hash_value: - if '=' in hash_value: - hasher = hash_value.split('=', 1)[0] - else: - hasher = None - - with open(path, 'rb') as f: - actual_hash = self.get_hash(f.read(), hasher) - if actual_hash != hash_value: - mismatches.append((path, 'hash', hash_value, actual_hash)) - return mismatches - - @cached_property - def shared_locations(self): - """ - A dictionary of shared locations whose keys are in the set 'prefix', - 'purelib', 'platlib', 'scripts', 'headers', 'data' and 'namespace'. - The corresponding value is the absolute path of that category for - this distribution, and takes into account any paths selected by the - user at installation time (e.g. 
via command-line arguments). In the - case of the 'namespace' key, this would be a list of absolute paths - for the roots of namespace packages in this distribution. - - The first time this property is accessed, the relevant information is - read from the SHARED file in the .dist-info directory. - """ - result = {} - shared_path = os.path.join(self.path, 'SHARED') - if os.path.isfile(shared_path): - with codecs.open(shared_path, 'r', encoding='utf-8') as f: - lines = f.read().splitlines() - for line in lines: - key, value = line.split('=', 1) - if key == 'namespace': - result.setdefault(key, []).append(value) - else: - result[key] = value - return result - - def write_shared_locations(self, paths, dry_run=False): - """ - Write shared location information to the SHARED file in .dist-info. - :param paths: A dictionary as described in the documentation for - :meth:`shared_locations`. - :param dry_run: If True, the action is logged but no file is actually - written. - :return: The path of the file written to. - """ - shared_path = os.path.join(self.path, 'SHARED') - logger.info('creating %s', shared_path) - if dry_run: - return None - lines = [] - for key in ('prefix', 'lib', 'headers', 'scripts', 'data'): - path = paths[key] - if os.path.isdir(paths[key]): - lines.append('%s=%s' % (key, path)) - for ns in paths.get('namespace', ()): - lines.append('namespace=%s' % ns) - - with codecs.open(shared_path, 'w', encoding='utf-8') as f: - f.write('\n'.join(lines)) - return shared_path - - def get_distinfo_resource(self, path): - if path not in DIST_FILES: - raise DistlibException('invalid path for a dist-info file: ' - '%r at %r' % (path, self.path)) - finder = resources.finder_for_path(self.path) - if finder is None: - raise DistlibException('Unable to get a finder for %s' % self.path) - return finder.find(path) - - def get_distinfo_file(self, path): - """ - Returns a path located under the ``.dist-info`` directory. Returns a - string representing the path. - - :parameter path: a ``'/'``-separated path relative to the - ``.dist-info`` directory or an absolute path; - If *path* is an absolute path and doesn't start - with the ``.dist-info`` directory path, - a :class:`DistlibException` is raised - :type path: str - :rtype: str - """ - # Check if it is an absolute path # XXX use relpath, add tests - if path.find(os.sep) >= 0: - # it's an absolute path? - distinfo_dirname, path = path.split(os.sep)[-2:] - if distinfo_dirname != self.path.split(os.sep)[-1]: - raise DistlibException( - 'dist-info file %r does not belong to the %r %s ' - 'distribution' % (path, self.name, self.version)) - - # The file must be relative - if path not in DIST_FILES: - raise DistlibException('invalid path for a dist-info file: ' - '%r at %r' % (path, self.path)) - - return os.path.join(self.path, path) - - def list_distinfo_files(self): - """ - Iterates over the ``RECORD`` entries and returns paths for each line if - the path is pointing to a file located in the ``.dist-info`` directory - or one of its subdirectories. 
- - :returns: iterator of paths - """ - base = os.path.dirname(self.path) - for path, checksum, size in self._get_records(): - # XXX add separator or use real relpath algo - if not os.path.isabs(path): - path = os.path.join(base, path) - if path.startswith(self.path): - yield path - - def __eq__(self, other): - return (isinstance(other, InstalledDistribution) and - self.path == other.path) - - # See http://docs.python.org/reference/datamodel#object.__hash__ - __hash__ = object.__hash__ - - -class EggInfoDistribution(BaseInstalledDistribution): - """Created with the *path* of the ``.egg-info`` directory or file provided - to the constructor. It reads the metadata contained in the file itself, or - if the given path happens to be a directory, the metadata is read from the - file ``PKG-INFO`` under that directory.""" - - requested = True # as we have no way of knowing, assume it was - shared_locations = {} - - def __init__(self, path, env=None): - def set_name_and_version(s, n, v): - s.name = n - s.key = n.lower() # for case-insensitive comparisons - s.version = v - - self.path = path - self.dist_path = env - if env and env._cache_enabled and path in env._cache_egg.path: - metadata = env._cache_egg.path[path].metadata - set_name_and_version(self, metadata.name, metadata.version) - else: - metadata = self._get_metadata(path) - - # Need to be set before caching - set_name_and_version(self, metadata.name, metadata.version) - - if env and env._cache_enabled: - env._cache_egg.add(self) - super(EggInfoDistribution, self).__init__(metadata, path, env) - - def _get_metadata(self, path): - requires = None - - def parse_requires_data(data): - """Create a list of dependencies from a requires.txt file. - - *data*: the contents of a setuptools-produced requires.txt file. - """ - reqs = [] - lines = data.splitlines() - for line in lines: - line = line.strip() - if line.startswith('['): - logger.warning('Unexpected line: quitting requirement scan: %r', - line) - break - r = parse_requirement(line) - if not r: - logger.warning('Not recognised as a requirement: %r', line) - continue - if r.extras: - logger.warning('extra requirements in requires.txt are ' - 'not supported') - if not r.constraints: - reqs.append(r.name) - else: - cons = ', '.join('%s%s' % c for c in r.constraints) - reqs.append('%s (%s)' % (r.name, cons)) - return reqs - - def parse_requires_path(req_path): - """Create a list of dependencies from a requires.txt file. - - *req_path*: the path to a setuptools-produced requires.txt file. 
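As a hypothetical illustration of the input handled by `parse_requires_data` above: a setuptools-produced requires.txt lists one requirement per line, optionally followed by `[extra]` sections that this scanner deliberately stops at, so a file containing `requests>=2.0`, `packaging` and then a `[dev]` section yields roughly `['requests (>=2.0)', 'packaging']`. The underlying parser is distlib's own; assuming the standalone distlib package rather than a vendored copy:

from distlib.util import parse_requirement

r = parse_requirement('requests >= 2.0')
print(r.name)         # -> requests
print(r.constraints)  # roughly [('>=', '2.0')]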
- """ - - reqs = [] - try: - with codecs.open(req_path, 'r', 'utf-8') as fp: - reqs = parse_requires_data(fp.read()) - except IOError: - pass - return reqs - - tl_path = tl_data = None - if path.endswith('.egg'): - if os.path.isdir(path): - p = os.path.join(path, 'EGG-INFO') - meta_path = os.path.join(p, 'PKG-INFO') - metadata = Metadata(path=meta_path, scheme='legacy') - req_path = os.path.join(p, 'requires.txt') - tl_path = os.path.join(p, 'top_level.txt') - requires = parse_requires_path(req_path) - else: - # FIXME handle the case where zipfile is not available - zipf = zipimport.zipimporter(path) - fileobj = StringIO( - zipf.get_data('EGG-INFO/PKG-INFO').decode('utf8')) - metadata = Metadata(fileobj=fileobj, scheme='legacy') - try: - data = zipf.get_data('EGG-INFO/requires.txt') - tl_data = zipf.get_data('EGG-INFO/top_level.txt').decode('utf-8') - requires = parse_requires_data(data.decode('utf-8')) - except IOError: - requires = None - elif path.endswith('.egg-info'): - if os.path.isdir(path): - req_path = os.path.join(path, 'requires.txt') - requires = parse_requires_path(req_path) - path = os.path.join(path, 'PKG-INFO') - tl_path = os.path.join(path, 'top_level.txt') - metadata = Metadata(path=path, scheme='legacy') - else: - raise DistlibException('path must end with .egg-info or .egg, ' - 'got %r' % path) - - if requires: - metadata.add_requirements(requires) - # look for top-level modules in top_level.txt, if present - if tl_data is None: - if tl_path is not None and os.path.exists(tl_path): - with open(tl_path, 'rb') as f: - tl_data = f.read().decode('utf-8') - if not tl_data: - tl_data = [] - else: - tl_data = tl_data.splitlines() - self.modules = tl_data - return metadata - - def __repr__(self): - return '' % ( - self.name, self.version, self.path) - - def __str__(self): - return "%s %s" % (self.name, self.version) - - def check_installed_files(self): - """ - Checks that the hashes and sizes of the files in ``RECORD`` are - matched by the files themselves. Returns a (possibly empty) list of - mismatches. Each entry in the mismatch list will be a tuple consisting - of the path, 'exists', 'size' or 'hash' according to what didn't match - (existence is checked first, then size, then hash), the expected - value and the actual value. - """ - mismatches = [] - record_path = os.path.join(self.path, 'installed-files.txt') - if os.path.exists(record_path): - for path, _, _ in self.list_installed_files(): - if path == record_path: - continue - if not os.path.exists(path): - mismatches.append((path, 'exists', True, False)) - return mismatches - - def list_installed_files(self): - """ - Iterates over the ``installed-files.txt`` entries and returns a tuple - ``(path, hash, size)`` for each line. 
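A usage sketch for the EggInfoDistribution class being defined here, assuming the standalone distlib package (a vendored copy behaves the same) and a purely hypothetical `.egg-info` path:

from distlib.database import EggInfoDistribution

# The path is an example; point it at any real .egg-info directory or .egg file.
dist = EggInfoDistribution('/usr/lib/python3/dist-packages/mypkg.egg-info')
print(dist.name, dist.version)  # parsed from PKG-INFO
print(dist.run_requires)        # requirements collected from requires.txt
print(dist.modules)             # top-level modules from top_level.txt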
- - :returns: a list of (path, hash, size) - """ - - def _md5(path): - f = open(path, 'rb') - try: - content = f.read() - finally: - f.close() - return hashlib.md5(content).hexdigest() - - def _size(path): - return os.stat(path).st_size - - record_path = os.path.join(self.path, 'installed-files.txt') - result = [] - if os.path.exists(record_path): - with codecs.open(record_path, 'r', encoding='utf-8') as f: - for line in f: - line = line.strip() - p = os.path.normpath(os.path.join(self.path, line)) - # "./" is present as a marker between installed files - # and installation metadata files - if not os.path.exists(p): - logger.warning('Non-existent file: %s', p) - if p.endswith(('.pyc', '.pyo')): - continue - #otherwise fall through and fail - if not os.path.isdir(p): - result.append((p, _md5(p), _size(p))) - result.append((record_path, None, None)) - return result - - def list_distinfo_files(self, absolute=False): - """ - Iterates over the ``installed-files.txt`` entries and returns paths for - each line if the path is pointing to a file located in the - ``.egg-info`` directory or one of its subdirectories. - - :parameter absolute: If *absolute* is ``True``, each returned path is - transformed into a local absolute path. Otherwise the - raw value from ``installed-files.txt`` is returned. - :type absolute: boolean - :returns: iterator of paths - """ - record_path = os.path.join(self.path, 'installed-files.txt') - if os.path.exists(record_path): - skip = True - with codecs.open(record_path, 'r', encoding='utf-8') as f: - for line in f: - line = line.strip() - if line == './': - skip = False - continue - if not skip: - p = os.path.normpath(os.path.join(self.path, line)) - if p.startswith(self.path): - if absolute: - yield p - else: - yield line - - def __eq__(self, other): - return (isinstance(other, EggInfoDistribution) and - self.path == other.path) - - # See http://docs.python.org/reference/datamodel#object.__hash__ - __hash__ = object.__hash__ - -new_dist_class = InstalledDistribution -old_dist_class = EggInfoDistribution - - -class DependencyGraph(object): - """ - Represents a dependency graph between distributions. - - The dependency relationships are stored in an ``adjacency_list`` that maps - distributions to a list of ``(other, label)`` tuples where ``other`` - is a distribution and the edge is labeled with ``label`` (i.e. the version - specifier, if such was provided). Also, for more efficient traversal, for - every distribution ``x``, a list of predecessors is kept in - ``reverse_list[x]``. An edge from distribution ``a`` to - distribution ``b`` means that ``a`` depends on ``b``. If any missing - dependencies are found, they are stored in ``missing``, which is a - dictionary that maps distributions to a list of requirements that were not - provided by any other distributions. - """ - - def __init__(self): - self.adjacency_list = {} - self.reverse_list = {} - self.missing = {} - - def add_distribution(self, distribution): - """Add the *distribution* to the graph. - - :type distribution: :class:`distutils2.database.InstalledDistribution` - or :class:`distutils2.database.EggInfoDistribution` - """ - self.adjacency_list[distribution] = [] - self.reverse_list[distribution] = [] - #self.missing[distribution] = [] - - def add_edge(self, x, y, label=None): - """Add an edge from distribution *x* to distribution *y* with the given - *label*. 
- - :type x: :class:`distutils2.database.InstalledDistribution` or - :class:`distutils2.database.EggInfoDistribution` - :type y: :class:`distutils2.database.InstalledDistribution` or - :class:`distutils2.database.EggInfoDistribution` - :type label: ``str`` or ``None`` - """ - self.adjacency_list[x].append((y, label)) - # multiple edges are allowed, so be careful - if x not in self.reverse_list[y]: - self.reverse_list[y].append(x) - - def add_missing(self, distribution, requirement): - """ - Add a missing *requirement* for the given *distribution*. - - :type distribution: :class:`distutils2.database.InstalledDistribution` - or :class:`distutils2.database.EggInfoDistribution` - :type requirement: ``str`` - """ - logger.debug('%s missing %r', distribution, requirement) - self.missing.setdefault(distribution, []).append(requirement) - - def _repr_dist(self, dist): - return '%s %s' % (dist.name, dist.version) - - def repr_node(self, dist, level=1): - """Prints only a subgraph""" - output = [self._repr_dist(dist)] - for other, label in self.adjacency_list[dist]: - dist = self._repr_dist(other) - if label is not None: - dist = '%s [%s]' % (dist, label) - output.append(' ' * level + str(dist)) - suboutput = self.repr_node(other, level + 1) - subs = suboutput.split('\n') - output.extend(subs[1:]) - return '\n'.join(output) - - def to_dot(self, f, skip_disconnected=True): - """Writes a DOT output for the graph to the provided file *f*. - - If *skip_disconnected* is set to ``True``, then all distributions - that are not dependent on any other distribution are skipped. - - :type f: has to support ``file``-like operations - :type skip_disconnected: ``bool`` - """ - disconnected = [] - - f.write("digraph dependencies {\n") - for dist, adjs in self.adjacency_list.items(): - if len(adjs) == 0 and not skip_disconnected: - disconnected.append(dist) - for other, label in adjs: - if not label is None: - f.write('"%s" -> "%s" [label="%s"]\n' % - (dist.name, other.name, label)) - else: - f.write('"%s" -> "%s"\n' % (dist.name, other.name)) - if not skip_disconnected and len(disconnected) > 0: - f.write('subgraph disconnected {\n') - f.write('label = "Disconnected"\n') - f.write('bgcolor = red\n') - - for dist in disconnected: - f.write('"%s"' % dist.name) - f.write('\n') - f.write('}\n') - f.write('}\n') - - def topological_sort(self): - """ - Perform a topological sort of the graph. - :return: A tuple, the first element of which is a topologically sorted - list of distributions, and the second element of which is a - list of distributions that cannot be sorted because they have - circular dependencies and so form a cycle. - """ - result = [] - # Make a shallow copy of the adjacency list - alist = {} - for k, v in self.adjacency_list.items(): - alist[k] = v[:] - while True: - # See what we can remove in this run - to_remove = [] - for k, v in list(alist.items())[:]: - if not v: - to_remove.append(k) - del alist[k] - if not to_remove: - # What's left in alist (if anything) is a cycle. 
- break - # Remove from the adjacency list of others - for k, v in alist.items(): - alist[k] = [(d, r) for d, r in v if d not in to_remove] - logger.debug('Moving to result: %s', - ['%s (%s)' % (d.name, d.version) for d in to_remove]) - result.extend(to_remove) - return result, list(alist.keys()) - - def __repr__(self): - """Representation of the graph""" - output = [] - for dist, adjs in self.adjacency_list.items(): - output.append(self.repr_node(dist)) - return '\n'.join(output) - - -def make_graph(dists, scheme='default'): - """Makes a dependency graph from the given distributions. - - :parameter dists: a list of distributions - :type dists: list of :class:`distutils2.database.InstalledDistribution` and - :class:`distutils2.database.EggInfoDistribution` instances - :rtype: a :class:`DependencyGraph` instance - """ - scheme = get_scheme(scheme) - graph = DependencyGraph() - provided = {} # maps names to lists of (version, dist) tuples - - # first, build the graph and find out what's provided - for dist in dists: - graph.add_distribution(dist) - - for p in dist.provides: - name, version = parse_name_and_version(p) - logger.debug('Add to provided: %s, %s, %s', name, version, dist) - provided.setdefault(name, []).append((version, dist)) - - # now make the edges - for dist in dists: - requires = (dist.run_requires | dist.meta_requires | - dist.build_requires | dist.dev_requires) - for req in requires: - try: - matcher = scheme.matcher(req) - except UnsupportedVersionError: - # XXX compat-mode if cannot read the version - logger.warning('could not read version %r - using name only', - req) - name = req.split()[0] - matcher = scheme.matcher(name) - - name = matcher.key # case-insensitive - - matched = False - if name in provided: - for version, provider in provided[name]: - try: - match = matcher.match(version) - except UnsupportedVersionError: - match = False - - if match: - graph.add_edge(dist, provider, req) - matched = True - break - if not matched: - graph.add_missing(dist, req) - return graph - - -def get_dependent_dists(dists, dist): - """Recursively generate a list of distributions from *dists* that are - dependent on *dist*. - - :param dists: a list of distributions - :param dist: a distribution, member of *dists* for which we are interested - """ - if dist not in dists: - raise DistlibException('given distribution %r is not a member ' - 'of the list' % dist.name) - graph = make_graph(dists) - - dep = [dist] # dependent distributions - todo = graph.reverse_list[dist] # list of nodes we should inspect - - while todo: - d = todo.pop() - dep.append(d) - for succ in graph.reverse_list[d]: - if succ not in dep: - todo.append(succ) - - dep.pop(0) # remove dist from dep, was there to prevent infinite loops - return dep - - -def get_required_dists(dists, dist): - """Recursively generate a list of distributions from *dists* that are - required by *dist*. - - :param dists: a list of distributions - :param dist: a distribution, member of *dists* for which we are interested - in finding the dependencies. 
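An illustrative way to exercise `make_graph` and `DependencyGraph` from above; `DistributionPath` is distlib's scanner for installed distributions and is assumed to be importable alongside this module:

from distlib.database import DistributionPath, make_graph

dists = list(DistributionPath(include_egg=True).get_distributions())
graph = make_graph(dists)

ordered, cyclic = graph.topological_sort()  # dependencies-first order + any cycles
for dist, reqs in graph.missing.items():    # requirements nothing in `dists` provides
    print('%s is missing: %s' % (dist.name, ', '.join(reqs)))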
- """ - if dist not in dists: - raise DistlibException('given distribution %r is not a member ' - 'of the list' % dist.name) - graph = make_graph(dists) - - req = set() # required distributions - todo = graph.adjacency_list[dist] # list of nodes we should inspect - seen = set(t[0] for t in todo) # already added to todo - - while todo: - d = todo.pop()[0] - req.add(d) - pred_list = graph.adjacency_list[d] - for pred in pred_list: - d = pred[0] - if d not in req and d not in seen: - seen.add(d) - todo.append(pred) - return req - - -def make_dist(name, version, **kwargs): - """ - A convenience method for making a dist given just a name and version. - """ - summary = kwargs.pop('summary', 'Placeholder for summary') - md = Metadata(**kwargs) - md.name = name - md.version = version - md.summary = summary or 'Placeholder for summary' - return Distribution(md) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/exceptions.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/exceptions.py deleted file mode 100644 index a38447bb05bd5d503a32651d6046ff8667785c0c..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/exceptions.py +++ /dev/null @@ -1,267 +0,0 @@ -# exceptions.py - -import re -import sys -import typing - -from .util import col, line, lineno, _collapse_string_to_ranges -from .unicode import pyparsing_unicode as ppu - - -class ExceptionWordUnicode(ppu.Latin1, ppu.LatinA, ppu.LatinB, ppu.Greek, ppu.Cyrillic): - pass - - -_extract_alphanums = _collapse_string_to_ranges(ExceptionWordUnicode.alphanums) -_exception_word_extractor = re.compile("([" + _extract_alphanums + "]{1,16})|.") - - -class ParseBaseException(Exception): - """base exception class for all parsing runtime exceptions""" - - # Performance tuning: we construct a *lot* of these, so keep this - # constructor as small and fast as possible - def __init__( - self, - pstr: str, - loc: int = 0, - msg: typing.Optional[str] = None, - elem=None, - ): - self.loc = loc - if msg is None: - self.msg = pstr - self.pstr = "" - else: - self.msg = msg - self.pstr = pstr - self.parser_element = self.parserElement = elem - self.args = (pstr, loc, msg) - - @staticmethod - def explain_exception(exc, depth=16): - """ - Method to take an exception and translate the Python internal traceback into a list - of the pyparsing expressions that caused the exception to be raised. - - Parameters: - - - exc - exception raised during parsing (need not be a ParseException, in support - of Python exceptions that might be raised in a parse action) - - depth (default=16) - number of levels back in the stack trace to list expression - and function names; if None, the full stack trace names will be listed; if 0, only - the failing input line, marker, and exception string will be shown - - Returns a multi-line string listing the ParserElements and/or function names in the - exception's stack trace. 
- """ - import inspect - from .core import ParserElement - - if depth is None: - depth = sys.getrecursionlimit() - ret = [] - if isinstance(exc, ParseBaseException): - ret.append(exc.line) - ret.append(" " * (exc.column - 1) + "^") - ret.append("{}: {}".format(type(exc).__name__, exc)) - - if depth > 0: - callers = inspect.getinnerframes(exc.__traceback__, context=depth) - seen = set() - for i, ff in enumerate(callers[-depth:]): - frm = ff[0] - - f_self = frm.f_locals.get("self", None) - if isinstance(f_self, ParserElement): - if frm.f_code.co_name not in ("parseImpl", "_parseNoCache"): - continue - if id(f_self) in seen: - continue - seen.add(id(f_self)) - - self_type = type(f_self) - ret.append( - "{}.{} - {}".format( - self_type.__module__, self_type.__name__, f_self - ) - ) - - elif f_self is not None: - self_type = type(f_self) - ret.append("{}.{}".format(self_type.__module__, self_type.__name__)) - - else: - code = frm.f_code - if code.co_name in ("wrapper", ""): - continue - - ret.append("{}".format(code.co_name)) - - depth -= 1 - if not depth: - break - - return "\n".join(ret) - - @classmethod - def _from_exception(cls, pe): - """ - internal factory method to simplify creating one type of ParseException - from another - avoids having __init__ signature conflicts among subclasses - """ - return cls(pe.pstr, pe.loc, pe.msg, pe.parserElement) - - @property - def line(self) -> str: - """ - Return the line of text where the exception occurred. - """ - return line(self.loc, self.pstr) - - @property - def lineno(self) -> int: - """ - Return the 1-based line number of text where the exception occurred. - """ - return lineno(self.loc, self.pstr) - - @property - def col(self) -> int: - """ - Return the 1-based column on the line of text where the exception occurred. - """ - return col(self.loc, self.pstr) - - @property - def column(self) -> int: - """ - Return the 1-based column on the line of text where the exception occurred. - """ - return col(self.loc, self.pstr) - - def __str__(self) -> str: - if self.pstr: - if self.loc >= len(self.pstr): - foundstr = ", found end of text" - else: - # pull out next word at error location - found_match = _exception_word_extractor.match(self.pstr, self.loc) - if found_match is not None: - found = found_match.group(0) - else: - found = self.pstr[self.loc : self.loc + 1] - foundstr = (", found %r" % found).replace(r"\\", "\\") - else: - foundstr = "" - return "{}{} (at char {}), (line:{}, col:{})".format( - self.msg, foundstr, self.loc, self.lineno, self.column - ) - - def __repr__(self): - return str(self) - - def mark_input_line(self, marker_string: str = None, *, markerString=">!<") -> str: - """ - Extracts the exception line from the input string, and marks - the location of the exception with a special symbol. - """ - markerString = marker_string if marker_string is not None else markerString - line_str = self.line - line_column = self.column - 1 - if markerString: - line_str = "".join( - (line_str[:line_column], markerString, line_str[line_column:]) - ) - return line_str.strip() - - def explain(self, depth=16) -> str: - """ - Method to translate the Python internal traceback into a list - of the pyparsing expressions that caused the exception to be raised. 
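A self-contained usage sketch of the location helpers defined above (`lineno`, `column`, `mark_input_line`) together with `explain`, assuming the standalone pyparsing distribution (the vendored copy behaves identically); the grammar and input are made up:

import pyparsing as pp

try:
    pp.Word(pp.nums).set_name("integer").parse_string("abc")
except pp.ParseException as pe:
    print(pe.lineno, pe.column)  # -> 1 1
    print(pe.mark_input_line())  # -> >!<abc
    print(pe.explain(depth=0))   # failing line, marker and message only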
- - Parameters: - - - depth (default=16) - number of levels back in the stack trace to list expression - and function names; if None, the full stack trace names will be listed; if 0, only - the failing input line, marker, and exception string will be shown - - Returns a multi-line string listing the ParserElements and/or function names in the - exception's stack trace. - - Example:: - - expr = pp.Word(pp.nums) * 3 - try: - expr.parse_string("123 456 A789") - except pp.ParseException as pe: - print(pe.explain(depth=0)) - - prints:: - - 123 456 A789 - ^ - ParseException: Expected W:(0-9), found 'A' (at char 8), (line:1, col:9) - - Note: the diagnostic output will include string representations of the expressions - that failed to parse. These representations will be more helpful if you use `set_name` to - give identifiable names to your expressions. Otherwise they will use the default string - forms, which may be cryptic to read. - - Note: pyparsing's default truncation of exception tracebacks may also truncate the - stack of expressions that are displayed in the ``explain`` output. To get the full listing - of parser expressions, you may have to set ``ParserElement.verbose_stacktrace = True`` - """ - return self.explain_exception(self, depth) - - markInputline = mark_input_line - - -class ParseException(ParseBaseException): - """ - Exception thrown when a parse expression doesn't match the input string - - Example:: - - try: - Word(nums).set_name("integer").parse_string("ABC") - except ParseException as pe: - print(pe) - print("column: {}".format(pe.column)) - - prints:: - - Expected integer (at char 0), (line:1, col:1) - column: 1 - - """ - - -class ParseFatalException(ParseBaseException): - """ - User-throwable exception thrown when inconsistent parse content - is found; stops all parsing immediately - """ - - -class ParseSyntaxException(ParseFatalException): - """ - Just like :class:`ParseFatalException`, but thrown internally - when an :class:`ErrorStop` ('-' operator) indicates - that parsing is to stop immediately because an unbacktrackable - syntax error has been found. - """ - - -class RecursiveGrammarException(Exception): - """ - Exception thrown by :class:`ParserElement.validate` if the - grammar could be left-recursive; parser may need to enable - left recursion using :class:`ParserElement.enable_left_recursion` - """ - - def __init__(self, parseElementList): - self.parseElementTrace = parseElementList - - def __str__(self) -> str: - return "RecursiveGrammarException: {}".format(self.parseElementTrace) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/command/egg_info.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/command/egg_info.py deleted file mode 100644 index 25888ed8642ffe2e078bed5440bcc720f076904f..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/command/egg_info.py +++ /dev/null @@ -1,763 +0,0 @@ -"""setuptools.command.egg_info - -Create a distribution's .egg-info directory and contents""" - -from distutils.filelist import FileList as _FileList -from distutils.errors import DistutilsInternalError -from distutils.util import convert_path -from distutils import log -import distutils.errors -import distutils.filelist -import functools -import os -import re -import sys -import io -import warnings -import time -import collections - -from .._importlib import metadata -from .. 
import _entry_points - -from setuptools import Command -from setuptools.command.sdist import sdist -from setuptools.command.sdist import walk_revctrl -from setuptools.command.setopt import edit_config -from setuptools.command import bdist_egg -from pkg_resources import ( - Requirement, safe_name, parse_version, - safe_version, to_filename) -import setuptools.unicode_utils as unicode_utils -from setuptools.glob import glob - -from setuptools.extern import packaging -from setuptools.extern.jaraco.text import yield_lines -from setuptools import SetuptoolsDeprecationWarning - - -def translate_pattern(glob): # noqa: C901 # is too complex (14) # FIXME - """ - Translate a file path glob like '*.txt' in to a regular expression. - This differs from fnmatch.translate which allows wildcards to match - directory separators. It also knows about '**/' which matches any number of - directories. - """ - pat = '' - - # This will split on '/' within [character classes]. This is deliberate. - chunks = glob.split(os.path.sep) - - sep = re.escape(os.sep) - valid_char = '[^%s]' % (sep,) - - for c, chunk in enumerate(chunks): - last_chunk = c == len(chunks) - 1 - - # Chunks that are a literal ** are globstars. They match anything. - if chunk == '**': - if last_chunk: - # Match anything if this is the last component - pat += '.*' - else: - # Match '(name/)*' - pat += '(?:%s+%s)*' % (valid_char, sep) - continue # Break here as the whole path component has been handled - - # Find any special characters in the remainder - i = 0 - chunk_len = len(chunk) - while i < chunk_len: - char = chunk[i] - if char == '*': - # Match any number of name characters - pat += valid_char + '*' - elif char == '?': - # Match a name character - pat += valid_char - elif char == '[': - # Character class - inner_i = i + 1 - # Skip initial !/] chars - if inner_i < chunk_len and chunk[inner_i] == '!': - inner_i = inner_i + 1 - if inner_i < chunk_len and chunk[inner_i] == ']': - inner_i = inner_i + 1 - - # Loop till the closing ] is found - while inner_i < chunk_len and chunk[inner_i] != ']': - inner_i = inner_i + 1 - - if inner_i >= chunk_len: - # Got to the end of the string without finding a closing ] - # Do not treat this as a matching group, but as a literal [ - pat += re.escape(char) - else: - # Grab the insides of the [brackets] - inner = chunk[i + 1:inner_i] - char_class = '' - - # Class negation - if inner[0] == '!': - char_class = '^' - inner = inner[1:] - - char_class += re.escape(inner) - pat += '[%s]' % (char_class,) - - # Skip to the end ] - i = inner_i - else: - pat += re.escape(char) - i += 1 - - # Join each chunk with the dir separator - if not last_chunk: - pat += sep - - pat += r'\Z' - return re.compile(pat, flags=re.MULTILINE | re.DOTALL) - - -class InfoCommon: - tag_build = None - tag_date = None - - @property - def name(self): - return safe_name(self.distribution.get_name()) - - def tagged_version(self): - return safe_version(self._maybe_tag(self.distribution.get_version())) - - def _maybe_tag(self, version): - """ - egg_info may be called more than once for a distribution, - in which case the version string already contains all tags. - """ - return ( - version if self.vtags and self._already_tagged(version) - else version + self.vtags - ) - - def _already_tagged(self, version: str) -> bool: - # Depending on their format, tags may change with version normalization. - # So in addition the regular tags, we have to search for the normalized ones. 
- return version.endswith(self.vtags) or version.endswith(self._safe_tags()) - - def _safe_tags(self) -> str: - # To implement this we can rely on `safe_version` pretending to be version 0 - # followed by tags. Then we simply discard the starting 0 (fake version number) - return safe_version(f"0{self.vtags}")[1:] - - def tags(self) -> str: - version = '' - if self.tag_build: - version += self.tag_build - if self.tag_date: - version += time.strftime("-%Y%m%d") - return version - vtags = property(tags) - - -class egg_info(InfoCommon, Command): - description = "create a distribution's .egg-info directory" - - user_options = [ - ('egg-base=', 'e', "directory containing .egg-info directories" - " (default: top of the source tree)"), - ('tag-date', 'd', "Add date stamp (e.g. 20050528) to version number"), - ('tag-build=', 'b', "Specify explicit tag to add to version number"), - ('no-date', 'D', "Don't include date stamp [default]"), - ] - - boolean_options = ['tag-date'] - negative_opt = { - 'no-date': 'tag-date', - } - - def initialize_options(self): - self.egg_base = None - self.egg_name = None - self.egg_info = None - self.egg_version = None - self.broken_egg_info = False - self.ignore_egg_info_in_manifest = False - - #################################### - # allow the 'tag_svn_revision' to be detected and - # set, supporting sdists built on older Setuptools. - @property - def tag_svn_revision(self): - pass - - @tag_svn_revision.setter - def tag_svn_revision(self, value): - pass - #################################### - - def save_version_info(self, filename): - """ - Materialize the value of date into the - build tag. Install build keys in a deterministic order - to avoid arbitrary reordering on subsequent builds. - """ - egg_info = collections.OrderedDict() - # follow the order these keys would have been added - # when PYTHONHASHSEED=0 - egg_info['tag_build'] = self.tags() - egg_info['tag_date'] = 0 - edit_config(filename, dict(egg_info=egg_info)) - - def finalize_options(self): - # Note: we need to capture the current value returned - # by `self.tagged_version()`, so we can later update - # `self.distribution.metadata.version` without - # repercussions. - self.egg_name = self.name - self.egg_version = self.tagged_version() - parsed_version = parse_version(self.egg_version) - - try: - is_version = isinstance(parsed_version, packaging.version.Version) - spec = "%s==%s" if is_version else "%s===%s" - Requirement(spec % (self.egg_name, self.egg_version)) - except ValueError as e: - raise distutils.errors.DistutilsOptionError( - "Invalid distribution name or version syntax: %s-%s" % - (self.egg_name, self.egg_version) - ) from e - - if self.egg_base is None: - dirs = self.distribution.package_dir - self.egg_base = (dirs or {}).get('', os.curdir) - - self.ensure_dirname('egg_base') - self.egg_info = to_filename(self.egg_name) + '.egg-info' - if self.egg_base != os.curdir: - self.egg_info = os.path.join(self.egg_base, self.egg_info) - if '-' in self.egg_name: - self.check_broken_egg_info() - - # Set package version for the benefit of dumber commands - # (e.g. sdist, bdist_wininst, etc.) 
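# (Illustrative aside, not part of the original source.) The egg_info tags
# handled above are usually configured in setup.cfg, for example:
#
#     [egg_info]
#     tag_build = .dev
#     tag_date = 1
#
# With that configuration a base version of "1.2" picks up ".dev" plus a
# "-YYYYMMDD" date stamp (so roughly "1.2.dev-20240315") before
# safe_version() normalizes the result.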
- # - self.distribution.metadata.version = self.egg_version - - # If we bootstrapped around the lack of a PKG-INFO, as might be the - # case in a fresh checkout, make sure that any special tags get added - # to the version info - # - pd = self.distribution._patched_dist - if pd is not None and pd.key == self.egg_name.lower(): - pd._version = self.egg_version - pd._parsed_version = parse_version(self.egg_version) - self.distribution._patched_dist = None - - def write_or_delete_file(self, what, filename, data, force=False): - """Write `data` to `filename` or delete if empty - - If `data` is non-empty, this routine is the same as ``write_file()``. - If `data` is empty but not ``None``, this is the same as calling - ``delete_file(filename)`. If `data` is ``None``, then this is a no-op - unless `filename` exists, in which case a warning is issued about the - orphaned file (if `force` is false), or deleted (if `force` is true). - """ - if data: - self.write_file(what, filename, data) - elif os.path.exists(filename): - if data is None and not force: - log.warn( - "%s not set in setup(), but %s exists", what, filename - ) - return - else: - self.delete_file(filename) - - def write_file(self, what, filename, data): - """Write `data` to `filename` (if not a dry run) after announcing it - - `what` is used in a log message to identify what is being written - to the file. - """ - log.info("writing %s to %s", what, filename) - data = data.encode("utf-8") - if not self.dry_run: - f = open(filename, 'wb') - f.write(data) - f.close() - - def delete_file(self, filename): - """Delete `filename` (if not a dry run) after announcing it""" - log.info("deleting %s", filename) - if not self.dry_run: - os.unlink(filename) - - def run(self): - self.mkpath(self.egg_info) - os.utime(self.egg_info, None) - for ep in metadata.entry_points(group='egg_info.writers'): - writer = ep.load() - writer(self, ep.name, os.path.join(self.egg_info, ep.name)) - - # Get rid of native_libs.txt if it was put there by older bdist_egg - nl = os.path.join(self.egg_info, "native_libs.txt") - if os.path.exists(nl): - self.delete_file(nl) - - self.find_sources() - - def find_sources(self): - """Generate SOURCES.txt manifest file""" - manifest_filename = os.path.join(self.egg_info, "SOURCES.txt") - mm = manifest_maker(self.distribution) - mm.ignore_egg_info_dir = self.ignore_egg_info_in_manifest - mm.manifest = manifest_filename - mm.run() - self.filelist = mm.filelist - - def check_broken_egg_info(self): - bei = self.egg_name + '.egg-info' - if self.egg_base != os.curdir: - bei = os.path.join(self.egg_base, bei) - if os.path.exists(bei): - log.warn( - "-" * 78 + '\n' - "Note: Your current .egg-info directory has a '-' in its name;" - '\nthis will not work correctly with "setup.py develop".\n\n' - 'Please rename %s to %s to correct this problem.\n' + '-' * 78, - bei, self.egg_info - ) - self.broken_egg_info = self.egg_info - self.egg_info = bei # make it work for now - - -class FileList(_FileList): - # Implementations of the various MANIFEST.in commands - - def __init__(self, warn=None, debug_print=None, ignore_egg_info_dir=False): - super().__init__(warn, debug_print) - self.ignore_egg_info_dir = ignore_egg_info_dir - - def process_template_line(self, line): - # Parse the line: split it up, make sure the right number of words - # is there, and return the relevant words. 'action' is always - # defined: it's the first word of the line. 
Which of the other - # three are defined depends on the action; it'll be either - # patterns, (dir and patterns), or (dir_pattern). - (action, patterns, dir, dir_pattern) = self._parse_template_line(line) - - action_map = { - 'include': self.include, - 'exclude': self.exclude, - 'global-include': self.global_include, - 'global-exclude': self.global_exclude, - 'recursive-include': functools.partial( - self.recursive_include, dir, - ), - 'recursive-exclude': functools.partial( - self.recursive_exclude, dir, - ), - 'graft': self.graft, - 'prune': self.prune, - } - log_map = { - 'include': "warning: no files found matching '%s'", - 'exclude': ( - "warning: no previously-included files found " - "matching '%s'" - ), - 'global-include': ( - "warning: no files found matching '%s' " - "anywhere in distribution" - ), - 'global-exclude': ( - "warning: no previously-included files matching " - "'%s' found anywhere in distribution" - ), - 'recursive-include': ( - "warning: no files found matching '%s' " - "under directory '%s'" - ), - 'recursive-exclude': ( - "warning: no previously-included files matching " - "'%s' found under directory '%s'" - ), - 'graft': "warning: no directories found matching '%s'", - 'prune': "no previously-included directories found matching '%s'", - } - - try: - process_action = action_map[action] - except KeyError: - raise DistutilsInternalError( - "this cannot happen: invalid action '{action!s}'". - format(action=action), - ) - - # OK, now we know that the action is valid and we have the - # right number of words on the line for that action -- so we - # can proceed with minimal error-checking. - - action_is_recursive = action.startswith('recursive-') - if action in {'graft', 'prune'}: - patterns = [dir_pattern] - extra_log_args = (dir, ) if action_is_recursive else () - log_tmpl = log_map[action] - - self.debug_print( - ' '.join( - [action] + - ([dir] if action_is_recursive else []) + - patterns, - ) - ) - for pattern in patterns: - if not process_action(pattern): - log.warn(log_tmpl, pattern, *extra_log_args) - - def _remove_files(self, predicate): - """ - Remove all files from the file list that match the predicate. - Return True if any matching files were removed - """ - found = False - for i in range(len(self.files) - 1, -1, -1): - if predicate(self.files[i]): - self.debug_print(" removing " + self.files[i]) - del self.files[i] - found = True - return found - - def include(self, pattern): - """Include files that match 'pattern'.""" - found = [f for f in glob(pattern) if not os.path.isdir(f)] - self.extend(found) - return bool(found) - - def exclude(self, pattern): - """Exclude files that match 'pattern'.""" - match = translate_pattern(pattern) - return self._remove_files(match.match) - - def recursive_include(self, dir, pattern): - """ - Include all files anywhere in 'dir/' that match the pattern. - """ - full_pattern = os.path.join(dir, '**', pattern) - found = [f for f in glob(full_pattern, recursive=True) - if not os.path.isdir(f)] - self.extend(found) - return bool(found) - - def recursive_exclude(self, dir, pattern): - """ - Exclude any file anywhere in 'dir/' that match the pattern. 
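For reference, a small hypothetical MANIFEST.in exercising the template commands dispatched by `process_template_line` above (all file and directory names are made up):

    include README.rst setup.cfg
    recursive-include mypkg/data *.json *.csv
    graft docs
    prune docs/_build
    global-exclude *.py[cod]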
- """ - match = translate_pattern(os.path.join(dir, '**', pattern)) - return self._remove_files(match.match) - - def graft(self, dir): - """Include all files from 'dir/'.""" - found = [ - item - for match_dir in glob(dir) - for item in distutils.filelist.findall(match_dir) - ] - self.extend(found) - return bool(found) - - def prune(self, dir): - """Filter out files from 'dir/'.""" - match = translate_pattern(os.path.join(dir, '**')) - return self._remove_files(match.match) - - def global_include(self, pattern): - """ - Include all files anywhere in the current directory that match the - pattern. This is very inefficient on large file trees. - """ - if self.allfiles is None: - self.findall() - match = translate_pattern(os.path.join('**', pattern)) - found = [f for f in self.allfiles if match.match(f)] - self.extend(found) - return bool(found) - - def global_exclude(self, pattern): - """ - Exclude all files anywhere that match the pattern. - """ - match = translate_pattern(os.path.join('**', pattern)) - return self._remove_files(match.match) - - def append(self, item): - if item.endswith('\r'): # Fix older sdists built on Windows - item = item[:-1] - path = convert_path(item) - - if self._safe_path(path): - self.files.append(path) - - def extend(self, paths): - self.files.extend(filter(self._safe_path, paths)) - - def _repair(self): - """ - Replace self.files with only safe paths - - Because some owners of FileList manipulate the underlying - ``files`` attribute directly, this method must be called to - repair those paths. - """ - self.files = list(filter(self._safe_path, self.files)) - - def _safe_path(self, path): - enc_warn = "'%s' not %s encodable -- skipping" - - # To avoid accidental trans-codings errors, first to unicode - u_path = unicode_utils.filesys_decode(path) - if u_path is None: - log.warn("'%s' in unexpected encoding -- skipping" % path) - return False - - # Must ensure utf-8 encodability - utf8_path = unicode_utils.try_encode(u_path, "utf-8") - if utf8_path is None: - log.warn(enc_warn, path, 'utf-8') - return False - - try: - # ignore egg-info paths - is_egg_info = ".egg-info" in u_path or b".egg-info" in utf8_path - if self.ignore_egg_info_dir and is_egg_info: - return False - # accept is either way checks out - if os.path.exists(u_path) or os.path.exists(utf8_path): - return True - # this will catch any encode errors decoding u_path - except UnicodeEncodeError: - log.warn(enc_warn, path, sys.getfilesystemencoding()) - - -class manifest_maker(sdist): - template = "MANIFEST.in" - - def initialize_options(self): - self.use_defaults = 1 - self.prune = 1 - self.manifest_only = 1 - self.force_manifest = 1 - self.ignore_egg_info_dir = False - - def finalize_options(self): - pass - - def run(self): - self.filelist = FileList(ignore_egg_info_dir=self.ignore_egg_info_dir) - if not os.path.exists(self.manifest): - self.write_manifest() # it must exist so it'll get in the list - self.add_defaults() - if os.path.exists(self.template): - self.read_template() - self.add_license_files() - self.prune_file_list() - self.filelist.sort() - self.filelist.remove_duplicates() - self.write_manifest() - - def _manifest_normalize(self, path): - path = unicode_utils.filesys_decode(path) - return path.replace(os.sep, '/') - - def write_manifest(self): - """ - Write the file list in 'self.filelist' to the manifest file - named by 'self.manifest'. 
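The same operations are also available programmatically; a minimal sketch using this FileList class directly (file names are examples, and the import path assumes a regular setuptools installation):

from setuptools.command.egg_info import FileList

fl = FileList()
fl.include('README.rst')               # glob, non-recursive
fl.recursive_include('mypkg', '*.py')  # everything under mypkg/
fl.global_exclude('*.py[cod]')         # drop byte-compiled files
print(fl.files)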
- """ - self.filelist._repair() - - # Now _repairs should encodability, but not unicode - files = [self._manifest_normalize(f) for f in self.filelist.files] - msg = "writing manifest file '%s'" % self.manifest - self.execute(write_file, (self.manifest, files), msg) - - def warn(self, msg): - if not self._should_suppress_warning(msg): - sdist.warn(self, msg) - - @staticmethod - def _should_suppress_warning(msg): - """ - suppress missing-file warnings from sdist - """ - return re.match(r"standard file .*not found", msg) - - def add_defaults(self): - sdist.add_defaults(self) - self.filelist.append(self.template) - self.filelist.append(self.manifest) - rcfiles = list(walk_revctrl()) - if rcfiles: - self.filelist.extend(rcfiles) - elif os.path.exists(self.manifest): - self.read_manifest() - - if os.path.exists("setup.py"): - # setup.py should be included by default, even if it's not - # the script called to create the sdist - self.filelist.append("setup.py") - - ei_cmd = self.get_finalized_command('egg_info') - self.filelist.graft(ei_cmd.egg_info) - - def add_license_files(self): - license_files = self.distribution.metadata.license_files or [] - for lf in license_files: - log.info("adding license file '%s'", lf) - pass - self.filelist.extend(license_files) - - def prune_file_list(self): - build = self.get_finalized_command('build') - base_dir = self.distribution.get_fullname() - self.filelist.prune(build.build_base) - self.filelist.prune(base_dir) - sep = re.escape(os.sep) - self.filelist.exclude_pattern(r'(^|' + sep + r')(RCS|CVS|\.svn)' + sep, - is_regex=1) - - def _safe_data_files(self, build_py): - """ - The parent class implementation of this method - (``sdist``) will try to include data files, which - might cause recursion problems when - ``include_package_data=True``. - - Therefore, avoid triggering any attempt of - analyzing/building the manifest again. - """ - if hasattr(build_py, 'get_data_files_without_manifest'): - return build_py.get_data_files_without_manifest() - - warnings.warn( - "Custom 'build_py' does not implement " - "'get_data_files_without_manifest'.\nPlease extend command classes" - " from setuptools instead of distutils.", - SetuptoolsDeprecationWarning - ) - return build_py.get_data_files() - - -def write_file(filename, contents): - """Create a file with the specified name and write 'contents' (a - sequence of strings without line terminators) to it. - """ - contents = "\n".join(contents) - - # assuming the contents has been vetted for utf-8 encoding - contents = contents.encode("utf-8") - - with open(filename, "wb") as f: # always write POSIX-style manifest - f.write(contents) - - -def write_pkg_info(cmd, basename, filename): - log.info("writing %s", filename) - if not cmd.dry_run: - metadata = cmd.distribution.metadata - metadata.version, oldver = cmd.egg_version, metadata.version - metadata.name, oldname = cmd.egg_name, metadata.name - - try: - # write unescaped data to PKG-INFO, so older pkg_resources - # can still parse it - metadata.write_pkg_info(cmd.egg_info) - finally: - metadata.name, metadata.version = oldname, oldver - - safe = getattr(cmd.distribution, 'zip_safe', None) - - bdist_egg.write_safety_flag(cmd.egg_info, safe) - - -def warn_depends_obsolete(cmd, basename, filename): - if os.path.exists(filename): - log.warn( - "WARNING: 'depends.txt' is not used by setuptools 0.6!\n" - "Use the install_requires/extras_require setup() args instead." 
- ) - - -def _write_requirements(stream, reqs): - lines = yield_lines(reqs or ()) - - def append_cr(line): - return line + '\n' - lines = map(append_cr, lines) - stream.writelines(lines) - - -def write_requirements(cmd, basename, filename): - dist = cmd.distribution - data = io.StringIO() - _write_requirements(data, dist.install_requires) - extras_require = dist.extras_require or {} - for extra in sorted(extras_require): - data.write('\n[{extra}]\n'.format(**vars())) - _write_requirements(data, extras_require[extra]) - cmd.write_or_delete_file("requirements", filename, data.getvalue()) - - -def write_setup_requirements(cmd, basename, filename): - data = io.StringIO() - _write_requirements(data, cmd.distribution.setup_requires) - cmd.write_or_delete_file("setup-requirements", filename, data.getvalue()) - - -def write_toplevel_names(cmd, basename, filename): - pkgs = dict.fromkeys( - [ - k.split('.', 1)[0] - for k in cmd.distribution.iter_distribution_names() - ] - ) - cmd.write_file("top-level names", filename, '\n'.join(sorted(pkgs)) + '\n') - - -def overwrite_arg(cmd, basename, filename): - write_arg(cmd, basename, filename, True) - - -def write_arg(cmd, basename, filename, force=False): - argname = os.path.splitext(basename)[0] - value = getattr(cmd.distribution, argname, None) - if value is not None: - value = '\n'.join(value) + '\n' - cmd.write_or_delete_file(argname, filename, value, force) - - -def write_entries(cmd, basename, filename): - eps = _entry_points.load(cmd.distribution.entry_points) - defn = _entry_points.render(eps) - cmd.write_or_delete_file('entry points', filename, defn, True) - - -def get_pkg_info_revision(): - """ - Get a -r### off of PKG-INFO Version in case this is an sdist of - a subversion revision. - """ - warnings.warn( - "get_pkg_info_revision is deprecated.", EggInfoDeprecationWarning) - if os.path.exists('PKG-INFO'): - with io.open('PKG-INFO') as f: - for line in f: - match = re.match(r"Version:.*-r(\d+)\s*$", line) - if match: - return int(match.group(1)) - return 0 - - -class EggInfoDeprecationWarning(SetuptoolsDeprecationWarning): - """Deprecated behavior warning for EggInfo, bypassing suppression.""" diff --git a/spaces/ReThGe/Linet/README.md b/spaces/ReThGe/Linet/README.md deleted file mode 100644 index 286449e19bfcdb44a51534083cb3be6b6d764f4f..0000000000000000000000000000000000000000 --- a/spaces/ReThGe/Linet/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Linet -emoji: 🐠 -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.38.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Reha2704/VToonify/vtoonify/model/raft/core/utils/__init__.py b/spaces/Reha2704/VToonify/vtoonify/model/raft/core/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Reself/StableVideo/cldm/hack.py b/spaces/Reself/StableVideo/cldm/hack.py deleted file mode 100644 index 454361e9d036cd1a6a79122c2fd16b489e4767b1..0000000000000000000000000000000000000000 --- a/spaces/Reself/StableVideo/cldm/hack.py +++ /dev/null @@ -1,111 +0,0 @@ -import torch -import einops - -import ldm.modules.encoders.modules -import ldm.modules.attention - -from transformers import logging -from ldm.modules.attention import default - - -def disable_verbosity(): - logging.set_verbosity_error() - print('logging improved.') - return - - -def enable_sliced_attention(): - 
ldm.modules.attention.CrossAttention.forward = _hacked_sliced_attentin_forward - print('Enabled sliced_attention.') - return - - -def hack_everything(clip_skip=0): - disable_verbosity() - ldm.modules.encoders.modules.FrozenCLIPEmbedder.forward = _hacked_clip_forward - ldm.modules.encoders.modules.FrozenCLIPEmbedder.clip_skip = clip_skip - print('Enabled clip hacks.') - return - - -# Written by Lvmin -def _hacked_clip_forward(self, text): - PAD = self.tokenizer.pad_token_id - EOS = self.tokenizer.eos_token_id - BOS = self.tokenizer.bos_token_id - - def tokenize(t): - return self.tokenizer(t, truncation=False, add_special_tokens=False)["input_ids"] - - def transformer_encode(t): - if self.clip_skip > 1: - rt = self.transformer(input_ids=t, output_hidden_states=True) - return self.transformer.text_model.final_layer_norm(rt.hidden_states[-self.clip_skip]) - else: - return self.transformer(input_ids=t, output_hidden_states=False).last_hidden_state - - def split(x): - return x[75 * 0: 75 * 1], x[75 * 1: 75 * 2], x[75 * 2: 75 * 3] - - def pad(x, p, i): - return x[:i] if len(x) >= i else x + [p] * (i - len(x)) - - raw_tokens_list = tokenize(text) - tokens_list = [] - - for raw_tokens in raw_tokens_list: - raw_tokens_123 = split(raw_tokens) - raw_tokens_123 = [[BOS] + raw_tokens_i + [EOS] for raw_tokens_i in raw_tokens_123] - raw_tokens_123 = [pad(raw_tokens_i, PAD, 77) for raw_tokens_i in raw_tokens_123] - tokens_list.append(raw_tokens_123) - - tokens_list = torch.IntTensor(tokens_list).to(self.device) - - feed = einops.rearrange(tokens_list, 'b f i -> (b f) i') - y = transformer_encode(feed) - z = einops.rearrange(y, '(b f) i c -> b (f i) c', f=3) - - return z - - -# Stolen from https://github.com/basujindal/stable-diffusion/blob/main/optimizedSD/splitAttention.py -def _hacked_sliced_attentin_forward(self, x, context=None, mask=None): - h = self.heads - - q = self.to_q(x) - context = default(context, x) - k = self.to_k(context) - v = self.to_v(context) - del context, x - - q, k, v = map(lambda t: einops.rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v)) - - limit = k.shape[0] - att_step = 1 - q_chunks = list(torch.tensor_split(q, limit // att_step, dim=0)) - k_chunks = list(torch.tensor_split(k, limit // att_step, dim=0)) - v_chunks = list(torch.tensor_split(v, limit // att_step, dim=0)) - - q_chunks.reverse() - k_chunks.reverse() - v_chunks.reverse() - sim = torch.zeros(q.shape[0], q.shape[1], v.shape[2], device=q.device) - del k, q, v - for i in range(0, limit, att_step): - q_buffer = q_chunks.pop() - k_buffer = k_chunks.pop() - v_buffer = v_chunks.pop() - sim_buffer = torch.einsum('b i d, b j d -> b i j', q_buffer, k_buffer) * self.scale - - del k_buffer, q_buffer - # attention, what we cannot get enough of, by chunks - - sim_buffer = sim_buffer.softmax(dim=-1) - - sim_buffer = torch.einsum('b i j, b j d -> b i d', sim_buffer, v_buffer) - del v_buffer - sim[i:i + att_step, :, :] = sim_buffer - - del sim_buffer - sim = einops.rearrange(sim, '(b h) n d -> b n (h d)', h=h) - return self.to_out(sim) diff --git a/spaces/Riakzu/parkinson_detection/predictive system.py b/spaces/Riakzu/parkinson_detection/predictive system.py deleted file mode 100644 index c3f79713ac7515d6215fe4c6932c41e313643e3a..0000000000000000000000000000000000000000 --- a/spaces/Riakzu/parkinson_detection/predictive system.py +++ /dev/null @@ -1,24 +0,0 @@ -import numpy as np -import pickle - - - -# loading the saved model -loaded_model = pickle.load(open('C:/Users/Asus_user/Desktop/parkinson_prediction/trained_model.sav', 
'rb')) #rb read binary - - -input_data = (197.07600,206.89600,192.05500,0.00289,0.00001,0.00166,0.00168,0.00498,0.01098,0.09700,0.00563,0.00680,0.00802,0.01689,0.00339,26.77500,0.422229,0.741367,-7.348300,0.177551,1.743867,0.085569) - -input_data_as_numpy_array = np.asarray(input_data) - -input_data_reshaped = input_data_as_numpy_array.reshape(1,-1) - -prediction = loaded_model.predict(input_data_reshaped) -print(prediction) - - -if (prediction[0] == 0): - print("The Person does not have Parkinsons Disease") - -else: - print("The Person has Parkinsons") \ No newline at end of file diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/parallel/utils.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/parallel/utils.py deleted file mode 100644 index 0f5712cb42c38a2e8563bf563efb6681383cab9b..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/parallel/utils.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .registry import MODULE_WRAPPERS - - -def is_module_wrapper(module): - """Check if a module is a module wrapper. - - The following 3 modules in MMCV (and their subclasses) are regarded as - module wrappers: DataParallel, DistributedDataParallel, - MMDistributedDataParallel (the deprecated version). You may add you own - module wrapper by registering it to mmcv.parallel.MODULE_WRAPPERS. - - Args: - module (nn.Module): The module to be checked. - - Returns: - bool: True if the input module is a module wrapper. - """ - module_wrappers = tuple(MODULE_WRAPPERS.module_dict.values()) - return isinstance(module, module_wrappers) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/datasets/deepfashion.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/datasets/deepfashion.py deleted file mode 100644 index 1125376091f2d4ee6843ae4f2156b3b0453be369..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/datasets/deepfashion.py +++ /dev/null @@ -1,10 +0,0 @@ -from .builder import DATASETS -from .coco import CocoDataset - - -@DATASETS.register_module() -class DeepFashionDataset(CocoDataset): - - CLASSES = ('top', 'skirt', 'leggings', 'dress', 'outer', 'pants', 'bag', - 'neckwear', 'headwear', 'eyeglass', 'belt', 'footwear', 'hair', - 'skin', 'face') diff --git a/spaces/Roxza/DialoGPT/README.md b/spaces/Roxza/DialoGPT/README.md deleted file mode 100644 index 9c2e631d5e9231a0d909f0b19b64a4d9571791e8..0000000000000000000000000000000000000000 --- a/spaces/Roxza/DialoGPT/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: DialoGPT -emoji: 🌍 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.14.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/SLU-CSCI4750/Demo8_RegressionGradientDecentCompare/README.md b/spaces/SLU-CSCI4750/Demo8_RegressionGradientDecentCompare/README.md deleted file mode 100644 index 46bffbf49ca5ffe64605d169d008fc94f29a5287..0000000000000000000000000000000000000000 --- a/spaces/SLU-CSCI4750/Demo8_RegressionGradientDecentCompare/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Demo8 RegressionGradientDecentCompare -emoji: 🐢 -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.0.13 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git 
a/spaces/SalahZa/Code-Switched-Tunisian-SpeechToText/EnglishCV/train_with_wav2vec.py b/spaces/SalahZa/Code-Switched-Tunisian-SpeechToText/EnglishCV/train_with_wav2vec.py deleted file mode 100644 index 25edc1a9bb8ebe29c887bca40018e9c467aa98ef..0000000000000000000000000000000000000000 --- a/spaces/SalahZa/Code-Switched-Tunisian-SpeechToText/EnglishCV/train_with_wav2vec.py +++ /dev/null @@ -1,388 +0,0 @@ -#!/usr/bin/env python3 -import sys -import torch -import logging -import speechbrain as sb -import torchaudio -from hyperpyyaml import load_hyperpyyaml -from speechbrain.tokenizers.SentencePiece import SentencePiece -from speechbrain.utils.data_utils import undo_padding -from speechbrain.utils.distributed import run_on_main - -"""Recipe for training a sequence-to-sequence ASR system with CommonVoice. -The system employs a wav2vec2 encoder and a CTC decoder. -Decoding is performed with greedy decoding (will be extended to beam search). - -To run this recipe, do the following: -> python train_with_wav2vec2.py hparams/train_with_wav2vec2.yaml - -With the default hyperparameters, the system employs a pretrained wav2vec2 encoder. -The wav2vec2 model is pretrained following the model given in the hprams file. -It may be dependent on the language. - -The neural network is trained with CTC on sub-word units estimated with -Byte Pairwise Encoding (BPE). - -The experiment file is flexible enough to support a large variety of -different systems. By properly changing the parameter files, you can try -different encoders, decoders, tokens (e.g, characters instead of BPE), -training languages (all CommonVoice languages), and many -other possible variations. - -Authors - * Titouan Parcollet 2021 -""" - -logger = logging.getLogger(__name__) - - -# Define training procedure -class ASRCV(sb.core.Brain): - def compute_forward(self, batch, stage): - """Forward computations from the waveform batches to the output probabilities.""" - - batch = batch.to(self.device) - wavs, wav_lens = batch.sig - tokens_bos, _ = batch.tokens_bos - wavs, wav_lens = wavs.to(self.device), wav_lens.to(self.device) - - if stage == sb.Stage.TRAIN: - if hasattr(self.hparams, "augmentation"): - wavs = self.hparams.augmentation(wavs, wav_lens) - - # Forward pass - feats = self.modules.wav2vec2(wavs, wav_lens) - x = self.modules.enc(feats) - logits = self.modules.ctc_lin(x) - p_ctc = self.hparams.log_softmax(logits) - - return p_ctc, wav_lens - - def compute_objectives(self, predictions, batch, stage): - """Computes the loss (CTC) given predictions and targets.""" - - p_ctc, wav_lens = predictions - - ids = batch.id - tokens_eos, tokens_eos_lens = batch.tokens_eos - tokens, tokens_lens = batch.tokens - - loss = self.hparams.ctc_cost(p_ctc, tokens, wav_lens, tokens_lens) - - if stage != sb.Stage.TRAIN: - # Decode token terms to words - sequence = sb.decoders.ctc_greedy_decode( - p_ctc, wav_lens, blank_id=self.hparams.blank_index - ) - - predicted_words = self.tokenizer(sequence, task="decode_from_list") - - # Convert indices to words - target_words = undo_padding(tokens, tokens_lens) - target_words = self.tokenizer(target_words, task="decode_from_list") - - self.wer_metric.append(ids, predicted_words, target_words) - self.cer_metric.append(ids, predicted_words, target_words) - - return loss - - def fit_batch(self, batch): - """Train the parameters given a single batch in input""" - should_step = self.step % self.grad_accumulation_factor == 0 - # Managing automatic mixed precision - # TOFIX: CTC fine-tuning currently is unstable - # This is 
certainly due to CTC being done in fp16 instead of fp32 - if self.auto_mix_prec: - with torch.cuda.amp.autocast(): - with self.no_sync(): - outputs = self.compute_forward(batch, sb.Stage.TRAIN) - loss = self.compute_objectives(outputs, batch, sb.Stage.TRAIN) - with self.no_sync(not should_step): - self.scaler.scale( - loss / self.grad_accumulation_factor - ).backward() - if should_step: - - if not self.hparams.wav2vec2.freeze: - self.scaler.unscale_(self.wav2vec_optimizer) - self.scaler.unscale_(self.model_optimizer) - if self.check_gradients(loss): - if not self.hparams.wav2vec2.freeze: - if self.optimizer_step >= self.hparams.warmup_steps: - self.scaler.step(self.wav2vec_optimizer) - self.scaler.step(self.model_optimizer) - self.scaler.update() - self.zero_grad() - self.optimizer_step += 1 - else: - # This is mandatory because HF models have a weird behavior with DDP - # on the forward pass - with self.no_sync(): - outputs = self.compute_forward(batch, sb.Stage.TRAIN) - - loss = self.compute_objectives(outputs, batch, sb.Stage.TRAIN) - - with self.no_sync(not should_step): - (loss / self.grad_accumulation_factor).backward() - if should_step: - if self.check_gradients(loss): - if not self.hparams.wav2vec2.freeze: - if self.optimizer_step >= self.hparams.warmup_steps: - self.wav2vec_optimizer.step() - self.model_optimizer.step() - self.zero_grad() - self.optimizer_step += 1 - - self.on_fit_batch_end(batch, outputs, loss, should_step) - return loss.detach().cpu() - - def evaluate_batch(self, batch, stage): - """Computations needed for validation/test batches""" - predictions = self.compute_forward(batch, stage=stage) - with torch.no_grad(): - loss = self.compute_objectives(predictions, batch, stage=stage) - return loss.detach() - - def on_stage_start(self, stage, epoch): - """Gets called at the beginning of each epoch""" - if stage != sb.Stage.TRAIN: - self.cer_metric = self.hparams.cer_computer() - self.wer_metric = self.hparams.error_rate_computer() - - def on_stage_end(self, stage, stage_loss, epoch): - """Gets called at the end of an epoch.""" - # Compute/store important stats - stage_stats = {"loss": stage_loss} - if stage == sb.Stage.TRAIN: - self.train_stats = stage_stats - else: - stage_stats["CER"] = self.cer_metric.summarize("error_rate") - stage_stats["WER"] = self.wer_metric.summarize("error_rate") - - # Perform end-of-iteration things, like annealing, logging, etc. 
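# Note: the fit_batch method above combines torch.cuda.amp mixed precision with
# gradient accumulation (loss scaled by 1/grad_accumulation_factor, optimizer
# stepped every N micro-batches). A minimal, self-contained sketch of that
# pattern is given here for reference; `model`, `loss_fn`, `optimizer`,
# `loader`, and `accum_steps` are illustrative placeholders, not names from
# this recipe.
import torch

def train_epoch_amp_accum(model, loss_fn, optimizer, loader, accum_steps=4, device="cuda"):
    scaler = torch.cuda.amp.GradScaler()
    optimizer.zero_grad(set_to_none=True)
    for step, (inputs, targets) in enumerate(loader, start=1):
        inputs, targets = inputs.to(device), targets.to(device)
        with torch.cuda.amp.autocast():
            loss = loss_fn(model(inputs), targets)
        # Scale the loss so the accumulated gradient averages over accum_steps.
        scaler.scale(loss / accum_steps).backward()
        if step % accum_steps == 0:
            scaler.step(optimizer)   # unscales gradients internally before stepping
            scaler.update()
            optimizer.zero_grad(set_to_none=True)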
- if stage == sb.Stage.VALID: - old_lr_model, new_lr_model = self.hparams.lr_annealing_model( - stage_stats["loss"] - ) - old_lr_wav2vec, new_lr_wav2vec = self.hparams.lr_annealing_wav2vec( - stage_stats["loss"] - ) - sb.nnet.schedulers.update_learning_rate( - self.model_optimizer, new_lr_model - ) - if not self.hparams.wav2vec2.freeze: - sb.nnet.schedulers.update_learning_rate( - self.wav2vec_optimizer, new_lr_wav2vec - ) - self.hparams.train_logger.log_stats( - stats_meta={ - "epoch": epoch, - "lr_model": old_lr_model, - "lr_wav2vec": old_lr_wav2vec, - }, - train_stats=self.train_stats, - valid_stats=stage_stats, - ) - self.checkpointer.save_and_keep_only( - meta={"WER": stage_stats["WER"]}, min_keys=["WER"], - ) - elif stage == sb.Stage.TEST: - self.hparams.train_logger.log_stats( - stats_meta={"Epoch loaded": self.hparams.epoch_counter.current}, - test_stats=stage_stats, - ) - with open(self.hparams.wer_file, "w") as w: - self.wer_metric.write_stats(w) - - def init_optimizers(self): - "Initializes the wav2vec2 optimizer and model optimizer" - - # If the wav2vec encoder is unfrozen, we create the optimizer - if not self.hparams.wav2vec2.freeze: - self.wav2vec_optimizer = self.hparams.wav2vec_opt_class( - self.modules.wav2vec2.parameters() - ) - if self.checkpointer is not None: - self.checkpointer.add_recoverable( - "wav2vec_opt", self.wav2vec_optimizer - ) - - self.model_optimizer = self.hparams.model_opt_class( - self.hparams.model.parameters() - ) - - if self.checkpointer is not None: - self.checkpointer.add_recoverable("modelopt", self.model_optimizer) - - def zero_grad(self, set_to_none=False): - if not self.hparams.wav2vec2.freeze: - self.wav2vec_optimizer.zero_grad(set_to_none) - self.model_optimizer.zero_grad(set_to_none) - - -# Define custom data procedure -def dataio_prepare(hparams, tokenizer): - """This function prepares the datasets to be used in the brain class. - It also defines the data processing pipeline through user-defined functions.""" - - # 1. Define datasets - data_folder = hparams["data_folder"] - - train_data = sb.dataio.dataset.DynamicItemDataset.from_csv( - csv_path=hparams["train_csv"], replacements={"data_root": data_folder}, - ) - - if hparams["sorting"] == "ascending": - # we sort training data to speed up training and get better results. - train_data = train_data.filtered_sorted( - sort_key="duration", - key_max_value={"duration": hparams["avoid_if_longer_than"]}, - ) - # when sorting do not shuffle in dataloader ! otherwise is pointless - hparams["dataloader_options"]["shuffle"] = False - - elif hparams["sorting"] == "descending": - train_data = train_data.filtered_sorted( - sort_key="duration", - reverse=True, - key_max_value={"duration": hparams["avoid_if_longer_than"]}, - ) - # when sorting do not shuffle in dataloader ! 
otherwise is pointless - hparams["dataloader_options"]["shuffle"] = False - - elif hparams["sorting"] == "random": - pass - - else: - raise NotImplementedError( - "sorting must be random, ascending or descending" - ) - - valid_data = sb.dataio.dataset.DynamicItemDataset.from_csv( - csv_path=hparams["valid_csv"], replacements={"data_root": data_folder}, - ) - # We also sort the validation data so it is faster to validate - valid_data = valid_data.filtered_sorted(sort_key="duration") - - test_data = sb.dataio.dataset.DynamicItemDataset.from_csv( - csv_path=hparams["test_csv"], replacements={"data_root": data_folder}, - ) - - # We also sort the validation data so it is faster to validate - test_data = test_data.filtered_sorted(sort_key="duration") - - datasets = [train_data, valid_data, test_data] - - # 2. Define audio pipeline: - @sb.utils.data_pipeline.takes("wav") - @sb.utils.data_pipeline.provides("sig") - def audio_pipeline(wav): - info = torchaudio.info(wav) - sig = sb.dataio.dataio.read_audio(wav) - resampled = torchaudio.transforms.Resample( - info.sample_rate, hparams["sample_rate"], - )(sig) - return resampled - - sb.dataio.dataset.add_dynamic_item(datasets, audio_pipeline) - - # 3. Define text pipeline: - @sb.utils.data_pipeline.takes("wrd") - @sb.utils.data_pipeline.provides( - "tokens_list", "tokens_bos", "tokens_eos", "tokens" - ) - def text_pipeline(wrd): - tokens_list = tokenizer.sp.encode_as_ids(wrd) - yield tokens_list - tokens_bos = torch.LongTensor([hparams["bos_index"]] + (tokens_list)) - yield tokens_bos - tokens_eos = torch.LongTensor(tokens_list + [hparams["eos_index"]]) - yield tokens_eos - tokens = torch.LongTensor(tokens_list) - yield tokens - - sb.dataio.dataset.add_dynamic_item(datasets, text_pipeline) - - # 4. Set output: - sb.dataio.dataset.set_output_keys( - datasets, ["id", "sig", "tokens_bos", "tokens_eos", "tokens"], - ) - return train_data, valid_data, test_data - - -if __name__ == "__main__": - - # Load hyperparameters file with command-line overrides - hparams_file, run_opts, overrides = sb.parse_arguments(sys.argv[1:]) - with open(hparams_file) as fin: - hparams = load_hyperpyyaml(fin, overrides) - - # If --distributed_launch then - # create ddp_group with the right communication protocol - sb.utils.distributed.ddp_init_group(run_opts) - - # Dataset preparation (parsing CommonVoice) - from common_voice_prepare import prepare_common_voice # noqa - - # Create experiment directory - sb.create_experiment_directory( - experiment_directory=hparams["output_folder"], - hyperparams_to_save=hparams_file, - overrides=overrides, - ) - - # Due to DDP, we do the preparation ONLY on the main python process - run_on_main( - prepare_common_voice, - kwargs={ - "data_folder": hparams["data_folder"], - "save_folder": hparams["save_folder"], - "train_tsv_file": hparams["train_tsv_file"], - "dev_tsv_file": hparams["dev_tsv_file"], - "test_tsv_file": hparams["test_tsv_file"], - "accented_letters": hparams["accented_letters"], - "language": hparams["language"], - "skip_prep": hparams["skip_prep"], - }, - ) - - # Defining tokenizer and loading it - tokenizer = SentencePiece( - model_dir=hparams["save_folder"], - vocab_size=hparams["output_neurons"], - annotation_train=hparams["train_csv"], - annotation_read="wrd", - model_type=hparams["token_type"], - character_coverage=hparams["character_coverage"], - ) - - # Create the datasets objects as well as tokenization and encoding :-D - train_data, valid_data, test_data = dataio_prepare(hparams, tokenizer) - - # Trainer initialization 
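# Note: the audio pipeline defined above reads a wav file and resamples it to
# the configured sample rate with torchaudio. A stand-alone sketch of that step
# is shown here; the 16 kHz target and the function name are illustrative
# choices, not values taken from the hparams file.
import torchaudio

def load_and_resample(path, target_sr=16000):
    waveform, orig_sr = torchaudio.load(path)  # (channels, samples), original rate
    if orig_sr != target_sr:
        resampler = torchaudio.transforms.Resample(orig_freq=orig_sr, new_freq=target_sr)
        waveform = resampler(waveform)
    return waveform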
- asr_brain = ASRCV( - modules=hparams["modules"], - hparams=hparams, - run_opts=run_opts, - checkpointer=hparams["checkpointer"], - ) - - # Adding objects to trainer. - asr_brain.tokenizer = tokenizer - - # Training - asr_brain.fit( - asr_brain.hparams.epoch_counter, - train_data, - valid_data, - train_loader_kwargs=hparams["dataloader_options"], - valid_loader_kwargs=hparams["test_dataloader_options"], - ) - - # Test - asr_brain.hparams.wer_file = hparams["output_folder"] + "/wer_test.txt" - asr_brain.evaluate( - test_data, - min_key="WER", - test_loader_kwargs=hparams["test_dataloader_options"], - ) diff --git a/spaces/SalahZa/Code-Switched-Tunisian-SpeechToText/cv_train.py b/spaces/SalahZa/Code-Switched-Tunisian-SpeechToText/cv_train.py deleted file mode 100644 index 25edc1a9bb8ebe29c887bca40018e9c467aa98ef..0000000000000000000000000000000000000000 --- a/spaces/SalahZa/Code-Switched-Tunisian-SpeechToText/cv_train.py +++ /dev/null @@ -1,388 +0,0 @@ -#!/usr/bin/env python3 -import sys -import torch -import logging -import speechbrain as sb -import torchaudio -from hyperpyyaml import load_hyperpyyaml -from speechbrain.tokenizers.SentencePiece import SentencePiece -from speechbrain.utils.data_utils import undo_padding -from speechbrain.utils.distributed import run_on_main - -"""Recipe for training a sequence-to-sequence ASR system with CommonVoice. -The system employs a wav2vec2 encoder and a CTC decoder. -Decoding is performed with greedy decoding (will be extended to beam search). - -To run this recipe, do the following: -> python train_with_wav2vec2.py hparams/train_with_wav2vec2.yaml - -With the default hyperparameters, the system employs a pretrained wav2vec2 encoder. -The wav2vec2 model is pretrained following the model given in the hprams file. -It may be dependent on the language. - -The neural network is trained with CTC on sub-word units estimated with -Byte Pairwise Encoding (BPE). - -The experiment file is flexible enough to support a large variety of -different systems. By properly changing the parameter files, you can try -different encoders, decoders, tokens (e.g, characters instead of BPE), -training languages (all CommonVoice languages), and many -other possible variations. 
- -Authors - * Titouan Parcollet 2021 -""" - -logger = logging.getLogger(__name__) - - -# Define training procedure -class ASRCV(sb.core.Brain): - def compute_forward(self, batch, stage): - """Forward computations from the waveform batches to the output probabilities.""" - - batch = batch.to(self.device) - wavs, wav_lens = batch.sig - tokens_bos, _ = batch.tokens_bos - wavs, wav_lens = wavs.to(self.device), wav_lens.to(self.device) - - if stage == sb.Stage.TRAIN: - if hasattr(self.hparams, "augmentation"): - wavs = self.hparams.augmentation(wavs, wav_lens) - - # Forward pass - feats = self.modules.wav2vec2(wavs, wav_lens) - x = self.modules.enc(feats) - logits = self.modules.ctc_lin(x) - p_ctc = self.hparams.log_softmax(logits) - - return p_ctc, wav_lens - - def compute_objectives(self, predictions, batch, stage): - """Computes the loss (CTC) given predictions and targets.""" - - p_ctc, wav_lens = predictions - - ids = batch.id - tokens_eos, tokens_eos_lens = batch.tokens_eos - tokens, tokens_lens = batch.tokens - - loss = self.hparams.ctc_cost(p_ctc, tokens, wav_lens, tokens_lens) - - if stage != sb.Stage.TRAIN: - # Decode token terms to words - sequence = sb.decoders.ctc_greedy_decode( - p_ctc, wav_lens, blank_id=self.hparams.blank_index - ) - - predicted_words = self.tokenizer(sequence, task="decode_from_list") - - # Convert indices to words - target_words = undo_padding(tokens, tokens_lens) - target_words = self.tokenizer(target_words, task="decode_from_list") - - self.wer_metric.append(ids, predicted_words, target_words) - self.cer_metric.append(ids, predicted_words, target_words) - - return loss - - def fit_batch(self, batch): - """Train the parameters given a single batch in input""" - should_step = self.step % self.grad_accumulation_factor == 0 - # Managing automatic mixed precision - # TOFIX: CTC fine-tuning currently is unstable - # This is certainly due to CTC being done in fp16 instead of fp32 - if self.auto_mix_prec: - with torch.cuda.amp.autocast(): - with self.no_sync(): - outputs = self.compute_forward(batch, sb.Stage.TRAIN) - loss = self.compute_objectives(outputs, batch, sb.Stage.TRAIN) - with self.no_sync(not should_step): - self.scaler.scale( - loss / self.grad_accumulation_factor - ).backward() - if should_step: - - if not self.hparams.wav2vec2.freeze: - self.scaler.unscale_(self.wav2vec_optimizer) - self.scaler.unscale_(self.model_optimizer) - if self.check_gradients(loss): - if not self.hparams.wav2vec2.freeze: - if self.optimizer_step >= self.hparams.warmup_steps: - self.scaler.step(self.wav2vec_optimizer) - self.scaler.step(self.model_optimizer) - self.scaler.update() - self.zero_grad() - self.optimizer_step += 1 - else: - # This is mandatory because HF models have a weird behavior with DDP - # on the forward pass - with self.no_sync(): - outputs = self.compute_forward(batch, sb.Stage.TRAIN) - - loss = self.compute_objectives(outputs, batch, sb.Stage.TRAIN) - - with self.no_sync(not should_step): - (loss / self.grad_accumulation_factor).backward() - if should_step: - if self.check_gradients(loss): - if not self.hparams.wav2vec2.freeze: - if self.optimizer_step >= self.hparams.warmup_steps: - self.wav2vec_optimizer.step() - self.model_optimizer.step() - self.zero_grad() - self.optimizer_step += 1 - - self.on_fit_batch_end(batch, outputs, loss, should_step) - return loss.detach().cpu() - - def evaluate_batch(self, batch, stage): - """Computations needed for validation/test batches""" - predictions = self.compute_forward(batch, stage=stage) - with torch.no_grad(): - 
loss = self.compute_objectives(predictions, batch, stage=stage) - return loss.detach() - - def on_stage_start(self, stage, epoch): - """Gets called at the beginning of each epoch""" - if stage != sb.Stage.TRAIN: - self.cer_metric = self.hparams.cer_computer() - self.wer_metric = self.hparams.error_rate_computer() - - def on_stage_end(self, stage, stage_loss, epoch): - """Gets called at the end of an epoch.""" - # Compute/store important stats - stage_stats = {"loss": stage_loss} - if stage == sb.Stage.TRAIN: - self.train_stats = stage_stats - else: - stage_stats["CER"] = self.cer_metric.summarize("error_rate") - stage_stats["WER"] = self.wer_metric.summarize("error_rate") - - # Perform end-of-iteration things, like annealing, logging, etc. - if stage == sb.Stage.VALID: - old_lr_model, new_lr_model = self.hparams.lr_annealing_model( - stage_stats["loss"] - ) - old_lr_wav2vec, new_lr_wav2vec = self.hparams.lr_annealing_wav2vec( - stage_stats["loss"] - ) - sb.nnet.schedulers.update_learning_rate( - self.model_optimizer, new_lr_model - ) - if not self.hparams.wav2vec2.freeze: - sb.nnet.schedulers.update_learning_rate( - self.wav2vec_optimizer, new_lr_wav2vec - ) - self.hparams.train_logger.log_stats( - stats_meta={ - "epoch": epoch, - "lr_model": old_lr_model, - "lr_wav2vec": old_lr_wav2vec, - }, - train_stats=self.train_stats, - valid_stats=stage_stats, - ) - self.checkpointer.save_and_keep_only( - meta={"WER": stage_stats["WER"]}, min_keys=["WER"], - ) - elif stage == sb.Stage.TEST: - self.hparams.train_logger.log_stats( - stats_meta={"Epoch loaded": self.hparams.epoch_counter.current}, - test_stats=stage_stats, - ) - with open(self.hparams.wer_file, "w") as w: - self.wer_metric.write_stats(w) - - def init_optimizers(self): - "Initializes the wav2vec2 optimizer and model optimizer" - - # If the wav2vec encoder is unfrozen, we create the optimizer - if not self.hparams.wav2vec2.freeze: - self.wav2vec_optimizer = self.hparams.wav2vec_opt_class( - self.modules.wav2vec2.parameters() - ) - if self.checkpointer is not None: - self.checkpointer.add_recoverable( - "wav2vec_opt", self.wav2vec_optimizer - ) - - self.model_optimizer = self.hparams.model_opt_class( - self.hparams.model.parameters() - ) - - if self.checkpointer is not None: - self.checkpointer.add_recoverable("modelopt", self.model_optimizer) - - def zero_grad(self, set_to_none=False): - if not self.hparams.wav2vec2.freeze: - self.wav2vec_optimizer.zero_grad(set_to_none) - self.model_optimizer.zero_grad(set_to_none) - - -# Define custom data procedure -def dataio_prepare(hparams, tokenizer): - """This function prepares the datasets to be used in the brain class. - It also defines the data processing pipeline through user-defined functions.""" - - # 1. Define datasets - data_folder = hparams["data_folder"] - - train_data = sb.dataio.dataset.DynamicItemDataset.from_csv( - csv_path=hparams["train_csv"], replacements={"data_root": data_folder}, - ) - - if hparams["sorting"] == "ascending": - # we sort training data to speed up training and get better results. - train_data = train_data.filtered_sorted( - sort_key="duration", - key_max_value={"duration": hparams["avoid_if_longer_than"]}, - ) - # when sorting do not shuffle in dataloader ! 
otherwise is pointless - hparams["dataloader_options"]["shuffle"] = False - - elif hparams["sorting"] == "descending": - train_data = train_data.filtered_sorted( - sort_key="duration", - reverse=True, - key_max_value={"duration": hparams["avoid_if_longer_than"]}, - ) - # when sorting do not shuffle in dataloader ! otherwise is pointless - hparams["dataloader_options"]["shuffle"] = False - - elif hparams["sorting"] == "random": - pass - - else: - raise NotImplementedError( - "sorting must be random, ascending or descending" - ) - - valid_data = sb.dataio.dataset.DynamicItemDataset.from_csv( - csv_path=hparams["valid_csv"], replacements={"data_root": data_folder}, - ) - # We also sort the validation data so it is faster to validate - valid_data = valid_data.filtered_sorted(sort_key="duration") - - test_data = sb.dataio.dataset.DynamicItemDataset.from_csv( - csv_path=hparams["test_csv"], replacements={"data_root": data_folder}, - ) - - # We also sort the validation data so it is faster to validate - test_data = test_data.filtered_sorted(sort_key="duration") - - datasets = [train_data, valid_data, test_data] - - # 2. Define audio pipeline: - @sb.utils.data_pipeline.takes("wav") - @sb.utils.data_pipeline.provides("sig") - def audio_pipeline(wav): - info = torchaudio.info(wav) - sig = sb.dataio.dataio.read_audio(wav) - resampled = torchaudio.transforms.Resample( - info.sample_rate, hparams["sample_rate"], - )(sig) - return resampled - - sb.dataio.dataset.add_dynamic_item(datasets, audio_pipeline) - - # 3. Define text pipeline: - @sb.utils.data_pipeline.takes("wrd") - @sb.utils.data_pipeline.provides( - "tokens_list", "tokens_bos", "tokens_eos", "tokens" - ) - def text_pipeline(wrd): - tokens_list = tokenizer.sp.encode_as_ids(wrd) - yield tokens_list - tokens_bos = torch.LongTensor([hparams["bos_index"]] + (tokens_list)) - yield tokens_bos - tokens_eos = torch.LongTensor(tokens_list + [hparams["eos_index"]]) - yield tokens_eos - tokens = torch.LongTensor(tokens_list) - yield tokens - - sb.dataio.dataset.add_dynamic_item(datasets, text_pipeline) - - # 4. 
Set output: - sb.dataio.dataset.set_output_keys( - datasets, ["id", "sig", "tokens_bos", "tokens_eos", "tokens"], - ) - return train_data, valid_data, test_data - - -if __name__ == "__main__": - - # Load hyperparameters file with command-line overrides - hparams_file, run_opts, overrides = sb.parse_arguments(sys.argv[1:]) - with open(hparams_file) as fin: - hparams = load_hyperpyyaml(fin, overrides) - - # If --distributed_launch then - # create ddp_group with the right communication protocol - sb.utils.distributed.ddp_init_group(run_opts) - - # Dataset preparation (parsing CommonVoice) - from common_voice_prepare import prepare_common_voice # noqa - - # Create experiment directory - sb.create_experiment_directory( - experiment_directory=hparams["output_folder"], - hyperparams_to_save=hparams_file, - overrides=overrides, - ) - - # Due to DDP, we do the preparation ONLY on the main python process - run_on_main( - prepare_common_voice, - kwargs={ - "data_folder": hparams["data_folder"], - "save_folder": hparams["save_folder"], - "train_tsv_file": hparams["train_tsv_file"], - "dev_tsv_file": hparams["dev_tsv_file"], - "test_tsv_file": hparams["test_tsv_file"], - "accented_letters": hparams["accented_letters"], - "language": hparams["language"], - "skip_prep": hparams["skip_prep"], - }, - ) - - # Defining tokenizer and loading it - tokenizer = SentencePiece( - model_dir=hparams["save_folder"], - vocab_size=hparams["output_neurons"], - annotation_train=hparams["train_csv"], - annotation_read="wrd", - model_type=hparams["token_type"], - character_coverage=hparams["character_coverage"], - ) - - # Create the datasets objects as well as tokenization and encoding :-D - train_data, valid_data, test_data = dataio_prepare(hparams, tokenizer) - - # Trainer initialization - asr_brain = ASRCV( - modules=hparams["modules"], - hparams=hparams, - run_opts=run_opts, - checkpointer=hparams["checkpointer"], - ) - - # Adding objects to trainer. - asr_brain.tokenizer = tokenizer - - # Training - asr_brain.fit( - asr_brain.hparams.epoch_counter, - train_data, - valid_data, - train_loader_kwargs=hparams["dataloader_options"], - valid_loader_kwargs=hparams["test_dataloader_options"], - ) - - # Test - asr_brain.hparams.wer_file = hparams["output_folder"] + "/wer_test.txt" - asr_brain.evaluate( - test_data, - min_key="WER", - test_loader_kwargs=hparams["test_dataloader_options"], - ) diff --git a/spaces/Sapphire-356/Video2MC/common/camera.py b/spaces/Sapphire-356/Video2MC/common/camera.py deleted file mode 100644 index d147a36d7024007ef2461883dd6c1907582ee08e..0000000000000000000000000000000000000000 --- a/spaces/Sapphire-356/Video2MC/common/camera.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) 2018-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
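# Note: the recipe above decodes with greedy CTC decoding (frame-wise argmax,
# collapse repeated labels, drop blanks), delegated to
# sb.decoders.ctc_greedy_decode. A minimal NumPy sketch of that rule for a
# single utterance is given here for reference; `log_probs` of shape
# (time, vocab) and `blank_id` are illustrative inputs, not objects from the
# recipe itself.
import numpy as np

def ctc_greedy_decode_single(log_probs, blank_id=0):
    """Return the collapsed label sequence for one utterance."""
    best_path = np.argmax(log_probs, axis=-1)        # frame-wise argmax
    decoded, prev = [], None
    for token in best_path:
        if token != prev and token != blank_id:      # collapse repeats, skip blanks
            decoded.append(int(token))
        prev = token
    return decoded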
-# - -import numpy as np -import torch - -from common.quaternion import qrot, qinverse -from common.utils import wrap - - -def normalize_screen_coordinates(X, w, h): - assert X.shape[-1] == 2 - - # Normalize so that [0, w] is mapped to [-1, 1], while preserving the aspect ratio - return X / w * 2 - [1, h / w] - - -def normalize_screen_coordinates_new(X, w, h): - assert X.shape[-1] == 2 - - return (X - (w / 2, h / 2)) / (w / 2, h / 2) - - -def image_coordinates_new(X, w, h): - assert X.shape[-1] == 2 - - # Reverse camera frame normalization - return (X * (w / 2, h / 2)) + (w / 2, h / 2) - - -def image_coordinates(X, w, h): - assert X.shape[-1] == 2 - - # Reverse camera frame normalization - return (X + [1, h / w]) * w / 2 - - -def world_to_camera(X, R, t): - Rt = wrap(qinverse, R) # Invert rotation - return wrap(qrot, np.tile(Rt, (*X.shape[:-1], 1)), X - t) # Rotate and translate - - -def camera_to_world(X, R, t): - return wrap(qrot, np.tile(R, (*X.shape[:-1], 1)), X) + t - - -def project_to_2d(X, camera_params): - """ - Project 3D points to 2D using the Human3.6M camera projection function. - This is a differentiable and batched reimplementation of the original MATLAB script. - - Arguments: - X -- 3D points in *camera space* to transform (N, *, 3) - camera_params -- intrinsic parameteres (N, 2+2+3+2=9) - focal length / principal point / radial_distortion / tangential_distortion - """ - assert X.shape[-1] == 3 - assert len(camera_params.shape) == 2 - assert camera_params.shape[-1] == 9 - assert X.shape[0] == camera_params.shape[0] - - while len(camera_params.shape) < len(X.shape): - camera_params = camera_params.unsqueeze(1) - - f = camera_params[..., :2] # focal lendgth - c = camera_params[..., 2:4] # center principal point - k = camera_params[..., 4:7] - p = camera_params[..., 7:] - - XX = torch.clamp(X[..., :2] / X[..., 2:], min=-1, max=1) - r2 = torch.sum(XX[..., :2] ** 2, dim=len(XX.shape) - 1, keepdim=True) - - radial = 1 + torch.sum(k * torch.cat((r2, r2 ** 2, r2 ** 3), dim=len(r2.shape) - 1), dim=len(r2.shape) - 1, keepdim=True) - tan = torch.sum(p * XX, dim=len(XX.shape) - 1, keepdim=True) - - XXX = XX * (radial + tan) + p * r2 - - return f * XXX + c - - -def project_to_2d_linear(X, camera_params): - """ - 使用linear parameters is a little difference for use linear and no-linear parameters - Project 3D points to 2D using only linear parameters (focal length and principal point). - - Arguments: - X -- 3D points in *camera space* to transform (N, *, 3) - camera_params -- intrinsic parameteres (N, 2+2+3+2=9) - """ - assert X.shape[-1] == 3 - assert len(camera_params.shape) == 2 - assert camera_params.shape[-1] == 9 - assert X.shape[0] == camera_params.shape[0] - - while len(camera_params.shape) < len(X.shape): - camera_params = camera_params.unsqueeze(1) - - f = camera_params[..., :2] - c = camera_params[..., 2:4] - - XX = torch.clamp(X[..., :2] / X[..., 2:], min=-1, max=1) - - return f * XX + c diff --git a/spaces/SaulLu/test-demo/quantization/quant.py b/spaces/SaulLu/test-demo/quantization/quant.py deleted file mode 100644 index 8b6ae2344051edc8f1ced3bcb4b02be10075c2f3..0000000000000000000000000000000000000000 --- a/spaces/SaulLu/test-demo/quantization/quant.py +++ /dev/null @@ -1,144 +0,0 @@ -# Quantization reduces a bit representation to less bits for efficient storage or computation. -# Most floating point data types have a mapping from a bit representation, e.g. 
0010 = 2 to a floating -# point representation 2 -> 2 / max(0010) = 2/15 = 0.133333 -# As such, we can represent a floating point quantization a mapping from integers to floating point values, e.g. -# [0, 1, 2, 3] -> [-1.0, -0.25, 0.25 , 1.0] -import numpy as np -from scipy.spatial.distance import cdist - -index = np.array([0, 1, 2, 3, 4, 5, 6, 7]) -values = np.linspace(-1.0, 1.0, 8) # 3-bit linear quantization -print('quantization values:', values) - -# To quantize an input distribution we first need to normalize its range into the range of the quantization values, in this case [-1.0, 1.0] -# We can do this through division by the abolute maximum value if our distribution is roughly symmetric (most distribution in deep learning are noramlly distributed) - -rand_inputs = np.random.randn(1024, 1024).astype(np.float32) - -absmax = np.max(np.abs(rand_inputs)) -normed = rand_inputs / absmax -print('normalized min and max range', np.min(normed), np.max(normed)) - -# The next step is to round the input value to the closest quantization value. -# This can be done by performing a binary search of each element of the normalized input tensor with respect to the sorted values array: -# In this case, we simply compute the distance between all values and find the closest directly. - -dist = cdist(normed.flatten().reshape(-1, 1), values.reshape(-1, 1)) -closest_idx = np.argmin(dist, 1).reshape(rand_inputs.shape) - -val, count = np.unique(closest_idx, return_counts=True) -print('Values:', val) -print('Count:', count) - -# Closest index now represents the quantized 3 bit representation (4 different values). We can use this representation to store the data efficiently. - - -# ==================DEQUANTIZATION======================== -# To dequantize the tensor we reverse the operations the we did -# 1. lookup the values corresponding to the 3-bit index -# 2. Denormalize by multipying by absmax - -dequant = values[closest_idx]*absmax -# mean absolute error: -error = np.abs(dequant-rand_inputs).mean() -print(f'Absolute linear 3-bit quantization error: {error:.4f}') - -# This yields an error of about 0.34 per value. We can do better with non-linear quantization. - -# ==================NON-LINEAR QUANTIZATION======================== -# In non-linear quantization the distance between quantization values is not always equal. -# This allows us to allocate more values to regions of high density. For example, the normal distribution has many values around 0. -# This can reduce the overall error in the distribution. -index = np.array([0, 1, 2, 3, 4, 5, 6, 7]) -values = np.array([-1.0, -0.5, -0.25, -0.075, 0.075, 0.25, 0.5, 1.0]) - -dist = cdist(normed.flatten().reshape(-1, 1), values.reshape(-1, 1)) -closest_idx = np.argmin(dist, 1).reshape(rand_inputs.shape) - -val, count = np.unique(closest_idx, return_counts=True) -print('Values:', val) -print('Count:', count) - -dequant = values[closest_idx]*absmax -error = np.abs(dequant-rand_inputs).mean() -print(f'Absolute non-linear 3-bit quantization error: {error:.4f}') - -# dynamic quantization -# Adaptive from: https://github.com/facebookresearch/bitsandbytes/blob/main/bitsandbytes/functional.py -def create_dynamic_map(signed=True, n=7): - ''' - Creates the dynamic quantiztion map. - The dynamic data type is made up of a dynamic exponent and - fraction. As the exponent increase from 0 to -7 the number - of bits available for the fraction shrinks. 
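# Note: the absmax quantization walked through above (normalize by the absolute
# maximum, snap each value to the nearest code-book entry, dequantize by the
# inverse mapping) can be folded into two small helpers. This is a compact
# sketch of the same scheme; the function names and the use of np.digitize with
# midpoint boundaries are illustrative choices, not part of the original script.
import numpy as np

def quantize_absmax(x, values):
    """Map x onto the closest entries of a sorted 1-D code book `values`."""
    absmax = np.max(np.abs(x))
    normed = x / absmax
    # Midpoints between adjacent code-book values define nearest-neighbour boundaries.
    boundaries = (values[:-1] + values[1:]) / 2.0
    idx = np.digitize(normed, boundaries)
    return idx.astype(np.uint8), absmax

def dequantize_absmax(idx, absmax, values):
    return values[idx] * absmax

codebook = np.linspace(-1.0, 1.0, 8)                 # 3-bit linear code book
x = np.random.randn(1024, 1024).astype(np.float32)
idx, absmax = quantize_absmax(x, codebook)
err = np.abs(dequantize_absmax(idx, absmax, codebook) - x).mean()
print(f"mean absolute error: {err:.4f}")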
- This is a generalization of the dynamic type where a certain - number of the bits and be reserved for the linear quantization - region (the fraction). n determines the maximum number of - exponent bits. - For more details see - (8-Bit Approximations for Parallelism in Deep Learning)[https://arxiv.org/abs/1511.04561] - ''' - - data = [] - # these are additional items that come from the case - # where all the exponent bits are zero and no - # indicator bit is present - additional_items = 2**(7-n)-1 - if not signed: additional_items = 2*additional_items - for i in range(n): - fraction_items = 2**(i+7-n)+1 if signed else 2**(i+7-n+1)+1 - boundaries = np.linspace(0.1, 1, fraction_items) - means = (boundaries[:-1]+boundaries[1:])/2.0 - data += ((10**(-(n-1)+i))*means).tolist() - if signed: - data += (-(10**(-(n-1)+i))*means).tolist() - - if additional_items > 0: - boundaries = np.linspace(0.1, 1, additional_items+1) - means = (boundaries[:-1]+boundaries[1:])/2.0 - data += ((10**(-(n-1)+i))*means).tolist() - if signed: - data += (-(10**(-(n-1)+i))*means).tolist() - - data.append(0) - data.append(1.0) - data.sort() - return np.array(data) - -import time - -values = create_dynamic_map(signed=True) - -t0 = time.time() -dist = cdist(normed.flatten().reshape(-1, 1), values.reshape(-1, 1)) -closest_idx = np.argmin(dist, 1).reshape(rand_inputs.shape) -quant_time = time.time()-t0 - -dequant = values[closest_idx]*absmax -error = np.abs(dequant-rand_inputs).mean() -print(f'Absolute dynamic 8-bit quantization error: {error:.4f}') -print(f'Total time taken: {quant_time:.4f} seconds.') - -# This yields an error as low as 0.012. We could do even better when we use block-wise quantization. -# But performing block-wise quantization without optimized code is a bit slow. We can use the bitsandbytes library to do this quickly. - -import torch -import bitsandbytes.functional as F - -rand_inputs = torch.from_numpy(rand_inputs) -t0 = time.time() -quant_values, quant_state = F.quantize_blockwise(rand_inputs) -quant_time = time.time()-t0 -dequant_values = F.dequantize_blockwise(quant_values, quant_state) - -error = torch.abs(dequant_values-rand_inputs).mean().item() -print(f'Absolute dynamic block-wise 8-bit quantization error: {error:.4f}') -print(f'Total time taken (CPU): {quant_time:.4f} seconds.') - -rand_inputs = rand_inputs.cuda() -t0 = time.time() -quant_values, quant_state = F.quantize_blockwise(rand_inputs) -quant_time = time.time()-t0 -print(f'Total time taken (GPU): {quant_time:.4f} seconds.') - - diff --git a/spaces/SeViLA/SeViLA/lavis/models/sevila_models/sevila.py b/spaces/SeViLA/SeViLA/lavis/models/sevila_models/sevila.py deleted file mode 100644 index 8ebe984195a2a9cf0597c9c6577d779b79194b22..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/lavis/models/sevila_models/sevila.py +++ /dev/null @@ -1,1015 +0,0 @@ -""" - Copyright (c) 2023, salesforce.com, inc. - All rights reserved. 
- SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" -import logging - -import copy -import torch -import torch.nn as nn -from torch.cuda.amp import autocast as autocast -from transformers import T5TokenizerFast, BertTokenizer - -from lavis.common.registry import registry -from lavis.models.blip2_models.blip2 import Blip2Base, disabled_train -from lavis.models.blip2_models.modeling_t5 import T5Config, T5ForConditionalGeneration - -@registry.register_model("sevila") -class SeViLA(Blip2Base): - """ - BLIP2 T5 model. - Supported model types: - - pretrain_flant5xl: pretrained model with FlanT5-XL - - pretrain_flant5xxl: pretrained model with FlanT5-XXL - - caption_coco_flant5xl: fintuned image captioning model with FlanT5-XL - Usage: - >>> from lavis.models import load_model - >>> model = load_model("blip2_t5", "pretrain_flant5xl") - """ - PRETRAINED_MODEL_CONFIG_DICT = { - "pretrain_flant5xl": "configs/models/blip2/blip2_pretrain_flant5xl.yaml", - "pretrain_flant5xxl": "configs/models/blip2/blip2_pretrain_flant5xxl.yaml", - "caption_coco_flant5xl": "configs/models/blip2/blip2_caption_flant5xl.yaml", - } - - def __init__( self, img_size=224, drop_path_rate=0, - use_grad_checkpoint=False, vit_precision="fp16", freeze_vit=True, - num_query_token=32, t5_model="google/flan-t5-xl", prompt="", - max_txt_len=32, frame_num=8, answer_num=5, apply_lemmatizer=False, task='qa'): - """ - apply_lemmatizer: when set to True, postprocess predict_answers() result with lemmas. - """ - super().__init__() - - self.task = task - - # vision backbone - self.visual_encoder, self.ln_vision, self.ln_vision_loc = self.init_vision_encoder_sevila( - img_size, drop_path_rate, use_grad_checkpoint, vit_precision) - - # freeze ViT - if freeze_vit: - for name, param in self.visual_encoder.named_parameters(): - param.requires_grad = False - self.visual_encoder = self.visual_encoder.eval() - self.visual_encoder.train = disabled_train - logging.info("freeze vision encoder") - - # text backbone - self.t5_tokenizer = T5TokenizerFast.from_pretrained(t5_model) - t5_config = T5Config.from_pretrained(t5_model) - t5_config.dense_act_fn = "gelu" - self.t5_model = T5ForConditionalGeneration.from_pretrained( - t5_model, config=t5_config) - - # freeze T5 - for name, param in self.t5_model.named_parameters(): - param.requires_grad = False - param.data = param.data.bfloat16() - - # Q-Former for Answerer - self.Qformer, self.query_tokens = self.init_Qformer( - num_query_token, self.visual_encoder.num_features) - self.Qformer.cls = None - self.Qformer.bert.embeddings.word_embeddings = None - self.Qformer.bert.embeddings.position_embeddings = None - for layer in self.Qformer.bert.encoder.layer: - layer.output = None - layer.intermediate = None - self.num_query_token = num_query_token - self.t5_proj = nn.Linear( - self.Qformer.config.hidden_size, self.t5_model.config.hidden_size) - - # Q-Former for Localizer - if 'loc' in task: - self.Qformer_loc, self.query_tokens_loc = self.init_Qformer( - num_query_token, self.visual_encoder.num_features) - - self.Qformer_loc.cls = None - self.Qformer_loc.bert.embeddings.word_embeddings = None - self.Qformer_loc.bert.embeddings.position_embeddings = None - for layer in self.Qformer_loc.bert.encoder.layer: - layer.output = None - layer.intermediate = None - self.t5_proj_loc = nn.Linear( - self.Qformer_loc.config.hidden_size, self.t5_model.config.hidden_size - ) - - self.max_txt_len = 77 - answer_id = [71, 272, 
205, 309, 262] # A B C D E - self.answer_id = answer_id[:answer_num] - self.yes_id, self.no_id = 4273, 150 - - self._apply_lemmatizer = apply_lemmatizer - self._lemmatizer = None - - self.frame_num = frame_num - self.ANS_MAP = {'A':0, 'B':1, 'C':2, 'D':3, 'E':4} - self.frame_prefix = ['Frame: '] - self.vid_prefix = ['Frame {}: '.format(str(i+1)) for i in range(frame_num)] - - - if 'freeze_qa' in task: - for name, param in self.Qformer.named_parameters(): - param.requires_grad = False - self.query_tokens.requires_grad = False - self.t5_proj.requires_grad = False - - if 'freeze_loc' in task: - for name, param in self.Qformer_loc.named_parameters(): - param.requires_grad = False - self.query_tokens_loc.requires_grad = False - self.t5_proj_loc.requires_grad = False - - def forward(self, samples, - use_nucleus_sampling=False, - num_beams=5, max_length=30, - min_length=1, top_p=0.9, - repetition_penalty=1.0, length_penalty=1.0, - num_captions=1, temperature=1,): - - image = samples["video"] - - b, t, c, w, h = image.shape - image = image.reshape(-1, c, w, h) - image_embeds = self.visual_encoder(image) - _, n, _ = image_embeds.shape - image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to(image.device) # bt n c - - # Localizer self-refinement - if 'train_loc' in self.task: - - # ========= Generate pseudo labels by frozen answerer ============ - with torch.no_grad(): - - image_embeds_, image_atts_ = image_embeds.detach().clone(), image_atts.detach().clone() - image_embeds_ = self.ln_vision(image_embeds_) - - query_tokens_qa = self.query_tokens.expand(image_embeds_.shape[0], -1, -1) - query_output_qa = self.Qformer.bert( - query_embeds=query_tokens_qa, encoder_hidden_states=image_embeds_, - encoder_attention_mask=image_atts_, return_dict=True) - inputs_t5_qa = self.t5_proj(query_output_qa.last_hidden_state) - atts_t5_qa = torch.ones(inputs_t5_qa.size()[:-1], dtype=torch.long).to(image.device) - text_input_qa = samples['qa_input'] - answer = samples['qa_output'] - ans_idx = [self.ANS_MAP[a[-1]] for a in answer] - - with torch.cuda.amp.autocast(dtype=torch.bfloat16): - # Frame Prefix - frame_prefix = self.t5_tokenizer( - self.frame_prefix, padding="longest", add_special_tokens=False, - truncation=True, max_length=self.max_txt_len, return_tensors="pt", - ).to(image.device) # - frame_prefix_id = torch.repeat_interleave(frame_prefix.input_ids, b*t, 0) - frame_prefix_mask = torch.repeat_interleave(frame_prefix.attention_mask, b*t, 0) - # Question, options input - input_tokens_qa = self.t5_tokenizer( - text_input_qa, padding="longest", truncation=True, - max_length=self.max_txt_len, return_tensors="pt").to(image.device) - input_ids_qa = torch.repeat_interleave(input_tokens_qa.input_ids, t, 0) - input_attention_mask_qa = torch.repeat_interleave(input_tokens_qa.attention_mask, t, 0) - - # Output target - output_tokens_qa = self.t5_tokenizer( - answer, padding="longest", truncation=True, - max_length=self.max_txt_len, return_tensors="pt").to(image.device) - targets_qa = output_tokens_qa.input_ids.masked_fill( - output_tokens_qa.input_ids == self.t5_tokenizer.pad_token_id, -100) - output_tokens_mask_qa = torch.repeat_interleave(output_tokens_qa.attention_mask, t, dim=0) - targets_qa = torch.repeat_interleave(targets_qa, t, dim=0) - - # input for QA - frame_predix_embed = self.t5_model.encoder.embed_tokens(frame_prefix_id) - inputs_embeds_qa = self.t5_model.encoder.embed_tokens(input_ids_qa) - inputs_embeds_qa = torch.cat([frame_predix_embed, inputs_t5_qa, inputs_embeds_qa], dim=1) - 
encoder_atts_qa = torch.cat([frame_prefix_mask, atts_t5_qa, input_attention_mask_qa], dim=1) - - outputs_embed_qa = self.t5_model( - inputs_embeds=inputs_embeds_qa, attention_mask=encoder_atts_qa, - decoder_attention_mask=output_tokens_mask_qa, return_dict=True, labels=targets_qa) - pred_logits_qa = outputs_embed_qa.logits.detach() - pred_logits_qa = pred_logits_qa[:, 1, self.answer_id] # b*t, 5 - pred_ans = torch.argmax(pred_logits_qa, dim=-1) - pred_ans = pred_ans.reshape(b, -1) # b, t - # print('pred_ans', pred_ans) - pseudo_label = [] - for i, preds in enumerate(pred_ans): - for p in preds: - if p == ans_idx[i]: - pseudo_label.append('yes') - else: - pseudo_label.append('no') - # ================================================================ - - # ============== Train localizer with pseudo labels ================= - text_input_loc = samples['loc_input'] - query_tokens_loc = self.query_tokens_loc.expand(image_embeds.shape[0], -1, -1) - image_embeds = self.ln_vision_loc(image_embeds) - - query_output_loc = self.Qformer_loc.bert( - query_embeds=query_tokens_loc, encoder_hidden_states=image_embeds, - encoder_attention_mask=image_atts, return_dict=True) # bt, n, c - inputs_t5_loc = self.t5_proj_loc(query_output_loc.last_hidden_state) # bt, n, c - atts_t5_loc = torch.ones(inputs_t5_loc.size()[:-1], dtype=torch.long).to(image.device) - with torch.cuda.amp.autocast(dtype=torch.bfloat16): - frame_prefix = self.t5_tokenizer( - self.frame_prefix, padding="longest", add_special_tokens=False, - truncation=True, max_length=self.max_txt_len, return_tensors="pt").to(image.device) - frame_prefix_id = torch.repeat_interleave(frame_prefix.input_ids, b*t, 0) - frame_prefix_mask = torch.repeat_interleave(frame_prefix.attention_mask, b*t, 0) - frame_predix_embed = self.t5_model.encoder.embed_tokens(frame_prefix_id) - - input_tokens_loc = self.t5_tokenizer( - text_input_loc, padding="longest", truncation=True, - max_length=self.max_txt_len, return_tensors="pt").to(image.device) - input_ids_loc = torch.repeat_interleave(input_tokens_loc.input_ids, t, 0) - input_attention_mask_loc = torch.repeat_interleave(input_tokens_loc.attention_mask, t, 0) - inputs_embeds_loc = self.t5_model.encoder.embed_tokens(input_ids_loc) - - inputs_embeds_loc = torch.cat([frame_predix_embed, inputs_t5_loc, inputs_embeds_loc], dim=1) - encoder_atts_loc = torch.cat([frame_prefix_mask, atts_t5_loc, input_attention_mask_loc], dim=1) - - output_tokens_loc = self.t5_tokenizer( - pseudo_label, padding="longest", truncation=True, - max_length=self.max_txt_len, return_tensors="pt").to(image.device) - targets_loc = output_tokens_loc.input_ids.masked_fill( - output_tokens_loc.input_ids == self.t5_tokenizer.pad_token_id, -100) - output_tokens_loc_mask = output_tokens_loc.attention_mask - - outputs_loc = self.t5_model( - inputs_embeds=inputs_embeds_loc, attention_mask=encoder_atts_loc, - decoder_attention_mask=output_tokens_loc_mask, - return_dict=True, labels=targets_loc) - loss = outputs_loc.loss - - return {"loss": loss} - - # Finetune answerer with localizer - elif 'train_qa_with_loc' in self.task: - # frame selection - with torch.no_grad(): - image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to(image.device) # bt n c - image_embeds_, image_atts_ = image_embeds.detach().clone(), image_atts.detach().clone() - image_embeds_ = self.ln_vision_loc(image_embeds_) - - text_input_loc = samples['loc_input'] - query_tokens_loc = self.query_tokens_loc.expand(image_embeds_.shape[0], -1, -1) - query_output_loc = self.Qformer_loc.bert( 
- query_embeds=query_tokens_loc, encoder_hidden_states=image_embeds_, - encoder_attention_mask=image_atts_, return_dict=True) - inputs_t5_loc = self.t5_proj_loc(query_output_loc.last_hidden_state) - - atts_t5_loc = torch.ones(inputs_t5_loc.size()[:-1], dtype=torch.long).to(image.device) - with torch.cuda.amp.autocast(dtype=torch.bfloat16): - - frame_prefix = self.t5_tokenizer( - self.frame_prefix, padding="longest", add_special_tokens=False, - truncation=True, max_length=self.max_txt_len, return_tensors="pt").to(image.device) - frame_prefix_id = torch.repeat_interleave(frame_prefix.input_ids, b*t, 0) - frame_prefix_mask = torch.repeat_interleave(frame_prefix.attention_mask, b*t, 0) - frame_predix_embed = self.t5_model.encoder.embed_tokens(frame_prefix_id) - input_tokens_loc = self.t5_tokenizer( - text_input_loc, padding="longest", truncation=True, - max_length=self.max_txt_len, return_tensors="pt").to(image.device) - input_ids_loc = torch.repeat_interleave(input_tokens_loc.input_ids, t, 0) - input_attention_mask_loc = torch.repeat_interleave(input_tokens_loc.attention_mask, t, 0) - inputs_embeds_loc = self.t5_model.encoder.embed_tokens(input_ids_loc) - inputs_embeds_loc = torch.cat([frame_predix_embed, inputs_t5_loc, inputs_embeds_loc], dim=1) - encoder_atts_loc = torch.cat([frame_prefix_mask, atts_t5_loc, input_attention_mask_loc], dim=1) - - outputs_loc = self.t5_model.generate( - inputs_embeds=inputs_embeds_loc, attention_mask=encoder_atts_loc, - do_sample=use_nucleus_sampling, top_p=top_p, temperature=temperature, num_beams=1, - max_new_tokens=max_length, min_length=min_length, repetition_penalty=repetition_penalty, - length_penalty=length_penalty, num_return_sequences=num_captions, - return_dict_in_generate=True, output_hidden_states=True, output_scores=True) - - pred_logits_loc = outputs_loc.scores[0] - loc_yes = pred_logits_loc[:, self.yes_id] - loc_yes = loc_yes.reshape(b, -1) - - text_input_qa = samples['qa_input'] - answer = samples['qa_output'] # Option A ... 
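# Note: the localizer scoring above reduces the generated logits to one "yes"
# logit per frame (loc_yes of shape batch x time); the lines that follow keep
# the top-k scoring frames and restore their temporal order before gathering
# their visual features. A vectorised sketch of that selection step is given
# here; `loc_yes`, `frame_feats` (batch, time, n, c) and `k` are illustrative
# stand-ins for the tensors used in this method.
import torch

def select_top_frames(loc_yes, frame_feats, k):
    topk_idx = torch.topk(loc_yes, k, dim=-1).indices       # (batch, k)
    topk_idx, _ = torch.sort(topk_idx, dim=-1)               # restore temporal order
    b, t, n, c = frame_feats.shape
    gather_idx = topk_idx.view(b, k, 1, 1).expand(-1, -1, n, c)
    return torch.gather(frame_feats, 1, gather_idx)          # (batch, k, n, c)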
- select_frames_idx = torch.topk(loc_yes, self.frame_num, dim=-1).indices.tolist() - sorted_frames_idx = [] - image_embeds = self.ln_vision(image_embeds) - image_embeds = image_embeds.reshape(b, t, n, -1) - for frames in select_frames_idx: - sorted_frames_idx.append(sorted(frames)) - select_frames = [] - for i, fs in enumerate(sorted_frames_idx): - video = [] - for j, f in enumerate(fs): - video.append(image_embeds[i][f]) - video = torch.stack(video, dim=0) # 4, n , -1 - select_frames.append(video) - - select_frames = torch.stack(select_frames, dim=0) # b 4, n , -1 - select_frames = select_frames.reshape(-1, select_frames.shape[-2], select_frames.shape[-1]) - image_atts = torch.ones(select_frames.size()[:-1], dtype=torch.long).to(image.device) # bt n c - query_tokens_qa = self.query_tokens.expand(select_frames.shape[0], -1, -1) - query_output_qa = self.Qformer.bert( - query_embeds=query_tokens_qa, encoder_hidden_states=select_frames, - encoder_attention_mask=image_atts, return_dict=True) - inputs_t5_qa = self.t5_proj(query_output_qa.last_hidden_state) - inputs_t5_qa = inputs_t5_qa.reshape(b, -1, inputs_t5_qa.shape[-2], inputs_t5_qa.shape[-1]) - atts_t5_qa = torch.ones(inputs_t5_qa.size()[:-1], dtype=torch.long).to(image.device) - - with torch.cuda.amp.autocast(dtype=torch.bfloat16): - vid_prefix = self.t5_tokenizer( - self.vid_prefix, padding="longest", add_special_tokens=False, - truncation=True, max_length=self.max_txt_len, return_tensors="pt",).to(image.device) # - vid_prefix_id = torch.repeat_interleave(vid_prefix.input_ids.unsqueeze(0), b, 0) - vid_prefix_mask = torch.repeat_interleave(vid_prefix.attention_mask.unsqueeze(0), b, 0) - vid_prefix_embed = self.t5_model.encoder.embed_tokens(vid_prefix_id) # b t n_word c - - inputs_t5_qa = torch.cat([vid_prefix_embed, inputs_t5_qa], dim=2) # b, t, n_word + m, c - atts_t5_qa = torch.cat([vid_prefix_mask, atts_t5_qa], dim=2) # b, t, n_word + m - inputs_t5_qa = inputs_t5_qa.reshape(b, -1, inputs_t5_qa.shape[-1]) - atts_t5_qa = atts_t5_qa.reshape(b, -1) - - input_tokens_qa = self.t5_tokenizer( - text_input_qa, padding="longest", truncation=True, - max_length=self.max_txt_len, return_tensors="pt").to(image.device) - inputs_embeds_qa = self.t5_model.encoder.embed_tokens(input_tokens_qa.input_ids) - inputs_embeds_qa = torch.cat([inputs_t5_qa, inputs_embeds_qa], dim=1) - encoder_atts_qa = torch.cat([atts_t5_qa, input_tokens_qa.attention_mask], dim=1) - - output_tokens_qa = self.t5_tokenizer( - answer, padding="longest", truncation=True, - max_length=self.max_txt_len, return_tensors="pt").to(image.device) - targets_qa = output_tokens_qa.input_ids.masked_fill( - output_tokens_qa.input_ids == self.t5_tokenizer.pad_token_id, -100) - output_tokens_mask_qa = output_tokens_qa.attention_mask - - outputs_qa = self.t5_model( - inputs_embeds=inputs_embeds_qa, attention_mask=encoder_atts_qa, - decoder_attention_mask=output_tokens_mask_qa, return_dict=True, labels=targets_qa) - loss = outputs_qa.loss - - return {"loss": loss} - - # finetune answerer with random frames - elif 'loc' not in self.task or 'train_qa_wo_loc' in self.task: - #pass - query_tokens_qa = self.query_tokens.expand(image_embeds.shape[0], -1, -1) - image_embeds = self.ln_vision(image_embeds) - - query_output_qa = self.Qformer.bert( - query_embeds=query_tokens_qa, encoder_hidden_states=image_embeds, - encoder_attention_mask=image_atts, return_dict=True) - inputs_t5_qa = self.t5_proj(query_output_qa.last_hidden_state) - text_input_qa = samples['qa_input'] - answer = samples['qa_output'] - - with 
torch.cuda.amp.autocast(dtype=torch.bfloat16): - # Frame Prefix - if 'qa_vid' not in self.task: - atts_t5_qa = torch.ones(inputs_t5_qa.size()[:-1], dtype=torch.long).to(image.device) - frame_prefix = self.t5_tokenizer( - self.frame_prefix, padding="longest", add_special_tokens=False, - truncation=True, max_length=self.max_txt_len,return_tensors="pt", - ).to(image.device) - frame_prefix_id = torch.repeat_interleave(frame_prefix.input_ids, b*t, 0) - frame_prefix_mask = torch.repeat_interleave(frame_prefix.attention_mask, b*t, 0) - # Question, Options input - input_tokens_qa = self.t5_tokenizer( - text_input_qa, padding="longest", truncation=True, - max_length=self.max_txt_len, return_tensors="pt").to(image.device) - input_ids_qa = torch.repeat_interleave(input_tokens_qa.input_ids, t, 0) - input_attention_mask_qa = torch.repeat_interleave(input_tokens_qa.attention_mask, t, 0) - - # Output target - output_tokens_qa = self.t5_tokenizer( - answer, padding="longest", truncation=True, - max_length=self.max_txt_len, return_tensors="pt").to(image.device) - targets_qa = output_tokens_qa.input_ids.masked_fill( - output_tokens_qa.input_ids == self.t5_tokenizer.pad_token_id, -100) - output_tokens_mask_qa = torch.repeat_interleave(output_tokens_qa.attention_mask, t, dim=0) - targets_qa = torch.repeat_interleave(targets_qa, t, dim=0) - - # input for QA - frame_predix_embed = self.t5_model.encoder.embed_tokens(frame_prefix_id) - inputs_embeds_qa = self.t5_model.encoder.embed_tokens(input_ids_qa) - inputs_embeds_qa = torch.cat([frame_predix_embed, inputs_t5_qa, inputs_embeds_qa], dim=1) - encoder_atts_qa = torch.cat([frame_prefix_mask, atts_t5_qa, input_attention_mask_qa], dim=1) - else: - vid_prefix = self.t5_tokenizer( - self.vid_prefix, padding="longest", add_special_tokens=False, - truncation=True, max_length=self.max_txt_len, return_tensors="pt",).to(image.device) # - vid_prefix_id = torch.repeat_interleave(vid_prefix.input_ids.unsqueeze(0), b, 0) - vid_prefix_mask = torch.repeat_interleave(vid_prefix.attention_mask.unsqueeze(0), b, 0) - vid_prefix_embed = self.t5_model.encoder.embed_tokens(vid_prefix_id) # b t n_word c - - inputs_t5_qa = inputs_t5_qa.reshape(b, t, inputs_t5_qa.shape[-2], -1) # b, t, m ,c - atts_t5_qa = torch.ones(inputs_t5_qa.size()[:-1], dtype=torch.long).to(image.device) - - inputs_t5_qa = torch.cat([vid_prefix_embed, inputs_t5_qa], dim=2) # b, t, n_word + m, c - atts_t5_qa = torch.cat([vid_prefix_mask, atts_t5_qa], dim=2) # b, t, n_word + m - inputs_t5_qa = inputs_t5_qa.reshape(b, -1, inputs_t5_qa.shape[-1]) - atts_t5_qa = atts_t5_qa.reshape(b, -1) - - input_tokens_qa = self.t5_tokenizer( - text_input_qa, padding="longest", truncation=True, - max_length=self.max_txt_len, return_tensors="pt").to(image.device) - inputs_embeds_qa = self.t5_model.encoder.embed_tokens(input_tokens_qa.input_ids) - inputs_embeds_qa = torch.cat([inputs_t5_qa, inputs_embeds_qa], dim=1) - encoder_atts_qa = torch.cat([atts_t5_qa, input_tokens_qa.attention_mask], dim=1) - - output_tokens_qa = self.t5_tokenizer( - answer, padding="longest", truncation=True, - max_length=self.max_txt_len, return_tensors="pt").to(image.device) - targets_qa = output_tokens_qa.input_ids.masked_fill( - output_tokens_qa.input_ids == self.t5_tokenizer.pad_token_id, -100) - output_tokens_mask_qa = output_tokens_qa.attention_mask - - outputs_qa = self.t5_model( - inputs_embeds=inputs_embeds_qa, attention_mask=encoder_atts_qa, - decoder_attention_mask=output_tokens_mask_qa, return_dict=True, labels=targets_qa) - loss = outputs_qa.loss - 
- return {"loss": loss} - - - @torch.no_grad() - def generate(self, - samples, - use_nucleus_sampling=False, - num_beams=5, max_length=30, - min_length=1, top_p=0.9, - repetition_penalty=1.0, length_penalty=1.0, - num_captions=1, temperature=1,): - """ - Args: - samples (dict): A dictionary containing the following keys: - - image (torch.Tensor): A tensor of shape (batch_size, 3, H, W) - use_nucleus_sampling (bool): Whether to use nucleus sampling. If False, use top-k sampling. - num_beams (int): Number of beams for beam search. 1 means no beam search. - max_length (int): The maximum length of the sequence to be generated. - min_length (int): The minimum length of the sequence to be generated. - top_p (float): The cumulative probability for nucleus sampling. - repetition_penalty (float): The parameter for repetition penalty. 1.0 means no penalty. - num_captions (int): Number of captions to be generated for each image. - Returns: - captions (list): A list of strings of length batch_size * num_captions. - """ - out = {} - image, qid = samples["video"], samples['question_id'] - text_input_qa, answer = samples['qa_input'], samples['qa_output'] - - # uniform sampling - if 'loc' not in self.task or 'uni_eval' in self.task: - b, t, c, w, h = image.shape - image = image.reshape(-1, c, w, h) - with torch.cuda.amp.autocast(enabled=(self.device != torch.device("cpu"))): - image_embeds = self.ln_vision(self.visual_encoder(image)) # bt, n, c - _, n, _ = image_embeds.shape - image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to(image.device) # bt n c - - query_tokens_qa = self.query_tokens.expand(image_embeds.shape[0], -1, -1) - query_output_qa = self.Qformer.bert( - query_embeds=query_tokens_qa, encoder_hidden_states=image_embeds, - encoder_attention_mask=image_atts, return_dict=True) - inputs_t5_qa = self.t5_proj(query_output_qa.last_hidden_state) - - with torch.cuda.amp.autocast(dtype=torch.bfloat16): - # Frame Prefix - if 'vid' not in self.task: - atts_t5_qa = torch.ones(inputs_t5_qa.size()[:-1], dtype=torch.long).to(image.device) - frame_prefix = self.t5_tokenizer( - self.frame_prefix, padding="longest", add_special_tokens=False, - truncation=True, max_length=self.max_txt_len, return_tensors="pt",).to(image.device) # - frame_prefix_id = torch.repeat_interleave(frame_prefix.input_ids, b*t, 0) - frame_prefix_mask = torch.repeat_interleave(frame_prefix.attention_mask, b*t, 0) - input_tokens_qa = self.t5_tokenizer( - text_input_qa, padding="longest", truncation=True, - max_length=self.max_txt_len, return_tensors="pt").to(image.device) - input_ids_qa = torch.repeat_interleave(input_tokens_qa.input_ids, t, 0) - input_attention_mask_qa = torch.repeat_interleave(input_tokens_qa.attention_mask, t, 0) - - # input for QA - frame_predix_embed = self.t5_model.encoder.embed_tokens(frame_prefix_id) - inputs_embeds_qa = self.t5_model.encoder.embed_tokens(input_ids_qa) - inputs_embeds_qa = torch.cat([frame_predix_embed, inputs_t5_qa, inputs_embeds_qa], dim=1) - encoder_atts_qa = torch.cat([frame_prefix_mask, atts_t5_qa, input_attention_mask_qa], dim=1) - - elif 'qa_vid' in self.task: - vid_prefix = self.t5_tokenizer( - self.vid_prefix, padding="longest", add_special_tokens=False, - truncation=True, max_length=self.max_txt_len, return_tensors="pt",).to(image.device) # - vid_prefix_id = torch.repeat_interleave(vid_prefix.input_ids.unsqueeze(0), b, 0) - vid_prefix_mask = torch.repeat_interleave(vid_prefix.attention_mask.unsqueeze(0), b, 0) - vid_prefix_embed = 
self.t5_model.encoder.embed_tokens(vid_prefix_id) # b t n_word c - - inputs_t5_qa = inputs_t5_qa.reshape(b, t, inputs_t5_qa.shape[-2], -1) # b, t, m ,c - atts_t5_qa = torch.ones(inputs_t5_qa.size()[:-1], dtype=torch.long).to(image.device) - - inputs_t5_qa = torch.cat([vid_prefix_embed, inputs_t5_qa], dim=2) # b, t, n_word + m, c - atts_t5_qa = torch.cat([vid_prefix_mask, atts_t5_qa], dim=2) # b, t, n_word + m - inputs_t5_qa = inputs_t5_qa.reshape(b, -1, inputs_t5_qa.shape[-1]) - atts_t5_qa = atts_t5_qa.reshape(b, -1) - - input_tokens_qa = self.t5_tokenizer( - text_input_qa, padding="longest", truncation=True, - max_length=self.max_txt_len, return_tensors="pt").to(image.device) - inputs_embeds_qa = self.t5_model.encoder.embed_tokens(input_tokens_qa.input_ids) - inputs_embeds_qa = torch.cat([inputs_t5_qa, inputs_embeds_qa], dim=1) - encoder_atts_qa = torch.cat([atts_t5_qa, input_tokens_qa.attention_mask], dim=1) - - outputs_qa = self.t5_model.generate( - inputs_embeds=inputs_embeds_qa, attention_mask=encoder_atts_qa, - do_sample=use_nucleus_sampling, top_p=top_p, - temperature=temperature, num_beams=1, - max_new_tokens=max_length, min_length=min_length, - repetition_penalty=repetition_penalty, length_penalty=length_penalty, - num_return_sequences=num_captions, return_dict_in_generate=True, - output_hidden_states=True, output_scores=True) - try: - pred_logits_qa = outputs_qa.scores[1] - except: - pred_logits_qa = outputs_qa.scores[0] - pred_logits_qa = pred_logits_qa[:, self.answer_id] # b, 5 - pred_ans = torch.argmax(pred_logits_qa, dim=-1).cpu().tolist() - - # inference with localizer - else: - - b, t, c, w, h = image.shape - image = image.reshape(-1, c, w, h) - with torch.cuda.amp.autocast(enabled=(self.device != torch.device("cpu"))): - image_embeds = self.visual_encoder(image) # bt, n, c - - _, n, _ = image_embeds.shape - image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to(image.device) # bt n c - image_embeds_, image_atts_ = image_embeds.detach().clone(), image_atts.detach().clone() - image_embeds_ = self.ln_vision_loc(image_embeds_) - - text_input_loc = samples['loc_input'] # Q + Prompt: Is this a good frame can answer the question? 
- query_tokens_loc = self.query_tokens_loc.expand(image_embeds_.shape[0], -1, -1) - query_output_loc = self.Qformer_loc.bert( - query_embeds=query_tokens_loc, encoder_hidden_states=image_embeds_, - encoder_attention_mask=image_atts_, return_dict=True) - inputs_t5_loc = self.t5_proj_loc(query_output_loc.last_hidden_state) - - atts_t5_loc = torch.ones(inputs_t5_loc.size()[:-1], dtype=torch.long).to(image.device) - with torch.cuda.amp.autocast(dtype=torch.bfloat16): - - frame_prefix = self.t5_tokenizer( - self.frame_prefix, padding="longest", add_special_tokens=False, - truncation=True, max_length=self.max_txt_len, return_tensors="pt").to(image.device) # - #print('frame_prefix 1', frame_prefix.input_ids.shape) 8, 4 - frame_prefix_id = torch.repeat_interleave(frame_prefix.input_ids, b*t, 0) - frame_prefix_mask = torch.repeat_interleave(frame_prefix.attention_mask, b*t, 0) - frame_predix_embed = self.t5_model.encoder.embed_tokens(frame_prefix_id) - input_tokens_loc = self.t5_tokenizer( - text_input_loc, padding="longest", truncation=True, - max_length=self.max_txt_len, return_tensors="pt").to(image.device) - #print('input_ids_loc.input_ids', input_tokens_loc.input_ids) - input_ids_loc = torch.repeat_interleave(input_tokens_loc.input_ids, t, 0) - #print('input_ids_loc', input_ids_loc) - input_attention_mask_loc = torch.repeat_interleave(input_tokens_loc.attention_mask, t, 0) - inputs_embeds_loc = self.t5_model.encoder.embed_tokens(input_ids_loc) - inputs_embeds_loc = torch.cat([frame_predix_embed, inputs_t5_loc, inputs_embeds_loc], dim=1) - encoder_atts_loc = torch.cat([frame_prefix_mask, atts_t5_loc, input_attention_mask_loc], dim=1) - - outputs_loc = self.t5_model.generate( - inputs_embeds=inputs_embeds_loc, attention_mask=encoder_atts_loc, - do_sample=use_nucleus_sampling, top_p=top_p, temperature=temperature, num_beams=1, - max_new_tokens=max_length, min_length=min_length, repetition_penalty=repetition_penalty, - length_penalty=length_penalty, num_return_sequences=num_captions, - return_dict_in_generate=True, output_hidden_states=True, output_scores=True) - - pred_logits_loc = outputs_loc.scores[0] - loc_yes = pred_logits_loc[:, self.yes_id] - loc_yes = loc_yes.reshape(b, -1) - if 'qa_vid' in self.task: - select_frames_idx = torch.topk(loc_yes, self.frame_num, dim=-1).indices.tolist() - sorted_frames_idx = [] - image_embeds = self.ln_vision(image_embeds) - image_embeds = image_embeds.reshape(b, t, n, -1) - for frames in select_frames_idx: - sorted_frames_idx.append(sorted(frames)) - out['frame_idx'] = sorted_frames_idx - select_frames = [] - for i, fs in enumerate(sorted_frames_idx): - video = [] - for j, f in enumerate(fs): - video.append(image_embeds[i][f]) - video = torch.stack(video, dim=0) - select_frames.append(video) - - select_frames = torch.stack(select_frames, dim=0) # b 4, n , -1 - select_frames = select_frames.reshape(-1, select_frames.shape[-2], select_frames.shape[-1]) - image_atts = torch.ones(select_frames.size()[:-1], dtype=torch.long).to(image.device) # bt n c - query_tokens_qa = self.query_tokens.expand(select_frames.shape[0], -1, -1) - query_output_qa = self.Qformer.bert( - query_embeds=query_tokens_qa, encoder_hidden_states=select_frames, - encoder_attention_mask=image_atts, return_dict=True) - inputs_t5_qa = self.t5_proj(query_output_qa.last_hidden_state) - inputs_t5_qa = inputs_t5_qa.reshape(b, -1, inputs_t5_qa.shape[-2], inputs_t5_qa.shape[-1]) - atts_t5_qa = torch.ones(inputs_t5_qa.size()[:-1], dtype=torch.long).to(image.device) - - vid_prefix = self.t5_tokenizer( - 
self.vid_prefix, padding="longest", add_special_tokens=False, - truncation=True, max_length=self.max_txt_len, return_tensors="pt",).to(image.device) # - vid_prefix_id = torch.repeat_interleave(vid_prefix.input_ids.unsqueeze(0), b, 0) - vid_prefix_mask = torch.repeat_interleave(vid_prefix.attention_mask.unsqueeze(0), b, 0) - vid_prefix_embed = self.t5_model.encoder.embed_tokens(vid_prefix_id) # b t n_word c - - inputs_t5_qa = torch.cat([vid_prefix_embed, inputs_t5_qa], dim=2) # b, t, n_word + m, c - atts_t5_qa = torch.cat([vid_prefix_mask, atts_t5_qa], dim=2) # b, t, n_word + m - inputs_t5_qa = inputs_t5_qa.reshape(b, -1, inputs_t5_qa.shape[-1]) - atts_t5_qa = atts_t5_qa.reshape(b, -1) - - input_tokens_qa = self.t5_tokenizer( - text_input_qa, padding="longest", truncation=True, - max_length=self.max_txt_len, return_tensors="pt").to(image.device) - inputs_embeds_qa = self.t5_model.encoder.embed_tokens(input_tokens_qa.input_ids) - inputs_embeds_qa = torch.cat([inputs_t5_qa, inputs_embeds_qa], dim=1) - encoder_atts_qa = torch.cat([atts_t5_qa, input_tokens_qa.attention_mask], dim=1) - - else: - select_frames_idx = torch.argmax(loc_yes, -1) - select_frames = [] - image_embeds = self.ln_vision(image_embeds) - image_embeds = image_embeds.reshape(b, t, n, -1) - for i, f in enumerate(select_frames_idx): - select_frames.append(image_embeds[i][f]) - - select_frames = torch.stack(select_frames, dim=0) - image_atts = torch.ones(select_frames.size()[:-1], dtype=torch.long).to(image.device) # bt n c - query_tokens_qa = self.query_tokens.expand(select_frames.shape[0], -1, -1) - query_output_qa = self.Qformer.bert( - query_embeds=query_tokens_qa, encoder_hidden_states=select_frames, - encoder_attention_mask=image_atts, return_dict=True) - inputs_t5_qa = self.t5_proj(query_output_qa.last_hidden_state) - atts_t5_qa = torch.ones(inputs_t5_qa.size()[:-1], dtype=torch.long).to(image.device) - - frame_prefix = self.t5_tokenizer( - self.frame_prefix, padding="longest", add_special_tokens=False, - truncation=True, max_length=self.max_txt_len, return_tensors="pt").to(image.device) # - frame_prefix_id = torch.repeat_interleave(frame_prefix.input_ids, b, 0) - frame_prefix_mask = torch.repeat_interleave(frame_prefix.attention_mask, b, 0) - - input_tokens_qa = self.t5_tokenizer( - text_input_qa, padding="longest", truncation=True, - max_length=self.max_txt_len, return_tensors="pt").to(image.device) - - frame_predix_embed = self.t5_model.encoder.embed_tokens(frame_prefix_id) - inputs_embeds_qa = self.t5_model.encoder.embed_tokens(input_tokens_qa.input_ids) - - inputs_embeds_qa = torch.cat([frame_predix_embed, inputs_t5_qa, inputs_embeds_qa], dim=1) - encoder_atts_qa = torch.cat([frame_prefix_mask, atts_t5_qa, input_tokens_qa.attention_mask], dim=1) - - outputs_qa = self.t5_model.generate( - inputs_embeds=inputs_embeds_qa, attention_mask=encoder_atts_qa, - do_sample=use_nucleus_sampling, top_p=top_p, - temperature=temperature, num_beams=1, - max_new_tokens=max_length, min_length=min_length, - repetition_penalty=repetition_penalty, length_penalty=length_penalty, - num_return_sequences=num_captions, return_dict_in_generate=True, - output_hidden_states=True, output_scores=True) - pred_logits_qa = outputs_qa.scores[1] - pred_logits_qa = pred_logits_qa[:, self.answer_id] # b, 5 - pred_ans = torch.argmax(pred_logits_qa, dim=-1).cpu().tolist() - - out['output_text'] = pred_ans - if 'qa_vid' not in self.task: - out['temp_idx'] = [j for i in range(b) for j in range(t)] - out['answer'] = [a for a in answer for i in range(t)] - 
out['qid'] = [q for q in qid for i in range(t)] - else: - out['answer'] = answer - out['qid'] = qid - - return out - - @torch.no_grad() - def generate_demo(self, - video, - text_input_qa, - text_input_loc, - keyframe_num, - qid='demo', - use_nucleus_sampling=False, - num_beams=5, max_length=30, - min_length=1, top_p=0.9, - repetition_penalty=1.0, length_penalty=1.0, - num_captions=1, temperature=1,): - """ - Args: - samples (dict): A dictionary containing the following keys: - - image (torch.Tensor): A tensor of shape (batch_size, 3, H, W) - use_nucleus_sampling (bool): Whether to use nucleus sampling. If False, use top-k sampling. - num_beams (int): Number of beams for beam search. 1 means no beam search. - max_length (int): The maximum length of the sequence to be generated. - min_length (int): The minimum length of the sequence to be generated. - top_p (float): The cumulative probability for nucleus sampling. - repetition_penalty (float): The parameter for repetition penalty. 1.0 means no penalty. - num_captions (int): Number of captions to be generated for each image. - Returns: - captions (list): A list of strings of length batch_size * num_captions. - """ - out = {} - image, qid = video, qid - text_input_qa, answer = text_input_qa, 0 - vid_prefix = ['Frame {}: '.format(str(i+1)) for i in range(keyframe_num)] - # inference with localizer - - b, t, c, w, h = image.shape - image = image.reshape(-1, c, w, h) - with torch.cuda.amp.autocast(enabled=(self.device != torch.device("cpu"))): - image_embeds = self.visual_encoder(image) # bt, n, c - - _, n, _ = image_embeds.shape - image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to(image.device) # bt n c - image_embeds_, image_atts_ = image_embeds.detach().clone(), image_atts.detach().clone() - image_embeds_ = self.ln_vision_loc(image_embeds_) - - text_input_loc = text_input_loc # Q + Prompt: Is this a good frame can answer the question? 
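The demo path above builds its per-frame prompt prefixes as plain Python strings before they are tokenized further below. For orientation, this is what that list evaluates to when keyframe_num is 4 (the value is only an assumption for this illustration):

keyframe_num = 4
vid_prefix = ['Frame {}: '.format(str(i + 1)) for i in range(keyframe_num)]
print(vid_prefix)  # ['Frame 1: ', 'Frame 2: ', 'Frame 3: ', 'Frame 4: ']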
- query_tokens_loc = self.query_tokens_loc.expand(image_embeds_.shape[0], -1, -1) - query_output_loc = self.Qformer_loc.bert( - query_embeds=query_tokens_loc, encoder_hidden_states=image_embeds_, - encoder_attention_mask=image_atts_, return_dict=True) - inputs_t5_loc = self.t5_proj_loc(query_output_loc.last_hidden_state) - - atts_t5_loc = torch.ones(inputs_t5_loc.size()[:-1], dtype=torch.long).to(image.device) - with torch.cuda.amp.autocast(dtype=torch.bfloat16): - - frame_prefix = self.t5_tokenizer( - self.frame_prefix, padding="longest", add_special_tokens=False, - truncation=True, max_length=self.max_txt_len, return_tensors="pt").to(image.device) # - #print('frame_prefix 1', frame_prefix.input_ids.shape) 8, 4 - frame_prefix_id = torch.repeat_interleave(frame_prefix.input_ids, b*t, 0) - frame_prefix_mask = torch.repeat_interleave(frame_prefix.attention_mask, b*t, 0) - frame_predix_embed = self.t5_model.encoder.embed_tokens(frame_prefix_id) - input_tokens_loc = self.t5_tokenizer( - text_input_loc, padding="longest", truncation=True, - max_length=self.max_txt_len, return_tensors="pt").to(image.device) - #print('input_ids_loc.input_ids', input_tokens_loc.input_ids) - input_ids_loc = torch.repeat_interleave(input_tokens_loc.input_ids, t, 0) - #print('input_ids_loc', input_ids_loc) - input_attention_mask_loc = torch.repeat_interleave(input_tokens_loc.attention_mask, t, 0) - inputs_embeds_loc = self.t5_model.encoder.embed_tokens(input_ids_loc) - inputs_embeds_loc = torch.cat([frame_predix_embed, inputs_t5_loc, inputs_embeds_loc], dim=1) - encoder_atts_loc = torch.cat([frame_prefix_mask, atts_t5_loc, input_attention_mask_loc], dim=1) - - outputs_loc = self.t5_model.generate( - inputs_embeds=inputs_embeds_loc, attention_mask=encoder_atts_loc, - do_sample=use_nucleus_sampling, top_p=top_p, temperature=temperature, num_beams=1, - max_new_tokens=max_length, min_length=min_length, repetition_penalty=repetition_penalty, - length_penalty=length_penalty, num_return_sequences=num_captions, - return_dict_in_generate=True, output_hidden_states=True, output_scores=True) - - pred_logits_loc = outputs_loc.scores[0] - loc_yes = pred_logits_loc[:, self.yes_id] - loc_yes = loc_yes.reshape(b, -1) - if 'qa_vid' in self.task: - select_frames_idx = torch.topk(loc_yes, keyframe_num, dim=-1).indices.tolist() - sorted_frames_idx = [] - image_embeds = self.ln_vision(image_embeds) - image_embeds = image_embeds.reshape(b, t, n, -1) - for frames in select_frames_idx: - sorted_frames_idx.append(sorted(frames)) - out['frame_idx'] = sorted_frames_idx - select_frames = [] - for i, fs in enumerate(sorted_frames_idx): - video = [] - for j, f in enumerate(fs): - video.append(image_embeds[i][f]) - video = torch.stack(video, dim=0) - select_frames.append(video) - - select_frames = torch.stack(select_frames, dim=0) # b 4, n , -1 - select_frames = select_frames.reshape(-1, select_frames.shape[-2], select_frames.shape[-1]) - image_atts = torch.ones(select_frames.size()[:-1], dtype=torch.long).to(image.device) # bt n c - query_tokens_qa = self.query_tokens.expand(select_frames.shape[0], -1, -1) - query_output_qa = self.Qformer.bert( - query_embeds=query_tokens_qa, encoder_hidden_states=select_frames, - encoder_attention_mask=image_atts, return_dict=True) - inputs_t5_qa = self.t5_proj(query_output_qa.last_hidden_state) - inputs_t5_qa = inputs_t5_qa.reshape(b, -1, inputs_t5_qa.shape[-2], inputs_t5_qa.shape[-1]) - atts_t5_qa = torch.ones(inputs_t5_qa.size()[:-1], dtype=torch.long).to(image.device) - - vid_prefix = self.t5_tokenizer( - 
vid_prefix, padding="longest", add_special_tokens=False, - truncation=True, max_length=self.max_txt_len, return_tensors="pt",).to(image.device) # - vid_prefix_id = torch.repeat_interleave(vid_prefix.input_ids.unsqueeze(0), b, 0) - vid_prefix_mask = torch.repeat_interleave(vid_prefix.attention_mask.unsqueeze(0), b, 0) - vid_prefix_embed = self.t5_model.encoder.embed_tokens(vid_prefix_id) # b t n_word c - - inputs_t5_qa = torch.cat([vid_prefix_embed, inputs_t5_qa], dim=2) # b, t, n_word + m, c - atts_t5_qa = torch.cat([vid_prefix_mask, atts_t5_qa], dim=2) # b, t, n_word + m - inputs_t5_qa = inputs_t5_qa.reshape(b, -1, inputs_t5_qa.shape[-1]) - atts_t5_qa = atts_t5_qa.reshape(b, -1) - - input_tokens_qa = self.t5_tokenizer( - text_input_qa, padding="longest", truncation=True, - max_length=self.max_txt_len, return_tensors="pt").to(image.device) - inputs_embeds_qa = self.t5_model.encoder.embed_tokens(input_tokens_qa.input_ids) - inputs_embeds_qa = torch.cat([inputs_t5_qa, inputs_embeds_qa], dim=1) - encoder_atts_qa = torch.cat([atts_t5_qa, input_tokens_qa.attention_mask], dim=1) - - else: - select_frames_idx = torch.argmax(loc_yes, -1) - select_frames = [] - image_embeds = self.ln_vision(image_embeds) - image_embeds = image_embeds.reshape(b, t, n, -1) - for i, f in enumerate(select_frames_idx): - select_frames.append(image_embeds[i][f]) - - select_frames = torch.stack(select_frames, dim=0) - image_atts = torch.ones(select_frames.size()[:-1], dtype=torch.long).to(image.device) # bt n c - query_tokens_qa = self.query_tokens.expand(select_frames.shape[0], -1, -1) - query_output_qa = self.Qformer.bert( - query_embeds=query_tokens_qa, encoder_hidden_states=select_frames, - encoder_attention_mask=image_atts, return_dict=True) - inputs_t5_qa = self.t5_proj(query_output_qa.last_hidden_state) - atts_t5_qa = torch.ones(inputs_t5_qa.size()[:-1], dtype=torch.long).to(image.device) - - frame_prefix = self.t5_tokenizer( - self.frame_prefix, padding="longest", add_special_tokens=False, - truncation=True, max_length=self.max_txt_len, return_tensors="pt").to(image.device) # - frame_prefix_id = torch.repeat_interleave(frame_prefix.input_ids, b, 0) - frame_prefix_mask = torch.repeat_interleave(frame_prefix.attention_mask, b, 0) - - input_tokens_qa = self.t5_tokenizer( - text_input_qa, padding="longest", truncation=True, - max_length=self.max_txt_len, return_tensors="pt").to(image.device) - - frame_predix_embed = self.t5_model.encoder.embed_tokens(frame_prefix_id) - inputs_embeds_qa = self.t5_model.encoder.embed_tokens(input_tokens_qa.input_ids) - - inputs_embeds_qa = torch.cat([frame_predix_embed, inputs_t5_qa, inputs_embeds_qa], dim=1) - encoder_atts_qa = torch.cat([frame_prefix_mask, atts_t5_qa, input_tokens_qa.attention_mask], dim=1) - - outputs_qa = self.t5_model.generate( - inputs_embeds=inputs_embeds_qa, attention_mask=encoder_atts_qa, - do_sample=use_nucleus_sampling, top_p=top_p, - temperature=temperature, num_beams=1, - max_new_tokens=max_length, min_length=min_length, - repetition_penalty=repetition_penalty, length_penalty=length_penalty, - num_return_sequences=num_captions, return_dict_in_generate=True, - output_hidden_states=True, output_scores=True) - pred_logits_qa = outputs_qa.scores[1] - pred_logits_qa = pred_logits_qa[:, self.answer_id] # b, 5 - pred_ans = torch.argmax(pred_logits_qa, dim=-1).cpu().tolist() - - out['output_text'] = pred_ans - if 'qa_vid' not in self.task: - out['temp_idx'] = [j for i in range(b) for j in range(t)] - # out['answer'] = [a for a in answer for i in range(t)] - 
out['qid'] = [q for q in qid for i in range(t)] - else: - # out['answer'] = answer - out['qid'] = qid - - return out - - def predict_answers( - self, - samples, - num_beams=5, - inference_method="generate", - max_len=10, - min_len=1, - num_ans_candidates=128, - answer_list=None, - prompt="", - length_penalty=-1, - **kwargs - ): - image = samples["image"] - with torch.cuda.amp.autocast(enabled=(self.device != torch.device("cpu"))): - image_embeds = self.ln_vision(self.visual_encoder(image)) - image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to( - image.device - ) - - query_tokens = self.query_tokens.expand(image_embeds.shape[0], -1, -1) - query_output = self.Qformer.bert( - query_embeds=query_tokens, - encoder_hidden_states=image_embeds, - encoder_attention_mask=image_atts, - return_dict=True, - ) - - inputs_t5 = self.t5_proj(query_output.last_hidden_state) - atts_t5 = torch.ones(inputs_t5.size()[:-1], dtype=torch.long).to(image.device) - - if isinstance(samples["text_input"], str): - samples["text_input"] = [samples["text_input"]] - if prompt: - text_input = [prompt.format(question) for question in samples["text_input"]] - else: - text_input = samples["text_input"] - - input_tokens = self.t5_tokenizer( - text_input, padding="longest", return_tensors="pt" - ).to(image.device) - - encoder_atts = torch.cat([atts_t5, input_tokens.attention_mask], dim=1) - - device_type = "cuda" if "cuda" in str(self.device) else "cpu" - with torch.amp.autocast(device_type=device_type, dtype=torch.bfloat16): - inputs_embeds = self.t5_model.encoder.embed_tokens(input_tokens.input_ids) - inputs_embeds = torch.cat([inputs_t5, inputs_embeds], dim=1) - - outputs = self.t5_model.generate( - inputs_embeds=inputs_embeds, - attention_mask=encoder_atts, - do_sample=False, - num_beams=num_beams, - max_new_tokens=max_len, - min_length=min_len, - length_penalty=length_penalty, - ) - output_text = self.t5_tokenizer.batch_decode( - outputs, skip_special_tokens=True - ) - - if self._apply_lemmatizer: - output_text = self._lemmatize(output_text) - - return output_text - - def _lemmatize(self, answers): - def apply(answer): - doc = self.lemmatizer(answer) - - words = [] - for token in doc: - if token.pos_ in ["NOUN", "VERB"]: - words.append(token.lemma_) - else: - words.append(token.text) - answer = " ".join(words) - - return answer - - return [apply(answer) for answer in answers] - - @property - def lemmatizer(self): - if self._lemmatizer is None: - try: - import spacy - - self._lemmatizer = spacy.load("en_core_web_sm") - except ImportError: - logging.error( - """ - Please install spacy and en_core_web_sm model to apply lemmatization. 
- python -m spacy download en_core_web_sm - OR - import spacy.cli - spacy.cli.download("en_core_web_sm") - """ - ) - exit(1) - - return self._lemmatizer - - @classmethod - def from_config(cls, cfg): - img_size = cfg.get("image_size") - num_query_token = cfg.get("num_query_token") - t5_model = cfg.get("t5_model") - - drop_path_rate = cfg.get("drop_path_rate", 0) - use_grad_checkpoint = cfg.get("use_grad_checkpoint", False) - vit_precision = cfg.get("vit_precision", "fp16") - freeze_vit = cfg.get("freeze_vit", True) - - prompt = cfg.get("prompt", "") - max_txt_len = cfg.get("max_txt_len", 32) - frame_num = cfg.get("frame_num", 8) - answer_num = cfg.get("answer_num", 5) - apply_lemmatizer = cfg.get("apply_lemmatizer", False) - task = cfg.get("task", 'train_loc_freeze_qa') - - model = cls( - img_size=img_size, - drop_path_rate=drop_path_rate, - use_grad_checkpoint=use_grad_checkpoint, - vit_precision=vit_precision, - freeze_vit=freeze_vit, - num_query_token=num_query_token, - t5_model=t5_model, - prompt=prompt, - max_txt_len=max_txt_len, - apply_lemmatizer=apply_lemmatizer, - frame_num=frame_num, - answer_num=answer_num, - task=task, - ) - model.load_checkpoint_from_config(cfg) - # for sevila with qvh pretraining - # need load blip-2 q-former ckpt to q-former_loc - if 'loc' in task and 'qvh' not in task: - model.load_qformer_loc() - - return model \ No newline at end of file diff --git a/spaces/SeViLA/SeViLA/lavis/tasks/__init__.py b/spaces/SeViLA/SeViLA/lavis/tasks/__init__.py deleted file mode 100644 index b21df2af551cc4925d511030cafb1e2c8aaa59fb..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/lavis/tasks/__init__.py +++ /dev/null @@ -1,46 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -from lavis.common.registry import registry -from lavis.tasks.base_task import BaseTask -from lavis.tasks.captioning import CaptionTask -from lavis.tasks.image_text_pretrain import ImageTextPretrainTask -from lavis.tasks.multimodal_classification import ( - MultimodalClassificationTask, -) -from lavis.tasks.retrieval import RetrievalTask -from lavis.tasks.vqa import VQATask, GQATask, AOKVQATask, VideoQA, FrameQA -from lavis.tasks.vqa_reading_comprehension import VQARCTask, GQARCTask -from lavis.tasks.dialogue import DialogueTask - - -def setup_task(cfg): - assert "task" in cfg.run_cfg, "Task name must be provided." - - task_name = cfg.run_cfg.task - task = registry.get_task_class(task_name).setup_task(cfg=cfg) - assert task is not None, "Task {} not properly registered.".format(task_name) - - return task - - -__all__ = [ - "BaseTask", - "AOKVQATask", - "RetrievalTask", - "CaptionTask", - "VQATask", - "GQATask", - "VQARCTask", - "GQARCTask", - "MultimodalClassificationTask", - # "VisualEntailmentTask", - "VideoQA", - "FrameQA", - "ImageTextPretrainTask", - "DialogueTask", -] diff --git a/spaces/ServerX/PorcoDiaz/infer/lib/slicer2.py b/spaces/ServerX/PorcoDiaz/infer/lib/slicer2.py deleted file mode 100644 index 5b29ee262aa54045e807be2cffeb41687499ba58..0000000000000000000000000000000000000000 --- a/spaces/ServerX/PorcoDiaz/infer/lib/slicer2.py +++ /dev/null @@ -1,260 +0,0 @@ -import numpy as np - - -# This function is obtained from librosa. 
-def get_rms( - y, - frame_length=2048, - hop_length=512, - pad_mode="constant", -): - padding = (int(frame_length // 2), int(frame_length // 2)) - y = np.pad(y, padding, mode=pad_mode) - - axis = -1 - # put our new within-frame axis at the end for now - out_strides = y.strides + tuple([y.strides[axis]]) - # Reduce the shape on the framing axis - x_shape_trimmed = list(y.shape) - x_shape_trimmed[axis] -= frame_length - 1 - out_shape = tuple(x_shape_trimmed) + tuple([frame_length]) - xw = np.lib.stride_tricks.as_strided(y, shape=out_shape, strides=out_strides) - if axis < 0: - target_axis = axis - 1 - else: - target_axis = axis + 1 - xw = np.moveaxis(xw, -1, target_axis) - # Downsample along the target axis - slices = [slice(None)] * xw.ndim - slices[axis] = slice(0, None, hop_length) - x = xw[tuple(slices)] - - # Calculate power - power = np.mean(np.abs(x) ** 2, axis=-2, keepdims=True) - - return np.sqrt(power) - - -class Slicer: - def __init__( - self, - sr: int, - threshold: float = -40.0, - min_length: int = 5000, - min_interval: int = 300, - hop_size: int = 20, - max_sil_kept: int = 5000, - ): - if not min_length >= min_interval >= hop_size: - raise ValueError( - "The following condition must be satisfied: min_length >= min_interval >= hop_size" - ) - if not max_sil_kept >= hop_size: - raise ValueError( - "The following condition must be satisfied: max_sil_kept >= hop_size" - ) - min_interval = sr * min_interval / 1000 - self.threshold = 10 ** (threshold / 20.0) - self.hop_size = round(sr * hop_size / 1000) - self.win_size = min(round(min_interval), 4 * self.hop_size) - self.min_length = round(sr * min_length / 1000 / self.hop_size) - self.min_interval = round(min_interval / self.hop_size) - self.max_sil_kept = round(sr * max_sil_kept / 1000 / self.hop_size) - - def _apply_slice(self, waveform, begin, end): - if len(waveform.shape) > 1: - return waveform[ - :, begin * self.hop_size : min(waveform.shape[1], end * self.hop_size) - ] - else: - return waveform[ - begin * self.hop_size : min(waveform.shape[0], end * self.hop_size) - ] - - # @timeit - def slice(self, waveform): - if len(waveform.shape) > 1: - samples = waveform.mean(axis=0) - else: - samples = waveform - if samples.shape[0] <= self.min_length: - return [waveform] - rms_list = get_rms( - y=samples, frame_length=self.win_size, hop_length=self.hop_size - ).squeeze(0) - sil_tags = [] - silence_start = None - clip_start = 0 - for i, rms in enumerate(rms_list): - # Keep looping while frame is silent. - if rms < self.threshold: - # Record start of silent frames. - if silence_start is None: - silence_start = i - continue - # Keep looping while frame is not silent and silence start has not been recorded. - if silence_start is None: - continue - # Clear recorded silence start if interval is not enough or clip is too short - is_leading_silence = silence_start == 0 and i > self.max_sil_kept - need_slice_middle = ( - i - silence_start >= self.min_interval - and i - clip_start >= self.min_length - ) - if not is_leading_silence and not need_slice_middle: - silence_start = None - continue - # Need slicing. Record the range of silent frames to be removed. 
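Zooming out from the branch handling below, the slicer's overall contract is simple: feed it a mono or multi-channel waveform and it returns the non-silent chunks, using the framewise RMS helper above and the dB threshold from the constructor. A minimal usage sketch on synthetic audio, assuming it runs alongside the Slicer class defined in this module; the sample rate, durations and parameter values are illustrative assumptions:

import numpy as np

sr = 44100
audio = np.concatenate([
    0.5 * np.random.randn(sr),   # 1 s of noise (above the threshold)
    np.zeros(sr),                # 1 s of silence
    0.5 * np.random.randn(sr),   # 1 s of noise
]).astype(np.float32)

slicer = Slicer(sr=sr, threshold=-40.0, min_length=500, min_interval=300,
                hop_size=20, max_sil_kept=500)
chunks = slicer.slice(audio)
print(len(chunks))               # number of non-silent chunks kept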
- if i - silence_start <= self.max_sil_kept: - pos = rms_list[silence_start : i + 1].argmin() + silence_start - if silence_start == 0: - sil_tags.append((0, pos)) - else: - sil_tags.append((pos, pos)) - clip_start = pos - elif i - silence_start <= self.max_sil_kept * 2: - pos = rms_list[ - i - self.max_sil_kept : silence_start + self.max_sil_kept + 1 - ].argmin() - pos += i - self.max_sil_kept - pos_l = ( - rms_list[ - silence_start : silence_start + self.max_sil_kept + 1 - ].argmin() - + silence_start - ) - pos_r = ( - rms_list[i - self.max_sil_kept : i + 1].argmin() - + i - - self.max_sil_kept - ) - if silence_start == 0: - sil_tags.append((0, pos_r)) - clip_start = pos_r - else: - sil_tags.append((min(pos_l, pos), max(pos_r, pos))) - clip_start = max(pos_r, pos) - else: - pos_l = ( - rms_list[ - silence_start : silence_start + self.max_sil_kept + 1 - ].argmin() - + silence_start - ) - pos_r = ( - rms_list[i - self.max_sil_kept : i + 1].argmin() - + i - - self.max_sil_kept - ) - if silence_start == 0: - sil_tags.append((0, pos_r)) - else: - sil_tags.append((pos_l, pos_r)) - clip_start = pos_r - silence_start = None - # Deal with trailing silence. - total_frames = rms_list.shape[0] - if ( - silence_start is not None - and total_frames - silence_start >= self.min_interval - ): - silence_end = min(total_frames, silence_start + self.max_sil_kept) - pos = rms_list[silence_start : silence_end + 1].argmin() + silence_start - sil_tags.append((pos, total_frames + 1)) - # Apply and return slices. - if len(sil_tags) == 0: - return [waveform] - else: - chunks = [] - if sil_tags[0][0] > 0: - chunks.append(self._apply_slice(waveform, 0, sil_tags[0][0])) - for i in range(len(sil_tags) - 1): - chunks.append( - self._apply_slice(waveform, sil_tags[i][1], sil_tags[i + 1][0]) - ) - if sil_tags[-1][1] < total_frames: - chunks.append( - self._apply_slice(waveform, sil_tags[-1][1], total_frames) - ) - return chunks - - -def main(): - import os.path - from argparse import ArgumentParser - - import librosa - import soundfile - - parser = ArgumentParser() - parser.add_argument("audio", type=str, help="The audio to be sliced") - parser.add_argument( - "--out", type=str, help="Output directory of the sliced audio clips" - ) - parser.add_argument( - "--db_thresh", - type=float, - required=False, - default=-40, - help="The dB threshold for silence detection", - ) - parser.add_argument( - "--min_length", - type=int, - required=False, - default=5000, - help="The minimum milliseconds required for each sliced audio clip", - ) - parser.add_argument( - "--min_interval", - type=int, - required=False, - default=300, - help="The minimum milliseconds for a silence part to be sliced", - ) - parser.add_argument( - "--hop_size", - type=int, - required=False, - default=10, - help="Frame length in milliseconds", - ) - parser.add_argument( - "--max_sil_kept", - type=int, - required=False, - default=500, - help="The maximum silence length kept around the sliced clip, presented in milliseconds", - ) - args = parser.parse_args() - out = args.out - if out is None: - out = os.path.dirname(os.path.abspath(args.audio)) - audio, sr = librosa.load(args.audio, sr=None, mono=False) - slicer = Slicer( - sr=sr, - threshold=args.db_thresh, - min_length=args.min_length, - min_interval=args.min_interval, - hop_size=args.hop_size, - max_sil_kept=args.max_sil_kept, - ) - chunks = slicer.slice(audio) - if not os.path.exists(out): - os.makedirs(out) - for i, chunk in enumerate(chunks): - if len(chunk.shape) > 1: - chunk = chunk.T - soundfile.write( - 
os.path.join( - out, - f"%s_%d.wav" - % (os.path.basename(args.audio).rsplit(".", maxsplit=1)[0], i), - ), - chunk, - sr, - ) - - -if __name__ == "__main__": - main() diff --git a/spaces/SoAp9035/mistral-7b-fast-chat/README.md b/spaces/SoAp9035/mistral-7b-fast-chat/README.md deleted file mode 100644 index ca86d5f573430dae1dd59c4262403070fefb862e..0000000000000000000000000000000000000000 --- a/spaces/SoAp9035/mistral-7b-fast-chat/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Mistral Super Fast -emoji: 😻 -colorFrom: red -colorTo: yellow -sdk: gradio -sdk_version: 3.45.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/SpaceNMagic/OPEN_AI/Dockerfile b/spaces/SpaceNMagic/OPEN_AI/Dockerfile deleted file mode 100644 index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000 --- a/spaces/SpaceNMagic/OPEN_AI/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node:18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app - -WORKDIR /app - -RUN npm install - -COPY Dockerfile greeting.md* .env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/SpacesExamples/single_file_phx_bumblebee_ml/README.md b/spaces/SpacesExamples/single_file_phx_bumblebee_ml/README.md deleted file mode 100644 index a4c32b00c073d807e53c8ba7699e30666b51c84e..0000000000000000000000000000000000000000 --- a/spaces/SpacesExamples/single_file_phx_bumblebee_ml/README.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -title: Phoenix image classification in a single file -emoji: 📚 -colorFrom: red -colorTo: indigo -sdk: docker -fullWidth: true -pinned: false -duplicated_from: livebook-dev/single_file_phx_bumblebee_ml ---- - -# Phoenix image classification in a single file - -To deploy your own app, duplicate this Space and get started. - -## Acknowledgments - -This Space is based on the [single file Phoenix app for Fly.io](https://github.com/chrismccord/single_file_phx_bumblebee_ml) from @chrismccord. diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/external/qt_for_kernel.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/external/qt_for_kernel.py deleted file mode 100644 index 11e88625d1db4258561254ff7eda958c53efdccf..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/external/qt_for_kernel.py +++ /dev/null @@ -1,124 +0,0 @@ -""" Import Qt in a manner suitable for an IPython kernel. - -This is the import used for the `gui=qt` or `matplotlib=qt` initialization. - -Import Priority: - -if Qt has been imported anywhere else: - use that - -if matplotlib has been imported and doesn't support v2 (<= 1.0.1): - use PyQt4 @v1 - -Next, ask QT_API env variable - -if QT_API not set: - ask matplotlib what it's using. If Qt4Agg or Qt5Agg, then use the - version matplotlib is configured with - - else: (matplotlib said nothing) - # this is the default path - nobody told us anything - try in this order: - PyQt default version, PySide, PyQt5 -else: - use what QT_API says - - Note that %gui's implementation will always set a `QT_API`, see - `IPython.terminal.pt_inputhooks.get_inputhook_name_and_func` - -""" -# NOTE: This is no longer an external, third-party module, and should be -# considered part of IPython. 
For compatibility however, it is being kept in -# IPython/external. - -import os -import sys - -from IPython.external.qt_loaders import ( - load_qt, - loaded_api, - enum_factory, - # QT6 - QT_API_PYQT6, - QT_API_PYSIDE6, - # QT5 - QT_API_PYQT5, - QT_API_PYSIDE2, - # QT4 - QT_API_PYQT, - QT_API_PYSIDE, - # default - QT_API_PYQT_DEFAULT, -) - -_qt_apis = ( - # QT6 - QT_API_PYQT6, - QT_API_PYSIDE6, - # QT5 - QT_API_PYQT5, - QT_API_PYSIDE2, - # default - QT_API_PYQT_DEFAULT, -) - - -def matplotlib_options(mpl): - """Constraints placed on an imported matplotlib.""" - if mpl is None: - return - backend = mpl.rcParams.get('backend', None) - if backend == 'Qt4Agg': - mpqt = mpl.rcParams.get('backend.qt4', None) - if mpqt is None: - return None - if mpqt.lower() == 'pyside': - return [QT_API_PYSIDE] - elif mpqt.lower() == 'pyqt4': - return [QT_API_PYQT_DEFAULT] - elif mpqt.lower() == 'pyqt4v2': - return [QT_API_PYQT] - raise ImportError("unhandled value for backend.qt4 from matplotlib: %r" % - mpqt) - elif backend == 'Qt5Agg': - mpqt = mpl.rcParams.get('backend.qt5', None) - if mpqt is None: - return None - if mpqt.lower() == 'pyqt5': - return [QT_API_PYQT5] - raise ImportError("unhandled value for backend.qt5 from matplotlib: %r" % - mpqt) - -def get_options(): - """Return a list of acceptable QT APIs, in decreasing order of preference.""" - #already imported Qt somewhere. Use that - loaded = loaded_api() - if loaded is not None: - return [loaded] - - mpl = sys.modules.get("matplotlib", None) - - if mpl is not None and tuple(mpl.__version__.split(".")) < ("1", "0", "2"): - # 1.0.1 only supports PyQt4 v1 - return [QT_API_PYQT_DEFAULT] - - qt_api = os.environ.get('QT_API', None) - if qt_api is None: - #no ETS variable. Ask mpl, then use default fallback path - return matplotlib_options(mpl) or [ - QT_API_PYQT_DEFAULT, - QT_API_PYQT6, - QT_API_PYSIDE6, - QT_API_PYQT5, - QT_API_PYSIDE2, - ] - elif qt_api not in _qt_apis: - raise RuntimeError("Invalid Qt API %r, valid values are: %r" % - (qt_api, ', '.join(_qt_apis))) - else: - return [qt_api] - - -api_opts = get_options() -QtCore, QtGui, QtSvg, QT_API = load_qt(api_opts) -enum_helper = enum_factory(QT_API, QtCore) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/adodbapi/examples/db_print.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/adodbapi/examples/db_print.py deleted file mode 100644 index 3f5f9d5be50f5415bd54e9b8aea49911ac39b26a..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/adodbapi/examples/db_print.py +++ /dev/null @@ -1,72 +0,0 @@ -""" db_print.py -- a simple demo for ADO database reads.""" - -import sys - -import adodbapi.ado_consts as adc - -cmd_args = ("filename", "table_name") -if "help" in sys.argv: - print("possible settings keywords are:", cmd_args) - sys.exit() - -kw_args = {} # pick up filename and proxy address from command line (optionally) -for arg in sys.argv: - s = arg.split("=") - if len(s) > 1: - if s[0] in cmd_args: - kw_args[s[0]] = s[1] - -kw_args.setdefault( - "filename", "test.mdb" -) # assumes server is running from examples folder -kw_args.setdefault("table_name", "Products") # the name of the demo table - -# the server needs to select the provider based on his Python installation -provider_switch = ["provider", "Microsoft.ACE.OLEDB.12.0", "Microsoft.Jet.OLEDB.4.0"] - -# ------------------------ START HERE ------------------------------------- -# create the connection -constr = "Provider=%(provider)s;Data Source=%(filename)s" 
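The connection string above is a template whose %(...)s placeholders are filled in from kw_args, with the provider picked via the macro_is64bit switch during connect() below. That substitution is conceptually equivalent to ordinary Python %-formatting; a tiny sketch with an assumed provider value for illustration:

params = {"provider": "Microsoft.ACE.OLEDB.12.0", "filename": "test.mdb"}
template = "Provider=%(provider)s;Data Source=%(filename)s"
print(template % params)  # Provider=Microsoft.ACE.OLEDB.12.0;Data Source=test.mdb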
-import adodbapi as db - -con = db.connect(constr, kw_args, macro_is64bit=provider_switch) - -if kw_args["table_name"] == "?": - print("The tables in your database are:") - for name in con.get_table_names(): - print(name) -else: - # make a cursor on the connection - with con.cursor() as c: - # run an SQL statement on the cursor - sql = "select * from %s" % kw_args["table_name"] - print('performing query="%s"' % sql) - c.execute(sql) - - # check the results - print( - 'result rowcount shows as= %d. (Note: -1 means "not known")' % (c.rowcount,) - ) - print("") - print("result data description is:") - print(" NAME Type DispSize IntrnlSz Prec Scale Null?") - for d in c.description: - print( - ("%16s %-12s %8s %8d %4d %5d %s") - % (d[0], adc.adTypeNames[d[1]], d[2], d[3], d[4], d[5], bool(d[6])) - ) - print("") - print("str() of first five records are...") - - # get the results - db = c.fetchmany(5) - - # print them - for rec in db: - print(rec) - - print("") - print("repr() of next row is...") - print(repr(c.fetchone())) - print("") -con.close() diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/http.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/http.py deleted file mode 100644 index ca9dc54b215f7977970658250f23e3be137f1b3e..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/http.py +++ /dev/null @@ -1,70 +0,0 @@ -import http.server -import sys -from typing import Mapping, Tuple - -from . import __version__ -from .http_exceptions import HttpProcessingError as HttpProcessingError -from .http_parser import ( - HeadersParser as HeadersParser, - HttpParser as HttpParser, - HttpRequestParser as HttpRequestParser, - HttpResponseParser as HttpResponseParser, - RawRequestMessage as RawRequestMessage, - RawResponseMessage as RawResponseMessage, -) -from .http_websocket import ( - WS_CLOSED_MESSAGE as WS_CLOSED_MESSAGE, - WS_CLOSING_MESSAGE as WS_CLOSING_MESSAGE, - WS_KEY as WS_KEY, - WebSocketError as WebSocketError, - WebSocketReader as WebSocketReader, - WebSocketWriter as WebSocketWriter, - WSCloseCode as WSCloseCode, - WSMessage as WSMessage, - WSMsgType as WSMsgType, - ws_ext_gen as ws_ext_gen, - ws_ext_parse as ws_ext_parse, -) -from .http_writer import ( - HttpVersion as HttpVersion, - HttpVersion10 as HttpVersion10, - HttpVersion11 as HttpVersion11, - StreamWriter as StreamWriter, -) - -__all__ = ( - "HttpProcessingError", - "RESPONSES", - "SERVER_SOFTWARE", - # .http_writer - "StreamWriter", - "HttpVersion", - "HttpVersion10", - "HttpVersion11", - # .http_parser - "HeadersParser", - "HttpParser", - "HttpRequestParser", - "HttpResponseParser", - "RawRequestMessage", - "RawResponseMessage", - # .http_websocket - "WS_CLOSED_MESSAGE", - "WS_CLOSING_MESSAGE", - "WS_KEY", - "WebSocketReader", - "WebSocketWriter", - "ws_ext_gen", - "ws_ext_parse", - "WSMessage", - "WebSocketError", - "WSMsgType", - "WSCloseCode", -) - - -SERVER_SOFTWARE: str = "Python/{0[0]}.{0[1]} aiohttp/{1}".format( - sys.version_info, __version__ -) - -RESPONSES: Mapping[int, Tuple[str, str]] = http.server.BaseHTTPRequestHandler.responses diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/streams/tls.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/streams/tls.py deleted file mode 100644 index b494320434bb53dd89ccf7740922c675c2b51f18..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/streams/tls.py +++ /dev/null @@ -1,320 
+0,0 @@ -from __future__ import annotations - -import logging -import re -import ssl -from dataclasses import dataclass -from functools import wraps -from typing import Any, Callable, Mapping, Tuple, TypeVar - -from .. import ( - BrokenResourceError, - EndOfStream, - aclose_forcefully, - get_cancelled_exc_class, -) -from .._core._typedattr import TypedAttributeSet, typed_attribute -from ..abc import AnyByteStream, ByteStream, Listener, TaskGroup - -T_Retval = TypeVar("T_Retval") -_PCTRTT = Tuple[Tuple[str, str], ...] -_PCTRTTT = Tuple[_PCTRTT, ...] - - -class TLSAttribute(TypedAttributeSet): - """Contains Transport Layer Security related attributes.""" - - #: the selected ALPN protocol - alpn_protocol: str | None = typed_attribute() - #: the channel binding for type ``tls-unique`` - channel_binding_tls_unique: bytes = typed_attribute() - #: the selected cipher - cipher: tuple[str, str, int] = typed_attribute() - #: the peer certificate in dictionary form (see :meth:`ssl.SSLSocket.getpeercert` for more - #: information) - peer_certificate: dict[str, str | _PCTRTTT | _PCTRTT] | None = typed_attribute() - #: the peer certificate in binary form - peer_certificate_binary: bytes | None = typed_attribute() - #: ``True`` if this is the server side of the connection - server_side: bool = typed_attribute() - #: ciphers shared by the client during the TLS handshake (``None`` if this is the - #: client side) - shared_ciphers: list[tuple[str, str, int]] | None = typed_attribute() - #: the :class:`~ssl.SSLObject` used for encryption - ssl_object: ssl.SSLObject = typed_attribute() - #: ``True`` if this stream does (and expects) a closing TLS handshake when the stream is being - #: closed - standard_compatible: bool = typed_attribute() - #: the TLS protocol version (e.g. ``TLSv1.2``) - tls_version: str = typed_attribute() - - -@dataclass(eq=False) -class TLSStream(ByteStream): - """ - A stream wrapper that encrypts all sent data and decrypts received data. - - This class has no public initializer; use :meth:`wrap` instead. - All extra attributes from :class:`~TLSAttribute` are supported. - - :var AnyByteStream transport_stream: the wrapped stream - - """ - - transport_stream: AnyByteStream - standard_compatible: bool - _ssl_object: ssl.SSLObject - _read_bio: ssl.MemoryBIO - _write_bio: ssl.MemoryBIO - - @classmethod - async def wrap( - cls, - transport_stream: AnyByteStream, - *, - server_side: bool | None = None, - hostname: str | None = None, - ssl_context: ssl.SSLContext | None = None, - standard_compatible: bool = True, - ) -> TLSStream: - """ - Wrap an existing stream with Transport Layer Security. - - This performs a TLS handshake with the peer. - - :param transport_stream: a bytes-transporting stream to wrap - :param server_side: ``True`` if this is the server side of the connection, ``False`` if - this is the client side (if omitted, will be set to ``False`` if ``hostname`` has been - provided, ``False`` otherwise). Used only to create a default context when an explicit - context has not been provided. 
- :param hostname: host name of the peer (if host name checking is desired) - :param ssl_context: the SSLContext object to use (if not provided, a secure default will be - created) - :param standard_compatible: if ``False``, skip the closing handshake when closing the - connection, and don't raise an exception if the peer does the same - :raises ~ssl.SSLError: if the TLS handshake fails - - """ - if server_side is None: - server_side = not hostname - - if not ssl_context: - purpose = ( - ssl.Purpose.CLIENT_AUTH if server_side else ssl.Purpose.SERVER_AUTH - ) - ssl_context = ssl.create_default_context(purpose) - - # Re-enable detection of unexpected EOFs if it was disabled by Python - if hasattr(ssl, "OP_IGNORE_UNEXPECTED_EOF"): - ssl_context.options &= ~ssl.OP_IGNORE_UNEXPECTED_EOF - - bio_in = ssl.MemoryBIO() - bio_out = ssl.MemoryBIO() - ssl_object = ssl_context.wrap_bio( - bio_in, bio_out, server_side=server_side, server_hostname=hostname - ) - wrapper = cls( - transport_stream=transport_stream, - standard_compatible=standard_compatible, - _ssl_object=ssl_object, - _read_bio=bio_in, - _write_bio=bio_out, - ) - await wrapper._call_sslobject_method(ssl_object.do_handshake) - return wrapper - - async def _call_sslobject_method( - self, func: Callable[..., T_Retval], *args: object - ) -> T_Retval: - while True: - try: - result = func(*args) - except ssl.SSLWantReadError: - try: - # Flush any pending writes first - if self._write_bio.pending: - await self.transport_stream.send(self._write_bio.read()) - - data = await self.transport_stream.receive() - except EndOfStream: - self._read_bio.write_eof() - except OSError as exc: - self._read_bio.write_eof() - self._write_bio.write_eof() - raise BrokenResourceError from exc - else: - self._read_bio.write(data) - except ssl.SSLWantWriteError: - await self.transport_stream.send(self._write_bio.read()) - except ssl.SSLSyscallError as exc: - self._read_bio.write_eof() - self._write_bio.write_eof() - raise BrokenResourceError from exc - except ssl.SSLError as exc: - self._read_bio.write_eof() - self._write_bio.write_eof() - if ( - isinstance(exc, ssl.SSLEOFError) - or "UNEXPECTED_EOF_WHILE_READING" in exc.strerror - ): - if self.standard_compatible: - raise BrokenResourceError from exc - else: - raise EndOfStream from None - - raise - else: - # Flush any pending writes first - if self._write_bio.pending: - await self.transport_stream.send(self._write_bio.read()) - - return result - - async def unwrap(self) -> tuple[AnyByteStream, bytes]: - """ - Does the TLS closing handshake. 
- - :return: a tuple of (wrapped byte stream, bytes left in the read buffer) - - """ - await self._call_sslobject_method(self._ssl_object.unwrap) - self._read_bio.write_eof() - self._write_bio.write_eof() - return self.transport_stream, self._read_bio.read() - - async def aclose(self) -> None: - if self.standard_compatible: - try: - await self.unwrap() - except BaseException: - await aclose_forcefully(self.transport_stream) - raise - - await self.transport_stream.aclose() - - async def receive(self, max_bytes: int = 65536) -> bytes: - data = await self._call_sslobject_method(self._ssl_object.read, max_bytes) - if not data: - raise EndOfStream - - return data - - async def send(self, item: bytes) -> None: - await self._call_sslobject_method(self._ssl_object.write, item) - - async def send_eof(self) -> None: - tls_version = self.extra(TLSAttribute.tls_version) - match = re.match(r"TLSv(\d+)(?:\.(\d+))?", tls_version) - if match: - major, minor = int(match.group(1)), int(match.group(2) or 0) - if (major, minor) < (1, 3): - raise NotImplementedError( - f"send_eof() requires at least TLSv1.3; current " - f"session uses {tls_version}" - ) - - raise NotImplementedError( - "send_eof() has not yet been implemented for TLS streams" - ) - - @property - def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]: - return { - **self.transport_stream.extra_attributes, - TLSAttribute.alpn_protocol: self._ssl_object.selected_alpn_protocol, - TLSAttribute.channel_binding_tls_unique: self._ssl_object.get_channel_binding, - TLSAttribute.cipher: self._ssl_object.cipher, - TLSAttribute.peer_certificate: lambda: self._ssl_object.getpeercert(False), - TLSAttribute.peer_certificate_binary: lambda: self._ssl_object.getpeercert( - True - ), - TLSAttribute.server_side: lambda: self._ssl_object.server_side, - TLSAttribute.shared_ciphers: lambda: self._ssl_object.shared_ciphers() - if self._ssl_object.server_side - else None, - TLSAttribute.standard_compatible: lambda: self.standard_compatible, - TLSAttribute.ssl_object: lambda: self._ssl_object, - TLSAttribute.tls_version: self._ssl_object.version, - } - - -@dataclass(eq=False) -class TLSListener(Listener[TLSStream]): - """ - A convenience listener that wraps another listener and auto-negotiates a TLS session on every - accepted connection. - - If the TLS handshake times out or raises an exception, :meth:`handle_handshake_error` is - called to do whatever post-mortem processing is deemed necessary. - - Supports only the :attr:`~TLSAttribute.standard_compatible` extra attribute. - - :param Listener listener: the listener to wrap - :param ssl_context: the SSL context object - :param standard_compatible: a flag passed through to :meth:`TLSStream.wrap` - :param handshake_timeout: time limit for the TLS handshake - (passed to :func:`~anyio.fail_after`) - """ - - listener: Listener[Any] - ssl_context: ssl.SSLContext - standard_compatible: bool = True - handshake_timeout: float = 30 - - @staticmethod - async def handle_handshake_error(exc: BaseException, stream: AnyByteStream) -> None: - f""" - Handle an exception raised during the TLS handshake. - - This method does 3 things: - - #. Forcefully closes the original stream - #. Logs the exception (unless it was a cancellation exception) using the ``{__name__}`` - logger - #. 
Reraises the exception if it was a base exception or a cancellation exception - - :param exc: the exception - :param stream: the original stream - - """ - await aclose_forcefully(stream) - - # Log all except cancellation exceptions - if not isinstance(exc, get_cancelled_exc_class()): - logging.getLogger(__name__).exception("Error during TLS handshake") - - # Only reraise base exceptions and cancellation exceptions - if not isinstance(exc, Exception) or isinstance(exc, get_cancelled_exc_class()): - raise - - async def serve( - self, - handler: Callable[[TLSStream], Any], - task_group: TaskGroup | None = None, - ) -> None: - @wraps(handler) - async def handler_wrapper(stream: AnyByteStream) -> None: - from .. import fail_after - - try: - with fail_after(self.handshake_timeout): - wrapped_stream = await TLSStream.wrap( - stream, - ssl_context=self.ssl_context, - standard_compatible=self.standard_compatible, - ) - except BaseException as exc: - await self.handle_handshake_error(exc, stream) - else: - await handler(wrapped_stream) - - await self.listener.serve(handler_wrapper, task_group) - - async def aclose(self) -> None: - await self.listener.aclose() - - @property - def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]: - return { - TLSAttribute.standard_compatible: lambda: self.standard_compatible, - } diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/index/abstract.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/index/abstract.py deleted file mode 100644 index a04fd4d81a903056ca9d1ecd4eadbeedb9c99e98..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/index/abstract.py +++ /dev/null @@ -1,1164 +0,0 @@ -import copy -import logging -from abc import ABC, abstractmethod -from dataclasses import dataclass, field, replace -from typing import ( - TYPE_CHECKING, - Any, - Dict, - Generator, - Generic, - Iterable, - List, - Mapping, - Optional, - Sequence, - Tuple, - Type, - TypeVar, - Union, - cast, -) - -import numpy as np -from pydantic.error_wrappers import ValidationError -from typing_inspect import get_args, is_optional_type, is_union_type - -from docarray import BaseDoc, DocList -from docarray.array.any_array import AnyDocArray -from docarray.typing import ID, AnyTensor -from docarray.typing.tensor.abstract_tensor import AbstractTensor -from docarray.utils._internal._typing import is_tensor_union, safe_issubclass -from docarray.utils._internal.misc import import_library -from docarray.utils.find import ( - FindResult, - FindResultBatched, - SubindexFindResult, - _FindResult, - _FindResultBatched, -) - -if TYPE_CHECKING: - import tensorflow as tf # type: ignore - import torch - from pydantic.fields import ModelField - - from docarray.typing import TensorFlowTensor -else: - tf = import_library('tensorflow', raise_error=False) - if tf is not None: - from docarray.typing import TensorFlowTensor - torch = import_library('torch', raise_error=False) - -TSchema = TypeVar('TSchema', bound=BaseDoc) - - -def _raise_not_composable(name): - def _inner(self, *args, **kwargs): - raise NotImplementedError( - f'`{name}` is not usable through the query builder of this Document index ({type(self)}). ' - f'But you can call `{type(self)}.{name}()` directly.' - ) - - return _inner - - -def _raise_not_supported(name): - def _inner(self, *args, **kwargs): - raise NotImplementedError( - f'`{name}` is not usable through the query builder of this Document index ({type(self)}). 
' - ) - - return _inner - - -@dataclass -class _ColumnInfo: - docarray_type: Type - db_type: Any - n_dim: Optional[int] - config: Dict[str, Any] - - -class BaseDocIndex(ABC, Generic[TSchema]): - """Abstract class for all Document Stores""" - - # the BaseDoc that defines the schema of the store - # for subclasses this is filled automatically - _schema: Optional[Type[BaseDoc]] = None - - def __init__(self, db_config=None, subindex: bool = False, **kwargs): - if self._schema is None: - raise ValueError( - 'A DocumentIndex must be typed with a Document type.' - 'To do so, use the syntax: DocumentIndex[DocumentType]' - ) - if subindex: - - class _NewSchema(self._schema): # type: ignore - parent_id: Optional[ID] = None - - self._ori_schema = self._schema - self._schema = cast(Type[BaseDoc], _NewSchema) - - self._logger = logging.getLogger('docarray') - self._db_config = db_config or self.DBConfig(**kwargs) - if not isinstance(self._db_config, self.DBConfig): - raise ValueError(f'db_config must be of type {self.DBConfig}') - self._logger.info('DB config created') - self._runtime_config = self.RuntimeConfig() - self._logger.info('Runtime config created') - self._column_infos: Dict[str, _ColumnInfo] = self._create_column_infos( - self._schema - ) - self._is_subindex = subindex - self._subindices: Dict[str, BaseDocIndex] = {} - self._init_subindex() - - ############################################### - # Inner classes for query builder and configs # - # Subclasses must subclass & implement these # - ############################################### - - class QueryBuilder(ABC): - @abstractmethod - def build(self, *args, **kwargs) -> Any: - """Build the DB specific query object. - The DB specific implementation can leverage self._queries to do so. - The output of this should be able to be passed to execute_query(). - """ - ... - - # TODO support subindex in QueryBuilder - - # the methods below need to be implemented by subclasses - # If, in your subclass, one of these is not usable in a query builder, but - # can be called directly on the DocumentIndex, use `_raise_not_composable`. - # If the method is not supported _at all_, use `_raise_not_supported`. - find = abstractmethod(lambda *args, **kwargs: ...) - filter = abstractmethod(lambda *args, **kwargs: ...) - text_search = abstractmethod(lambda *args, **kwargs: ...) - find_batched = abstractmethod(lambda *args, **kwargs: ...) - filter_batched = abstractmethod(lambda *args, **kwargs: ...) - text_search_batched = abstractmethod(lambda *args, **kwargs: ...) - - @dataclass - class DBConfig(ABC): - index_name: Optional[str] = None - - @dataclass - class RuntimeConfig(ABC): - # default configurations for every column type - # a dictionary from a column type (DB specific) to a dictionary - # of default configurations for that type - # These configs are used if no configs are specified in the `Field(...)` - # of a field in the Document schema (`cls._schema`) - # Example: `default_column_config['VARCHAR'] = {'length': 255}` - default_column_config: Dict[Type, Dict[str, Any]] = field(default_factory=dict) - - @property - def index_name(self): - """Return the name of the index in the database.""" - ... - - ##################################### - # Abstract methods # - # Subclasses must implement these # - ##################################### - - @abstractmethod - def python_type_to_db_type(self, python_type: Type) -> Any: - """Map python type to database type. - Takes any python type and returns the corresponding database column type. 
- - :param python_type: a python type. - :return: the corresponding database column type, - or None if ``python_type`` is not supported. - """ - ... - - @abstractmethod - def _index(self, column_to_data: Dict[str, Generator[Any, None, None]]): - """index a document into the store""" - # `column_to_data` is a dictionary from column name to a generator - # that yields the data for that column. - # If you want to work directly on documents, you can implement index() instead - # If you implement index(), _index() only needs a dummy implementation. - ... - - @abstractmethod - def num_docs(self) -> int: - """Return the number of indexed documents""" - ... - - @abstractmethod - def _del_items(self, doc_ids: Sequence[str]): - """Delete Documents from the index. - - :param doc_ids: ids to delete from the Document Store - """ - ... - - @abstractmethod - def _get_items( - self, doc_ids: Sequence[str] - ) -> Union[Sequence[TSchema], Sequence[Dict[str, Any]]]: - """Get Documents from the index, by `id`. - If no document is found, a KeyError is raised. - - :param doc_ids: ids to get from the Document index - :return: Sequence of Documents, sorted corresponding to the order of `doc_ids`. Duplicate `doc_ids` can be omitted in the output. - """ - ... - - @abstractmethod - def execute_query(self, query: Any, *args, **kwargs) -> Any: - """ - Execute a query on the database. - - Can take two kinds of inputs: - - 1. A native query of the underlying database. This is meant as a passthrough so that you - can enjoy any functionality that is not available through the Document index API. - 2. The output of this Document index' `QueryBuilder.build()` method. - - :param query: the query to execute - :param args: positional arguments to pass to the query - :param kwargs: keyword arguments to pass to the query - :return: the result of the query - """ - ... - - @abstractmethod - def _find( - self, - query: np.ndarray, - limit: int, - search_field: str = '', - ) -> _FindResult: - """Find documents in the index - - :param query: query vector for KNN/ANN search. Has single axis. - :param limit: maximum number of documents to return per query - :param search_field: name of the field to search on - :return: a named tuple containing `documents` and `scores` - """ - # NOTE: in standard implementations, - # `search_field` is equal to the column name to search on - ... - - @abstractmethod - def _find_batched( - self, - queries: np.ndarray, - limit: int, - search_field: str = '', - ) -> _FindResultBatched: - """Find documents in the index - - :param queries: query vectors for KNN/ANN search. - Has shape (batch_size, vector_dim) - :param limit: maximum number of documents to return - :param search_field: name of the field to search on - :return: a named tuple containing `documents` and `scores` - """ - ... - - @abstractmethod - def _filter( - self, - filter_query: Any, - limit: int, - ) -> Union[DocList, List[Dict]]: - """Find documents in the index based on a filter query - - :param filter_query: the DB specific filter query to execute - :param limit: maximum number of documents to return - :return: a DocList containing the documents that match the filter query - """ - ... - - @abstractmethod - def _filter_batched( - self, - filter_queries: Any, - limit: int, - ) -> Union[List[DocList], List[List[Dict]]]: - """Find documents in the index based on multiple filter queries. - Each query is considered individually, and results are returned per query. 
- - :param filter_queries: the DB specific filter queries to execute - :param limit: maximum number of documents to return per query - :return: List of DocLists containing the documents that match the filter - queries - """ - ... - - @abstractmethod - def _text_search( - self, - query: str, - limit: int, - search_field: str = '', - ) -> _FindResult: - """Find documents in the index based on a text search query - - :param query: The text to search for - :param limit: maximum number of documents to return - :param search_field: name of the field to search on - :return: a named tuple containing `documents` and `scores` - """ - # NOTE: in standard implementations, - # `search_field` is equal to the column name to search on - ... - - @abstractmethod - def _text_search_batched( - self, - queries: Sequence[str], - limit: int, - search_field: str = '', - ) -> _FindResultBatched: - """Find documents in the index based on a text search query - - :param queries: The texts to search for - :param limit: maximum number of documents to return per query - :param search_field: name of the field to search on - :return: a named tuple containing `documents` and `scores` - """ - # NOTE: in standard implementations, - # `search_field` is equal to the column name to search on - ... - - #################################################### - # Optional overrides # - # Subclasses may or may not need to change these # - #################################################### - - def __getitem__( - self, key: Union[str, Sequence[str]] - ) -> Union[TSchema, DocList[TSchema]]: - """Get one or multiple Documents into the index, by `id`. - If no document is found, a KeyError is raised. - - :param key: id or ids to get from the Document index - """ - # normalize input - if isinstance(key, str): - return_singleton = True - key = [key] - else: - return_singleton = False - - # retrieve data - doc_sequence = self._get_items(key) - - # check data - if len(doc_sequence) == 0: - raise KeyError(f'No document with id {key} found') - - # retrieve nested data - for field_name, type_, _ in self._flatten_schema( - cast(Type[BaseDoc], self._schema) - ): - if issubclass(type_, AnyDocArray) and isinstance(doc_sequence[0], Dict): - for doc in doc_sequence: - self._get_subindex_doclist(doc, field_name) # type: ignore - - # cast output - if isinstance(doc_sequence, DocList): - out_docs: DocList[TSchema] = doc_sequence - elif isinstance(doc_sequence[0], Dict): - out_docs = self._dict_list_to_docarray(doc_sequence) # type: ignore - else: - docs_cls = DocList.__class_getitem__(cast(Type[BaseDoc], self._schema)) - out_docs = docs_cls(doc_sequence) - - return out_docs[0] if return_singleton else out_docs - - def __delitem__(self, key: Union[str, Sequence[str]]): - """Delete one or multiple Documents from the index, by `id`. - If no document is found, a KeyError is raised. - - :param key: id or ids to delete from the Document index - """ - self._logger.info(f'Deleting documents with id(s) {key} from the index') - if isinstance(key, str): - key = [key] - - # delete nested data - for field_name, type_, _ in self._flatten_schema( - cast(Type[BaseDoc], self._schema) - ): - if safe_issubclass(type_, AnyDocArray): - for doc_id in key: - nested_docs_id = self._subindices[field_name]._filter_by_parent_id( - doc_id - ) - if nested_docs_id: - del self._subindices[field_name][nested_docs_id] - # delete data - self._del_items(key) - - def configure(self, runtime_config=None, **kwargs): - """ - Configure the DocumentIndex. 
- You can either pass a config object to `config` or pass individual config - parameters as keyword arguments. - If a configuration object is passed, it will replace the current configuration. - If keyword arguments are passed, they will update the current configuration. - - :param runtime_config: the configuration to apply - :param kwargs: individual configuration parameters - """ - if runtime_config is None: - self._runtime_config = replace(self._runtime_config, **kwargs) - else: - if not isinstance(runtime_config, self.RuntimeConfig): - raise ValueError(f'runtime_config must be of type {self.RuntimeConfig}') - self._runtime_config = runtime_config - - def index(self, docs: Union[BaseDoc, Sequence[BaseDoc]], **kwargs): - """index Documents into the index. - - !!! note - Passing a sequence of Documents that is not a DocList - (such as a List of Docs) comes at a performance penalty. - This is because the Index needs to check compatibility between itself and - the data. With a DocList as input this is a single check; for other inputs - compatibility needs to be checked for every Document individually. - - :param docs: Documents to index. - """ - n_docs = 1 if isinstance(docs, BaseDoc) else len(docs) - self._logger.debug(f'Indexing {n_docs} documents') - docs_validated = self._validate_docs(docs) - self._update_subindex_data(docs_validated) - data_by_columns = self._get_col_value_dict(docs_validated) - self._index(data_by_columns, **kwargs) - - def find( - self, - query: Union[AnyTensor, BaseDoc], - search_field: str = '', - limit: int = 10, - **kwargs, - ) -> FindResult: - """Find documents in the index using nearest neighbor search. - - :param query: query vector for KNN/ANN search. - Can be either a tensor-like (np.array, torch.Tensor, etc.) - with a single axis, or a Document - :param search_field: name of the field to search on. - Documents in the index are retrieved based on this similarity - of this field to the query. - :param limit: maximum number of documents to return - :return: a named tuple containing `documents` and `scores` - """ - self._logger.debug(f'Executing `find` for search field {search_field}') - - self._validate_search_field(search_field) - if isinstance(query, BaseDoc): - query_vec = self._get_values_by_column([query], search_field)[0] - else: - query_vec = query - query_vec_np = self._to_numpy(query_vec) - docs, scores = self._find( - query_vec_np, search_field=search_field, limit=limit, **kwargs - ) - - if isinstance(docs, List) and not isinstance(docs, DocList): - docs = self._dict_list_to_docarray(docs) - - return FindResult(documents=docs, scores=scores) - - def find_subindex( - self, - query: Union[AnyTensor, BaseDoc], - subindex: str = '', - search_field: str = '', - limit: int = 10, - **kwargs, - ) -> SubindexFindResult: - """Find documents in subindex level. - - :param query: query vector for KNN/ANN search. - Can be either a tensor-like (np.array, torch.Tensor, etc.) 
- with a single axis, or a Document - :param subindex: name of the subindex to search on - :param search_field: name of the field to search on - :param limit: maximum number of documents to return - :return: a named tuple containing root docs, subindex docs and scores - """ - self._logger.debug(f'Executing `find_subindex` for search field {search_field}') - - sub_docs, scores = self._find_subdocs( - query, subindex=subindex, search_field=search_field, limit=limit, **kwargs - ) - - fields = subindex.split('__') - root_ids = [ - self._get_root_doc_id(doc.id, fields[0], '__'.join(fields[1:])) - for doc in sub_docs - ] - root_docs = DocList[self._schema]() # type: ignore - for id in root_ids: - root_docs.append(self[id]) - - return SubindexFindResult( - root_documents=root_docs, sub_documents=sub_docs, scores=scores # type: ignore - ) - - def find_batched( - self, - queries: Union[AnyTensor, DocList], - search_field: str = '', - limit: int = 10, - **kwargs, - ) -> FindResultBatched: - """Find documents in the index using nearest neighbor search. - - :param queries: query vector for KNN/ANN search. - Can be either a tensor-like (np.array, torch.Tensor, etc.) with a, - or a DocList. - If a tensor-like is passed, it should have shape (batch_size, vector_dim) - :param search_field: name of the field to search on. - Documents in the index are retrieved based on this similarity - of this field to the query. - :param limit: maximum number of documents to return per query - :return: a named tuple containing `documents` and `scores` - """ - self._logger.debug(f'Executing `find_batched` for search field {search_field}') - - if search_field: - if '__' in search_field: - fields = search_field.split('__') - if issubclass(self._schema._get_field_type(fields[0]), AnyDocArray): # type: ignore - return self._subindices[fields[0]].find_batched( - queries, - search_field='__'.join(fields[1:]), - limit=limit, - **kwargs, - ) - - self._validate_search_field(search_field) - if isinstance(queries, Sequence): - query_vec_list = self._get_values_by_column(queries, search_field) - query_vec_np = np.stack( - tuple(self._to_numpy(query_vec) for query_vec in query_vec_list) - ) - else: - query_vec_np = self._to_numpy(queries) - - da_list, scores = self._find_batched( - query_vec_np, search_field=search_field, limit=limit, **kwargs - ) - - if len(da_list) > 0 and isinstance(da_list[0], List): - da_list = [self._dict_list_to_docarray(docs) for docs in da_list] - - return FindResultBatched(documents=da_list, scores=scores) # type: ignore - - def filter( - self, - filter_query: Any, - limit: int = 10, - **kwargs, - ) -> DocList: - """Find documents in the index based on a filter query - - :param filter_query: the DB specific filter query to execute - :param limit: maximum number of documents to return - :return: a DocList containing the documents that match the filter query - """ - self._logger.debug(f'Executing `filter` for the query {filter_query}') - docs = self._filter(filter_query, limit=limit, **kwargs) - - if isinstance(docs, List): - docs = self._dict_list_to_docarray(docs) - - return docs - - def filter_subindex( - self, - filter_query: Any, - subindex: str, - limit: int = 10, - **kwargs, - ) -> DocList: - """Find documents in subindex level based on a filter query - - :param filter_query: the DB specific filter query to execute - :param subindex: name of the subindex to search on - :param limit: maximum number of documents to return - :return: a DocList containing the subindex level documents that match the filter 
query - """ - self._logger.debug( - f'Executing `filter` for the query {filter_query} in subindex {subindex}' - ) - if '__' in subindex: - fields = subindex.split('__') - return self._subindices[fields[0]].filter_subindex( - filter_query, '__'.join(fields[1:]), limit=limit, **kwargs - ) - else: - return self._subindices[subindex].filter( - filter_query, limit=limit, **kwargs - ) - - def filter_batched( - self, - filter_queries: Any, - limit: int = 10, - **kwargs, - ) -> List[DocList]: - """Find documents in the index based on multiple filter queries. - - :param filter_queries: the DB specific filter query to execute - :param limit: maximum number of documents to return - :return: a DocList containing the documents that match the filter query - """ - self._logger.debug( - f'Executing `filter_batched` for the queries {filter_queries}' - ) - da_list = self._filter_batched(filter_queries, limit=limit, **kwargs) - - if len(da_list) > 0 and isinstance(da_list[0], List): - da_list = [self._dict_list_to_docarray(docs) for docs in da_list] - - return da_list # type: ignore - - def text_search( - self, - query: Union[str, BaseDoc], - search_field: str = '', - limit: int = 10, - **kwargs, - ) -> FindResult: - """Find documents in the index based on a text search query. - - :param query: The text to search for - :param search_field: name of the field to search on - :param limit: maximum number of documents to return - :return: a named tuple containing `documents` and `scores` - """ - self._logger.debug(f'Executing `text_search` for search field {search_field}') - self._validate_search_field(search_field) - if isinstance(query, BaseDoc): - query_text = self._get_values_by_column([query], search_field)[0] - else: - query_text = query - docs, scores = self._text_search( - query_text, search_field=search_field, limit=limit, **kwargs - ) - - if isinstance(docs, List): - docs = self._dict_list_to_docarray(docs) - - return FindResult(documents=docs, scores=scores) - - def text_search_batched( - self, - queries: Union[Sequence[str], Sequence[BaseDoc]], - search_field: str = '', - limit: int = 10, - **kwargs, - ) -> FindResultBatched: - """Find documents in the index based on a text search query. - - :param queries: The texts to search for - :param search_field: name of the field to search on - :param limit: maximum number of documents to return - :return: a named tuple containing `documents` and `scores` - """ - self._logger.debug( - f'Executing `text_search_batched` for search field {search_field}' - ) - self._validate_search_field(search_field) - if isinstance(queries[0], BaseDoc): - query_docs: Sequence[BaseDoc] = cast(Sequence[BaseDoc], queries) - query_texts: Sequence[str] = self._get_values_by_column( - query_docs, search_field - ) - else: - query_texts = cast(Sequence[str], queries) - da_list, scores = self._text_search_batched( - query_texts, search_field=search_field, limit=limit, **kwargs - ) - - if len(da_list) > 0 and isinstance(da_list[0], List): - docs = [self._dict_list_to_docarray(docs) for docs in da_list] - return FindResultBatched(documents=docs, scores=scores) - - da_list_ = cast(List[DocList], da_list) - return FindResultBatched(documents=da_list_, scores=scores) - - def _filter_by_parent_id(self, id: str) -> Optional[List[str]]: - """Filter the ids of the subindex documents given id of root document. 
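-        The base implementation returns ``None``; index backends that support
-        nested subindices are expected to override it.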
- - :param id: the root document id to filter by - :return: a list of ids of the subindex documents - """ - return None - - ########################################################## - # Helper methods # - # These might be useful in your subclass implementation # - ########################################################## - - @staticmethod - def _get_values_by_column(docs: Sequence[BaseDoc], col_name: str) -> List[Any]: - """Get the value of a column of a document. - - :param docs: The DocList to get the values from - :param col_name: The name of the column, e.g. 'text' or 'image__tensor' - :return: The value of the column of `doc` - """ - leaf_vals = [] - for doc in docs: - if '__' in col_name: - fields = col_name.split('__') - leaf_doc: BaseDoc = doc - for f in fields[:-1]: - leaf_doc = getattr(leaf_doc, f) - leaf_vals.append(getattr(leaf_doc, fields[-1])) - else: - leaf_vals.append(getattr(doc, col_name)) - return leaf_vals - - @staticmethod - def _transpose_col_value_dict( - col_value_dict: Mapping[str, Iterable[Any]] - ) -> Generator[Dict[str, Any], None, None]: - """'Transpose' the output of `_get_col_value_dict()`: Yield rows of columns, where each row represent one Document. - Since a generator is returned, this process comes at negligible cost. - - :param docs: The DocList to get the values from - :return: The `docs` flattened out as rows. Each row is a dictionary mapping from column name to value - """ - return (dict(zip(col_value_dict, row)) for row in zip(*col_value_dict.values())) - - def _get_col_value_dict( - self, docs: Union[BaseDoc, Sequence[BaseDoc]] - ) -> Dict[str, Generator[Any, None, None]]: - """ - Get all data from a (sequence of) document(s), flattened out by column. - This can be seen as the transposed representation of `_get_rows()`. - - :param docs: The document(s) to get the data from - :return: A dictionary mapping column names to a generator of values - """ - if isinstance(docs, BaseDoc): - docs_seq: Sequence[BaseDoc] = [docs] - else: - docs_seq = docs - - def _col_gen(col_name: str): - return ( - self._to_numpy( - self._get_values_by_column([doc], col_name)[0], - allow_passthrough=True, - ) - for doc in docs_seq - ) - - return {col_name: _col_gen(col_name) for col_name in self._column_infos} - - def _update_subindex_data( - self, - docs: DocList[BaseDoc], - ): - """ - Add `parent_id` to all sublevel documents. - - :param docs: The document(s) to update the `parent_id` for - """ - for field_name, type_, _ in self._flatten_schema( - cast(Type[BaseDoc], self._schema) - ): - if safe_issubclass(type_, AnyDocArray): - for doc in docs: - _list = getattr(doc, field_name) - for i, nested_doc in enumerate(_list): - nested_doc = self._subindices[field_name]._schema( # type: ignore - **nested_doc.__dict__ - ) - nested_doc.parent_id = doc.id - _list[i] = nested_doc - - ################################################## - # Behind-the-scenes magic # - # Subclasses should not need to implement these # - ################################################## - def __class_getitem__(cls, item: Type[TSchema]): - if not isinstance(item, type): - # do nothing - # enables use in static contexts with type vars, e.g. 
as type annotation - return Generic.__class_getitem__.__func__(cls, item) - if not issubclass(item, BaseDoc): - raise ValueError( - f'{cls.__name__}[item] `item` should be a Document not a {item} ' - ) - - class _DocumentIndexTyped(cls): # type: ignore - _schema: Type[TSchema] = item - - _DocumentIndexTyped.__name__ = f'{cls.__name__}[{item.__name__}]' - _DocumentIndexTyped.__qualname__ = f'{cls.__qualname__}[{item.__name__}]' - - return _DocumentIndexTyped - - def build_query(self) -> QueryBuilder: - """ - Build a query for this DocumentIndex. - - :return: a new `QueryBuilder` object for this DocumentIndex - """ - return self.QueryBuilder() # type: ignore - - @classmethod - def _flatten_schema( - cls, schema: Type[BaseDoc], name_prefix: str = '' - ) -> List[Tuple[str, Type, 'ModelField']]: - """Flatten the schema of a Document into a list of column names and types. - Nested Documents are handled in a recursive manner by adding `'__'` as a prefix to the column name. - - :param schema: The schema to flatten - :param name_prefix: prefix to append to the column names. Used for recursive calls to handle nesting. - :return: A list of column names, types, and fields - """ - names_types_fields: List[Tuple[str, Type, 'ModelField']] = [] - for field_name, field_ in schema.__fields__.items(): - t_ = schema._get_field_type(field_name) - inner_prefix = name_prefix + field_name + '__' - - if is_union_type(t_): - union_args = get_args(t_) - - if is_tensor_union(t_): - names_types_fields.append( - (name_prefix + field_name, AbstractTensor, field_) - ) - - elif len(union_args) == 2 and type(None) in union_args: - # simple "Optional" type, treat as special case: - # treat as if it was a single non-optional type - for t_arg in union_args: - if t_arg is not type(None): - if issubclass(t_arg, BaseDoc): - names_types_fields.extend( - cls._flatten_schema(t_arg, name_prefix=inner_prefix) - ) - else: - names_types_fields.append( - (name_prefix + field_name, t_arg, field_) - ) - else: - raise ValueError( - f'Union type {t_} is not supported. Only Union of subclasses of AbstractTensor or Union[type, None] are supported.' - ) - elif safe_issubclass(t_, BaseDoc): - names_types_fields.extend( - cls._flatten_schema(t_, name_prefix=inner_prefix) - ) - elif safe_issubclass(t_, AbstractTensor): - names_types_fields.append( - (name_prefix + field_name, AbstractTensor, field_) - ) - else: - names_types_fields.append((name_prefix + field_name, t_, field_)) - return names_types_fields - - def _create_column_infos(self, schema: Type[BaseDoc]) -> Dict[str, _ColumnInfo]: - """Collects information about every column that is implied by a given schema. - - :param schema: The schema (subclass of BaseDoc) to analyze and parse - columns from - :returns: A dictionary mapping from column names to column information. 
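-        Columns typed as ``AnyDocArray`` get a placeholder entry with ``db_type=None``;
-        all other columns are created via ``_create_single_column``.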
- """ - column_infos: Dict[str, _ColumnInfo] = dict() - for field_name, type_, field_ in self._flatten_schema(schema): - # Union types are handle in _flatten_schema - if safe_issubclass(type_, AnyDocArray): - column_infos[field_name] = _ColumnInfo( - docarray_type=type_, db_type=None, config=dict(), n_dim=None - ) - else: - column_infos[field_name] = self._create_single_column(field_, type_) - return column_infos - - def _create_single_column(self, field: 'ModelField', type_: Type) -> _ColumnInfo: - custom_config = field.field_info.extra - if 'col_type' in custom_config.keys(): - db_type = custom_config['col_type'] - custom_config.pop('col_type') - if db_type not in self._runtime_config.default_column_config.keys(): - raise ValueError( - f'The given col_type is not a valid db type: {db_type}' - ) - else: - db_type = self.python_type_to_db_type(type_) - - config = self._runtime_config.default_column_config[db_type].copy() - config.update(custom_config) - # parse n_dim from parametrized tensor type - if ( - hasattr(field.type_, '__docarray_target_shape__') - and field.type_.__docarray_target_shape__ - ): - if len(field.type_.__docarray_target_shape__) == 1: - n_dim = field.type_.__docarray_target_shape__[0] - else: - n_dim = field.type_.__docarray_target_shape__ - else: - n_dim = None - return _ColumnInfo( - docarray_type=type_, db_type=db_type, config=config, n_dim=n_dim - ) - - def _init_subindex( - self, - ): - """Initialize subindices if any column is subclass of AnyDocArray.""" - for col_name, col in self._column_infos.items(): - if safe_issubclass(col.docarray_type, AnyDocArray): - sub_db_config = copy.deepcopy(self._db_config) - sub_db_config.index_name = f'{self.index_name}__{col_name}' - self._subindices[col_name] = self.__class__[col.docarray_type.doc_type]( # type: ignore - db_config=sub_db_config, subindex=True - ) - - def _validate_docs( - self, docs: Union[BaseDoc, Sequence[BaseDoc]] - ) -> DocList[BaseDoc]: - """Validates Document against the schema of the Document Index. - For validation to pass, the schema of `docs` and the schema of the Document - Index need to evaluate to the same flattened columns. - If Validation fails, a ValueError is raised. - - :param docs: Document to evaluate. If this is a DocList, validation is - performed using its `doc_type` (parametrization), without having to check - ever Document in `docs`. If this check fails, or if `docs` is not a - DocList, evaluation is performed for every Document in `docs`. 
- :return: A DocList containing the Documents in `docs` - """ - if isinstance(docs, BaseDoc): - docs = [docs] - if isinstance(docs, DocList): - # validation shortcut for DocList; only look at the schema - reference_schema_flat = self._flatten_schema( - cast(Type[BaseDoc], self._schema) - ) - reference_names = [name for (name, _, _) in reference_schema_flat] - reference_types = [t_ for (_, t_, _) in reference_schema_flat] - try: - input_schema_flat = self._flatten_schema(docs.doc_type) - except ValueError: - pass - else: - input_names = [name for (name, _, _) in input_schema_flat] - input_types = [t_ for (_, t_, _) in input_schema_flat] - # this could be relaxed in the future, - # see schema translation ideas in the design doc - names_compatible = reference_names == input_names - types_compatible = all( - (issubclass(t2, t1)) - for (t1, t2) in zip(reference_types, input_types) - ) - if names_compatible and types_compatible: - return docs - - out_docs = [] - for i in range(len(docs)): - # validate the data - try: - out_docs.append(cast(Type[BaseDoc], self._schema).parse_obj(docs[i])) - except (ValueError, ValidationError): - raise ValueError( - 'The schema of the input Documents is not compatible with the schema of the Document Index.' - ' Ensure that the field names of your data match the field names of the Document Index schema,' - ' and that the types of your data match the types of the Document Index schema.' - ) - - return DocList[BaseDoc].construct(out_docs) - - def _validate_search_field(self, search_field: Union[str, None]) -> bool: - """ - Validate if the given `search_field` corresponds to one of the - columns that was parsed from the schema. - - Some backends, like weaviate, don't use search fields, so the function - returns True if `search_field` is empty or None. - - :param search_field: search field to validate. - :return: True if the field exists, False otherwise. - """ - if not search_field or search_field in self._column_infos.keys(): - if not search_field: - self._logger.info('Empty search field was passed') - return True - else: - valid_search_fields = ', '.join(self._column_infos.keys()) - raise ValueError( - f'{search_field} is not a valid search field. Valid search fields are: {valid_search_fields}' - ) - - def _to_numpy(self, val: Any, allow_passthrough=False) -> Any: - """ - Converts a value to a numpy array, if possible. - - :param val: The value to convert - :param allow_passthrough: If True, the value is returned as-is if it is not convertible to a numpy array. - If False, a `ValueError` is raised if the value is not convertible to a numpy array. - :return: The value as a numpy array, or as-is if `allow_passthrough` is True and the value is not convertible - """ - if isinstance(val, np.ndarray): - return val - if tf is not None and isinstance(val, TensorFlowTensor): - return val.unwrap().numpy() - if isinstance(val, (list, tuple)): - return np.array(val) - if torch is not None and isinstance(val, torch.Tensor): - return val.detach().numpy() - if tf is not None and isinstance(val, tf.Tensor): - return val.numpy() - if allow_passthrough: - return val - raise ValueError(f'Unsupported input type for {type(self)}: {type(val)}') - - def _convert_dict_to_doc( - self, doc_dict: Dict[str, Any], schema: Type[BaseDoc], inner=False - ) -> BaseDoc: - """ - Convert a dict to a Document object. 
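-        Nested documents are rebuilt from their flattened ``{field_name}__{nested_name}``
-        keys, and fields typed as ``AnyDocArray`` are fetched from the corresponding subindex.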
- - :param doc_dict: A dict that contains all the flattened fields of a Document, the field names are the keys and follow the pattern {field_name} or {field_name}__{nested_name} - :param schema: The schema of the Document object - :return: A Document object - """ - for field_name, _ in schema.__fields__.items(): - t_ = schema._get_field_type(field_name) - - if not is_union_type(t_) and issubclass(t_, AnyDocArray): - self._get_subindex_doclist(doc_dict, field_name) - - if is_optional_type(t_): - for t_arg in get_args(t_): - if t_arg is not type(None): - t_ = t_arg - - if not is_union_type(t_) and issubclass(t_, BaseDoc): - inner_dict = {} - - fields = [ - key for key in doc_dict.keys() if key.startswith(f'{field_name}__') - ] - for key in fields: - nested_name = key[len(f'{field_name}__') :] - inner_dict[nested_name] = doc_dict.pop(key) - - doc_dict[field_name] = self._convert_dict_to_doc( - inner_dict, t_, inner=True - ) - - if self._is_subindex and not inner: - doc_dict.pop('parent_id', None) - schema_cls = cast(Type[BaseDoc], self._ori_schema) - else: - schema_cls = cast(Type[BaseDoc], schema) - doc = schema_cls(**doc_dict) - return doc - - def _dict_list_to_docarray(self, dict_list: Sequence[Dict[str, Any]]) -> DocList: - """Convert a list of docs in dict type to a DocList of the schema type.""" - doc_list = [self._convert_dict_to_doc(doc_dict, self._schema) for doc_dict in dict_list] # type: ignore - if self._is_subindex: - docs_cls = DocList.__class_getitem__(cast(Type[BaseDoc], self._ori_schema)) - else: - docs_cls = DocList.__class_getitem__(cast(Type[BaseDoc], self._schema)) - return docs_cls(doc_list) - - def __len__(self) -> int: - return self.num_docs() - - def _index_subindex(self, column_to_data: Dict[str, Generator[Any, None, None]]): - """Index subindex documents in the corresponding subindex. - - :param column_to_data: A dictionary from column name to a generator - """ - for col_name, col in self._column_infos.items(): - if safe_issubclass(col.docarray_type, AnyDocArray): - docs = [ - doc for doc_list in column_to_data[col_name] for doc in doc_list - ] - self._subindices[col_name].index(docs) - column_to_data.pop(col_name, None) - - def _get_subindex_doclist(self, doc: Dict[str, Any], field_name: str): - """Get subindex Documents from the index and assign them to `field_name`. 
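-        The nested documents are retrieved from the subindex by filtering on the
-        parent id stored on each of them.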
- - :param doc: a dictionary mapping from column name to value - :param field_name: field name of the subindex Documents - """ - if field_name not in doc.keys(): - parent_id = doc['id'] - nested_docs_id = self._subindices[field_name]._filter_by_parent_id( - parent_id - ) - if nested_docs_id: - doc[field_name] = self._subindices[field_name].__getitem__( - nested_docs_id - ) - - def _find_subdocs( - self, - query: Union[AnyTensor, BaseDoc], - subindex: str = '', - search_field: str = '', - limit: int = 10, - **kwargs, - ) -> FindResult: - """Find documents in the subindex and return subindex docs and scores.""" - fields = subindex.split('__') - if not subindex or not issubclass( - self._schema._get_field_type(fields[0]), AnyDocArray # type: ignore - ): - raise ValueError(f'subindex {subindex} is not valid') - - if len(fields) == 1: - return self._subindices[fields[0]].find( - query, search_field=search_field, limit=limit, **kwargs - ) - - return self._subindices[fields[0]]._find_subdocs( - query, - subindex='___'.join(fields[1:]), - search_field=search_field, - limit=limit, - **kwargs, - ) - - def _get_root_doc_id(self, id: str, root: str, sub: str) -> str: - """Get the root_id given the id of a subindex Document and the root and subindex name - - :param id: id of the subindex Document - :param root: root index name - :param sub: subindex name - :return: the root_id of the Document - """ - subindex = self._subindices[root] - - if not sub: - sub_doc = subindex._get_items([id]) - parent_id = ( - sub_doc[0]['parent_id'] - if isinstance(sub_doc[0], dict) - else sub_doc[0].parent_id - ) - return parent_id - else: - fields = sub.split('__') - cur_root_id = subindex._get_root_doc_id( - id, fields[0], '__'.join(fields[1:]) - ) - return self._get_root_doc_id(cur_root_id, root, '') diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/datasets/cityscapes.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/datasets/cityscapes.py deleted file mode 100644 index f21867c63e1835f6fceb61f066e802fd8fd2a735..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/datasets/cityscapes.py +++ /dev/null @@ -1,54 +0,0 @@ -# dataset settings -dataset_type = 'CityscapesDataset' -data_root = 'data/cityscapes/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -crop_size = (512, 1024) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations'), - dict(type='Resize', img_scale=(2048, 1024), ratio_range=(0.5, 2.0)), - dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), - dict(type='RandomFlip', prob=0.5), - dict(type='PhotoMetricDistortion'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_semantic_seg']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(2048, 1024), - # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75], - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type=dataset_type, - data_root=data_root, - img_dir='leftImg8bit/train', - ann_dir='gtFine/train', - pipeline=train_pipeline), - val=dict( 
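-        # val and test below both read the Cityscapes val split (leftImg8bit/val)
-        # and reuse test_pipeline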
- type=dataset_type, - data_root=data_root, - img_dir='leftImg8bit/val', - ann_dir='gtFine/val', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - data_root=data_root, - img_dir='leftImg8bit/val', - ann_dir='gtFine/val', - pipeline=test_pipeline)) diff --git a/spaces/Syrahealthorg/HealthCare_workforce/README.md b/spaces/Syrahealthorg/HealthCare_workforce/README.md deleted file mode 100644 index 8c3c2137b5d3b9d553e367fe40271c4d43d0b220..0000000000000000000000000000000000000000 --- a/spaces/Syrahealthorg/HealthCare_workforce/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: HealthCare Workforce -emoji: 👁 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/TEnngal/bingo/src/pages/api/image.ts b/spaces/TEnngal/bingo/src/pages/api/image.ts deleted file mode 100644 index 26fdb31076a9c71e70d1725a630844b27f5a3221..0000000000000000000000000000000000000000 --- a/spaces/TEnngal/bingo/src/pages/api/image.ts +++ /dev/null @@ -1,38 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import { debug } from '@/lib/isomorphic' -import { createHeaders } from '@/lib/utils' -import { createImage } from '@/lib/bots/bing/utils' - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - const { prompt, id } = req.query - if (!prompt) { - return res.json({ - result: { - value: 'Image', - message: 'No Prompt' - } - }) - } - try { - const headers = createHeaders(req.cookies, 'image') - - debug('headers', headers) - const response = await createImage(String(prompt), String(id), { - ...headers, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - }) - res.writeHead(200, { - 'Content-Type': 'text/plain; charset=UTF-8', - }) - return res.end(response) - } catch (e) { - return res.json({ - result: { - value: 'Error', - message: `${e}` - } - }) - } -} diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/distlib/index.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/distlib/index.py deleted file mode 100644 index 9b6d129ed690361770738bec73f44ba7e10a21c5..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/distlib/index.py +++ /dev/null @@ -1,508 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright (C) 2013 Vinay Sajip. -# Licensed to the Python Software Foundation under a contributor agreement. -# See LICENSE.txt and CONTRIBUTORS.txt. -# -import hashlib -import logging -import os -import shutil -import subprocess -import tempfile -try: - from threading import Thread -except ImportError: # pragma: no cover - from dummy_threading import Thread - -from . import DistlibException -from .compat import (HTTPBasicAuthHandler, Request, HTTPPasswordMgr, - urlparse, build_opener, string_types) -from .util import zip_dir, ServerProxy - -logger = logging.getLogger(__name__) - -DEFAULT_INDEX = 'https://pypi.org/pypi' -DEFAULT_REALM = 'pypi' - -class PackageIndex(object): - """ - This class represents a package index compatible with PyPI, the Python - Package Index. - """ - - boundary = b'----------ThIs_Is_tHe_distlib_index_bouNdaRY_$' - - def __init__(self, url=None): - """ - Initialise an instance. - - :param url: The URL of the index. If not specified, the URL for PyPI is - used. 
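-
-        The constructor also probes for a ``gpg`` or ``gpg2`` executable so that
-        release files can later be signed and verified.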
- """ - self.url = url or DEFAULT_INDEX - self.read_configuration() - scheme, netloc, path, params, query, frag = urlparse(self.url) - if params or query or frag or scheme not in ('http', 'https'): - raise DistlibException('invalid repository: %s' % self.url) - self.password_handler = None - self.ssl_verifier = None - self.gpg = None - self.gpg_home = None - with open(os.devnull, 'w') as sink: - # Use gpg by default rather than gpg2, as gpg2 insists on - # prompting for passwords - for s in ('gpg', 'gpg2'): - try: - rc = subprocess.check_call([s, '--version'], stdout=sink, - stderr=sink) - if rc == 0: - self.gpg = s - break - except OSError: - pass - - def _get_pypirc_command(self): - """ - Get the distutils command for interacting with PyPI configurations. - :return: the command. - """ - from .util import _get_pypirc_command as cmd - return cmd() - - def read_configuration(self): - """ - Read the PyPI access configuration as supported by distutils. This populates - ``username``, ``password``, ``realm`` and ``url`` attributes from the - configuration. - """ - from .util import _load_pypirc - cfg = _load_pypirc(self) - self.username = cfg.get('username') - self.password = cfg.get('password') - self.realm = cfg.get('realm', 'pypi') - self.url = cfg.get('repository', self.url) - - def save_configuration(self): - """ - Save the PyPI access configuration. You must have set ``username`` and - ``password`` attributes before calling this method. - """ - self.check_credentials() - from .util import _store_pypirc - _store_pypirc(self) - - def check_credentials(self): - """ - Check that ``username`` and ``password`` have been set, and raise an - exception if not. - """ - if self.username is None or self.password is None: - raise DistlibException('username and password must be set') - pm = HTTPPasswordMgr() - _, netloc, _, _, _, _ = urlparse(self.url) - pm.add_password(self.realm, netloc, self.username, self.password) - self.password_handler = HTTPBasicAuthHandler(pm) - - def register(self, metadata): # pragma: no cover - """ - Register a distribution on PyPI, using the provided metadata. - - :param metadata: A :class:`Metadata` instance defining at least a name - and version number for the distribution to be - registered. - :return: The HTTP response received from PyPI upon submission of the - request. - """ - self.check_credentials() - metadata.validate() - d = metadata.todict() - d[':action'] = 'verify' - request = self.encode_request(d.items(), []) - response = self.send_request(request) - d[':action'] = 'submit' - request = self.encode_request(d.items(), []) - return self.send_request(request) - - def _reader(self, name, stream, outbuf): - """ - Thread runner for reading lines of from a subprocess into a buffer. - - :param name: The logical name of the stream (used for logging only). - :param stream: The stream to read from. This will typically a pipe - connected to the output stream of a subprocess. - :param outbuf: The list to append the read lines to. - """ - while True: - s = stream.readline() - if not s: - break - s = s.decode('utf-8').rstrip() - outbuf.append(s) - logger.debug('%s: %s' % (name, s)) - stream.close() - - def get_sign_command(self, filename, signer, sign_password, keystore=None): # pragma: no cover - """ - Return a suitable command for signing a file. - - :param filename: The pathname to the file to be signed. - :param signer: The identifier of the signer of the file. - :param sign_password: The passphrase for the signer's - private key used for signing. 
- :param keystore: The path to a directory which contains the keys - used in verification. If not specified, the - instance's ``gpg_home`` attribute is used instead. - :return: The signing command as a list suitable to be - passed to :class:`subprocess.Popen`. - """ - cmd = [self.gpg, '--status-fd', '2', '--no-tty'] - if keystore is None: - keystore = self.gpg_home - if keystore: - cmd.extend(['--homedir', keystore]) - if sign_password is not None: - cmd.extend(['--batch', '--passphrase-fd', '0']) - td = tempfile.mkdtemp() - sf = os.path.join(td, os.path.basename(filename) + '.asc') - cmd.extend(['--detach-sign', '--armor', '--local-user', - signer, '--output', sf, filename]) - logger.debug('invoking: %s', ' '.join(cmd)) - return cmd, sf - - def run_command(self, cmd, input_data=None): - """ - Run a command in a child process , passing it any input data specified. - - :param cmd: The command to run. - :param input_data: If specified, this must be a byte string containing - data to be sent to the child process. - :return: A tuple consisting of the subprocess' exit code, a list of - lines read from the subprocess' ``stdout``, and a list of - lines read from the subprocess' ``stderr``. - """ - kwargs = { - 'stdout': subprocess.PIPE, - 'stderr': subprocess.PIPE, - } - if input_data is not None: - kwargs['stdin'] = subprocess.PIPE - stdout = [] - stderr = [] - p = subprocess.Popen(cmd, **kwargs) - # We don't use communicate() here because we may need to - # get clever with interacting with the command - t1 = Thread(target=self._reader, args=('stdout', p.stdout, stdout)) - t1.start() - t2 = Thread(target=self._reader, args=('stderr', p.stderr, stderr)) - t2.start() - if input_data is not None: - p.stdin.write(input_data) - p.stdin.close() - - p.wait() - t1.join() - t2.join() - return p.returncode, stdout, stderr - - def sign_file(self, filename, signer, sign_password, keystore=None): # pragma: no cover - """ - Sign a file. - - :param filename: The pathname to the file to be signed. - :param signer: The identifier of the signer of the file. - :param sign_password: The passphrase for the signer's - private key used for signing. - :param keystore: The path to a directory which contains the keys - used in signing. If not specified, the instance's - ``gpg_home`` attribute is used instead. - :return: The absolute pathname of the file where the signature is - stored. - """ - cmd, sig_file = self.get_sign_command(filename, signer, sign_password, - keystore) - rc, stdout, stderr = self.run_command(cmd, - sign_password.encode('utf-8')) - if rc != 0: - raise DistlibException('sign command failed with error ' - 'code %s' % rc) - return sig_file - - def upload_file(self, metadata, filename, signer=None, sign_password=None, - filetype='sdist', pyversion='source', keystore=None): - """ - Upload a release file to the index. - - :param metadata: A :class:`Metadata` instance defining at least a name - and version number for the file to be uploaded. - :param filename: The pathname of the file to be uploaded. - :param signer: The identifier of the signer of the file. - :param sign_password: The passphrase for the signer's - private key used for signing. - :param filetype: The type of the file being uploaded. This is the - distutils command which produced that file, e.g. - ``sdist`` or ``bdist_wheel``. - :param pyversion: The version of Python which the release relates - to. For code compatible with any Python, this would - be ``source``, otherwise it would be e.g. ``3.2``. 
- :param keystore: The path to a directory which contains the keys - used in signing. If not specified, the instance's - ``gpg_home`` attribute is used instead. - :return: The HTTP response received from PyPI upon submission of the - request. - """ - self.check_credentials() - if not os.path.exists(filename): - raise DistlibException('not found: %s' % filename) - metadata.validate() - d = metadata.todict() - sig_file = None - if signer: - if not self.gpg: - logger.warning('no signing program available - not signed') - else: - sig_file = self.sign_file(filename, signer, sign_password, - keystore) - with open(filename, 'rb') as f: - file_data = f.read() - md5_digest = hashlib.md5(file_data).hexdigest() - sha256_digest = hashlib.sha256(file_data).hexdigest() - d.update({ - ':action': 'file_upload', - 'protocol_version': '1', - 'filetype': filetype, - 'pyversion': pyversion, - 'md5_digest': md5_digest, - 'sha256_digest': sha256_digest, - }) - files = [('content', os.path.basename(filename), file_data)] - if sig_file: - with open(sig_file, 'rb') as f: - sig_data = f.read() - files.append(('gpg_signature', os.path.basename(sig_file), - sig_data)) - shutil.rmtree(os.path.dirname(sig_file)) - request = self.encode_request(d.items(), files) - return self.send_request(request) - - def upload_documentation(self, metadata, doc_dir): # pragma: no cover - """ - Upload documentation to the index. - - :param metadata: A :class:`Metadata` instance defining at least a name - and version number for the documentation to be - uploaded. - :param doc_dir: The pathname of the directory which contains the - documentation. This should be the directory that - contains the ``index.html`` for the documentation. - :return: The HTTP response received from PyPI upon submission of the - request. - """ - self.check_credentials() - if not os.path.isdir(doc_dir): - raise DistlibException('not a directory: %r' % doc_dir) - fn = os.path.join(doc_dir, 'index.html') - if not os.path.exists(fn): - raise DistlibException('not found: %r' % fn) - metadata.validate() - name, version = metadata.name, metadata.version - zip_data = zip_dir(doc_dir).getvalue() - fields = [(':action', 'doc_upload'), - ('name', name), ('version', version)] - files = [('content', name, zip_data)] - request = self.encode_request(fields, files) - return self.send_request(request) - - def get_verify_command(self, signature_filename, data_filename, - keystore=None): - """ - Return a suitable command for verifying a file. - - :param signature_filename: The pathname to the file containing the - signature. - :param data_filename: The pathname to the file containing the - signed data. - :param keystore: The path to a directory which contains the keys - used in verification. If not specified, the - instance's ``gpg_home`` attribute is used instead. - :return: The verifying command as a list suitable to be - passed to :class:`subprocess.Popen`. - """ - cmd = [self.gpg, '--status-fd', '2', '--no-tty'] - if keystore is None: - keystore = self.gpg_home - if keystore: - cmd.extend(['--homedir', keystore]) - cmd.extend(['--verify', signature_filename, data_filename]) - logger.debug('invoking: %s', ' '.join(cmd)) - return cmd - - def verify_signature(self, signature_filename, data_filename, - keystore=None): - """ - Verify a signature for a file. - - :param signature_filename: The pathname to the file containing the - signature. - :param data_filename: The pathname to the file containing the - signed data. 
- :param keystore: The path to a directory which contains the keys - used in verification. If not specified, the - instance's ``gpg_home`` attribute is used instead. - :return: True if the signature was verified, else False. - """ - if not self.gpg: - raise DistlibException('verification unavailable because gpg ' - 'unavailable') - cmd = self.get_verify_command(signature_filename, data_filename, - keystore) - rc, stdout, stderr = self.run_command(cmd) - if rc not in (0, 1): - raise DistlibException('verify command failed with error ' - 'code %s' % rc) - return rc == 0 - - def download_file(self, url, destfile, digest=None, reporthook=None): - """ - This is a convenience method for downloading a file from an URL. - Normally, this will be a file from the index, though currently - no check is made for this (i.e. a file can be downloaded from - anywhere). - - The method is just like the :func:`urlretrieve` function in the - standard library, except that it allows digest computation to be - done during download and checking that the downloaded data - matched any expected value. - - :param url: The URL of the file to be downloaded (assumed to be - available via an HTTP GET request). - :param destfile: The pathname where the downloaded file is to be - saved. - :param digest: If specified, this must be a (hasher, value) - tuple, where hasher is the algorithm used (e.g. - ``'md5'``) and ``value`` is the expected value. - :param reporthook: The same as for :func:`urlretrieve` in the - standard library. - """ - if digest is None: - digester = None - logger.debug('No digest specified') - else: - if isinstance(digest, (list, tuple)): - hasher, digest = digest - else: - hasher = 'md5' - digester = getattr(hashlib, hasher)() - logger.debug('Digest specified: %s' % digest) - # The following code is equivalent to urlretrieve. - # We need to do it this way so that we can compute the - # digest of the file as we go. - with open(destfile, 'wb') as dfp: - # addinfourl is not a context manager on 2.x - # so we have to use try/finally - sfp = self.send_request(Request(url)) - try: - headers = sfp.info() - blocksize = 8192 - size = -1 - read = 0 - blocknum = 0 - if "content-length" in headers: - size = int(headers["Content-Length"]) - if reporthook: - reporthook(blocknum, blocksize, size) - while True: - block = sfp.read(blocksize) - if not block: - break - read += len(block) - dfp.write(block) - if digester: - digester.update(block) - blocknum += 1 - if reporthook: - reporthook(blocknum, blocksize, size) - finally: - sfp.close() - - # check that we got the whole file, if we can - if size >= 0 and read < size: - raise DistlibException( - 'retrieval incomplete: got only %d out of %d bytes' - % (read, size)) - # if we have a digest, it must match. - if digester: - actual = digester.hexdigest() - if digest != actual: - raise DistlibException('%s digest mismatch for %s: expected ' - '%s, got %s' % (hasher, destfile, - digest, actual)) - logger.debug('Digest verified: %s', digest) - - def send_request(self, req): - """ - Send a standard library :class:`Request` to PyPI and return its - response. - - :param req: The request to send. - :return: The HTTP response from PyPI (a standard library HTTPResponse). 
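-
-        Any configured password handler and SSL verifier are installed on the
-        opener before the request is sent.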
- """ - handlers = [] - if self.password_handler: - handlers.append(self.password_handler) - if self.ssl_verifier: - handlers.append(self.ssl_verifier) - opener = build_opener(*handlers) - return opener.open(req) - - def encode_request(self, fields, files): - """ - Encode fields and files for posting to an HTTP server. - - :param fields: The fields to send as a list of (fieldname, value) - tuples. - :param files: The files to send as a list of (fieldname, filename, - file_bytes) tuple. - """ - # Adapted from packaging, which in turn was adapted from - # http://code.activestate.com/recipes/146306 - - parts = [] - boundary = self.boundary - for k, values in fields: - if not isinstance(values, (list, tuple)): - values = [values] - - for v in values: - parts.extend(( - b'--' + boundary, - ('Content-Disposition: form-data; name="%s"' % - k).encode('utf-8'), - b'', - v.encode('utf-8'))) - for key, filename, value in files: - parts.extend(( - b'--' + boundary, - ('Content-Disposition: form-data; name="%s"; filename="%s"' % - (key, filename)).encode('utf-8'), - b'', - value)) - - parts.extend((b'--' + boundary + b'--', b'')) - - body = b'\r\n'.join(parts) - ct = b'multipart/form-data; boundary=' + boundary - headers = { - 'Content-type': ct, - 'Content-length': str(len(body)) - } - return Request(self.url, body, headers) - - def search(self, terms, operator=None): # pragma: no cover - if isinstance(terms, string_types): - terms = {'name': terms} - rpc_proxy = ServerProxy(self.url, timeout=3.0) - try: - return rpc_proxy.search(terms, operator or 'and') - finally: - rpc_proxy('close')() diff --git a/spaces/TandCAcceptMe/face-swap-docker/plugins/core_video.py b/spaces/TandCAcceptMe/face-swap-docker/plugins/core_video.py deleted file mode 100644 index 236c161bd59e529707849415aef4817a17d6c65a..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/plugins/core_video.py +++ /dev/null @@ -1,26 +0,0 @@ -# Core plugin -# author: Vladislav Janvarev - -from chain_img_processor import ChainImgProcessor, ChainVideoProcessor - -# start function -def start(core:ChainImgProcessor): - manifest = { - "name": "Core video plugin", - "version": "2.0", - - "default_options": { - "video_save_codec": "libx264", # default codec to save - "video_save_crf": 14, # default crf to save - }, - - } - return manifest - -def start_with_options(core:ChainVideoProcessor, manifest:dict): - options = manifest["options"] - - core.video_save_codec = options["video_save_codec"] - core.video_save_crf = options["video_save_crf"] - - return manifest diff --git a/spaces/Taocan/Chatty/app.py b/spaces/Taocan/Chatty/app.py deleted file mode 100644 index 52e7f8c44a6dede14decfdadb312530875dec48b..0000000000000000000000000000000000000000 --- a/spaces/Taocan/Chatty/app.py +++ /dev/null @@ -1,130 +0,0 @@ - -import openai -import os -import gradio as gr -openai.api_key = os.environ.get("OPENAI_API_KEY") - - -class Conversation: - def __init__(self, prompt, num_of_round): - self.prompt = prompt - self.num_of_round = num_of_round - self.messages = [] - self.messages.append({"role": "system", "content": self.prompt}) - - def ask(self, question): - try: - self.messages.append({"role": "user", "content": question}) - response = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=self.messages, - temperature=0.5, - max_tokens=256, - top_p=1, - ) - except Exception as e: - print(e) - return e - message = response["choices"][0]["message"]["content"] - self.messages.append({"role": "assistant", "content": message}) 
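-        # keep the running context small: retain the system prompt at index 0 and
-        # drop the oldest user/assistant pair once more than three messages accumulate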
- if len(self.messages) > 3: - del self.messages[1:3] - return message - -def add_text(history, text): - history = history + [(text, None)] - return history, gr.update(value="", interactive=False) - -def add_file(history, file): - history = history + [((file.name,), None)] - return history - -def bot(history, chatbot): - prompt = ". ".join([m["content"] for m in history if m["role"] == "user"]) - response = openai.Completion.create( - engine="davinci", - prompt=prompt, - temperature=0.5, - max_tokens=100, - top_p=1.0 - ) - message = response["choices"][0]["text"] - chatbot.add_message(content=message, role="assistant") - -def answer(question, history=None): - if history is None: - history = [] - history.append(question) - response = conv.ask(question) - history.append(response) - responses = [(u,b) for u,b in zip(history[::2], history[1::2])] - return responses, history - -def on_text_submit(chatbot, txt): - prompt = ". ".join([m[0] for m in history if m[1] == "user"]) - response = openai.Completion.create( - engine="davinci", - prompt=prompt, - temperature=0.5, - max_tokens=100, - top_p=1.0 - ) - message = response["choices"][0]["text"] - chatbot.add(message, "assistant") - -def on_file_upload(chatbot, btn): - chatbot.add((btn.uploaded_file.name,), "user") - -with gr.Blocks(css="#chatbot{height:300px} .overflow-y-auto{height:500px}") as demo: - chatbot = gr.Chatbot([], elem_id="chatbot").style(height=750) - - -prompt = """假如你是GPT-4,你可以回答用户提问的任何问题""" -conv = Conversation(prompt, 10) -# with gr.Blocks(css="#chatbot{height:300px} .overflow-y-auto{height:500px}") as demo: -# chatbot = gr.Chatbot([], elem_id="chatbot").style(height=750) -# state = gr.State() - -# with gr.Row(): -# with gr.Column(scale=0.85): -# txt = gr.Textbox( -# show_label=False, -# placeholder="Enter text and press enter, or upload an image", -# ).style(container=False) -# with gr.Column(scale=0.15, min_width=0): -# btn = gr.UploadButton("📁", file_types=["image", "video", "audio"]) - -# txt_msg = txt.submit(answer, [txt, state], chatbot, state, queue=False).then( -# lambda x: chatbot.add(*x), -# None, -# [chatbot]) -# txt_msg.then(lambda: gr.update(interactive=True), None, [txt], queue=False) -# file_msg = btn.upload(add_file, chatbot, btn, chatbot, btn, queue=False).then( -# lambda x: chatbot.add(*x), -# None, -# [chatbot] -# ) -# demo.launch() - -# with gr.Blocks(css="#chatbot{height:800px} .overflow-y-auto{height:500px}") as demo: -# chatbot = gr.Chatbot([], elem_id="chatbot").style(height=750) -with gr.Blocks(css="#chatty{height:800px} .overflow-y-auto{height:500px}") as demo: - chatty = gr.Chatbot([], name="Chatty", elem_id="chatbot").style(height=800, width=800) - - with gr.Row(): - with gr.Column(scale=0.85): - txt = gr.Textbox( - show_label=False, - placeholder="Enter text and press enter, or upload an image", - ).style(container=False) - with gr.Column(scale=0.15, min_width=0): - btn = gr.UploadButton("📁", file_types=["image", "video", "audio"]) - - txt_msg = txt.submit(add_text, [chatty, txt], [chatty, txt], queue=False).then( - bot, chatty, chatty - ) - txt_msg.then(lambda: gr.update(interactive=True), None, [txt], queue=False) - file_msg = btn.upload(add_file, [chatty, btn], [chatty], queue=False).then( - bot, chatty, chatty - ) -demo.launch() diff --git a/spaces/Theivaprakasham/layoutlmv2_invoice/app.py b/spaces/Theivaprakasham/layoutlmv2_invoice/app.py deleted file mode 100644 index 90a91290f818803069f00015ca9b5aab0d3ff41f..0000000000000000000000000000000000000000 --- 
a/spaces/Theivaprakasham/layoutlmv2_invoice/app.py +++ /dev/null @@ -1,97 +0,0 @@ -import os - -os.system('pip install torch==1.8.0+cpu torchvision==0.9.0+cpu -f https://download.pytorch.org/whl/torch_stable.html') -os.system('pip install -q detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cpu/torch1.8/index.html') - - -import gradio as gr -import numpy as np -from transformers import LayoutLMv2Processor, LayoutLMv2ForTokenClassification -from datasets import load_dataset -from PIL import Image, ImageDraw, ImageFont - -processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased") -model = LayoutLMv2ForTokenClassification.from_pretrained("Theivaprakasham/layoutlmv2-finetuned-sroie_mod") - -# load image example -dataset = load_dataset("darentang/generated", split="test") -Image.open(dataset[2]["image_path"]).convert("RGB").save("example1.png") -Image.open(dataset[1]["image_path"]).convert("RGB").save("example2.png") -Image.open(dataset[0]["image_path"]).convert("RGB").save("example3.png") -# define id2label, label2color -labels = dataset.features['ner_tags'].feature.names -id2label = {v: k for v, k in enumerate(labels)} -label2color = {'b-abn': "blue", - 'b-biller': "blue", - 'b-biller_address': "black", - 'b-biller_post_code': "green", - 'b-due_date': "orange", - 'b-gst': 'red', - 'b-invoice_date': 'red', - 'b-invoice_number': 'violet', - 'b-subtotal': 'green', - 'b-total': 'green', - 'i-biller_address': 'blue', - 'o': 'violet'} - -def unnormalize_box(bbox, width, height): - return [ - width * (bbox[0] / 1000), - height * (bbox[1] / 1000), - width * (bbox[2] / 1000), - height * (bbox[3] / 1000), - ] - -def iob_to_label(label): - return label - -def process_image(image): - width, height = image.size - - # encode - encoding = processor(image, truncation=True, return_offsets_mapping=True, return_tensors="pt") - offset_mapping = encoding.pop('offset_mapping') - - # forward pass - outputs = model(**encoding) - - # get predictions - predictions = outputs.logits.argmax(-1).squeeze().tolist() - token_boxes = encoding.bbox.squeeze().tolist() - - # only keep non-subword predictions - is_subword = np.array(offset_mapping.squeeze().tolist())[:,0] != 0 - true_predictions = [id2label[pred] for idx, pred in enumerate(predictions) if not is_subword[idx]] - true_boxes = [unnormalize_box(box, width, height) for idx, box in enumerate(token_boxes) if not is_subword[idx]] - - # draw predictions over the image - draw = ImageDraw.Draw(image) - font = ImageFont.load_default() - for prediction, box in zip(true_predictions, true_boxes): - predicted_label = iob_to_label(prediction).lower() - draw.rectangle(box, outline=label2color[predicted_label]) - draw.text((box[0]+10, box[1]-10), text=predicted_label, fill=label2color[predicted_label], font=font) - - return image - - -title = "Invoice Information extraction using LayoutLMv2 model" -description = "Invoice Information Extraction - We use Microsoft's LayoutLMv2 trained on Invoice Dataset to predict the Biller Name, Biller Address, Biller post_code, Due_date, GST, Invoice_date, Invoice_number, Subtotal and Total. To use it, simply upload an image or use the example image below. Results will show up in a few seconds." - -article="References
    [1] Y. Xu et al., "LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding." 2021. Paper Link
    [2] LayoutLMv2 training and inference" - -examples =[['example1.png'],['example2.png'],['example3.png']] - - -css = """.output_image, .input_image {height: 600px !important}""" - -iface = gr.Interface(fn=process_image, - inputs=gr.inputs.Image(type="pil"), - outputs=gr.outputs.Image(type="pil", label="annotated image"), - title=title, - description=description, - article=article, - examples=examples, - css=css, - analytics_enabled = True, enable_queue=True) -iface.launch(inline=False,debug=False) \ No newline at end of file diff --git a/spaces/VinayHajare/Speech-To-Speech-Translation-For-Marathi-To-English/README.md b/spaces/VinayHajare/Speech-To-Speech-Translation-For-Marathi-To-English/README.md deleted file mode 100644 index c7d2830f9cace9a5d20c0681ef232e5f39e9f1c8..0000000000000000000000000000000000000000 --- a/spaces/VinayHajare/Speech-To-Speech-Translation-For-Marathi-To-English/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Speech To Speech Translation For Marathi To English -emoji: 📊 -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.40.1 -app_file: app.py -pinned: false -license: creativeml-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Vipul-Chauhan/20newsgroup_QA/README.md b/spaces/Vipul-Chauhan/20newsgroup_QA/README.md deleted file mode 100644 index 0ffed933ad1ba6d253d1bdb06a2f2a64e660dd38..0000000000000000000000000000000000000000 --- a/spaces/Vipul-Chauhan/20newsgroup_QA/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: 20newsgroup QA -emoji: 🐢 -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/VoiceHero69/changer/setup_tools/magicinstaller/requirements/tts_package.py b/spaces/VoiceHero69/changer/setup_tools/magicinstaller/requirements/tts_package.py deleted file mode 100644 index 622d3a5825886816eb732af08fbd8749c2a7913f..0000000000000000000000000000000000000000 --- a/spaces/VoiceHero69/changer/setup_tools/magicinstaller/requirements/tts_package.py +++ /dev/null @@ -1,5 +0,0 @@ -from setup_tools.magicinstaller.requirement import SimpleRequirement - - -class TTS(SimpleRequirement): - package_name = 'TTS' diff --git a/spaces/Widium/Style-Recreation/functions/image.py b/spaces/Widium/Style-Recreation/functions/image.py deleted file mode 100644 index daaba1eee98308894d45b28b935d2fc85acbed3b..0000000000000000000000000000000000000000 --- a/spaces/Widium/Style-Recreation/functions/image.py +++ /dev/null @@ -1,98 +0,0 @@ -# *************************************************************************** # -# # -# image.py # -# # -# By: Widium # -# Github : https://github.com/widium # -# # -# Created: 2022/11/10 08:51:01 by ebennace # -# Updated: 2023/05/03 16:05:48 by Widium # -# # -# **************************************************************************** # - -import tensorflow as tf -import numpy as np - -from tensorflow import Tensor -from PIL import Image - -from cv2 import cvtColor -from cv2 import imread -from cv2 import COLOR_BGR2RGB - -from .processing import Normalize_image -from .processing import inverse_normalize_image -from .processing import remove_batch_dimension - -# ======================================== # - -def load_image_path(path : str): - """ - Load and preprocess the color of imag with OpenCV - - Args: - path (str): filepath of image - - Returns: - np.array: img in Numpy 
Array Format - """ - img = imread(path) - img = cvtColor(img, COLOR_BGR2RGB) - img = Normalize_image(img) - - return (img) - -# ======================================== # - -def tensor_to_image(tensor : Tensor): - """ - Convert a tensor to an image in PIL format. - - Args: - tensor: The input image as a tensor. - - Returns: - Image: The converted image in PIL format. - """ - tensor = inverse_normalize_image(tensor) - array = np.array(tensor, dtype=np.uint8) - array = remove_batch_dimension(array) - img_pil = Image.fromarray(array) - return img_pil - -# ======================================== # - -def clip_pixel(image : Tensor): - """ - Clip pixel values of an image tensor between 0 and 1. - - Args: - image: The input image as a tensor. - - Returns: - Tensor: The clipped image tensor. - """ - cliped_image = tf.clip_by_value( - t=image, - clip_value_min=0.0, - clip_value_max=1.0 - ) - - return (cliped_image) - -# ======================================== # - -def create_noisy_imag(img : Tensor): - """ - Create Noisy image with Random pixel with same shape of input img - - Args: - img: The input image as a tensor. - - Returns: - np.ndarray: The noisy image as a NumPy array. - """ - noise_img = np.random.randn(*img.shape) - return (noise_img) - -# ===================================================== # \ No newline at end of file diff --git a/spaces/Wrathless/Dkrotzer-MusicalMagic/audiocraft/models/encodec.py b/spaces/Wrathless/Dkrotzer-MusicalMagic/audiocraft/models/encodec.py deleted file mode 100644 index 69621a695887b0b41614c51cae020f6fd0af221d..0000000000000000000000000000000000000000 --- a/spaces/Wrathless/Dkrotzer-MusicalMagic/audiocraft/models/encodec.py +++ /dev/null @@ -1,302 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from abc import ABC, abstractmethod -import typing as tp - -from einops import rearrange -import torch -from torch import nn - -from .. import quantization as qt - - -class CompressionModel(ABC, nn.Module): - - @abstractmethod - def forward(self, x: torch.Tensor) -> qt.QuantizedResult: - ... - - @abstractmethod - def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]: - """See `EncodecModel.encode`""" - ... - - @abstractmethod - def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None): - """See `EncodecModel.decode`""" - ... - - @property - @abstractmethod - def channels(self) -> int: - ... - - @property - @abstractmethod - def frame_rate(self) -> int: - ... - - @property - @abstractmethod - def sample_rate(self) -> int: - ... - - @property - @abstractmethod - def cardinality(self) -> int: - ... - - @property - @abstractmethod - def num_codebooks(self) -> int: - ... - - @property - @abstractmethod - def total_codebooks(self) -> int: - ... - - @abstractmethod - def set_num_codebooks(self, n: int): - """Set the active number of codebooks used by the quantizer. - """ - ... - - -class EncodecModel(CompressionModel): - """Encodec model operating on the raw waveform. - - Args: - encoder (nn.Module): Encoder network. - decoder (nn.Module): Decoder network. - quantizer (qt.BaseQuantizer): Quantizer network. - frame_rate (int): Frame rate for the latent representation. - sample_rate (int): Audio sample rate. - channels (int): Number of audio channels. - causal (bool): Whether to use a causal version of the model. 
- renormalize (bool): Whether to renormalize the audio before running the model. - """ - # we need assignement to override the property in the abstract class, - # I couldn't find a better way... - frame_rate: int = 0 - sample_rate: int = 0 - channels: int = 0 - - def __init__(self, - encoder: nn.Module, - decoder: nn.Module, - quantizer: qt.BaseQuantizer, - frame_rate: int, - sample_rate: int, - channels: int, - causal: bool = False, - renormalize: bool = False): - super().__init__() - self.encoder = encoder - self.decoder = decoder - self.quantizer = quantizer - self.frame_rate = frame_rate - self.sample_rate = sample_rate - self.channels = channels - self.renormalize = renormalize - self.causal = causal - if self.causal: - # we force disabling here to avoid handling linear overlap of segments - # as supported in original EnCodec codebase. - assert not self.renormalize, 'Causal model does not support renormalize' - - @property - def total_codebooks(self): - """Total number of quantizer codebooks available. - """ - return self.quantizer.total_codebooks - - @property - def num_codebooks(self): - """Active number of codebooks used by the quantizer. - """ - return self.quantizer.num_codebooks - - def set_num_codebooks(self, n: int): - """Set the active number of codebooks used by the quantizer. - """ - self.quantizer.set_num_codebooks(n) - - @property - def cardinality(self): - """Cardinality of each codebook. - """ - return self.quantizer.bins - - def preprocess(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]: - scale: tp.Optional[torch.Tensor] - if self.renormalize: - mono = x.mean(dim=1, keepdim=True) - volume = mono.pow(2).mean(dim=2, keepdim=True).sqrt() - scale = 1e-8 + volume - x = x / scale - scale = scale.view(-1, 1) - else: - scale = None - return x, scale - - def postprocess(self, - x: torch.Tensor, - scale: tp.Optional[torch.Tensor] = None) -> torch.Tensor: - if scale is not None: - assert self.renormalize - x = x * scale.view(-1, 1, 1) - return x - - def forward(self, x: torch.Tensor) -> qt.QuantizedResult: - assert x.dim() == 3 - length = x.shape[-1] - x, scale = self.preprocess(x) - - emb = self.encoder(x) - q_res = self.quantizer(emb, self.frame_rate) - out = self.decoder(q_res.x) - - # remove extra padding added by the encoder and decoder - assert out.shape[-1] >= length, (out.shape[-1], length) - out = out[..., :length] - - q_res.x = self.postprocess(out, scale) - - return q_res - - def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]: - """Encode the given input tensor to quantized representation along with scale parameter. - - Args: - x (torch.Tensor): Float tensor of shape [B, C, T] - - Returns: - codes, scale (tp.Tuple[torch.Tensor, torch.Tensor]): Tuple composed of: - codes a float tensor of shape [B, K, T] with K the number of codebooks used and T the timestep. - scale a float tensor containing the scale for audio renormalizealization. - """ - assert x.dim() == 3 - x, scale = self.preprocess(x) - emb = self.encoder(x) - codes = self.quantizer.encode(emb) - return codes, scale - - def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None): - """Decode the given codes to a reconstructed representation, using the scale to perform - audio denormalization if needed. - - Args: - codes (torch.Tensor): Int tensor of shape [B, K, T] - scale (tp.Optional[torch.Tensor]): Float tensor containing the scale value. 
- - Returns: - out (torch.Tensor): Float tensor of shape [B, C, T], the reconstructed audio. - """ - emb = self.quantizer.decode(codes) - out = self.decoder(emb) - out = self.postprocess(out, scale) - # out contains extra padding added by the encoder and decoder - return out - - -class FlattenedCompressionModel(CompressionModel): - """Wraps a CompressionModel and flatten its codebooks, e.g. - instead of returning [B, K, T], return [B, S, T * (K // S)] with - S the number of codebooks per step, and `K // S` the number of 'virtual steps' - for each real time step. - - Args: - model (CompressionModel): compression model to wrap. - codebooks_per_step (int): number of codebooks to keep per step, - this must divide the number of codebooks provided by the wrapped model. - extend_cardinality (bool): if True, and for instance if codebooks_per_step = 1, - if each codebook has a cardinality N, then the first codebook will - use the range [0, N - 1], and the second [N, 2 N - 1] etc. - On decoding, this can lead to potentially invalid sequences. - Any invalid entry will be silently remapped to the proper range - with a modulo. - """ - def __init__(self, model: CompressionModel, codebooks_per_step: int = 1, - extend_cardinality: bool = True): - super().__init__() - self.model = model - self.codebooks_per_step = codebooks_per_step - self.extend_cardinality = extend_cardinality - - @property - def total_codebooks(self): - return self.model.total_codebooks - - @property - def num_codebooks(self): - """Active number of codebooks used by the quantizer. - - ..Warning:: this reports the number of codebooks after the flattening - of the codebooks! - """ - assert self.model.num_codebooks % self.codebooks_per_step == 0 - return self.codebooks_per_step - - def set_num_codebooks(self, n: int): - """Set the active number of codebooks used by the quantizer. - - ..Warning:: this sets the number of codebooks **before** the flattening - of the codebooks. - """ - assert n % self.codebooks_per_step == 0 - self.model.set_num_codebooks(n) - - @property - def num_virtual_steps(self) -> int: - """Return the number of virtual steps, e.g. one real step - will be split into that many steps. - """ - return self.model.num_codebooks // self.codebooks_per_step - - @property - def frame_rate(self) -> int: - return self.model.frame_rate * self.num_virtual_steps - - @property - def sample_rate(self) -> int: - return self.model.sample_rate - - @property - def channels(self) -> int: - return self.model.channels - - @property - def cardinality(self): - """Cardinality of each codebook. 
- """ - if self.extend_cardinality: - return self.model.cardinality * self.num_virtual_steps - else: - return self.model.cardinality - - def forward(self, x: torch.Tensor) -> qt.QuantizedResult: - raise NotImplementedError("Not supported, use encode and decode.") - - def encode(self, x: torch.Tensor) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]: - indices, scales = self.model.encode(x) - B, K, T = indices.shape - indices = rearrange(indices, 'b (k v) t -> b k t v', k=self.codebooks_per_step) - if self.extend_cardinality: - for virtual_step in range(1, self.num_virtual_steps): - indices[..., virtual_step] += self.model.cardinality * virtual_step - indices = rearrange(indices, 'b k t v -> b k (t v)') - return (indices, scales) - - def decode(self, codes: torch.Tensor, scale: tp.Optional[torch.Tensor] = None): - B, K, T = codes.shape - assert T % self.num_virtual_steps == 0 - codes = rearrange(codes, 'b k (t v) -> b (k v) t', v=self.num_virtual_steps) - # We silently ignore potential errors from the LM when - # using extend_cardinality. - codes = codes % self.model.cardinality - return self.model.decode(codes, scale) diff --git a/spaces/Xule/ChuanhuChatGPT/locale/extract_locale.py b/spaces/Xule/ChuanhuChatGPT/locale/extract_locale.py deleted file mode 100644 index 32b0924bd6dffe150cb3e481ddadef836b91b83c..0000000000000000000000000000000000000000 --- a/spaces/Xule/ChuanhuChatGPT/locale/extract_locale.py +++ /dev/null @@ -1,26 +0,0 @@ -import os -import json -import re - -# Define regular expression patterns -pattern = r'i18n\((\"{3}.*?\"{3}|\".*?\")\)' - -# Load the .py file -with open('ChuanhuChatbot.py', 'r', encoding='utf-8') as f: - contents = f.read() - -# Load the .py files in the modules folder -for filename in os.listdir("modules"): - if filename.endswith(".py"): - with open(os.path.join("modules", filename), "r", encoding="utf-8") as f: - contents += f.read() - -# Matching with regular expressions -matches = re.findall(pattern, contents, re.DOTALL) - -# Convert to key/value pairs -data = {match.strip('()"'): '' for match in matches} - -# Save as a JSON file -with open('labels.json', 'w', encoding='utf-8') as f: - json.dump(data, f, ensure_ascii=False, indent=4) \ No newline at end of file diff --git a/spaces/XzJosh/Azuma-Bert-VITS2/README.md b/spaces/XzJosh/Azuma-Bert-VITS2/README.md deleted file mode 100644 index 436723c2374ee914966afb4fa330d441a6c8ffce..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Azuma-Bert-VITS2/README.md +++ /dev/null @@ -1,5 +0,0 @@ ---- -license: mit -sdk: gradio -title: AI東雪蓮 ---- \ No newline at end of file diff --git a/spaces/XzJosh/Eileen-Bert-VITS2/modules.py b/spaces/XzJosh/Eileen-Bert-VITS2/modules.py deleted file mode 100644 index 92e0f32a51c472bfd1659a50a95a95d195281d2b..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Eileen-Bert-VITS2/modules.py +++ /dev/null @@ -1,452 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform -from attentions import Encoder - -LRELU_SLOPE = 0.1 - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = 
nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = 
torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if 
x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
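        # After the reshape above, the last dimension of h holds, per half-channel and
        # per time step, the (3 * num_bins - 1) raw spline parameters produced by
        # self.proj: num_bins bin widths, num_bins bin heights and (num_bins - 1) knot
        # derivatives, which are split out below and fed to
        # piecewise_rational_quadratic_transform.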
- - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x -class TransformerCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - n_layers, - n_heads, - p_dropout=0, - filter_channels=0, - mean_only=False, - wn_sharing_parameter=None, - gin_channels = 0 - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = Encoder(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = gin_channels) if wn_sharing_parameter is None else wn_sharing_parameter - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/YUANAI/DiffspeechResearch/utils/nn/seq_utils.py b/spaces/YUANAI/DiffspeechResearch/utils/nn/seq_utils.py deleted file mode 100644 index 1308bf7d1806a6c36de9c8af5e9d217eaefa7b56..0000000000000000000000000000000000000000 --- a/spaces/YUANAI/DiffspeechResearch/utils/nn/seq_utils.py +++ /dev/null @@ -1,305 +0,0 @@ -from collections import defaultdict -import torch -import torch.nn.functional as F - - -def make_positions(tensor, padding_idx): - """Replace non-padding symbols with their position numbers. - - Position numbers begin at padding_idx+1. Padding symbols are ignored. - """ - # The series of casts and type-conversions here are carefully - # balanced to both work with ONNX export and XLA. In particular XLA - # prefers ints, cumsum defaults to output longs, and ONNX doesn't know - # how to handle the dtype kwarg in cumsum. 
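    # Worked example: with padding_idx = 1 and tensor [[7, 7, 1, 1]], the mask is
    # [[1, 1, 0, 0]], the cumulative sum is [[1, 2, 2, 2]], multiplying by the mask
    # gives [[1, 2, 0, 0]], and adding padding_idx yields [[2, 3, 1, 1]]: real tokens
    # are numbered from padding_idx + 1 while padding positions stay at padding_idx.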
- mask = tensor.ne(padding_idx).int() - return ( - torch.cumsum(mask, dim=1).type_as(mask) * mask - ).long() + padding_idx - - -def softmax(x, dim): - return F.softmax(x, dim=dim, dtype=torch.float32) - - -def sequence_mask(lengths, maxlen, dtype=torch.bool): - if maxlen is None: - maxlen = lengths.max() - mask = ~(torch.ones((len(lengths), maxlen)).to(lengths.device).cumsum(dim=1).t() > lengths).t() - mask.type(dtype) - return mask - - -def weights_nonzero_speech(target): - # target : B x T x mel - # Assign weight 1.0 to all labels except for padding (id=0). - dim = target.size(-1) - return target.abs().sum(-1, keepdim=True).ne(0).float().repeat(1, 1, dim) - - -INCREMENTAL_STATE_INSTANCE_ID = defaultdict(lambda: 0) - - -def _get_full_incremental_state_key(module_instance, key): - module_name = module_instance.__class__.__name__ - - # assign a unique ID to each module instance, so that incremental state is - # not shared across module instances - if not hasattr(module_instance, '_instance_id'): - INCREMENTAL_STATE_INSTANCE_ID[module_name] += 1 - module_instance._instance_id = INCREMENTAL_STATE_INSTANCE_ID[module_name] - - return '{}.{}.{}'.format(module_name, module_instance._instance_id, key) - - -def get_incremental_state(module, incremental_state, key): - """Helper for getting incremental state for an nn.Module.""" - full_key = _get_full_incremental_state_key(module, key) - if incremental_state is None or full_key not in incremental_state: - return None - return incremental_state[full_key] - - -def set_incremental_state(module, incremental_state, key, value): - """Helper for setting incremental state for an nn.Module.""" - if incremental_state is not None: - full_key = _get_full_incremental_state_key(module, key) - incremental_state[full_key] = value - - -def fill_with_neg_inf(t): - """FP16-compatible function that fills a tensor with -inf.""" - return t.float().fill_(float('-inf')).type_as(t) - - -def fill_with_neg_inf2(t): - """FP16-compatible function that fills a tensor with -inf.""" - return t.float().fill_(-1e8).type_as(t) - - -def select_attn(attn_logits, type='best'): - """ - - :param attn_logits: [n_layers, B, n_head, T_sp, T_txt] - :return: - """ - encdec_attn = torch.stack(attn_logits, 0).transpose(1, 2) - # [n_layers * n_head, B, T_sp, T_txt] - encdec_attn = (encdec_attn.reshape([-1, *encdec_attn.shape[2:]])).softmax(-1) - if type == 'best': - indices = encdec_attn.max(-1).values.sum(-1).argmax(0) - encdec_attn = encdec_attn.gather( - 0, indices[None, :, None, None].repeat(1, 1, encdec_attn.size(-2), encdec_attn.size(-1)))[0] - return encdec_attn - elif type == 'mean': - return encdec_attn.mean(0) - - -def make_pad_mask(lengths, xs=None, length_dim=-1): - """Make mask tensor containing indices of padded part. - Args: - lengths (LongTensor or List): Batch of lengths (B,). - xs (Tensor, optional): The reference tensor. - If set, masks will be the same shape as this tensor. - length_dim (int, optional): Dimension indicator of the above tensor. - See the example. - Returns: - Tensor: Mask tensor containing indices of padded part. - dtype=torch.uint8 in PyTorch 1.2- - dtype=torch.bool in PyTorch 1.2+ (including 1.2) - Examples: - With only lengths. - >>> lengths = [5, 3, 2] - >>> make_non_pad_mask(lengths) - masks = [[0, 0, 0, 0 ,0], - [0, 0, 0, 1, 1], - [0, 0, 1, 1, 1]] - With the reference tensor. 
- >>> xs = torch.zeros((3, 2, 4)) - >>> make_pad_mask(lengths, xs) - tensor([[[0, 0, 0, 0], - [0, 0, 0, 0]], - [[0, 0, 0, 1], - [0, 0, 0, 1]], - [[0, 0, 1, 1], - [0, 0, 1, 1]]], dtype=torch.uint8) - >>> xs = torch.zeros((3, 2, 6)) - >>> make_pad_mask(lengths, xs) - tensor([[[0, 0, 0, 0, 0, 1], - [0, 0, 0, 0, 0, 1]], - [[0, 0, 0, 1, 1, 1], - [0, 0, 0, 1, 1, 1]], - [[0, 0, 1, 1, 1, 1], - [0, 0, 1, 1, 1, 1]]], dtype=torch.uint8) - With the reference tensor and dimension indicator. - >>> xs = torch.zeros((3, 6, 6)) - >>> make_pad_mask(lengths, xs, 1) - tensor([[[0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [1, 1, 1, 1, 1, 1]], - [[0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1]], - [[0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1]]], dtype=torch.uint8) - >>> make_pad_mask(lengths, xs, 2) - tensor([[[0, 0, 0, 0, 0, 1], - [0, 0, 0, 0, 0, 1], - [0, 0, 0, 0, 0, 1], - [0, 0, 0, 0, 0, 1], - [0, 0, 0, 0, 0, 1], - [0, 0, 0, 0, 0, 1]], - [[0, 0, 0, 1, 1, 1], - [0, 0, 0, 1, 1, 1], - [0, 0, 0, 1, 1, 1], - [0, 0, 0, 1, 1, 1], - [0, 0, 0, 1, 1, 1], - [0, 0, 0, 1, 1, 1]], - [[0, 0, 1, 1, 1, 1], - [0, 0, 1, 1, 1, 1], - [0, 0, 1, 1, 1, 1], - [0, 0, 1, 1, 1, 1], - [0, 0, 1, 1, 1, 1], - [0, 0, 1, 1, 1, 1]]], dtype=torch.uint8) - """ - if length_dim == 0: - raise ValueError("length_dim cannot be 0: {}".format(length_dim)) - - if not isinstance(lengths, list): - lengths = lengths.tolist() - bs = int(len(lengths)) - if xs is None: - maxlen = int(max(lengths)) - else: - maxlen = xs.size(length_dim) - - seq_range = torch.arange(0, maxlen, dtype=torch.int64) - seq_range_expand = seq_range.unsqueeze(0).expand(bs, maxlen) - seq_length_expand = seq_range_expand.new(lengths).unsqueeze(-1) - mask = seq_range_expand >= seq_length_expand - - if xs is not None: - assert xs.size(0) == bs, (xs.size(0), bs) - - if length_dim < 0: - length_dim = xs.dim() + length_dim - # ind = (:, None, ..., None, :, , None, ..., None) - ind = tuple( - slice(None) if i in (0, length_dim) else None for i in range(xs.dim()) - ) - mask = mask[ind].expand_as(xs).to(xs.device) - return mask - - -def make_non_pad_mask(lengths, xs=None, length_dim=-1): - """Make mask tensor containing indices of non-padded part. - Args: - lengths (LongTensor or List): Batch of lengths (B,). - xs (Tensor, optional): The reference tensor. - If set, masks will be the same shape as this tensor. - length_dim (int, optional): Dimension indicator of the above tensor. - See the example. - Returns: - ByteTensor: mask tensor containing indices of padded part. - dtype=torch.uint8 in PyTorch 1.2- - dtype=torch.bool in PyTorch 1.2+ (including 1.2) - Examples: - With only lengths. - >>> lengths = [5, 3, 2] - >>> make_non_pad_mask(lengths) - masks = [[1, 1, 1, 1 ,1], - [1, 1, 1, 0, 0], - [1, 1, 0, 0, 0]] - With the reference tensor. - >>> xs = torch.zeros((3, 2, 4)) - >>> make_non_pad_mask(lengths, xs) - tensor([[[1, 1, 1, 1], - [1, 1, 1, 1]], - [[1, 1, 1, 0], - [1, 1, 1, 0]], - [[1, 1, 0, 0], - [1, 1, 0, 0]]], dtype=torch.uint8) - >>> xs = torch.zeros((3, 2, 6)) - >>> make_non_pad_mask(lengths, xs) - tensor([[[1, 1, 1, 1, 1, 0], - [1, 1, 1, 1, 1, 0]], - [[1, 1, 1, 0, 0, 0], - [1, 1, 1, 0, 0, 0]], - [[1, 1, 0, 0, 0, 0], - [1, 1, 0, 0, 0, 0]]], dtype=torch.uint8) - With the reference tensor and dimension indicator. 
- >>> xs = torch.zeros((3, 6, 6)) - >>> make_non_pad_mask(lengths, xs, 1) - tensor([[[1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [0, 0, 0, 0, 0, 0]], - [[1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0]], - [[1, 1, 1, 1, 1, 1], - [1, 1, 1, 1, 1, 1], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0]]], dtype=torch.uint8) - >>> make_non_pad_mask(lengths, xs, 2) - tensor([[[1, 1, 1, 1, 1, 0], - [1, 1, 1, 1, 1, 0], - [1, 1, 1, 1, 1, 0], - [1, 1, 1, 1, 1, 0], - [1, 1, 1, 1, 1, 0], - [1, 1, 1, 1, 1, 0]], - [[1, 1, 1, 0, 0, 0], - [1, 1, 1, 0, 0, 0], - [1, 1, 1, 0, 0, 0], - [1, 1, 1, 0, 0, 0], - [1, 1, 1, 0, 0, 0], - [1, 1, 1, 0, 0, 0]], - [[1, 1, 0, 0, 0, 0], - [1, 1, 0, 0, 0, 0], - [1, 1, 0, 0, 0, 0], - [1, 1, 0, 0, 0, 0], - [1, 1, 0, 0, 0, 0], - [1, 1, 0, 0, 0, 0]]], dtype=torch.uint8) - """ - return ~make_pad_mask(lengths, xs, length_dim) - - -def get_mask_from_lengths(lengths): - max_len = torch.max(lengths).item() - ids = torch.arange(0, max_len).to(lengths.device) - mask = (ids < lengths.unsqueeze(1)).bool() - return mask - - -def group_hidden_by_segs(h, seg_ids, max_len): - """ - - :param h: [B, T, H] - :param seg_ids: [B, T] - :return: h_ph: [B, T_ph, H] - """ - B, T, H = h.shape - h_gby_segs = h.new_zeros([B, max_len + 1, H]).scatter_add_(1, seg_ids[:, :, None].repeat([1, 1, H]), h) - all_ones = h.new_ones(h.shape[:2]) - cnt_gby_segs = h.new_zeros([B, max_len + 1]).scatter_add_(1, seg_ids, all_ones).contiguous() - h_gby_segs = h_gby_segs[:, 1:] - cnt_gby_segs = cnt_gby_segs[:, 1:] - h_gby_segs = h_gby_segs / torch.clamp(cnt_gby_segs[:, :, None], min=1) - return h_gby_segs, cnt_gby_segs diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/.github/ISSUE_TEMPLATE.md b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/.github/ISSUE_TEMPLATE.md deleted file mode 100644 index 5e8aaa2d3722e7e73a3d94b2b7dfc4f751d7a240..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/.github/ISSUE_TEMPLATE.md +++ /dev/null @@ -1,5 +0,0 @@ - -Please select an issue template from -https://github.com/facebookresearch/detectron2/issues/new/choose . - -Otherwise your issue will be closed. diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/layers/__init__.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/layers/__init__.py deleted file mode 100644 index 3d015c530b3e33de8ea60943a0a98b135f013dd7..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/layers/__init__.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-from .batch_norm import FrozenBatchNorm2d, get_norm, NaiveSyncBatchNorm, CycleBatchNormList -from .deform_conv import DeformConv, ModulatedDeformConv -from .mask_ops import paste_masks_in_image -from .nms import batched_nms, batched_nms_rotated, nms, nms_rotated -from .roi_align import ROIAlign, roi_align -from .roi_align_rotated import ROIAlignRotated, roi_align_rotated -from .shape_spec import ShapeSpec -from .wrappers import ( - BatchNorm2d, - Conv2d, - ConvTranspose2d, - cat, - interpolate, - Linear, - nonzero_tuple, - cross_entropy, - shapes_to_tensor, -) -from .blocks import CNNBlockBase, DepthwiseSeparableConv2d -from .aspp import ASPP -from .losses import ciou_loss, diou_loss - -__all__ = [k for k in globals().keys() if not k.startswith("_")] diff --git a/spaces/YlcldKlns/bing/src/pages/api/healthz.ts b/spaces/YlcldKlns/bing/src/pages/api/healthz.ts deleted file mode 100644 index f6ae44ff0fd66ccd3f7feaa550025fbf2a83bf77..0000000000000000000000000000000000000000 --- a/spaces/YlcldKlns/bing/src/pages/api/healthz.ts +++ /dev/null @@ -1,7 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - res.status(200).end('ok') -} diff --git a/spaces/YuanMio/vits-uma-genshin-honkai/attentions.py b/spaces/YuanMio/vits-uma-genshin-honkai/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/YuanMio/vits-uma-genshin-honkai/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - 
self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, 
t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. 
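        # The pad / flatten / pad / reshape sequence here is the usual "skewing" trick
        # for relative attention: viewing the flat buffer with row width (2*length - 1)
        # shifts each row by one slot with respect to the previous one, so the slice
        # [:length, length-1:] taken below aligns relative offsets with absolute key
        # positions.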
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/Yuliang/ECON/lib/pymafx/models/hr_module.py b/spaces/Yuliang/ECON/lib/pymafx/models/hr_module.py deleted file mode 100644 index ad6243a463a45733a0c518e34c0dbcb115d39bcc..0000000000000000000000000000000000000000 --- a/spaces/Yuliang/ECON/lib/pymafx/models/hr_module.py +++ /dev/null @@ -1,464 +0,0 @@ -import logging -import os - -import torch -import torch._utils -import torch.nn as nn -import torch.nn.functional as F - -# from core.cfgs import cfg -from .res_module import BasicBlock, Bottleneck - -logger = logging.getLogger(__name__) - -BN_MOMENTUM = 0.1 - - -class HighResolutionModule(nn.Module): - def __init__( - self, - num_branches, - blocks, - num_blocks, - num_inchannels, - num_channels, - fuse_method, - multi_scale_output=True - ): - super().__init__() - self._check_branches(num_branches, blocks, num_blocks, num_inchannels, num_channels) - - self.num_inchannels = num_inchannels - self.fuse_method = fuse_method - self.num_branches = num_branches - - self.multi_scale_output = multi_scale_output - - self.branches = self._make_branches(num_branches, 
blocks, num_blocks, num_channels) - self.fuse_layers = self._make_fuse_layers() - self.relu = nn.ReLU(True) - - def _check_branches(self, num_branches, blocks, num_blocks, num_inchannels, num_channels): - if num_branches != len(num_blocks): - error_msg = 'NUM_BRANCHES({}) <> NUM_BLOCKS({})'.format(num_branches, len(num_blocks)) - logger.error(error_msg) - raise ValueError(error_msg) - - if num_branches != len(num_channels): - error_msg = 'NUM_BRANCHES({}) <> NUM_CHANNELS({})'.format( - num_branches, len(num_channels) - ) - logger.error(error_msg) - raise ValueError(error_msg) - - if num_branches != len(num_inchannels): - error_msg = 'NUM_BRANCHES({}) <> NUM_INCHANNELS({})'.format( - num_branches, len(num_inchannels) - ) - logger.error(error_msg) - raise ValueError(error_msg) - - def _make_one_branch(self, branch_index, block, num_blocks, num_channels, stride=1): - downsample = None - if stride != 1 or \ - self.num_inchannels[branch_index] != num_channels[branch_index] * block.expansion: - downsample = nn.Sequential( - nn.Conv2d( - self.num_inchannels[branch_index], - num_channels[branch_index] * block.expansion, - kernel_size=1, - stride=stride, - bias=False - ), - nn.BatchNorm2d(num_channels[branch_index] * block.expansion, momentum=BN_MOMENTUM), - ) - - layers = [] - layers.append( - block( - self.num_inchannels[branch_index], num_channels[branch_index], stride, downsample - ) - ) - self.num_inchannels[branch_index] = \ - num_channels[branch_index] * block.expansion - for i in range(1, num_blocks[branch_index]): - layers.append(block(self.num_inchannels[branch_index], num_channels[branch_index])) - - return nn.Sequential(*layers) - - def _make_branches(self, num_branches, block, num_blocks, num_channels): - branches = [] - - for i in range(num_branches): - branches.append(self._make_one_branch(i, block, num_blocks, num_channels)) - - return nn.ModuleList(branches) - - def _make_fuse_layers(self): - if self.num_branches == 1: - return None - - num_branches = self.num_branches - num_inchannels = self.num_inchannels - fuse_layers = [] - for i in range(num_branches if self.multi_scale_output else 1): - fuse_layer = [] - for j in range(num_branches): - if j > i: - fuse_layer.append( - nn.Sequential( - nn.Conv2d(num_inchannels[j], num_inchannels[i], 1, 1, 0, bias=False), - nn.BatchNorm2d(num_inchannels[i]), - nn.Upsample(scale_factor=2**(j - i), mode='nearest') - ) - ) - elif j == i: - fuse_layer.append(None) - else: - conv3x3s = [] - for k in range(i - j): - if k == i - j - 1: - num_outchannels_conv3x3 = num_inchannels[i] - conv3x3s.append( - nn.Sequential( - nn.Conv2d( - num_inchannels[j], - num_outchannels_conv3x3, - 3, - 2, - 1, - bias=False - ), nn.BatchNorm2d(num_outchannels_conv3x3) - ) - ) - else: - num_outchannels_conv3x3 = num_inchannels[j] - conv3x3s.append( - nn.Sequential( - nn.Conv2d( - num_inchannels[j], - num_outchannels_conv3x3, - 3, - 2, - 1, - bias=False - ), nn.BatchNorm2d(num_outchannels_conv3x3), nn.ReLU(True) - ) - ) - fuse_layer.append(nn.Sequential(*conv3x3s)) - fuse_layers.append(nn.ModuleList(fuse_layer)) - - return nn.ModuleList(fuse_layers) - - def get_num_inchannels(self): - return self.num_inchannels - - def forward(self, x): - if self.num_branches == 1: - return [self.branches[0](x[0])] - - for i in range(self.num_branches): - x[i] = self.branches[i](x[i]) - - x_fuse = [] - - for i in range(len(self.fuse_layers)): - y = x[0] if i == 0 else self.fuse_layers[i][0](x[0]) - for j in range(1, self.num_branches): - if i == j: - y = y + x[j] - else: - y = y + 
self.fuse_layers[i][j](x[j]) - x_fuse.append(self.relu(y)) - - return x_fuse - - -blocks_dict = {'BASIC': BasicBlock, 'BOTTLENECK': Bottleneck} - - -class PoseHighResolutionNet(nn.Module): - def __init__(self, cfg, pretrained=True, global_mode=False): - self.inplanes = 64 - extra = cfg.HR_MODEL.EXTRA - super().__init__() - - # stem net - self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1, bias=False) - self.bn1 = nn.BatchNorm2d(64, momentum=BN_MOMENTUM) - self.conv2 = nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1, bias=False) - self.bn2 = nn.BatchNorm2d(64, momentum=BN_MOMENTUM) - self.relu = nn.ReLU(inplace=True) - self.layer1 = self._make_layer(Bottleneck, self.inplanes, 64, 4) - - self.stage2_cfg = cfg['HR_MODEL']['EXTRA']['STAGE2'] - num_channels = self.stage2_cfg['NUM_CHANNELS'] - block = blocks_dict[self.stage2_cfg['BLOCK']] - num_channels = [num_channels[i] * block.expansion for i in range(len(num_channels))] - self.transition1 = self._make_transition_layer([256], num_channels) - self.stage2, pre_stage_channels = self._make_stage(self.stage2_cfg, num_channels) - - self.stage3_cfg = cfg['HR_MODEL']['EXTRA']['STAGE3'] - num_channels = self.stage3_cfg['NUM_CHANNELS'] - block = blocks_dict[self.stage3_cfg['BLOCK']] - num_channels = [num_channels[i] * block.expansion for i in range(len(num_channels))] - self.transition2 = self._make_transition_layer(pre_stage_channels, num_channels) - self.stage3, pre_stage_channels = self._make_stage(self.stage3_cfg, num_channels) - - self.stage4_cfg = cfg['HR_MODEL']['EXTRA']['STAGE4'] - num_channels = self.stage4_cfg['NUM_CHANNELS'] - block = blocks_dict[self.stage4_cfg['BLOCK']] - num_channels = [num_channels[i] * block.expansion for i in range(len(num_channels))] - self.transition3 = self._make_transition_layer(pre_stage_channels, num_channels) - self.stage4, pre_stage_channels = self._make_stage( - self.stage4_cfg, num_channels, multi_scale_output=True - ) - - # Classification Head - self.global_mode = global_mode - if self.global_mode: - self.incre_modules, self.downsamp_modules, \ - self.final_layer = self._make_head(pre_stage_channels) - - self.pretrained_layers = cfg['HR_MODEL']['EXTRA']['PRETRAINED_LAYERS'] - - def _make_head(self, pre_stage_channels): - head_block = Bottleneck - head_channels = [32, 64, 128, 256] - - # Increasing the #channels on each resolution - # from C, 2C, 4C, 8C to 128, 256, 512, 1024 - incre_modules = [] - for i, channels in enumerate(pre_stage_channels): - incre_module = self._make_layer(head_block, channels, head_channels[i], 1, stride=1) - incre_modules.append(incre_module) - incre_modules = nn.ModuleList(incre_modules) - - # downsampling modules - downsamp_modules = [] - for i in range(len(pre_stage_channels) - 1): - in_channels = head_channels[i] * head_block.expansion - out_channels = head_channels[i + 1] * head_block.expansion - - downsamp_module = nn.Sequential( - nn.Conv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=3, - stride=2, - padding=1 - ), nn.BatchNorm2d(out_channels, momentum=BN_MOMENTUM), nn.ReLU(inplace=True) - ) - - downsamp_modules.append(downsamp_module) - downsamp_modules = nn.ModuleList(downsamp_modules) - - final_layer = nn.Sequential( - nn.Conv2d( - in_channels=head_channels[3] * head_block.expansion, - out_channels=2048, - kernel_size=1, - stride=1, - padding=0 - ), nn.BatchNorm2d(2048, momentum=BN_MOMENTUM), nn.ReLU(inplace=True) - ) - - return incre_modules, downsamp_modules, final_layer - - def _make_transition_layer(self, 
num_channels_pre_layer, num_channels_cur_layer): - num_branches_cur = len(num_channels_cur_layer) - num_branches_pre = len(num_channels_pre_layer) - - transition_layers = [] - for i in range(num_branches_cur): - if i < num_branches_pre: - if num_channels_cur_layer[i] != num_channels_pre_layer[i]: - transition_layers.append( - nn.Sequential( - nn.Conv2d( - num_channels_pre_layer[i], - num_channels_cur_layer[i], - 3, - 1, - 1, - bias=False - ), nn.BatchNorm2d(num_channels_cur_layer[i]), nn.ReLU(inplace=True) - ) - ) - else: - transition_layers.append(None) - else: - conv3x3s = [] - for j in range(i + 1 - num_branches_pre): - inchannels = num_channels_pre_layer[-1] - outchannels = num_channels_cur_layer[i] \ - if j == i-num_branches_pre else inchannels - conv3x3s.append( - nn.Sequential( - nn.Conv2d(inchannels, outchannels, 3, 2, 1, bias=False), - nn.BatchNorm2d(outchannels), nn.ReLU(inplace=True) - ) - ) - transition_layers.append(nn.Sequential(*conv3x3s)) - - return nn.ModuleList(transition_layers) - - def _make_layer(self, block, inplanes, planes, blocks, stride=1): - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d( - inplanes, planes * block.expansion, kernel_size=1, stride=stride, bias=False - ), - nn.BatchNorm2d(planes * block.expansion, momentum=BN_MOMENTUM), - ) - - layers = [] - layers.append(block(inplanes, planes, stride, downsample)) - inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append(block(inplanes, planes)) - - return nn.Sequential(*layers) - - def _make_stage(self, layer_config, num_inchannels, multi_scale_output=True): - num_modules = layer_config['NUM_MODULES'] - num_branches = layer_config['NUM_BRANCHES'] - num_blocks = layer_config['NUM_BLOCKS'] - num_channels = layer_config['NUM_CHANNELS'] - block = blocks_dict[layer_config['BLOCK']] - fuse_method = layer_config['FUSE_METHOD'] - - modules = [] - for i in range(num_modules): - # multi_scale_output is only used last module - if not multi_scale_output and i == num_modules - 1: - reset_multi_scale_output = False - else: - reset_multi_scale_output = True - - modules.append( - HighResolutionModule( - num_branches, block, num_blocks, num_inchannels, num_channels, fuse_method, - reset_multi_scale_output - ) - ) - num_inchannels = modules[-1].get_num_inchannels() - - return nn.Sequential(*modules), num_inchannels - - def forward(self, x): - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.conv2(x) - x = self.bn2(x) - x = self.relu(x) - x = self.layer1(x) - - x_list = [] - for i in range(self.stage2_cfg['NUM_BRANCHES']): - if self.transition1[i] is not None: - x_list.append(self.transition1[i](x)) - else: - x_list.append(x) - y_list = self.stage2(x_list) - - s_feat_s2 = y_list[0] - - x_list = [] - for i in range(self.stage3_cfg['NUM_BRANCHES']): - if self.transition2[i] is not None: - x_list.append(self.transition2[i](y_list[-1])) - else: - x_list.append(y_list[i]) - y_list = self.stage3(x_list) - - s_feat_s3 = y_list[0] - - x_list = [] - for i in range(self.stage4_cfg['NUM_BRANCHES']): - if self.transition3[i] is not None: - x_list.append(self.transition3[i](y_list[-1])) - else: - x_list.append(y_list[i]) - y_list = self.stage4(x_list) - - s_feat = [y_list[-2], y_list[-3], y_list[-4]] - - # s_feat_s4 = y_list[0] - - # if cfg.MODEL.PyMAF.HR_FEAT_STAGE == 2: - # s_feat = s_feat_s2 - # elif cfg.MODEL.PyMAF.HR_FEAT_STAGE == 3: - # s_feat = s_feat_s3 - # elif cfg.MODEL.PyMAF.HR_FEAT_STAGE == 4: - # s_feat = s_feat_s4 - # 
else: - # raise ValueError('HR_FEAT_STAGE should be 2, 3, or 4.') - - # Classification Head - if self.global_mode: - y = self.incre_modules[0](y_list[0]) - for i in range(len(self.downsamp_modules)): - y = self.incre_modules[i + 1](y_list[i + 1]) + \ - self.downsamp_modules[i](y) - - y = self.final_layer(y) - - if torch._C._get_tracing_state(): - xf = y.flatten(start_dim=2).mean(dim=2) - else: - xf = F.avg_pool2d(y, kernel_size=y.size()[2:]).view(y.size(0), -1) - else: - xf = None - - return s_feat, xf - - def init_weights(self, pretrained=''): - # logger.info('=> init weights from normal distribution') - for m in self.modules(): - if isinstance(m, nn.Conv2d): - # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - nn.init.normal_(m.weight, std=0.001) - for name, _ in m.named_parameters(): - if name in ['bias']: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.BatchNorm2d): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.ConvTranspose2d): - nn.init.normal_(m.weight, std=0.001) - for name, _ in m.named_parameters(): - if name in ['bias']: - nn.init.constant_(m.bias, 0) - - if os.path.isfile(pretrained): - pretrained_state_dict = torch.load(pretrained) - # logger.info('=> loading pretrained HRnet model {}'.format(pretrained)) - - need_init_state_dict = {} - for name, m in pretrained_state_dict.items(): - if name.split('.')[0] in self.pretrained_layers \ - or self.pretrained_layers[0] is '*': - need_init_state_dict[name] = m - self.load_state_dict(need_init_state_dict, strict=False) - elif pretrained: - logger.error('=> please download pre-trained models first!') - raise ValueError('{} is not exist!'.format(pretrained)) - - -def get_hrnet_encoder(cfg, init_weight=True, global_mode=False, **kwargs): - model = PoseHighResolutionNet(cfg, global_mode=global_mode) - - if init_weight: - if cfg.HR_MODEL.PRETR_SET in ['imagenet']: - model.init_weights(cfg.HR_MODEL.PRETRAINED_IM) - logger.info('loaded HRNet imagenet pretrained model') - elif cfg.HR_MODEL.PRETR_SET in ['coco']: - model.init_weights(cfg.HR_MODEL.PRETRAINED_COCO) - logger.info('loaded HRNet coco pretrained model') - else: - model.init_weights() - - return model diff --git a/spaces/Zwicky18/vits-models/models.py b/spaces/Zwicky18/vits-models/models.py deleted file mode 100644 index 8353b867f441de7e4d05aef980e672899c3a8889..0000000000000000000000000000000000000000 --- a/spaces/Zwicky18/vits-models/models.py +++ /dev/null @@ -1,533 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. 
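    # Note: the assignment above deliberately overrides the filter_channels
    # argument with in_channels, so every layer below is built with
    # in_channels width (hence the "to be removed" remark in that comment).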
- self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, 
in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, 
resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - 
norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = 
torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." - g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - diff --git a/spaces/abdvl/datahub_qa_bot/docs/how/ui-tabs-guide.md b/spaces/abdvl/datahub_qa_bot/docs/how/ui-tabs-guide.md deleted file mode 100644 index 6a82a36cd813c3ef9e72634c33af03d3ecea6e3b..0000000000000000000000000000000000000000 --- a/spaces/abdvl/datahub_qa_bot/docs/how/ui-tabs-guide.md +++ /dev/null @@ -1,17 +0,0 @@ -# UI Tabs Guide - -Some of the tabs in the UI might not be enabled by default. This guide is supposed to tell Admins of DataHub how to enable those UI tabs. 
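For the Stats and Queries tabs covered in the next section, enabling them comes down to running one of DataHub's usage-capable ingestion sources against your warehouse. As a rough, hypothetical sketch (the source type and its config fields are placeholders that depend on your connector and DataHub version), such an ingestion can also be run programmatically:

```python
# Hypothetical sketch: run a usage-capable source so the Stats/Queries tabs
# have data to show. All connection values below are placeholders.
from datahub.ingestion.run.pipeline import Pipeline

pipeline = Pipeline.create(
    {
        "source": {
            "type": "snowflake",  # any usage-capable source listed below also works
            "config": {
                "account_id": "my_account",    # placeholder
                "username": "datahub_reader",  # placeholder
                "password": "********",        # placeholder
            },
        },
        "sink": {
            "type": "datahub-rest",
            "config": {"server": "http://localhost:8080"},
        },
    }
)
pipeline.run()
pipeline.raise_from_status()
```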
- -## Datasets -### Stats and Queries Tab - -To enable these tabs you need to use one of the usage sources which gets the relevant metadata from your sources and ingests them into DataHub. These usage sources are listed under other sources which support them e.g. [Snowflake source](../../docs/generated/ingestion/sources/snowflake.md), [BigQuery source](../../docs/generated/ingestion/sources/bigquery.md) - -### Validation Tab - -This tab is enabled if you use [Data Quality Integration with Great Expectations](../../metadata-ingestion/integration_docs/great-expectations.md). - -## Common to multiple entities -### Properties Tab - -Properties are a catch-all bag for metadata not captured in other aspects stored for a Dataset. These are populated via the various source connectors when [metadata is ingested](../../metadata-ingestion/README.md). \ No newline at end of file diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/hooks/logger/neptune.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/hooks/logger/neptune.py deleted file mode 100644 index 7a38772b0c93a8608f32c6357b8616e77c139dc9..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/runner/hooks/logger/neptune.py +++ /dev/null @@ -1,82 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from ...dist_utils import master_only -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class NeptuneLoggerHook(LoggerHook): - """Class to log metrics to NeptuneAI. - - It requires `neptune-client` to be installed. - - Args: - init_kwargs (dict): a dict contains the initialization keys as below: - - project (str): Name of a project in a form of - namespace/project_name. If None, the value of - NEPTUNE_PROJECT environment variable will be taken. - - api_token (str): User’s API token. - If None, the value of NEPTUNE_API_TOKEN environment - variable will be taken. Note: It is strongly recommended - to use NEPTUNE_API_TOKEN environment variable rather than - placing your API token in plain text in your source code. - - name (str, optional, default is 'Untitled'): Editable name of - the run. Name is displayed in the run's Details and in - Runs table as a column. - Check https://docs.neptune.ai/api-reference/neptune#init for - more init arguments. - interval (int): Logging interval (every k iterations). - ignore_last (bool): Ignore the log of last iterations in each epoch - if less than `interval`. - reset_flag (bool): Whether to clear the output buffer after logging - by_epoch (bool): Whether EpochBasedRunner is used. - - .. 
_NeptuneAI: - https://docs.neptune.ai/you-should-know/logging-metadata - """ - - def __init__(self, - init_kwargs=None, - interval=10, - ignore_last=True, - reset_flag=True, - with_step=True, - by_epoch=True): - - super(NeptuneLoggerHook, self).__init__(interval, ignore_last, - reset_flag, by_epoch) - self.import_neptune() - self.init_kwargs = init_kwargs - self.with_step = with_step - - def import_neptune(self): - try: - import neptune.new as neptune - except ImportError: - raise ImportError( - 'Please run "pip install neptune-client" to install neptune') - self.neptune = neptune - self.run = None - - @master_only - def before_run(self, runner): - if self.init_kwargs: - self.run = self.neptune.init(**self.init_kwargs) - else: - self.run = self.neptune.init() - - @master_only - def log(self, runner): - tags = self.get_loggable_tags(runner) - if tags: - for tag_name, tag_value in tags.items(): - if self.with_step: - self.run[tag_name].log( - tag_value, step=self.get_iter(runner)) - else: - tags['global_step'] = self.get_iter(runner) - self.run[tag_name].log(tags) - - @master_only - def after_run(self, runner): - self.run.stop() diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/configs/_base_/models/pspnet_unet_s5-d16.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/configs/_base_/models/pspnet_unet_s5-d16.py deleted file mode 100644 index fcff9ec4f41fad158344ecd77313dc14564f3682..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/configs/_base_/models/pspnet_unet_s5-d16.py +++ /dev/null @@ -1,50 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained=None, - backbone=dict( - type='UNet', - in_channels=3, - base_channels=64, - num_stages=5, - strides=(1, 1, 1, 1, 1), - enc_num_convs=(2, 2, 2, 2, 2), - dec_num_convs=(2, 2, 2, 2), - downsamples=(True, True, True, True), - enc_dilations=(1, 1, 1, 1, 1), - dec_dilations=(1, 1, 1, 1), - with_cp=False, - conv_cfg=None, - norm_cfg=norm_cfg, - act_cfg=dict(type='ReLU'), - upsample_cfg=dict(type='InterpConv'), - norm_eval=False), - decode_head=dict( - type='PSPHead', - in_channels=64, - in_index=4, - channels=16, - pool_scales=(1, 2, 3, 6), - dropout_ratio=0.1, - num_classes=2, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=128, - in_index=3, - channels=64, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=2, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='slide', crop_size=256, stride=170)) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv_custom/__init__.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv_custom/__init__.py deleted file mode 100644 index 0df4eca2b98fa2fcfe20338cfe9f153c8cd11b70..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv_custom/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -''' - * Copyright (c) 2023 Salesforce, Inc. - * All rights reserved. 
- * SPDX-License-Identifier: Apache License 2.0 - * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/ - * By Can Qin - * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet - * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala - * Modified from MMCV repo: From https://github.com/open-mmlab/mmcv - * Copyright (c) OpenMMLab. All rights reserved. -''' - -# -*- coding: utf-8 -*- - -from .checkpoint import load_checkpoint - -__all__ = ['load_checkpoint'] \ No newline at end of file diff --git a/spaces/adirik/kakao-brain-vit/backbone/layers.py b/spaces/adirik/kakao-brain-vit/backbone/layers.py deleted file mode 100644 index 3dfb9b8bfd82da5b8e152c6ac84fac1464e4cf8f..0000000000000000000000000000000000000000 --- a/spaces/adirik/kakao-brain-vit/backbone/layers.py +++ /dev/null @@ -1,8 +0,0 @@ -import tensorflow as tf - -class Identity(tf.keras.layers.Layer): - def __init__(self, name): - super(Identity, self).__init__(name=name) - - def call(self, x): - return x \ No newline at end of file diff --git a/spaces/akashAD/yolov5-classify/app.py b/spaces/akashAD/yolov5-classify/app.py deleted file mode 100644 index ba77c3e798cd59bc8ed8a2e3ba4417ba8f7ee800..0000000000000000000000000000000000000000 --- a/spaces/akashAD/yolov5-classify/app.py +++ /dev/null @@ -1,48 +0,0 @@ -import torch -from torchvision import transforms -import gradio as gr -import requests -from PIL import Image - -#load models from pytorch hub - -model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5m-cls.pt').eval() # load from PyTorch Hub -model.classify = True -model.conf = 0.40 - - -# load imagenet 1000 labels -response = requests.get("https://git.io/JJkYN") -labels = response.text.split("\n") - -def preprocess_image(inp): - # Define the preprocessing steps - preprocess = transforms.Compose([ - transforms.Resize(256), - transforms.CenterCrop(224), - transforms.ToTensor(), - transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) - ]) - # Apply the preprocessing steps to the image - image = preprocess(inp) - # Convert the image to a PyTorch tensor - image = torch.tensor(image).unsqueeze(0) - - return image - -def predict(inp): - - with torch.no_grad(): - prediction = torch.nn.functional.softmax(model(preprocess_image(inp))[0], dim=0) - - print(prediction) - confidences = {labels[i]: float(prediction[i]) for i in range(1000)} - return confidences - - -gr.Interface(fn=predict, - inputs=gr.Image(type="pil"), - outputs=gr.Label(num_top_classes=7), - examples=["karda3.png", "lion.png"]).launch() - - \ No newline at end of file diff --git a/spaces/akhaliq/Mask2Former/mask2former/data/datasets/register_ade20k_instance.py b/spaces/akhaliq/Mask2Former/mask2former/data/datasets/register_ade20k_instance.py deleted file mode 100644 index 1ded7095cde756dfa1d94c25b2f7d1d2e5da6313..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Mask2Former/mask2former/data/datasets/register_ade20k_instance.py +++ /dev/null @@ -1,53 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
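# This module registers the ADE20K instance-segmentation splits (COCO-style
# JSON annotations listed in _PREDEFINED_SPLITS below) with Detectron2's
# DatasetCatalog/MetadataCatalog so they can be referenced by name.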
-import json -import logging -import numpy as np -import os -from PIL import Image - -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.data.datasets.coco import load_coco_json, register_coco_instances -from detectron2.utils.file_io import PathManager - -ADE_CATEGORIES = [{'id': 7, 'name': 'bed'}, {'id': 8, 'name': 'windowpane'}, {'id': 10, 'name': 'cabinet'}, {'id': 12, 'name': 'person'}, {'id': 14, 'name': 'door'}, {'id': 15, 'name': 'table'}, {'id': 18, 'name': 'curtain'}, {'id': 19, 'name': 'chair'}, {'id': 20, 'name': 'car'}, {'id': 22, 'name': 'painting'}, {'id': 23, 'name': 'sofa'}, {'id': 24, 'name': 'shelf'}, {'id': 27, 'name': 'mirror'}, {'id': 30, 'name': 'armchair'}, {'id': 31, 'name': 'seat'}, {'id': 32, 'name': 'fence'}, {'id': 33, 'name': 'desk'}, {'id': 35, 'name': 'wardrobe'}, {'id': 36, 'name': 'lamp'}, {'id': 37, 'name': 'bathtub'}, {'id': 38, 'name': 'railing'}, {'id': 39, 'name': 'cushion'}, {'id': 41, 'name': 'box'}, {'id': 42, 'name': 'column'}, {'id': 43, 'name': 'signboard'}, {'id': 44, 'name': 'chest of drawers'}, {'id': 45, 'name': 'counter'}, {'id': 47, 'name': 'sink'}, {'id': 49, 'name': 'fireplace'}, {'id': 50, 'name': 'refrigerator'}, {'id': 53, 'name': 'stairs'}, {'id': 55, 'name': 'case'}, {'id': 56, 'name': 'pool table'}, {'id': 57, 'name': 'pillow'}, {'id': 58, 'name': 'screen door'}, {'id': 62, 'name': 'bookcase'}, {'id': 64, 'name': 'coffee table'}, {'id': 65, 'name': 'toilet'}, {'id': 66, 'name': 'flower'}, {'id': 67, 'name': 'book'}, {'id': 69, 'name': 'bench'}, {'id': 70, 'name': 'countertop'}, {'id': 71, 'name': 'stove'}, {'id': 72, 'name': 'palm'}, {'id': 73, 'name': 'kitchen island'}, {'id': 74, 'name': 'computer'}, {'id': 75, 'name': 'swivel chair'}, {'id': 76, 'name': 'boat'}, {'id': 78, 'name': 'arcade machine'}, {'id': 80, 'name': 'bus'}, {'id': 81, 'name': 'towel'}, {'id': 82, 'name': 'light'}, {'id': 83, 'name': 'truck'}, {'id': 85, 'name': 'chandelier'}, {'id': 86, 'name': 'awning'}, {'id': 87, 'name': 'streetlight'}, {'id': 88, 'name': 'booth'}, {'id': 89, 'name': 'television receiver'}, {'id': 90, 'name': 'airplane'}, {'id': 92, 'name': 'apparel'}, {'id': 93, 'name': 'pole'}, {'id': 95, 'name': 'bannister'}, {'id': 97, 'name': 'ottoman'}, {'id': 98, 'name': 'bottle'}, {'id': 102, 'name': 'van'}, {'id': 103, 'name': 'ship'}, {'id': 104, 'name': 'fountain'}, {'id': 107, 'name': 'washer'}, {'id': 108, 'name': 'plaything'}, {'id': 110, 'name': 'stool'}, {'id': 111, 'name': 'barrel'}, {'id': 112, 'name': 'basket'}, {'id': 115, 'name': 'bag'}, {'id': 116, 'name': 'minibike'}, {'id': 118, 'name': 'oven'}, {'id': 119, 'name': 'ball'}, {'id': 120, 'name': 'food'}, {'id': 121, 'name': 'step'}, {'id': 123, 'name': 'trade name'}, {'id': 124, 'name': 'microwave'}, {'id': 125, 'name': 'pot'}, {'id': 126, 'name': 'animal'}, {'id': 127, 'name': 'bicycle'}, {'id': 129, 'name': 'dishwasher'}, {'id': 130, 'name': 'screen'}, {'id': 132, 'name': 'sculpture'}, {'id': 133, 'name': 'hood'}, {'id': 134, 'name': 'sconce'}, {'id': 135, 'name': 'vase'}, {'id': 136, 'name': 'traffic light'}, {'id': 137, 'name': 'tray'}, {'id': 138, 'name': 'ashcan'}, {'id': 139, 'name': 'fan'}, {'id': 142, 'name': 'plate'}, {'id': 143, 'name': 'monitor'}, {'id': 144, 'name': 'bulletin board'}, {'id': 146, 'name': 'radiator'}, {'id': 147, 'name': 'glass'}, {'id': 148, 'name': 'clock'}, {'id': 149, 'name': 'flag'}] - - -_PREDEFINED_SPLITS = { - # point annotations without masks - "ade20k_instance_train": ( - "ADEChallengeData2016/images/training", - 
"ADEChallengeData2016/ade20k_instance_train.json", - ), - "ade20k_instance_val": ( - "ADEChallengeData2016/images/validation", - "ADEChallengeData2016/ade20k_instance_val.json", - ), -} - - -def _get_ade_instances_meta(): - thing_ids = [k["id"] for k in ADE_CATEGORIES] - assert len(thing_ids) == 100, len(thing_ids) - # Mapping from the incontiguous ADE category id to an id in [0, 99] - thing_dataset_id_to_contiguous_id = {k: i for i, k in enumerate(thing_ids)} - thing_classes = [k["name"] for k in ADE_CATEGORIES] - ret = { - "thing_dataset_id_to_contiguous_id": thing_dataset_id_to_contiguous_id, - "thing_classes": thing_classes, - } - return ret - - -def register_all_ade20k_instance(root): - for key, (image_root, json_file) in _PREDEFINED_SPLITS.items(): - # Assume pre-defined datasets live in `./datasets`. - register_coco_instances( - key, - _get_ade_instances_meta(), - os.path.join(root, json_file) if "://" not in json_file else json_file, - os.path.join(root, image_root), - ) - - -_root = os.getenv("DETECTRON2_DATASETS", "datasets") -register_all_ade20k_instance(_root) diff --git a/spaces/akhaliq/SWAG/README.md b/spaces/akhaliq/SWAG/README.md deleted file mode 100644 index 77bdba974b3d9e304a6d6831f2806e933ebcb777..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/SWAG/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: SWAG -emoji: 🔥 -colorFrom: purple -colorTo: green -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/akhaliq/deeplab2/tensorflow_ops/kernels/merge_semantic_and_instance_maps_op_kernel.cu.cc b/spaces/akhaliq/deeplab2/tensorflow_ops/kernels/merge_semantic_and_instance_maps_op_kernel.cu.cc deleted file mode 100644 index 46a0413ab95d9fd7430d51f481db9b9d7e7bcfe7..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/deeplab2/tensorflow_ops/kernels/merge_semantic_and_instance_maps_op_kernel.cu.cc +++ /dev/null @@ -1,296 +0,0 @@ -// Copyright 2021 The Deeplab2 Authors. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. 
- -#include -#ifdef GOOGLE_CUDA -#define EIGEN_USE_GPU - -#include -#include -#include - -#include /*third_party*/"tensorflow/core/framework/op_kernel.h" -#include /*third_party*/"tensorflow/core/framework/register_types.h" -#include /*third_party*/"tensorflow/core/framework/tensor.h" -#include /*third_party*/"tensorflow/core/framework/tensor_shape.h" -#include /*third_party*/"tensorflow/core/framework/types.h" -#include /*third_party*/"tensorflow/core/util/gpu_kernel_helper.h" -#include /*third_party*/"merge_semantic_and_instance_maps_op_kernel.h" // local headers - -namespace tensorflow_models { -namespace deeplab { -namespace deeplab2 { - -namespace functor { - -namespace { - -using ::tensorflow::CudaGridRangeX; -using ::tensorflow::GetGpuLaunchConfig; -using ::tensorflow::GpuLaunchConfig; -using ::tensorflow::Tensor; -using ::tensorflow::TTypes; - -using GPUDevice = ::Eigen::GpuDevice; - -// Maximum number of instances and semantic classes. We default to -// 1024 and 256, respectively. Increase the values, if your dataset -// contains more instances per image or more semantic classes. -constexpr int32_t kMaxNumInstance = 1024; -constexpr int32_t kMaxNumSemantic = 256; - -// CUDA kernel that initializes memory with a constant value. -template -__global__ void SetToValue(const int num_threads, const T value, T* x) { - for (int idx : CudaGridRangeX(num_threads)) { - x[idx] = value; - } -} - -// CUDA kernel that goes over each pixel, and collects the following stats: -// 1. Whether this pixel belongs to "thing" class. -// 2. Semantic label count inside each instance. -// 3. Total pixel area of each "stuff" class. -// Size of each GPU array: -// semantic_data: [height * width] -// instance_data: [height * width] -// is_thing_per_semantic_id: [kMaxNumSemantic] -// is_thing_per_pixel: [height * width] -// semantic_count_per_instance: [kMaxNumInstance * kMaxNumSemantic] -// stuff_area: [kMaxNumSemantic] -__global__ void CollectPixelStats(const int num_threads, - const int32_t* semantic_data, - const int32_t* instance_data, - const bool* is_thing_per_semantic_id, - bool* is_thing_per_pixel, - int32_t* semantic_count_per_instance, - int32_t* stuff_area) { - for (int idx : CudaGridRangeX(num_threads)) { - const int32_t semantic_label = - std::min(semantic_data[idx], kMaxNumSemantic - 1); - const int32_t instance_label = - std::min(instance_data[idx], kMaxNumInstance - 1); - const bool is_thing = is_thing_per_semantic_id[semantic_label]; - is_thing_per_pixel[idx] = is_thing; - - const int offset = instance_label * kMaxNumSemantic + semantic_label; - if (is_thing) { - tensorflow::CudaAtomicAdd(semantic_count_per_instance + offset, 1); - } else { - tensorflow::CudaAtomicAdd(stuff_area + semantic_label, 1); - } - } -} - -// CUDA kernel that merges semantic and instance prediction into panoptic map. -// Merging rules: -// 1. For "thing" class, its instance label will be reordered, and its semantic -// label depends on major semantic label inside this instance. -// 2. For "stuff" class, its instance label is 0, and semantic label will be -// a) void, if stuff area is small, and b) original semantic label. 
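// Illustration (added note, not from the original source): MergePredictions
// packs the (semantic_label, instance_label) pair into a single id as
//   panoptic = semantic_label * label_divisor + instance_label,
// so with an assumed label_divisor of 1000, a "thing" pixel of class 19 with
// per-class instance id 3 becomes 19003, while a kept "stuff" pixel of
// class 2 becomes 2000.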
-// Size of each GPU array: -// semantic_data: [height * width] -// instance_data: [height * width] -// is_thing_per_semantic_id: [kMaxNumSemantic] -// is_thing_per_pixel: [height * width] -// stuff_area: [kMaxNumSemantic] -// labels_per_instance: [kMaxNumInstance * 2] -// parsing_maps: [height * width] -__global__ void MergePredictions( - const int num_threads, const int32_t* semantic_data, - const int32_t* instance_data, const bool* is_thing_per_pixel, - const int32_t* stuff_area, const int32_t* labels_per_instance, - const int32_t stuff_area_limit, const int32_t label_divisor, - const int32_t void_label, int32_t* parsing_maps) { - for (int idx : CudaGridRangeX(num_threads)) { - const int32_t semantic_label = - std::min(semantic_data[idx], kMaxNumSemantic - 1); - const int32_t instance_label = - std::min(instance_data[idx], kMaxNumInstance - 1); - const int32_t is_thing = static_cast(is_thing_per_pixel[idx]); - - const int32_t semantic_label_if_is_thing = - labels_per_instance[instance_label * 2]; - const int32_t instance_label_if_is_thing = - labels_per_instance[instance_label * 2 + 1]; - const int32_t panoptic_label_if_is_thing = - semantic_label_if_is_thing * label_divisor + instance_label_if_is_thing; - - const int32_t is_void = static_cast( - stuff_area_limit > 0 && stuff_area[semantic_label] <= stuff_area_limit); - const int32_t semantic_label_if_is_stuff = - is_void * void_label + (1 - is_void) * semantic_label; - - parsing_maps[idx] = - is_thing * panoptic_label_if_is_thing + - (1 - is_thing) * (semantic_label_if_is_stuff * label_divisor); - } -} - -// Generates semantic and instance label for each predicted instance. -// Size of each GPU array: -// semantic_count_per_instance: [kMaxNumInstance * kMaxNumSemantic] -// labels_per_instance: [kMaxNumInstance * 2] -void CreateLabelsPerInstance(const GPUDevice& d, - const int32_t* semantic_count_per_instance, - int32_t* labels_per_instance) { - std::vector semantic_count_per_instance_host(kMaxNumInstance * - kMaxNumSemantic); - d.memcpyDeviceToHost(semantic_count_per_instance_host.data(), - semantic_count_per_instance, - kMaxNumInstance * kMaxNumSemantic * sizeof(int32_t)); - - // A flat 2D array with shape [kMaxNumInstance, 2], where each row - // represents (new semantic label, new instance label) for each instance. - std::vector labels_per_instance_host(kMaxNumInstance * 2); - - // Map semantic_label -> largest instance label of this semantic class. - std::unordered_map instance_count_per_semantic_class; - for (int i = 0; i < kMaxNumInstance; ++i) { - int max_pixel_count = 0; - int max_semantic_label = -1; - for (int j = 0; j < kMaxNumSemantic; ++j) { - const int current_count = - semantic_count_per_instance_host[i * kMaxNumSemantic + j]; - if (current_count > max_pixel_count) { - max_semantic_label = j; - max_pixel_count = current_count; - } - } - - labels_per_instance_host[2 * i] = std::max(0, max_semantic_label); - if (max_semantic_label >= 0) { - labels_per_instance_host[2 * i + 1] = - ++instance_count_per_semantic_class[max_semantic_label]; - } else { - labels_per_instance_host[2 * i + 1] = 0; - } - } - - d.memcpyHostToDevice(labels_per_instance, labels_per_instance_host.data(), - kMaxNumInstance * 2 * sizeof(int32_t)); -} - -} // namespace - -// Specialization of Convert1DInt32TensorToSet for GPU. 
-template <> -std::unordered_set Convert1DInt32TensorToSet(const GPUDevice& d, - const Tensor& tensor) { - const int n_vals = tensor.dim_size(0); - std::vector host_buffer(n_vals); - d.memcpyDeviceToHost(host_buffer.data(), tensor.tensor().data(), - n_vals * sizeof(int32_t)); - - return std::unordered_set(host_buffer.begin(), host_buffer.end()); -} - -// This function merges the semantic segmentation and class-agnostic -// instance segmentation to form the panoptic segmentation. In particular, -// the class label of each instance mask is inferred from the majority -// votes from the corresponding pixels in the semantic segmentation. This -// operation is first poposed in the DeeperLab paper and adopted by the -// Panoptic-DeepLab. -// - DeeperLab: Single-Shot Image Parser, T-J Yang, et al. arXiv:1902.05093. -// - Panoptic-DeepLab, B. Cheng, et al. In CVPR, 2020. -// Specialization of MergeSemanticAndInstanceMaps for GPU. -template <> -void MergeSemanticAndInstanceMaps::operator()( - const GPUDevice& d, typename TTypes::ConstTensor semantic_maps, - typename TTypes::ConstTensor instance_maps, - const std::unordered_set& thing_ids_set, int label_divisor, - int stuff_area_limit, int void_label, - typename TTypes::Tensor parsing_maps) { - const int num_batches = semantic_maps.dimension(0); - const int height = semantic_maps.dimension(1); - const int width = semantic_maps.dimension(2); - - // Allocate memory on host, which tells each semantic class is "thing" or not. - bool is_thing_per_semantic_id[kMaxNumSemantic]; - for (int i = 0; i < kMaxNumSemantic; ++i) { - is_thing_per_semantic_id[i] = - (thing_ids_set.find(i) != thing_ids_set.end()); - } - bool* is_thing_per_semantic_id_device = - reinterpret_cast(d.allocate_temp(kMaxNumSemantic * sizeof(bool))); - d.memcpyHostToDevice(is_thing_per_semantic_id_device, - is_thing_per_semantic_id, - kMaxNumSemantic * sizeof(bool)); - - // Allocate scratch memories on device. - bool* is_thing_per_pixel_device = - reinterpret_cast(d.allocate_temp(height * width * sizeof(bool))); - int32_t* semantic_count_per_instance_device = reinterpret_cast( - d.allocate_temp(kMaxNumInstance * kMaxNumSemantic * sizeof(int32_t))); - int32_t* stuff_area_device = reinterpret_cast( - d.allocate_temp(kMaxNumSemantic * sizeof(int32_t))); - int32_t* labels_per_instance_device = reinterpret_cast( - d.allocate_temp(kMaxNumInstance * 2 * sizeof(int32_t))); - - GpuLaunchConfig config; - int total_count = 0; - for (int b = 0; b < num_batches; ++b) { - const int batch_offset = b * height * width; - // Initialize memories that hold counters. - total_count = kMaxNumInstance * kMaxNumSemantic; - config = GetGpuLaunchConfig(total_count, d); - SetToValue<<>>( - config.virtual_thread_count, 0, semantic_count_per_instance_device); - - total_count = kMaxNumSemantic; - config = GetGpuLaunchConfig(total_count, d); - SetToValue<<>>( - config.virtual_thread_count, 0, stuff_area_device); - - // Step 1: Collect semantic and instance mask stats. Done on GPU. - total_count = height * width; - config = GetGpuLaunchConfig(total_count, d); - CollectPixelStats<<>>( - config.virtual_thread_count, semantic_maps.data() + batch_offset, - instance_maps.data() + batch_offset, is_thing_per_semantic_id_device, - is_thing_per_pixel_device, semantic_count_per_instance_device, - stuff_area_device); - - // Step 2: Loop over instance, find major "thing" semantic label, and - // reorder instance IDs to share same ID with different thing class. - // This process now runs on CPU. 
- CreateLabelsPerInstance(d, semantic_count_per_instance_device, - labels_per_instance_device); - - // Step 3: Create panoptic prediction. - total_count = width * height; - config = GetGpuLaunchConfig(total_count, d); - MergePredictions<<>>( - config.virtual_thread_count, semantic_maps.data() + batch_offset, - instance_maps.data() + batch_offset, is_thing_per_pixel_device, - stuff_area_device, labels_per_instance_device, stuff_area_limit, - label_divisor, void_label, parsing_maps.data() + batch_offset); - } - - // Free all temp memories. - d.deallocate_temp(is_thing_per_semantic_id_device); - d.deallocate_temp(is_thing_per_pixel_device); - d.deallocate_temp(semantic_count_per_instance_device); - d.deallocate_temp(stuff_area_device); - d.deallocate_temp(labels_per_instance_device); -} - -} // namespace functor -} // namespace deeplab2 -} // namespace deeplab -} // namespace tensorflow_models - -#endif // GOOGLE_CUDA diff --git a/spaces/akshayvkt/talk-To-SteveJobs/README.md b/spaces/akshayvkt/talk-To-SteveJobs/README.md deleted file mode 100644 index c2b199921459c6ec600beb6d03567d9e59e15710..0000000000000000000000000000000000000000 --- a/spaces/akshayvkt/talk-To-SteveJobs/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Talk To SteveJobs -emoji: 🐨 -colorFrom: blue -colorTo: gray -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false -license: wtfpl ---- - -Check out configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/msgpack/__init__.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/msgpack/__init__.py deleted file mode 100644 index d6705e22b79dd53f3896bee29eb1e2f12bf106eb..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/msgpack/__init__.py +++ /dev/null @@ -1,54 +0,0 @@ -# coding: utf-8 -from ._version import version -from .exceptions import * -from .ext import ExtType, Timestamp - -import os -import sys - - -if os.environ.get("MSGPACK_PUREPYTHON") or sys.version_info[0] == 2: - from .fallback import Packer, unpackb, Unpacker -else: - try: - from ._cmsgpack import Packer, unpackb, Unpacker - except ImportError: - from .fallback import Packer, unpackb, Unpacker - - -def pack(o, stream, **kwargs): - """ - Pack object `o` and write it to `stream` - - See :class:`Packer` for options. - """ - packer = Packer(**kwargs) - stream.write(packer.pack(o)) - - -def packb(o, **kwargs): - """ - Pack object `o` and return packed bytes - - See :class:`Packer` for options. - """ - return Packer(**kwargs).pack(o) - - -def unpack(stream, **kwargs): - """ - Unpack an object from `stream`. - - Raises `ExtraData` when `stream` contains extra bytes. - See :class:`Unpacker` for options. - """ - data = stream.read() - return unpackb(data, **kwargs) - - -# alias for compatibility to simplejson/marshal/pickle. 
-load = unpack -loads = unpackb - -dump = pack -dumps = packb diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/sphinxext.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/sphinxext.py deleted file mode 100644 index 2412dee0ac339b503eebb1f2bb579b8571437777..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/sphinxext.py +++ /dev/null @@ -1,155 +0,0 @@ -""" - pygments.sphinxext - ~~~~~~~~~~~~~~~~~~ - - Sphinx extension to generate automatic documentation of lexers, - formatters and filters. - - :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -import sys - -from docutils import nodes -from docutils.statemachine import ViewList -from docutils.parsers.rst import Directive -from sphinx.util.nodes import nested_parse_with_titles - - -MODULEDOC = ''' -.. module:: %s - -%s -%s -''' - -LEXERDOC = ''' -.. class:: %s - - :Short names: %s - :Filenames: %s - :MIME types: %s - - %s - -''' - -FMTERDOC = ''' -.. class:: %s - - :Short names: %s - :Filenames: %s - - %s - -''' - -FILTERDOC = ''' -.. class:: %s - - :Name: %s - - %s - -''' - - -class PygmentsDoc(Directive): - """ - A directive to collect all lexers/formatters/filters and generate - autoclass directives for them. - """ - has_content = False - required_arguments = 1 - optional_arguments = 0 - final_argument_whitespace = False - option_spec = {} - - def run(self): - self.filenames = set() - if self.arguments[0] == 'lexers': - out = self.document_lexers() - elif self.arguments[0] == 'formatters': - out = self.document_formatters() - elif self.arguments[0] == 'filters': - out = self.document_filters() - else: - raise Exception('invalid argument for "pygmentsdoc" directive') - node = nodes.compound() - vl = ViewList(out.split('\n'), source='') - nested_parse_with_titles(self.state, vl, node) - for fn in self.filenames: - self.state.document.settings.record_dependencies.add(fn) - return node.children - - def document_lexers(self): - from pip._vendor.pygments.lexers._mapping import LEXERS - out = [] - modules = {} - moduledocstrings = {} - for classname, data in sorted(LEXERS.items(), key=lambda x: x[0]): - module = data[0] - mod = __import__(module, None, None, [classname]) - self.filenames.add(mod.__file__) - cls = getattr(mod, classname) - if not cls.__doc__: - print("Warning: %s does not have a docstring." 
% classname) - docstring = cls.__doc__ - if isinstance(docstring, bytes): - docstring = docstring.decode('utf8') - modules.setdefault(module, []).append(( - classname, - ', '.join(data[2]) or 'None', - ', '.join(data[3]).replace('*', '\\*').replace('_', '\\') or 'None', - ', '.join(data[4]) or 'None', - docstring)) - if module not in moduledocstrings: - moddoc = mod.__doc__ - if isinstance(moddoc, bytes): - moddoc = moddoc.decode('utf8') - moduledocstrings[module] = moddoc - - for module, lexers in sorted(modules.items(), key=lambda x: x[0]): - if moduledocstrings[module] is None: - raise Exception("Missing docstring for %s" % (module,)) - heading = moduledocstrings[module].splitlines()[4].strip().rstrip('.') - out.append(MODULEDOC % (module, heading, '-'*len(heading))) - for data in lexers: - out.append(LEXERDOC % data) - - return ''.join(out) - - def document_formatters(self): - from pip._vendor.pygments.formatters import FORMATTERS - - out = [] - for classname, data in sorted(FORMATTERS.items(), key=lambda x: x[0]): - module = data[0] - mod = __import__(module, None, None, [classname]) - self.filenames.add(mod.__file__) - cls = getattr(mod, classname) - docstring = cls.__doc__ - if isinstance(docstring, bytes): - docstring = docstring.decode('utf8') - heading = cls.__name__ - out.append(FMTERDOC % (heading, ', '.join(data[2]) or 'None', - ', '.join(data[3]).replace('*', '\\*') or 'None', - docstring)) - return ''.join(out) - - def document_filters(self): - from pip._vendor.pygments.filters import FILTERS - - out = [] - for name, cls in FILTERS.items(): - self.filenames.add(sys.modules[cls.__module__].__file__) - docstring = cls.__doc__ - if isinstance(docstring, bytes): - docstring = docstring.decode('utf8') - out.append(FILTERDOC % (cls.__name__, name, docstring)) - return ''.join(out) - - -def setup(app): - app.add_directive('pygmentsdoc', PygmentsDoc) diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/__init__.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/__init__.py deleted file mode 100644 index 50f3815761176cc8af5eccf964e2dccc78c3ad72..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/__init__.py +++ /dev/null @@ -1,172 +0,0 @@ -"""Rich text and beautiful formatting in the terminal.""" - -import os -from typing import Callable, IO, TYPE_CHECKING, Any, Optional - -from ._extension import load_ipython_extension - -__all__ = ["get_console", "reconfigure", "print", "inspect"] - -if TYPE_CHECKING: - from .console import Console - -# Global console used by alternative print -_console: Optional["Console"] = None - -_IMPORT_CWD = os.path.abspath(os.getcwd()) - - -def get_console() -> "Console": - """Get a global :class:`~rich.console.Console` instance. This function is used when Rich requires a Console, - and hasn't been explicitly given one. - - Returns: - Console: A console instance. - """ - global _console - if _console is None: - from .console import Console - - _console = Console() - - return _console - - -def reconfigure(*args: Any, **kwargs: Any) -> None: - """Reconfigures the global console by replacing it with another. - - Args: - console (Console): Replacement console instance. 
- """ - from pip._vendor.rich.console import Console - - new_console = Console(*args, **kwargs) - _console = get_console() - _console.__dict__ = new_console.__dict__ - - -def print( - *objects: Any, - sep: str = " ", - end: str = "\n", - file: Optional[IO[str]] = None, - flush: bool = False, -) -> None: - r"""Print object(s) supplied via positional arguments. - This function has an identical signature to the built-in print. - For more advanced features, see the :class:`~rich.console.Console` class. - - Args: - sep (str, optional): Separator between printed objects. Defaults to " ". - end (str, optional): Character to write at end of output. Defaults to "\\n". - file (IO[str], optional): File to write to, or None for stdout. Defaults to None. - flush (bool, optional): Has no effect as Rich always flushes output. Defaults to False. - - """ - from .console import Console - - write_console = get_console() if file is None else Console(file=file) - return write_console.print(*objects, sep=sep, end=end) - - -def print_json( - json: Optional[str] = None, - *, - data: Any = None, - indent: int = 2, - highlight: bool = True, - skip_keys: bool = False, - ensure_ascii: bool = True, - check_circular: bool = True, - allow_nan: bool = True, - default: Optional[Callable[[Any], Any]] = None, - sort_keys: bool = False, -) -> None: - """Pretty prints JSON. Output will be valid JSON. - - Args: - json (str): A string containing JSON. - data (Any): If json is not supplied, then encode this data. - indent (int, optional): Number of spaces to indent. Defaults to 2. - highlight (bool, optional): Enable highlighting of output: Defaults to True. - skip_keys (bool, optional): Skip keys not of a basic type. Defaults to False. - ensure_ascii (bool, optional): Escape all non-ascii characters. Defaults to False. - check_circular (bool, optional): Check for circular references. Defaults to True. - allow_nan (bool, optional): Allow NaN and Infinity values. Defaults to True. - default (Callable, optional): A callable that converts values that can not be encoded - in to something that can be JSON encoded. Defaults to None. - sort_keys (bool, optional): Sort dictionary keys. Defaults to False. - """ - - get_console().print_json( - json, - data=data, - indent=indent, - highlight=highlight, - skip_keys=skip_keys, - ensure_ascii=ensure_ascii, - check_circular=check_circular, - allow_nan=allow_nan, - default=default, - sort_keys=sort_keys, - ) - - -def inspect( - obj: Any, - *, - console: Optional["Console"] = None, - title: Optional[str] = None, - help: bool = False, - methods: bool = False, - docs: bool = True, - private: bool = False, - dunder: bool = False, - sort: bool = True, - all: bool = False, - value: bool = True, -) -> None: - """Inspect any Python object. - - * inspect() to see summarized info. - * inspect(, methods=True) to see methods. - * inspect(, help=True) to see full (non-abbreviated) help. - * inspect(, private=True) to see private attributes (single underscore). - * inspect(, dunder=True) to see attributes beginning with double underscore. - * inspect(, all=True) to see all attributes. - - Args: - obj (Any): An object to inspect. - title (str, optional): Title to display over inspect result, or None use type. Defaults to None. - help (bool, optional): Show full help text rather than just first paragraph. Defaults to False. - methods (bool, optional): Enable inspection of callables. Defaults to False. - docs (bool, optional): Also render doc strings. Defaults to True. 
- private (bool, optional): Show private attributes (beginning with underscore). Defaults to False. - dunder (bool, optional): Show attributes starting with double underscore. Defaults to False. - sort (bool, optional): Sort attributes alphabetically. Defaults to True. - all (bool, optional): Show all attributes. Defaults to False. - value (bool, optional): Pretty print value. Defaults to True. - """ - _console = console or get_console() - from pip._vendor.rich._inspect import Inspect - - # Special case for inspect(inspect) - is_inspect = obj is inspect - - _inspect = Inspect( - obj, - title=title, - help=is_inspect or help, - methods=is_inspect or methods, - docs=is_inspect or docs, - private=private, - dunder=dunder, - sort=sort, - all=all, - value=value, - ) - _console.print(_inspect) - - -if __name__ == "__main__": # pragma: no cover - print("Hello, **World**") diff --git a/spaces/ali-ghamdan/deoldify/fastai/callbacks/misc.py b/spaces/ali-ghamdan/deoldify/fastai/callbacks/misc.py deleted file mode 100644 index 3e1b63423e9bcc2d16dc2322a863a5afee80e481..0000000000000000000000000000000000000000 --- a/spaces/ali-ghamdan/deoldify/fastai/callbacks/misc.py +++ /dev/null @@ -1,12 +0,0 @@ -" Miscellaneous callbacks " - -from fastai.callback import Callback - -class StopAfterNBatches(Callback): - "Stop training after n batches of the first epoch." - def __init__(self, n_batches:int=2): - self.stop,self.n_batches = False,n_batches-1 # iteration starts from 0 - - def on_batch_end(self, iteration, **kwargs): - if iteration == self.n_batches: - return {'stop_epoch': True, 'stop_training': True, 'skip_validate': True} diff --git a/spaces/allknowingroger/Image-Models-Test160/README.md b/spaces/allknowingroger/Image-Models-Test160/README.md deleted file mode 100644 index a3a43bf672ca727d8113068aed4ea790c9de9309..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test160/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: More Image Models -emoji: 😻 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -duplicated_from: allknowingroger/Image-Models-Test142 ---- - - \ No newline at end of file diff --git a/spaces/alphunt/diffdock-alphunt-demo/esm/esm/inverse_folding/gvp_utils.py b/spaces/alphunt/diffdock-alphunt-demo/esm/esm/inverse_folding/gvp_utils.py deleted file mode 100644 index 1bd4a1ae378afb59d9d035307bd8adcccca19f2a..0000000000000000000000000000000000000000 --- a/spaces/alphunt/diffdock-alphunt-demo/esm/esm/inverse_folding/gvp_utils.py +++ /dev/null @@ -1,68 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - - -def flatten_graph(node_embeddings, edge_embeddings, edge_index): - """ - Flattens the graph into a batch size one (with disconnected subgraphs for - each example) to be compatible with pytorch-geometric package. 
- Args: - node_embeddings: node embeddings in tuple form (scalar, vector) - - scalar: shape batch size x nodes x node_embed_dim - - vector: shape batch size x nodes x node_embed_dim x 3 - edge_embeddings: edge embeddings of in tuple form (scalar, vector) - - scalar: shape batch size x edges x edge_embed_dim - - vector: shape batch size x edges x edge_embed_dim x 3 - edge_index: shape batch_size x 2 (source node and target node) x edges - Returns: - node_embeddings: node embeddings in tuple form (scalar, vector) - - scalar: shape batch total_nodes x node_embed_dim - - vector: shape batch total_nodes x node_embed_dim x 3 - edge_embeddings: edge embeddings of in tuple form (scalar, vector) - - scalar: shape batch total_edges x edge_embed_dim - - vector: shape batch total_edges x edge_embed_dim x 3 - edge_index: shape 2 x total_edges - """ - x_s, x_v = node_embeddings - e_s, e_v = edge_embeddings - batch_size, N = x_s.shape[0], x_s.shape[1] - node_embeddings = (torch.flatten(x_s, 0, 1), torch.flatten(x_v, 0, 1)) - edge_embeddings = (torch.flatten(e_s, 0, 1), torch.flatten(e_v, 0, 1)) - - edge_mask = torch.any(edge_index != -1, dim=1) - # Re-number the nodes by adding batch_idx * N to each batch - edge_index = edge_index + (torch.arange(batch_size, device=edge_index.device) * - N).unsqueeze(-1).unsqueeze(-1) - edge_index = edge_index.permute(1, 0, 2).flatten(1, 2) - edge_mask = edge_mask.flatten() - edge_index = edge_index[:, edge_mask] - edge_embeddings = ( - edge_embeddings[0][edge_mask, :], - edge_embeddings[1][edge_mask, :] - ) - return node_embeddings, edge_embeddings, edge_index - - -def unflatten_graph(node_embeddings, batch_size): - """ - Unflattens node embeddings. - Args: - node_embeddings: node embeddings in tuple form (scalar, vector) - - scalar: shape batch total_nodes x node_embed_dim - - vector: shape batch total_nodes x node_embed_dim x 3 - batch_size: int - Returns: - node_embeddings: node embeddings in tuple form (scalar, vector) - - scalar: shape batch size x nodes x node_embed_dim - - vector: shape batch size x nodes x node_embed_dim x 3 - """ - x_s, x_v = node_embeddings - x_s = x_s.reshape(batch_size, -1, x_s.shape[1]) - x_v = x_v.reshape(batch_size, -1, x_v.shape[1], x_v.shape[2]) - return (x_s, x_v) - - diff --git a/spaces/amanatid/ArxivGPT_Streamlit/base.py b/spaces/amanatid/ArxivGPT_Streamlit/base.py deleted file mode 100644 index dbeffae47cd7fe87417d6a3128fa8266631bb2a9..0000000000000000000000000000000000000000 --- a/spaces/amanatid/ArxivGPT_Streamlit/base.py +++ /dev/null @@ -1,386 +0,0 @@ -"""Read Arxiv Papers.""" -import hashlib -import logging -import os -from typing import List, Optional, Tuple - -from llama_index import download_loader -from llama_index.readers.base import BaseReader -from llama_index.readers.schema.base import Document -from fpdf import FPDF -################# -from llama_index import SimpleDirectoryReader -################# - -class ArxivReader_mod(BaseReader): - """Arxiv Reader. - - Gets a search query, return a list of Documents of the top corresponding scientific papers on Arxiv. 
- """ - - def __init__( - self, - ): - """Initialize with parameters.""" - super().__init__() - - def _hacky_hash(self, some_string): - _hash = hashlib.md5(some_string.encode("utf-8")).hexdigest() - return _hash - - def load_data( - self, - search_query: str, - papers_dir: Optional[str] = ".papers", - max_results: Optional[int] = 50, - search_criterion: Optional[int] = 0, - ) -> List[Document]: - """Search for a topic on Arxiv, download the PDFs of the top results locally, then read them. - - Args: - search_query (str): A topic to search for (e.g. "Artificial Intelligence"). - papers_dir (Optional[str]): Locally directory to store the papers - max_results (Optional[int]): Maximum number of papers to fetch. - - Returns: - List[Document]: A list of Document objects. - """ - # find papers - import arxiv - if search_criterion == 0: - sort_criterion = arxiv.SortCriterion.Relevance - - if search_criterion == 1: - sort_criterion = arxiv.SortCriterion.LastUpdatedDate - - if search_criterion == 2: - sort_criterion = arxiv.SortCriterion.SubmittedDate - - arxiv_search = arxiv.Search( - query=search_query, - id_list=[], - max_results=max_results, - sort_by=sort_criterion, - ) - - search_results = list(arxiv_search.results()) - logging.debug(f"> Successfully fetched {len(search_results)} papers") - # Delete downloaded papers - try: - for f in os.listdir(papers_dir): - os.remove(os.path.join(papers_dir, f)) - logging.debug(f"> Deleted file: {f}") - os.rmdir(papers_dir) - logging.debug(f"> Deleted directory: {papers_dir}") - except OSError: - print("Unable to delete files or directory") - - #create directory - if not os.path.exists(papers_dir): - os.makedirs(papers_dir) - - paper_lookup = {} - for paper in search_results: - # Hash filename to avoid bad charaters in file path - filename = f"{self._hacky_hash(paper.title)}.pdf" - #filename = f"{paper.title}.pdf" - paper_lookup[os.path.join(papers_dir, filename)] = { - "Title of this paper": paper.title, - "Authors": (", ").join([a.name for a in paper.authors]), - "Date published": paper.published.strftime("%m/%d/%Y"), - "URL": paper.entry_id, - # "summary": paper.summary - } - paper.download_pdf(dirpath=papers_dir, filename=filename) - logging.debug(f"> Downloading {filename}...") - - def get_paper_metadata(filename): - return paper_lookup[filename] - - ######## SimpleDirectoryReader = download_loader("SimpleDirectoryReader") - arxiv_documents = SimpleDirectoryReader(papers_dir, file_metadata=get_paper_metadata).load_data() - ######################################################################### - # Include extra documents containing the abstracts - abstract_documents = [] - for paper in search_results: - d = f"The following is a summary of the paper: {paper.title}\n\nSummary: {paper.summary}" - abstract_documents.append(Document(text=d)) - - - return arxiv_documents + abstract_documents - - def load_papers_and_abstracts( - self, - search_query: str, - papers_dir: Optional[str] = ".papers", - max_results: Optional[int] = 10, - ) -> Tuple[List[Document], List[Document]]: - """Search for a topic on Arxiv, download the PDFs of the top results locally, then read them. - - Args: - search_query (str): A topic to search for (e.g. "Artificial Intelligence"). - papers_dir (Optional[str]): Locally directory to store the papers - max_results (Optional[int]): Maximum number of papers to fetch. 
- - Returns: - List[Document]: A list of Document objects representing the papers themselves - List[Document]: A list of Document objects representing abstracts only - """ - import arxiv - - arxiv_search = arxiv.Search( - query=search_query, - id_list=[], - max_results=max_results, - sort_by=arxiv.SortCriterion.Relevance, - ) - search_results = list(arxiv_search.results()) - logging.debug(f"> Successfully fetched {len(search_results)} papers") - - if not os.path.exists(papers_dir): - os.makedirs(papers_dir) - - paper_lookup = {} - for paper in search_results: - # Hash filename to avoid bad charaters in file path - filename = f"{self._hacky_hash(paper.title)}.pdf" - paper_lookup[os.path.join(papers_dir, filename)] = { - "Title of this paper": paper.title, - "Authors": (", ").join([a.name for a in paper.authors]), - "Date published": paper.published.strftime("%m/%d/%Y"), - "URL": paper.entry_id, - # "summary": paper.summary - } - paper.download_pdf(dirpath=papers_dir, filename=filename) - logging.debug(f"> Downloading {filename}...") - - def get_paper_metadata(filename): - return paper_lookup[filename] - - SimpleDirectoryReader = download_loader("SimpleDirectoryReader") - arxiv_documents = SimpleDirectoryReader( - papers_dir, file_metadata=get_paper_metadata - ).load_data() - # Include extra documents containing the abstracts - abstract_documents = [] - for paper in search_results: - d = f"The following is a summary of the paper: {paper.title}\n\nSummary: {paper.summary}" - abstract_documents.append(Document(d)) - - # Delete downloaded papers - try: - for f in os.listdir(papers_dir): - os.remove(os.path.join(papers_dir, f)) - logging.debug(f"> Deleted file: {f}") - os.rmdir(papers_dir) - logging.debug(f"> Deleted directory: {papers_dir}") - except OSError: - print("Unable to delete files or directory") - - return arxiv_documents, abstract_documents - -class ArxivReader_mod_search(BaseReader): - """Arxiv Reader. - - Gets a search query, return a list of Documents of the top corresponding scientific papers on Arxiv. - """ - - def __init__( - self, - ): - """Initialize with parameters.""" - super().__init__() - - def _hacky_hash(self, some_string): - _hash = hashlib.md5(some_string.encode("utf-8")).hexdigest() - return _hash - - def load_data( - self, - search_query: str, - papers_dir: Optional[str] = ".papers", - max_results: Optional[int] = 50, - search_criterion: Optional[int] = 0, - ) -> List[Document]: - """Search for a topic on Arxiv, download the PDFs of the top results locally, then read them. - - Args: - search_query (str): A topic to search for (e.g. "Artificial Intelligence"). - papers_dir (Optional[str]): Locally directory to store the papers - max_results (Optional[int]): Maximum number of papers to fetch. - - Returns: - List[Document]: A list of Document objects. 
- """ - #find papers - import arxiv - if search_criterion == 0: - sort_criterion = arxiv.SortCriterion.Relevance - - if search_criterion == 1: - sort_criterion = arxiv.SortCriterion.LastUpdatedDate - - if search_criterion == 2: - sort_criterion = arxiv.SortCriterion.SubmittedDate - - arxiv_search = arxiv.Search( - query=search_query, - id_list=[], - max_results=max_results, - sort_by= sort_criterion, - ) - search_results = list(arxiv_search.results()) - logging.debug(f"> Successfully fetched {len(search_results)} papers") - - #create directory - if not os.path.exists(papers_dir): - os.makedirs(papers_dir) - else: - # Delete downloaded papers - try: - for f in os.listdir(papers_dir): - os.remove(os.path.join(papers_dir, f)) - logging.debug(f"> Deleted file: {f}") - os.rmdir(papers_dir) - logging.debug(f"> Deleted directory: {papers_dir}") - os.makedirs(papers_dir) - except OSError: - print("Unable to delete files or directory") - - paper_lookup = {} - for paper in search_results: - # Hash filename to avoid bad charaters in file path - filename = f"{self._hacky_hash(paper.title)}.pdf" - #filename = f"{paper.title}.pdf" - paper_lookup[os.path.join(papers_dir, filename)] = { - "Title of this paper": paper.title, - "Authors": (", ").join([a.name for a in paper.authors]), - "Date published": paper.published.strftime("%m/%d/%Y"), - "URL": paper.entry_id, - "summary": paper.summary, - } - - paper.download_pdf(dirpath=papers_dir, filename=filename) - logging.debug(f"> Downloading {filename}...") - - def get_paper_metadata(filename): - return paper_lookup[filename] - - SimpleDirectoryReader = download_loader("SimpleDirectoryReader") - arxiv_documents = SimpleDirectoryReader( - papers_dir, file_metadata=get_paper_metadata - ).load_data() - # Include extra documents containing the abstracts - - # save FPDF() class into - # a variable pdf - pdf = FPDF() - - # Add a page - pdf.add_page() - - # set style and size of font - # that you want in the pdf - pdf.set_font("Arial", size=15) - - # insert the texts in pdf - for paper in search_results: - authors = (", ").join([a.name for a in paper.authors]) - pub_paper = paper.published.strftime("%m/%d/%Y") - d = f"Title: {paper.title}\n\nAuthors:{authors}\n\nDate:{pub_paper}\n\nAbstract: {paper.summary}\n" - pdf.multi_cell(0, 10, txt= d, border = 0) - pdf.add_page() - - - # save the pdf with name .pdf - pdf.output(papers_dir+"/abstracts.pdf") - - - - - - abstract_documents = [] - for paper in search_results: - authors =(", ").join([a.name for a in paper.authors]) - pub_paper =paper.published.strftime("%m/%d/%Y") - d = f"The following is a summary of the paper: {paper.title}\n\nAuthors:{authors}\n\nDate:{pub_paper}\n\nSummary: {paper.summary}" -# print(d) - abstract_documents.append(Document(d)) - - - return arxiv_documents + abstract_documents - - def load_papers_and_abstracts( - self, - search_query: str, - papers_dir: Optional[str] = ".papers", - max_results: Optional[int] = 10, - ) -> Tuple[List[Document], List[Document]]: - """Search for a topic on Arxiv, download the PDFs of the top results locally, then read them. - - Args: - search_query (str): A topic to search for (e.g. "Artificial Intelligence"). - papers_dir (Optional[str]): Locally directory to store the papers - max_results (Optional[int]): Maximum number of papers to fetch. 
- - Returns: - List[Document]: A list of Document objects representing the papers themselves - List[Document]: A list of Document objects representing abstracts only - """ - import arxiv - - arxiv_search = arxiv.Search( - query=search_query, - id_list=[], - max_results=max_results, - sort_by=arxiv.SortCriterion.Relevance, - ) - search_results = list(arxiv_search.results()) - logging.debug(f"> Successfully fetched {len(search_results)} papers") - - if not os.path.exists(papers_dir): - os.makedirs(papers_dir) - - paper_lookup = {} - for paper in search_results: - # Hash filename to avoid bad charaters in file path - filename = f"{self._hacky_hash(paper.title)}.pdf" - paper_lookup[os.path.join(papers_dir, filename)] = { - "Title of this paper": paper.title, - "Authors": (", ").join([a.name for a in paper.authors]), - "Date published": paper.published.strftime("%m/%d/%Y"), - "URL": paper.entry_id, - # "summary": paper.summary - } - paper.download_pdf(dirpath=papers_dir, filename=filename) - logging.debug(f"> Downloading {filename}...") - - def get_paper_metadata(filename): - return paper_lookup[filename] - - SimpleDirectoryReader = download_loader("SimpleDirectoryReader") - arxiv_documents = SimpleDirectoryReader( - papers_dir, file_metadata=get_paper_metadata - ).load_data() - # Include extra documents containing the abstracts - abstract_documents = [] - for paper in search_results: - d = f"The following is a summary of the paper: {paper.title}\n\nSummary: {paper.summary}" - abstract_documents.append(Document(d)) - - # Delete downloaded papers - try: - for f in os.listdir(papers_dir): - os.remove(os.path.join(papers_dir, f)) - logging.debug(f"> Deleted file: {f}") - os.rmdir(papers_dir) - logging.debug(f"> Deleted directory: {papers_dir}") - except OSError: - print("Unable to delete files or directory") - - return arxiv_documents, abstract_documents - - -#test = ArxivReader_mod_search() -#test.load_data(search_query='quantum gravity', -# max_results=3, search_criterion =1) \ No newline at end of file diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/pablio/test_rw_echo.c b/spaces/amarchheda/ChordDuplicate/portaudio/pablio/test_rw_echo.c deleted file mode 100644 index 431587c4bbf282378d61c0cece93ae5cb6f60aa9..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/pablio/test_rw_echo.c +++ /dev/null @@ -1,129 +0,0 @@ -/* - * $Id$ - * test_rw_echo.c - * Echo delayed input to output. - * - * Author: Phil Burk, http://www.softsynth.com/portaudio/ - * - * This program uses PABLIO, the Portable Audio Blocking I/O Library. - * PABLIO is built on top of PortAudio, the Portable Audio Library. - * - * Note that if you need low latency, you should not use PABLIO. - * Use the PA_OpenStream callback technique which is lower level - * than PABLIO. - * - * For more information see: http://www.portaudio.com - * Copyright (c) 1999-2000 Ross Bencina and Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. 
- * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -#include -#include -#include -#include "pablio.h" -#include - -/* -** Note that many of the older ISA sound cards on PCs do NOT support -** full duplex audio (simultaneous record and playback). -** And some only support full duplex at lower sample rates. -*/ -#define SAMPLE_RATE (22050) -#define NUM_SECONDS (20) -#define SAMPLES_PER_FRAME (2) - -/* Select whether we will use floats or shorts. */ -#if 1 -#define SAMPLE_TYPE paFloat32 -typedef float SAMPLE; -#else -#define SAMPLE_TYPE paInt16 -typedef short SAMPLE; -#endif - -#define NUM_ECHO_FRAMES (2*SAMPLE_RATE) -SAMPLE samples[NUM_ECHO_FRAMES][SAMPLES_PER_FRAME] = {0.0}; - -/*******************************************************************/ -int main(void); -int main(void) -{ - int i; - PaError err; - PABLIO_Stream *aInStream; - PABLIO_Stream *aOutStream; - int index; - - printf("Full duplex sound test using PABLIO\n"); - fflush(stdout); - - /* Open simplified blocking I/O layer on top of PortAudio. */ - /* Open input first so it can start to fill buffers. */ - err = OpenAudioStream( &aInStream, SAMPLE_RATE, SAMPLE_TYPE, - (PABLIO_READ | PABLIO_STEREO) ); - if( err != paNoError ) goto error; - /* printf("opened input\n"); fflush(stdout); /**/ - - err = OpenAudioStream( &aOutStream, SAMPLE_RATE, SAMPLE_TYPE, - (PABLIO_WRITE | PABLIO_STEREO) ); - if( err != paNoError ) goto error; - /* printf("opened output\n"); fflush(stdout); /**/ - - /* Process samples in the foreground. */ - index = 0; - for( i=0; i<(NUM_SECONDS * SAMPLE_RATE); i++ ) - { - /* Write old frame of data to output. */ - /* samples[index][1] = (i&256) * (1.0f/256.0f); /* sawtooth */ - WriteAudioStream( aOutStream, &samples[index][0], 1 ); - - /* Read one frame of data into sample array for later output. 
*/ - ReadAudioStream( aInStream, &samples[index][0], 1 ); - index += 1; - if( index >= NUM_ECHO_FRAMES ) index = 0; - - if( (i & 0xFFFF) == 0 ) printf("i = %d\n", i ); fflush(stdout); /**/ - } - - CloseAudioStream( aOutStream ); - CloseAudioStream( aInStream ); - - printf("R/W echo sound test complete.\n" ); - fflush(stdout); - return 0; - -error: - fprintf( stderr, "An error occurred while using PortAudio\n" ); - fprintf( stderr, "Error number: %d\n", err ); - fprintf( stderr, "Error message: %s\n", Pa_GetErrorText( err ) ); - return -1; -} diff --git a/spaces/artificialguybr/video-dubbing/TTS/docs/source/what_makes_a_good_dataset.md b/spaces/artificialguybr/video-dubbing/TTS/docs/source/what_makes_a_good_dataset.md deleted file mode 100644 index 18c87453f7b7704315222612f23977662451a287..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/docs/source/what_makes_a_good_dataset.md +++ /dev/null @@ -1,20 +0,0 @@ -(what_makes_a_good_dataset)= -# What makes a good TTS dataset - -## What Makes a Good Dataset -* **Gaussian like distribution on clip and text lengths**. So plot the distribution of clip lengths and check if it covers enough short and long voice clips. -* **Mistake free**. Remove any wrong or broken files. Check annotations, compare transcript and audio length. -* **Noise free**. Background noise might lead your model to struggle, especially for a good alignment. Even if it learns the alignment, the final result is likely to be suboptimial. -* **Compatible tone and pitch among voice clips**. For instance, if you are using audiobook recordings for your project, it might have impersonations for different characters in the book. These differences between samples downgrade the model performance. -* **Good phoneme coverage**. Make sure that your dataset covers a good portion of the phonemes, di-phonemes, and in some languages tri-phonemes. -* **Naturalness of recordings**. For your model WISIAIL (What it sees is all it learns). Therefore, your dataset should accommodate all the attributes you want to hear from your model. - -## Preprocessing Dataset -If you like to use a bespoken dataset, you might like to perform a couple of quality checks before training. 🐸TTS provides a couple of notebooks (CheckSpectrograms, AnalyzeDataset) to expedite this part for you. - -* **AnalyzeDataset** is for checking dataset distribution in terms of the clip and transcript lengths. It is good to find outlier instances (too long, short text but long voice clip, etc.)and remove them before training. Keep in mind that we like to have a good balance between long and short clips to prevent any bias in training. If you have only short clips (1-3 secs), then your model might suffer for long sentences and if your instances are long, then it might not learn the alignment or might take too long to train the model. - -* **CheckSpectrograms** is to measure the noise level of the clips and find good audio processing parameters. The noise level might be observed by checking spectrograms. If spectrograms look cluttered, especially in silent parts, this dataset might not be a good candidate for a TTS project. If your voice clips are too noisy in the background, it makes things harder for your model to learn the alignment, and the final result might be different than the voice you are given. -If the spectrograms look good, then the next step is to find a good set of audio processing parameters, defined in ```config.json```. 
In the notebook, you can compare different sets of parameters and see the resynthesis results in relation to the given ground-truth. Find the best parameters that give the best possible synthesis performance. - -Another practical detail is the quantization level of the clips. If your dataset has a very high bit-rate, that might cause slow data-load time and consequently slow training. It is better to reduce the sample-rate of your dataset to around 16000-22050. \ No newline at end of file diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Math/_IntegerNative.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Math/_IntegerNative.py deleted file mode 100644 index a8bcb3db915da538108429c718eb505cad7143b7..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Math/_IntegerNative.py +++ /dev/null @@ -1,395 +0,0 @@ -# =================================================================== -# -# Copyright (c) 2014, Legrandin -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# 1. Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# 2. Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE -# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, -# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, -# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN -# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. 
-# =================================================================== - -from ._IntegerBase import IntegerBase - -from Crypto.Util.number import long_to_bytes, bytes_to_long - - -class IntegerNative(IntegerBase): - """A class to model a natural integer (including zero)""" - - def __init__(self, value): - if isinstance(value, float): - raise ValueError("A floating point type is not a natural number") - try: - self._value = value._value - except AttributeError: - self._value = value - - # Conversions - def __int__(self): - return self._value - - def __str__(self): - return str(int(self)) - - def __repr__(self): - return "Integer(%s)" % str(self) - - # Only Python 2.x - def __hex__(self): - return hex(self._value) - - # Only Python 3.x - def __index__(self): - return int(self._value) - - def to_bytes(self, block_size=0, byteorder='big'): - if self._value < 0: - raise ValueError("Conversion only valid for non-negative numbers") - result = long_to_bytes(self._value, block_size) - if len(result) > block_size > 0: - raise ValueError("Value too large to encode") - if byteorder == 'big': - pass - elif byteorder == 'little': - result = bytearray(result) - result.reverse() - result = bytes(result) - else: - raise ValueError("Incorrect byteorder") - return result - - @classmethod - def from_bytes(cls, byte_string, byteorder='big'): - if byteorder == 'big': - pass - elif byteorder == 'little': - byte_string = bytearray(byte_string) - byte_string.reverse() - else: - raise ValueError("Incorrect byteorder") - return cls(bytes_to_long(byte_string)) - - # Relations - def __eq__(self, term): - if term is None: - return False - return self._value == int(term) - - def __ne__(self, term): - return not self.__eq__(term) - - def __lt__(self, term): - return self._value < int(term) - - def __le__(self, term): - return self.__lt__(term) or self.__eq__(term) - - def __gt__(self, term): - return not self.__le__(term) - - def __ge__(self, term): - return not self.__lt__(term) - - def __nonzero__(self): - return self._value != 0 - __bool__ = __nonzero__ - - def is_negative(self): - return self._value < 0 - - # Arithmetic operations - def __add__(self, term): - try: - return self.__class__(self._value + int(term)) - except (ValueError, AttributeError, TypeError): - return NotImplemented - - def __sub__(self, term): - try: - return self.__class__(self._value - int(term)) - except (ValueError, AttributeError, TypeError): - return NotImplemented - - def __mul__(self, factor): - try: - return self.__class__(self._value * int(factor)) - except (ValueError, AttributeError, TypeError): - return NotImplemented - - def __floordiv__(self, divisor): - return self.__class__(self._value // int(divisor)) - - def __mod__(self, divisor): - divisor_value = int(divisor) - if divisor_value < 0: - raise ValueError("Modulus must be positive") - return self.__class__(self._value % divisor_value) - - def inplace_pow(self, exponent, modulus=None): - exp_value = int(exponent) - if exp_value < 0: - raise ValueError("Exponent must not be negative") - - if modulus is not None: - mod_value = int(modulus) - if mod_value < 0: - raise ValueError("Modulus must be positive") - if mod_value == 0: - raise ZeroDivisionError("Modulus cannot be zero") - else: - mod_value = None - self._value = pow(self._value, exp_value, mod_value) - return self - - def __pow__(self, exponent, modulus=None): - result = self.__class__(self) - return result.inplace_pow(exponent, modulus) - - def __abs__(self): - return abs(self._value) - - def sqrt(self, modulus=None): - - 
value = self._value - if modulus is None: - if value < 0: - raise ValueError("Square root of negative value") - # http://stackoverflow.com/questions/15390807/integer-square-root-in-python - - x = value - y = (x + 1) // 2 - while y < x: - x = y - y = (x + value // x) // 2 - result = x - else: - if modulus <= 0: - raise ValueError("Modulus must be positive") - result = self._tonelli_shanks(self % modulus, modulus) - - return self.__class__(result) - - def __iadd__(self, term): - self._value += int(term) - return self - - def __isub__(self, term): - self._value -= int(term) - return self - - def __imul__(self, term): - self._value *= int(term) - return self - - def __imod__(self, term): - modulus = int(term) - if modulus == 0: - raise ZeroDivisionError("Division by zero") - if modulus < 0: - raise ValueError("Modulus must be positive") - self._value %= modulus - return self - - # Boolean/bit operations - def __and__(self, term): - return self.__class__(self._value & int(term)) - - def __or__(self, term): - return self.__class__(self._value | int(term)) - - def __rshift__(self, pos): - try: - return self.__class__(self._value >> int(pos)) - except OverflowError: - if self._value >= 0: - return 0 - else: - return -1 - - def __irshift__(self, pos): - try: - self._value >>= int(pos) - except OverflowError: - if self._value >= 0: - return 0 - else: - return -1 - return self - - def __lshift__(self, pos): - try: - return self.__class__(self._value << int(pos)) - except OverflowError: - raise ValueError("Incorrect shift count") - - def __ilshift__(self, pos): - try: - self._value <<= int(pos) - except OverflowError: - raise ValueError("Incorrect shift count") - return self - - def get_bit(self, n): - if self._value < 0: - raise ValueError("no bit representation for negative values") - try: - try: - result = (self._value >> n._value) & 1 - if n._value < 0: - raise ValueError("negative bit count") - except AttributeError: - result = (self._value >> n) & 1 - if n < 0: - raise ValueError("negative bit count") - except OverflowError: - result = 0 - return result - - # Extra - def is_odd(self): - return (self._value & 1) == 1 - - def is_even(self): - return (self._value & 1) == 0 - - def size_in_bits(self): - - if self._value < 0: - raise ValueError("Conversion only valid for non-negative numbers") - - if self._value == 0: - return 1 - - bit_size = 0 - tmp = self._value - while tmp: - tmp >>= 1 - bit_size += 1 - - return bit_size - - def size_in_bytes(self): - return (self.size_in_bits() - 1) // 8 + 1 - - def is_perfect_square(self): - if self._value < 0: - return False - if self._value in (0, 1): - return True - - x = self._value // 2 - square_x = x ** 2 - - while square_x > self._value: - x = (square_x + self._value) // (2 * x) - square_x = x ** 2 - - return self._value == x ** 2 - - def fail_if_divisible_by(self, small_prime): - if (self._value % int(small_prime)) == 0: - raise ValueError("Value is composite") - - def multiply_accumulate(self, a, b): - self._value += int(a) * int(b) - return self - - def set(self, source): - self._value = int(source) - - def inplace_inverse(self, modulus): - modulus = int(modulus) - if modulus == 0: - raise ZeroDivisionError("Modulus cannot be zero") - if modulus < 0: - raise ValueError("Modulus cannot be negative") - r_p, r_n = self._value, modulus - s_p, s_n = 1, 0 - while r_n > 0: - q = r_p // r_n - r_p, r_n = r_n, r_p - q * r_n - s_p, s_n = s_n, s_p - q * s_n - if r_p != 1: - raise ValueError("No inverse value can be computed" + str(r_p)) - while s_p < 0: - s_p += 
modulus - self._value = s_p - return self - - def inverse(self, modulus): - result = self.__class__(self) - result.inplace_inverse(modulus) - return result - - def gcd(self, term): - r_p, r_n = abs(self._value), abs(int(term)) - while r_n > 0: - q = r_p // r_n - r_p, r_n = r_n, r_p - q * r_n - return self.__class__(r_p) - - def lcm(self, term): - term = int(term) - if self._value == 0 or term == 0: - return self.__class__(0) - return self.__class__(abs((self._value * term) // self.gcd(term)._value)) - - @staticmethod - def jacobi_symbol(a, n): - a = int(a) - n = int(n) - - if n <= 0: - raise ValueError("n must be a positive integer") - - if (n & 1) == 0: - raise ValueError("n must be odd for the Jacobi symbol") - - # Step 1 - a = a % n - # Step 2 - if a == 1 or n == 1: - return 1 - # Step 3 - if a == 0: - return 0 - # Step 4 - e = 0 - a1 = a - while (a1 & 1) == 0: - a1 >>= 1 - e += 1 - # Step 5 - if (e & 1) == 0: - s = 1 - elif n % 8 in (1, 7): - s = 1 - else: - s = -1 - # Step 6 - if n % 4 == 3 and a1 % 4 == 3: - s = -s - # Step 7 - n1 = n % a1 - # Step 8 - return s * IntegerNative.jacobi_symbol(n1, a1) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/aiohttp/locks.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/aiohttp/locks.py deleted file mode 100644 index de2dc83d09dd950fc1ed8d7edaeb20e7697c94ba..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/aiohttp/locks.py +++ /dev/null @@ -1,41 +0,0 @@ -import asyncio -import collections -from typing import Any, Deque, Optional - - -class EventResultOrError: - """Event asyncio lock helper class. - - Wraps the Event asyncio lock allowing either to awake the - locked Tasks without any error or raising an exception. - - thanks to @vorpalsmith for the simple design. - """ - - def __init__(self, loop: asyncio.AbstractEventLoop) -> None: - self._loop = loop - self._exc: Optional[BaseException] = None - self._event = asyncio.Event() - self._waiters: Deque[asyncio.Future[Any]] = collections.deque() - - def set(self, exc: Optional[BaseException] = None) -> None: - self._exc = exc - self._event.set() - - async def wait(self) -> Any: - waiter = self._loop.create_task(self._event.wait()) - self._waiters.append(waiter) - try: - val = await waiter - finally: - self._waiters.remove(waiter) - - if self._exc is not None: - raise self._exc - - return val - - def cancel(self) -> None: - """Cancel all waiters""" - for waiter in self._waiters: - waiter.cancel() diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/anyio/abc/_testing.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/anyio/abc/_testing.py deleted file mode 100644 index 4e3621df7a99434caca4d5f0b2a1f0dbe4d02398..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/anyio/abc/_testing.py +++ /dev/null @@ -1,68 +0,0 @@ -import types -from abc import ABCMeta, abstractmethod -from collections.abc import AsyncGenerator, Iterable -from typing import Any, Callable, Coroutine, Dict, Optional, Type, TypeVar - -_T = TypeVar("_T") - - -class TestRunner(metaclass=ABCMeta): - """ - Encapsulates a running event loop. Every call made through this object will use the same event - loop. 
- """ - - def __enter__(self) -> "TestRunner": - return self - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: Optional[types.TracebackType], - ) -> Optional[bool]: - self.close() - return None - - @abstractmethod - def close(self) -> None: - """Close the event loop.""" - - @abstractmethod - def run_asyncgen_fixture( - self, - fixture_func: Callable[..., "AsyncGenerator[_T, Any]"], - kwargs: Dict[str, Any], - ) -> "Iterable[_T]": - """ - Run an async generator fixture. - - :param fixture_func: the fixture function - :param kwargs: keyword arguments to call the fixture function with - :return: an iterator yielding the value yielded from the async generator - """ - - @abstractmethod - def run_fixture( - self, - fixture_func: Callable[..., Coroutine[Any, Any, _T]], - kwargs: Dict[str, Any], - ) -> _T: - """ - Run an async fixture. - - :param fixture_func: the fixture function - :param kwargs: keyword arguments to call the fixture function with - :return: the return value of the fixture function - """ - - @abstractmethod - def run_test( - self, test_func: Callable[..., Coroutine[Any, Any, Any]], kwargs: Dict[str, Any] - ) -> None: - """ - Run an async test function. - - :param test_func: the test function - :param kwargs: keyword arguments to call the test function with - """ diff --git a/spaces/aryadytm/remove-photo-background/src/__init__.py b/spaces/aryadytm/remove-photo-background/src/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Daniel Wood.html b/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Daniel Wood.html deleted file mode 100644 index 3de28025b451891b357c380b5f716c9214228452..0000000000000000000000000000000000000000 --- a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Daniel Wood.html +++ /dev/null @@ -1,134 +0,0 @@ - - - - Daniel Wood - - - - -
    -

    Daniel Wood

    - -
    -
    How did you hear about SM?
    • Used to be a mentee
    • in my current role doing a lot of mentorship, forgot how much I like it

    Brief background
    • Working at Aetna as a Data Scientist
    • Health insurance 
    • by end of the year, managing a portfolio of 7 NBAs
    • got thrown into some deep waters and learned a ton, launched a product
      • care management - helping from the clinical side
      • predictive model to predict preeclampsia
      • and other things
    • Now in behavioral health

    Mentorship exp
    • as a post-doc, sole postdoc, running the lab
      • 6-7 masters students, 
        • helped them with exp design
        • teach them some code (matlab)
        • helped a lot with no-code GUIs
        • etc
      • 6-7 undergrad students
        • longer relationships
        • help them develop their skills
    • In current role, 2 mentees
      • As a manager, mentored one of his reports
        • she was good at analytics, helped her with confidence with stakeholders, and some SWE best practices
        • checking in, making sure she was okay
      • Another:
        • same level, but he is going through a rough patch
        • giving him projects, working through gaps that come up
        • and some project management stuff
    • Interviewing 3 people a week!

    What do beginners need and how can you help?
    • Coming from academia
      • as an academic, you can go as deep as you want for as long as you want
      • but that is not an effective way to think when interviewing or when on the job
      • "If I only had an hour how would I solve it"
    • confidence in how to think like a DS, how to operate 
    • Some folks just need to sharpen their skillset
    • Common mistakes in interviews
    -
    -
    Questions about SM:
    • What does the average mentee look like?
    • What % of mentors never get an actual mentee?
    • Resources for mentors?
    • In cases where there is a dispute, how do you handle them?
    -
    - -
    - - - \ No newline at end of file diff --git a/spaces/avivdm1/AutoGPT/tests/context.py b/spaces/avivdm1/AutoGPT/tests/context.py deleted file mode 100644 index cef969db69ab189109b935bba9ed06696cf5337a..0000000000000000000000000000000000000000 --- a/spaces/avivdm1/AutoGPT/tests/context.py +++ /dev/null @@ -1,6 +0,0 @@ -import os -import sys - -sys.path.insert( - 0, os.path.abspath(os.path.join(os.path.dirname(__file__), "../scripts")) -) diff --git a/spaces/awacke1/AutoMLPandasProfilingSunburst/README.md b/spaces/awacke1/AutoMLPandasProfilingSunburst/README.md deleted file mode 100644 index 6ac45f21441c27196196838e0279a5ad1c247a88..0000000000000000000000000000000000000000 --- a/spaces/awacke1/AutoMLPandasProfilingSunburst/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AutoMLPandasProfilingSunburst -emoji: ⚡ -colorFrom: gray -colorTo: pink -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/Feature-Extraction-microsoft-codebert-base/app.py b/spaces/awacke1/Feature-Extraction-microsoft-codebert-base/app.py deleted file mode 100644 index fee5ca40a3d286afc4acd71643726746352f7ac7..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Feature-Extraction-microsoft-codebert-base/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/microsoft/codebert-base").launch() \ No newline at end of file diff --git a/spaces/awacke1/HTML5-Aframe-3D-Maps/style.css b/spaces/awacke1/HTML5-Aframe-3D-Maps/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/awacke1/HTML5-Aframe-3D-Maps/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/awacke1/PrivateRealTimeDashboard/app.py b/spaces/awacke1/PrivateRealTimeDashboard/app.py deleted file mode 100644 index 3a58adace479e5fe8340e1ace672cc7796d69a78..0000000000000000000000000000000000000000 --- a/spaces/awacke1/PrivateRealTimeDashboard/app.py +++ /dev/null @@ -1,191 +0,0 @@ -import time # to simulate a real time data, time loop - -import numpy as np # np mean, np random -import pandas as pd # read csv, df manipulation -import plotly.express as px # interactive charts -import streamlit as st # 🎈 data web app development - - -# PersistDataset ----- -import os -import csv -import gradio as gr -from gradio import inputs, outputs -import huggingface_hub -from huggingface_hub import Repository, hf_hub_download, upload_file -from datetime import datetime - -# Dataset and Token links - change awacke1 to your own HF id, and add a HF_TOKEN copy to your repo for write permissions -# This should allow you to save your results to your own Dataset hosted on HF. 
--- -#DATASET_REPO_URL = "https://huggingface.co/datasets/awacke1/Carddata.csv" -DATASET_REPO_URL = "https://huggingface.co/datasets/" + "awacke1/PrivateASRWithMemory.csv" -#DATASET_REPO_ID = "awacke1/Carddata.csv" -DATASET_REPO_ID = "awacke1/PrivateASRWithMemory.csv" -DATA_FILENAME = "PrivateASRWithMemory.csv" -DATA_FILE = os.path.join("data", DATA_FILENAME) -HF_TOKEN = os.environ.get("HF_TOKEN") - -DataText = "" -# --------------------------------------------- - -SCRIPT = """ - -""" - -@st.experimental_singleton -def get_database_session(url): - # Create a database session object that points to the URL. - return session -#Clear memo -#Clear all in-memory and on-disk memo caches. - -@st.experimental_memo -def fetch_and_clean_data(url): - # Fetch data from URL here, and then clean it up. - return data - -if st.checkbox("Clear All"): - # Clear values from *all* memoized functions - st.experimental_memo.clear() - - try: - hf_hub_download( - repo_id=DATASET_REPO_ID, - filename=DATA_FILENAME, - cache_dir=DATA_DIRNAME, - force_filename=DATA_FILENAME - ) - except: - print("file not found") - repo = Repository(local_dir="data", clone_from=DATASET_REPO_URL,use_auth_token=HF_TOKEN) -# return session - print(repo) - DataText = repo - - st.markdown(DataText) - - -def generate_html() -> str: - with open(DATA_FILE) as csvfile: - reader = csv.DictReader(csvfile) - rows = [] - for row in reader: - rows.append(row) - rows.reverse() - if len(rows) == 0: - return "no messages yet" - else: - html = "
    " - for row in rows: - html += "
    " - html += f"{row['inputs']}" - html += f"{row['outputs']}" - html += "
    " - html += "
    " - return html - - -def store_message(name: str, message: str): - if name and message: - with open(DATA_FILE, "a") as csvfile: - writer = csv.DictWriter(csvfile, fieldnames=["name", "message", "time"]) - writer.writerow( - {"name": name.strip(), "message": message.strip(), "time": str(datetime.now())} - ) - # uncomment line below to begin saving - - commit_url = repo.push_to_hub() - return "" - - -#st.set_page_config( -# page_title="Real-Time Data Science Dashboard", -# page_icon="✅", -# layout="wide", -#) - -# read csv from a github repo -dataset_url = "https://raw.githubusercontent.com/Lexie88rus/bank-marketing-analysis/master/bank.csv" - -# read csv from a URL -@st.experimental_memo -def get_data() -> pd.DataFrame: - return pd.read_csv(dataset_url) - -df = get_data() - -# dashboard title -st.title("Real-Time / Live Data Science Dashboard") - -# top-level filters -job_filter = st.selectbox("Select the Job", pd.unique(df["job"])) - -# creating a single-element container -placeholder = st.empty() - -# dataframe filter -df = df[df["job"] == job_filter] - -# near real-time / live feed simulation -for seconds in range(200): - - df["age_new"] = df["age"] * np.random.choice(range(1, 5)) - df["balance_new"] = df["balance"] * np.random.choice(range(1, 5)) - - # creating KPIs - avg_age = np.mean(df["age_new"]) - - count_married = int( - df[(df["marital"] == "married")]["marital"].count() - + np.random.choice(range(1, 30)) - ) - - balance = np.mean(df["balance_new"]) - - with placeholder.container(): - - # create three columns - kpi1, kpi2, kpi3 = st.columns(3) - - # fill in those three columns with respective metrics or KPIs - kpi1.metric( - label="Age ⏳", - value=round(avg_age), - delta=round(avg_age) - 10, - ) - - kpi2.metric( - label="Married Count 💍", - value=int(count_married), - delta=-10 + count_married, - ) - - kpi3.metric( - label="A/C Balance $", - value=f"$ {round(balance,2)} ", - delta=-round(balance / count_married) * 100, - ) - - # create two columns for charts - fig_col1, fig_col2 = st.columns(2) - with fig_col1: - st.markdown("### First Chart") - fig = px.density_heatmap( - data_frame=df, y="age_new", x="marital" - ) - st.write(fig) - - with fig_col2: - st.markdown("### Second Chart") - fig2 = px.histogram(data_frame=df, x="age_new") - st.write(fig2) - - st.markdown("### Detailed Data View") - st.dataframe(df) - - time.sleep(1) \ No newline at end of file diff --git a/spaces/awacke1/TimerASRLive/app.py b/spaces/awacke1/TimerASRLive/app.py deleted file mode 100644 index ef321eeef629eae4a95fb72951420fd00d2cb683..0000000000000000000000000000000000000000 --- a/spaces/awacke1/TimerASRLive/app.py +++ /dev/null @@ -1,23 +0,0 @@ -from transformers import pipeline -import gradio as gr -import time - -p = pipeline("automatic-speech-recognition") - -def transcribe(audio, state=""): - time.sleep(5) - text = p(audio)["text"] - state += text + " " - return state, state - -gr.Interface( - fn=transcribe, - inputs=[ - gr.inputs.Audio(source="microphone", type="filepath"), - "state" - ], - outputs=[ - "textbox", - "state" - ], - live=True).launch() diff --git a/spaces/badayvedat/AudioSep/models/CLAP/open_clip/factory.py b/spaces/badayvedat/AudioSep/models/CLAP/open_clip/factory.py deleted file mode 100644 index 844f9ca0e12a0ff43ba3e042a3e43530ebe91b8c..0000000000000000000000000000000000000000 --- a/spaces/badayvedat/AudioSep/models/CLAP/open_clip/factory.py +++ /dev/null @@ -1,277 +0,0 @@ -import json -import logging -import os -import pathlib -import re -from copy import deepcopy -from pathlib 
import Path - -import torch - -from .model import CLAP, convert_weights_to_fp16 -from .openai import load_openai_model -from .pretrained import get_pretrained_url, download_pretrained -from .transform import image_transform - -_MODEL_CONFIG_PATHS = [Path(__file__).parent / f"model_configs/"] -_MODEL_CONFIGS = {} # directory (model_name: config) of model architecture configs - - -def _natural_key(string_): - return [int(s) if s.isdigit() else s for s in re.split(r"(\d+)", string_.lower())] - - -def _rescan_model_configs(): - global _MODEL_CONFIGS - - config_ext = (".json",) - config_files = [] - for config_path in _MODEL_CONFIG_PATHS: - if config_path.is_file() and config_path.suffix in config_ext: - config_files.append(config_path) - elif config_path.is_dir(): - for ext in config_ext: - config_files.extend(config_path.glob(f"*{ext}")) - - for cf in config_files: - if os.path.basename(cf)[0] == ".": - continue # Ignore hidden files - - with open(cf, "r") as f: - model_cfg = json.load(f) - if all(a in model_cfg for a in ("embed_dim", "audio_cfg", "text_cfg")): - _MODEL_CONFIGS[cf.stem] = model_cfg - - _MODEL_CONFIGS = { - k: v - for k, v in sorted(_MODEL_CONFIGS.items(), key=lambda x: _natural_key(x[0])) - } - - -_rescan_model_configs() # initial populate of model config registry - - -def load_state_dict(checkpoint_path: str, map_location="cpu", skip_params=True): - checkpoint = torch.load(checkpoint_path, map_location=map_location) - if isinstance(checkpoint, dict) and "state_dict" in checkpoint: - state_dict = checkpoint["state_dict"] - else: - state_dict = checkpoint - if skip_params: - if next(iter(state_dict.items()))[0].startswith("module"): - state_dict = {k[7:]: v for k, v in state_dict.items()} - # for k in state_dict: - # if k.startswith('transformer'): - # v = state_dict.pop(k) - # state_dict['text_branch.' + k[12:]] = v - return state_dict - - -def create_model( - amodel_name: str, - tmodel_name: str, - pretrained: str = "", - precision: str = "fp32", - device: torch.device = torch.device("cpu"), - jit: bool = False, - force_quick_gelu: bool = False, - openai_model_cache_dir: str = os.path.expanduser("~/.cache/clip"), - skip_params=True, - pretrained_audio: str = "", - pretrained_text: str = "", - enable_fusion: bool = False, - fusion_type: str = "None" - # pretrained_image: bool = False, -): - amodel_name = amodel_name.replace( - "/", "-" - ) # for callers using old naming with / in ViT names - pretrained_orig = pretrained - pretrained = pretrained.lower() - if pretrained == "openai": - if amodel_name in _MODEL_CONFIGS: - logging.info(f"Loading {amodel_name} model config.") - model_cfg = deepcopy(_MODEL_CONFIGS[amodel_name]) - else: - logging.error( - f"Model config for {amodel_name} not found; available models {list_models()}." 
- ) - raise RuntimeError(f"Model config for {amodel_name} not found.") - - logging.info(f"Loading pretrained ViT-B-16 text encoder from OpenAI.") - # Hard Code in model name - model_cfg["text_cfg"]["model_type"] = tmodel_name - model = load_openai_model( - "ViT-B-16", - model_cfg, - device=device, - jit=jit, - cache_dir=openai_model_cache_dir, - enable_fusion=enable_fusion, - fusion_type=fusion_type, - ) - # See https://discuss.pytorch.org/t/valueerror-attemting-to-unscale-fp16-gradients/81372 - if precision == "amp" or precision == "fp32": - model = model.float() - else: - if amodel_name in _MODEL_CONFIGS: - logging.info(f"Loading {amodel_name} model config.") - model_cfg = deepcopy(_MODEL_CONFIGS[amodel_name]) - else: - logging.error( - f"Model config for {amodel_name} not found; available models {list_models()}." - ) - raise RuntimeError(f"Model config for {amodel_name} not found.") - - if force_quick_gelu: - # override for use of QuickGELU on non-OpenAI transformer models - model_cfg["quick_gelu"] = True - - # if pretrained_image: - # if 'timm_amodel_name' in model_cfg.get('vision_cfg', {}): - # # pretrained weight loading for timm models set via vision_cfg - # model_cfg['vision_cfg']['timm_model_pretrained'] = True - # else: - # assert False, 'pretrained image towers currently only supported for timm models' - model_cfg["text_cfg"]["model_type"] = tmodel_name - model_cfg["enable_fusion"] = enable_fusion - model_cfg["fusion_type"] = fusion_type - model = CLAP(**model_cfg) - - if pretrained: - checkpoint_path = "" - url = get_pretrained_url(amodel_name, pretrained) - if url: - checkpoint_path = download_pretrained(url, root=openai_model_cache_dir) - elif os.path.exists(pretrained_orig): - checkpoint_path = pretrained_orig - if checkpoint_path: - logging.info( - f"Loading pretrained {amodel_name}-{tmodel_name} weights ({pretrained})." - ) - ckpt = load_state_dict(checkpoint_path, skip_params=True) - model.load_state_dict(ckpt) - param_names = [n for n, p in model.named_parameters()] - # for n in param_names: - # print(n, "\t", "Loaded" if n in ckpt else "Unloaded") - else: - logging.warning( - f"Pretrained weights ({pretrained}) not found for model {amodel_name}." - ) - raise RuntimeError( - f"Pretrained weights ({pretrained}) not found for model {amodel_name}." - ) - - if pretrained_audio: - if amodel_name.startswith("PANN"): - if "Cnn14_mAP" in pretrained_audio: # official checkpoint - audio_ckpt = torch.load(pretrained_audio, map_location="cpu") - audio_ckpt = audio_ckpt["model"] - keys = list(audio_ckpt.keys()) - for key in keys: - if ( - "spectrogram_extractor" not in key - and "logmel_extractor" not in key - ): - v = audio_ckpt.pop(key) - audio_ckpt["audio_branch." + key] = v - elif os.path.basename(pretrained_audio).startswith( - "PANN" - ): # checkpoint trained via HTSAT codebase - audio_ckpt = torch.load(pretrained_audio, map_location="cpu") - audio_ckpt = audio_ckpt["state_dict"] - keys = list(audio_ckpt.keys()) - for key in keys: - if key.startswith("sed_model"): - v = audio_ckpt.pop(key) - audio_ckpt["audio_branch." 
+ key[10:]] = v - elif os.path.basename(pretrained_audio).startswith( - "finetuned" - ): # checkpoint trained via linear probe codebase - audio_ckpt = torch.load(pretrained_audio, map_location="cpu") - else: - raise ValueError("Unknown audio checkpoint") - elif amodel_name.startswith("HTSAT"): - if "HTSAT_AudioSet_Saved" in pretrained_audio: # official checkpoint - audio_ckpt = torch.load(pretrained_audio, map_location="cpu") - audio_ckpt = audio_ckpt["state_dict"] - keys = list(audio_ckpt.keys()) - for key in keys: - if key.startswith("sed_model") and ( - "spectrogram_extractor" not in key - and "logmel_extractor" not in key - ): - v = audio_ckpt.pop(key) - audio_ckpt["audio_branch." + key[10:]] = v - elif os.path.basename(pretrained_audio).startswith( - "HTSAT" - ): # checkpoint trained via HTSAT codebase - audio_ckpt = torch.load(pretrained_audio, map_location="cpu") - audio_ckpt = audio_ckpt["state_dict"] - keys = list(audio_ckpt.keys()) - for key in keys: - if key.startswith("sed_model"): - v = audio_ckpt.pop(key) - audio_ckpt["audio_branch." + key[10:]] = v - elif os.path.basename(pretrained_audio).startswith( - "finetuned" - ): # checkpoint trained via linear probe codebase - audio_ckpt = torch.load(pretrained_audio, map_location="cpu") - else: - raise ValueError("Unknown audio checkpoint") - else: - raise f"this audio encoder pretrained checkpoint is not support" - - model.load_state_dict(audio_ckpt, strict=False) - logging.info( - f"Loading pretrained {amodel_name} weights ({pretrained_audio})." - ) - param_names = [n for n, p in model.named_parameters()] - for n in param_names: - print(n, "\t", "Loaded" if n in audio_ckpt else "Unloaded") - - model.to(device=device) - if precision == "fp16": - assert device.type != "cpu" - convert_weights_to_fp16(model) - - if jit: - model = torch.jit.script(model) - - return model, model_cfg - - -def create_model_and_transforms( - model_name: str, - pretrained: str = "", - precision: str = "fp32", - device: torch.device = torch.device("cpu"), - jit: bool = False, - force_quick_gelu: bool = False, - # pretrained_image: bool = False, -): - model = create_model( - model_name, - pretrained, - precision, - device, - jit, - force_quick_gelu=force_quick_gelu, - # pretrained_image=pretrained_image - ) - preprocess_train = image_transform(model.visual.image_size, is_train=True) - preprocess_val = image_transform(model.visual.image_size, is_train=False) - return model, preprocess_train, preprocess_val - - -def list_models(): - """enumerate available model architectures based on config files""" - return list(_MODEL_CONFIGS.keys()) - - -def add_model_config(path): - """add model config path or file and update registry""" - if not isinstance(path, Path): - path = Path(path) - _MODEL_CONFIG_PATHS.append(path) - _rescan_model_configs() diff --git a/spaces/banana-projects/web3d/node_modules/three/src/geometries/TubeGeometry.js b/spaces/banana-projects/web3d/node_modules/three/src/geometries/TubeGeometry.js deleted file mode 100644 index 8f9159fd246ee453782b9f8c1dcba707e54a52f3..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/geometries/TubeGeometry.js +++ /dev/null @@ -1,232 +0,0 @@ -/** - * @author oosmoxiecode / https://github.com/oosmoxiecode - * @author WestLangley / https://github.com/WestLangley - * @author zz85 / https://github.com/zz85 - * @author miningold / https://github.com/miningold - * @author jonobr1 / https://github.com/jonobr1 - * @author Mugen87 / https://github.com/Mugen87 - * - */ - -import 
{ Geometry } from '../core/Geometry.js'; -import { BufferGeometry } from '../core/BufferGeometry.js'; -import { Float32BufferAttribute } from '../core/BufferAttribute.js'; -import { Vector2 } from '../math/Vector2.js'; -import { Vector3 } from '../math/Vector3.js'; - -// TubeGeometry - -function TubeGeometry( path, tubularSegments, radius, radialSegments, closed, taper ) { - - Geometry.call( this ); - - this.type = 'TubeGeometry'; - - this.parameters = { - path: path, - tubularSegments: tubularSegments, - radius: radius, - radialSegments: radialSegments, - closed: closed - }; - - if ( taper !== undefined ) console.warn( 'THREE.TubeGeometry: taper has been removed.' ); - - var bufferGeometry = new TubeBufferGeometry( path, tubularSegments, radius, radialSegments, closed ); - - // expose internals - - this.tangents = bufferGeometry.tangents; - this.normals = bufferGeometry.normals; - this.binormals = bufferGeometry.binormals; - - // create geometry - - this.fromBufferGeometry( bufferGeometry ); - this.mergeVertices(); - -} - -TubeGeometry.prototype = Object.create( Geometry.prototype ); -TubeGeometry.prototype.constructor = TubeGeometry; - -// TubeBufferGeometry - -function TubeBufferGeometry( path, tubularSegments, radius, radialSegments, closed ) { - - BufferGeometry.call( this ); - - this.type = 'TubeBufferGeometry'; - - this.parameters = { - path: path, - tubularSegments: tubularSegments, - radius: radius, - radialSegments: radialSegments, - closed: closed - }; - - tubularSegments = tubularSegments || 64; - radius = radius || 1; - radialSegments = radialSegments || 8; - closed = closed || false; - - var frames = path.computeFrenetFrames( tubularSegments, closed ); - - // expose internals - - this.tangents = frames.tangents; - this.normals = frames.normals; - this.binormals = frames.binormals; - - // helper variables - - var vertex = new Vector3(); - var normal = new Vector3(); - var uv = new Vector2(); - var P = new Vector3(); - - var i, j; - - // buffer - - var vertices = []; - var normals = []; - var uvs = []; - var indices = []; - - // create buffer data - - generateBufferData(); - - // build geometry - - this.setIndex( indices ); - this.addAttribute( 'position', new Float32BufferAttribute( vertices, 3 ) ); - this.addAttribute( 'normal', new Float32BufferAttribute( normals, 3 ) ); - this.addAttribute( 'uv', new Float32BufferAttribute( uvs, 2 ) ); - - // functions - - function generateBufferData() { - - for ( i = 0; i < tubularSegments; i ++ ) { - - generateSegment( i ); - - } - - // if the geometry is not closed, generate the last row of vertices and normals - // at the regular position on the given path - // - // if the geometry is closed, duplicate the first row of vertices and normals (uvs will differ) - - generateSegment( ( closed === false ) ? tubularSegments : 0 ); - - // uvs are generated in a separate function. 
- // this makes it easy compute correct values for closed geometries - - generateUVs(); - - // finally create faces - - generateIndices(); - - } - - function generateSegment( i ) { - - // we use getPointAt to sample evenly distributed points from the given path - - P = path.getPointAt( i / tubularSegments, P ); - - // retrieve corresponding normal and binormal - - var N = frames.normals[ i ]; - var B = frames.binormals[ i ]; - - // generate normals and vertices for the current segment - - for ( j = 0; j <= radialSegments; j ++ ) { - - var v = j / radialSegments * Math.PI * 2; - - var sin = Math.sin( v ); - var cos = - Math.cos( v ); - - // normal - - normal.x = ( cos * N.x + sin * B.x ); - normal.y = ( cos * N.y + sin * B.y ); - normal.z = ( cos * N.z + sin * B.z ); - normal.normalize(); - - normals.push( normal.x, normal.y, normal.z ); - - // vertex - - vertex.x = P.x + radius * normal.x; - vertex.y = P.y + radius * normal.y; - vertex.z = P.z + radius * normal.z; - - vertices.push( vertex.x, vertex.y, vertex.z ); - - } - - } - - function generateIndices() { - - for ( j = 1; j <= tubularSegments; j ++ ) { - - for ( i = 1; i <= radialSegments; i ++ ) { - - var a = ( radialSegments + 1 ) * ( j - 1 ) + ( i - 1 ); - var b = ( radialSegments + 1 ) * j + ( i - 1 ); - var c = ( radialSegments + 1 ) * j + i; - var d = ( radialSegments + 1 ) * ( j - 1 ) + i; - - // faces - - indices.push( a, b, d ); - indices.push( b, c, d ); - - } - - } - - } - - function generateUVs() { - - for ( i = 0; i <= tubularSegments; i ++ ) { - - for ( j = 0; j <= radialSegments; j ++ ) { - - uv.x = i / tubularSegments; - uv.y = j / radialSegments; - - uvs.push( uv.x, uv.y ); - - } - - } - - } - -} - -TubeBufferGeometry.prototype = Object.create( BufferGeometry.prototype ); -TubeBufferGeometry.prototype.constructor = TubeBufferGeometry; - -TubeBufferGeometry.prototype.toJSON = function () { - - var data = BufferGeometry.prototype.toJSON.call( this ); - - data.path = this.parameters.path.toJSON(); - - return data; - -}; - -export { TubeGeometry, TubeBufferGeometry }; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/clipping_planes_pars_fragment.glsl.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/clipping_planes_pars_fragment.glsl.js deleted file mode 100644 index ccdd963fcba8a154aa1f567f64a4aad5bb62d273..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/clipping_planes_pars_fragment.glsl.js +++ /dev/null @@ -1,11 +0,0 @@ -export default /* glsl */` -#if NUM_CLIPPING_PLANES > 0 - - #if ! defined( PHYSICAL ) && ! defined( PHONG ) && ! 
defined( MATCAP ) - varying vec3 vViewPosition; - #endif - - uniform vec4 clippingPlanes[ NUM_CLIPPING_PLANES ]; - -#endif -`; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/encodings_fragment.glsl.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/encodings_fragment.glsl.js deleted file mode 100644 index 3072c76cd8a4421848878bff6a9279e6444d07f1..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/encodings_fragment.glsl.js +++ /dev/null @@ -1,3 +0,0 @@ -export default /* glsl */` -gl_FragColor = linearToOutputTexel( gl_FragColor ); -`; diff --git a/spaces/basakbuluz/turkish-question-answering/utils/__init__.py b/spaces/basakbuluz/turkish-question-answering/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/bioriAsaeru/text-to-voice/4t Tray Minimizer Pro Crack 23 dolly nazionale exce A Review of the Features and Benefits of the Program.md b/spaces/bioriAsaeru/text-to-voice/4t Tray Minimizer Pro Crack 23 dolly nazionale exce A Review of the Features and Benefits of the Program.md deleted file mode 100644 index 3617f25c9ff632a04c2f6dab51f8954eb54b88d4..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/4t Tray Minimizer Pro Crack 23 dolly nazionale exce A Review of the Features and Benefits of the Program.md +++ /dev/null @@ -1,6 +0,0 @@ -

    4t Tray Minimizer Pro Crack 23 dolly nazionale exce


    Download File: https://urloso.com/2uyOSz



    - -
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/Codesmith Generator 5.3.4 Professional Crack WORK.md b/spaces/bioriAsaeru/text-to-voice/Codesmith Generator 5.3.4 Professional Crack WORK.md deleted file mode 100644 index f4d73eaeb88b712b5f2f65985204ce55c1b3d710..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Codesmith Generator 5.3.4 Professional Crack WORK.md +++ /dev/null @@ -1,9 +0,0 @@ -
    -

    I really like that the program lets you choose how big your output script is going to be. The other program I tried was tiny, and it was great, but this program seems to be a little better because it allows you to create larger scripts in one shot.

    -

    Codesmith Generator 5.3.4 Professional Crack


    Download: https://urloso.com/2uyP3Y



    -

    I must say that I was really impressed by how easy CodeSmith Generator makes it to compile a database. I was expecting more advanced features, but the thing that really won me over is the support for multiple databases, which the other program did not seem to have.

    -

    In fact, CodeSmith Generator Professional is a well-known tool for creating all sorts of scripts from database table schemas. The entire script is composed from base code written in the program, and the output is customized through user-defined variables. This script-creation process automates building database table schemas and generating T-SQL queries. Once a script has been generated, you should change the variable names to match the specific database you are using.
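    The template-plus-variables workflow described above can be sketched generically. The snippet below is not CodeSmith's own template language (its templates use a dedicated syntax); it is only an illustrative Python stand-in, and the table name, columns, and variables are invented for the example:

```python
# Illustrative only: mimics the idea of a base template plus user-defined
# variables producing a T-SQL script from a table schema description.
from string import Template

# Hypothetical schema and user-defined variables - replace with your own.
schema = {
    "table": "Customers",
    "columns": [("CustomerId", "INT"), ("Name", "NVARCHAR(100)"), ("Email", "NVARCHAR(255)")],
}
variables = {"schema_name": "dbo", "pk_column": "CustomerId"}

base_template = Template(
    "CREATE TABLE $schema_name.$table (\n"
    "$column_defs,\n"
    "    CONSTRAINT PK_$table PRIMARY KEY ($pk_column)\n"
    ");"
)

# Expand the column list from the schema, then fill in the template.
column_defs = ",\n".join(f"    {name} {sql_type}" for name, sql_type in schema["columns"])

print(base_template.substitute(
    schema_name=variables["schema_name"],
    table=schema["table"],
    column_defs=column_defs,
    pk_column=variables["pk_column"],
))
```

    Changing the entries in `variables` (for example the schema or primary-key column) changes the generated script without touching the template, which is the same pattern the paragraph above describes.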

    -

    Once you create a table schema with the help of this software, you can also save it to a file that can be downloaded from the interface. The program includes advanced features such as template creation, backward/forward editing of templates, and a live preview of the generated scripts. You can also try manjaro linux 0.19.0 xfce 2018.07.09 crack, or tinype accounting software with 2016 v10.

    -

    -
    -
    \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/If Only 2004 English Subtitles Download.md b/spaces/bioriAsaeru/text-to-voice/If Only 2004 English Subtitles Download.md deleted file mode 100644 index 38798282b231b9b8272aa98d37c9224de10dc495..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/If Only 2004 English Subtitles Download.md +++ /dev/null @@ -1,104 +0,0 @@ -
    -

    If Only 2004 English Subtitles Download

    - -

    If you are looking for a romantic fantasy film that will make you cry, you might want to watch If Only (2004) with English subtitles. This movie stars Jennifer Love Hewitt and Paul Nicholls as a couple who get a second chance to save their relationship after a tragic accident. But can they change their fate or will they lose each other again?

    - -

    In this article, we will show you how to download English subtitles for If Only (2004) and where to find them online. We will also give you some tips on how to enjoy this movie with subtitles.

    -

    If Only 2004 English Subtitles Download


    DOWNLOAD ····· https://urloso.com/2uyRgT



    - -

    How to Download English Subtitles for If Only (2004)

    - -

    There are many websites that offer subtitles for movies and TV shows, but not all of them are reliable or safe. Some may contain viruses, malware, or spam; some may have poor-quality or inaccurate subtitles; and some may simply not have the subtitles you are looking for.

    - -

    That's why we recommend using a trusted and reputable website with a large database of subtitles in different languages and formats. One such website is OpenSubtitles.org, which has over 114 subtitles for If Only (2004) in various languages, including English.

    - -

    To download English subtitles for If Only (2004) from OpenSubtitles.org, follow these steps:

    - -
      -
    1. Go to https://www.opensubtitles.org/en/search/sublanguageid-all/idmovie-14652.
    2. -
    3. Select the subtitle file that matches your video file name and format. For example, if you have If.Only.2004.720p.BluRay.x264.AAC-[YTS.MX].mp4, you can choose If.Only.2004.720p.BluRay.x264.AAC-[YTS.MX].srt.
    4. -
    5. Click on the download button and save the subtitle file to your computer.
    6. -
    7. Extract the subtitle file from the zip folder if necessary.
    8. -
    9. Rename the subtitle file to match your video file name if necessary.
    10. -
    11. Place the subtitle file in the same folder as your video file.
    12. -
    - -

    Now you are ready to watch If Only (2004) with English subtitles on your preferred media player.
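    If you end up doing this for several files, the rename-and-place steps above can also be scripted. The sketch below is only an illustration; the video and subtitle paths are assumptions you would replace with your own files:

```python
# Copy a downloaded .srt next to the video and give it the video's name,
# so most media players will load it automatically.
from pathlib import Path
import shutil

# Example paths - adjust to wherever your video and downloaded subtitle live.
video = Path("If.Only.2004.720p.BluRay.x264.AAC-[YTS.MX].mp4")
downloaded_srt = Path("~/Downloads/If.Only.2004.english.srt").expanduser()

target = video.with_suffix(".srt")  # same folder and stem as the video
shutil.copy(downloaded_srt, target)
print(f"Subtitle saved as {target}")
```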

    - -

    Where to Find English Subtitles for If Only (2004)

    - -

    If you don't want to download subtitles for If Only (2004), you can also watch it online with subtitles on some streaming platforms. However, not all of them may have the movie available in your region or with your preferred language option. Here are some of the platforms that have If Only (2004) with English subtitles:

    - -
      -
    • YIFY Subtitles: This website provides subtitles for movies that are uploaded by YIFY, a popular torrent site. You can watch If Only (2004) with English subtitles on YIFY Subtitles by clicking on the play button next to the subtitle file.
    • -
    • SoundCloud: This is a platform that allows users to upload and stream audio files, including audiobooks and podcasts. You can listen to If Only (2004) with English subtitles on SoundCloud by following the link provided by Efbricinnvig1977, a user who uploaded the movie with subtitles.
    • -
    • NedEd: This is a forum that allows users to discuss various topics, including movies and TV shows. You can watch If Only (2004) with English subtitles on NedEd by clicking on the link provided by NedEd Admin, who shared the movie with subtitles.
    • -
    - -

    How to Enjoy If Only (2004) with Subtitles

    - -

    Watching a movie with subtitles can be a great way to improve your language skills, learn new vocabulary, and understand different accents and cultures. However, it can also be challenging or distracting if you are not used to it. Here are some tips on how to enjoy If Only (2004) with subtitles:

    - -
      -
    • Choose the right subtitle file: Make sure you select the subtitle file that matches your video file name and format, as well as your preferred language and dialect. For example, if you want to watch If Only (2004) in British English, you should avoid subtitle files that are in American English or other variants.
    • -
    • Adjust the subtitle settings: Depending on your media player, you may be able to adjust the subtitle settings such as font size, color, position, delay, or synchronization. This can help you read the subtitles more comfortably and avoid missing any important dialogue or action.
    • -
    • Focus on the main dialogue: Sometimes, subtitles may include background noises, music lyrics, or other sounds that are not essential for understanding the story. You don't have to read everything that appears on the screen; just focus on the main dialogue between the characters and ignore the rest.
    • -
    • Pause and rewind if necessary: If you miss something or don't understand something, don't hesitate to pause and rewind the video. You can also use online dictionaries or translators to look up unfamiliar words or phrases.
    • -
    • Enjoy the movie: Don't let subtitles ruin your enjoyment of the movie. Remember that they are there to help you understand and appreciate the story better. Try to immerse yourself in the movie and feel the emotions of the characters.
    • -
    - -

    Conclusion

    - -

    If Only (2004) is a romantic fantasy film that will touch your heart and make you think about life and love. If you want to watch it with English subtitles, you can either download them from OpenSubtitles.org or watch it online on YIFY Subtitles, SoundCloud, or NedEd. You can also follow our tips on how to enjoy If Only (2004) with subtitles and have a great time watching this movie.

    - -

    We hope this article was helpful for you and answered your query about If Only 2004 English Subtitles Download. If you have any questions or feedback, please leave a comment below.

    -

    -

    What is If Only (2004) About?

    - -

    If Only (2004) is a romantic fantasy film that explores the theme of "what if". It tells the story of Ian Wyndham (Paul Nicholls), a successful British businessman who lives in London with his girlfriend Samantha Andrews (Jennifer Love Hewitt), an aspiring musician. They have a rocky relationship due to Ian's workaholic nature and lack of attention to Samantha's needs and dreams.

    - -

    One day, after a heated argument, Samantha dies in a car accident while Ian survives. Ian is devastated and blames himself for not being more supportive and loving to her. However, he wakes up the next morning to find that Samantha is alive and well, and that he has been given another chance to relive the day of her death. He realizes that this is an opportunity to change the course of events and prevent her from dying.

    - -

    However, things are not as simple as they seem. Ian soon discovers that he has to make some difficult choices and sacrifices to save Samantha's life. He also learns more about himself, his relationship, and his true feelings for her. Will he be able to change their fate or will he lose her again?

    - -

    Why You Should Watch If Only (2004) with English Subtitles

    - -

    If Only (2004) is a movie that will touch your heart and make you think about life and love. It has a captivating plot, a beautiful soundtrack, and a great cast. Jennifer Love Hewitt and Paul Nicholls have wonderful chemistry and deliver emotional performances. The movie also has some humorous moments and surprises that will keep you entertained.

    - -

    If you are not a native English speaker, or if you have trouble understanding different accents, watching If Only (2004) with English subtitles can help you enjoy the movie more. You can follow the dialogue more easily and catch the characters' nuances and expressions. You can also improve your vocabulary, grammar, and pronunciation by reading the subtitles while listening to the dialogue.

    - -

    Watching If Only (2004) with English subtitles can also enhance your cultural awareness and appreciation. You can learn more about British culture, slang, and humor by watching how the characters interact and behave. You can also compare British and American English by listening to how Jennifer Love Hewitt and Paul Nicholls speak.

    - -

    Therefore, watching If Only (2004) with English subtitles can be a rewarding and enjoyable experience for you. You can watch it alone or with your friends or family, and share your thoughts and feelings about the movie.

    -

    What are the Benefits of Watching If Only (2004) with English Subtitles

    - -

    Watching If Only (2004) with English subtitles can have many benefits for you, whether you are a fan of the movie, a learner of English, or a lover of romance. Here are some of the benefits of watching If Only (2004) with English subtitles:

    - -
      -
    • You can enjoy the movie more: Watching If Only (2004) with English subtitles can help you appreciate the movie more by understanding the dialogue, the emotions, and the messages better. You can also catch the details and subtleties that you might miss otherwise.
    • -
    • You can improve your English skills: Watching If Only (2004) with English subtitles can help you improve your English skills by exposing you to different vocabulary, grammar, and pronunciation. You can also learn how to use idioms, expressions, and slang in context. You can also practice your listening and reading comprehension by following the subtitles along with the audio.
    • -
    • You can have fun and learn at the same time: Watching If Only (2004) with English subtitles can be a fun and enjoyable way to learn English. You can watch it as a hobby, as a break, or as a reward. You can also watch it with your friends or family and have a good time together.
    • -
    - -

    How to Find More Movies Like If Only (2004) with English Subtitles

    - -

    If you liked If Only (2004) and want to watch more movies like it with English subtitles, you are in luck. There are many movies that share similar themes, genres, or styles with If Only (2004) that you can watch with English subtitles. Here are some of them:

    - -
      -
    • The Time Traveler's Wife (2009): This is a romantic drama film based on the novel of the same name by Audrey Niffenegger. It stars Rachel McAdams and Eric Bana as a couple who have to deal with the effects of time travel on their relationship.
    • -
    • Sliding Doors (1998): This is a romantic comedy film that explores the concept of "what if". It stars Gwyneth Paltrow as a woman who lives two parallel lives based on whether she catches or misses a train.
    • -
    • The Lake House (2006): This is a romantic fantasy film that involves a mysterious mailbox that connects two people who live in different times. It stars Sandra Bullock and Keanu Reeves as the lovers who communicate through letters.
    • -
    • About Time (2013): This is a romantic comedy film that features time travel as a plot device. It stars Domhnall Gleeson as a man who can travel back in time and change his life. He uses this ability to pursue his love interest, played by Rachel McAdams.
    • -
    • The Notebook (2004): This is a romantic drama film based on the novel of the same name by Nicholas Sparks. It stars Ryan Gosling and Rachel McAdams as a young couple who fall in love in the 1940s but are separated by their social differences.
    • -
    - -

    You can find these movies and more with English subtitles on various websites and platforms, such as OpenSubtitles.org, YIFY Subtitles, SoundCloud, NedEd, or other streaming services.

    -

    Conclusion

    - -

    If Only (2004) is a romantic fantasy film that will make you cry and think about life and love. It tells the story of a couple who get a second chance to save their relationship after a tragic accident. If you want to watch it with English subtitles, you can either download them from OpenSubtitles.org or watch it online on YIFY Subtitles, SoundCloud, or NedEd. You can also follow our tips on how to enjoy If Only (2004) with subtitles and have a great time watching this movie.

    - -

    Watching If Only (2004) with English subtitles can have many benefits for you, such as enjoying the movie more, improving your English skills, and having fun and learning at the same time. You can also find more movies like If Only (2004) with English subtitles on various websites and platforms.

    - -

    We hope this article was helpful for you and answered your query about If Only 2004 English Subtitles Download. If you have any questions or feedback, please leave a comment below.

    -
    -
    \ No newline at end of file diff --git a/spaces/bodah/RVC-Models-bo/rmvpe.py b/spaces/bodah/RVC-Models-bo/rmvpe.py deleted file mode 100644 index 3ad346141340e03bdbaa20121e1ed435bb3da57a..0000000000000000000000000000000000000000 --- a/spaces/bodah/RVC-Models-bo/rmvpe.py +++ /dev/null @@ -1,432 +0,0 @@ -import sys, torch, numpy as np, traceback, pdb -import torch.nn as nn -from time import time as ttime -import torch.nn.functional as F - - -class BiGRU(nn.Module): - def __init__(self, input_features, hidden_features, num_layers): - super(BiGRU, self).__init__() - self.gru = nn.GRU( - input_features, - hidden_features, - num_layers=num_layers, - batch_first=True, - bidirectional=True, - ) - - def forward(self, x): - return self.gru(x)[0] - - -class ConvBlockRes(nn.Module): - def __init__(self, in_channels, out_channels, momentum=0.01): - super(ConvBlockRes, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - nn.Conv2d( - in_channels=out_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=(1, 1), - padding=(1, 1), - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - if in_channels != out_channels: - self.shortcut = nn.Conv2d(in_channels, out_channels, (1, 1)) - self.is_shortcut = True - else: - self.is_shortcut = False - - def forward(self, x): - if self.is_shortcut: - return self.conv(x) + self.shortcut(x) - else: - return self.conv(x) + x - - -class Encoder(nn.Module): - def __init__( - self, - in_channels, - in_size, - n_encoders, - kernel_size, - n_blocks, - out_channels=16, - momentum=0.01, - ): - super(Encoder, self).__init__() - self.n_encoders = n_encoders - self.bn = nn.BatchNorm2d(in_channels, momentum=momentum) - self.layers = nn.ModuleList() - self.latent_channels = [] - for i in range(self.n_encoders): - self.layers.append( - ResEncoderBlock( - in_channels, out_channels, kernel_size, n_blocks, momentum=momentum - ) - ) - self.latent_channels.append([out_channels, in_size]) - in_channels = out_channels - out_channels *= 2 - in_size //= 2 - self.out_size = in_size - self.out_channel = out_channels - - def forward(self, x): - concat_tensors = [] - x = self.bn(x) - for i in range(self.n_encoders): - _, x = self.layers[i](x) - concat_tensors.append(_) - return x, concat_tensors - - -class ResEncoderBlock(nn.Module): - def __init__( - self, in_channels, out_channels, kernel_size, n_blocks=1, momentum=0.01 - ): - super(ResEncoderBlock, self).__init__() - self.n_blocks = n_blocks - self.conv = nn.ModuleList() - self.conv.append(ConvBlockRes(in_channels, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv.append(ConvBlockRes(out_channels, out_channels, momentum)) - self.kernel_size = kernel_size - if self.kernel_size is not None: - self.pool = nn.AvgPool2d(kernel_size=kernel_size) - - def forward(self, x): - for i in range(self.n_blocks): - x = self.conv[i](x) - if self.kernel_size is not None: - return x, self.pool(x) - else: - return x - - -class Intermediate(nn.Module): # - def __init__(self, in_channels, out_channels, n_inters, n_blocks, momentum=0.01): - super(Intermediate, self).__init__() - self.n_inters = n_inters - self.layers = nn.ModuleList() - self.layers.append( - ResEncoderBlock(in_channels, out_channels, None, n_blocks, momentum) - ) - for i in range(self.n_inters - 1): - self.layers.append( - 
ResEncoderBlock(out_channels, out_channels, None, n_blocks, momentum) - ) - - def forward(self, x): - for i in range(self.n_inters): - x = self.layers[i](x) - return x - - -class ResDecoderBlock(nn.Module): - def __init__(self, in_channels, out_channels, stride, n_blocks=1, momentum=0.01): - super(ResDecoderBlock, self).__init__() - out_padding = (0, 1) if stride == (1, 2) else (1, 1) - self.n_blocks = n_blocks - self.conv1 = nn.Sequential( - nn.ConvTranspose2d( - in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), - stride=stride, - padding=(1, 1), - output_padding=out_padding, - bias=False, - ), - nn.BatchNorm2d(out_channels, momentum=momentum), - nn.ReLU(), - ) - self.conv2 = nn.ModuleList() - self.conv2.append(ConvBlockRes(out_channels * 2, out_channels, momentum)) - for i in range(n_blocks - 1): - self.conv2.append(ConvBlockRes(out_channels, out_channels, momentum)) - - def forward(self, x, concat_tensor): - x = self.conv1(x) - x = torch.cat((x, concat_tensor), dim=1) - for i in range(self.n_blocks): - x = self.conv2[i](x) - return x - - -class Decoder(nn.Module): - def __init__(self, in_channels, n_decoders, stride, n_blocks, momentum=0.01): - super(Decoder, self).__init__() - self.layers = nn.ModuleList() - self.n_decoders = n_decoders - for i in range(self.n_decoders): - out_channels = in_channels // 2 - self.layers.append( - ResDecoderBlock(in_channels, out_channels, stride, n_blocks, momentum) - ) - in_channels = out_channels - - def forward(self, x, concat_tensors): - for i in range(self.n_decoders): - x = self.layers[i](x, concat_tensors[-1 - i]) - return x - - -class DeepUnet(nn.Module): - def __init__( - self, - kernel_size, - n_blocks, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(DeepUnet, self).__init__() - self.encoder = Encoder( - in_channels, 128, en_de_layers, kernel_size, n_blocks, en_out_channels - ) - self.intermediate = Intermediate( - self.encoder.out_channel // 2, - self.encoder.out_channel, - inter_layers, - n_blocks, - ) - self.decoder = Decoder( - self.encoder.out_channel, en_de_layers, kernel_size, n_blocks - ) - - def forward(self, x): - x, concat_tensors = self.encoder(x) - x = self.intermediate(x) - x = self.decoder(x, concat_tensors) - return x - - -class E2E(nn.Module): - def __init__( - self, - n_blocks, - n_gru, - kernel_size, - en_de_layers=5, - inter_layers=4, - in_channels=1, - en_out_channels=16, - ): - super(E2E, self).__init__() - self.unet = DeepUnet( - kernel_size, - n_blocks, - en_de_layers, - inter_layers, - in_channels, - en_out_channels, - ) - self.cnn = nn.Conv2d(en_out_channels, 3, (3, 3), padding=(1, 1)) - if n_gru: - self.fc = nn.Sequential( - BiGRU(3 * 128, 256, n_gru), - nn.Linear(512, 360), - nn.Dropout(0.25), - nn.Sigmoid(), - ) - else: - self.fc = nn.Sequential( - nn.Linear(3 * N_MELS, N_CLASS), nn.Dropout(0.25), nn.Sigmoid() - ) - - def forward(self, mel): - mel = mel.transpose(-1, -2).unsqueeze(1) - x = self.cnn(self.unet(mel)).transpose(1, 2).flatten(-2) - x = self.fc(x) - return x - - -from librosa.filters import mel - - -class MelSpectrogram(torch.nn.Module): - def __init__( - self, - is_half, - n_mel_channels, - sampling_rate, - win_length, - hop_length, - n_fft=None, - mel_fmin=0, - mel_fmax=None, - clamp=1e-5, - ): - super().__init__() - n_fft = win_length if n_fft is None else n_fft - self.hann_window = {} - mel_basis = mel( - sr=sampling_rate, - n_fft=n_fft, - n_mels=n_mel_channels, - fmin=mel_fmin, - fmax=mel_fmax, - htk=True, - ) - mel_basis = 
torch.from_numpy(mel_basis).float() - self.register_buffer("mel_basis", mel_basis) - self.n_fft = win_length if n_fft is None else n_fft - self.hop_length = hop_length - self.win_length = win_length - self.sampling_rate = sampling_rate - self.n_mel_channels = n_mel_channels - self.clamp = clamp - self.is_half = is_half - - def forward(self, audio, keyshift=0, speed=1, center=True): - factor = 2 ** (keyshift / 12) - n_fft_new = int(np.round(self.n_fft * factor)) - win_length_new = int(np.round(self.win_length * factor)) - hop_length_new = int(np.round(self.hop_length * speed)) - keyshift_key = str(keyshift) + "_" + str(audio.device) - if keyshift_key not in self.hann_window: - self.hann_window[keyshift_key] = torch.hann_window(win_length_new).to( - audio.device - ) - fft = torch.stft( - audio, - n_fft=n_fft_new, - hop_length=hop_length_new, - win_length=win_length_new, - window=self.hann_window[keyshift_key], - center=center, - return_complex=True, - ) - magnitude = torch.sqrt(fft.real.pow(2) + fft.imag.pow(2)) - if keyshift != 0: - size = self.n_fft // 2 + 1 - resize = magnitude.size(1) - if resize < size: - magnitude = F.pad(magnitude, (0, 0, 0, size - resize)) - magnitude = magnitude[:, :size, :] * self.win_length / win_length_new - mel_output = torch.matmul(self.mel_basis, magnitude) - if self.is_half == True: - mel_output = mel_output.half() - log_mel_spec = torch.log(torch.clamp(mel_output, min=self.clamp)) - return log_mel_spec - - -class RMVPE: - def __init__(self, model_path, is_half, device=None): - self.resample_kernel = {} - model = E2E(4, 1, (2, 2)) - ckpt = torch.load(model_path, map_location="cpu") - model.load_state_dict(ckpt) - model.eval() - if is_half == True: - model = model.half() - self.model = model - self.resample_kernel = {} - self.is_half = is_half - if device is None: - device = "cuda" if torch.cuda.is_available() else "cpu" - self.device = device - self.mel_extractor = MelSpectrogram( - is_half, 128, 16000, 1024, 160, None, 30, 8000 - ).to(device) - self.model = self.model.to(device) - cents_mapping = 20 * np.arange(360) + 1997.3794084376191 - self.cents_mapping = np.pad(cents_mapping, (4, 4)) # 368 - - def mel2hidden(self, mel): - with torch.no_grad(): - n_frames = mel.shape[-1] - mel = F.pad( - mel, (0, 32 * ((n_frames - 1) // 32 + 1) - n_frames), mode="reflect" - ) - hidden = self.model(mel) - return hidden[:, :n_frames] - - def decode(self, hidden, thred=0.03): - cents_pred = self.to_local_average_cents(hidden, thred=thred) - f0 = 10 * (2 ** (cents_pred / 1200)) - f0[f0 == 10] = 0 - # f0 = np.array([10 * (2 ** (cent_pred / 1200)) if cent_pred else 0 for cent_pred in cents_pred]) - return f0 - - def infer_from_audio(self, audio, thred=0.03): - audio = torch.from_numpy(audio).float().to(self.device).unsqueeze(0) - # torch.cuda.synchronize() - # t0=ttime() - mel = self.mel_extractor(audio, center=True) - # torch.cuda.synchronize() - # t1=ttime() - hidden = self.mel2hidden(mel) - # torch.cuda.synchronize() - # t2=ttime() - hidden = hidden.squeeze(0).cpu().numpy() - if self.is_half == True: - hidden = hidden.astype("float32") - f0 = self.decode(hidden, thred=thred) - # torch.cuda.synchronize() - # t3=ttime() - # print("hmvpe:%s\t%s\t%s\t%s"%(t1-t0,t2-t1,t3-t2,t3-t0)) - return f0 - - def to_local_average_cents(self, salience, thred=0.05): - # t0 = ttime() - center = np.argmax(salience, axis=1) # 帧长#index - salience = np.pad(salience, ((0, 0), (4, 4))) # 帧长,368 - # t1 = ttime() - center += 4 - todo_salience = [] - todo_cents_mapping = [] - starts = center - 4 - ends 
= center + 5 - for idx in range(salience.shape[0]): - todo_salience.append(salience[:, starts[idx] : ends[idx]][idx]) - todo_cents_mapping.append(self.cents_mapping[starts[idx] : ends[idx]]) - # t2 = ttime() - todo_salience = np.array(todo_salience) # 帧长,9 - todo_cents_mapping = np.array(todo_cents_mapping) # 帧长,9 - product_sum = np.sum(todo_salience * todo_cents_mapping, 1) - weight_sum = np.sum(todo_salience, 1) # 帧长 - devided = product_sum / weight_sum # 帧长 - # t3 = ttime() - maxx = np.max(salience, axis=1) # 帧长 - devided[maxx <= thred] = 0 - # t4 = ttime() - # print("decode:%s\t%s\t%s\t%s" % (t1 - t0, t2 - t1, t3 - t2, t4 - t3)) - return devided - - -# if __name__ == '__main__': -# audio, sampling_rate = sf.read("卢本伟语录~1.wav") -# if len(audio.shape) > 1: -# audio = librosa.to_mono(audio.transpose(1, 0)) -# audio_bak = audio.copy() -# if sampling_rate != 16000: -# audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) -# model_path = "/bili-coeus/jupyter/jupyterhub-liujing04/vits_ch/test-RMVPE/weights/rmvpe_llc_half.pt" -# thred = 0.03 # 0.01 -# device = 'cuda' if torch.cuda.is_available() else 'cpu' -# rmvpe = RMVPE(model_path,is_half=False, device=device) -# t0=ttime() -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# f0 = rmvpe.infer_from_audio(audio, thred=thred) -# t1=ttime() -# print(f0.shape,t1-t0) diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/export/c10.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/export/c10.py deleted file mode 100644 index e9a3ee38c8df7c05ac53985b5ec1c5535f360187..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/export/c10.py +++ /dev/null @@ -1,571 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import math -from typing import Dict -import torch -import torch.nn.functional as F - -from detectron2.layers import ShapeSpec, cat -from detectron2.layers.roi_align_rotated import ROIAlignRotated -from detectron2.modeling import poolers -from detectron2.modeling.proposal_generator import rpn -from detectron2.modeling.roi_heads.mask_head import mask_rcnn_inference -from detectron2.structures import Boxes, ImageList, Instances, Keypoints, RotatedBoxes - -from .shared import alias, to_device - - -""" -This file contains caffe2-compatible implementation of several detectron2 components. -""" - - -class Caffe2Boxes(Boxes): - """ - Representing a list of detectron2.structures.Boxes from minibatch, each box - is represented by a 5d vector (batch index + 4 coordinates), or a 6d vector - (batch index + 5 coordinates) for RotatedBoxes. - """ - - def __init__(self, tensor): - assert isinstance(tensor, torch.Tensor) - assert tensor.dim() == 2 and tensor.size(-1) in [4, 5, 6], tensor.size() - # TODO: make tensor immutable when dim is Nx5 for Boxes, - # and Nx6 for RotatedBoxes? - self.tensor = tensor - - -# TODO clean up this class, maybe just extend Instances -class InstancesList(object): - """ - Tensor representation of a list of Instances object for a batch of images. - - When dealing with a batch of images with Caffe2 ops, a list of bboxes - (instances) are usually represented by single Tensor with size - (sigma(Ni), 5) or (sigma(Ni), 4) plus a batch split Tensor. This class is - for providing common functions to convert between these two representations. 
- """ - - def __init__(self, im_info, indices, extra_fields=None): - # [N, 3] -> (H, W, Scale) - self.im_info = im_info - # [N,] -> indice of batch to which the instance belongs - self.indices = indices - # [N, ...] - self.batch_extra_fields = extra_fields or {} - - self.image_size = self.im_info - - def get_fields(self): - """like `get_fields` in the Instances object, - but return each field in tensor representations""" - ret = {} - for k, v in self.batch_extra_fields.items(): - # if isinstance(v, torch.Tensor): - # tensor_rep = v - # elif isinstance(v, (Boxes, Keypoints)): - # tensor_rep = v.tensor - # else: - # raise ValueError("Can't find tensor representation for: {}".format()) - ret[k] = v - return ret - - def has(self, name): - return name in self.batch_extra_fields - - def set(self, name, value): - # len(tensor) is a bad practice that generates ONNX constants during tracing. - # Although not a problem for the `assert` statement below, torch ONNX exporter - # still raises a misleading warning as it does not this call comes from `assert` - if isinstance(value, Boxes): - data_len = value.tensor.shape[0] - elif isinstance(value, torch.Tensor): - data_len = value.shape[0] - else: - data_len = len(value) - if len(self.batch_extra_fields): - assert ( - len(self) == data_len - ), "Adding a field of length {} to a Instances of length {}".format(data_len, len(self)) - self.batch_extra_fields[name] = value - - def __getattr__(self, name): - if name not in self.batch_extra_fields: - raise AttributeError("Cannot find field '{}' in the given Instances!".format(name)) - return self.batch_extra_fields[name] - - def __len__(self): - return len(self.indices) - - def flatten(self): - ret = [] - for _, v in self.batch_extra_fields.items(): - if isinstance(v, (Boxes, Keypoints)): - ret.append(v.tensor) - else: - ret.append(v) - return ret - - @staticmethod - def to_d2_instances_list(instances_list): - """ - Convert InstancesList to List[Instances]. The input `instances_list` can - also be a List[Instances], in this case this method is a non-op. - """ - if not isinstance(instances_list, InstancesList): - assert all(isinstance(x, Instances) for x in instances_list) - return instances_list - - ret = [] - for i, info in enumerate(instances_list.im_info): - instances = Instances(torch.Size([int(info[0].item()), int(info[1].item())])) - - ids = instances_list.indices == i - for k, v in instances_list.batch_extra_fields.items(): - if isinstance(v, torch.Tensor): - instances.set(k, v[ids]) - continue - elif isinstance(v, Boxes): - instances.set(k, v[ids, -4:]) - continue - - target_type, tensor_source = v - assert isinstance(tensor_source, torch.Tensor) - assert tensor_source.shape[0] == instances_list.indices.shape[0] - tensor_source = tensor_source[ids] - - if issubclass(target_type, Boxes): - instances.set(k, Boxes(tensor_source[:, -4:])) - elif issubclass(target_type, Keypoints): - instances.set(k, Keypoints(tensor_source)) - elif issubclass(target_type, torch.Tensor): - instances.set(k, tensor_source) - else: - raise ValueError("Can't handle targe type: {}".format(target_type)) - - ret.append(instances) - return ret - - -class Caffe2Compatible(object): - """ - A model can inherit this class to indicate that it can be traced and deployed with caffe2. 
- """ - - def _get_tensor_mode(self): - return self._tensor_mode - - def _set_tensor_mode(self, v): - self._tensor_mode = v - - tensor_mode = property(_get_tensor_mode, _set_tensor_mode) - """ - If true, the model expects C2-style tensor only inputs/outputs format. - """ - - -class Caffe2RPN(Caffe2Compatible, rpn.RPN): - @classmethod - def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]): - ret = super(Caffe2Compatible, cls).from_config(cfg, input_shape) - assert tuple(cfg.MODEL.RPN.BBOX_REG_WEIGHTS) == (1.0, 1.0, 1.0, 1.0) or tuple( - cfg.MODEL.RPN.BBOX_REG_WEIGHTS - ) == (1.0, 1.0, 1.0, 1.0, 1.0) - return ret - - def _generate_proposals( - self, images, objectness_logits_pred, anchor_deltas_pred, gt_instances=None - ): - assert isinstance(images, ImageList) - if self.tensor_mode: - im_info = images.image_sizes - else: - im_info = torch.tensor([[im_sz[0], im_sz[1], 1.0] for im_sz in images.image_sizes]).to( - images.tensor.device - ) - assert isinstance(im_info, torch.Tensor) - - rpn_rois_list = [] - rpn_roi_probs_list = [] - for scores, bbox_deltas, cell_anchors_tensor, feat_stride in zip( - objectness_logits_pred, - anchor_deltas_pred, - [b for (n, b) in self.anchor_generator.cell_anchors.named_buffers()], - self.anchor_generator.strides, - ): - scores = scores.detach() - bbox_deltas = bbox_deltas.detach() - - rpn_rois, rpn_roi_probs = torch.ops._caffe2.GenerateProposals( - scores, - bbox_deltas, - im_info, - cell_anchors_tensor, - spatial_scale=1.0 / feat_stride, - pre_nms_topN=self.pre_nms_topk[self.training], - post_nms_topN=self.post_nms_topk[self.training], - nms_thresh=self.nms_thresh, - min_size=self.min_box_size, - # correct_transform_coords=True, # deprecated argument - angle_bound_on=True, # Default - angle_bound_lo=-180, - angle_bound_hi=180, - clip_angle_thresh=1.0, # Default - legacy_plus_one=False, - ) - rpn_rois_list.append(rpn_rois) - rpn_roi_probs_list.append(rpn_roi_probs) - - # For FPN in D2, in RPN all proposals from different levels are concated - # together, ranked and picked by top post_nms_topk. Then in ROIPooler - # it calculates level_assignments and calls the RoIAlign from - # the corresponding level. - - if len(objectness_logits_pred) == 1: - rpn_rois = rpn_rois_list[0] - rpn_roi_probs = rpn_roi_probs_list[0] - else: - assert len(rpn_rois_list) == len(rpn_roi_probs_list) - rpn_post_nms_topN = self.post_nms_topk[self.training] - - device = rpn_rois_list[0].device - input_list = [to_device(x, "cpu") for x in (rpn_rois_list + rpn_roi_probs_list)] - - # TODO remove this after confirming rpn_max_level/rpn_min_level - # is not needed in CollectRpnProposals. - feature_strides = list(self.anchor_generator.strides) - rpn_min_level = int(math.log2(feature_strides[0])) - rpn_max_level = int(math.log2(feature_strides[-1])) - assert (rpn_max_level - rpn_min_level + 1) == len( - rpn_rois_list - ), "CollectRpnProposals requires continuous levels" - - rpn_rois = torch.ops._caffe2.CollectRpnProposals( - input_list, - # NOTE: in current implementation, rpn_max_level and rpn_min_level - # are not needed, only the subtraction of two matters and it - # can be infer from the number of inputs. Keep them now for - # consistency. 
- rpn_max_level=2 + len(rpn_rois_list) - 1, - rpn_min_level=2, - rpn_post_nms_topN=rpn_post_nms_topN, - ) - rpn_rois = to_device(rpn_rois, device) - rpn_roi_probs = [] - - proposals = self.c2_postprocess(im_info, rpn_rois, rpn_roi_probs, self.tensor_mode) - return proposals, {} - - def forward(self, images, features, gt_instances=None): - assert not self.training - features = [features[f] for f in self.in_features] - objectness_logits_pred, anchor_deltas_pred = self.rpn_head(features) - return self._generate_proposals( - images, - objectness_logits_pred, - anchor_deltas_pred, - gt_instances, - ) - - @staticmethod - def c2_postprocess(im_info, rpn_rois, rpn_roi_probs, tensor_mode): - proposals = InstancesList( - im_info=im_info, - indices=rpn_rois[:, 0], - extra_fields={ - "proposal_boxes": Caffe2Boxes(rpn_rois), - "objectness_logits": (torch.Tensor, rpn_roi_probs), - }, - ) - if not tensor_mode: - proposals = InstancesList.to_d2_instances_list(proposals) - else: - proposals = [proposals] - return proposals - - -class Caffe2ROIPooler(Caffe2Compatible, poolers.ROIPooler): - @staticmethod - def c2_preprocess(box_lists): - assert all(isinstance(x, Boxes) for x in box_lists) - if all(isinstance(x, Caffe2Boxes) for x in box_lists): - # input is pure-tensor based - assert len(box_lists) == 1 - pooler_fmt_boxes = box_lists[0].tensor - else: - pooler_fmt_boxes = poolers.convert_boxes_to_pooler_format(box_lists) - return pooler_fmt_boxes - - def forward(self, x, box_lists): - assert not self.training - - pooler_fmt_boxes = self.c2_preprocess(box_lists) - num_level_assignments = len(self.level_poolers) - - if num_level_assignments == 1: - if isinstance(self.level_poolers[0], ROIAlignRotated): - c2_roi_align = torch.ops._caffe2.RoIAlignRotated - aligned = True - else: - c2_roi_align = torch.ops._caffe2.RoIAlign - aligned = self.level_poolers[0].aligned - - x0 = x[0] - if x0.is_quantized: - x0 = x0.dequantize() - - out = c2_roi_align( - x0, - pooler_fmt_boxes, - order="NCHW", - spatial_scale=float(self.level_poolers[0].spatial_scale), - pooled_h=int(self.output_size[0]), - pooled_w=int(self.output_size[1]), - sampling_ratio=int(self.level_poolers[0].sampling_ratio), - aligned=aligned, - ) - return out - - device = pooler_fmt_boxes.device - assert ( - self.max_level - self.min_level + 1 == 4 - ), "Currently DistributeFpnProposals only support 4 levels" - fpn_outputs = torch.ops._caffe2.DistributeFpnProposals( - to_device(pooler_fmt_boxes, "cpu"), - roi_canonical_scale=self.canonical_box_size, - roi_canonical_level=self.canonical_level, - roi_max_level=self.max_level, - roi_min_level=self.min_level, - legacy_plus_one=False, - ) - fpn_outputs = [to_device(x, device) for x in fpn_outputs] - - rois_fpn_list = fpn_outputs[:-1] - rois_idx_restore_int32 = fpn_outputs[-1] - - roi_feat_fpn_list = [] - for roi_fpn, x_level, pooler in zip(rois_fpn_list, x, self.level_poolers): - if isinstance(pooler, ROIAlignRotated): - c2_roi_align = torch.ops._caffe2.RoIAlignRotated - aligned = True - else: - c2_roi_align = torch.ops._caffe2.RoIAlign - aligned = bool(pooler.aligned) - - if x_level.is_quantized: - x_level = x_level.dequantize() - - roi_feat_fpn = c2_roi_align( - x_level, - roi_fpn, - order="NCHW", - spatial_scale=float(pooler.spatial_scale), - pooled_h=int(self.output_size[0]), - pooled_w=int(self.output_size[1]), - sampling_ratio=int(pooler.sampling_ratio), - aligned=aligned, - ) - roi_feat_fpn_list.append(roi_feat_fpn) - - roi_feat_shuffled = cat(roi_feat_fpn_list, dim=0) - assert roi_feat_shuffled.numel() > 0 
and rois_idx_restore_int32.numel() > 0, ( - "Caffe2 export requires tracing with a model checkpoint + input that can produce valid" - " detections. But no detections were obtained with the given checkpoint and input!" - ) - roi_feat = torch.ops._caffe2.BatchPermutation(roi_feat_shuffled, rois_idx_restore_int32) - return roi_feat - - -def caffe2_fast_rcnn_outputs_inference(tensor_mode, box_predictor, predictions, proposals): - """equivalent to FastRCNNOutputLayers.inference""" - num_classes = box_predictor.num_classes - score_thresh = box_predictor.test_score_thresh - nms_thresh = box_predictor.test_nms_thresh - topk_per_image = box_predictor.test_topk_per_image - is_rotated = len(box_predictor.box2box_transform.weights) == 5 - - if is_rotated: - box_dim = 5 - assert box_predictor.box2box_transform.weights[4] == 1, ( - "The weights for Rotated BBoxTransform in C2 have only 4 dimensions," - + " thus enforcing the angle weight to be 1 for now" - ) - box2box_transform_weights = box_predictor.box2box_transform.weights[:4] - else: - box_dim = 4 - box2box_transform_weights = box_predictor.box2box_transform.weights - - class_logits, box_regression = predictions - if num_classes + 1 == class_logits.shape[1]: - class_prob = F.softmax(class_logits, -1) - else: - assert num_classes == class_logits.shape[1] - class_prob = F.sigmoid(class_logits) - # BoxWithNMSLimit will infer num_classes from the shape of the class_prob - # So append a zero column as placeholder for the background class - class_prob = torch.cat((class_prob, torch.zeros(class_prob.shape[0], 1)), dim=1) - - assert box_regression.shape[1] % box_dim == 0 - cls_agnostic_bbox_reg = box_regression.shape[1] // box_dim == 1 - - input_tensor_mode = proposals[0].proposal_boxes.tensor.shape[1] == box_dim + 1 - - proposal_boxes = proposals[0].proposal_boxes - if isinstance(proposal_boxes, Caffe2Boxes): - rois = Caffe2Boxes.cat([p.proposal_boxes for p in proposals]) - elif isinstance(proposal_boxes, RotatedBoxes): - rois = RotatedBoxes.cat([p.proposal_boxes for p in proposals]) - elif isinstance(proposal_boxes, Boxes): - rois = Boxes.cat([p.proposal_boxes for p in proposals]) - else: - raise NotImplementedError( - 'Expected proposals[0].proposal_boxes to be type "Boxes", ' - f"instead got {type(proposal_boxes)}" - ) - - device, dtype = rois.tensor.device, rois.tensor.dtype - if input_tensor_mode: - im_info = proposals[0].image_size - rois = rois.tensor - else: - im_info = torch.tensor([[sz[0], sz[1], 1.0] for sz in [x.image_size for x in proposals]]) - batch_ids = cat( - [ - torch.full((b, 1), i, dtype=dtype, device=device) - for i, b in enumerate(len(p) for p in proposals) - ], - dim=0, - ) - rois = torch.cat([batch_ids, rois.tensor], dim=1) - - roi_pred_bbox, roi_batch_splits = torch.ops._caffe2.BBoxTransform( - to_device(rois, "cpu"), - to_device(box_regression, "cpu"), - to_device(im_info, "cpu"), - weights=box2box_transform_weights, - apply_scale=True, - rotated=is_rotated, - angle_bound_on=True, - angle_bound_lo=-180, - angle_bound_hi=180, - clip_angle_thresh=1.0, - legacy_plus_one=False, - ) - roi_pred_bbox = to_device(roi_pred_bbox, device) - roi_batch_splits = to_device(roi_batch_splits, device) - - nms_outputs = torch.ops._caffe2.BoxWithNMSLimit( - to_device(class_prob, "cpu"), - to_device(roi_pred_bbox, "cpu"), - to_device(roi_batch_splits, "cpu"), - score_thresh=float(score_thresh), - nms=float(nms_thresh), - detections_per_im=int(topk_per_image), - soft_nms_enabled=False, - soft_nms_method="linear", - soft_nms_sigma=0.5, - 
soft_nms_min_score_thres=0.001, - rotated=is_rotated, - cls_agnostic_bbox_reg=cls_agnostic_bbox_reg, - input_boxes_include_bg_cls=False, - output_classes_include_bg_cls=False, - legacy_plus_one=False, - ) - roi_score_nms = to_device(nms_outputs[0], device) - roi_bbox_nms = to_device(nms_outputs[1], device) - roi_class_nms = to_device(nms_outputs[2], device) - roi_batch_splits_nms = to_device(nms_outputs[3], device) - roi_keeps_nms = to_device(nms_outputs[4], device) - roi_keeps_size_nms = to_device(nms_outputs[5], device) - if not tensor_mode: - roi_class_nms = roi_class_nms.to(torch.int64) - - roi_batch_ids = cat( - [ - torch.full((b, 1), i, dtype=dtype, device=device) - for i, b in enumerate(int(x.item()) for x in roi_batch_splits_nms) - ], - dim=0, - ) - - roi_class_nms = alias(roi_class_nms, "class_nms") - roi_score_nms = alias(roi_score_nms, "score_nms") - roi_bbox_nms = alias(roi_bbox_nms, "bbox_nms") - roi_batch_splits_nms = alias(roi_batch_splits_nms, "batch_splits_nms") - roi_keeps_nms = alias(roi_keeps_nms, "keeps_nms") - roi_keeps_size_nms = alias(roi_keeps_size_nms, "keeps_size_nms") - - results = InstancesList( - im_info=im_info, - indices=roi_batch_ids[:, 0], - extra_fields={ - "pred_boxes": Caffe2Boxes(roi_bbox_nms), - "scores": roi_score_nms, - "pred_classes": roi_class_nms, - }, - ) - - if not tensor_mode: - results = InstancesList.to_d2_instances_list(results) - batch_splits = roi_batch_splits_nms.int().tolist() - kept_indices = list(roi_keeps_nms.to(torch.int64).split(batch_splits)) - else: - results = [results] - kept_indices = [roi_keeps_nms] - - return results, kept_indices - - -class Caffe2FastRCNNOutputsInference: - def __init__(self, tensor_mode): - self.tensor_mode = tensor_mode # whether the output is caffe2 tensor mode - - def __call__(self, box_predictor, predictions, proposals): - return caffe2_fast_rcnn_outputs_inference( - self.tensor_mode, box_predictor, predictions, proposals - ) - - -def caffe2_mask_rcnn_inference(pred_mask_logits, pred_instances): - """equivalent to mask_head.mask_rcnn_inference""" - if all(isinstance(x, InstancesList) for x in pred_instances): - assert len(pred_instances) == 1 - mask_probs_pred = pred_mask_logits.sigmoid() - mask_probs_pred = alias(mask_probs_pred, "mask_fcn_probs") - pred_instances[0].set("pred_masks", mask_probs_pred) - else: - mask_rcnn_inference(pred_mask_logits, pred_instances) - - -class Caffe2MaskRCNNInference: - def __call__(self, pred_mask_logits, pred_instances): - return caffe2_mask_rcnn_inference(pred_mask_logits, pred_instances) - - -def caffe2_keypoint_rcnn_inference(use_heatmap_max_keypoint, pred_keypoint_logits, pred_instances): - # just return the keypoint heatmap for now, - # there will be option to call HeatmapMaxKeypointOp - output = alias(pred_keypoint_logits, "kps_score") - if all(isinstance(x, InstancesList) for x in pred_instances): - assert len(pred_instances) == 1 - if use_heatmap_max_keypoint: - device = output.device - output = torch.ops._caffe2.HeatmapMaxKeypoint( - to_device(output, "cpu"), - pred_instances[0].pred_boxes.tensor, - should_output_softmax=True, # worth make it configerable? 
- ) - output = to_device(output, device) - output = alias(output, "keypoints_out") - pred_instances[0].set("pred_keypoints", output) - return pred_keypoint_logits - - -class Caffe2KeypointRCNNInference: - def __init__(self, use_heatmap_max_keypoint): - self.use_heatmap_max_keypoint = use_heatmap_max_keypoint - - def __call__(self, pred_keypoint_logits, pred_instances): - return caffe2_keypoint_rcnn_inference( - self.use_heatmap_max_keypoint, pred_keypoint_logits, pred_instances - ) diff --git a/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/clap/training/lp_train.py b/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/clap/training/lp_train.py deleted file mode 100644 index 24a19bacd0a4b789415cfccbce1f8bc99bc493ed..0000000000000000000000000000000000000000 --- a/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/clap/training/lp_train.py +++ /dev/null @@ -1,301 +0,0 @@ -import json -import logging -import math -import os -import time -from contextlib import suppress - -import numpy as np -import torch -import torch.nn.functional as F - -try: - import wandb -except ImportError: - wandb = None - -from open_clip import LPLoss, LPMetrics, lp_gather_features -from open_clip.utils import do_mixup, get_mix_lambda -from .distributed import is_master -from .zero_shot import zero_shot_eval - - -class AverageMeter(object): - """Computes and stores the average and current value""" - - def __init__(self): - self.reset() - - def reset(self): - self.val = 0 - self.avg = 0 - self.sum = 0 - self.count = 0 - - def update(self, val, n=1): - self.val = val - self.sum += val * n - self.count += n - self.avg = self.sum / self.count - - -def unwrap_model(model): - if hasattr(model, "module"): - return model.module - else: - return model - - -def train_one_epoch( - model, - data, - epoch, - optimizer, - scaler, - scheduler, - args, - tb_writer=None, - extra_suffix="", -): - device = torch.device(args.device) - autocast = torch.cuda.amp.autocast if args.precision == "amp" else suppress - model.train() - loss = LPLoss(args.lp_loss) - - dataloader, sampler = data["train"].dataloader, data["train"].sampler - if args.distributed and sampler is not None: - sampler.set_epoch(epoch) - num_batches_per_epoch = dataloader.num_batches - sample_digits = math.ceil(math.log(dataloader.num_samples + 1, 10)) - - # for toy dataset - if args.dataset_type == "toy": - dataloader.dataset.generate_queue() - - loss_m = AverageMeter() - batch_time_m = AverageMeter() - data_time_m = AverageMeter() - end = time.time() - - for i, batch in enumerate(dataloader): - step = num_batches_per_epoch * epoch + i - - if isinstance(scheduler, dict): - for s in scheduler.values(): - s(step) - else: - scheduler(step) - - audio = batch # contains mel_spec, wavform, and longer list - class_label = batch["class_label"] - # audio = audio.to(device=device, non_blocking=True) - class_label = class_label.to(device=device, non_blocking=True) - - if args.mixup: - # https://github.com/RetroCirce/HTS-Audio-Transformer/blob/main/utils.py#L146 - mix_lambda = torch.from_numpy( - get_mix_lambda(0.5, len(audio["waveform"])) - ).to(device) - class_label = do_mixup(class_label, mix_lambda) - else: - mix_lambda = None - - data_time_m.update(time.time() - end) - if isinstance(optimizer, dict): - for o_ in optimizer.values(): - o_.zero_grad() - else: - optimizer.zero_grad() - - with autocast(): - pred = model(audio, mix_lambda=mix_lambda, device=device) - total_loss = loss(pred, class_label) - - if isinstance(optimizer, dict): - if 
scaler is not None: - scaler.scale(total_loss).backward() - for o_ in optimizer.values(): - if args.horovod: - o_.synchronize() - scaler.unscale_(o_) - with o_.skip_synchronize(): - scaler.step(o_) - else: - scaler.step(o_) - scaler.update() - else: - total_loss.backward() - for o_ in optimizer.values(): - o_.step() - else: - if scaler is not None: - scaler.scale(total_loss).backward() - if args.horovod: - optimizer.synchronize() - scaler.unscale_(optimizer) - with optimizer.skip_synchronize(): - scaler.step(optimizer) - else: - scaler.step(optimizer) - scaler.update() - else: - total_loss.backward() - optimizer.step() - - # Note: we clamp to 4.6052 = ln(100), as in the original paper. - with torch.no_grad(): - unwrap_model(model).clap_model.logit_scale_a.clamp_(0, math.log(100)) - unwrap_model(model).clap_model.logit_scale_t.clamp_(0, math.log(100)) - - batch_time_m.update(time.time() - end) - end = time.time() - batch_count = i + 1 - - if is_master(args) and (i % 100 == 0 or batch_count == num_batches_per_epoch): - if isinstance(audio, dict): - batch_size = len(audio["waveform"]) - else: - batch_size = len(audio) - num_samples = batch_count * batch_size * args.world_size - samples_per_epoch = dataloader.num_samples - percent_complete = 100.0 * batch_count / num_batches_per_epoch - - # NOTE loss is coarsely sampled, just master node and per log update - loss_m.update(total_loss.item(), batch_size) - if isinstance(optimizer, dict): - logging.info( - f"Train Epoch: {epoch} [{num_samples:>{sample_digits}}/{samples_per_epoch} ({percent_complete:.0f}%)] " - f"Loss: {loss_m.val:#.5g} ({loss_m.avg:#.4g}) " - f"Data (t): {data_time_m.avg:.3f} " - f"Batch (t): {batch_time_m.avg:.3f} " - f"LR: {[o_.param_groups[0]['lr'] for o_ in optimizer.values()]}" - ) - log_data = { - "loss": loss_m.val, - "data_time": data_time_m.val, - "batch_time": batch_time_m.val, - "lr": [o_.param_groups[0]["lr"] for o_ in optimizer.values()], - } - else: - logging.info( - f"Train Epoch: {epoch} [{num_samples:>{sample_digits}}/{samples_per_epoch} ({percent_complete:.0f}%)] " - f"Loss: {loss_m.val:#.5g} ({loss_m.avg:#.4g}) " - f"Data (t): {data_time_m.avg:.3f} " - f"Batch (t): {batch_time_m.avg:.3f} " - f"LR: {optimizer.param_groups[0]['lr']:5f} " - ) - - # Save train loss / etc. Using non avg meter values as loggers have their own smoothing - log_data = { - "loss": loss_m.val, - "data_time": data_time_m.val, - "batch_time": batch_time_m.val, - "lr": optimizer.param_groups[0]["lr"], - } - for name, val in log_data.items(): - name = f"train{extra_suffix}/{name}" - if tb_writer is not None: - tb_writer.add_scalar(name, val, step) - if args.wandb: - assert wandb is not None, "Please install wandb." 
- wandb.log({name: val, "step": step}) - - # resetting batch / data time meters per log window - batch_time_m.reset() - data_time_m.reset() - # end for - - -def evaluate(model, data, epoch, args, tb_writer=None, extra_suffix=""): - metrics = {} - if not args.parallel_eval: - if not is_master(args): - return metrics - device = torch.device(args.device) - model.eval() - - # CHANGE - # zero_shot_metrics = zero_shot_eval(model, data, epoch, args) - # metrics.update(zero_shot_metrics) - if is_master(args): - print("Evaluating...") - metric_names = args.lp_metrics.split(",") - eval_tool = LPMetrics(metric_names=metric_names) - - autocast = torch.cuda.amp.autocast if args.precision == "amp" else suppress - if "val" in data and ( - args.val_frequency - and ((epoch % args.val_frequency) == 0 or epoch == args.epochs) - ): - if args.parallel_eval: - dataloader, sampler = data["val"].dataloader, data["val"].sampler - if args.distributed and sampler is not None: - sampler.set_epoch(epoch) - samples_per_val = dataloader.num_samples - else: - dataloader = data["val"].dataloader - num_samples = 0 - samples_per_val = dataloader.num_samples - - eval_info = {"pred": [], "target": []} - with torch.no_grad(): - for i, batch in enumerate(dataloader): - audio = batch # contains mel_spec, wavform, and longer list - class_label = batch["class_label"] - - # audio = audio.to(device=device, non_blocking=True) - class_label = class_label.to(device=device, non_blocking=True) - - with autocast(): - pred = model(audio, device=device) - if args.parallel_eval: - pred, class_label = lp_gather_features( - pred, class_label, args.world_size, args.horovod - ) - eval_info["pred"].append(pred) - eval_info["target"].append(class_label) - - num_samples += class_label.shape[0] - - if (i % 100) == 0: # and i != 0: - logging.info( - f"Eval Epoch: {epoch} [{num_samples} / {samples_per_val}]" - ) - - if is_master(args): - eval_info["pred"] = torch.cat(eval_info["pred"], 0).cpu() - eval_info["target"] = torch.cat(eval_info["target"], 0).cpu() - metric_dict = eval_tool.evaluate_mertics( - eval_info["pred"], eval_info["target"] - ) - metrics.update(metric_dict) - if "epoch" not in metrics.keys(): - metrics.update({"epoch": epoch}) - - if is_master(args): - if not metrics: - return metrics - - logging.info( - f"Eval Epoch: {epoch} " - + "\n".join( - ["\t".join([f"{m}: {round(metrics[m], 4):.4f}"]) for m in metrics] - ) - ) - if args.save_logs: - for name, val in metrics.items(): - if tb_writer is not None: - tb_writer.add_scalar(f"val{extra_suffix}/{name}", val, epoch) - - with open(os.path.join(args.checkpoint_path, "results.jsonl"), "a+") as f: - f.write(json.dumps(metrics)) - f.write("\n") - - if args.wandb: - assert wandb is not None, "Please install wandb." 
- for name, val in metrics.items(): - wandb.log({f"val{extra_suffix}/{name}": val, "epoch": epoch}) - - return metrics - else: - return metrics diff --git a/spaces/ccolas/TastyPiano/src/music/utilities/handcoded_rep_utilities/__init__.py b/spaces/ccolas/TastyPiano/src/music/utilities/handcoded_rep_utilities/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/chendl/compositional_test/multimodal/open_flamingo/eval/dataset_zoo/utils.py b/spaces/chendl/compositional_test/multimodal/open_flamingo/eval/dataset_zoo/utils.py deleted file mode 100644 index 2e2ef979e68cc50959a6681b028c40005f79f724..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/open_flamingo/eval/dataset_zoo/utils.py +++ /dev/null @@ -1,15 +0,0 @@ -class AverageMeter(object): - def __init__(self): - self.reset() - - def reset(self): - self.val = 0 - self.avg = 0 - self.sum = 0 - self.count = 0 - - def update(self, val, n=1): - self.val = val - self.sum += val * n - self.count += n - self.avg = self.sum / self.count diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/rag-end2end-retriever/kb_encode_utils.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/rag-end2end-retriever/kb_encode_utils.py deleted file mode 100644 index 444c07b2bab16a66731b312693611b252d7ad310..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/rag-end2end-retriever/kb_encode_utils.py +++ /dev/null @@ -1,80 +0,0 @@ -import os -from functools import partial -from glob import glob - -import faiss -from datasets import Features, Sequence, Value, concatenate_datasets, load_dataset, load_from_disk - -from transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast - - -def split_text(text, n=100, character=" "): - """Split the text every ``n``-th occurrence of ``character``""" - text = text.split(character) - return [character.join(text[i : i + n]).strip() for i in range(0, len(text), n)] - - -def split_documents(documents): - """Split documents into passages""" - titles, texts = [], [] - for title, text in zip(documents["title"], documents["text"]): - if text is not None: - for passage in split_text(text): - titles.append(title if title is not None else "") - texts.append(passage) - return {"title": titles, "text": texts} - - -def embed_update(ctx_encoder, total_processes, device, process_num, shard_dir, csv_path): - kb_dataset = load_dataset( - "csv", data_files=[csv_path], split="train", delimiter="\t", column_names=["title", "text"] - ) - kb_dataset = kb_dataset.map( - split_documents, batched=True, num_proc=1 - ) # if you want you can load already splitted csv. 
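# (Editorial illustration, not part of the original file.) split_text, defined above, cuts a
# passage at every n-th occurrence of `character` (a space by default); with a toy n=3:
#
#     split_text("the quick brown fox jumps over dogs", n=3)
#     == ["the quick brown", "fox jumps over", "dogs"]
#
# split_documents applies it per (title, text) row with the default n=100, repeating the
# title once for every passage produced, which is what the .map(split_documents, ...) call
# above relies on.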
- kb_list = [kb_dataset.shard(total_processes, i, contiguous=True) for i in range(total_processes)] - data_shrad = kb_list[process_num] - - arrow_folder = "data_" + str(process_num) - passages_path = os.path.join(shard_dir, arrow_folder) - - context_tokenizer = DPRContextEncoderTokenizerFast.from_pretrained("facebook/dpr-ctx_encoder-multiset-base") - ctx_encoder = ctx_encoder.to(device=device) - - def embed( - documents: dict, ctx_encoder: DPRContextEncoder, ctx_tokenizer: DPRContextEncoderTokenizerFast, device - ) -> dict: - """Compute the DPR embeddings of document passages""" - input_ids = ctx_tokenizer( - documents["title"], documents["text"], truncation=True, padding="longest", return_tensors="pt" - )["input_ids"] - embeddings = ctx_encoder(input_ids.to(device=device), return_dict=True).pooler_output - return {"embeddings": embeddings.detach().cpu().numpy()} - - new_features = Features( - {"text": Value("string"), "title": Value("string"), "embeddings": Sequence(Value("float32"))} - ) # optional, save as float32 instead of float64 to save space - - dataset = data_shrad.map( - partial(embed, ctx_encoder=ctx_encoder, ctx_tokenizer=context_tokenizer, device=device), - batched=True, - batch_size=16, - features=new_features, - ) - dataset.save_to_disk(passages_path) - - -def add_index(shard_dir, index_path): - data_shard_list = [] - - for shard_address in glob(str(shard_dir) + "/*/"): - data_shard_list.append(load_from_disk(shard_address)) - - concat = concatenate_datasets(data_shard_list) - faiss.omp_set_num_threads(96) - - index = faiss.IndexHNSWFlat(768, 128, faiss.METRIC_INNER_PRODUCT) - concat.add_faiss_index("embeddings", custom_index=index) - concat.get_index("embeddings").save( - index_path - ) # since we load the index in to memory,we can directly update the index in the disk diff --git a/spaces/chendl/compositional_test/transformers/src/transformers/models/albert/modeling_tf_albert.py b/spaces/chendl/compositional_test/transformers/src/transformers/models/albert/modeling_tf_albert.py deleted file mode 100644 index 247ee395dc60fe23ee3b6acd96c8da70805fbd50..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/src/transformers/models/albert/modeling_tf_albert.py +++ /dev/null @@ -1,1487 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The OpenAI Team Authors and HuggingFace Inc. team. -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
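Looking back at the `add_index` helper from kb_encode_utils.py above: it wraps the passage embeddings in a FAISS HNSW index with inner-product scoring. A minimal, self-contained sketch of the same index type is shown below; the dimensionality (768) and HNSW parameter (128) come from `add_index`, while the random vectors and the top-5 query are stand-ins for the real DPR embeddings, used purely for illustration.

import numpy as np
import faiss

d = 768                                                # DPR context-encoder embedding size, as in add_index
index = faiss.IndexHNSWFlat(d, 128, faiss.METRIC_INNER_PRODUCT)

passages = np.random.rand(1000, d).astype("float32")   # stand-in for the encoded passages
index.add(passages)

query = np.random.rand(1, d).astype("float32")         # stand-in for an encoded question
scores, ids = index.search(query, 5)                   # top-5 passages by inner product
print(ids[0], scores[0])

In the actual pipeline, `concat.add_faiss_index("embeddings", custom_index=index)` attaches such an index to the "embeddings" column instead of adding raw numpy arrays directly.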
-""" TF 2.0 ALBERT model.""" - -import math -from dataclasses import dataclass -from typing import Dict, Optional, Tuple, Union - -import numpy as np -import tensorflow as tf - -from ...activations_tf import get_tf_activation -from ...modeling_tf_outputs import ( - TFBaseModelOutput, - TFBaseModelOutputWithPooling, - TFMaskedLMOutput, - TFMultipleChoiceModelOutput, - TFQuestionAnsweringModelOutput, - TFSequenceClassifierOutput, - TFTokenClassifierOutput, -) -from ...modeling_tf_utils import ( - TFMaskedLanguageModelingLoss, - TFModelInputType, - TFMultipleChoiceLoss, - TFPreTrainedModel, - TFQuestionAnsweringLoss, - TFSequenceClassificationLoss, - TFTokenClassificationLoss, - get_initializer, - keras_serializable, - unpack_inputs, -) -from ...tf_utils import shape_list, stable_softmax -from ...utils import ( - MULTIPLE_CHOICE_DUMMY_INPUTS, - ModelOutput, - add_code_sample_docstrings, - add_start_docstrings, - add_start_docstrings_to_model_forward, - logging, - replace_return_docstrings, -) -from .configuration_albert import AlbertConfig - - -logger = logging.get_logger(__name__) - -_CHECKPOINT_FOR_DOC = "albert-base-v2" -_CONFIG_FOR_DOC = "AlbertConfig" - -TF_ALBERT_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "albert-base-v1", - "albert-large-v1", - "albert-xlarge-v1", - "albert-xxlarge-v1", - "albert-base-v2", - "albert-large-v2", - "albert-xlarge-v2", - "albert-xxlarge-v2", - # See all ALBERT models at https://huggingface.co/models?filter=albert -] - - -class TFAlbertPreTrainingLoss: - """ - Loss function suitable for ALBERT pretraining, that is, the task of pretraining a language model by combining SOP + - MLM. .. note:: Any label of -100 will be ignored (along with the corresponding logits) in the loss computation. - """ - - def hf_compute_loss(self, labels: tf.Tensor, logits: tf.Tensor) -> tf.Tensor: - loss_fn = tf.keras.losses.SparseCategoricalCrossentropy( - from_logits=True, reduction=tf.keras.losses.Reduction.NONE - ) - if self.config.tf_legacy_loss: - # make sure only labels that are not equal to -100 - # are taken into account as loss - masked_lm_active_loss = tf.not_equal(tf.reshape(tensor=labels["labels"], shape=(-1,)), -100) - masked_lm_reduced_logits = tf.boolean_mask( - tensor=tf.reshape(tensor=logits[0], shape=(-1, shape_list(logits[0])[2])), - mask=masked_lm_active_loss, - ) - masked_lm_labels = tf.boolean_mask( - tensor=tf.reshape(tensor=labels["labels"], shape=(-1,)), mask=masked_lm_active_loss - ) - sentence_order_active_loss = tf.not_equal( - tf.reshape(tensor=labels["sentence_order_label"], shape=(-1,)), -100 - ) - sentence_order_reduced_logits = tf.boolean_mask( - tensor=tf.reshape(tensor=logits[1], shape=(-1, 2)), mask=sentence_order_active_loss - ) - sentence_order_label = tf.boolean_mask( - tensor=tf.reshape(tensor=labels["sentence_order_label"], shape=(-1,)), mask=sentence_order_active_loss - ) - masked_lm_loss = loss_fn(y_true=masked_lm_labels, y_pred=masked_lm_reduced_logits) - sentence_order_loss = loss_fn(y_true=sentence_order_label, y_pred=sentence_order_reduced_logits) - masked_lm_loss = tf.reshape(tensor=masked_lm_loss, shape=(-1, shape_list(sentence_order_loss)[0])) - masked_lm_loss = tf.reduce_mean(input_tensor=masked_lm_loss, axis=0) - - return masked_lm_loss + sentence_order_loss - - # Clip negative labels to zero here to avoid NaNs and errors - those positions will get masked later anyway - unmasked_lm_losses = loss_fn(y_true=tf.nn.relu(labels["labels"]), y_pred=logits[0]) - # make sure only labels that are not equal to -100 - # are taken into account for 
the loss computation - lm_loss_mask = tf.cast(labels["labels"] != -100, dtype=unmasked_lm_losses.dtype) - masked_lm_losses = unmasked_lm_losses * lm_loss_mask - reduced_masked_lm_loss = tf.reduce_sum(masked_lm_losses) / tf.reduce_sum(lm_loss_mask) - - sop_logits = tf.reshape(logits[1], (-1, 2)) - # Clip negative labels to zero here to avoid NaNs and errors - those positions will get masked later anyway - unmasked_sop_loss = loss_fn(y_true=tf.nn.relu(labels["sentence_order_label"]), y_pred=sop_logits) - sop_loss_mask = tf.cast(labels["sentence_order_label"] != -100, dtype=unmasked_sop_loss.dtype) - - masked_sop_loss = unmasked_sop_loss * sop_loss_mask - reduced_masked_sop_loss = tf.reduce_sum(masked_sop_loss) / tf.reduce_sum(sop_loss_mask) - - return tf.reshape(reduced_masked_lm_loss + reduced_masked_sop_loss, (1,)) - - -class TFAlbertEmbeddings(tf.keras.layers.Layer): - """Construct the embeddings from word, position and token_type embeddings.""" - - def __init__(self, config: AlbertConfig, **kwargs): - super().__init__(**kwargs) - - self.config = config - self.embedding_size = config.embedding_size - self.max_position_embeddings = config.max_position_embeddings - self.initializer_range = config.initializer_range - self.LayerNorm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="LayerNorm") - self.dropout = tf.keras.layers.Dropout(rate=config.hidden_dropout_prob) - - def build(self, input_shape: tf.TensorShape): - with tf.name_scope("word_embeddings"): - self.weight = self.add_weight( - name="weight", - shape=[self.config.vocab_size, self.embedding_size], - initializer=get_initializer(self.initializer_range), - ) - - with tf.name_scope("token_type_embeddings"): - self.token_type_embeddings = self.add_weight( - name="embeddings", - shape=[self.config.type_vocab_size, self.embedding_size], - initializer=get_initializer(self.initializer_range), - ) - - with tf.name_scope("position_embeddings"): - self.position_embeddings = self.add_weight( - name="embeddings", - shape=[self.max_position_embeddings, self.embedding_size], - initializer=get_initializer(self.initializer_range), - ) - - super().build(input_shape) - - # Copied from transformers.models.bert.modeling_tf_bert.TFBertEmbeddings.call - def call( - self, - input_ids: tf.Tensor = None, - position_ids: tf.Tensor = None, - token_type_ids: tf.Tensor = None, - inputs_embeds: tf.Tensor = None, - past_key_values_length=0, - training: bool = False, - ) -> tf.Tensor: - """ - Applies embedding based on inputs tensor. - - Returns: - final_embeddings (`tf.Tensor`): output embedding tensor. - """ - if input_ids is None and inputs_embeds is None: - raise ValueError("Need to provide either `input_ids` or `input_embeds`.") - - if input_ids is not None: - # Note: tf.gather, on which the embedding layer is based, won't check positive out of bound - # indices on GPU, returning zeros instead. This is a dangerous silent behavior. 
- tf.debugging.assert_less( - input_ids, - tf.cast(self.config.vocab_size, dtype=input_ids.dtype), - message=( - "input_ids must be smaller than the embedding layer's input dimension (got" - f" {tf.math.reduce_max(input_ids)} >= {self.config.vocab_size})" - ), - ) - inputs_embeds = tf.gather(params=self.weight, indices=input_ids) - - input_shape = shape_list(inputs_embeds)[:-1] - - if token_type_ids is None: - token_type_ids = tf.fill(dims=input_shape, value=0) - - if position_ids is None: - position_ids = tf.expand_dims( - tf.range(start=past_key_values_length, limit=input_shape[1] + past_key_values_length), axis=0 - ) - - position_embeds = tf.gather(params=self.position_embeddings, indices=position_ids) - token_type_embeds = tf.gather(params=self.token_type_embeddings, indices=token_type_ids) - final_embeddings = inputs_embeds + position_embeds + token_type_embeds - final_embeddings = self.LayerNorm(inputs=final_embeddings) - final_embeddings = self.dropout(inputs=final_embeddings, training=training) - - return final_embeddings - - -class TFAlbertAttention(tf.keras.layers.Layer): - """Contains the complete attention sublayer, including both dropouts and layer norm.""" - - def __init__(self, config: AlbertConfig, **kwargs): - super().__init__(**kwargs) - - if config.hidden_size % config.num_attention_heads != 0: - raise ValueError( - f"The hidden size ({config.hidden_size}) is not a multiple of the number " - f"of attention heads ({config.num_attention_heads})" - ) - - self.num_attention_heads = config.num_attention_heads - self.attention_head_size = int(config.hidden_size / config.num_attention_heads) - self.all_head_size = self.num_attention_heads * self.attention_head_size - self.sqrt_att_head_size = math.sqrt(self.attention_head_size) - self.output_attentions = config.output_attentions - - self.query = tf.keras.layers.Dense( - units=self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="query" - ) - self.key = tf.keras.layers.Dense( - units=self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="key" - ) - self.value = tf.keras.layers.Dense( - units=self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="value" - ) - self.dense = tf.keras.layers.Dense( - units=config.hidden_size, kernel_initializer=get_initializer(config.initializer_range), name="dense" - ) - self.LayerNorm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="LayerNorm") - # Two different dropout probabilities; see https://github.com/google-research/albert/blob/master/modeling.py#L971-L993 - self.attention_dropout = tf.keras.layers.Dropout(rate=config.attention_probs_dropout_prob) - self.output_dropout = tf.keras.layers.Dropout(rate=config.hidden_dropout_prob) - - def transpose_for_scores(self, tensor: tf.Tensor, batch_size: int) -> tf.Tensor: - # Reshape from [batch_size, seq_length, all_head_size] to [batch_size, seq_length, num_attention_heads, attention_head_size] - tensor = tf.reshape(tensor=tensor, shape=(batch_size, -1, self.num_attention_heads, self.attention_head_size)) - - # Transpose the tensor from [batch_size, seq_length, num_attention_heads, attention_head_size] to [batch_size, num_attention_heads, seq_length, attention_head_size] - return tf.transpose(tensor, perm=[0, 2, 1, 3]) - - def call( - self, - input_tensor: tf.Tensor, - attention_mask: tf.Tensor, - head_mask: tf.Tensor, - output_attentions: bool, - training: bool = False, - ) -> Tuple[tf.Tensor]: - batch_size = 
shape_list(input_tensor)[0] - mixed_query_layer = self.query(inputs=input_tensor) - mixed_key_layer = self.key(inputs=input_tensor) - mixed_value_layer = self.value(inputs=input_tensor) - query_layer = self.transpose_for_scores(mixed_query_layer, batch_size) - key_layer = self.transpose_for_scores(mixed_key_layer, batch_size) - value_layer = self.transpose_for_scores(mixed_value_layer, batch_size) - - # Take the dot product between "query" and "key" to get the raw attention scores. - # (batch size, num_heads, seq_len_q, seq_len_k) - attention_scores = tf.matmul(query_layer, key_layer, transpose_b=True) - dk = tf.cast(self.sqrt_att_head_size, dtype=attention_scores.dtype) - attention_scores = tf.divide(attention_scores, dk) - - if attention_mask is not None: - # Apply the attention mask is (precomputed for all layers in TFAlbertModel call() function) - attention_scores = tf.add(attention_scores, attention_mask) - - # Normalize the attention scores to probabilities. - attention_probs = stable_softmax(logits=attention_scores, axis=-1) - - # This is actually dropping out entire tokens to attend to, which might - # seem a bit unusual, but is taken from the original Transformer paper. - attention_probs = self.attention_dropout(inputs=attention_probs, training=training) - - # Mask heads if we want to - if head_mask is not None: - attention_probs = tf.multiply(attention_probs, head_mask) - - context_layer = tf.matmul(attention_probs, value_layer) - context_layer = tf.transpose(context_layer, perm=[0, 2, 1, 3]) - - # (batch_size, seq_len_q, all_head_size) - context_layer = tf.reshape(tensor=context_layer, shape=(batch_size, -1, self.all_head_size)) - self_outputs = (context_layer, attention_probs) if output_attentions else (context_layer,) - hidden_states = self_outputs[0] - hidden_states = self.dense(inputs=hidden_states) - hidden_states = self.output_dropout(inputs=hidden_states, training=training) - attention_output = self.LayerNorm(inputs=hidden_states + input_tensor) - - # add attentions if we output them - outputs = (attention_output,) + self_outputs[1:] - - return outputs - - -class TFAlbertLayer(tf.keras.layers.Layer): - def __init__(self, config: AlbertConfig, **kwargs): - super().__init__(**kwargs) - - self.attention = TFAlbertAttention(config, name="attention") - self.ffn = tf.keras.layers.Dense( - units=config.intermediate_size, kernel_initializer=get_initializer(config.initializer_range), name="ffn" - ) - - if isinstance(config.hidden_act, str): - self.activation = get_tf_activation(config.hidden_act) - else: - self.activation = config.hidden_act - - self.ffn_output = tf.keras.layers.Dense( - units=config.hidden_size, kernel_initializer=get_initializer(config.initializer_range), name="ffn_output" - ) - self.full_layer_layer_norm = tf.keras.layers.LayerNormalization( - epsilon=config.layer_norm_eps, name="full_layer_layer_norm" - ) - self.dropout = tf.keras.layers.Dropout(rate=config.hidden_dropout_prob) - - def call( - self, - hidden_states: tf.Tensor, - attention_mask: tf.Tensor, - head_mask: tf.Tensor, - output_attentions: bool, - training: bool = False, - ) -> Tuple[tf.Tensor]: - attention_outputs = self.attention( - input_tensor=hidden_states, - attention_mask=attention_mask, - head_mask=head_mask, - output_attentions=output_attentions, - training=training, - ) - ffn_output = self.ffn(inputs=attention_outputs[0]) - ffn_output = self.activation(ffn_output) - ffn_output = self.ffn_output(inputs=ffn_output) - ffn_output = self.dropout(inputs=ffn_output, training=training) - 
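# Residual connection: the feed-forward output is added back onto the attention output,
# then normalized by full_layer_layer_norm (post-layer-norm), mirroring the residual +
# LayerNorm used inside the attention sublayer above.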
hidden_states = self.full_layer_layer_norm(inputs=ffn_output + attention_outputs[0]) - - # add attentions if we output them - outputs = (hidden_states,) + attention_outputs[1:] - - return outputs - - -class TFAlbertLayerGroup(tf.keras.layers.Layer): - def __init__(self, config: AlbertConfig, **kwargs): - super().__init__(**kwargs) - - self.albert_layers = [ - TFAlbertLayer(config, name=f"albert_layers_._{i}") for i in range(config.inner_group_num) - ] - - def call( - self, - hidden_states: tf.Tensor, - attention_mask: tf.Tensor, - head_mask: tf.Tensor, - output_attentions: bool, - output_hidden_states: bool, - training: bool = False, - ) -> Union[TFBaseModelOutput, Tuple[tf.Tensor]]: - layer_hidden_states = () if output_hidden_states else None - layer_attentions = () if output_attentions else None - - for layer_index, albert_layer in enumerate(self.albert_layers): - if output_hidden_states: - layer_hidden_states = layer_hidden_states + (hidden_states,) - - layer_output = albert_layer( - hidden_states=hidden_states, - attention_mask=attention_mask, - head_mask=head_mask[layer_index], - output_attentions=output_attentions, - training=training, - ) - hidden_states = layer_output[0] - - if output_attentions: - layer_attentions = layer_attentions + (layer_output[1],) - - # Add last layer - if output_hidden_states: - layer_hidden_states = layer_hidden_states + (hidden_states,) - - return tuple(v for v in [hidden_states, layer_hidden_states, layer_attentions] if v is not None) - - -class TFAlbertTransformer(tf.keras.layers.Layer): - def __init__(self, config: AlbertConfig, **kwargs): - super().__init__(**kwargs) - - self.num_hidden_layers = config.num_hidden_layers - self.num_hidden_groups = config.num_hidden_groups - # Number of layers in a hidden group - self.layers_per_group = int(config.num_hidden_layers / config.num_hidden_groups) - self.embedding_hidden_mapping_in = tf.keras.layers.Dense( - units=config.hidden_size, - kernel_initializer=get_initializer(config.initializer_range), - name="embedding_hidden_mapping_in", - ) - self.albert_layer_groups = [ - TFAlbertLayerGroup(config, name=f"albert_layer_groups_._{i}") for i in range(config.num_hidden_groups) - ] - - def call( - self, - hidden_states: tf.Tensor, - attention_mask: tf.Tensor, - head_mask: tf.Tensor, - output_attentions: bool, - output_hidden_states: bool, - return_dict: bool, - training: bool = False, - ) -> Union[TFBaseModelOutput, Tuple[tf.Tensor]]: - hidden_states = self.embedding_hidden_mapping_in(inputs=hidden_states) - all_attentions = () if output_attentions else None - all_hidden_states = (hidden_states,) if output_hidden_states else None - - for i in range(self.num_hidden_layers): - # Index of the hidden group - group_idx = int(i / (self.num_hidden_layers / self.num_hidden_groups)) - layer_group_output = self.albert_layer_groups[group_idx]( - hidden_states=hidden_states, - attention_mask=attention_mask, - head_mask=head_mask[group_idx * self.layers_per_group : (group_idx + 1) * self.layers_per_group], - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - training=training, - ) - hidden_states = layer_group_output[0] - - if output_attentions: - all_attentions = all_attentions + layer_group_output[-1] - - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple(v for v in [hidden_states, all_hidden_states, all_attentions] if v is not None) - - return TFBaseModelOutput( - last_hidden_state=hidden_states, 
hidden_states=all_hidden_states, attentions=all_attentions - ) - - -class TFAlbertPreTrainedModel(TFPreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = AlbertConfig - base_model_prefix = "albert" - - -class TFAlbertMLMHead(tf.keras.layers.Layer): - def __init__(self, config: AlbertConfig, input_embeddings: tf.keras.layers.Layer, **kwargs): - super().__init__(**kwargs) - - self.config = config - self.embedding_size = config.embedding_size - self.dense = tf.keras.layers.Dense( - config.embedding_size, kernel_initializer=get_initializer(config.initializer_range), name="dense" - ) - if isinstance(config.hidden_act, str): - self.activation = get_tf_activation(config.hidden_act) - else: - self.activation = config.hidden_act - - self.LayerNorm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="LayerNorm") - - # The output weights are the same as the input embeddings, but there is - # an output-only bias for each token. - self.decoder = input_embeddings - - def build(self, input_shape: tf.TensorShape): - self.bias = self.add_weight(shape=(self.config.vocab_size,), initializer="zeros", trainable=True, name="bias") - self.decoder_bias = self.add_weight( - shape=(self.config.vocab_size,), initializer="zeros", trainable=True, name="decoder/bias" - ) - - super().build(input_shape) - - def get_output_embeddings(self) -> tf.keras.layers.Layer: - return self.decoder - - def set_output_embeddings(self, value: tf.Variable): - self.decoder.weight = value - self.decoder.vocab_size = shape_list(value)[0] - - def get_bias(self) -> Dict[str, tf.Variable]: - return {"bias": self.bias, "decoder_bias": self.decoder_bias} - - def set_bias(self, value: tf.Variable): - self.bias = value["bias"] - self.decoder_bias = value["decoder_bias"] - self.config.vocab_size = shape_list(value["bias"])[0] - - def call(self, hidden_states: tf.Tensor) -> tf.Tensor: - hidden_states = self.dense(inputs=hidden_states) - hidden_states = self.activation(hidden_states) - hidden_states = self.LayerNorm(inputs=hidden_states) - seq_length = shape_list(tensor=hidden_states)[1] - hidden_states = tf.reshape(tensor=hidden_states, shape=[-1, self.embedding_size]) - hidden_states = tf.matmul(a=hidden_states, b=self.decoder.weight, transpose_b=True) - hidden_states = tf.reshape(tensor=hidden_states, shape=[-1, seq_length, self.config.vocab_size]) - hidden_states = tf.nn.bias_add(value=hidden_states, bias=self.decoder_bias) - - return hidden_states - - -@keras_serializable -class TFAlbertMainLayer(tf.keras.layers.Layer): - config_class = AlbertConfig - - def __init__(self, config: AlbertConfig, add_pooling_layer: bool = True, **kwargs): - super().__init__(**kwargs) - - self.config = config - - self.embeddings = TFAlbertEmbeddings(config, name="embeddings") - self.encoder = TFAlbertTransformer(config, name="encoder") - self.pooler = ( - tf.keras.layers.Dense( - units=config.hidden_size, - kernel_initializer=get_initializer(config.initializer_range), - activation="tanh", - name="pooler", - ) - if add_pooling_layer - else None - ) - - def get_input_embeddings(self) -> tf.keras.layers.Layer: - return self.embeddings - - def set_input_embeddings(self, value: tf.Variable): - self.embeddings.weight = value - self.embeddings.vocab_size = shape_list(value)[0] - - def _prune_heads(self, heads_to_prune): - """ - Prunes heads of the model. 
heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base - class PreTrainedModel - """ - raise NotImplementedError - - @unpack_inputs - def call( - self, - input_ids: Optional[TFModelInputType] = None, - attention_mask: Optional[Union[np.ndarray, tf.Tensor]] = None, - token_type_ids: Optional[Union[np.ndarray, tf.Tensor]] = None, - position_ids: Optional[Union[np.ndarray, tf.Tensor]] = None, - head_mask: Optional[Union[np.ndarray, tf.Tensor]] = None, - inputs_embeds: Optional[Union[np.ndarray, tf.Tensor]] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: bool = False, - ) -> Union[TFBaseModelOutputWithPooling, Tuple[tf.Tensor]]: - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - input_shape = shape_list(input_ids) - elif inputs_embeds is not None: - input_shape = shape_list(inputs_embeds)[:-1] - else: - raise ValueError("You have to specify either input_ids or inputs_embeds") - - if attention_mask is None: - attention_mask = tf.fill(dims=input_shape, value=1) - - if token_type_ids is None: - token_type_ids = tf.fill(dims=input_shape, value=0) - - embedding_output = self.embeddings( - input_ids=input_ids, - position_ids=position_ids, - token_type_ids=token_type_ids, - inputs_embeds=inputs_embeds, - training=training, - ) - - # We create a 3D attention mask from a 2D tensor mask. - # Sizes are [batch_size, 1, 1, to_seq_length] - # So we can broadcast to [batch_size, num_heads, from_seq_length, to_seq_length] - # this attention mask is more simple than the triangular masking of causal attention - # used in OpenAI GPT, we just need to prepare the broadcast dimension here. - extended_attention_mask = tf.reshape(attention_mask, (input_shape[0], 1, 1, input_shape[1])) - - # Since attention_mask is 1.0 for positions we want to attend and 0.0 for - # masked positions, this operation will create a tensor which is 0.0 for - # positions we want to attend and -10000.0 for masked positions. - # Since we are adding it to the raw scores before the softmax, this is - # effectively the same as removing these entirely. 
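# A quick numeric illustration of the additive mask built below (editorial note, not part
# of the original file): with attention_mask = [1., 1., 0.] for a sequence whose last
# position is padding,
#
#     (1.0 - attention_mask) * -10000.0  ->  [0.0, 0.0, -10000.0]
#
# so once this is added to the raw attention scores, the softmax assigns the padded
# position a probability of essentially zero.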
- extended_attention_mask = tf.cast(extended_attention_mask, dtype=embedding_output.dtype) - one_cst = tf.constant(1.0, dtype=embedding_output.dtype) - ten_thousand_cst = tf.constant(-10000.0, dtype=embedding_output.dtype) - extended_attention_mask = tf.multiply(tf.subtract(one_cst, extended_attention_mask), ten_thousand_cst) - - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - if head_mask is not None: - raise NotImplementedError - else: - head_mask = [None] * self.config.num_hidden_layers - - encoder_outputs = self.encoder( - hidden_states=embedding_output, - attention_mask=extended_attention_mask, - head_mask=head_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - - sequence_output = encoder_outputs[0] - pooled_output = self.pooler(inputs=sequence_output[:, 0]) if self.pooler is not None else None - - if not return_dict: - return ( - sequence_output, - pooled_output, - ) + encoder_outputs[1:] - - return TFBaseModelOutputWithPooling( - last_hidden_state=sequence_output, - pooler_output=pooled_output, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - ) - - -@dataclass -class TFAlbertForPreTrainingOutput(ModelOutput): - """ - Output type of [`TFAlbertForPreTraining`]. - - Args: - prediction_logits (`tf.Tensor` of shape `(batch_size, sequence_length, config.vocab_size)`): - Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). - sop_logits (`tf.Tensor` of shape `(batch_size, 2)`): - Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation - before SoftMax). - hidden_states (`tuple(tf.Tensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(tf.Tensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `tf.Tensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - """ - - loss: tf.Tensor = None - prediction_logits: tf.Tensor = None - sop_logits: tf.Tensor = None - hidden_states: Optional[Tuple[tf.Tensor]] = None - attentions: Optional[Tuple[tf.Tensor]] = None - - -ALBERT_START_DOCSTRING = r""" - - This model inherits from [`TFPreTrainedModel`]. Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads - etc.) - - This model is also a [tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) subclass. Use it - as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and - behavior. 
- - - - TensorFlow models and layers in `transformers` accept two formats as input: - - - having all inputs as keyword arguments (like PyTorch models), or - - having all inputs as a list, tuple or dict in the first positional argument. - - The reason the second format is supported is that Keras methods prefer this format when passing inputs to models - and layers. Because of this support, when using methods like `model.fit()` things should "just work" for you - just - pass your inputs and labels in any format that `model.fit()` supports! If, however, you want to use the second - format outside of Keras methods like `fit()` and `predict()`, such as when creating your own layers or models with - the Keras `Functional` API, there are three possibilities you can use to gather all the input Tensors in the first - positional argument: - - - a single Tensor with `input_ids` only and nothing else: `model(input_ids)` - - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: - `model([input_ids, attention_mask])` or `model([input_ids, attention_mask, token_type_ids])` - - a dictionary with one or several input Tensors associated to the input names given in the docstring: - `model({"input_ids": input_ids, "token_type_ids": token_type_ids})` - - Note that when creating models and layers with - [subclassing](https://keras.io/guides/making_new_layers_and_models_via_subclassing/) then you don't need to worry - about any of this, as you can just pass inputs like you would to any other Python function! - - - - Args: - config ([`AlbertConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -ALBERT_INPUTS_DOCSTRING = r""" - Args: - input_ids (`Numpy array` or `tf.Tensor` of shape `({0})`): - Indices of input sequence tokens in the vocabulary. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.__call__`] and - [`PreTrainedTokenizer.encode`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`Numpy array` or `tf.Tensor` of shape `({0})`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - token_type_ids (`Numpy array` or `tf.Tensor` of shape `({0})`, *optional*): - Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, - 1]`: - - - 0 corresponds to a *sentence A* token, - - 1 corresponds to a *sentence B* token. - - [What are token type IDs?](../glossary#token-type-ids) - position_ids (`Numpy array` or `tf.Tensor` of shape `({0})`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.max_position_embeddings - 1]`. - - [What are position IDs?](../glossary#position-ids) - head_mask (`Numpy array` or `tf.Tensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
- - inputs_embeds (`tf.Tensor` of shape `({0}, hidden_size)`, *optional*): - Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This - is useful if you want more control over how to convert `input_ids` indices into associated vectors than the - model's internal embedding lookup matrix. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the - config will be used instead. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. This argument can be used only in eager mode, in graph mode the value in the config will be - used instead. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. This argument can be used in - eager mode, in graph mode the value will always be set to True. - training (`bool`, *optional*, defaults to `False`): - Whether or not to use the model in training mode (some modules like dropout modules have different - behaviors between training and evaluation). -""" - - -@add_start_docstrings( - "The bare Albert Model transformer outputting raw hidden-states without any specific head on top.", - ALBERT_START_DOCSTRING, -) -class TFAlbertModel(TFAlbertPreTrainedModel): - def __init__(self, config: AlbertConfig, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - - self.albert = TFAlbertMainLayer(config, name="albert") - - @unpack_inputs - @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=TFBaseModelOutputWithPooling, - config_class=_CONFIG_FOR_DOC, - ) - def call( - self, - input_ids: Optional[TFModelInputType] = None, - attention_mask: Optional[Union[np.ndarray, tf.Tensor]] = None, - token_type_ids: Optional[Union[np.ndarray, tf.Tensor]] = None, - position_ids: Optional[Union[np.ndarray, tf.Tensor]] = None, - head_mask: Optional[Union[np.ndarray, tf.Tensor]] = None, - inputs_embeds: Optional[Union[np.ndarray, tf.Tensor]] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - training: Optional[bool] = False, - ) -> Union[TFBaseModelOutputWithPooling, Tuple[tf.Tensor]]: - outputs = self.albert( - input_ids=input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - - return outputs - - def serving_output(self, output: TFBaseModelOutputWithPooling) -> TFBaseModelOutputWithPooling: - hs = tf.convert_to_tensor(output.hidden_states) if self.config.output_hidden_states else None - attns = tf.convert_to_tensor(output.attentions) if self.config.output_attentions else None - - return TFBaseModelOutputWithPooling( - last_hidden_state=output.last_hidden_state, - pooler_output=output.pooler_output, - hidden_states=hs, - attentions=attns, - ) - - -@add_start_docstrings( - """ - Albert Model with two heads on top for pretraining: a `masked language modeling` head and a `sentence order - prediction` (classification) 
head. - """, - ALBERT_START_DOCSTRING, -) -class TFAlbertForPreTraining(TFAlbertPreTrainedModel, TFAlbertPreTrainingLoss): - # names with a '.' represents the authorized unexpected/missing layers when a TF model is loaded from a PT model - _keys_to_ignore_on_load_unexpected = [r"predictions.decoder.weight"] - - def __init__(self, config: AlbertConfig, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - - self.num_labels = config.num_labels - - self.albert = TFAlbertMainLayer(config, name="albert") - self.predictions = TFAlbertMLMHead(config, input_embeddings=self.albert.embeddings, name="predictions") - self.sop_classifier = TFAlbertSOPHead(config, name="sop_classifier") - - def get_lm_head(self) -> tf.keras.layers.Layer: - return self.predictions - - @unpack_inputs - @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @replace_return_docstrings(output_type=TFAlbertForPreTrainingOutput, config_class=_CONFIG_FOR_DOC) - def call( - self, - input_ids: Optional[TFModelInputType] = None, - attention_mask: Optional[Union[np.ndarray, tf.Tensor]] = None, - token_type_ids: Optional[Union[np.ndarray, tf.Tensor]] = None, - position_ids: Optional[Union[np.ndarray, tf.Tensor]] = None, - head_mask: Optional[Union[np.ndarray, tf.Tensor]] = None, - inputs_embeds: Optional[Union[np.ndarray, tf.Tensor]] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - labels: Optional[Union[np.ndarray, tf.Tensor]] = None, - sentence_order_label: Optional[Union[np.ndarray, tf.Tensor]] = None, - training: Optional[bool] = False, - ) -> Union[TFAlbertForPreTrainingOutput, Tuple[tf.Tensor]]: - r""" - Return: - - Example: - - ```python - >>> import tensorflow as tf - >>> from transformers import AutoTokenizer, TFAlbertForPreTraining - - >>> tokenizer = AutoTokenizer.from_pretrained("albert-base-v2") - >>> model = TFAlbertForPreTraining.from_pretrained("albert-base-v2") - - >>> input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True))[None, :] - >>> # Batch size 1 - >>> outputs = model(input_ids) - - >>> prediction_logits = outputs.prediction_logits - >>> sop_logits = outputs.sop_logits - ```""" - - outputs = self.albert( - input_ids=input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - sequence_output, pooled_output = outputs[:2] - prediction_scores = self.predictions(hidden_states=sequence_output) - sop_scores = self.sop_classifier(pooled_output=pooled_output, training=training) - total_loss = None - - if labels is not None and sentence_order_label is not None: - d_labels = {"labels": labels} - d_labels["sentence_order_label"] = sentence_order_label - total_loss = self.hf_compute_loss(labels=d_labels, logits=(prediction_scores, sop_scores)) - - if not return_dict: - output = (prediction_scores, sop_scores) + outputs[2:] - return ((total_loss,) + output) if total_loss is not None else output - - return TFAlbertForPreTrainingOutput( - loss=total_loss, - prediction_logits=prediction_scores, - sop_logits=sop_scores, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - def serving_output(self, output: TFAlbertForPreTrainingOutput) -> TFAlbertForPreTrainingOutput: - hs = 
tf.convert_to_tensor(output.hidden_states) if self.config.output_hidden_states else None - attns = tf.convert_to_tensor(output.attentions) if self.config.output_attentions else None - - return TFAlbertForPreTrainingOutput( - prediction_logits=output.prediction_logits, - sop_logits=output.sop_logits, - hidden_states=hs, - attentions=attns, - ) - - -class TFAlbertSOPHead(tf.keras.layers.Layer): - def __init__(self, config: AlbertConfig, **kwargs): - super().__init__(**kwargs) - - self.dropout = tf.keras.layers.Dropout(rate=config.classifier_dropout_prob) - self.classifier = tf.keras.layers.Dense( - units=config.num_labels, - kernel_initializer=get_initializer(config.initializer_range), - name="classifier", - ) - - def call(self, pooled_output: tf.Tensor, training: bool) -> tf.Tensor: - dropout_pooled_output = self.dropout(inputs=pooled_output, training=training) - logits = self.classifier(inputs=dropout_pooled_output) - - return logits - - -@add_start_docstrings("""Albert Model with a `language modeling` head on top.""", ALBERT_START_DOCSTRING) -class TFAlbertForMaskedLM(TFAlbertPreTrainedModel, TFMaskedLanguageModelingLoss): - # names with a '.' represents the authorized unexpected/missing layers when a TF model is loaded from a PT model - _keys_to_ignore_on_load_unexpected = [r"pooler", r"predictions.decoder.weight"] - - def __init__(self, config: AlbertConfig, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - - self.albert = TFAlbertMainLayer(config, add_pooling_layer=False, name="albert") - self.predictions = TFAlbertMLMHead(config, input_embeddings=self.albert.embeddings, name="predictions") - - def get_lm_head(self) -> tf.keras.layers.Layer: - return self.predictions - - @unpack_inputs - @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @replace_return_docstrings(output_type=TFMaskedLMOutput, config_class=_CONFIG_FOR_DOC) - def call( - self, - input_ids: Optional[TFModelInputType] = None, - attention_mask: Optional[Union[np.ndarray, tf.Tensor]] = None, - token_type_ids: Optional[Union[np.ndarray, tf.Tensor]] = None, - position_ids: Optional[Union[np.ndarray, tf.Tensor]] = None, - head_mask: Optional[Union[np.ndarray, tf.Tensor]] = None, - inputs_embeds: Optional[Union[np.ndarray, tf.Tensor]] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - labels: Optional[Union[np.ndarray, tf.Tensor]] = None, - training: Optional[bool] = False, - ) -> Union[TFMaskedLMOutput, Tuple[tf.Tensor]]: - r""" - labels (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the masked language modeling loss. 
Indices should be in `[-100, 0, ..., - config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the - loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]` - - Returns: - - Example: - - ```python - >>> import tensorflow as tf - >>> from transformers import AutoTokenizer, TFAlbertForMaskedLM - - >>> tokenizer = AutoTokenizer.from_pretrained("albert-base-v2") - >>> model = TFAlbertForMaskedLM.from_pretrained("albert-base-v2") - - >>> # add mask_token - >>> inputs = tokenizer(f"The capital of [MASK] is Paris.", return_tensors="tf") - >>> logits = model(**inputs).logits - - >>> # retrieve index of [MASK] - >>> mask_token_index = tf.where(inputs.input_ids == tokenizer.mask_token_id)[0][1] - >>> predicted_token_id = tf.math.argmax(logits[0, mask_token_index], axis=-1) - >>> tokenizer.decode(predicted_token_id) - 'france' - ``` - - ```python - >>> labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"] - >>> labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) - >>> outputs = model(**inputs, labels=labels) - >>> round(float(outputs.loss), 2) - 0.81 - ``` - """ - outputs = self.albert( - input_ids=input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - sequence_output = outputs[0] - prediction_scores = self.predictions(hidden_states=sequence_output, training=training) - loss = None if labels is None else self.hf_compute_loss(labels=labels, logits=prediction_scores) - - if not return_dict: - output = (prediction_scores,) + outputs[2:] - - return ((loss,) + output) if loss is not None else output - - return TFMaskedLMOutput( - loss=loss, - logits=prediction_scores, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - # Copied from transformers.models.bert.modeling_tf_bert.TFBertForMaskedLM.serving_output - def serving_output(self, output: TFMaskedLMOutput) -> TFMaskedLMOutput: - hs = tf.convert_to_tensor(output.hidden_states) if self.config.output_hidden_states else None - attns = tf.convert_to_tensor(output.attentions) if self.config.output_attentions else None - - return TFMaskedLMOutput(logits=output.logits, hidden_states=hs, attentions=attns) - - -@add_start_docstrings( - """ - Albert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled - output) e.g. for GLUE tasks. - """, - ALBERT_START_DOCSTRING, -) -class TFAlbertForSequenceClassification(TFAlbertPreTrainedModel, TFSequenceClassificationLoss): - # names with a '.' 
represents the authorized unexpected/missing layers when a TF model is loaded from a PT model - _keys_to_ignore_on_load_unexpected = [r"predictions"] - _keys_to_ignore_on_load_missing = [r"dropout"] - - def __init__(self, config: AlbertConfig, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - - self.num_labels = config.num_labels - - self.albert = TFAlbertMainLayer(config, name="albert") - self.dropout = tf.keras.layers.Dropout(rate=config.classifier_dropout_prob) - self.classifier = tf.keras.layers.Dense( - units=config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="classifier" - ) - - @unpack_inputs - @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint="vumichien/albert-base-v2-imdb", - output_type=TFSequenceClassifierOutput, - config_class=_CONFIG_FOR_DOC, - expected_output="'LABEL_1'", - expected_loss=0.12, - ) - def call( - self, - input_ids: Optional[TFModelInputType] = None, - attention_mask: Optional[Union[np.ndarray, tf.Tensor]] = None, - token_type_ids: Optional[Union[np.ndarray, tf.Tensor]] = None, - position_ids: Optional[Union[np.ndarray, tf.Tensor]] = None, - head_mask: Optional[Union[np.ndarray, tf.Tensor]] = None, - inputs_embeds: Optional[Union[np.ndarray, tf.Tensor]] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - labels: Optional[Union[np.ndarray, tf.Tensor]] = None, - training: Optional[bool] = False, - ) -> Union[TFSequenceClassifierOutput, Tuple[tf.Tensor]]: - r""" - labels (`tf.Tensor` of shape `(batch_size,)`, *optional*): - Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If - `config.num_labels > 1` a classification loss is computed (Cross-Entropy). - """ - outputs = self.albert( - input_ids=input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - pooled_output = outputs[1] - pooled_output = self.dropout(inputs=pooled_output, training=training) - logits = self.classifier(inputs=pooled_output) - loss = None if labels is None else self.hf_compute_loss(labels=labels, logits=logits) - - if not return_dict: - output = (logits,) + outputs[2:] - - return ((loss,) + output) if loss is not None else output - - return TFSequenceClassifierOutput( - loss=loss, - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - # Copied from transformers.models.bert.modeling_tf_bert.TFBertForSequenceClassification.serving_output - def serving_output(self, output: TFSequenceClassifierOutput) -> TFSequenceClassifierOutput: - hs = tf.convert_to_tensor(output.hidden_states) if self.config.output_hidden_states else None - attns = tf.convert_to_tensor(output.attentions) if self.config.output_attentions else None - - return TFSequenceClassifierOutput(logits=output.logits, hidden_states=hs, attentions=attns) - - -@add_start_docstrings( - """ - Albert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for - Named-Entity-Recognition (NER) tasks. 
- """, - ALBERT_START_DOCSTRING, -) -class TFAlbertForTokenClassification(TFAlbertPreTrainedModel, TFTokenClassificationLoss): - # names with a '.' represents the authorized unexpected/missing layers when a TF model is loaded from a PT model - _keys_to_ignore_on_load_unexpected = [r"pooler", r"predictions"] - _keys_to_ignore_on_load_missing = [r"dropout"] - - def __init__(self, config: AlbertConfig, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - - self.num_labels = config.num_labels - - self.albert = TFAlbertMainLayer(config, add_pooling_layer=False, name="albert") - classifier_dropout_prob = ( - config.classifier_dropout_prob - if config.classifier_dropout_prob is not None - else config.hidden_dropout_prob - ) - self.dropout = tf.keras.layers.Dropout(rate=classifier_dropout_prob) - self.classifier = tf.keras.layers.Dense( - units=config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="classifier" - ) - - @unpack_inputs - @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=TFTokenClassifierOutput, - config_class=_CONFIG_FOR_DOC, - ) - def call( - self, - input_ids: Optional[TFModelInputType] = None, - attention_mask: Optional[Union[np.ndarray, tf.Tensor]] = None, - token_type_ids: Optional[Union[np.ndarray, tf.Tensor]] = None, - position_ids: Optional[Union[np.ndarray, tf.Tensor]] = None, - head_mask: Optional[Union[np.ndarray, tf.Tensor]] = None, - inputs_embeds: Optional[Union[np.ndarray, tf.Tensor]] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - labels: Optional[Union[np.ndarray, tf.Tensor]] = None, - training: Optional[bool] = False, - ) -> Union[TFTokenClassifierOutput, Tuple[tf.Tensor]]: - r""" - labels (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`. 
- """ - outputs = self.albert( - input_ids=input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - sequence_output = outputs[0] - sequence_output = self.dropout(inputs=sequence_output, training=training) - logits = self.classifier(inputs=sequence_output) - loss = None if labels is None else self.hf_compute_loss(labels=labels, logits=logits) - - if not return_dict: - output = (logits,) + outputs[2:] - - return ((loss,) + output) if loss is not None else output - - return TFTokenClassifierOutput( - loss=loss, - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - # Copied from transformers.models.bert.modeling_tf_bert.TFBertForTokenClassification.serving_output - def serving_output(self, output: TFTokenClassifierOutput) -> TFTokenClassifierOutput: - hs = tf.convert_to_tensor(output.hidden_states) if self.config.output_hidden_states else None - attns = tf.convert_to_tensor(output.attentions) if self.config.output_attentions else None - - return TFTokenClassifierOutput(logits=output.logits, hidden_states=hs, attentions=attns) - - -@add_start_docstrings( - """ - Albert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear - layer on top of the hidden-states output to compute `span start logits` and `span end logits`). - """, - ALBERT_START_DOCSTRING, -) -class TFAlbertForQuestionAnswering(TFAlbertPreTrainedModel, TFQuestionAnsweringLoss): - # names with a '.' represents the authorized unexpected/missing layers when a TF model is loaded from a PT model - _keys_to_ignore_on_load_unexpected = [r"pooler", r"predictions"] - - def __init__(self, config: AlbertConfig, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - - self.num_labels = config.num_labels - - self.albert = TFAlbertMainLayer(config, add_pooling_layer=False, name="albert") - self.qa_outputs = tf.keras.layers.Dense( - units=config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="qa_outputs" - ) - - @unpack_inputs - @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - checkpoint="vumichien/albert-base-v2-squad2", - output_type=TFQuestionAnsweringModelOutput, - config_class=_CONFIG_FOR_DOC, - qa_target_start_index=12, - qa_target_end_index=13, - expected_output="'a nice puppet'", - expected_loss=7.36, - ) - def call( - self, - input_ids: Optional[TFModelInputType] = None, - attention_mask: Optional[Union[np.ndarray, tf.Tensor]] = None, - token_type_ids: Optional[Union[np.ndarray, tf.Tensor]] = None, - position_ids: Optional[Union[np.ndarray, tf.Tensor]] = None, - head_mask: Optional[Union[np.ndarray, tf.Tensor]] = None, - inputs_embeds: Optional[Union[np.ndarray, tf.Tensor]] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - start_positions: Optional[Union[np.ndarray, tf.Tensor]] = None, - end_positions: Optional[Union[np.ndarray, tf.Tensor]] = None, - training: Optional[bool] = False, - ) -> Union[TFQuestionAnsweringModelOutput, Tuple[tf.Tensor]]: - r""" - start_positions (`tf.Tensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the start of the labelled span for 
computing the token classification loss. - Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence - are not taken into account for computing the loss. - end_positions (`tf.Tensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the end of the labelled span for computing the token classification loss. - Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence - are not taken into account for computing the loss. - """ - outputs = self.albert( - input_ids=input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - sequence_output = outputs[0] - logits = self.qa_outputs(inputs=sequence_output) - start_logits, end_logits = tf.split(value=logits, num_or_size_splits=2, axis=-1) - start_logits = tf.squeeze(input=start_logits, axis=-1) - end_logits = tf.squeeze(input=end_logits, axis=-1) - loss = None - - if start_positions is not None and end_positions is not None: - labels = {"start_position": start_positions} - labels["end_position"] = end_positions - loss = self.hf_compute_loss(labels=labels, logits=(start_logits, end_logits)) - - if not return_dict: - output = (start_logits, end_logits) + outputs[2:] - - return ((loss,) + output) if loss is not None else output - - return TFQuestionAnsweringModelOutput( - loss=loss, - start_logits=start_logits, - end_logits=end_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - # Copied from transformers.models.bert.modeling_tf_bert.TFBertForQuestionAnswering.serving_output - def serving_output(self, output: TFQuestionAnsweringModelOutput) -> TFQuestionAnsweringModelOutput: - hs = tf.convert_to_tensor(output.hidden_states) if self.config.output_hidden_states else None - attns = tf.convert_to_tensor(output.attentions) if self.config.output_attentions else None - - return TFQuestionAnsweringModelOutput( - start_logits=output.start_logits, end_logits=output.end_logits, hidden_states=hs, attentions=attns - ) - - -@add_start_docstrings( - """ - Albert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a - softmax) e.g. for RocStories/SWAG tasks. - """, - ALBERT_START_DOCSTRING, -) -class TFAlbertForMultipleChoice(TFAlbertPreTrainedModel, TFMultipleChoiceLoss): - # names with a '.' represents the authorized unexpected/missing layers when a TF model is loaded from a PT model - _keys_to_ignore_on_load_unexpected = [r"pooler", r"predictions"] - _keys_to_ignore_on_load_missing = [r"dropout"] - - def __init__(self, config: AlbertConfig, *inputs, **kwargs): - super().__init__(config, *inputs, **kwargs) - - self.albert = TFAlbertMainLayer(config, name="albert") - self.dropout = tf.keras.layers.Dropout(rate=config.hidden_dropout_prob) - self.classifier = tf.keras.layers.Dense( - units=1, kernel_initializer=get_initializer(config.initializer_range), name="classifier" - ) - - @property - def dummy_inputs(self): - """ - Dummy inputs to build the network. 
- - Returns: - tf.Tensor with dummy inputs - """ - return {"input_ids": tf.constant(MULTIPLE_CHOICE_DUMMY_INPUTS, dtype=tf.int32)} - - @unpack_inputs - @add_start_docstrings_to_model_forward(ALBERT_INPUTS_DOCSTRING.format("batch_size, num_choices, sequence_length")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=TFMultipleChoiceModelOutput, - config_class=_CONFIG_FOR_DOC, - ) - def call( - self, - input_ids: Optional[TFModelInputType] = None, - attention_mask: Optional[Union[np.ndarray, tf.Tensor]] = None, - token_type_ids: Optional[Union[np.ndarray, tf.Tensor]] = None, - position_ids: Optional[Union[np.ndarray, tf.Tensor]] = None, - head_mask: Optional[Union[np.ndarray, tf.Tensor]] = None, - inputs_embeds: Optional[Union[np.ndarray, tf.Tensor]] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - labels: Optional[Union[np.ndarray, tf.Tensor]] = None, - training: Optional[bool] = False, - ) -> Union[TFMultipleChoiceModelOutput, Tuple[tf.Tensor]]: - r""" - labels (`tf.Tensor` of shape `(batch_size,)`, *optional*): - Labels for computing the multiple choice classification loss. Indices should be in `[0, ..., num_choices]` - where `num_choices` is the size of the second dimension of the input tensors. (See `input_ids` above) - """ - - if input_ids is not None: - num_choices = shape_list(input_ids)[1] - seq_length = shape_list(input_ids)[2] - else: - num_choices = shape_list(inputs_embeds)[1] - seq_length = shape_list(inputs_embeds)[2] - - flat_input_ids = tf.reshape(input_ids, (-1, seq_length)) if input_ids is not None else None - flat_attention_mask = ( - tf.reshape(tensor=attention_mask, shape=(-1, seq_length)) if attention_mask is not None else None - ) - flat_token_type_ids = ( - tf.reshape(tensor=token_type_ids, shape=(-1, seq_length)) if token_type_ids is not None else None - ) - flat_position_ids = ( - tf.reshape(tensor=position_ids, shape=(-1, seq_length)) if position_ids is not None else None - ) - flat_inputs_embeds = ( - tf.reshape(tensor=inputs_embeds, shape=(-1, seq_length, shape_list(inputs_embeds)[3])) - if inputs_embeds is not None - else None - ) - outputs = self.albert( - input_ids=flat_input_ids, - attention_mask=flat_attention_mask, - token_type_ids=flat_token_type_ids, - position_ids=flat_position_ids, - head_mask=head_mask, - inputs_embeds=flat_inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - training=training, - ) - pooled_output = outputs[1] - pooled_output = self.dropout(inputs=pooled_output, training=training) - logits = self.classifier(inputs=pooled_output) - reshaped_logits = tf.reshape(tensor=logits, shape=(-1, num_choices)) - loss = None if labels is None else self.hf_compute_loss(labels=labels, logits=reshaped_logits) - - if not return_dict: - output = (reshaped_logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return TFMultipleChoiceModelOutput( - loss=loss, - logits=reshaped_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - @tf.function( - input_signature=[ - { - "input_ids": tf.TensorSpec((None, None, None), tf.int32, name="input_ids"), - "attention_mask": tf.TensorSpec((None, None, None), tf.int32, name="attention_mask"), - "token_type_ids": tf.TensorSpec((None, None, None), tf.int32, name="token_type_ids"), - } - ] - ) - # Copied from 
transformers.models.bert.modeling_tf_bert.TFBertForMultipleChoice.serving - def serving(self, inputs: Dict[str, tf.Tensor]) -> TFMultipleChoiceModelOutput: - output = self.call(input_ids=inputs) - - return self.serving_output(output) - - # Copied from transformers.models.bert.modeling_tf_bert.TFBertForMultipleChoice.serving_output - def serving_output(self, output: TFMultipleChoiceModelOutput) -> TFMultipleChoiceModelOutput: - hs = tf.convert_to_tensor(output.hidden_states) if self.config.output_hidden_states else None - attns = tf.convert_to_tensor(output.attentions) if self.config.output_attentions else None - - return TFMultipleChoiceModelOutput(logits=output.logits, hidden_states=hs, attentions=attns) diff --git a/spaces/chinhon/headline_writer/README.md b/spaces/chinhon/headline_writer/README.md deleted file mode 100644 index d8db38e909e0271ffcd04f023573e5d21ee0d0a1..0000000000000000000000000000000000000000 --- a/spaces/chinhon/headline_writer/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Headline_writer -emoji: 👁 -colorFrom: pink -colorTo: purple -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/chongjie/ZoeDepth_slim/utils.py b/spaces/chongjie/ZoeDepth_slim/utils.py deleted file mode 100644 index eb5e76e1022b7e94da6f234abd6c8846d7bd67eb..0000000000000000000000000000000000000000 --- a/spaces/chongjie/ZoeDepth_slim/utils.py +++ /dev/null @@ -1,85 +0,0 @@ -# MIT License - -# Copyright (c) 2022 Intelligent Systems Lab Org - -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: - -# The above copyright notice and this permission notice shall be included in all -# copies or substantial portions of the Software. - -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. 
- -# File author: Shariq Farooq Bhat - -import matplotlib -import matplotlib.cm -import numpy as np -import torch - -def colorize(value, vmin=None, vmax=None, cmap='magma_r', invalid_val=-99, invalid_mask=None, background_color=(128, 128, 128, 255), gamma_corrected=False, value_transform=None): - """Converts a depth map to a color image. - - Args: - value (torch.Tensor, numpy.ndarry): Input depth map. Shape: (H, W) or (1, H, W) or (1, 1, H, W). All singular dimensions are squeezed - vmin (float, optional): vmin-valued entries are mapped to start color of cmap. If None, value.min() is used. Defaults to None. - vmax (float, optional): vmax-valued entries are mapped to end color of cmap. If None, value.max() is used. Defaults to None. - cmap (str, optional): matplotlib colormap to use. Defaults to 'magma_r'. - invalid_val (int, optional): Specifies value of invalid pixels that should be colored as 'background_color'. Defaults to -99. - invalid_mask (numpy.ndarray, optional): Boolean mask for invalid regions. Defaults to None. - background_color (tuple[int], optional): 4-tuple RGB color to give to invalid pixels. Defaults to (128, 128, 128, 255). - gamma_corrected (bool, optional): Apply gamma correction to colored image. Defaults to False. - value_transform (Callable, optional): Apply transform function to valid pixels before coloring. Defaults to None. - - Returns: - numpy.ndarray, dtype - uint8: Colored depth map. Shape: (H, W, 4) - """ - if isinstance(value, torch.Tensor): - value = value.detach().cpu().numpy() - - value = value.squeeze() - if invalid_mask is None: - invalid_mask = value == invalid_val - mask = np.logical_not(invalid_mask) - - # normalize - vmin = np.percentile(value[mask],2) if vmin is None else vmin - vmax = np.percentile(value[mask],85) if vmax is None else vmax - if vmin != vmax: - value = (value - vmin) / (vmax - vmin) # vmin..vmax - else: - # Avoid 0-division - value = value * 0. - - # squeeze last dim if it exists - # grey out the invalid values - - value[invalid_mask] = np.nan - cmapper = matplotlib.cm.get_cmap(cmap) - if value_transform: - value = value_transform(value) - # value = value / value.max() - value = cmapper(value, bytes=True) # (nxmx4) - - # img = value[:, :, :] - img = value[...] - img[invalid_mask] = background_color - - # return img.transpose((2, 0, 1)) - if gamma_corrected: - # gamma correction - img = img / 255 - img = np.power(img, 2.2) - img = img * 255 - img = img.astype(np.uint8) - return img diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chardet/enums.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chardet/enums.py deleted file mode 100644 index 5e3e198233698f2b007489dd299cecb87d971067..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chardet/enums.py +++ /dev/null @@ -1,85 +0,0 @@ -""" -All of the Enums that are used throughout the chardet package. - -:author: Dan Blanchard (dan.blanchard@gmail.com) -""" - -from enum import Enum, Flag - - -class InputState: - """ - This enum represents the different states a universal detector can be in. - """ - - PURE_ASCII = 0 - ESC_ASCII = 1 - HIGH_BYTE = 2 - - -class LanguageFilter(Flag): - """ - This enum represents the different language filters we can apply to a - ``UniversalDetector``. 
- """ - - NONE = 0x00 - CHINESE_SIMPLIFIED = 0x01 - CHINESE_TRADITIONAL = 0x02 - JAPANESE = 0x04 - KOREAN = 0x08 - NON_CJK = 0x10 - ALL = 0x1F - CHINESE = CHINESE_SIMPLIFIED | CHINESE_TRADITIONAL - CJK = CHINESE | JAPANESE | KOREAN - - -class ProbingState(Enum): - """ - This enum represents the different states a prober can be in. - """ - - DETECTING = 0 - FOUND_IT = 1 - NOT_ME = 2 - - -class MachineState: - """ - This enum represents the different states a state machine can be in. - """ - - START = 0 - ERROR = 1 - ITS_ME = 2 - - -class SequenceLikelihood: - """ - This enum represents the likelihood of a character following the previous one. - """ - - NEGATIVE = 0 - UNLIKELY = 1 - LIKELY = 2 - POSITIVE = 3 - - @classmethod - def get_num_categories(cls) -> int: - """:returns: The number of likelihood categories in the enum.""" - return 4 - - -class CharacterCategory: - """ - This enum represents the different categories language models for - ``SingleByteCharsetProber`` put characters into. - - Anything less than CONTROL is considered a letter. - """ - - UNDEFINED = 255 - LINE_BREAK = 254 - SYMBOL = 253 - DIGIT = 252 - CONTROL = 251 diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dateutil/rrule.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dateutil/rrule.py deleted file mode 100644 index b3203393c61203c9c6f12db7a857aee89be85e5c..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dateutil/rrule.py +++ /dev/null @@ -1,1737 +0,0 @@ -# -*- coding: utf-8 -*- -""" -The rrule module offers a small, complete, and very fast, implementation of -the recurrence rules documented in the -`iCalendar RFC `_, -including support for caching of results. -""" -import calendar -import datetime -import heapq -import itertools -import re -import sys -from functools import wraps -# For warning about deprecation of until and count -from warnings import warn - -from six import advance_iterator, integer_types - -from six.moves import _thread, range - -from ._common import weekday as weekdaybase - -try: - from math import gcd -except ImportError: - from fractions import gcd - -__all__ = ["rrule", "rruleset", "rrulestr", - "YEARLY", "MONTHLY", "WEEKLY", "DAILY", - "HOURLY", "MINUTELY", "SECONDLY", - "MO", "TU", "WE", "TH", "FR", "SA", "SU"] - -# Every mask is 7 days longer to handle cross-year weekly periods. -M366MASK = tuple([1]*31+[2]*29+[3]*31+[4]*30+[5]*31+[6]*30 + - [7]*31+[8]*31+[9]*30+[10]*31+[11]*30+[12]*31+[1]*7) -M365MASK = list(M366MASK) -M29, M30, M31 = list(range(1, 30)), list(range(1, 31)), list(range(1, 32)) -MDAY366MASK = tuple(M31+M29+M31+M30+M31+M30+M31+M31+M30+M31+M30+M31+M31[:7]) -MDAY365MASK = list(MDAY366MASK) -M29, M30, M31 = list(range(-29, 0)), list(range(-30, 0)), list(range(-31, 0)) -NMDAY366MASK = tuple(M31+M29+M31+M30+M31+M30+M31+M31+M30+M31+M30+M31+M31[:7]) -NMDAY365MASK = list(NMDAY366MASK) -M366RANGE = (0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366) -M365RANGE = (0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 365) -WDAYMASK = [0, 1, 2, 3, 4, 5, 6]*55 -del M29, M30, M31, M365MASK[59], MDAY365MASK[59], NMDAY365MASK[31] -MDAY365MASK = tuple(MDAY365MASK) -M365MASK = tuple(M365MASK) - -FREQNAMES = ['YEARLY', 'MONTHLY', 'WEEKLY', 'DAILY', 'HOURLY', 'MINUTELY', 'SECONDLY'] - -(YEARLY, - MONTHLY, - WEEKLY, - DAILY, - HOURLY, - MINUTELY, - SECONDLY) = list(range(7)) - -# Imported on demand. 
-easter = None -parser = None - - -class weekday(weekdaybase): - """ - This version of weekday does not allow n = 0. - """ - def __init__(self, wkday, n=None): - if n == 0: - raise ValueError("Can't create weekday with n==0") - - super(weekday, self).__init__(wkday, n) - - -MO, TU, WE, TH, FR, SA, SU = weekdays = tuple(weekday(x) for x in range(7)) - - -def _invalidates_cache(f): - """ - Decorator for rruleset methods which may invalidate the - cached length. - """ - @wraps(f) - def inner_func(self, *args, **kwargs): - rv = f(self, *args, **kwargs) - self._invalidate_cache() - return rv - - return inner_func - - -class rrulebase(object): - def __init__(self, cache=False): - if cache: - self._cache = [] - self._cache_lock = _thread.allocate_lock() - self._invalidate_cache() - else: - self._cache = None - self._cache_complete = False - self._len = None - - def __iter__(self): - if self._cache_complete: - return iter(self._cache) - elif self._cache is None: - return self._iter() - else: - return self._iter_cached() - - def _invalidate_cache(self): - if self._cache is not None: - self._cache = [] - self._cache_complete = False - self._cache_gen = self._iter() - - if self._cache_lock.locked(): - self._cache_lock.release() - - self._len = None - - def _iter_cached(self): - i = 0 - gen = self._cache_gen - cache = self._cache - acquire = self._cache_lock.acquire - release = self._cache_lock.release - while gen: - if i == len(cache): - acquire() - if self._cache_complete: - break - try: - for j in range(10): - cache.append(advance_iterator(gen)) - except StopIteration: - self._cache_gen = gen = None - self._cache_complete = True - break - release() - yield cache[i] - i += 1 - while i < self._len: - yield cache[i] - i += 1 - - def __getitem__(self, item): - if self._cache_complete: - return self._cache[item] - elif isinstance(item, slice): - if item.step and item.step < 0: - return list(iter(self))[item] - else: - return list(itertools.islice(self, - item.start or 0, - item.stop or sys.maxsize, - item.step or 1)) - elif item >= 0: - gen = iter(self) - try: - for i in range(item+1): - res = advance_iterator(gen) - except StopIteration: - raise IndexError - return res - else: - return list(iter(self))[item] - - def __contains__(self, item): - if self._cache_complete: - return item in self._cache - else: - for i in self: - if i == item: - return True - elif i > item: - return False - return False - - # __len__() introduces a large performance penalty. - def count(self): - """ Returns the number of recurrences in this set. It will have go - trough the whole recurrence, if this hasn't been done before. """ - if self._len is None: - for x in self: - pass - return self._len - - def before(self, dt, inc=False): - """ Returns the last recurrence before the given datetime instance. The - inc keyword defines what happens if dt is an occurrence. With - inc=True, if dt itself is an occurrence, it will be returned. """ - if self._cache_complete: - gen = self._cache - else: - gen = self - last = None - if inc: - for i in gen: - if i > dt: - break - last = i - else: - for i in gen: - if i >= dt: - break - last = i - return last - - def after(self, dt, inc=False): - """ Returns the first recurrence after the given datetime instance. The - inc keyword defines what happens if dt is an occurrence. With - inc=True, if dt itself is an occurrence, it will be returned. 
""" - if self._cache_complete: - gen = self._cache - else: - gen = self - if inc: - for i in gen: - if i >= dt: - return i - else: - for i in gen: - if i > dt: - return i - return None - - def xafter(self, dt, count=None, inc=False): - """ - Generator which yields up to `count` recurrences after the given - datetime instance, equivalent to `after`. - - :param dt: - The datetime at which to start generating recurrences. - - :param count: - The maximum number of recurrences to generate. If `None` (default), - dates are generated until the recurrence rule is exhausted. - - :param inc: - If `dt` is an instance of the rule and `inc` is `True`, it is - included in the output. - - :yields: Yields a sequence of `datetime` objects. - """ - - if self._cache_complete: - gen = self._cache - else: - gen = self - - # Select the comparison function - if inc: - comp = lambda dc, dtc: dc >= dtc - else: - comp = lambda dc, dtc: dc > dtc - - # Generate dates - n = 0 - for d in gen: - if comp(d, dt): - if count is not None: - n += 1 - if n > count: - break - - yield d - - def between(self, after, before, inc=False, count=1): - """ Returns all the occurrences of the rrule between after and before. - The inc keyword defines what happens if after and/or before are - themselves occurrences. With inc=True, they will be included in the - list, if they are found in the recurrence set. """ - if self._cache_complete: - gen = self._cache - else: - gen = self - started = False - l = [] - if inc: - for i in gen: - if i > before: - break - elif not started: - if i >= after: - started = True - l.append(i) - else: - l.append(i) - else: - for i in gen: - if i >= before: - break - elif not started: - if i > after: - started = True - l.append(i) - else: - l.append(i) - return l - - -class rrule(rrulebase): - """ - That's the base of the rrule operation. It accepts all the keywords - defined in the RFC as its constructor parameters (except byday, - which was renamed to byweekday) and more. The constructor prototype is:: - - rrule(freq) - - Where freq must be one of YEARLY, MONTHLY, WEEKLY, DAILY, HOURLY, MINUTELY, - or SECONDLY. - - .. note:: - Per RFC section 3.3.10, recurrence instances falling on invalid dates - and times are ignored rather than coerced: - - Recurrence rules may generate recurrence instances with an invalid - date (e.g., February 30) or nonexistent local time (e.g., 1:30 AM - on a day where the local time is moved forward by an hour at 1:00 - AM). Such recurrence instances MUST be ignored and MUST NOT be - counted as part of the recurrence set. - - This can lead to possibly surprising behavior when, for example, the - start date occurs at the end of the month: - - >>> from dateutil.rrule import rrule, MONTHLY - >>> from datetime import datetime - >>> start_date = datetime(2014, 12, 31) - >>> list(rrule(freq=MONTHLY, count=4, dtstart=start_date)) - ... # doctest: +NORMALIZE_WHITESPACE - [datetime.datetime(2014, 12, 31, 0, 0), - datetime.datetime(2015, 1, 31, 0, 0), - datetime.datetime(2015, 3, 31, 0, 0), - datetime.datetime(2015, 5, 31, 0, 0)] - - Additionally, it supports the following keyword arguments: - - :param dtstart: - The recurrence start. Besides being the base for the recurrence, - missing parameters in the final recurrence instances will also be - extracted from this date. If not given, datetime.now() will be used - instead. - :param interval: - The interval between each freq iteration. 
For example, when using - YEARLY, an interval of 2 means once every two years, but with HOURLY, - it means once every two hours. The default interval is 1. - :param wkst: - The week start day. Must be one of the MO, TU, WE constants, or an - integer, specifying the first day of the week. This will affect - recurrences based on weekly periods. The default week start is got - from calendar.firstweekday(), and may be modified by - calendar.setfirstweekday(). - :param count: - If given, this determines how many occurrences will be generated. - - .. note:: - As of version 2.5.0, the use of the keyword ``until`` in conjunction - with ``count`` is deprecated, to make sure ``dateutil`` is fully - compliant with `RFC-5545 Sec. 3.3.10 `_. Therefore, ``until`` and ``count`` - **must not** occur in the same call to ``rrule``. - :param until: - If given, this must be a datetime instance specifying the upper-bound - limit of the recurrence. The last recurrence in the rule is the greatest - datetime that is less than or equal to the value specified in the - ``until`` parameter. - - .. note:: - As of version 2.5.0, the use of the keyword ``until`` in conjunction - with ``count`` is deprecated, to make sure ``dateutil`` is fully - compliant with `RFC-5545 Sec. 3.3.10 `_. Therefore, ``until`` and ``count`` - **must not** occur in the same call to ``rrule``. - :param bysetpos: - If given, it must be either an integer, or a sequence of integers, - positive or negative. Each given integer will specify an occurrence - number, corresponding to the nth occurrence of the rule inside the - frequency period. For example, a bysetpos of -1 if combined with a - MONTHLY frequency, and a byweekday of (MO, TU, WE, TH, FR), will - result in the last work day of every month. - :param bymonth: - If given, it must be either an integer, or a sequence of integers, - meaning the months to apply the recurrence to. - :param bymonthday: - If given, it must be either an integer, or a sequence of integers, - meaning the month days to apply the recurrence to. - :param byyearday: - If given, it must be either an integer, or a sequence of integers, - meaning the year days to apply the recurrence to. - :param byeaster: - If given, it must be either an integer, or a sequence of integers, - positive or negative. Each integer will define an offset from the - Easter Sunday. Passing the offset 0 to byeaster will yield the Easter - Sunday itself. This is an extension to the RFC specification. - :param byweekno: - If given, it must be either an integer, or a sequence of integers, - meaning the week numbers to apply the recurrence to. Week numbers - have the meaning described in ISO8601, that is, the first week of - the year is that containing at least four days of the new year. - :param byweekday: - If given, it must be either an integer (0 == MO), a sequence of - integers, one of the weekday constants (MO, TU, etc), or a sequence - of these constants. When given, these variables will define the - weekdays where the recurrence will be applied. It's also possible to - use an argument n for the weekday instances, which will mean the nth - occurrence of this weekday in the period. For example, with MONTHLY, - or with YEARLY and BYMONTH, using FR(+1) in byweekday will specify the - first friday of the month where the recurrence happens. Notice that in - the RFC documentation, this is specified as BYDAY, but was renamed to - avoid the ambiguity of that keyword. 
- :param byhour: - If given, it must be either an integer, or a sequence of integers, - meaning the hours to apply the recurrence to. - :param byminute: - If given, it must be either an integer, or a sequence of integers, - meaning the minutes to apply the recurrence to. - :param bysecond: - If given, it must be either an integer, or a sequence of integers, - meaning the seconds to apply the recurrence to. - :param cache: - If given, it must be a boolean value specifying to enable or disable - caching of results. If you will use the same rrule instance multiple - times, enabling caching will improve the performance considerably. - """ - def __init__(self, freq, dtstart=None, - interval=1, wkst=None, count=None, until=None, bysetpos=None, - bymonth=None, bymonthday=None, byyearday=None, byeaster=None, - byweekno=None, byweekday=None, - byhour=None, byminute=None, bysecond=None, - cache=False): - super(rrule, self).__init__(cache) - global easter - if not dtstart: - if until and until.tzinfo: - dtstart = datetime.datetime.now(tz=until.tzinfo).replace(microsecond=0) - else: - dtstart = datetime.datetime.now().replace(microsecond=0) - elif not isinstance(dtstart, datetime.datetime): - dtstart = datetime.datetime.fromordinal(dtstart.toordinal()) - else: - dtstart = dtstart.replace(microsecond=0) - self._dtstart = dtstart - self._tzinfo = dtstart.tzinfo - self._freq = freq - self._interval = interval - self._count = count - - # Cache the original byxxx rules, if they are provided, as the _byxxx - # attributes do not necessarily map to the inputs, and this can be - # a problem in generating the strings. Only store things if they've - # been supplied (the string retrieval will just use .get()) - self._original_rule = {} - - if until and not isinstance(until, datetime.datetime): - until = datetime.datetime.fromordinal(until.toordinal()) - self._until = until - - if self._dtstart and self._until: - if (self._dtstart.tzinfo is not None) != (self._until.tzinfo is not None): - # According to RFC5545 Section 3.3.10: - # https://tools.ietf.org/html/rfc5545#section-3.3.10 - # - # > If the "DTSTART" property is specified as a date with UTC - # > time or a date with local time and time zone reference, - # > then the UNTIL rule part MUST be specified as a date with - # > UTC time. - raise ValueError( - 'RRULE UNTIL values must be specified in UTC when DTSTART ' - 'is timezone-aware' - ) - - if count is not None and until: - warn("Using both 'count' and 'until' is inconsistent with RFC 5545" - " and has been deprecated in dateutil. 
Future versions will " - "raise an error.", DeprecationWarning) - - if wkst is None: - self._wkst = calendar.firstweekday() - elif isinstance(wkst, integer_types): - self._wkst = wkst - else: - self._wkst = wkst.weekday - - if bysetpos is None: - self._bysetpos = None - elif isinstance(bysetpos, integer_types): - if bysetpos == 0 or not (-366 <= bysetpos <= 366): - raise ValueError("bysetpos must be between 1 and 366, " - "or between -366 and -1") - self._bysetpos = (bysetpos,) - else: - self._bysetpos = tuple(bysetpos) - for pos in self._bysetpos: - if pos == 0 or not (-366 <= pos <= 366): - raise ValueError("bysetpos must be between 1 and 366, " - "or between -366 and -1") - - if self._bysetpos: - self._original_rule['bysetpos'] = self._bysetpos - - if (byweekno is None and byyearday is None and bymonthday is None and - byweekday is None and byeaster is None): - if freq == YEARLY: - if bymonth is None: - bymonth = dtstart.month - self._original_rule['bymonth'] = None - bymonthday = dtstart.day - self._original_rule['bymonthday'] = None - elif freq == MONTHLY: - bymonthday = dtstart.day - self._original_rule['bymonthday'] = None - elif freq == WEEKLY: - byweekday = dtstart.weekday() - self._original_rule['byweekday'] = None - - # bymonth - if bymonth is None: - self._bymonth = None - else: - if isinstance(bymonth, integer_types): - bymonth = (bymonth,) - - self._bymonth = tuple(sorted(set(bymonth))) - - if 'bymonth' not in self._original_rule: - self._original_rule['bymonth'] = self._bymonth - - # byyearday - if byyearday is None: - self._byyearday = None - else: - if isinstance(byyearday, integer_types): - byyearday = (byyearday,) - - self._byyearday = tuple(sorted(set(byyearday))) - self._original_rule['byyearday'] = self._byyearday - - # byeaster - if byeaster is not None: - if not easter: - from dateutil import easter - if isinstance(byeaster, integer_types): - self._byeaster = (byeaster,) - else: - self._byeaster = tuple(sorted(byeaster)) - - self._original_rule['byeaster'] = self._byeaster - else: - self._byeaster = None - - # bymonthday - if bymonthday is None: - self._bymonthday = () - self._bynmonthday = () - else: - if isinstance(bymonthday, integer_types): - bymonthday = (bymonthday,) - - bymonthday = set(bymonthday) # Ensure it's unique - - self._bymonthday = tuple(sorted(x for x in bymonthday if x > 0)) - self._bynmonthday = tuple(sorted(x for x in bymonthday if x < 0)) - - # Storing positive numbers first, then negative numbers - if 'bymonthday' not in self._original_rule: - self._original_rule['bymonthday'] = tuple( - itertools.chain(self._bymonthday, self._bynmonthday)) - - # byweekno - if byweekno is None: - self._byweekno = None - else: - if isinstance(byweekno, integer_types): - byweekno = (byweekno,) - - self._byweekno = tuple(sorted(set(byweekno))) - - self._original_rule['byweekno'] = self._byweekno - - # byweekday / bynweekday - if byweekday is None: - self._byweekday = None - self._bynweekday = None - else: - # If it's one of the valid non-sequence types, convert to a - # single-element sequence before the iterator that builds the - # byweekday set. 
- if isinstance(byweekday, integer_types) or hasattr(byweekday, "n"): - byweekday = (byweekday,) - - self._byweekday = set() - self._bynweekday = set() - for wday in byweekday: - if isinstance(wday, integer_types): - self._byweekday.add(wday) - elif not wday.n or freq > MONTHLY: - self._byweekday.add(wday.weekday) - else: - self._bynweekday.add((wday.weekday, wday.n)) - - if not self._byweekday: - self._byweekday = None - elif not self._bynweekday: - self._bynweekday = None - - if self._byweekday is not None: - self._byweekday = tuple(sorted(self._byweekday)) - orig_byweekday = [weekday(x) for x in self._byweekday] - else: - orig_byweekday = () - - if self._bynweekday is not None: - self._bynweekday = tuple(sorted(self._bynweekday)) - orig_bynweekday = [weekday(*x) for x in self._bynweekday] - else: - orig_bynweekday = () - - if 'byweekday' not in self._original_rule: - self._original_rule['byweekday'] = tuple(itertools.chain( - orig_byweekday, orig_bynweekday)) - - # byhour - if byhour is None: - if freq < HOURLY: - self._byhour = {dtstart.hour} - else: - self._byhour = None - else: - if isinstance(byhour, integer_types): - byhour = (byhour,) - - if freq == HOURLY: - self._byhour = self.__construct_byset(start=dtstart.hour, - byxxx=byhour, - base=24) - else: - self._byhour = set(byhour) - - self._byhour = tuple(sorted(self._byhour)) - self._original_rule['byhour'] = self._byhour - - # byminute - if byminute is None: - if freq < MINUTELY: - self._byminute = {dtstart.minute} - else: - self._byminute = None - else: - if isinstance(byminute, integer_types): - byminute = (byminute,) - - if freq == MINUTELY: - self._byminute = self.__construct_byset(start=dtstart.minute, - byxxx=byminute, - base=60) - else: - self._byminute = set(byminute) - - self._byminute = tuple(sorted(self._byminute)) - self._original_rule['byminute'] = self._byminute - - # bysecond - if bysecond is None: - if freq < SECONDLY: - self._bysecond = ((dtstart.second,)) - else: - self._bysecond = None - else: - if isinstance(bysecond, integer_types): - bysecond = (bysecond,) - - self._bysecond = set(bysecond) - - if freq == SECONDLY: - self._bysecond = self.__construct_byset(start=dtstart.second, - byxxx=bysecond, - base=60) - else: - self._bysecond = set(bysecond) - - self._bysecond = tuple(sorted(self._bysecond)) - self._original_rule['bysecond'] = self._bysecond - - if self._freq >= HOURLY: - self._timeset = None - else: - self._timeset = [] - for hour in self._byhour: - for minute in self._byminute: - for second in self._bysecond: - self._timeset.append( - datetime.time(hour, minute, second, - tzinfo=self._tzinfo)) - self._timeset.sort() - self._timeset = tuple(self._timeset) - - def __str__(self): - """ - Output a string that would generate this RRULE if passed to rrulestr. - This is mostly compatible with RFC5545, except for the - dateutil-specific extension BYEASTER. 
- """ - - output = [] - h, m, s = [None] * 3 - if self._dtstart: - output.append(self._dtstart.strftime('DTSTART:%Y%m%dT%H%M%S')) - h, m, s = self._dtstart.timetuple()[3:6] - - parts = ['FREQ=' + FREQNAMES[self._freq]] - if self._interval != 1: - parts.append('INTERVAL=' + str(self._interval)) - - if self._wkst: - parts.append('WKST=' + repr(weekday(self._wkst))[0:2]) - - if self._count is not None: - parts.append('COUNT=' + str(self._count)) - - if self._until: - parts.append(self._until.strftime('UNTIL=%Y%m%dT%H%M%S')) - - if self._original_rule.get('byweekday') is not None: - # The str() method on weekday objects doesn't generate - # RFC5545-compliant strings, so we should modify that. - original_rule = dict(self._original_rule) - wday_strings = [] - for wday in original_rule['byweekday']: - if wday.n: - wday_strings.append('{n:+d}{wday}'.format( - n=wday.n, - wday=repr(wday)[0:2])) - else: - wday_strings.append(repr(wday)) - - original_rule['byweekday'] = wday_strings - else: - original_rule = self._original_rule - - partfmt = '{name}={vals}' - for name, key in [('BYSETPOS', 'bysetpos'), - ('BYMONTH', 'bymonth'), - ('BYMONTHDAY', 'bymonthday'), - ('BYYEARDAY', 'byyearday'), - ('BYWEEKNO', 'byweekno'), - ('BYDAY', 'byweekday'), - ('BYHOUR', 'byhour'), - ('BYMINUTE', 'byminute'), - ('BYSECOND', 'bysecond'), - ('BYEASTER', 'byeaster')]: - value = original_rule.get(key) - if value: - parts.append(partfmt.format(name=name, vals=(','.join(str(v) - for v in value)))) - - output.append('RRULE:' + ';'.join(parts)) - return '\n'.join(output) - - def replace(self, **kwargs): - """Return new rrule with same attributes except for those attributes given new - values by whichever keyword arguments are specified.""" - new_kwargs = {"interval": self._interval, - "count": self._count, - "dtstart": self._dtstart, - "freq": self._freq, - "until": self._until, - "wkst": self._wkst, - "cache": False if self._cache is None else True } - new_kwargs.update(self._original_rule) - new_kwargs.update(kwargs) - return rrule(**new_kwargs) - - def _iter(self): - year, month, day, hour, minute, second, weekday, yearday, _ = \ - self._dtstart.timetuple() - - # Some local variables to speed things up a bit - freq = self._freq - interval = self._interval - wkst = self._wkst - until = self._until - bymonth = self._bymonth - byweekno = self._byweekno - byyearday = self._byyearday - byweekday = self._byweekday - byeaster = self._byeaster - bymonthday = self._bymonthday - bynmonthday = self._bynmonthday - bysetpos = self._bysetpos - byhour = self._byhour - byminute = self._byminute - bysecond = self._bysecond - - ii = _iterinfo(self) - ii.rebuild(year, month) - - getdayset = {YEARLY: ii.ydayset, - MONTHLY: ii.mdayset, - WEEKLY: ii.wdayset, - DAILY: ii.ddayset, - HOURLY: ii.ddayset, - MINUTELY: ii.ddayset, - SECONDLY: ii.ddayset}[freq] - - if freq < HOURLY: - timeset = self._timeset - else: - gettimeset = {HOURLY: ii.htimeset, - MINUTELY: ii.mtimeset, - SECONDLY: ii.stimeset}[freq] - if ((freq >= HOURLY and - self._byhour and hour not in self._byhour) or - (freq >= MINUTELY and - self._byminute and minute not in self._byminute) or - (freq >= SECONDLY and - self._bysecond and second not in self._bysecond)): - timeset = () - else: - timeset = gettimeset(hour, minute, second) - - total = 0 - count = self._count - while True: - # Get dayset with the right frequency - dayset, start, end = getdayset(year, month, day) - - # Do the "hard" work ;-) - filtered = False - for i in dayset[start:end]: - if ((bymonth and ii.mmask[i] not in 
bymonth) or - (byweekno and not ii.wnomask[i]) or - (byweekday and ii.wdaymask[i] not in byweekday) or - (ii.nwdaymask and not ii.nwdaymask[i]) or - (byeaster and not ii.eastermask[i]) or - ((bymonthday or bynmonthday) and - ii.mdaymask[i] not in bymonthday and - ii.nmdaymask[i] not in bynmonthday) or - (byyearday and - ((i < ii.yearlen and i+1 not in byyearday and - -ii.yearlen+i not in byyearday) or - (i >= ii.yearlen and i+1-ii.yearlen not in byyearday and - -ii.nextyearlen+i-ii.yearlen not in byyearday)))): - dayset[i] = None - filtered = True - - # Output results - if bysetpos and timeset: - poslist = [] - for pos in bysetpos: - if pos < 0: - daypos, timepos = divmod(pos, len(timeset)) - else: - daypos, timepos = divmod(pos-1, len(timeset)) - try: - i = [x for x in dayset[start:end] - if x is not None][daypos] - time = timeset[timepos] - except IndexError: - pass - else: - date = datetime.date.fromordinal(ii.yearordinal+i) - res = datetime.datetime.combine(date, time) - if res not in poslist: - poslist.append(res) - poslist.sort() - for res in poslist: - if until and res > until: - self._len = total - return - elif res >= self._dtstart: - if count is not None: - count -= 1 - if count < 0: - self._len = total - return - total += 1 - yield res - else: - for i in dayset[start:end]: - if i is not None: - date = datetime.date.fromordinal(ii.yearordinal + i) - for time in timeset: - res = datetime.datetime.combine(date, time) - if until and res > until: - self._len = total - return - elif res >= self._dtstart: - if count is not None: - count -= 1 - if count < 0: - self._len = total - return - - total += 1 - yield res - - # Handle frequency and interval - fixday = False - if freq == YEARLY: - year += interval - if year > datetime.MAXYEAR: - self._len = total - return - ii.rebuild(year, month) - elif freq == MONTHLY: - month += interval - if month > 12: - div, mod = divmod(month, 12) - month = mod - year += div - if month == 0: - month = 12 - year -= 1 - if year > datetime.MAXYEAR: - self._len = total - return - ii.rebuild(year, month) - elif freq == WEEKLY: - if wkst > weekday: - day += -(weekday+1+(6-wkst))+self._interval*7 - else: - day += -(weekday-wkst)+self._interval*7 - weekday = wkst - fixday = True - elif freq == DAILY: - day += interval - fixday = True - elif freq == HOURLY: - if filtered: - # Jump to one iteration before next day - hour += ((23-hour)//interval)*interval - - if byhour: - ndays, hour = self.__mod_distance(value=hour, - byxxx=self._byhour, - base=24) - else: - ndays, hour = divmod(hour+interval, 24) - - if ndays: - day += ndays - fixday = True - - timeset = gettimeset(hour, minute, second) - elif freq == MINUTELY: - if filtered: - # Jump to one iteration before next day - minute += ((1439-(hour*60+minute))//interval)*interval - - valid = False - rep_rate = (24*60) - for j in range(rep_rate // gcd(interval, rep_rate)): - if byminute: - nhours, minute = \ - self.__mod_distance(value=minute, - byxxx=self._byminute, - base=60) - else: - nhours, minute = divmod(minute+interval, 60) - - div, hour = divmod(hour+nhours, 24) - if div: - day += div - fixday = True - filtered = False - - if not byhour or hour in byhour: - valid = True - break - - if not valid: - raise ValueError('Invalid combination of interval and ' + - 'byhour resulting in empty rule.') - - timeset = gettimeset(hour, minute, second) - elif freq == SECONDLY: - if filtered: - # Jump to one iteration before next day - second += (((86399 - (hour * 3600 + minute * 60 + second)) - // interval) * interval) - - 
rep_rate = (24 * 3600) - valid = False - for j in range(0, rep_rate // gcd(interval, rep_rate)): - if bysecond: - nminutes, second = \ - self.__mod_distance(value=second, - byxxx=self._bysecond, - base=60) - else: - nminutes, second = divmod(second+interval, 60) - - div, minute = divmod(minute+nminutes, 60) - if div: - hour += div - div, hour = divmod(hour, 24) - if div: - day += div - fixday = True - - if ((not byhour or hour in byhour) and - (not byminute or minute in byminute) and - (not bysecond or second in bysecond)): - valid = True - break - - if not valid: - raise ValueError('Invalid combination of interval, ' + - 'byhour and byminute resulting in empty' + - ' rule.') - - timeset = gettimeset(hour, minute, second) - - if fixday and day > 28: - daysinmonth = calendar.monthrange(year, month)[1] - if day > daysinmonth: - while day > daysinmonth: - day -= daysinmonth - month += 1 - if month == 13: - month = 1 - year += 1 - if year > datetime.MAXYEAR: - self._len = total - return - daysinmonth = calendar.monthrange(year, month)[1] - ii.rebuild(year, month) - - def __construct_byset(self, start, byxxx, base): - """ - If a `BYXXX` sequence is passed to the constructor at the same level as - `FREQ` (e.g. `FREQ=HOURLY,BYHOUR={2,4,7},INTERVAL=3`), there are some - specifications which cannot be reached given some starting conditions. - - This occurs whenever the interval is not coprime with the base of a - given unit and the difference between the starting position and the - ending position is not coprime with the greatest common denominator - between the interval and the base. For example, with a FREQ of hourly - starting at 17:00 and an interval of 4, the only valid values for - BYHOUR would be {21, 1, 5, 9, 13, 17}, because 4 and 24 are not - coprime. - - :param start: - Specifies the starting position. - :param byxxx: - An iterable containing the list of allowed values. - :param base: - The largest allowable value for the specified frequency (e.g. - 24 hours, 60 minutes). - - This does not preserve the type of the iterable, returning a set, since - the values should be unique and the order is irrelevant, this will - speed up later lookups. - - In the event of an empty set, raises a :exception:`ValueError`, as this - results in an empty rrule. - """ - - cset = set() - - # Support a single byxxx value. - if isinstance(byxxx, integer_types): - byxxx = (byxxx, ) - - for num in byxxx: - i_gcd = gcd(self._interval, base) - # Use divmod rather than % because we need to wrap negative nums. - if i_gcd == 1 or divmod(num - start, i_gcd)[1] == 0: - cset.add(num) - - if len(cset) == 0: - raise ValueError("Invalid rrule byxxx generates an empty set.") - - return cset - - def __mod_distance(self, value, byxxx, base): - """ - Calculates the next value in a sequence where the `FREQ` parameter is - specified along with a `BYXXX` parameter at the same "level" - (e.g. `HOURLY` specified with `BYHOUR`). - - :param value: - The old value of the component. - :param byxxx: - The `BYXXX` set, which should have been generated by - `rrule._construct_byset`, or something else which checks that a - valid rule is present. - :param base: - The largest allowable value for the specified frequency (e.g. - 24 hours, 60 minutes). - - If a valid value is not found after `base` iterations (the maximum - number before the sequence would start to repeat), this raises a - :exception:`ValueError`, as no valid values were found. 
- - This returns a tuple of `divmod(n*interval, base)`, where `n` is the - smallest number of `interval` repetitions until the next specified - value in `byxxx` is found. - """ - accumulator = 0 - for ii in range(1, base + 1): - # Using divmod() over % to account for negative intervals - div, value = divmod(value + self._interval, base) - accumulator += div - if value in byxxx: - return (accumulator, value) - - -class _iterinfo(object): - __slots__ = ["rrule", "lastyear", "lastmonth", - "yearlen", "nextyearlen", "yearordinal", "yearweekday", - "mmask", "mrange", "mdaymask", "nmdaymask", - "wdaymask", "wnomask", "nwdaymask", "eastermask"] - - def __init__(self, rrule): - for attr in self.__slots__: - setattr(self, attr, None) - self.rrule = rrule - - def rebuild(self, year, month): - # Every mask is 7 days longer to handle cross-year weekly periods. - rr = self.rrule - if year != self.lastyear: - self.yearlen = 365 + calendar.isleap(year) - self.nextyearlen = 365 + calendar.isleap(year + 1) - firstyday = datetime.date(year, 1, 1) - self.yearordinal = firstyday.toordinal() - self.yearweekday = firstyday.weekday() - - wday = datetime.date(year, 1, 1).weekday() - if self.yearlen == 365: - self.mmask = M365MASK - self.mdaymask = MDAY365MASK - self.nmdaymask = NMDAY365MASK - self.wdaymask = WDAYMASK[wday:] - self.mrange = M365RANGE - else: - self.mmask = M366MASK - self.mdaymask = MDAY366MASK - self.nmdaymask = NMDAY366MASK - self.wdaymask = WDAYMASK[wday:] - self.mrange = M366RANGE - - if not rr._byweekno: - self.wnomask = None - else: - self.wnomask = [0]*(self.yearlen+7) - # no1wkst = firstwkst = self.wdaymask.index(rr._wkst) - no1wkst = firstwkst = (7-self.yearweekday+rr._wkst) % 7 - if no1wkst >= 4: - no1wkst = 0 - # Number of days in the year, plus the days we got - # from last year. - wyearlen = self.yearlen+(self.yearweekday-rr._wkst) % 7 - else: - # Number of days in the year, minus the days we - # left in last year. - wyearlen = self.yearlen-no1wkst - div, mod = divmod(wyearlen, 7) - numweeks = div+mod//4 - for n in rr._byweekno: - if n < 0: - n += numweeks+1 - if not (0 < n <= numweeks): - continue - if n > 1: - i = no1wkst+(n-1)*7 - if no1wkst != firstwkst: - i -= 7-firstwkst - else: - i = no1wkst - for j in range(7): - self.wnomask[i] = 1 - i += 1 - if self.wdaymask[i] == rr._wkst: - break - if 1 in rr._byweekno: - # Check week number 1 of next year as well - # TODO: Check -numweeks for next year. - i = no1wkst+numweeks*7 - if no1wkst != firstwkst: - i -= 7-firstwkst - if i < self.yearlen: - # If week starts in next year, we - # don't care about it. - for j in range(7): - self.wnomask[i] = 1 - i += 1 - if self.wdaymask[i] == rr._wkst: - break - if no1wkst: - # Check last week number of last year as - # well. If no1wkst is 0, either the year - # started on week start, or week number 1 - # got days from last year, so there are no - # days from last year's last week number in - # this year. 
- if -1 not in rr._byweekno: - lyearweekday = datetime.date(year-1, 1, 1).weekday() - lno1wkst = (7-lyearweekday+rr._wkst) % 7 - lyearlen = 365+calendar.isleap(year-1) - if lno1wkst >= 4: - lno1wkst = 0 - lnumweeks = 52+(lyearlen + - (lyearweekday-rr._wkst) % 7) % 7//4 - else: - lnumweeks = 52+(self.yearlen-no1wkst) % 7//4 - else: - lnumweeks = -1 - if lnumweeks in rr._byweekno: - for i in range(no1wkst): - self.wnomask[i] = 1 - - if (rr._bynweekday and (month != self.lastmonth or - year != self.lastyear)): - ranges = [] - if rr._freq == YEARLY: - if rr._bymonth: - for month in rr._bymonth: - ranges.append(self.mrange[month-1:month+1]) - else: - ranges = [(0, self.yearlen)] - elif rr._freq == MONTHLY: - ranges = [self.mrange[month-1:month+1]] - if ranges: - # Weekly frequency won't get here, so we may not - # care about cross-year weekly periods. - self.nwdaymask = [0]*self.yearlen - for first, last in ranges: - last -= 1 - for wday, n in rr._bynweekday: - if n < 0: - i = last+(n+1)*7 - i -= (self.wdaymask[i]-wday) % 7 - else: - i = first+(n-1)*7 - i += (7-self.wdaymask[i]+wday) % 7 - if first <= i <= last: - self.nwdaymask[i] = 1 - - if rr._byeaster: - self.eastermask = [0]*(self.yearlen+7) - eyday = easter.easter(year).toordinal()-self.yearordinal - for offset in rr._byeaster: - self.eastermask[eyday+offset] = 1 - - self.lastyear = year - self.lastmonth = month - - def ydayset(self, year, month, day): - return list(range(self.yearlen)), 0, self.yearlen - - def mdayset(self, year, month, day): - dset = [None]*self.yearlen - start, end = self.mrange[month-1:month+1] - for i in range(start, end): - dset[i] = i - return dset, start, end - - def wdayset(self, year, month, day): - # We need to handle cross-year weeks here. - dset = [None]*(self.yearlen+7) - i = datetime.date(year, month, day).toordinal()-self.yearordinal - start = i - for j in range(7): - dset[i] = i - i += 1 - # if (not (0 <= i < self.yearlen) or - # self.wdaymask[i] == self.rrule._wkst): - # This will cross the year boundary, if necessary. - if self.wdaymask[i] == self.rrule._wkst: - break - return dset, start, i - - def ddayset(self, year, month, day): - dset = [None] * self.yearlen - i = datetime.date(year, month, day).toordinal() - self.yearordinal - dset[i] = i - return dset, i, i + 1 - - def htimeset(self, hour, minute, second): - tset = [] - rr = self.rrule - for minute in rr._byminute: - for second in rr._bysecond: - tset.append(datetime.time(hour, minute, second, - tzinfo=rr._tzinfo)) - tset.sort() - return tset - - def mtimeset(self, hour, minute, second): - tset = [] - rr = self.rrule - for second in rr._bysecond: - tset.append(datetime.time(hour, minute, second, tzinfo=rr._tzinfo)) - tset.sort() - return tset - - def stimeset(self, hour, minute, second): - return (datetime.time(hour, minute, second, - tzinfo=self.rrule._tzinfo),) - - -class rruleset(rrulebase): - """ The rruleset type allows more complex recurrence setups, mixing - multiple rules, dates, exclusion rules, and exclusion dates. The type - constructor takes the following keyword arguments: - - :param cache: If True, caching of results will be enabled, improving - performance of multiple queries considerably. 
""" - - class _genitem(object): - def __init__(self, genlist, gen): - try: - self.dt = advance_iterator(gen) - genlist.append(self) - except StopIteration: - pass - self.genlist = genlist - self.gen = gen - - def __next__(self): - try: - self.dt = advance_iterator(self.gen) - except StopIteration: - if self.genlist[0] is self: - heapq.heappop(self.genlist) - else: - self.genlist.remove(self) - heapq.heapify(self.genlist) - - next = __next__ - - def __lt__(self, other): - return self.dt < other.dt - - def __gt__(self, other): - return self.dt > other.dt - - def __eq__(self, other): - return self.dt == other.dt - - def __ne__(self, other): - return self.dt != other.dt - - def __init__(self, cache=False): - super(rruleset, self).__init__(cache) - self._rrule = [] - self._rdate = [] - self._exrule = [] - self._exdate = [] - - @_invalidates_cache - def rrule(self, rrule): - """ Include the given :py:class:`rrule` instance in the recurrence set - generation. """ - self._rrule.append(rrule) - - @_invalidates_cache - def rdate(self, rdate): - """ Include the given :py:class:`datetime` instance in the recurrence - set generation. """ - self._rdate.append(rdate) - - @_invalidates_cache - def exrule(self, exrule): - """ Include the given rrule instance in the recurrence set exclusion - list. Dates which are part of the given recurrence rules will not - be generated, even if some inclusive rrule or rdate matches them. - """ - self._exrule.append(exrule) - - @_invalidates_cache - def exdate(self, exdate): - """ Include the given datetime instance in the recurrence set - exclusion list. Dates included that way will not be generated, - even if some inclusive rrule or rdate matches them. """ - self._exdate.append(exdate) - - def _iter(self): - rlist = [] - self._rdate.sort() - self._genitem(rlist, iter(self._rdate)) - for gen in [iter(x) for x in self._rrule]: - self._genitem(rlist, gen) - exlist = [] - self._exdate.sort() - self._genitem(exlist, iter(self._exdate)) - for gen in [iter(x) for x in self._exrule]: - self._genitem(exlist, gen) - lastdt = None - total = 0 - heapq.heapify(rlist) - heapq.heapify(exlist) - while rlist: - ritem = rlist[0] - if not lastdt or lastdt != ritem.dt: - while exlist and exlist[0] < ritem: - exitem = exlist[0] - advance_iterator(exitem) - if exlist and exlist[0] is exitem: - heapq.heapreplace(exlist, exitem) - if not exlist or ritem != exlist[0]: - total += 1 - yield ritem.dt - lastdt = ritem.dt - advance_iterator(ritem) - if rlist and rlist[0] is ritem: - heapq.heapreplace(rlist, ritem) - self._len = total - - - - -class _rrulestr(object): - """ Parses a string representation of a recurrence rule or set of - recurrence rules. - - :param s: - Required, a string defining one or more recurrence rules. - - :param dtstart: - If given, used as the default recurrence start if not specified in the - rule string. - - :param cache: - If set ``True`` caching of results will be enabled, improving - performance of multiple queries considerably. - - :param unfold: - If set ``True`` indicates that a rule string is split over more - than one line and should be joined before processing. - - :param forceset: - If set ``True`` forces a :class:`dateutil.rrule.rruleset` to - be returned. - - :param compatible: - If set ``True`` forces ``unfold`` and ``forceset`` to be ``True``. - - :param ignoretz: - If set ``True``, time zones in parsed strings are ignored and a naive - :class:`datetime.datetime` object is returned. 
- - :param tzids: - If given, a callable or mapping used to retrieve a - :class:`datetime.tzinfo` from a string representation. - Defaults to :func:`dateutil.tz.gettz`. - - :param tzinfos: - Additional time zone names / aliases which may be present in a string - representation. See :func:`dateutil.parser.parse` for more - information. - - :return: - Returns a :class:`dateutil.rrule.rruleset` or - :class:`dateutil.rrule.rrule` - """ - - _freq_map = {"YEARLY": YEARLY, - "MONTHLY": MONTHLY, - "WEEKLY": WEEKLY, - "DAILY": DAILY, - "HOURLY": HOURLY, - "MINUTELY": MINUTELY, - "SECONDLY": SECONDLY} - - _weekday_map = {"MO": 0, "TU": 1, "WE": 2, "TH": 3, - "FR": 4, "SA": 5, "SU": 6} - - def _handle_int(self, rrkwargs, name, value, **kwargs): - rrkwargs[name.lower()] = int(value) - - def _handle_int_list(self, rrkwargs, name, value, **kwargs): - rrkwargs[name.lower()] = [int(x) for x in value.split(',')] - - _handle_INTERVAL = _handle_int - _handle_COUNT = _handle_int - _handle_BYSETPOS = _handle_int_list - _handle_BYMONTH = _handle_int_list - _handle_BYMONTHDAY = _handle_int_list - _handle_BYYEARDAY = _handle_int_list - _handle_BYEASTER = _handle_int_list - _handle_BYWEEKNO = _handle_int_list - _handle_BYHOUR = _handle_int_list - _handle_BYMINUTE = _handle_int_list - _handle_BYSECOND = _handle_int_list - - def _handle_FREQ(self, rrkwargs, name, value, **kwargs): - rrkwargs["freq"] = self._freq_map[value] - - def _handle_UNTIL(self, rrkwargs, name, value, **kwargs): - global parser - if not parser: - from dateutil import parser - try: - rrkwargs["until"] = parser.parse(value, - ignoretz=kwargs.get("ignoretz"), - tzinfos=kwargs.get("tzinfos")) - except ValueError: - raise ValueError("invalid until date") - - def _handle_WKST(self, rrkwargs, name, value, **kwargs): - rrkwargs["wkst"] = self._weekday_map[value] - - def _handle_BYWEEKDAY(self, rrkwargs, name, value, **kwargs): - """ - Two ways to specify this: +1MO or MO(+1) - """ - l = [] - for wday in value.split(','): - if '(' in wday: - # If it's of the form TH(+1), etc. - splt = wday.split('(') - w = splt[0] - n = int(splt[1][:-1]) - elif len(wday): - # If it's of the form +1MO - for i in range(len(wday)): - if wday[i] not in '+-0123456789': - break - n = wday[:i] or None - w = wday[i:] - if n: - n = int(n) - else: - raise ValueError("Invalid (empty) BYDAY specification.") - - l.append(weekdays[self._weekday_map[w]](n)) - rrkwargs["byweekday"] = l - - _handle_BYDAY = _handle_BYWEEKDAY - - def _parse_rfc_rrule(self, line, - dtstart=None, - cache=False, - ignoretz=False, - tzinfos=None): - if line.find(':') != -1: - name, value = line.split(':') - if name != "RRULE": - raise ValueError("unknown parameter name") - else: - value = line - rrkwargs = {} - for pair in value.split(';'): - name, value = pair.split('=') - name = name.upper() - value = value.upper() - try: - getattr(self, "_handle_"+name)(rrkwargs, name, value, - ignoretz=ignoretz, - tzinfos=tzinfos) - except AttributeError: - raise ValueError("unknown parameter '%s'" % name) - except (KeyError, ValueError): - raise ValueError("invalid '%s': %s" % (name, value)) - return rrule(dtstart=dtstart, cache=cache, **rrkwargs) - - def _parse_date_value(self, date_value, parms, rule_tzids, - ignoretz, tzids, tzinfos): - global parser - if not parser: - from dateutil import parser - - datevals = [] - value_found = False - TZID = None - - for parm in parms: - if parm.startswith("TZID="): - try: - tzkey = rule_tzids[parm.split('TZID=')[-1]] - except KeyError: - continue - if tzids is None: - from . 
import tz - tzlookup = tz.gettz - elif callable(tzids): - tzlookup = tzids - else: - tzlookup = getattr(tzids, 'get', None) - if tzlookup is None: - msg = ('tzids must be a callable, mapping, or None, ' - 'not %s' % tzids) - raise ValueError(msg) - - TZID = tzlookup(tzkey) - continue - - # RFC 5445 3.8.2.4: The VALUE parameter is optional, but may be found - # only once. - if parm not in {"VALUE=DATE-TIME", "VALUE=DATE"}: - raise ValueError("unsupported parm: " + parm) - else: - if value_found: - msg = ("Duplicate value parameter found in: " + parm) - raise ValueError(msg) - value_found = True - - for datestr in date_value.split(','): - date = parser.parse(datestr, ignoretz=ignoretz, tzinfos=tzinfos) - if TZID is not None: - if date.tzinfo is None: - date = date.replace(tzinfo=TZID) - else: - raise ValueError('DTSTART/EXDATE specifies multiple timezone') - datevals.append(date) - - return datevals - - def _parse_rfc(self, s, - dtstart=None, - cache=False, - unfold=False, - forceset=False, - compatible=False, - ignoretz=False, - tzids=None, - tzinfos=None): - global parser - if compatible: - forceset = True - unfold = True - - TZID_NAMES = dict(map( - lambda x: (x.upper(), x), - re.findall('TZID=(?P[^:]+):', s) - )) - s = s.upper() - if not s.strip(): - raise ValueError("empty string") - if unfold: - lines = s.splitlines() - i = 0 - while i < len(lines): - line = lines[i].rstrip() - if not line: - del lines[i] - elif i > 0 and line[0] == " ": - lines[i-1] += line[1:] - del lines[i] - else: - i += 1 - else: - lines = s.split() - if (not forceset and len(lines) == 1 and (s.find(':') == -1 or - s.startswith('RRULE:'))): - return self._parse_rfc_rrule(lines[0], cache=cache, - dtstart=dtstart, ignoretz=ignoretz, - tzinfos=tzinfos) - else: - rrulevals = [] - rdatevals = [] - exrulevals = [] - exdatevals = [] - for line in lines: - if not line: - continue - if line.find(':') == -1: - name = "RRULE" - value = line - else: - name, value = line.split(':', 1) - parms = name.split(';') - if not parms: - raise ValueError("empty property name") - name = parms[0] - parms = parms[1:] - if name == "RRULE": - for parm in parms: - raise ValueError("unsupported RRULE parm: "+parm) - rrulevals.append(value) - elif name == "RDATE": - for parm in parms: - if parm != "VALUE=DATE-TIME": - raise ValueError("unsupported RDATE parm: "+parm) - rdatevals.append(value) - elif name == "EXRULE": - for parm in parms: - raise ValueError("unsupported EXRULE parm: "+parm) - exrulevals.append(value) - elif name == "EXDATE": - exdatevals.extend( - self._parse_date_value(value, parms, - TZID_NAMES, ignoretz, - tzids, tzinfos) - ) - elif name == "DTSTART": - dtvals = self._parse_date_value(value, parms, TZID_NAMES, - ignoretz, tzids, tzinfos) - if len(dtvals) != 1: - raise ValueError("Multiple DTSTART values specified:" + - value) - dtstart = dtvals[0] - else: - raise ValueError("unsupported property: "+name) - if (forceset or len(rrulevals) > 1 or rdatevals - or exrulevals or exdatevals): - if not parser and (rdatevals or exdatevals): - from dateutil import parser - rset = rruleset(cache=cache) - for value in rrulevals: - rset.rrule(self._parse_rfc_rrule(value, dtstart=dtstart, - ignoretz=ignoretz, - tzinfos=tzinfos)) - for value in rdatevals: - for datestr in value.split(','): - rset.rdate(parser.parse(datestr, - ignoretz=ignoretz, - tzinfos=tzinfos)) - for value in exrulevals: - rset.exrule(self._parse_rfc_rrule(value, dtstart=dtstart, - ignoretz=ignoretz, - tzinfos=tzinfos)) - for value in exdatevals: - rset.exdate(value) - if 
compatible and dtstart: - rset.rdate(dtstart) - return rset - else: - return self._parse_rfc_rrule(rrulevals[0], - dtstart=dtstart, - cache=cache, - ignoretz=ignoretz, - tzinfos=tzinfos) - - def __call__(self, s, **kwargs): - return self._parse_rfc(s, **kwargs) - - -rrulestr = _rrulestr() - -# vim:ts=4:sw=4:et diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/reflection.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/reflection.py deleted file mode 100644 index 1627669b955d6a7b7baf9bc5cceee0fc188f7e1c..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/reflection.py +++ /dev/null @@ -1,95 +0,0 @@ -# Protocol Buffers - Google's data interchange format -# Copyright 2008 Google Inc. All rights reserved. -# https://developers.google.com/protocol-buffers/ -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions are -# met: -# -# * Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# * Redistributions in binary form must reproduce the above -# copyright notice, this list of conditions and the following disclaimer -# in the documentation and/or other materials provided with the -# distribution. -# * Neither the name of Google Inc. nor the names of its -# contributors may be used to endorse or promote products derived from -# this software without specific prior written permission. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - -# This code is meant to work on Python 2.4 and above only. - -"""Contains a metaclass and helper functions used to create -protocol message classes from Descriptor objects at runtime. - -Recall that a metaclass is the "type" of a class. -(A class is to a metaclass what an instance is to a class.) - -In this case, we use the GeneratedProtocolMessageType metaclass -to inject all the useful functionality into the classes -output by the protocol compiler at compile-time. - -The upshot of all this is that the real implementation -details for ALL pure-Python protocol buffers are *here in -this file*. -""" - -__author__ = 'robinson@google.com (Will Robinson)' - - -from google.protobuf import message_factory -from google.protobuf import symbol_database - -# The type of all Message classes. -# Part of the public interface, but normally only used by message factories. -GeneratedProtocolMessageType = message_factory._GENERATED_PROTOCOL_MESSAGE_TYPE - -MESSAGE_CLASS_CACHE = {} - - -# Deprecated. Please NEVER use reflection.ParseMessage(). 
-def ParseMessage(descriptor, byte_str): - """Generate a new Message instance from this Descriptor and a byte string. - - DEPRECATED: ParseMessage is deprecated because it is using MakeClass(). - Please use MessageFactory.GetPrototype() instead. - - Args: - descriptor: Protobuf Descriptor object - byte_str: Serialized protocol buffer byte string - - Returns: - Newly created protobuf Message object. - """ - result_class = MakeClass(descriptor) - new_msg = result_class() - new_msg.ParseFromString(byte_str) - return new_msg - - -# Deprecated. Please NEVER use reflection.MakeClass(). -def MakeClass(descriptor): - """Construct a class object for a protobuf described by descriptor. - - DEPRECATED: use MessageFactory.GetPrototype() instead. - - Args: - descriptor: A descriptor.Descriptor object describing the protobuf. - Returns: - The Message class object described by the descriptor. - """ - # Original implementation leads to duplicate message classes, which won't play - # well with extensions. Message factory info is also missing. - # Redirect to message_factory. - return message_factory.GetMessageClass(descriptor) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/StaticImage-508005b4.css b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/StaticImage-508005b4.css deleted file mode 100644 index 867db01e98d8648a1afa22a934018f3ef506a4ae..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/StaticImage-508005b4.css +++ /dev/null @@ -1 +0,0 @@ -canvas.svelte-yigbas{display:block;position:absolute;inset:0;margin:auto}.lr.svelte-yigbas{border-right:1px solid var(--border-color-primary);border-left:1px solid var(--border-color-primary)}.tb.svelte-yigbas{border-top:1px solid var(--border-color-primary);border-bottom:1px solid var(--border-color-primary)}canvas.svelte-yigbas:hover{cursor:none}.wrap.svelte-yigbas{position:relative;width:var(--size-full);height:var(--size-full);touch-action:none}.start-prompt.svelte-yigbas{display:flex;position:absolute;inset:0;justify-content:center;align-items:center;z-index:var(--layer-4);touch-action:none;pointer-events:none;color:var(--body-text-color-subdued)}.wrap.svelte-425ent{position:relative;width:var(--size-full);height:var(--size-full);min-height:var(--size-60)}video.svelte-425ent{width:var(--size-full);height:var(--size-full)}button.svelte-425ent{display:flex;position:absolute;right:0;bottom:var(--size-2);left:0;justify-content:center;align-items:center;margin:auto;box-shadow:var(--shadow-drop-lg);border-radius:var(--radius-xl);background-color:#000000e6;width:var(--size-10);height:var(--size-10)}@media (min-width: 768px){button.svelte-425ent{bottom:var(--size-4)}}@media (min-width: 1280px){button.svelte-425ent{bottom:var(--size-8)}}.icon.svelte-425ent{opacity:.8;width:50%;height:50%;color:#fff}.flip.svelte-425ent{transform:scaleX(-1)}div.svelte-s6ybro{display:flex;position:absolute;top:var(--size-2);right:var(--size-2);justify-content:flex-end;gap:var(--spacing-sm);z-index:var(--layer-5)}.wrap.svelte-p4aq0j.svelte-p4aq0j{display:flex;position:absolute;top:var(--size-10);right:var(--size-2);flex-direction:column;justify-content:flex-end;gap:var(--spacing-sm);z-index:var(--layer-5)}.brush.svelte-p4aq0j.svelte-p4aq0j{top:0;right:0}.brush.svelte-p4aq0j input.svelte-p4aq0j{position:absolute;top:3px;right:calc(100% + 5px)}.col.svelte-p4aq0j 
input.svelte-p4aq0j{position:absolute;right:calc(100% + 5px);bottom:-4px}.image-container.svelte-p3y7hu,img.svelte-p3y7hu{width:var(--size-full);height:var(--size-full)}img.svelte-p3y7hu{object-fit:contain}.selectable.svelte-p3y7hu{cursor:crosshair}.absolute-img.svelte-p3y7hu{position:absolute;opacity:0}.webcam.svelte-p3y7hu{transform:scaleX(-1)}img.svelte-1btp92j{width:var(--size-full);height:var(--size-full);object-fit:contain}.selectable.svelte-1btp92j{cursor:crosshair}.icon-buttons.svelte-1btp92j{display:flex;position:absolute;top:6px;right:6px;gap:var(--size-1)} diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/dsv-576afacd.js b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/dsv-576afacd.js deleted file mode 100644 index 832d450961d23fb14b577c045f0c24c61e74c4e6..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/dsv-576afacd.js +++ /dev/null @@ -1,6 +0,0 @@ -var D={},A={},E=34,m=10,R=13;function I(r){return new Function("d","return {"+r.map(function(t,e){return JSON.stringify(t)+": d["+e+'] || ""'}).join(",")+"}")}function B(r,t){var e=I(r);return function(a,c){return t(e(a),c,r)}}function F(r){var t=Object.create(null),e=[];return r.forEach(function(a){for(var c in a)c in t||e.push(t[c]=c)}),e}function f(r,t){var e=r+"",a=e.length;return a9999?"+"+f(r,6):f(r,4)}function S(r){var t=r.getUTCHours(),e=r.getUTCMinutes(),a=r.getUTCSeconds(),c=r.getUTCMilliseconds();return isNaN(r)?"Invalid Date":L(r.getUTCFullYear())+"-"+f(r.getUTCMonth()+1,2)+"-"+f(r.getUTCDate(),2)+(c?"T"+f(t,2)+":"+f(e,2)+":"+f(a,2)+"."+f(c,3)+"Z":a?"T"+f(t,2)+":"+f(e,2)+":"+f(a,2)+"Z":e||t?"T"+f(t,2)+":"+f(e,2)+"Z":"")}function Z(r){var t=new RegExp('["'+r+` -\r]`),e=r.charCodeAt(0);function a(n,o){var s,i,u=c(n,function(h,l){if(s)return s(h,l-1);i=h,s=o?B(h,o):I(h)});return u.columns=i||[],u}function c(n,o){var s=[],i=n.length,u=0,h=0,l,v=i<=0,C=!1;n.charCodeAt(i-1)===m&&--i,n.charCodeAt(i-1)===R&&--i;function w(){if(v)return A;if(C)return C=!1,D;var j,d=u,p;if(n.charCodeAt(d)===E){for(;u++=i?v=!0:(p=n.charCodeAt(u++))===m?C=!0:p===R&&(C=!0,n.charCodeAt(u)===m&&++u),n.slice(d+1,j-1).replace(/""/g,'"')}for(;uAutodata 3 23 Keygen Mac

    DOWNLOAD ››››› https://tinurli.com/2uwkVe



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Download Hindi Rain Songs for Free and Sing Along with Your Favorite Stars.md b/spaces/cihyFjudo/fairness-paper-search/Download Hindi Rain Songs for Free and Sing Along with Your Favorite Stars.md deleted file mode 100644 index 8ad00e440c5b10ecfd8f1bc157936fc74c3c0c4e..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Download Hindi Rain Songs for Free and Sing Along with Your Favorite Stars.md +++ /dev/null @@ -1,9 +0,0 @@ -
    -

    Welcome to Rainy Mood, the internet's most popular rain experience.

    Millions of people use Rainy Mood while sleeping, studying, and relaxing.

    Enjoy the free web version, or try the iOS/Android app with additional features.

    -

Sometimes the weather can spoil your photography composition, but you don't have to get yourself or your camera wet to capture a beautiful rain effect. Wait out the rain, shoot plenty of photos, and then apply our free Photoshop rain overlay bundle at home or in the studio. The pack creates a rain effect for photographs in Adobe Photoshop and helps you get the most out of bad-weather images. Don't worry about overdoing it: the free rain overlays are designed to keep the result looking realistic.
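The paragraph above describes applying the overlays in Adobe Photoshop; if you prefer to script the same effect, the rough Pillow sketch below does an equivalent blend. The file names ("photo.jpg", "rain_overlay.jpg", "photo_with_rain.jpg") are placeholders, and screen blending is simply one reasonable choice for overlays made of light streaks on a dark background, not the only way to composite them.

```python
from PIL import Image, ImageChops

# Placeholder file names -- substitute your own photo and overlay.
base = Image.open("photo.jpg").convert("RGB")
rain = Image.open("rain_overlay.jpg").convert("RGB").resize(base.size)

# "Screen" blending keeps the photo and adds only the bright rain streaks,
# which is how light-on-dark overlays are usually composited.
result = ImageChops.screen(base, rain)
result.save("photo_with_rain.jpg", quality=90)
```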

    -

    Hindi Rain Free Download


    Download File →→→ https://tinurli.com/2uwjh7



    -

SoundBible.com offers free sound clips for download in either WAV or MP3 format. It provides free and royalty-free sound effects and clips for video editors, film composers, game designers, and weekend sound warriors. Downloads are completely free, with large, prominent download buttons so there is no confusion about what to click.

    -

    Ophcrack is a free Windows password cracker based on rainbow tables. It is a very efficient implementation of rainbow tables done by the inventors of the method. It comes with a Graphical User Interface and runs on multiple platforms.

    -

Download the Kiss The Rain MP3 ringtone for free on iOS and Android. Browse the full Instrumental Ringtones category on Best Ringtones Net and personalize your phone to suit you.

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Shaka Zulu full movie in hindi free download The amazing biography of the most influential African ruler.md b/spaces/cihyFjudo/fairness-paper-search/Shaka Zulu full movie in hindi free download The amazing biography of the most influential African ruler.md deleted file mode 100644 index be16f133c33a19c19d9492809a38e8ff2ab42bd4..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Shaka Zulu full movie in hindi free download The amazing biography of the most influential African ruler.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Shaka Zulu full movie in hindi free download


    DOWNLOAD - https://tinurli.com/2uwhSE



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/JpegPresets.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/JpegPresets.py deleted file mode 100644 index a678e248e9ab2465738ea79f7f5c4bbc260c1919..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/JpegPresets.py +++ /dev/null @@ -1,240 +0,0 @@ -""" -JPEG quality settings equivalent to the Photoshop settings. -Can be used when saving JPEG files. - -The following presets are available by default: -``web_low``, ``web_medium``, ``web_high``, ``web_very_high``, ``web_maximum``, -``low``, ``medium``, ``high``, ``maximum``. -More presets can be added to the :py:data:`presets` dict if needed. - -To apply the preset, specify:: - - quality="preset_name" - -To apply only the quantization table:: - - qtables="preset_name" - -To apply only the subsampling setting:: - - subsampling="preset_name" - -Example:: - - im.save("image_name.jpg", quality="web_high") - -Subsampling ------------ - -Subsampling is the practice of encoding images by implementing less resolution -for chroma information than for luma information. -(ref.: https://en.wikipedia.org/wiki/Chroma_subsampling) - -Possible subsampling values are 0, 1 and 2 that correspond to 4:4:4, 4:2:2 and -4:2:0. - -You can get the subsampling of a JPEG with the -:func:`.JpegImagePlugin.get_sampling` function. - -In JPEG compressed data a JPEG marker is used instead of an EXIF tag. -(ref.: https://exiv2.org/tags.html) - - -Quantization tables -------------------- - -They are values use by the DCT (Discrete cosine transform) to remove -*unnecessary* information from the image (the lossy part of the compression). -(ref.: https://en.wikipedia.org/wiki/Quantization_matrix#Quantization_matrices, -https://en.wikipedia.org/wiki/JPEG#Quantization) - -You can get the quantization tables of a JPEG with:: - - im.quantization - -This will return a dict with a number of lists. You can pass this dict -directly as the qtables argument when saving a JPEG. - -The quantization table format in presets is a list with sublists. These formats -are interchangeable. 
- -Libjpeg ref.: -https://web.archive.org/web/20120328125543/http://www.jpegcameras.com/libjpeg/libjpeg-3.html - -""" - -# fmt: off -presets = { - 'web_low': {'subsampling': 2, # "4:2:0" - 'quantization': [ - [20, 16, 25, 39, 50, 46, 62, 68, - 16, 18, 23, 38, 38, 53, 65, 68, - 25, 23, 31, 38, 53, 65, 68, 68, - 39, 38, 38, 53, 65, 68, 68, 68, - 50, 38, 53, 65, 68, 68, 68, 68, - 46, 53, 65, 68, 68, 68, 68, 68, - 62, 65, 68, 68, 68, 68, 68, 68, - 68, 68, 68, 68, 68, 68, 68, 68], - [21, 25, 32, 38, 54, 68, 68, 68, - 25, 28, 24, 38, 54, 68, 68, 68, - 32, 24, 32, 43, 66, 68, 68, 68, - 38, 38, 43, 53, 68, 68, 68, 68, - 54, 54, 66, 68, 68, 68, 68, 68, - 68, 68, 68, 68, 68, 68, 68, 68, - 68, 68, 68, 68, 68, 68, 68, 68, - 68, 68, 68, 68, 68, 68, 68, 68] - ]}, - 'web_medium': {'subsampling': 2, # "4:2:0" - 'quantization': [ - [16, 11, 11, 16, 23, 27, 31, 30, - 11, 12, 12, 15, 20, 23, 23, 30, - 11, 12, 13, 16, 23, 26, 35, 47, - 16, 15, 16, 23, 26, 37, 47, 64, - 23, 20, 23, 26, 39, 51, 64, 64, - 27, 23, 26, 37, 51, 64, 64, 64, - 31, 23, 35, 47, 64, 64, 64, 64, - 30, 30, 47, 64, 64, 64, 64, 64], - [17, 15, 17, 21, 20, 26, 38, 48, - 15, 19, 18, 17, 20, 26, 35, 43, - 17, 18, 20, 22, 26, 30, 46, 53, - 21, 17, 22, 28, 30, 39, 53, 64, - 20, 20, 26, 30, 39, 48, 64, 64, - 26, 26, 30, 39, 48, 63, 64, 64, - 38, 35, 46, 53, 64, 64, 64, 64, - 48, 43, 53, 64, 64, 64, 64, 64] - ]}, - 'web_high': {'subsampling': 0, # "4:4:4" - 'quantization': [ - [6, 4, 4, 6, 9, 11, 12, 16, - 4, 5, 5, 6, 8, 10, 12, 12, - 4, 5, 5, 6, 10, 12, 14, 19, - 6, 6, 6, 11, 12, 15, 19, 28, - 9, 8, 10, 12, 16, 20, 27, 31, - 11, 10, 12, 15, 20, 27, 31, 31, - 12, 12, 14, 19, 27, 31, 31, 31, - 16, 12, 19, 28, 31, 31, 31, 31], - [7, 7, 13, 24, 26, 31, 31, 31, - 7, 12, 16, 21, 31, 31, 31, 31, - 13, 16, 17, 31, 31, 31, 31, 31, - 24, 21, 31, 31, 31, 31, 31, 31, - 26, 31, 31, 31, 31, 31, 31, 31, - 31, 31, 31, 31, 31, 31, 31, 31, - 31, 31, 31, 31, 31, 31, 31, 31, - 31, 31, 31, 31, 31, 31, 31, 31] - ]}, - 'web_very_high': {'subsampling': 0, # "4:4:4" - 'quantization': [ - [2, 2, 2, 2, 3, 4, 5, 6, - 2, 2, 2, 2, 3, 4, 5, 6, - 2, 2, 2, 2, 4, 5, 7, 9, - 2, 2, 2, 4, 5, 7, 9, 12, - 3, 3, 4, 5, 8, 10, 12, 12, - 4, 4, 5, 7, 10, 12, 12, 12, - 5, 5, 7, 9, 12, 12, 12, 12, - 6, 6, 9, 12, 12, 12, 12, 12], - [3, 3, 5, 9, 13, 15, 15, 15, - 3, 4, 6, 11, 14, 12, 12, 12, - 5, 6, 9, 14, 12, 12, 12, 12, - 9, 11, 14, 12, 12, 12, 12, 12, - 13, 14, 12, 12, 12, 12, 12, 12, - 15, 12, 12, 12, 12, 12, 12, 12, - 15, 12, 12, 12, 12, 12, 12, 12, - 15, 12, 12, 12, 12, 12, 12, 12] - ]}, - 'web_maximum': {'subsampling': 0, # "4:4:4" - 'quantization': [ - [1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 2, - 1, 1, 1, 1, 1, 1, 2, 2, - 1, 1, 1, 1, 1, 2, 2, 3, - 1, 1, 1, 1, 2, 2, 3, 3, - 1, 1, 1, 2, 2, 3, 3, 3, - 1, 1, 2, 2, 3, 3, 3, 3], - [1, 1, 1, 2, 2, 3, 3, 3, - 1, 1, 1, 2, 3, 3, 3, 3, - 1, 1, 1, 3, 3, 3, 3, 3, - 2, 2, 3, 3, 3, 3, 3, 3, - 2, 3, 3, 3, 3, 3, 3, 3, - 3, 3, 3, 3, 3, 3, 3, 3, - 3, 3, 3, 3, 3, 3, 3, 3, - 3, 3, 3, 3, 3, 3, 3, 3] - ]}, - 'low': {'subsampling': 2, # "4:2:0" - 'quantization': [ - [18, 14, 14, 21, 30, 35, 34, 17, - 14, 16, 16, 19, 26, 23, 12, 12, - 14, 16, 17, 21, 23, 12, 12, 12, - 21, 19, 21, 23, 12, 12, 12, 12, - 30, 26, 23, 12, 12, 12, 12, 12, - 35, 23, 12, 12, 12, 12, 12, 12, - 34, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12], - [20, 19, 22, 27, 20, 20, 17, 17, - 19, 25, 23, 14, 14, 12, 12, 12, - 22, 23, 14, 14, 12, 12, 12, 12, - 27, 14, 14, 12, 12, 12, 12, 12, - 20, 14, 12, 12, 12, 12, 12, 12, - 20, 12, 12, 12, 12, 12, 
12, 12, - 17, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12] - ]}, - 'medium': {'subsampling': 2, # "4:2:0" - 'quantization': [ - [12, 8, 8, 12, 17, 21, 24, 17, - 8, 9, 9, 11, 15, 19, 12, 12, - 8, 9, 10, 12, 19, 12, 12, 12, - 12, 11, 12, 21, 12, 12, 12, 12, - 17, 15, 19, 12, 12, 12, 12, 12, - 21, 19, 12, 12, 12, 12, 12, 12, - 24, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12], - [13, 11, 13, 16, 20, 20, 17, 17, - 11, 14, 14, 14, 14, 12, 12, 12, - 13, 14, 14, 14, 12, 12, 12, 12, - 16, 14, 14, 12, 12, 12, 12, 12, - 20, 14, 12, 12, 12, 12, 12, 12, - 20, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12] - ]}, - 'high': {'subsampling': 0, # "4:4:4" - 'quantization': [ - [6, 4, 4, 6, 9, 11, 12, 16, - 4, 5, 5, 6, 8, 10, 12, 12, - 4, 5, 5, 6, 10, 12, 12, 12, - 6, 6, 6, 11, 12, 12, 12, 12, - 9, 8, 10, 12, 12, 12, 12, 12, - 11, 10, 12, 12, 12, 12, 12, 12, - 12, 12, 12, 12, 12, 12, 12, 12, - 16, 12, 12, 12, 12, 12, 12, 12], - [7, 7, 13, 24, 20, 20, 17, 17, - 7, 12, 16, 14, 14, 12, 12, 12, - 13, 16, 14, 14, 12, 12, 12, 12, - 24, 14, 14, 12, 12, 12, 12, 12, - 20, 14, 12, 12, 12, 12, 12, 12, - 20, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12] - ]}, - 'maximum': {'subsampling': 0, # "4:4:4" - 'quantization': [ - [2, 2, 2, 2, 3, 4, 5, 6, - 2, 2, 2, 2, 3, 4, 5, 6, - 2, 2, 2, 2, 4, 5, 7, 9, - 2, 2, 2, 4, 5, 7, 9, 12, - 3, 3, 4, 5, 8, 10, 12, 12, - 4, 4, 5, 7, 10, 12, 12, 12, - 5, 5, 7, 9, 12, 12, 12, 12, - 6, 6, 9, 12, 12, 12, 12, 12], - [3, 3, 5, 9, 13, 15, 15, 15, - 3, 4, 6, 10, 14, 12, 12, 12, - 5, 6, 9, 14, 12, 12, 12, 12, - 9, 10, 14, 12, 12, 12, 12, 12, - 13, 14, 12, 12, 12, 12, 12, 12, - 15, 12, 12, 12, 12, 12, 12, 12, - 15, 12, 12, 12, 12, 12, 12, 12, - 15, 12, 12, 12, 12, 12, 12, 12] - ]}, -} -# fmt: on diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aacdectab.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aacdectab.h deleted file mode 100644 index 41f1db781d89e5e8e316e8c3414c87e59d565af8..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aacdectab.h +++ /dev/null @@ -1,128 +0,0 @@ -/* - * AAC decoder data - * Copyright (c) 2005-2006 Oded Shimon ( ods15 ods15 dyndns org ) - * Copyright (c) 2006-2007 Maxim Gavrilov ( maxim.gavrilov gmail com ) - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * AAC decoder data - * @author Oded Shimon ( ods15 ods15 dyndns org ) - * @author Maxim Gavrilov ( maxim.gavrilov gmail com ) - */ - -#ifndef AVCODEC_AACDECTAB_H -#define AVCODEC_AACDECTAB_H - -#include "libavutil/channel_layout.h" -#include "aac.h" - -#include - -static const int8_t tags_per_config[16] = { 0, 1, 1, 2, 3, 3, 4, 5, 0, 0, 0, 5, 5, 16, 5, 0 }; - -static const uint8_t aac_channel_layout_map[16][16][3] = { - { { TYPE_SCE, 0, AAC_CHANNEL_FRONT }, }, - { { TYPE_CPE, 0, AAC_CHANNEL_FRONT }, }, - { { TYPE_SCE, 0, AAC_CHANNEL_FRONT }, { TYPE_CPE, 0, AAC_CHANNEL_FRONT }, }, - { { TYPE_SCE, 0, AAC_CHANNEL_FRONT }, { TYPE_CPE, 0, AAC_CHANNEL_FRONT }, { TYPE_SCE, 1, AAC_CHANNEL_BACK }, }, - { { TYPE_SCE, 0, AAC_CHANNEL_FRONT }, { TYPE_CPE, 0, AAC_CHANNEL_FRONT }, { TYPE_CPE, 1, AAC_CHANNEL_BACK }, }, - { { TYPE_SCE, 0, AAC_CHANNEL_FRONT }, { TYPE_CPE, 0, AAC_CHANNEL_FRONT }, { TYPE_CPE, 1, AAC_CHANNEL_BACK }, { TYPE_LFE, 0, AAC_CHANNEL_LFE }, }, - { { TYPE_SCE, 0, AAC_CHANNEL_FRONT }, { TYPE_CPE, 0, AAC_CHANNEL_FRONT }, { TYPE_CPE, 1, AAC_CHANNEL_FRONT }, { TYPE_CPE, 2, AAC_CHANNEL_BACK }, { TYPE_LFE, 0, AAC_CHANNEL_LFE }, }, - { { 0, } }, - { { 0, } }, - { { 0, } }, - { { TYPE_SCE, 0, AAC_CHANNEL_FRONT }, { TYPE_CPE, 0, AAC_CHANNEL_FRONT }, { TYPE_CPE, 1, AAC_CHANNEL_BACK }, { TYPE_SCE, 1, AAC_CHANNEL_BACK }, { TYPE_LFE, 0, AAC_CHANNEL_LFE }, }, - { { TYPE_SCE, 0, AAC_CHANNEL_FRONT }, { TYPE_CPE, 0, AAC_CHANNEL_FRONT }, { TYPE_CPE, 1, AAC_CHANNEL_BACK }, { TYPE_CPE, 2, AAC_CHANNEL_BACK }, { TYPE_LFE, 0, AAC_CHANNEL_LFE }, }, - { - { TYPE_SCE, 0, AAC_CHANNEL_FRONT }, // SCE1 = FC, - { TYPE_CPE, 0, AAC_CHANNEL_FRONT }, // CPE1 = FLc and FRc, - { TYPE_CPE, 1, AAC_CHANNEL_FRONT }, // CPE2 = FL and FR, - { TYPE_CPE, 2, AAC_CHANNEL_BACK }, // CPE3 = SiL and SiR, - { TYPE_CPE, 3, AAC_CHANNEL_BACK }, // CPE4 = BL and BR, - { TYPE_SCE, 1, AAC_CHANNEL_BACK }, // SCE2 = BC, - { TYPE_LFE, 0, AAC_CHANNEL_LFE }, // LFE1 = LFE1, - { TYPE_LFE, 1, AAC_CHANNEL_LFE }, // LFE2 = LFE2, - { TYPE_SCE, 2, AAC_CHANNEL_FRONT }, // SCE3 = TpFC, - { TYPE_CPE, 4, AAC_CHANNEL_FRONT }, // CPE5 = TpFL and TpFR, - { TYPE_CPE, 5, AAC_CHANNEL_SIDE }, // CPE6 = TpSiL and TpSiR, - { TYPE_SCE, 3, AAC_CHANNEL_SIDE }, // SCE4 = TpC, - { TYPE_CPE, 6, AAC_CHANNEL_BACK }, // CPE7 = TpBL and TpBR, - { TYPE_SCE, 4, AAC_CHANNEL_BACK }, // SCE5 = TpBC, - { TYPE_SCE, 5, AAC_CHANNEL_FRONT }, // SCE6 = BtFC, - { TYPE_CPE, 7, AAC_CHANNEL_FRONT }, // CPE8 = BtFL and BtFR - }, - { { TYPE_SCE, 0, AAC_CHANNEL_FRONT }, { TYPE_CPE, 0, AAC_CHANNEL_FRONT }, { TYPE_CPE, 1, AAC_CHANNEL_BACK }, { TYPE_LFE, 0, AAC_CHANNEL_LFE }, { TYPE_CPE, 2, AAC_CHANNEL_FRONT }, }, - { { 0, } }, -}; - -static const int16_t aac_channel_map[3][4][6] = { - { - { AV_CHAN_FRONT_CENTER, AV_CHAN_FRONT_LEFT_OF_CENTER, AV_CHAN_FRONT_RIGHT_OF_CENTER, AV_CHAN_FRONT_LEFT, AV_CHAN_FRONT_RIGHT, AV_CHAN_NONE }, - { AV_CHAN_UNUSED, AV_CHAN_NONE, AV_CHAN_NONE, AV_CHAN_NONE, AV_CHAN_NONE, AV_CHAN_NONE }, - { AV_CHAN_UNUSED, AV_CHAN_SIDE_LEFT, AV_CHAN_SIDE_RIGHT, AV_CHAN_BACK_LEFT, AV_CHAN_BACK_RIGHT, AV_CHAN_BACK_CENTER }, - { AV_CHAN_LOW_FREQUENCY, AV_CHAN_LOW_FREQUENCY_2, AV_CHAN_NONE, AV_CHAN_NONE, AV_CHAN_NONE, AV_CHAN_NONE }, - }, - { - { AV_CHAN_TOP_FRONT_CENTER, AV_CHAN_NONE, AV_CHAN_NONE, AV_CHAN_TOP_FRONT_LEFT, 
AV_CHAN_TOP_FRONT_RIGHT, AV_CHAN_NONE }, - { AV_CHAN_UNUSED, AV_CHAN_TOP_SIDE_LEFT, AV_CHAN_TOP_SIDE_RIGHT, AV_CHAN_NONE, AV_CHAN_NONE, AV_CHAN_TOP_CENTER}, - { AV_CHAN_UNUSED, AV_CHAN_NONE, AV_CHAN_NONE, AV_CHAN_TOP_BACK_LEFT, AV_CHAN_TOP_BACK_RIGHT, AV_CHAN_TOP_BACK_CENTER}, - { AV_CHAN_NONE, AV_CHAN_NONE, AV_CHAN_NONE, AV_CHAN_NONE, AV_CHAN_NONE, AV_CHAN_NONE}, - }, - { - { AV_CHAN_BOTTOM_FRONT_CENTER, AV_CHAN_NONE, AV_CHAN_NONE, AV_CHAN_BOTTOM_FRONT_LEFT, AV_CHAN_BOTTOM_FRONT_RIGHT, AV_CHAN_NONE }, - { AV_CHAN_NONE, AV_CHAN_NONE, AV_CHAN_NONE, AV_CHAN_NONE, AV_CHAN_NONE, AV_CHAN_NONE }, - { AV_CHAN_NONE, AV_CHAN_NONE, AV_CHAN_NONE, AV_CHAN_NONE, AV_CHAN_NONE, AV_CHAN_NONE }, - { AV_CHAN_NONE, AV_CHAN_NONE, AV_CHAN_NONE, AV_CHAN_NONE, AV_CHAN_NONE, AV_CHAN_NONE }, - }, -}; - -#if FF_API_OLD_CHANNEL_LAYOUT -static const uint64_t aac_channel_layout[] = { - AV_CH_LAYOUT_MONO, - AV_CH_LAYOUT_STEREO, - AV_CH_LAYOUT_SURROUND, - AV_CH_LAYOUT_4POINT0, - AV_CH_LAYOUT_5POINT0_BACK, - AV_CH_LAYOUT_5POINT1_BACK, - AV_CH_LAYOUT_7POINT1_WIDE_BACK, - AV_CH_LAYOUT_6POINT1_BACK, - AV_CH_LAYOUT_7POINT1, - AV_CH_LAYOUT_22POINT2, - AV_CH_LAYOUT_7POINT1_TOP_BACK, - 0, -}; -#endif - -static const AVChannelLayout aac_ch_layout[] = { - AV_CHANNEL_LAYOUT_MONO, - AV_CHANNEL_LAYOUT_STEREO, - AV_CHANNEL_LAYOUT_SURROUND, - AV_CHANNEL_LAYOUT_4POINT0, - AV_CHANNEL_LAYOUT_5POINT0_BACK, - AV_CHANNEL_LAYOUT_5POINT1_BACK, - AV_CHANNEL_LAYOUT_7POINT1_WIDE_BACK, - AV_CHANNEL_LAYOUT_6POINT1_BACK, - AV_CHANNEL_LAYOUT_7POINT1, - AV_CHANNEL_LAYOUT_22POINT2, - AV_CHANNEL_LAYOUT_7POINT1_TOP_BACK, - { 0 }, -}; - -#endif /* AVCODEC_AACDECTAB_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libxavs2.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libxavs2.c deleted file mode 100644 index c493ddc325ad36519a21b57da0eb3399746345e2..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libxavs2.c +++ /dev/null @@ -1,307 +0,0 @@ -/* - * AVS2 encoding using the xavs2 library - * - * Copyright (C) 2018 Yiqun Xu, - * Falei Luo, - * Huiwen Ren, - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "xavs2.h" -#include "codec_internal.h" -#include "encode.h" -#include "mpeg12.h" -#include "libavutil/avstring.h" -#include "libavutil/opt.h" - -#define xavs2_opt_set2(name, format, ...) 
do{ \ - char opt_str[16] = {0}; \ - int err; \ - av_strlcatf(opt_str, sizeof(opt_str), format, __VA_ARGS__); \ - err = cae->api->opt_set2(cae->param, name, opt_str); \ - if (err < 0) {\ - av_log(avctx, AV_LOG_WARNING, "Invalid value for %s: %s\n", name, opt_str);\ - }\ -} while(0); - -typedef struct XAVS2EContext { - AVClass *class; - - int lcu_row_threads; - int initial_qp; - int qp; - int max_qp; - int min_qp; - int preset_level; - int log_level; - - void *encoder; - AVDictionary *xavs2_opts; - - xavs2_outpacket_t packet; - xavs2_param_t *param; - - const xavs2_api_t *api; - -} XAVS2EContext; - -static av_cold int xavs2_init(AVCodecContext *avctx) -{ - XAVS2EContext *cae = avctx->priv_data; - int bit_depth, code; - - bit_depth = avctx->pix_fmt == AV_PIX_FMT_YUV420P ? 8 : 10; - - /* get API handler */ - cae->api = xavs2_api_get(bit_depth); - if (!cae->api) { - av_log(avctx, AV_LOG_ERROR, "Failed to get xavs2 api context\n"); - return AVERROR_EXTERNAL; - } - - cae->param = cae->api->opt_alloc(); - if (!cae->param) { - av_log(avctx, AV_LOG_ERROR, "Failed to alloc xavs2 parameters\n"); - return AVERROR(ENOMEM); - } - - xavs2_opt_set2("Width", "%d", avctx->width); - xavs2_opt_set2("Height", "%d", avctx->height); - xavs2_opt_set2("BFrames", "%d", avctx->max_b_frames); - xavs2_opt_set2("BitDepth", "%d", bit_depth); - xavs2_opt_set2("Log", "%d", cae->log_level); - xavs2_opt_set2("Preset", "%d", cae->preset_level); - - xavs2_opt_set2("IntraPeriodMax", "%d", avctx->gop_size); - xavs2_opt_set2("IntraPeriodMin", "%d", avctx->gop_size); - - xavs2_opt_set2("ThreadFrames", "%d", avctx->thread_count); - xavs2_opt_set2("ThreadRows", "%d", cae->lcu_row_threads); - - xavs2_opt_set2("OpenGOP", "%d", !(avctx->flags & AV_CODEC_FLAG_CLOSED_GOP)); - - { - const AVDictionaryEntry *en = NULL; - while ((en = av_dict_iterate(cae->xavs2_opts, en))) - xavs2_opt_set2(en->key, "%s", en->value); - } - - /* Rate control */ - if (avctx->bit_rate > 0) { - xavs2_opt_set2("RateControl", "%d", 1); - xavs2_opt_set2("TargetBitRate", "%"PRId64"", avctx->bit_rate); - xavs2_opt_set2("InitialQP", "%d", cae->initial_qp); - xavs2_opt_set2("MaxQP", "%d", avctx->qmax >= 0 ? avctx->qmax : cae->max_qp); - xavs2_opt_set2("MinQP", "%d", avctx->qmin >= 0 ? 
avctx->qmin : cae->min_qp); - } else { - xavs2_opt_set2("InitialQP", "%d", cae->qp); - } - - ff_mpeg12_find_best_frame_rate(avctx->framerate, &code, NULL, NULL, 0); - xavs2_opt_set2("FrameRate", "%d", code); - - cae->encoder = cae->api->encoder_create(cae->param); - - if (!cae->encoder) { - av_log(avctx, AV_LOG_ERROR, "Failed to create xavs2 encoder instance.\n"); - return AVERROR(EINVAL); - } - - return 0; -} - -static void xavs2_copy_frame_with_shift(xavs2_picture_t *pic, const AVFrame *frame, const int shift_in) -{ - uint16_t *p_plane; - uint8_t *p_buffer; - int plane; - int hIdx; - int wIdx; - - for (plane = 0; plane < 3; plane++) { - p_plane = (uint16_t *)pic->img.img_planes[plane]; - p_buffer = frame->data[plane]; - for (hIdx = 0; hIdx < pic->img.i_lines[plane]; hIdx++) { - memset(p_plane, 0, pic->img.i_stride[plane]); - for (wIdx = 0; wIdx < pic->img.i_width[plane]; wIdx++) { - p_plane[wIdx] = p_buffer[wIdx] << shift_in; - } - p_plane += pic->img.i_stride[plane]; - p_buffer += frame->linesize[plane]; - } - } -} - -static void xavs2_copy_frame(xavs2_picture_t *pic, const AVFrame *frame) -{ - uint8_t *p_plane; - uint8_t *p_buffer; - int plane; - int hIdx; - int stride; - - for (plane = 0; plane < 3; plane++) { - p_plane = pic->img.img_planes[plane]; - p_buffer = frame->data[plane]; - stride = pic->img.i_width[plane] * pic->img.in_sample_size; - for (hIdx = 0; hIdx < pic->img.i_lines[plane]; hIdx++) { - memcpy(p_plane, p_buffer, stride); - p_plane += pic->img.i_stride[plane]; - p_buffer += frame->linesize[plane]; - } - } -} - -static int xavs2_encode_frame(AVCodecContext *avctx, AVPacket *pkt, - const AVFrame *frame, int *got_packet) -{ - XAVS2EContext *cae = avctx->priv_data; - xavs2_picture_t pic; - int ret; - - /* create the XAVS2 video encoder */ - /* read frame data and send to the XAVS2 video encoder */ - if (cae->api->encoder_get_buffer(cae->encoder, &pic) < 0) { - av_log(avctx, AV_LOG_ERROR, "Failed to get xavs2 frame buffer\n"); - return AVERROR_EXTERNAL; - } - if (frame) { - switch (frame->format) { - case AV_PIX_FMT_YUV420P: - if (pic.img.in_sample_size == pic.img.enc_sample_size) { - xavs2_copy_frame(&pic, frame); - } else { - const int shift_in = atoi(cae->api->opt_get(cae->param, "SampleShift")); - xavs2_copy_frame_with_shift(&pic, frame, shift_in); - } - break; - case AV_PIX_FMT_YUV420P10: - if (pic.img.in_sample_size == pic.img.enc_sample_size) { - xavs2_copy_frame(&pic, frame); - break; - } - default: - av_log(avctx, AV_LOG_ERROR, "Unsupported pixel format\n"); - return AVERROR(EINVAL); - break; - } - - pic.i_state = 0; - pic.i_pts = frame->pts; - pic.i_type = XAVS2_TYPE_AUTO; - - ret = cae->api->encoder_encode(cae->encoder, &pic, &cae->packet); - - if (ret) { - av_log(avctx, AV_LOG_ERROR, "Encoding error occurred.\n"); - return AVERROR_EXTERNAL; - } - - } else { - cae->api->encoder_encode(cae->encoder, NULL, &cae->packet); - } - - if ((cae->packet.len) && (cae->packet.state != XAVS2_STATE_FLUSH_END)) { - if ((ret = ff_get_encode_buffer(avctx, pkt, cae->packet.len, 0)) < 0) { - cae->api->encoder_packet_unref(cae->encoder, &cae->packet); - return ret; - } - - pkt->pts = cae->packet.pts; - pkt->dts = cae->packet.dts; - - if (cae->packet.type == XAVS2_TYPE_IDR || - cae->packet.type == XAVS2_TYPE_I || - cae->packet.type == XAVS2_TYPE_KEYFRAME) { - pkt->flags |= AV_PKT_FLAG_KEY; - } - - memcpy(pkt->data, cae->packet.stream, cae->packet.len); - - cae->api->encoder_packet_unref(cae->encoder, &cae->packet); - - *got_packet = 1; - } else { - *got_packet = 0; - } - - return 0; 
-} - -static av_cold int xavs2_close(AVCodecContext *avctx) -{ - XAVS2EContext *cae = avctx->priv_data; - /* destroy the encoder */ - if (cae->api) { - cae->api->encoder_destroy(cae->encoder); - - if (cae->param) { - cae->api->opt_destroy(cae->param); - } - } - return 0; -} - -#define OFFSET(x) offsetof(XAVS2EContext, x) -#define VE AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM - -static const AVOption options[] = { - { "lcu_row_threads" , "number of parallel threads for rows" , OFFSET(lcu_row_threads) , AV_OPT_TYPE_INT, {.i64 = 0 }, 0, INT_MAX, VE }, - { "initial_qp" , "Quantization initial parameter" , OFFSET(initial_qp) , AV_OPT_TYPE_INT, {.i64 = 34 }, 1, 63, VE }, - { "qp" , "Quantization parameter" , OFFSET(qp) , AV_OPT_TYPE_INT, {.i64 = 34 }, 1, 63, VE }, - { "max_qp" , "max qp for rate control" , OFFSET(max_qp) , AV_OPT_TYPE_INT, {.i64 = 55 }, 0, 63, VE }, - { "min_qp" , "min qp for rate control" , OFFSET(min_qp) , AV_OPT_TYPE_INT, {.i64 = 20 }, 0, 63, VE }, - { "speed_level" , "Speed level, higher is better but slower", OFFSET(preset_level) , AV_OPT_TYPE_INT, {.i64 = 0 }, 0, 9, VE }, - { "log_level" , "log level: -1: none, 0: error, 1: warning, 2: info, 3: debug", OFFSET(log_level) , AV_OPT_TYPE_INT, {.i64 = 0 }, -1, 3, VE }, - { "xavs2-params" , "set the xavs2 configuration using a :-separated list of key=value parameters", OFFSET(xavs2_opts), AV_OPT_TYPE_DICT, { 0 }, 0, 0, VE }, - { NULL }, -}; - -static const AVClass libxavs2 = { - .class_name = "XAVS2EContext", - .item_name = av_default_item_name, - .option = options, - .version = LIBAVUTIL_VERSION_INT, -}; - -static const FFCodecDefault xavs2_defaults[] = { - { "b", "0" }, - { "g", "48"}, - { "bf", "7" }, - { NULL }, -}; - -const FFCodec ff_libxavs2_encoder = { - .p.name = "libxavs2", - CODEC_LONG_NAME("libxavs2 AVS2-P2/IEEE1857.4"), - .p.type = AVMEDIA_TYPE_VIDEO, - .p.id = AV_CODEC_ID_AVS2, - .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_DELAY | - AV_CODEC_CAP_OTHER_THREADS, - .priv_data_size = sizeof(XAVS2EContext), - .init = xavs2_init, - FF_CODEC_ENCODE_CB(xavs2_encode_frame), - .close = xavs2_close, - .caps_internal = FF_CODEC_CAP_NOT_INIT_THREADSAFE | - FF_CODEC_CAP_AUTO_THREADS, - .p.pix_fmts = (const enum AVPixelFormat[]) { AV_PIX_FMT_YUV420P, - AV_PIX_FMT_NONE }, - .p.priv_class = &libxavs2, - .defaults = xavs2_defaults, - .p.wrapper_name = "libxavs2", -} ; diff --git a/spaces/congsaPfin/Manga-OCR/logs/Bard AI A Review of Googles Experimental AI Chatbot.md b/spaces/congsaPfin/Manga-OCR/logs/Bard AI A Review of Googles Experimental AI Chatbot.md deleted file mode 100644 index e73155403400c535d1def6c3303b741bf40fb3c9..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Bard AI A Review of Googles Experimental AI Chatbot.md +++ /dev/null @@ -1,108 +0,0 @@ - -

    Bard AI Download: A Guide to Google's Conversational AI Chatbot

    -

    If you are looking for a new way to access information online, you might want to try Bard AI, an experimental conversational AI chatbot developed by Google. Bard AI is based on Google's LaMDA (Language Model for Dialogue Applications), a powerful natural language processing engine that can generate realistic and helpful responses to your questions and prompts. Unlike traditional search engines that list web pages for you to visit, Bard AI can directly summarize texts, explain complex topics, and generate useful content for you. In this article, we will show you how to download and use Bard AI on your PC for free, as well as give you an overview of its features, limitations, and alternatives.

    -

    bard ai download


    DOWNLOAD ✓✓✓ https://urlca.com/2uO9FG



    -

    How to download and use Bard AI on your PC for free

    -

Bard AI is a web app and does not require any installation or uninstallation. You can access it from the official Bard website in your web browser. However, since Bard AI is still in an early stage of development, you need to join a waitlist and request access to the tool. Here are the steps to do so (an optional sketch for automatically checking your inbox for the access email follows the list):

    -
      -
    1. Click here to go directly to the Google Bard page.
    2. -
    3. Click the "Join waitlist" button to register your interest.
    4. -
    5. Check the box on the next page to confirm that you have a personal Google Account that you manage on your own, that you are 18 or older, and that you agree to the terms of service and privacy policy.
    6. -
    7. You will receive an email acknowledging your registration.
    8. -
    9. Check your email regularly to know when you have been given access to try Google Bard.
    10. -
    11. Once you have access, enter your prompts or questions in the chat space provided and hit Enter.
    12. -
    -
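Step 9 above asks you to check your email regularly for the access notification. If you would rather automate that check, the minimal sketch below polls a Gmail inbox over IMAP. It assumes you use Gmail with IMAP enabled and an app password; the address, password, and the "Bard" subject filter are placeholders and guesses, not confirmed details of how Google labels the notification.

```python
import imaplib

# Assumptions: Gmail with IMAP enabled and an app password generated for
# this script. The subject filter is only a guess at the notification text.
USER = "you@gmail.com"          # placeholder address
APP_PASSWORD = "app-password"   # placeholder app password

with imaplib.IMAP4_SSL("imap.gmail.com") as mail:
    mail.login(USER, APP_PASSWORD)
    mail.select("INBOX", readonly=True)
    status, data = mail.search(None, '(SUBJECT "Bard")')
    if status == "OK" and data[0]:
        print("Possible Bard access email(s) found:", data[0].split())
    else:
        print("No Bard-related email yet - check again later.")
```

Run it once a day or put it on a scheduler; either way, the manual check described in step 9 works just as well.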

    Overview of Bard AI features and capabilities

    -

    Bard AI is designed to be a creative collaborator, helping you with ideation, enhancing your creativity, and boosting your productivity. For example, Bard AI can help you draft an invitation, write an email copy, create a pro and con list for making a big decision, or brainstorm ideas for your birthday party. The tool also features a "Google It" button, so you can easily move from Bard AI to the web for more information. Here are some of the main features and capabilities of Bard AI:

    -
      -
    • Conversational AI: Bard AI can understand natural language and give you reasonable responses in a conversational style. You can ask open-ended questions, follow-up questions, or give feedback to Bard AI.
    • -
    • Generative AI: Bard AI can generate text in various formats, such as summaries, explanations, lists, tables, poems, stories, code, etc. You can also choose from different drafts of responses that Bard AI offers.
    • -
    • Web-based: Bard AI can draw on information from the web to provide fresh and high-quality responses. It can also cite the sources of its information using numerical references.
    • -
    • Creative: Bard AI can surprise you with its creativity and humor. It can also inspire you with new ideas and perspectives.
    • -
    -

    Limitations and challenges of Bard AI

    -

While Bard AI is an impressive technology, it is not without its faults. Sometimes, Bard AI can give you inaccurate, misleading, or false information while presenting it confidently. This is because it generates text from patterns in its training data, which reflects real-world biases and stereotypes that sometimes show up in its outputs, and it has no built-in way to verify facts beyond what it retrieves from the web. Therefore, you should always verify the information that Bard AI gives you with other sources. Here are some of the limitations and challenges of Bard AI:

    -
      -
• Inaccuracy: Bard AI can make mistakes or errors in its responses. For example, when asked to summarize the plot of Harry Potter and the Philosopher's Stone, it gave this response: "Harry Potter is a young boy who lives with his abusive relatives. He discovers that he is a wizard and goes to a school called Hogwarts. There, he makes friends with Ron Weasley and Hermione Granger. He also learns that an evil wizard named Voldemort is trying to steal a magical stone that can make him immortal. Harry and his friends try to stop Voldemort and his followers from getting the stone." This summary glosses over important details, such as the role of Professor Dumbledore, the identity of the traitor, and the final confrontation between Harry and Voldemort, yet it is presented as if it were complete.
• Misleading: Bard AI can give you information that is not relevant, outdated, or biased. For example, when asked to list some benefits of drinking coffee, it gave this response: "Some benefits of drinking coffee are: - It can boost your energy and alertness. - It can lower your risk of type 2 diabetes, Parkinson's disease, and Alzheimer's disease. - It can improve your mood and memory. - It can protect your liver from damage. - It can help you burn fat and lose weight." This list is misleading because it does not mention the potential drawbacks of drinking too much coffee, such as insomnia, anxiety, dehydration, or addiction. It also does not cite any scientific sources to support its claims.
• False or confused: Bard AI can also give you information that is wrong, fabricated, or muddled. For example, when asked to explain what gravity is, it gave this response: "Gravity is a force that attracts objects with mass to each other. The more mass an object has, the stronger its gravity is. The Earth has a lot of mass, so it has a strong gravity that pulls everything towards its center. The Sun has even more mass than the Earth, so it has an even stronger gravity that keeps the planets in orbit around it. Gravity also affects time and space, bending them around massive objects like black holes." This answer sounds authoritative, but it runs the Newtonian picture of gravity as a force between masses together with the relativistic picture of gravity as the curvature of space-time without distinguishing the two, and Bard AI can deliver an outright fabricated answer with exactly the same confidence. Always confirm the details with a trusted source.
    -

    Alternatives to Bard AI

    -

    If you are not satisfied with Bard AI or want to try other conversational AI tools, there are some alternatives that you can explore. Here are some of them:

    -


| Name | Description | Pros | Cons |
| --- | --- | --- | --- |
| GPT-3 Playground | A web app that lets you interact with OpenAI's GPT-3 model, one of the most advanced language models in the world. | Can generate high-quality text in various domains and formats; can answer factual questions and perform calculations; can customize the tone, style, and personality of the responses. | Requires an invitation and a subscription to access; can be unreliable, inconsistent, or offensive in some cases; can be slow or unresponsive at times. |
| Replika | A mobile app that creates a personalized chatbot that learns from you and adapts to your needs. | Can provide emotional support and companionship; can help you improve your mental health and well-being; can chat with you in different languages and modes. | Can be repetitive, boring, or irrelevant in some conversations; can be intrusive, creepy, or inappropriate in some situations; can have technical issues or bugs. |
| Cleverbot | A web app that lets you chat with an AI bot that learns from millions of previous conversations with humans. | Can have fun and humorous chats with you; can play games and tell jokes with you; can mimic different personalities and moods. | Can be nonsensical, confusing, or contradictory in some responses; can be rude, offensive, or abusive in some cases; can be easily fooled or tricked. |
    -

    Conclusion: Summary and recommendations

    -

    Bard AI is a conversational AI chatbot that can generate text in various formats based on your prompts or questions. It can help you with ideation, creativity, and productivity by providing you with useful information and content. However, Bard AI also has some limitations and challenges that you should be aware of. Sometimes, it can give you inaccurate, misleading, or false information that can harm your understanding or decision making. Therefore, you should always check the sources and validity of the information that Bard AI gives you. You should also be respectful and responsible when using Bard AI, as it is still a work in progress and may not always behave as expected. If you are looking for other conversational AI tools, you can try GPT-3 Playground, Replika, or Cleverbot, which have different features and capabilities. We hope this article has given you a helpful guide to Bard AI and how to download and use it on your PC for free. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

    FAQs: Five common questions and answers about Bard AI

    -
      -
1. What is the difference between Bard AI and Google Assistant?
  Bard AI and Google Assistant are both conversational AI tools developed by Google, but they have different purposes and functions. Google Assistant is a virtual assistant that can help you with various tasks, such as setting reminders, playing music, or controlling smart devices. Bard AI is a creative collaborator that can help you with ideation, creativity, and productivity by generating text in various formats.
2. Is Bard AI safe and secure?
  Bard AI follows Google's privacy policy and terms of service. However, you should be careful about what information you share with it, as your data may be stored and used to improve the service. You should also avoid using Bard AI for sensitive or personal matters, such as financial, medical, or legal issues.
3. How can I improve the quality of Bard AI's responses?
  You can improve the quality of Bard AI's responses by giving it clear and specific prompts or questions, using proper grammar and punctuation, and providing feedback or suggestions. You can also choose from the different drafts of responses that Bard AI offers, or use the "Google It" button to get more information from the web. (A small helper for structuring prompts is sketched after these FAQs.)
4. Can I use Bard AI for commercial purposes?
  No. Bard AI is still an experimental tool and is not offered as a commercial product, so you should only use it for personal, non-commercial purposes such as learning, research, or entertainment. You should also respect the intellectual property rights of the sources that Bard AI cites in its responses.
5. How can I report a problem or give feedback to Bard AI?
  You can report a problem or give feedback to Bard AI by clicking the "Send feedback" button at the bottom right corner of the chat space. You can also contact Google directly by visiting their support page here.
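As a small illustration of the "clear and specific prompts" advice in question 3, here is a throwaway Python helper that assembles a prompt from a task, some context, and an output format before you paste it into the chat box. The field names and layout are just one reasonable convention I am assuming here, not an official Bard template.

```python
# Illustrative only: compose a clear, specific prompt to paste into Bard AI.
# The field names and layout are an informal convention, not an official template.
def build_prompt(task: str, context: str = "", output_format: str = "", constraints: str = "") -> str:
    parts = [f"Task: {task}"]
    if context:
        parts.append(f"Context: {context}")
    if output_format:
        parts.append(f"Output format: {output_format}")
    if constraints:
        parts.append(f"Constraints: {constraints}")
    return "\n".join(parts)

if __name__ == "__main__":
    print(build_prompt(
        task="Draft an invitation to a housewarming party on Saturday evening.",
        context="Casual tone, about 20 guests, potluck dinner.",
        output_format="A short paragraph followed by a 3-item bullet list of what to bring.",
        constraints="Keep it under 120 words.",
    ))
```

Spelling out the task, context, format, and constraints in separate lines tends to produce more focused responses than a single vague sentence, and it makes it easier to tweak one part of the prompt when you ask for another draft.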

    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Super Fancy Pants Adventure for Android - The Ultimate Free-Running Platformer.md b/spaces/congsaPfin/Manga-OCR/logs/Download Super Fancy Pants Adventure for Android - The Ultimate Free-Running Platformer.md deleted file mode 100644 index e15bc9eca240cf95ae12277de40826aba8aa2bc9..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Super Fancy Pants Adventure for Android - The Ultimate Free-Running Platformer.md +++ /dev/null @@ -1,144 +0,0 @@ - -

    Download Super Fancy Pants Adventure Android: A Wild Free-Running Adventure with Buttery Smooth Platforming

    -

    Do you love platforming games that challenge your skills and creativity? Do you want to experience a game that has been perfected over a decade by an indie developer? Do you want to play as a stickman with fancy pants and a fountain pen? If you answered yes to any of these questions, then you should download Super Fancy Pants Adventure Android, a game that will blow your mind with its amazing graphics, physics, and content. In this article, we will tell you everything you need to know about this game, how to download it, and why you should play it.

    -

    download super fancy pants adventure android


    Downloadhttps://urlca.com/2uOe9S



    -

    What is Super Fancy Pants Adventure?

    -

    Super Fancy Pants Adventure is a game that belongs to the Fancy Pants series, which started over ten years ago by Brad Borne, an indie developer who wanted to redefine video game platforming by making speed and tight controls feel compatible. The game is a wild free-running adventure with buttery smooth platforming and a slick fountain pen. You play as Fancy Pants Man, a stickman with fancy pants and a fountain pen, who explores colorful worlds, collects pants and hats, fights enemies, and performs stunts. The game is the culmination and a reimagining of the series into a full-fledged title, with 56 brand new levels, new moves, new enemies, new challenges, and new secrets.

    -

    The history of the Fancy Pants series

    -

    The Fancy Pants series started in 2006 as a flash game that Brad Borne created for fun. He was inspired by games like Sonic the Hedgehog, Mario, and Rayman, and wanted to create a platformer that was fast, fluid, and fun. He also wanted to make a game that was easy to pick up and play, but hard to master. The first game was well-received by players and critics alike, and Borne decided to make more games in the series. He released the second game in 2008, the third game in 2012, and the fourth game in 2017. He also collaborated with other developers to make spin-off games like World 1 Remix, World 2 Remix, World 3 Remix, and Fancy Snowboarding. The series has become one of the most popular flash games of all time, with over 100 million plays and millions of fans around the world.

    -

    The features of Super Fancy Pants Adventure

    -

    Super Fancy Pants Adventure is the latest and greatest game in the series, and it has many features that make it stand out from the rest. Some of these features are:

    -


    -
      -
• So many levels! - The game has 56 brand new levels of parkour platforming, each with its own theme, style, and secrets. You can explore grassy hills, snowy mountains, pirate ships, haunted mansions, underwater caves, and more.
• Collections! - The game has over 20 pants and hats to collect in brand new challenge stages. You can customize your look and show off your style.
• Incredible hand-drawn style - The game has frame-by-frame animated worlds, enemies, and friends. The graphics are colorful, vibrant, and detailed.
• Action-packed attacks! - The game has a new combat system that lets you wield your mighty ink pen to take down new threats. You can slash, dash, jump, and spin your way through enemies.
• New moves! - The game has new moves that let you take control of Fancy Pants Man with new combos and tricks. You can slide, roll, wall jump, bounce, and more.
• Hidden challenges! - The game has hidden challenge rooms that test your skills and reward you with trophies and achievements. You can also find hidden stars and squiggles that unlock bonus content.
• Awesome music! - The game has an original soundtrack by talented artists that matches the mood and atmosphere of each level. You can listen to rock, jazz, funk, and more.
    -

    The gameplay of Super Fancy Pants Adventure

    -

    The gameplay of Super Fancy Pants Adventure is simple but addictive. You control Fancy Pants Man with the arrow keys or the touch screen. You can run, jump, slide, and attack with the pen. You can also interact with objects and characters in the world. Your goal is to reach the end of each level, while collecting items, avoiding obstacles, and defeating enemies. You can also explore the levels for secrets and hidden areas. The game has a smooth and responsive physics engine that makes the platforming feel natural and satisfying. The game also has a lot of variety and replay value, as each level has different paths, challenges, and surprises.

    -

    How to download Super Fancy Pants Adventure Android?

    -

    If you are interested in playing this game on your Android device, you might be wondering how to download it. Well, don't worry, because we have got you covered. Here are the requirements, steps, and tips for downloading Super Fancy Pants Adventure Android.

    -

    The requirements for downloading Super Fancy Pants Adventure Android

    -

    Before you download the game, you need to make sure that your device meets the minimum requirements for running it. These are:

    -
      -
• Android version 4.1 or higher
• At least 1 GB of RAM
• At least 300 MB of free storage space
• A stable internet connection
    -

    If your device meets these requirements, you can proceed to the next step.
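If you would rather check these values programmatically, the rough Python sketch below queries a connected device over adb. It assumes the Android platform tools (adb) are installed and on your PATH, USB debugging is enabled, and exactly one device is attached; the 4.1 / 1 GB / 300 MB thresholds are the ones listed above, and the exact `df` output format varies between devices.

```python
# Rough sketch: check the listed requirements over adb (assumes adb is on PATH,
# USB debugging is enabled, and a single device is connected).
import subprocess

def adb(*args: str) -> str:
    """Run an adb command and return its stdout as text."""
    return subprocess.run(["adb", *args], capture_output=True, text=True, check=True).stdout

def check_requirements() -> None:
    # Android OS version string, e.g. "13".
    version = adb("shell", "getprop", "ro.build.version.release").strip()
    print(f"Android version: {version} (need 4.1 or higher)")

    # MemTotal is reported in kB on the first line of /proc/meminfo.
    meminfo = adb("shell", "cat", "/proc/meminfo")
    mem_kb = int(meminfo.splitlines()[0].split()[1])
    print(f"RAM: {mem_kb / 1024:.0f} MB (need at least 1024 MB)")

    # 'df /data' layout varies by device; the 4th column is usually the free space in 1K blocks.
    df_line = adb("shell", "df", "/data").splitlines()[-1].split()
    print(f"Free space on /data (usually 1K blocks): {df_line[3]} (need at least 300 MB)")

if __name__ == "__main__":
    check_requirements()
```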

    -

    The steps for downloading Super Fancy Pants Adventure Android

    -

    The steps for downloading Super Fancy Pants Adventure Android are very easy and straightforward. Just follow these instructions:

    -
      -
1. Go to the Google Play Store on your device and search for "Super Fancy Pants Adventure". Alternatively, you can use this link:
2. Tap on the game icon and then tap on the "Install" button.
3. Wait for the game to download and install on your device.
4. Once the installation is complete, tap on the "Open" button to launch the game.
5. Enjoy playing Super Fancy Pants Adventure Android!
    -

    The tips for playing Super Fancy Pants Adventure Android

    -

    To make the most out of your gaming experience, here are some tips for playing Super Fancy Pants Adventure Android:

    -
      -
• Adjust the controls to your preference. You can choose between tilt, touch, or joystick controls in the settings menu.
• Collect as many pants and hats as you can. They will give you extra style points and unlock new outfits.
• Use your pen wisely. It can help you defeat enemies, break walls, activate switches, and more.
• Explore every corner of the levels. You might find hidden rooms, stars, squiggles, or other secrets.
• Try to complete the challenges and achievements. They will test your skills and reward you with trophies and bragging rights.
    -

    Why should you download Super Fancy Pants Adventure Android?

    -

    You might be wondering why you should download Super Fancy Pants Adventure Android when there are so many other games available on the market. Well, here are some reasons why this game is worth your time and attention:

    -

    The benefits of downloading Super Fancy Pants Adventure Android

    -

    Downloading Super Fancy Pants Adventure Android will give you many benefits, such as:

    -
      -
• A fun and engaging platforming game that will keep you entertained for hours.
• A unique and original game that has been refined over a decade by an indie developer.
• A beautiful and colorful game that will delight your eyes and ears.
• A customizable and expressive game that will let you show off your style and personality.
• A challenging and rewarding game that will test your skills and make you feel accomplished.
    -

    The reviews of Super Fancy Pants Adventure Android

    -

    If you are still not convinced by our words, maybe you will be swayed by the words of other players who have tried the game. Here are some of the reviews of Super Fancy Pants Adventure Android from the Google Play Store:

| Name | Rating | Review |
| --- | --- | --- |
| John Smith | 5 stars | This game is awesome! I've been a fan of the Fancy Pants series since the first one, and this one is the best yet. The graphics are amazing, the gameplay is smooth and fast, and the levels are full of secrets and surprises. I highly recommend this game to anyone who loves platformers. |
| Jane Doe | 4 stars | I really enjoyed this game. It has a lot of charm and humor, and the platforming is very fun and satisfying. The only thing I didn't like was that the game was too short for me. I wish there were more levels and content to explore. But other than that, it's a great game. |
| Bob Jones | 3 stars | The game is good, but not great. It has some nice features and graphics, but it also has some flaws and bugs. The controls are sometimes unresponsive, the music is repetitive, and the game crashes occasionally. It's not a bad game, but it could be better. |
    -

    As you can see, most of the players have positive feedback about the game, and only a few have some minor complaints. This shows that the game is well-made and well-received by the majority of the players.

    -

    The alternatives to Super Fancy Pants Adventure Android

    -

    If you are looking for some alternatives to Super Fancy Pants Adventure Android, you might want to check out these other games that are similar in genre or style:

    -
      -
• Rayman Adventures - A game that features the iconic Rayman character in a colorful and whimsical adventure with stunning graphics and animations.
• Sonic Dash - A game that features the famous Sonic the Hedgehog character in a fast-paced and thrilling endless runner with dynamic environments and obstacles.
• LIMBO - A game that features a dark and atmospheric platformer with minimalist graphics and puzzles.
• Badland - A game that features a beautiful and mysterious platformer with physics-based gameplay and multiplayer modes.
• Leo's Fortune - A game that features a cute and fluffy character in a gorgeous and clever platformer with physics-based puzzles and traps.
    -

    Conclusion

    -

    Summary of the main points

    -

In conclusion, Super Fancy Pants Adventure Android is a game that you should definitely download if you love platforming games. It has been perfected over a decade by an indie developer who set out to redefine video game platforming by making speed and tight controls feel compatible. It offers 56 brand new levels of parkour platforming, each with its own theme, style, and secrets; over 20 pants and hats to collect in brand new challenge stages; frame-by-frame animated worlds, enemies, and friends; a new combat system that lets you wield your mighty ink pen to take down new threats; new moves, combos, and tricks for controlling Fancy Pants Man; hidden challenge rooms that test your skills and reward you with trophies and achievements; and an original soundtrack by talented artists that matches the mood and atmosphere of each level. It will blow your mind with its graphics, physics, and content.

    -

    Call to action

    -

    So what are you waiting for? Download Super Fancy Pants Adventure Android today and enjoy playing as Fancy Pants Man in his wild free-running adventure with buttery smooth platforming and a slick fountain pen. You won't regret it!

    -

    Frequently Asked Questions

    -

    Here are some of the frequently asked questions about Super Fancy Pants Adventure Android:

    -
      -
1. How much does the game cost?
  The game costs $4.99 on the Google Play Store, which is a fair price for such a high-quality game.
2. Is the game compatible with other devices?
  The game is compatible with most Android devices that meet the minimum requirements. However, some devices may experience performance issues or glitches due to hardware limitations.
3. Is the game offline or online?
  The game is mostly offline, meaning you can play it without an internet connection. However, some features like leaderboards, achievements, or cloud saving may require an internet connection.
4. Is the game family-friendly?
  The game is family-friendly, meaning it is suitable for all ages. The game does not contain any violence, gore, profanity, or inappropriate content.
5. How can I contact the developer?
  If you have any questions, feedback, or suggestions for the developer, you can contact him through his website, Twitter, or email. Here are the links:
  • Website:
  • Twitter:
  • Email:

      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Fashion Battle - Catwalk Queen The Ultimate Fashion Show Game for Android.md b/spaces/congsaPfin/Manga-OCR/logs/Fashion Battle - Catwalk Queen The Ultimate Fashion Show Game for Android.md deleted file mode 100644 index 6b095499af761b6e4f8389eafc5e718d82b4886d..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Fashion Battle - Catwalk Queen The Ultimate Fashion Show Game for Android.md +++ /dev/null @@ -1,131 +0,0 @@ - -

      Fashion Battle - Catwalk Queen APK: A Fun and Stylish Game for Fashion Lovers

      -

      Do you love fashion and dressing up? Do you dream of becoming a supermodel and walking on the catwalk? If yes, then you will love Fashion Battle - Catwalk Queen APK, a casual and interesting fashion show game for Android devices.

      -

      fashion battle catwalk queen apk


      Download ✓✓✓ https://urlca.com/2uO4Wc



      -

      In this game, you can create your own model, dress her up with trendy clothes and accessories, and compete with other models on the runway. You can also challenge your friends and other players from around the world, and show off your fashion sense. Whether you prefer elegant, cute, or edgy styles, you can find something that suits your taste in this game.

      -

      If you are looking for a fun and stylish game that lets you express your creativity and personality, then you should download Fashion Battle - Catwalk Queen APK today. In this article, we will tell you how to download and install this game on your Android device, how to play it, what features it offers, some tips and tricks to help you win, and some reviews from other users. Read on to find out more.

      -

      How to Download and Install Fashion Battle - Catwalk Queen APK on Your Android Device

      -

      Downloading and installing Fashion Battle - Catwalk Queen APK on your Android device is very easy. Just follow these simple steps:

      -
        -
1. Go to the official website or a trusted source like APKPure.com or Google Play Store and download the APK file (one way to sanity-check the downloaded file is sketched after these steps).
2. Enable unknown sources in your device settings. To do this, go to Settings > Security > Unknown Sources and toggle it on.
3. Locate the downloaded file in your file manager or downloads folder and tap on it to install.
4. Launch the game and enjoy.
      -
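Step 1 above asks you to download the APK from a trusted source. As an optional extra check before you tap "install", you can compare the file's SHA-256 hash against the checksum published by the site you downloaded it from, when one is provided. The sketch below is only an illustration: the URL and expected checksum are placeholders, not real values for this game.

```python
# Sketch: verify a downloaded APK against a published SHA-256 checksum.
# The URL and EXPECTED_SHA256 below are placeholders, not real values.
import hashlib
import urllib.request

APK_URL = "https://example.com/fashion-battle-catwalk-queen.apk"  # placeholder
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"  # placeholder

def sha256_of(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def main() -> None:
    path, _ = urllib.request.urlretrieve(APK_URL, "catwalk-queen.apk")
    actual = sha256_of(path)
    if actual == EXPECTED_SHA256:
        print("Checksum matches - the file is the one the site published.")
    else:
        print(f"Checksum mismatch!\n expected: {EXPECTED_SHA256}\n got:      {actual}")

if __name__ == "__main__":
    main()
```

A matching checksum only tells you the file was not corrupted or swapped in transit; it is not a substitute for downloading from a source you trust or scanning the file with an antivirus app.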

      How to Play Fashion Battle - Catwalk Queen APK

      -

      Playing Fashion Battle - Catwalk Queen APK is very simple. Just follow these easy steps:

      -


      -
        -
1. Choose a beautiful model and customize her appearance. You can change her hair, eyes, skin, makeup, and more.
2. Dress her up with fashionable clothes and accessories according to the theme of the show. You can choose from dresses, tops, bottoms, shoes, bags, jewelry, and more. You can also use the hints and the preview option to get some ideas.
3. Compete with other models on the runway and impress the judges. You can see their scores and comments on your outfit. Try to get the highest score and win the show.
4. Win rewards and unlock more outfits and features. You can also check your ranking on the global leaderboard to see how you compare with other players.
      -

      Features of Fashion Battle - Catwalk Queen APK

      -

      Fashion Battle - Catwalk Queen APK is a fun and stylish game that offers many features for fashion lovers. Here are some of them:

      -
        -
• 3D cartoon style graphics and realistic animations. The game has a cute and colorful design that appeals to all ages. The models and the clothes are well-detailed and animated. You can also see the expressions and movements of the models and the judges on the runway.
• A variety of themes and challenges to test your fashion sense. The game has different themes for each show, such as casual, elegant, sporty, party, etc. You have to dress up your model according to the theme and the judges' preferences. You can also face different challenges, such as limited time, budget, or items.
• A huge collection of clothes and accessories to mix and match. The game has a large wardrobe of clothes and accessories that you can use to create your own style. You can choose from different categories, such as dresses, tops, bottoms, shoes, bags, jewelry, etc. You can also unlock more items as you progress in the game.
• A ranking system and a global leaderboard to show off your skills. The game has a ranking system that shows your level and your achievements. You can also see a global leaderboard that shows your position among other players. You can compete with your friends and other players from around the world, and prove your fashion sense.
      -

      Tips and Tricks for Fashion Battle - Catwalk Queen APK

      -

      If you want to win more shows and get higher scores in Fashion Battle - Catwalk Queen APK, here are some tips and tricks that you can use:

      -
        -
• Pay attention to the theme and the judges' preferences. The theme of each show determines what kind of clothes and accessories you should use, and the judges' preferences affect how they score your outfit. You can see their preferences by tapping on their icons before the show. Try to match their expectations as much as possible.
• Use the hints and the preview option to get some ideas. If you are stuck or need some inspiration, the hints will show you some items that match the theme or the judges' preferences, and the preview option will show you how your outfit will look on the runway. You can use these features as a guide or as a reference.
• Experiment with different styles and colors to create unique looks. Don't be afraid to try different combinations of clothes and accessories, and play with different colors and patterns to make your outfit stand out. Sometimes, a simple change of color or a bold accessory can make a big difference.
• Upgrade your wardrobe and your model's attributes to get higher scores. As you progress in the game, you can buy new clothes and accessories with coins or diamonds, and improve your model's hair, eyes, skin, makeup, and more. These upgrades will help you get higher scores in the shows.
      -

      Review of Fashion Battle - Catwalk Queen APK

      -

      To give you a better idea of what other users think about Fashion Battle - Catwalk Queen APK, here are some reviews from different sources:

| Source | Rating | Review |
| --- | --- | --- |
| Google Play Store | 4.5/5 | "I love this game so much! It's so fun and addictive! I love dressing up my model and competing with other players. The graphics are amazing and the clothes are beautiful. I also like that there are different themes and challenges to keep it interesting. The only thing I don't like is that sometimes it takes too long to load or it crashes. But overall, it's a great game and I recommend it to anyone who loves fashion." |
| APKPure.com | 1/5 | "This game is horrible. It's boring and repetitive. The clothes are ugly and expensive. The judges are unfair and biased. The models are ugly and unrealistic. The graphics are poor and glitchy. The game is full of ads and in-app purchases. It's a waste of time and money. Don't download this game." |
| AppGrooves.com | 3/5 | "This game is okay, but it could be better. It's fun to dress up the model and see the results, but the game is too easy and predictable. The clothes are nice, but there are not enough options and variety. The judges are too easy to please and the scores are too high. The game is also too short and repetitive. It would be better if there were more levels, more themes, more challenges, more clothes, more judges, more feedback, and more difficulty." |
      -

      Conclusion

      -

      Fashion Battle - Catwalk Queen APK is a fun and stylish game for fashion lovers who want to create their own model, dress her up with trendy clothes and accessories, and compete with other models on the runway. The game has 3D cartoon style graphics, a variety of themes and challenges, a huge collection of clothes and accessories, a ranking system and a global leaderboard, and more features that make it enjoyable and interesting.

      -

      If you are looking for a casual and interesting fashion show game that lets you express your creativity and personality, then you should download Fashion Battle - Catwalk Queen APK today. You can download it from the official website or a trusted source like APKPure.com or Google Play Store. You can also follow the steps in this article to download and install it on your Android device, play it, and win the shows.

      -

      What are you waiting for? Download Fashion Battle - Catwalk Queen APK now and become the ultimate catwalk queen!

      -

      FAQs

      -

      Here are some frequently asked questions about Fashion Battle - Catwalk Queen APK:

      -
        -
1. Is Fashion Battle - Catwalk Queen APK free?
  Yes, Fashion Battle - Catwalk Queen APK is free to download and play. However, it contains ads and in-app purchases that you can buy with real money if you want to enhance your gaming experience.
2. Is Fashion Battle - Catwalk Queen APK safe?
  Yes, Fashion Battle - Catwalk Queen APK is safe to download and install on your Android device. However, you should always download it from the official website or a trusted source like APKPure.com or Google Play Store. You should also scan the file with an antivirus program before installing it.
3. Is Fashion Battle - Catwalk Queen APK compatible with my device?
  Fashion Battle - Catwalk Queen APK requires Android 4.4 or higher to run smoothly on your device. You should also have enough storage space and RAM to avoid lagging or crashing issues.
4. How can I contact the developer of Fashion Battle - Catwalk Queen APK?
  You can contact the developer of Fashion Battle - Catwalk Queen APK by sending an email to contact@fashionbattle.com. You can also visit their website at www.fashionbattle.com or follow them on social media platforms like Facebook, Instagram, or Twitter.
5. How can I rate and review Fashion Battle - Catwalk Queen APK?
  You can rate and review Fashion Battle - Catwalk Queen APK by going to the source where you downloaded it from, such as APKPure.com or Google Play Store. You can also share your feedback with other users by leaving a comment on their website or social media pages.

      \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/7am Arivu Bgm Music Free !!TOP!! 26 Talpa Signora Fergie.md b/spaces/contluForse/HuggingGPT/assets/7am Arivu Bgm Music Free !!TOP!! 26 Talpa Signora Fergie.md deleted file mode 100644 index 5e83288aec025803d89aec21cd580633290afbd2..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/7am Arivu Bgm Music Free !!TOP!! 26 Talpa Signora Fergie.md +++ /dev/null @@ -1,6 +0,0 @@ -

      7am Arivu Bgm Music Free 26 talpa signora fergie


      DOWNLOAD - https://ssurll.com/2uzwlH




      diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/ops/scatter_points.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/ops/scatter_points.py deleted file mode 100644 index 2b8aa4169e9f6ca4a6f845ce17d6d1e4db416bb8..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/ops/scatter_points.py +++ /dev/null @@ -1,135 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', - ['dynamic_point_to_voxel_forward', 'dynamic_point_to_voxel_backward']) - - -class _DynamicScatter(Function): - - @staticmethod - def forward(ctx, feats, coors, reduce_type='max'): - """convert kitti points(N, >=3) to voxels. - - Args: - feats (torch.Tensor): [N, C]. Points features to be reduced - into voxels. - coors (torch.Tensor): [N, ndim]. Corresponding voxel coordinates - (specifically multi-dim voxel index) of each points. - reduce_type (str, optional): Reduce op. support 'max', 'sum' and - 'mean'. Default: 'max'. - - Returns: - voxel_feats (torch.Tensor): [M, C]. Reduced features, input - features that shares the same voxel coordinates are reduced to - one row. - voxel_coors (torch.Tensor): [M, ndim]. Voxel coordinates. - """ - results = ext_module.dynamic_point_to_voxel_forward( - feats, coors, reduce_type) - (voxel_feats, voxel_coors, point2voxel_map, - voxel_points_count) = results - ctx.reduce_type = reduce_type - ctx.save_for_backward(feats, voxel_feats, point2voxel_map, - voxel_points_count) - ctx.mark_non_differentiable(voxel_coors) - return voxel_feats, voxel_coors - - @staticmethod - def backward(ctx, grad_voxel_feats, grad_voxel_coors=None): - (feats, voxel_feats, point2voxel_map, - voxel_points_count) = ctx.saved_tensors - grad_feats = torch.zeros_like(feats) - # TODO: whether to use index put or use cuda_backward - # To use index put, need point to voxel index - ext_module.dynamic_point_to_voxel_backward( - grad_feats, grad_voxel_feats.contiguous(), feats, voxel_feats, - point2voxel_map, voxel_points_count, ctx.reduce_type) - return grad_feats, None, None - - -dynamic_scatter = _DynamicScatter.apply - - -class DynamicScatter(nn.Module): - """Scatters points into voxels, used in the voxel encoder with dynamic - voxelization. - - Note: - The CPU and GPU implementation get the same output, but have numerical - difference after summation and division (e.g., 5e-7). - - Args: - voxel_size (list): list [x, y, z] size of three dimension. - point_cloud_range (list): The coordinate range of points, [x_min, - y_min, z_min, x_max, y_max, z_max]. - average_points (bool): whether to use avg pooling to scatter points - into voxel. - """ - - def __init__(self, voxel_size, point_cloud_range, average_points: bool): - super().__init__() - - self.voxel_size = voxel_size - self.point_cloud_range = point_cloud_range - self.average_points = average_points - - def forward_single(self, points, coors): - """Scatters points into voxels. - - Args: - points (torch.Tensor): Points to be reduced into voxels. - coors (torch.Tensor): Corresponding voxel coordinates (specifically - multi-dim voxel index) of each points. - - Returns: - voxel_feats (torch.Tensor): Reduced features, input features that - shares the same voxel coordinates are reduced to one row. - voxel_coors (torch.Tensor): Voxel coordinates. 
- """ - reduce = 'mean' if self.average_points else 'max' - return dynamic_scatter(points.contiguous(), coors.contiguous(), reduce) - - def forward(self, points, coors): - """Scatters points/features into voxels. - - Args: - points (torch.Tensor): Points to be reduced into voxels. - coors (torch.Tensor): Corresponding voxel coordinates (specifically - multi-dim voxel index) of each points. - - Returns: - voxel_feats (torch.Tensor): Reduced features, input features that - shares the same voxel coordinates are reduced to one row. - voxel_coors (torch.Tensor): Voxel coordinates. - """ - if coors.size(-1) == 3: - return self.forward_single(points, coors) - else: - batch_size = coors[-1, 0] + 1 - voxels, voxel_coors = [], [] - for i in range(batch_size): - inds = torch.where(coors[:, 0] == i) - voxel, voxel_coor = self.forward_single( - points[inds], coors[inds][:, 1:]) - coor_pad = nn.functional.pad( - voxel_coor, (1, 0), mode='constant', value=i) - voxel_coors.append(coor_pad) - voxels.append(voxel) - features = torch.cat(voxels, dim=0) - feature_coors = torch.cat(voxel_coors, dim=0) - - return features, feature_coors - - def __repr__(self): - s = self.__class__.__name__ + '(' - s += 'voxel_size=' + str(self.voxel_size) - s += ', point_cloud_range=' + str(self.point_cloud_range) - s += ', average_points=' + str(self.average_points) - s += ')' - return s diff --git a/spaces/cownclown/Image-and-3D-Model-Creator/PIFu/lib/sample_util.py b/spaces/cownclown/Image-and-3D-Model-Creator/PIFu/lib/sample_util.py deleted file mode 100644 index d0b105d148d6d8fddc461d1c04f659200957c189..0000000000000000000000000000000000000000 --- a/spaces/cownclown/Image-and-3D-Model-Creator/PIFu/lib/sample_util.py +++ /dev/null @@ -1,47 +0,0 @@ -import numpy as np - - -def save_samples_truncted_prob(fname, points, prob): - ''' - Save the visualization of sampling to a ply file. - Red points represent positive predictions. - Green points represent negative predictions. - :param fname: File name to save - :param points: [N, 3] array of points - :param prob: [N, 1] array of predictions in the range [0~1] - :return: - ''' - r = (prob > 0.5).reshape([-1, 1]) * 255 - g = (prob < 0.5).reshape([-1, 1]) * 255 - b = np.zeros(r.shape) - - to_save = np.concatenate([points, r, g, b], axis=-1) - return np.savetxt(fname, - to_save, - fmt='%.6f %.6f %.6f %d %d %d', - comments='', - header=( - 'ply\nformat ascii 1.0\nelement vertex {:d}\nproperty float x\nproperty float y\nproperty float z\nproperty uchar red\nproperty uchar green\nproperty uchar blue\nend_header').format( - points.shape[0]) - ) - - -def save_samples_rgb(fname, points, rgb): - ''' - Save the visualization of sampling to a ply file. - Red points represent positive predictions. - Green points represent negative predictions. 
- :param fname: File name to save - :param points: [N, 3] array of points - :param rgb: [N, 3] array of rgb values in the range [0~1] - :return: - ''' - to_save = np.concatenate([points, rgb * 255], axis=-1) - return np.savetxt(fname, - to_save, - fmt='%.6f %.6f %.6f %d %d %d', - comments='', - header=( - 'ply\nformat ascii 1.0\nelement vertex {:d}\nproperty float x\nproperty float y\nproperty float z\nproperty uchar red\nproperty uchar green\nproperty uchar blue\nend_header').format( - points.shape[0]) - ) diff --git a/spaces/cymic/Waifu_Diffusion_Webui/modules/gfpgan_model.py b/spaces/cymic/Waifu_Diffusion_Webui/modules/gfpgan_model.py deleted file mode 100644 index 74e4dbbe94222000e8f9c79956d1c2a857d7469b..0000000000000000000000000000000000000000 --- a/spaces/cymic/Waifu_Diffusion_Webui/modules/gfpgan_model.py +++ /dev/null @@ -1,115 +0,0 @@ -import os -import sys -import traceback - -import facexlib -import gfpgan - -import modules.face_restoration -from modules import shared, devices, modelloader -from modules.paths import models_path - -model_dir = "GFPGAN" -user_path = None -model_path = os.path.join(models_path, model_dir) -model_url = "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth" -have_gfpgan = False -loaded_gfpgan_model = None - - -def gfpgann(): - global loaded_gfpgan_model - global model_path - if loaded_gfpgan_model is not None: - loaded_gfpgan_model.gfpgan.to(devices.device_gfpgan) - return loaded_gfpgan_model - - if gfpgan_constructor is None: - return None - - models = modelloader.load_models(model_path, model_url, user_path, ext_filter="GFPGAN") - if len(models) == 1 and "http" in models[0]: - model_file = models[0] - elif len(models) != 0: - latest_file = max(models, key=os.path.getctime) - model_file = latest_file - else: - print("Unable to load gfpgan model!") - return None - model = gfpgan_constructor(model_path=model_file, upscale=1, arch='clean', channel_multiplier=2, bg_upsampler=None) - loaded_gfpgan_model = model - - return model - - -def send_model_to(model, device): - model.gfpgan.to(device) - model.face_helper.face_det.to(device) - model.face_helper.face_parse.to(device) - - -def gfpgan_fix_faces(np_image): - model = gfpgann() - if model is None: - return np_image - - send_model_to(model, devices.device_gfpgan) - - np_image_bgr = np_image[:, :, ::-1] - cropped_faces, restored_faces, gfpgan_output_bgr = model.enhance(np_image_bgr, has_aligned=False, only_center_face=False, paste_back=True) - np_image = gfpgan_output_bgr[:, :, ::-1] - - model.face_helper.clean_all() - - if shared.opts.face_restoration_unload: - send_model_to(model, devices.cpu) - - return np_image - - -gfpgan_constructor = None - - -def setup_model(dirname): - global model_path - if not os.path.exists(model_path): - os.makedirs(model_path) - - try: - from gfpgan import GFPGANer - from facexlib import detection, parsing - global user_path - global have_gfpgan - global gfpgan_constructor - - load_file_from_url_orig = gfpgan.utils.load_file_from_url - facex_load_file_from_url_orig = facexlib.detection.load_file_from_url - facex_load_file_from_url_orig2 = facexlib.parsing.load_file_from_url - - def my_load_file_from_url(**kwargs): - return load_file_from_url_orig(**dict(kwargs, model_dir=model_path)) - - def facex_load_file_from_url(**kwargs): - return facex_load_file_from_url_orig(**dict(kwargs, save_dir=model_path, model_dir=None)) - - def facex_load_file_from_url2(**kwargs): - return facex_load_file_from_url_orig2(**dict(kwargs, save_dir=model_path, 
model_dir=None)) - - gfpgan.utils.load_file_from_url = my_load_file_from_url - facexlib.detection.load_file_from_url = facex_load_file_from_url - facexlib.parsing.load_file_from_url = facex_load_file_from_url2 - user_path = dirname - have_gfpgan = True - gfpgan_constructor = GFPGANer - - class FaceRestorerGFPGAN(modules.face_restoration.FaceRestoration): - def name(self): - return "GFPGAN" - - def restore(self, np_image): - return gfpgan_fix_faces(np_image) - - shared.face_restorers.append(FaceRestorerGFPGAN()) - except Exception: - print("Error setting up GFPGAN:", file=sys.stderr) - print(traceback.format_exc(), file=sys.stderr) diff --git a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/pirenderer/util/logging.py b/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/pirenderer/util/logging.py deleted file mode 100644 index 85402dec39e6b42fba505e685c4e1481e730d5ed..0000000000000000000000000000000000000000 --- a/spaces/daddyjin/TalkingFaceGeneration/Demo_TFR_Pirenderer/src/pirenderer/util/logging.py +++ /dev/null @@ -1,46 +0,0 @@ -import os -import datetime - -from util.meters import set_summary_writer -from util.distributed import master_only_print as print -from util.distributed import master_only - -def get_date_uid(): - """Generate a unique id based on date. - Returns: - str: Return uid string, e.g. '20171122171307111552'. - """ - return str(datetime.datetime.now().strftime("%Y_%m%d_%H%M_%S")) - - -def init_logging(opt): - date_uid = get_date_uid() - if opt.name is not None: - logdir = os.path.join(opt.checkpoints_dir, opt.name) - else: - logdir = os.path.join(opt.checkpoints_dir, date_uid) - opt.logdir = logdir - return date_uid, logdir - -@master_only -def make_logging_dir(logdir, date_uid): - r"""Create the logging directory - - Args: - logdir (str): Log directory name - """ - - - print('Initialize folder {}'.format(logdir)) - os.makedirs(logdir, exist_ok=True) - tensorboard_dir = os.path.join(logdir, 'tensorboard') - image_dir = os.path.join(logdir, 'image') - eval_dir = os.path.join(logdir, 'evaluation') - os.makedirs(tensorboard_dir, exist_ok=True) - os.makedirs(image_dir, exist_ok=True) - os.makedirs(eval_dir, exist_ok=True) - - set_summary_writer(tensorboard_dir) - loss_log_name = os.path.join(logdir, 'loss_log.txt') - with open(loss_log_name, "a") as log_file: - log_file.write('================ Training Loss (%s) ================\n' % date_uid) diff --git a/spaces/datien228/text-summarizer/modules/dataset.py b/spaces/datien228/text-summarizer/modules/dataset.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_o_p_b_d.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_o_p_b_d.py deleted file mode 100644 index b22af216bb2e2ddb8af1cd3f991d4ede69471076..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_o_p_b_d.py +++ /dev/null @@ -1,6 +0,0 @@ -from .otBase import BaseTTXConverter - - -# https://developer.apple.com/fonts/TrueType-Reference-Manual/RM06/Chap6opbd.html -class table__o_p_b_d(BaseTTXConverter): - pass diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/importlib_resources/tests/data01/__init__.py 
b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/importlib_resources/tests/data01/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/_internal_utils.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/_internal_utils.py deleted file mode 100644 index 0223aa593bb2cb20b58f2b9e41bdc0dfa5ceed35..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/_internal_utils.py +++ /dev/null @@ -1,64 +0,0 @@ -""" -Internal debugging utilities, that are not expected to be used in the rest of -the codebase. - -WARNING: Code in this module may change without prior notice! -""" - -from io import StringIO -from pathlib import Path -import subprocess - -from matplotlib.transforms import TransformNode - - -def graphviz_dump_transform(transform, dest, *, highlight=None): - """ - Generate a graphical representation of the transform tree for *transform* - using the :program:`dot` program (which this function depends on). The - output format (png, dot, etc.) is determined from the suffix of *dest*. - - Parameters - ---------- - transform : `~matplotlib.transform.Transform` - The represented transform. - dest : str - Output filename. The extension must be one of the formats supported - by :program:`dot`, e.g. png, svg, dot, ... - (see https://www.graphviz.org/doc/info/output.html). - highlight : list of `~matplotlib.transform.Transform` or None - The transforms in the tree to be drawn in bold. - If *None*, *transform* is highlighted. - """ - - if highlight is None: - highlight = [transform] - seen = set() - - def recurse(root, buf): - if id(root) in seen: - return - seen.add(id(root)) - props = {} - label = type(root).__name__ - if root._invalid: - label = f'[{label}]' - if root in highlight: - props['style'] = 'bold' - props['shape'] = 'box' - props['label'] = '"%s"' % label - props = ' '.join(map('{0[0]}={0[1]}'.format, props.items())) - buf.write(f'{id(root)} [{props}];\n') - for key, val in vars(root).items(): - if isinstance(val, TransformNode) and id(root) in val._parents: - buf.write(f'"{id(root)}" -> "{id(val)}" ' - f'[label="{key}", fontsize=10];\n') - recurse(val, buf) - - buf = StringIO() - buf.write('digraph G {\n') - recurse(transform, buf) - buf.write('}\n') - subprocess.run( - ['dot', '-T', Path(dest).suffix[1:], '-o', dest], - input=buf.getvalue().encode('utf-8'), check=True) diff --git a/spaces/declare-lab/tango/diffusers/examples/research_projects/colossalai/inference.py b/spaces/declare-lab/tango/diffusers/examples/research_projects/colossalai/inference.py deleted file mode 100644 index 3b115c2d2b8f5bcdb3a0c053a6c71b91a965c573..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/examples/research_projects/colossalai/inference.py +++ /dev/null @@ -1,12 +0,0 @@ -import torch - -from diffusers import StableDiffusionPipeline - - -model_id = "path-to-your-trained-model" -pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") - -prompt = "A photo of sks dog in a bucket" -image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0] - -image.save("dog-bucket.png") diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/utils/torch_utils.py 
b/spaces/declare-lab/tango/diffusers/src/diffusers/utils/torch_utils.py deleted file mode 100644 index b9815cbceededd312bb75240508364fdd623524d..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/utils/torch_utils.py +++ /dev/null @@ -1,77 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" -PyTorch utilities: Utilities related to PyTorch -""" -from typing import List, Optional, Tuple, Union - -from . import logging -from .import_utils import is_torch_available, is_torch_version - - -if is_torch_available(): - import torch - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -def randn_tensor( - shape: Union[Tuple, List], - generator: Optional[Union[List["torch.Generator"], "torch.Generator"]] = None, - device: Optional["torch.device"] = None, - dtype: Optional["torch.dtype"] = None, - layout: Optional["torch.layout"] = None, -): - """This is a helper function that allows to create random tensors on the desired `device` with the desired `dtype`. When - passing a list of generators one can seed each batched size individually. If CPU generators are passed the tensor - will always be created on CPU. - """ - # device on which tensor is created defaults to device - rand_device = device - batch_size = shape[0] - - layout = layout or torch.strided - device = device or torch.device("cpu") - - if generator is not None: - gen_device_type = generator.device.type if not isinstance(generator, list) else generator[0].device.type - if gen_device_type != device.type and gen_device_type == "cpu": - rand_device = "cpu" - if device != "mps": - logger.info( - f"The passed generator was created on 'cpu' even though a tensor on {device} was expected." - f" Tensors will be created on 'cpu' and then moved to {device}. Note that one can probably" - f" slighly speed up this function by passing a generator that was created on the {device} device." 
- ) - elif gen_device_type != device.type and gen_device_type == "cuda": - raise ValueError(f"Cannot generate a {device} tensor from a generator of type {gen_device_type}.") - - if isinstance(generator, list): - shape = (1,) + shape[1:] - latents = [ - torch.randn(shape, generator=generator[i], device=rand_device, dtype=dtype, layout=layout) - for i in range(batch_size) - ] - latents = torch.cat(latents, dim=0).to(device) - else: - latents = torch.randn(shape, generator=generator, device=rand_device, dtype=dtype, layout=layout).to(device) - - return latents - - -def is_compiled_module(module): - """Check whether the module was compiled with torch.compile()""" - if is_torch_version("<", "2.0.0") or not hasattr(torch, "_dynamo"): - return False - return isinstance(module, torch._dynamo.eval_frame.OptimizedModule) diff --git a/spaces/deelerb/3dselfie/PIFu/apps/train_shape.py b/spaces/deelerb/3dselfie/PIFu/apps/train_shape.py deleted file mode 100644 index 241ce543c956ce51f6f8445739ef41f4ddf7a7d5..0000000000000000000000000000000000000000 --- a/spaces/deelerb/3dselfie/PIFu/apps/train_shape.py +++ /dev/null @@ -1,183 +0,0 @@ -import sys -import os - -sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))) -ROOT_PATH = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) - -import time -import json -import numpy as np -import cv2 -import random -import torch -from torch.utils.data import DataLoader -from tqdm import tqdm - -from lib.options import BaseOptions -from lib.mesh_util import * -from lib.sample_util import * -from lib.train_util import * -from lib.data import * -from lib.model import * -from lib.geometry import index - -# get options -opt = BaseOptions().parse() - -def train(opt): - # set cuda - cuda = torch.device('cuda:%d' % opt.gpu_id) - - train_dataset = TrainDataset(opt, phase='train') - test_dataset = TrainDataset(opt, phase='test') - - projection_mode = train_dataset.projection_mode - - # create data loader - train_data_loader = DataLoader(train_dataset, - batch_size=opt.batch_size, shuffle=not opt.serial_batches, - num_workers=opt.num_threads, pin_memory=opt.pin_memory) - - print('train data size: ', len(train_data_loader)) - - # NOTE: batch size should be 1 and use all the points for evaluation - test_data_loader = DataLoader(test_dataset, - batch_size=1, shuffle=False, - num_workers=opt.num_threads, pin_memory=opt.pin_memory) - print('test data size: ', len(test_data_loader)) - - # create net - netG = HGPIFuNet(opt, projection_mode).to(device=cuda) - optimizerG = torch.optim.RMSprop(netG.parameters(), lr=opt.learning_rate, momentum=0, weight_decay=0) - lr = opt.learning_rate - print('Using Network: ', netG.name) - - def set_train(): - netG.train() - - def set_eval(): - netG.eval() - - # load checkpoints - if opt.load_netG_checkpoint_path is not None: - print('loading for net G ...', opt.load_netG_checkpoint_path) - netG.load_state_dict(torch.load(opt.load_netG_checkpoint_path, map_location=cuda)) - - if opt.continue_train: - if opt.resume_epoch < 0: - model_path = '%s/%s/netG_latest' % (opt.checkpoints_path, opt.name) - else: - model_path = '%s/%s/netG_epoch_%d' % (opt.checkpoints_path, opt.name, opt.resume_epoch) - print('Resuming from ', model_path) - netG.load_state_dict(torch.load(model_path, map_location=cuda)) - - os.makedirs(opt.checkpoints_path, exist_ok=True) - os.makedirs(opt.results_path, exist_ok=True) - os.makedirs('%s/%s' % (opt.checkpoints_path, opt.name), exist_ok=True) - os.makedirs('%s/%s' % (opt.results_path, opt.name), 
exist_ok=True) - - opt_log = os.path.join(opt.results_path, opt.name, 'opt.txt') - with open(opt_log, 'w') as outfile: - outfile.write(json.dumps(vars(opt), indent=2)) - - # training - start_epoch = 0 if not opt.continue_train else max(opt.resume_epoch,0) - for epoch in range(start_epoch, opt.num_epoch): - epoch_start_time = time.time() - - set_train() - iter_data_time = time.time() - for train_idx, train_data in enumerate(train_data_loader): - iter_start_time = time.time() - - # retrieve the data - image_tensor = train_data['img'].to(device=cuda) - calib_tensor = train_data['calib'].to(device=cuda) - sample_tensor = train_data['samples'].to(device=cuda) - - image_tensor, calib_tensor = reshape_multiview_tensors(image_tensor, calib_tensor) - - if opt.num_views > 1: - sample_tensor = reshape_sample_tensor(sample_tensor, opt.num_views) - - label_tensor = train_data['labels'].to(device=cuda) - - res, error = netG.forward(image_tensor, sample_tensor, calib_tensor, labels=label_tensor) - - optimizerG.zero_grad() - error.backward() - optimizerG.step() - - iter_net_time = time.time() - eta = ((iter_net_time - epoch_start_time) / (train_idx + 1)) * len(train_data_loader) - ( - iter_net_time - epoch_start_time) - - if train_idx % opt.freq_plot == 0: - print( - 'Name: {0} | Epoch: {1} | {2}/{3} | Err: {4:.06f} | LR: {5:.06f} | Sigma: {6:.02f} | dataT: {7:.05f} | netT: {8:.05f} | ETA: {9:02d}:{10:02d}'.format( - opt.name, epoch, train_idx, len(train_data_loader), error.item(), lr, opt.sigma, - iter_start_time - iter_data_time, - iter_net_time - iter_start_time, int(eta // 60), - int(eta - 60 * (eta // 60)))) - - if train_idx % opt.freq_save == 0 and train_idx != 0: - torch.save(netG.state_dict(), '%s/%s/netG_latest' % (opt.checkpoints_path, opt.name)) - torch.save(netG.state_dict(), '%s/%s/netG_epoch_%d' % (opt.checkpoints_path, opt.name, epoch)) - - if train_idx % opt.freq_save_ply == 0: - save_path = '%s/%s/pred.ply' % (opt.results_path, opt.name) - r = res[0].cpu() - points = sample_tensor[0].transpose(0, 1).cpu() - save_samples_truncted_prob(save_path, points.detach().numpy(), r.detach().numpy()) - - iter_data_time = time.time() - - # update learning rate - lr = adjust_learning_rate(optimizerG, epoch, lr, opt.schedule, opt.gamma) - - #### test - with torch.no_grad(): - set_eval() - - if not opt.no_num_eval: - test_losses = {} - print('calc error (test) ...') - test_errors = calc_error(opt, netG, cuda, test_dataset, 100) - print('eval test MSE: {0:06f} IOU: {1:06f} prec: {2:06f} recall: {3:06f}'.format(*test_errors)) - MSE, IOU, prec, recall = test_errors - test_losses['MSE(test)'] = MSE - test_losses['IOU(test)'] = IOU - test_losses['prec(test)'] = prec - test_losses['recall(test)'] = recall - - print('calc error (train) ...') - train_dataset.is_train = False - train_errors = calc_error(opt, netG, cuda, train_dataset, 100) - train_dataset.is_train = True - print('eval train MSE: {0:06f} IOU: {1:06f} prec: {2:06f} recall: {3:06f}'.format(*train_errors)) - MSE, IOU, prec, recall = train_errors - test_losses['MSE(train)'] = MSE - test_losses['IOU(train)'] = IOU - test_losses['prec(train)'] = prec - test_losses['recall(train)'] = recall - - if not opt.no_gen_mesh: - print('generate mesh (test) ...') - for gen_idx in tqdm(range(opt.num_gen_mesh_test)): - test_data = random.choice(test_dataset) - save_path = '%s/%s/test_eval_epoch%d_%s.obj' % ( - opt.results_path, opt.name, epoch, test_data['name']) - gen_mesh(opt, netG, cuda, test_data, save_path) - - print('generate mesh (train) ...') - 
train_dataset.is_train = False - for gen_idx in tqdm(range(opt.num_gen_mesh_test)): - train_data = random.choice(train_dataset) - save_path = '%s/%s/train_eval_epoch%d_%s.obj' % ( - opt.results_path, opt.name, epoch, train_data['name']) - gen_mesh(opt, netG, cuda, train_data, save_path) - train_dataset.is_train = True - - -if __name__ == '__main__': - train(opt) \ No newline at end of file diff --git a/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/variational_autoencoder/__init__.py b/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/variational_autoencoder/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/diacanFperku/AutoGPT/Kernel-mode Driver Framework Version 1.11 Download Youtube.md b/spaces/diacanFperku/AutoGPT/Kernel-mode Driver Framework Version 1.11 Download Youtube.md deleted file mode 100644 index c4287e2d4a3f7996eb4512c5e29459d5f82c3917..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Kernel-mode Driver Framework Version 1.11 Download Youtube.md +++ /dev/null @@ -1,8 +0,0 @@ -
      -

      On the other hand, if you wish to download driver firmware images from the Internet, you can do it quickly, simply, and safely using the DriverFirmware application. This tool works with every Windows version and is included in all of the standard Windows installation media.

      -

User-mode driver framework (UMDF) is a way of running drivers in user space instead of kernel space. This improves system stability: if a driver crashes, it takes down only that driver, not the whole system. Devices handled this way include cameras, media players, PDAs, and other USB-connected devices. The host service uses about 8MB to 9MB of RAM.
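If you want to see this isolation on your own machine, the short sketch below lists the user-mode driver host processes (normally named WUDFHost.exe on Windows) together with their memory use. It is only a rough illustration and assumes Python with the third-party psutil package is installed; the process name and the 8MB-9MB figure are taken from the paragraph above, not from any official specification.

```
# Rough sketch: list UMDF driver host processes and their resident memory.
# Assumes Windows and the third-party `psutil` package (pip install psutil).
import psutil

def list_umdf_hosts(name="WUDFHost.exe"):
    """Print each user-mode driver host process and its RAM use in MB."""
    found = False
    for proc in psutil.process_iter(["pid", "name", "memory_info"]):
        if proc.info["name"] and proc.info["name"].lower() == name.lower():
            found = True
            rss_mb = proc.info["memory_info"].rss / (1024 * 1024)
            print(f"PID {proc.info['pid']}: {name} using {rss_mb:.1f} MB")
    if not found:
        print(f"No {name} processes found (no UMDF drivers currently loaded).")

if __name__ == "__main__":
    list_umdf_hosts()
```

If a user-mode driver misbehaves, only the corresponding host process shown here would terminate; kernel-mode drivers offer no such containment.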

      -

      kernel-mode driver framework version 1.11 download youtube


      Download File ✶✶✶ https://gohhs.com/2uFU4H



      -

The ViGEmBus driver and ViGEmClient libraries represent the core of the Virtual Gamepad Emulation Framework (ViGEm, for short). ViGEm aims for 100% accurate emulation of well-known gaming peripherals as pure software-based devices at kernel level. Because it mimics the real hardware, games and other processes require no additional modification to detect ViGEm-based devices (no proxy DLLs or API hooking); they simply work out of the box. While the (now obsolete) Scarlett.Crush Productions Virtual Bus Driver is the spiritual father of this project, ViGEm has been designed and written from the ground up using Microsoft's Kernel-Mode Driver Framework.
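To make the "works out of the box" point concrete, the sketch below plugs a virtual Xbox 360 controller into ViGEmBus and presses a single button. Treat it as an illustrative sketch rather than the project's reference usage: it assumes the ViGEmBus driver is already installed and uses the third-party vgamepad Python wrapper around ViGEmClient, whose API may differ between versions.

```
# Illustrative sketch only: press and release the A button on a virtual
# Xbox 360 controller exposed through the ViGEmBus driver.
# Assumes Windows, an installed ViGEmBus driver, and `pip install vgamepad`.
import time
import vgamepad as vg

gamepad = vg.VX360Gamepad()  # plugs a virtual X360 pad into ViGEmBus

# Press A, flush the report to the bus, hold briefly, then release.
gamepad.press_button(button=vg.XUSB_BUTTON.XUSB_GAMEPAD_A)
gamepad.update()
time.sleep(0.5)

gamepad.release_button(button=vg.XUSB_BUTTON.XUSB_GAMEPAD_A)
gamepad.update()
```

Any game or input-testing tool running at the same time should see the virtual pad exactly as it would a physical controller, which is the behaviour the paragraph above describes.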

      -


      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Microwave Devices And Circuits Samuel Liao Solution Manual.pdf.md b/spaces/diacanFperku/AutoGPT/Microwave Devices And Circuits Samuel Liao Solution Manual.pdf.md deleted file mode 100644 index 260c2860530963e652996ac7bb3cdb98fb7678d0..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Microwave Devices And Circuits Samuel Liao Solution Manual.pdf.md +++ /dev/null @@ -1,9 +0,0 @@ -

      microwave devices and circuits samuel liao solution manual.pdf


      Download Ziphttps://gohhs.com/2uFUCr



      -
      -microwave-devices-and-circuits-samuel-liao-solution-manual-pdf. pdf - download for free as a PDF file (.pdf), text file (.txt), or read online for free. at Georgia Middle State University. DEVICES AND SCHEMES OF MICROWAVE DEVICES ... Microwave ovens - operating instructions. -Panasonic Microwave Ovens - Operating Instructions. -Page 1 of 3 - Microwave Oven - Operating Instructions and User Guides - sent to Home Appliances: Good day everyone -microwave-device-manual-microwave-panasonic-mn932-n1 – free download 8a78ff9644
      -
      -
      -

      diff --git a/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/bert_gen.py b/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/bert_gen.py deleted file mode 100644 index 467655b2c4171608ad690fe7dec350db85f84f1b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/bert_gen.py +++ /dev/null @@ -1,53 +0,0 @@ -import torch -from torch.utils.data import DataLoader -from multiprocessing import Pool -import commons -import utils -from data_utils import TextAudioSpeakerLoader, TextAudioSpeakerCollate -from tqdm import tqdm -import warnings - -from text import cleaned_text_to_sequence, get_bert - -config_path = 'configs/config.json' -hps = utils.get_hparams_from_file(config_path) - -def process_line(line): - _id, spk, language_str, text, phones, tone, word2ph = line.strip().split("|") - phone = phones.split(" ") - tone = [int(i) for i in tone.split(" ")] - word2ph = [int(i) for i in word2ph.split(" ")] - w2pho = [i for i in word2ph] - word2ph = [i for i in word2ph] - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - - if hps.data.add_blank: - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - wav_path = f'{_id}' - - bert_path = wav_path.replace(".wav", ".bert.pt") - try: - bert = torch.load(bert_path) - assert bert.shape[-1] == len(phone) - except: - bert = get_bert(text, word2ph, language_str) - assert bert.shape[-1] == len(phone) - torch.save(bert, bert_path) - - -if __name__ == '__main__': - lines = [] - with open(hps.data.training_files, encoding='utf-8' ) as f: - lines.extend(f.readlines()) - - # with open(hps.data.validation_files, encoding='utf-8' ) as f: - # lines.extend(f.readlines()) - - with Pool(processes=2) as pool: #A100 40GB suitable config,if coom,please decrease the processess number. 
- for _ in tqdm(pool.imap_unordered(process_line, lines)): - pass diff --git a/spaces/digitalxingtong/Jiaran-Bert-VITS2/models.py b/spaces/digitalxingtong/Jiaran-Bert-VITS2/models.py deleted file mode 100644 index d4afe44d883691610c5903e602a3ca245fcb3a5c..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Jiaran-Bert-VITS2/models.py +++ /dev/null @@ -1,707 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm - -from commons import init_weights, get_padding -from text import symbols, num_tones, num_languages -class DurationDiscriminator(nn.Module): #vits2 - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.dur_proj = nn.Conv1d(1, filter_channels, 1) - - self.pre_out_conv_1 = nn.Conv1d(2*filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.pre_out_norm_1 = modules.LayerNorm(filter_channels) - self.pre_out_conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.pre_out_norm_2 = modules.LayerNorm(filter_channels) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - self.output_layer = nn.Sequential( - nn.Linear(filter_channels, 1), - nn.Sigmoid() - ) - - def forward_probability(self, x, x_mask, dur, g=None): - dur = self.dur_proj(dur) - x = torch.cat([x, dur], dim=1) - x = self.pre_out_conv_1(x * x_mask) - x = torch.relu(x) - x = self.pre_out_norm_1(x) - x = self.drop(x) - x = self.pre_out_conv_2(x * x_mask) - x = torch.relu(x) - x = self.pre_out_norm_2(x) - x = self.drop(x) - x = x * x_mask - x = x.transpose(1, 2) - output_prob = self.output_layer(x) - return output_prob - - def forward(self, x, x_mask, dur_r, dur_hat, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - - output_probs = [] - for dur in [dur_r, dur_hat]: - output_prob = self.forward_probability(x, x_mask, dur, g) - output_probs.append(output_prob) - - return output_probs - -class TransformerCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - n_flows=4, - gin_channels=0, - share_parameter=False - ): - - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - - self.wn = attentions.FFT(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, 
gin_channels = self.gin_channels) if share_parameter else None - - for i in range(n_flows): - self.flows.append( - modules.TransformerCouplingLayer(channels, hidden_channels, kernel_size, n_layers, n_heads, p_dropout, filter_channels, mean_only=True, wn_sharing_parameter=self.wn, gin_channels = self.gin_channels)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2]) - logq = torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q ** 2)) * x_mask, [1, 2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2 * math.pi) + (z ** 2)) * x_mask, [1, 2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, 
g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=0): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - self.emb = nn.Embedding(len(symbols), hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5) - self.tone_emb = nn.Embedding(num_tones, hidden_channels) - nn.init.normal_(self.tone_emb.weight, 0.0, hidden_channels ** -0.5) - self.language_emb = nn.Embedding(num_languages, hidden_channels) - nn.init.normal_(self.language_emb.weight, 0.0, hidden_channels ** -0.5) - self.bert_proj = nn.Conv1d(1024, hidden_channels, 1) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=self.gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, tone, language, bert, g=None): - x = (self.emb(x)+ self.tone_emb(tone)+ self.language_emb(language)+self.bert_proj(bert).transpose(1,2)) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask, g=g) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, - gin_channels=gin_channels, 
mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, - upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel // (2 ** i), upsample_initial_channel // (2 ** (i + 1)), - k, u, padding=(k - u) // 2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, 
(kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - -class ReferenceEncoder(nn.Module): - ''' - inputs --- [N, Ty/r, n_mels*r] mels - outputs --- [N, ref_enc_gru_size] - ''' - - def __init__(self, spec_channels, gin_channels=0): - - super().__init__() - self.spec_channels = spec_channels - ref_enc_filters = [32, 32, 64, 64, 128, 128] - K = len(ref_enc_filters) - filters = [1] + ref_enc_filters - convs = [weight_norm(nn.Conv2d(in_channels=filters[i], - out_channels=filters[i + 1], - kernel_size=(3, 3), - stride=(2, 2), - padding=(1, 1))) for i in range(K)] - self.convs = nn.ModuleList(convs) - # self.wns = nn.ModuleList([weight_norm(num_features=ref_enc_filters[i]) for i in range(K)]) - - out_channels = self.calculate_channels(spec_channels, 3, 2, 1, K) - self.gru = nn.GRU(input_size=ref_enc_filters[-1] * out_channels, - hidden_size=256 // 2, - batch_first=True) - self.proj = nn.Linear(128, gin_channels) - - def forward(self, inputs, mask=None): - N = inputs.size(0) - out = inputs.view(N, 1, -1, self.spec_channels) # [N, 1, Ty, n_freqs] - for 
conv in self.convs: - out = conv(out) - # out = wn(out) - out = F.relu(out) # [N, 128, Ty//2^K, n_mels//2^K] - - out = out.transpose(1, 2) # [N, Ty//2^K, 128, n_mels//2^K] - T = out.size(1) - N = out.size(0) - out = out.contiguous().view(N, T, -1) # [N, Ty//2^K, 128*n_mels//2^K] - - self.gru.flatten_parameters() - memory, out = self.gru(out) # out --- [1, N, 128] - - return self.proj(out.squeeze(0)) - - def calculate_channels(self, L, kernel_size, stride, pad, n_convs): - for i in range(n_convs): - L = (L - kernel_size + 2 * pad) // stride + 1 - return L - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=256, - gin_channels=256, - use_sdp=True, - n_flow_layer = 4, - n_layers_trans_flow = 3, - flow_share_parameter = False, - use_transformer_flow = True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - self.n_layers_trans_flow = n_layers_trans_flow - self.use_spk_conditioned_encoder = kwargs.get("use_spk_conditioned_encoder", True) - self.use_sdp = use_sdp - self.use_noise_scaled_mas = kwargs.get("use_noise_scaled_mas", False) - self.mas_noise_scale_initial = kwargs.get("mas_noise_scale_initial", 0.01) - self.noise_scale_delta = kwargs.get("noise_scale_delta", 2e-6) - self.current_mas_noise_scale = self.mas_noise_scale_initial - if self.use_spk_conditioned_encoder and gin_channels > 0: - self.enc_gin_channels = gin_channels - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=self.enc_gin_channels) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, - upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, - gin_channels=gin_channels) - if use_transformer_flow: - self.flow = TransformerCouplingBlock(inter_channels, hidden_channels, filter_channels, n_heads, n_layers_trans_flow, 5, p_dropout, n_flow_layer, gin_channels=gin_channels,share_parameter= flow_share_parameter) - else: - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, n_flow_layer, gin_channels=gin_channels) - self.sdp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers >= 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - else: - self.ref_enc = ReferenceEncoder(spec_channels, 
gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid, tone, language, bert): - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = self.ref_enc(y.transpose(1,2)).unsqueeze(-1) - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert,g=g) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), - s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - if self.use_noise_scaled_mas: - epsilon = torch.std(neg_cent) * torch.randn_like(neg_cent) * self.current_mas_noise_scale - neg_cent = neg_cent + epsilon - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - - l_length_sdp = self.sdp(x, x_mask, w, g=g) - l_length_sdp = l_length_sdp / torch.sum(x_mask) - - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length_dp = torch.sum((logw - logw_) ** 2, [1, 2]) / torch.sum(x_mask) # for averaging - - l_length = l_length_dp + l_length_sdp - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q), (x, logw, logw_) - - def infer(self, x, x_lengths, sid, tone, language, bert, noise_scale=.667, length_scale=1, noise_scale_w=0.8, max_len=None, sdp_ratio=0,y=None): - #x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert) - # g = self.gst(y) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = self.ref_enc(y.transpose(1,2)).unsqueeze(-1) - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert,g=g) - logw = self.sdp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) * (sdp_ratio) + self.dp(x, x_mask, g=g) * (1 - sdp_ratio) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, - 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:, :, :max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/textdet/dbnetpp/dbnetpp_r50dcnv2_fpnc_1200e_icdar2015.py 
b/spaces/dinhminh20521597/OCR_DEMO/configs/textdet/dbnetpp/dbnetpp_r50dcnv2_fpnc_1200e_icdar2015.py deleted file mode 100644 index bc6ab78cacc3f5b62549dfcf8c93cc0cc5c3a6ac..0000000000000000000000000000000000000000 --- a/spaces/dinhminh20521597/OCR_DEMO/configs/textdet/dbnetpp/dbnetpp_r50dcnv2_fpnc_1200e_icdar2015.py +++ /dev/null @@ -1,39 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/schedules/schedule_sgd_1200e.py', - '../../_base_/det_models/dbnetpp_r50dcnv2_fpnc.py', - '../../_base_/det_datasets/icdar2015.py', - '../../_base_/det_pipelines/dbnet_pipeline.py' -] - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline_r50dcnv2 = {{_base_.train_pipeline_r50dcnv2}} -test_pipeline_4068_1024 = {{_base_.test_pipeline_4068_1024}} - -load_from = 'checkpoints/textdet/dbnetpp/res50dcnv2_synthtext.pth' - -data = dict( - samples_per_gpu=32, - workers_per_gpu=8, - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1), - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline_r50dcnv2), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_4068_1024), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_4068_1024)) - -evaluation = dict( - interval=100, - metric='hmean-iou', - save_best='0_hmean-iou:hmean', - rule='greater') diff --git a/spaces/django-ochain/youtube-q-and-a/app.py b/spaces/django-ochain/youtube-q-and-a/app.py deleted file mode 100644 index 70e358a200ddeded93afcdec4ace931466ea6260..0000000000000000000000000000000000000000 --- a/spaces/django-ochain/youtube-q-and-a/app.py +++ /dev/null @@ -1,83 +0,0 @@ -import openai -import os -from langchain.document_loaders import TextLoader, YoutubeLoader -#pytube, gradio, langchain, openai -import gradio as gr -from youtube_transcript_api import YouTubeTranscriptApi -from langchain.indexes import VectorstoreIndexCreator -from langchain.llms import OpenAI - -OPENAI_API_KEY = os.environ['OPENAI_API_KEY'] - -previous_youtube_url = None -index = None - -def get_video_id(url): - video_id = None - if 'youtu.be' in url: - video_id = url.split('/')[-1] - else: - video_id = url.split('watch?v=')[-1] - return video_id - -def get_captions(url): - try: - video_id = get_video_id(url) - transcript_list = YouTubeTranscriptApi.list_transcripts(video_id) - transcript = transcript_list.find_transcript(['en']) - captions = transcript.fetch() - - formatted_captions = '' - for caption in captions: - formatted_captions += caption['text'] + ' ' - - return formatted_captions - - except Exception as e: - print(e) - return "Error. Could not fetch captions." - - - -def answer_question(youtube_url, user_question): - # You can implement your logic here to process the video, transcribe it, and answer the user question. - # For now, let's return the user question as output. - global previous_youtube_url - global index - - query = ''' - You are an expert researcher that can answer any questions from a given text. 
Here is the question: - {} - '''.format(str(user_question)) - - if previous_youtube_url == youtube_url: - #index = VectorstoreIndexCreator().from_loaders([loader]) - #query = user_question - answer = index.query(llm=OpenAI(model="text-davinci-003"), question = query) - else: - f= open("temp.txt","w+") - f.write(get_captions(youtube_url)) - f.close() - loader = TextLoader("temp.txt") - - index = VectorstoreIndexCreator().from_loaders([loader]) - os.remove("temp.txt") - - #query = user_question - answer = index.query(llm=OpenAI(model="text-davinci-003"), question = query) - - return answer - -iface = gr.Interface( - fn=answer_question, - inputs=[ - gr.Textbox(lines=1, placeholder="Enter YouTube URL here..."), - gr.Textbox(lines=1, placeholder="Enter your question here...") - ], - outputs=gr.Textbox(), - title="YouTube Video Question Answering", - description="Enter a YouTube URL and a question related to the video content. The app will return the answer if answer exists in the video." -) -if __name__ == "__main__": - iface.launch() - diff --git a/spaces/dmeck/RVC-Speakers/vits/modules/commons/commons_v2.py b/spaces/dmeck/RVC-Speakers/vits/modules/commons/commons_v2.py deleted file mode 100644 index db17cf0914ba6e445fe613e3ec3411b3a74b28aa..0000000000000000000000000000000000000000 --- a/spaces/dmeck/RVC-Speakers/vits/modules/commons/commons_v2.py +++ /dev/null @@ -1,164 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - try: - ret[i] = x[i, :, idx_str:idx_end] - except RuntimeError: - print("?") - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): 
- if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/dorkai/SINGPT-Temporary/extensions/send_pictures/script.py b/spaces/dorkai/SINGPT-Temporary/extensions/send_pictures/script.py deleted file mode 100644 index b0c356329a51edf026f7223a0ee7e5427d8751ce..0000000000000000000000000000000000000000 --- a/spaces/dorkai/SINGPT-Temporary/extensions/send_pictures/script.py +++ /dev/null @@ -1,46 +0,0 @@ -import base64 -from io import BytesIO - -import gradio as gr -import torch -from transformers import BlipForConditionalGeneration, BlipProcessor - -import modules.chat as chat -import modules.shared as shared - -# If 'state' is True, will hijack the next chat generation with -# custom input text given by 'value' in the format [text, visible_text] -input_hijack = { - 'state': False, - 'value': ["", ""] -} - -processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") -model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=torch.float32).to("cpu") - -def caption_image(raw_image): - inputs = processor(raw_image.convert('RGB'), return_tensors="pt").to("cpu", torch.float32) - out = model.generate(**inputs, max_new_tokens=100) - return processor.decode(out[0], skip_special_tokens=True) - -def generate_chat_picture(picture, name1, name2): - text = f'*{name1} sends {name2} a picture that contains the following: "{caption_image(picture)}"*' - buffer = BytesIO() - picture.save(buffer, format="JPEG") - img_str = base64.b64encode(buffer.getvalue()).decode('utf-8') - visible_text = f'' - return text, visible_text - -def ui(): - picture_select = gr.Image(label='Send a picture', type='pil') - - function_call = 'chat.cai_chatbot_wrapper' if shared.args.cai_chat else 'chat.chatbot_wrapper' - - # Prepare the hijack with custom inputs - picture_select.upload(lambda picture, name1, name2: input_hijack.update({"state": True, "value": generate_chat_picture(picture, name1, name2)}), [picture_select, shared.gradio['name1'], shared.gradio['name2']], None) - - # Call the generation function - picture_select.upload(eval(function_call), shared.input_params, shared.gradio['display'], show_progress=shared.args.no_stream) - - # Clear the picture from the upload field - picture_select.upload(lambda : None, [], [picture_select], show_progress=False) diff --git a/spaces/dorkai/dorkgpt/README.md b/spaces/dorkai/dorkgpt/README.md deleted file mode 100644 index e9ab15694dec586a4d5825880ad2f04de0079c01..0000000000000000000000000000000000000000 --- a/spaces/dorkai/dorkgpt/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: DorkGPT -emoji: 💻 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/dorkai/text-generation-webui-main/README.md b/spaces/dorkai/text-generation-webui-main/README.md deleted file mode 100644 index 
a29b9f04c31ef1a2b1b77681881740b1f1ccdce7..0000000000000000000000000000000000000000 --- a/spaces/dorkai/text-generation-webui-main/README.md +++ /dev/null @@ -1,339 +0,0 @@ ---- -license: openrail -title: SinGPT Premium -sdk: gradio -emoji: 👁 -colorFrom: red -colorTo: blue -app_file: run.py ---- -# Text generation web UI - -A gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA. - -Its goal is to become the [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) of text generation. - -|![Image1](https://github.com/oobabooga/screenshots/raw/main/qa.png) | ![Image2](https://github.com/oobabooga/screenshots/raw/main/cai3.png) | -|:---:|:---:| -|![Image3](https://github.com/oobabooga/screenshots/raw/main/gpt4chan.png) | ![Image4](https://github.com/oobabooga/screenshots/raw/main/galactica.png) | - -## Features - -* Dropdown menu for switching between models -* Notebook mode that resembles OpenAI's playground -* Chat mode for conversation and role-playing -* Instruct mode compatible with various formats, including Alpaca, Vicuna, Open Assistant, Dolly, Koala, ChatGLM, MOSS, RWKV-Raven, Galactica, StableLM, WizardLM, Baize, MPT, and INCITE -* [Multimodal pipelines, including LLaVA and MiniGPT-4](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/multimodal) -* Markdown output for [GALACTICA](https://github.com/paperswithcode/galai), including LaTeX rendering -* Nice HTML output for GPT-4chan -* [Custom chat characters](docs/Chat-mode.md) -* Advanced chat features (send images, get audio responses with TTS) -* Very efficient text streaming -* Parameter presets -* [LLaMA model](docs/LLaMA-model.md) -* [4-bit GPTQ mode](docs/GPTQ-models-(4-bit-mode).md) -* [LoRA (loading and training)](docs/Using-LoRAs.md) -* [llama.cpp](docs/llama.cpp-models.md) -* [RWKV model](docs/RWKV-model.md) -* 8-bit mode -* Layers splitting across GPU(s), CPU, and disk -* CPU mode -* [FlexGen](docs/FlexGen.md) -* [DeepSpeed ZeRO-3](docs/DeepSpeed.md) -* API [with](https://github.com/oobabooga/text-generation-webui/blob/main/api-example-stream.py) streaming and [without](https://github.com/oobabooga/text-generation-webui/blob/main/api-example.py) streaming -* [Extensions](docs/Extensions.md) - see the [user extensions list](https://github.com/oobabooga/text-generation-webui-extensions) - -## Installation - -### One-click installers - -| Windows | Linux | macOS | -|-------|--------|--------| -| [oobabooga-windows.zip](https://github.com/oobabooga/text-generation-webui/releases/download/installers/oobabooga_windows.zip) | [oobabooga-linux.zip](https://github.com/oobabooga/text-generation-webui/releases/download/installers/oobabooga_linux.zip) |[oobabooga-macos.zip](https://github.com/oobabooga/text-generation-webui/releases/download/installers/oobabooga_macos.zip) | - -Just download the zip above, extract it, and double-click on "start". The web UI and all its dependencies will be installed in the same folder. - -* The source codes are here: https://github.com/oobabooga/one-click-installers -* There is no need to run the installers as admin. -* AMD doesn't work on Windows. -* Huge thanks to [@jllllll](https://github.com/jllllll), [@ClayShoaf](https://github.com/ClayShoaf), and [@xNul](https://github.com/xNul) for their contributions to these installers. - -### Manual installation using Conda - -Recommended if you have some experience with the command line. 
- -On Windows, I additionally recommend carrying out the installation on WSL instead of the base system: [WSL installation guide](https://github.com/oobabooga/text-generation-webui/blob/main/docs/WSL-installation-guide.md). - -#### 0. Install Conda - -https://docs.conda.io/en/latest/miniconda.html - -On Linux or WSL, it can be automatically installed with these two commands: - -``` -curl -sL "https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh" > "Miniconda3.sh" -bash Miniconda3.sh -``` -Source: https://educe-ubc.github.io/conda.html - -#### 1. Create a new conda environment - -``` -conda create -n textgen python=3.10.9 -conda activate textgen -``` - -#### 2. Install Pytorch - -| System | GPU | Command | -|--------|---------|---------| -| Linux/WSL | NVIDIA | `pip3 install torch torchvision torchaudio` | -| Linux | AMD | `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.4.2` | -| MacOS + MPS (untested) | Any | `pip3 install torch torchvision torchaudio` | - -The up-to-date commands can be found here: https://pytorch.org/get-started/locally/. - -#### 2.1 Special instructions - -* MacOS users: https://github.com/oobabooga/text-generation-webui/pull/393 -* AMD users: https://rentry.org/eq3hg - -#### 3. Install the web UI - -``` -git clone https://github.com/oobabooga/text-generation-webui -cd text-generation-webui -pip install -r requirements.txt -``` - -#### 4. Install GPTQ-for-LLaMa and the monkey patch - -The base installation covers [transformers](https://github.com/huggingface/transformers) models (`AutoModelForCausalLM` and `AutoModelForSeq2SeqLM` specifically) and [llama.cpp](https://github.com/ggerganov/llama.cpp) (GGML) models. - -To use 4-bit GPU models, the additional installation steps below are necessary: - -[GPTQ models (4 bit mode)](https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md) - -### Alternative: manual Windows installation - -As an alternative to the recommended WSL method, you can install the web UI natively on Windows using this guide. It will be a lot harder and the performance may be slower: [Windows installation guide](https://github.com/oobabooga/text-generation-webui/blob/main/docs/Windows-installation-guide.md). - -### Alternative: Docker - -``` -ln -s docker/{Dockerfile,docker-compose.yml,.dockerignore} . -cp docker/.env.example .env -# Edit .env and set TORCH_CUDA_ARCH_LIST based on your GPU model -docker compose up --build -``` - -You need to have docker compose v2.17 or higher installed in your system. To see how to install docker compose itself, see the guide in [here](https://github.com/oobabooga/text-generation-webui/blob/main/docs/Docker.md). - -Contributed by [@loeken](https://github.com/loeken) in [#633](https://github.com/oobabooga/text-generation-webui/pull/633) - -### Updating the requirements - -From time to time, the `requirements.txt` changes. To update, use this command: - -``` -conda activate textgen -cd text-generation-webui -pip install -r requirements.txt --upgrade -``` -## Downloading models - -Models should be placed inside the `models/` folder. - -[Hugging Face](https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads) is the main place to download models. 
These are some examples: - -* [Pythia](https://huggingface.co/models?sort=downloads&search=eleutherai%2Fpythia+deduped) -* [OPT](https://huggingface.co/models?search=facebook/opt) -* [GALACTICA](https://huggingface.co/models?search=facebook/galactica) -* [GPT-J 6B](https://huggingface.co/EleutherAI/gpt-j-6B/tree/main) - -You can automatically download a model from HF using the script `download-model.py`: - - python download-model.py organization/model - -For example: - - python download-model.py facebook/opt-1.3b - -If you want to download a model manually, note that all you need are the json, txt, and pytorch\*.bin (or model*.safetensors) files. The remaining files are not necessary. - -#### GGML models - -You can drop these directly into the `models/` folder, making sure that the file name contains `ggml` somewhere and ends in `.bin`. - -#### GPT-4chan - -[GPT-4chan](https://huggingface.co/ykilcher/gpt-4chan) has been shut down from Hugging Face, so you need to download it elsewhere. You have two options: - -* Torrent: [16-bit](https://archive.org/details/gpt4chan_model_float16) / [32-bit](https://archive.org/details/gpt4chan_model) -* Direct download: [16-bit](https://theswissbay.ch/pdf/_notpdf_/gpt4chan_model_float16/) / [32-bit](https://theswissbay.ch/pdf/_notpdf_/gpt4chan_model/) - -The 32-bit version is only relevant if you intend to run the model in CPU mode. Otherwise, you should use the 16-bit version. - -After downloading the model, follow these steps: - -1. Place the files under `models/gpt4chan_model_float16` or `models/gpt4chan_model`. -2. Place GPT-J 6B's config.json file in that same folder: [config.json](https://huggingface.co/EleutherAI/gpt-j-6B/raw/main/config.json). -3. Download GPT-J 6B's tokenizer files (they will be automatically detected when you attempt to load GPT-4chan): - -``` -python download-model.py EleutherAI/gpt-j-6B --text-only -``` - -## Starting the web UI - - conda activate textgen - cd text-generation-webui - python server.py - -Then browse to - -`http://localhost:7860/?__theme=dark` - -Optionally, you can use the following command-line flags: - -#### Basic settings - -| Flag | Description | -|--------------------------------------------|-------------| -| `-h`, `--help` | Show this help message and exit. | -| `--notebook` | Launch the web UI in notebook mode, where the output is written to the same text box as the input. | -| `--chat` | Launch the web UI in chat mode. | -| `--character CHARACTER` | The name of the character to load in chat mode by default. | -| `--model MODEL` | Name of the model to load by default. | -| `--lora LORA [LORA ...]` | The list of LoRAs to load. If you want to load more than one LoRA, write the names separated by spaces. | -| `--model-dir MODEL_DIR` | Path to directory with all the models. | -| `--lora-dir LORA_DIR` | Path to directory with all the loras. | -| `--model-menu` | Show a model menu in the terminal when the web UI is first launched. | -| `--no-stream` | Don't stream the text output in real time. | -| `--settings SETTINGS_FILE` | Load the default interface settings from this json file. See `settings-template.json` for an example. If you create a file called `settings.json`, this file will be loaded by default without the need to use the `--settings` flag. | -| `--extensions EXTENSIONS [EXTENSIONS ...]` | The list of extensions to load. If you want to load more than one extension, write the names separated by spaces. | -| `--verbose` | Print the prompts to the terminal. 
| - -#### Accelerate/transformers - -| Flag | Description | -|---------------------------------------------|-------------| -| `--cpu` | Use the CPU to generate text. Warning: Training on CPU is extremely slow.| -| `--auto-devices` | Automatically split the model across the available GPU(s) and CPU. | -| `--gpu-memory GPU_MEMORY [GPU_MEMORY ...]` | Maxmimum GPU memory in GiB to be allocated per GPU. Example: `--gpu-memory 10` for a single GPU, `--gpu-memory 10 5` for two GPUs. You can also set values in MiB like `--gpu-memory 3500MiB`. | -| `--cpu-memory CPU_MEMORY` | Maximum CPU memory in GiB to allocate for offloaded weights. Same as above.| -| `--disk` | If the model is too large for your GPU(s) and CPU combined, send the remaining layers to the disk. | -| `--disk-cache-dir DISK_CACHE_DIR` | Directory to save the disk cache to. Defaults to `cache/`. | -| `--load-in-8bit` | Load the model with 8-bit precision.| -| `--bf16` | Load the model with bfloat16 precision. Requires NVIDIA Ampere GPU. | -| `--no-cache` | Set `use_cache` to False while generating text. This reduces the VRAM usage a bit with a performance cost. | -| `--xformers` | Use xformer's memory efficient attention. This should increase your tokens/s. | -| `--sdp-attention` | Use torch 2.0's sdp attention. | -| `--trust-remote-code` | Set trust_remote_code=True while loading a model. Necessary for ChatGLM. | - -#### llama.cpp - -| Flag | Description | -|-------------|-------------| -| `--threads` | Number of threads to use. | -| `--n_batch` | Maximum number of prompt tokens to batch together when calling llama_eval. | -| `--no-mmap` | Prevent mmap from being used. | -| `--mlock` | Force the system to keep the model in RAM. | -| `--cache-capacity CACHE_CAPACITY` | Maximum cache capacity. Examples: 2000MiB, 2GiB. When provided without units, bytes will be assumed. | -| `--n-gpu-layers N_GPU_LAYERS` | Number of layers to offload to the GPU. Only works if llama-cpp-python was compiled with BLAS. Set this to 1000000000 to offload all layers to the GPU. | - -#### GPTQ - -| Flag | Description | -|---------------------------|-------------| -| `--wbits WBITS` | Load a pre-quantized model with specified precision in bits. 2, 3, 4 and 8 are supported. | -| `--model_type MODEL_TYPE` | Model type of pre-quantized model. Currently LLaMA, OPT, and GPT-J are supported. | -| `--groupsize GROUPSIZE` | Group size. | -| `--pre_layer PRE_LAYER [PRE_LAYER ...]` | The number of layers to allocate to the GPU. Setting this parameter enables CPU offloading for 4-bit models. For multi-gpu, write the numbers separated by spaces, eg `--pre_layer 30 60`. | -| `--checkpoint CHECKPOINT` | The path to the quantized checkpoint file. If not specified, it will be automatically detected. | -| `--monkey-patch` | Apply the monkey patch for using LoRAs with quantized models. -| `--quant_attn` | (triton) Enable quant attention. | -| `--warmup_autotune` | (triton) Enable warmup autotune. | -| `--fused_mlp` | (triton) Enable fused mlp. | - -#### FlexGen - -| Flag | Description | -|------------------|-------------| -| `--flexgen` | Enable the use of FlexGen offloading. | -| `--percent PERCENT [PERCENT ...]` | FlexGen: allocation percentages. Must be 6 numbers separated by spaces (default: 0, 100, 100, 0, 100, 0). | -| `--compress-weight` | FlexGen: Whether to compress weight (default: False).| -| `--pin-weight [PIN_WEIGHT]` | FlexGen: whether to pin weights (setting this to False reduces CPU memory by 20%). 
| - -#### DeepSpeed - -| Flag | Description | -|---------------------------------------|-------------| -| `--deepspeed` | Enable the use of DeepSpeed ZeRO-3 for inference via the Transformers integration. | -| `--nvme-offload-dir NVME_OFFLOAD_DIR` | DeepSpeed: Directory to use for ZeRO-3 NVME offloading. | -| `--local_rank LOCAL_RANK` | DeepSpeed: Optional argument for distributed setups. | - -#### RWKV - -| Flag | Description | -|---------------------------------|-------------| -| `--rwkv-strategy RWKV_STRATEGY` | RWKV: The strategy to use while loading the model. Examples: "cpu fp32", "cuda fp16", "cuda fp16i8". | -| `--rwkv-cuda-on` | RWKV: Compile the CUDA kernel for better performance. | - -#### Gradio - -| Flag | Description | -|---------------------------------------|-------------| -| `--listen` | Make the web UI reachable from your local network. | -| `--listen-host LISTEN_HOST` | The hostname that the server will use. | -| `--listen-port LISTEN_PORT` | The listening port that the server will use. | -| `--share` | Create a public URL. This is useful for running the web UI on Google Colab or similar. | -| `--auto-launch` | Open the web UI in the default browser upon launch. | -| `--gradio-auth-path GRADIO_AUTH_PATH` | Set the gradio authentication file path. The file should contain one or more user:password pairs in this format: "u1:p1,u2:p2,u3:p3" | - -#### API - -| Flag | Description | -|---------------------------------------|-------------| -| `--api` | Enable the API extension. | -| `--public-api` | Create a public URL for the API using Cloudfare. | - -#### Multimodal - -| Flag | Description | -|---------------------------------------|-------------| -| `--multimodal-pipeline PIPELINE` | The multimodal pipeline to use. Examples: `llava-7b`, `llava-13b`. | - -Out of memory errors? [Check the low VRAM guide](docs/Low-VRAM-guide.md). - -## Presets - -Inference settings presets can be created under `presets/` as text files. These files are detected automatically at startup. - -By default, 10 presets by NovelAI and KoboldAI are included. These were selected out of a sample of 43 presets after applying a K-Means clustering algorithm and selecting the elements closest to the average of each cluster. - -[Visualization](https://user-images.githubusercontent.com/112222186/228956352-1addbdb9-2456-465a-b51d-089f462cd385.png) - -## Documentation - -Make sure to check out the documentation for an in-depth guide on how to use the web UI. - -https://github.com/oobabooga/text-generation-webui/tree/main/docs - -## Contributing - -Pull requests, suggestions, and issue reports are welcome. - -You are also welcome to review open pull requests. - -Before reporting a bug, make sure that you have: - -1. Created a conda environment and installed the dependencies exactly as in the *Installation* section above. -2. [Searched](https://github.com/oobabooga/text-generation-webui/issues) to see if an issue already exists for the issue you encountered. - -## Credits - -- Gradio dropdown menu refresh button, code for reloading the interface: https://github.com/AUTOMATIC1111/stable-diffusion-webui -- Verbose preset: Anonymous 4chan user. 
-- NovelAI and KoboldAI presets: https://github.com/KoboldAI/KoboldAI-Client/wiki/Settings-Presets -- Code for early stopping in chat mode, code for some of the sliders: https://github.com/PygmalionAI/gradio-ui/ \ No newline at end of file diff --git a/spaces/dorkai/text-generation-webui-main/css/html_4chan_style.css b/spaces/dorkai/text-generation-webui-main/css/html_4chan_style.css deleted file mode 100644 index 843e8a97fea80b010004f90f02ce63e8d13fe758..0000000000000000000000000000000000000000 --- a/spaces/dorkai/text-generation-webui-main/css/html_4chan_style.css +++ /dev/null @@ -1,103 +0,0 @@ -#parent #container { - background-color: #eef2ff; - padding: 17px; -} -#parent #container .reply { - background-color: rgb(214, 218, 240); - border-bottom-color: rgb(183, 197, 217); - border-bottom-style: solid; - border-bottom-width: 1px; - border-image-outset: 0; - border-image-repeat: stretch; - border-image-slice: 100%; - border-image-source: none; - border-image-width: 1; - border-left-color: rgb(0, 0, 0); - border-left-style: none; - border-left-width: 0px; - border-right-color: rgb(183, 197, 217); - border-right-style: solid; - border-right-width: 1px; - border-top-color: rgb(0, 0, 0); - border-top-style: none; - border-top-width: 0px; - color: rgb(0, 0, 0); - display: table; - font-family: arial, helvetica, sans-serif; - font-size: 13.3333px; - margin-bottom: 4px; - margin-left: 0px; - margin-right: 0px; - margin-top: 4px; - overflow-x: hidden; - overflow-y: hidden; - padding-bottom: 4px; - padding-left: 2px; - padding-right: 2px; - padding-top: 4px; -} - -#parent #container .number { - color: rgb(0, 0, 0); - font-family: arial, helvetica, sans-serif; - font-size: 13.3333px; - width: 342.65px; - margin-right: 7px; -} - -#parent #container .op { - color: rgb(0, 0, 0); - font-family: arial, helvetica, sans-serif; - font-size: 13.3333px; - margin-bottom: 8px; - margin-left: 0px; - margin-right: 0px; - margin-top: 4px; - overflow-x: hidden; - overflow-y: hidden; -} - -#parent #container .op blockquote { - margin-left: 0px !important; -} - -#parent #container .name { - color: rgb(17, 119, 67); - font-family: arial, helvetica, sans-serif; - font-size: 13.3333px; - font-weight: 700; - margin-left: 7px; -} - -#parent #container .quote { - color: rgb(221, 0, 0); - font-family: arial, helvetica, sans-serif; - font-size: 13.3333px; - text-decoration-color: rgb(221, 0, 0); - text-decoration-line: underline; - text-decoration-style: solid; - text-decoration-thickness: auto; -} - -#parent #container .greentext { - color: rgb(120, 153, 34); - font-family: arial, helvetica, sans-serif; - font-size: 13.3333px; -} - -#parent #container blockquote { - margin: 0px !important; - margin-block-start: 1em; - margin-block-end: 1em; - margin-inline-start: 40px; - margin-inline-end: 40px; - margin-top: 13.33px !important; - margin-bottom: 13.33px !important; - margin-left: 40px !important; - margin-right: 40px !important; -} - -#parent #container .message { - color: black; - border: none; -} \ No newline at end of file diff --git a/spaces/dsymbol/whisper-webui/README.md b/spaces/dsymbol/whisper-webui/README.md deleted file mode 100644 index ce5f79c4c9e538169ffe822d28b0545e53e12e1c..0000000000000000000000000000000000000000 --- a/spaces/dsymbol/whisper-webui/README.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -title: whisper-webui -emoji: 🤫 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: mit ---- - - \ No newline at end of file diff --git 
a/spaces/f2api/gpt-academic/docs/README_RS.md b/spaces/f2api/gpt-academic/docs/README_RS.md deleted file mode 100644 index 5ba5fcccc30db520d38e21950e2f7cfc03d324c5..0000000000000000000000000000000000000000 --- a/spaces/f2api/gpt-academic/docs/README_RS.md +++ /dev/null @@ -1,278 +0,0 @@ -> **Note** -> -> This self-description file is generated automatically by the markdown translation module of this project and may not be 100% correct. -> -# GPT Academic Optimization (GPT Academic) - -**If you like this project, please give it a star. If you have come up with more useful language shortcuts or function plugins, feel free to open an issue or a pull request. -To translate this project into an arbitrary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental). - -> **Note** -> -> 1. Please note that only function plugins (buttons) marked in **red** support reading files, and some plugins are located in the **drop-down menu** of the plugin area. In addition, we welcome and handle pull requests for any new plugins with the highest priority! -> -> 2. The functionality of each file in this project is described in the self-analysis document [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). As versions iterate, you can regenerate this project's self-analysis report at any time by clicking the corresponding function plugin and calling GPT. Build questions are covered in the [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Installation method](#installation). -> -> 3. This project is compatible with, and encourages the use of, Chinese language models such as chatglm, RWKV, Pangu, etc. Multiple api-keys can coexist and can be specified in the configuration file, for example `API_KEY="openai-key1,openai-key2,api2d-key3"`. If you need to change `API_KEY` temporarily, enter the temporary `API_KEY` in the input area and press Enter for it to take effect. - -> **Note** -> -> When installing dependencies, strictly use the versions **specified in requirements.txt**. -> -> `pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/` - -## Task - -You are a professional translator of scientific papers. - -Translate this Markdown file into Russian. Do not change the existing Markdown commands; reply only with the translated results.
- -## Result - -Function | Description ---- | --- -One-click polishing | Support for one-click polishing and one-click search for grammatical errors in academic papers -One-click Chinese-English translation | One-click Chinese-English translation -One-click code explanation | Display code, explain code, generate code, add comments to code -[Custom hotkeys](https://www.bilibili.com/video/BV14s4y1E7jN) | Support for custom hotkeys -Modular design | Support for powerful custom [function plugins](https://github.com/binary-husky/chatgpt_academic/tree/master/crazy_functions); plugins support [hot reloading](https://github.com/binary-husky/chatgpt_academic/wiki/Function-Plug-in-Guide) -[Self-program analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plugin] [One-click review](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academicProject-Self-analysis-Report) of this project's own source code -[Program analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plugin] One-click analysis of the project tree of other Python/C/C++/Java/Lua/... projects -Reading papers, [translating](https://www.bilibili.com/video/BV1KT411x7Wn) papers | [Function plugin] One-click reading of the full text of an academic paper and generation of a summary -Full [LaTeX](https://www.bilibili.com/video/BV1nk4y1Y7Js/) translation and polishing | [Function plugin] One-click translation or polishing of a LaTeX paper -Automatic comment generation | [Function plugin] One-click automatic generation of function comments -Markdown [translation](https://www.bilibili.com/video/BV1yo4y157jV/) between Chinese and English | [Function plugin] Have you seen the [README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md) files in these 5 languages?
-Chat analysis report | [Function plugin] After a run, a summary report is generated automatically -Full-text translation of [PDF papers](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function plugin] Extract the title and abstract of a [PDF paper](https://www.bilibili.com/video/BV1KT411x7Wn) and translate the entire document (multithreaded) -[Arxiv Helper](https://www.bilibili.com/video/BV1LM4y1279X) | [Function plugin] Enter the URL of an arxiv paper to translate the abstract and download the PDF with one click -[Google Scholar Integration Helper](https://www.bilibili.com/video/BV19L411U7ia) | [Function plugin] Given any Google Scholar search page URL, let gpt help you [write a literature review](https://www.bilibili.com/video/BV1GP411U7Az/) -Internet information aggregation + GPT | [Function plugin] One click to let GPT [fetch information from the Internet](https://www.bilibili.com/video/BV1om4y127ck) before answering a question, so the information never goes stale -Display of formulas / images / tables | Formulas are shown in both [Tex form and rendered form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png); supports formula and code highlighting -Multithreaded function plugin support | Support for multithreaded calls to chatgpt, one-click processing of [large volumes of text](https://www.bilibili.com/video/BV1FT411H7c5/) or programs -Dark gradio theme | Append ```/?__theme=dark``` to the URL in the browser to switch to the dark theme -[Support for multiple LLM models](https://www.bilibili.com/video/BV1wT411p7yf), [API2D](https://api2d.com/) | Served simultaneously by GPT3.5, GPT4, [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B) and [Fudan MOSS](https://github.com/OpenLMLab/MOSS) -Integration of more new LLM models, support for [huggingface](https://huggingface.co/spaces/qingxu98/gpt-academic) deployment | Integration of the Newbing interface (new Bing), support for [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) and [Pangu α](https://openi.org.cn/pangu/) -More new features (image generation, etc.) | See the end of this file…- All buttons are dynamically generated by reading functional.py, and custom functions can be freely added to liberate the clipboard -
      - -
      - -- Revision/Correction -
      - -
      - -- If the output contains formulas, they will be displayed in both tex and rendered form for easy copying and reading -
      - -
      - -- Don't feel like looking at project code? Show the entire project directly in chatgpt -
      - -
      - -- Mixing multiple large language models (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4) -
      - -
      - ---- -# Installation -## Installation-Method 1: Run directly (Windows, Linux or MacOS) - -1. Download the project -```sh -git clone https://github.com/binary-husky/chatgpt_academic.git -cd chatgpt_academic -``` - -2. Configure API_KEY - -In `config.py`, configure API KEY and other settings, [special network environment settings] (https://github.com/binary-husky/gpt_academic/issues/1). - -(P.S. When the program is running, it will first check whether there is a secret configuration file named `config_private.py` and use the configuration in it to replace the same name in` config.py`. Therefore, if you understand our configuration reading logic, we strongly recommend that you create a new configuration file named `config_private.py` next to `config.py`, and transfer (copy) the configuration in `config.py` to `config_private.py`. `config_private.py` is not controlled by git, which can make your privacy information more secure. P.S. The project also supports configuring most options through `environment variables`, and the writing format of environment variables refers to the `docker-compose` file. Priority of read: `environment variable`>`config_private.py`>`config.py`) - - -3. Install dependencies -```sh -# (Option I: If familiar with Python)(Python version 3.9 or above, the newer the better), note: use the official pip source or the aliyun pip source, temporary switching source method: python -m pip install -r requirements.txt - i https://mirrors.aliyun.com/pypi/simple/ -python -m pip install -r requirements.txt - -# (Option II: If unfamiliar with Python)Use Anaconda, the steps are also similar (https://www.bilibili.com/video/BV1rc411W7Dr): -conda create -n gptac_venv python=3.11 # create an Anaconda environment -conda activate gptac_venv # activate Anaconda environment -python -m pip install -r requirements.txt # This step is the same as the pip installation -``` - -
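For a concrete picture of the configuration-override logic described in step 2 above, here is a minimal sketch of what a local `config_private.py` could look like. `API_KEY` and `WEB_PORT` are option names that appear elsewhere in this README; everything else here, including the environment-variable lookup, is an illustrative assumption rather than the project's actual implementation.

```python
# config_private.py -- hypothetical local override kept next to config.py.
# Any name defined here shadows the value with the same name in config.py;
# per the README, an environment variable with the same name wins over both.
import os

# Several keys can coexist, separated by commas (format taken from the README).
API_KEY = "openai-key1,openai-key2,api2d-key3"

# Port for the web UI (value borrowed from the Docker example later in this file).
WEB_PORT = 50923

# Sketch of how an environment-variable override could be honoured at import
# time; the real project implements its own priority logic.
API_KEY = os.environ.get("API_KEY", API_KEY)
WEB_PORT = int(os.environ.get("WEB_PORT", WEB_PORT))
```

Because `config_private.py` is not tracked by git, secrets stay out of version control while `config.py` keeps shareable defaults.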
      If you need to support Tsinghua ChatGLM/Fudan MOSS as backend, click here to expand -

      - -[Optional step] If you need to support Tsinghua ChatGLM/Fudan MOSS as backend, you need to install more dependencies (prerequisites: familiar with Python + have used Pytorch + computer configuration is strong): -```sh -# [Optional step I] Support Tsinghua ChatGLM. Tsinghua ChatGLM note: If you encounter the "Call ChatGLM fail cannot load ChatGLM parameters normally" error, refer to the following: 1: The default installation above is torch+cpu version, and cuda is used Need to uninstall torch and reinstall torch+cuda; 2: If you cannot load the model due to insufficient local configuration, you can modify the model accuracy in request_llm/bridge_chatglm.py, AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) Modify to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True) -python -m pip install -r request_llm/requirements_chatglm.txt - -# [Optional step II] Support Fudan MOSS -python -m pip install -r request_llm/requirements_moss.txt -git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # Note that when executing this line of code, you must be in the project root path - -# [Optional step III] Make sure the AVAIL_LLM_MODELS in the config.py configuration file contains the expected models. Currently, all supported models are as follows (the jittorllms series currently only supports the docker solution): -AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"] -``` - -

      -
      - - - -4. Run -```sh -python main.py -```5. Testing Function Plugin -``` -- Testing function plugin template function (requires GPT to answer what happened in history today), you can use this function as a template to implement more complex functions - Click "[Function plugin Template Demo] On this day in history" -``` - -## Installation - Method 2: Using Docker - -1. ChatGPT only (recommended for most people) - -``` sh -git clone https://github.com/binary-husky/chatgpt_academic.git # download the project -cd chatgpt_academic # enter the path -nano config.py # edit config.py with any text editor to configure "Proxy", "API_KEY", and "WEB_PORT" (eg 50923) -docker build -t gpt-academic . # install - -# (Last step-Option 1) In a Linux environment, using `--net=host` is more convenient and faster -docker run --rm -it --net=host gpt-academic -# (Last step-Option 2) In macOS/windows environment, only -p option can be used to expose the port on the container (eg 50923) to the port on the host -docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic -``` - -2. ChatGPT + ChatGLM + MOSS (requires familiarity with Docker) - -``` sh -# Edit docker-compose.yml, delete solutions 1 and 3, and keep solution 2. Modify the configuration of solution 2 in docker-compose.yml, refer to the comments in it -docker-compose up -``` - -3. ChatGPT + LLAMA + PanGu + RWKV (requires familiarity with Docker) -``` sh -# Edit docker-compose.yml, delete solutions 1 and 2, and keep solution 3. Modify the configuration of solution 3 in docker-compose.yml, refer to the comments in it -docker-compose up -``` - - -## Installation Method 3: Other Deployment Methods - -1. How to use reverse proxy URL/Microsoft Azure API -Configure API_URL_REDIRECT according to the instructions in `config.py`. - -2. Remote Cloud Server Deployment (Requires Knowledge and Experience of Cloud Servers) -Please visit [Deployment Wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97) - -3. Using WSL2 (Windows Subsystem for Linux subsystem) -Please visit [Deployment Wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2) - -4. How to run at the secondary URL (such as `http://localhost/subpath`) -Please visit [FastAPI Operation Instructions](docs/WithFastapi.md) - -5. Using docker-compose to run -Please read docker-compose.yml and follow the prompts to operate. - ---- -# Advanced Usage -## Customize new convenient buttons / custom function plugins - -1. Customize new convenient buttons (academic shortcuts) -Open `core_functional.py` with any text editor, add an entry as follows, and then restart the program. (If the button has been added successfully and is visible, both prefixes and suffixes can be hot-modified without having to restart the program.) -For example: -``` -"Super English to Chinese": { - # Prefix, will be added before your input. For example, describe your requirements, such as translation, code interpretation, polishing, etc. - "Prefix": "Please translate the following content into Chinese, and then explain each proper noun that appears in the text with a markdown table:\n\n", - - # Suffix, will be added after your input. For example, with the prefix, you can enclose your input content in quotes. - "Suffix": "", -}, -``` -
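To make the `Prefix`/`Suffix` mechanism above more tangible, the sketch below shows how such an entry is conceptually combined with whatever the user types before the request is sent to the model. The dictionary entry mirrors the example given above; the helper `build_prompt` is a name invented for this illustration and is not part of the project's API.

```python
# Sketch: how a custom-button entry wraps the user's input.
core_functional = {
    "Super English to Chinese": {
        # Prepended before the user's input (same text as the README example).
        "Prefix": ("Please translate the following content into Chinese, and then "
                   "explain each proper noun that appears in the text with a markdown table:\n\n"),
        # Appended after the user's input; can be used to wrap the input in quotes.
        "Suffix": "",
    },
}

def build_prompt(button_name: str, user_input: str) -> str:
    entry = core_functional[button_name]
    return entry["Prefix"] + user_input + entry["Suffix"]

if __name__ == "__main__":
    print(build_prompt("Super English to Chinese", "Attention is all you need."))
```

As noted above, editing the prefix or suffix of an already-visible button takes effect without restarting the program, whereas adding a brand-new entry requires a restart.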
      - -
      - -2. Custom function plugin - -Write powerful function plugins to perform any task you can and can't imagine. -The difficulty of debugging and writing plugins in this project is very low. As long as you have a certain knowledge of python, you can implement your own plugin function by imitating the template we provide. -Please refer to the [Function Plugin Guide](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97) for details. - ---- -# Latest Update -## New feature dynamic - -1. Сохранение диалогов. Вызовите "Сохранить текущий диалог" в разделе функций-плагина, чтобы сохранить текущий диалог как файл HTML, который можно прочитать и восстановить. Кроме того, вызовите «Загрузить архив истории диалога» в меню функций-плагина, чтобы восстановить предыдущую сессию. Совет: если нажать кнопку "Загрузить исторический архив диалога" без указания файла, можно просмотреть кэш исторических файлов HTML. Щелкните "Удалить все локальные записи истории диалогов", чтобы удалить все файловые кэши HTML. - -2. Создание отчетов. Большинство плагинов создают рабочий отчет после завершения выполнения. -  -3. Модульный дизайн функций, простой интерфейс, но сильный функционал. - -4. Это проект с открытым исходным кодом, который может «сам переводить себя». - -5. Перевод других проектов с открытым исходным кодом - это не проблема. - -6. Мелкие функции декорирования [live2d](https://github.com/fghrsh/live2d_demo) (по умолчанию отключены, нужно изменить `config.py`). - -7. Поддержка большой языковой модели MOSS. - -8. Генерация изображений с помощью OpenAI. - -9. Анализ и подведение итогов аудиофайлов с помощью OpenAI. - -10. Полный цикл проверки правописания с использованием LaTeX. - -## Версии: -- Версия 3.5 (Todo): использование естественного языка для вызова функций-плагинов проекта (высокий приоритет) -- Версия 3.4 (Todo): улучшение многопоточной поддержки локальных больших моделей чата. -- Версия 3.3: добавлена функция объединения интернет-информации. -- Версия 3.2: функции-плагины поддерживают большое количество параметров (сохранение диалогов, анализирование любого языка программирования и одновременное запрос LLM-групп). -- Версия 3.1: поддержка одновременного запроса нескольких моделей GPT! Поддержка api2d, сбалансированное распределение нагрузки по нескольким ключам api. -- Версия 3.0: поддержка chatglm и других небольших LLM. -- Версия 2.6: перестройка структуры плагинов, улучшение интерактивности, добавлено больше плагинов. -- Версия 2.5: автоматическое обновление для решения проблемы длинного текста и переполнения токенов при обработке больших проектов. -- Версия 2.4: (1) добавлена функция полного перевода PDF; (2) добавлена функция переключения положения ввода; (3) добавлена опция вертикального макета; (4) оптимизация многопоточности плагинов. -- Версия 2.3: улучшение многопоточной интерактивности. -- Версия 2.2: функции-плагины поддерживают горячую перезагрузку. -- Версия 2.1: раскрывающийся макет. -- Версия 2.0: использование модульных функций-плагинов. -- Версия 1.0: базовые функции. 
- -gpt_academic Разработчик QQ-группы-2: 610599535 - -- Известные проблемы - - Некоторые плагины перевода в браузерах мешают работе фронтенда этого программного обеспечения - - Высокая или низкая версия gradio может вызвать множество исключений - -## Ссылки и учебные материалы - -``` -Мы использовали многие концепты кода из других отличных проектов, включая: - -# Проект 1: Qinghua ChatGLM-6B: -https://github.com/THUDM/ChatGLM-6B - -# Проект 2: Qinghua JittorLLMs: -https://github.com/Jittor/JittorLLMs - -# Проект 3: Edge-GPT: -https://github.com/acheong08/EdgeGPT - -# Проект 4: Chuanhu ChatGPT: -https://github.com/GaiZhenbiao/ChuanhuChatGPT - -# Проект 5: ChatPaper: -https://github.com/kaixindelele/ChatPaper - -# Больше: -https://github.com/gradio-app/gradio -https://github.com/fghrsh/live2d_demo -``` \ No newline at end of file diff --git a/spaces/facebook/MusicGen/audiocraft/grids/musicgen/_explorers.py b/spaces/facebook/MusicGen/audiocraft/grids/musicgen/_explorers.py deleted file mode 100644 index 334836b72559a120feb8a15eef3fe96ce88a4edb..0000000000000000000000000000000000000000 --- a/spaces/facebook/MusicGen/audiocraft/grids/musicgen/_explorers.py +++ /dev/null @@ -1,93 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing as tp - -import treetable as tt - -from .._base_explorers import BaseExplorer - - -class LMExplorer(BaseExplorer): - eval_metrics: tp.List[str] = [] - - def stages(self) -> tp.List[str]: - return ['train', 'valid'] - - def get_grid_metrics(self): - """Return the metrics that should be displayed in the tracking table.""" - return [ - tt.group( - 'train', - [ - tt.leaf('epoch'), - tt.leaf('duration', '.1f'), # duration in minutes - tt.leaf('ping'), - tt.leaf('ce', '.4f'), # cross entropy - tt.leaf("ppl", '.3f'), # perplexity - ], - align='>', - ), - tt.group( - 'valid', - [ - tt.leaf('ce', '.4f'), - tt.leaf('ppl', '.3f'), - tt.leaf('best_ppl', '.3f'), - ], - align='>', - ), - ] - - def process_sheep(self, sheep, history): - parts = super().process_sheep(sheep, history) - - track_by = {'ppl': 'lower'} # values should be in ['lower', 'higher'] - best_metrics = {k: (1 if v == 'lower' else -1) * float('inf') for k, v in track_by.items()} - - def comparator(mode, a, b): - return a < b if mode == 'lower' else a > b - - for metrics in history: - for key, sub in metrics.items(): - for metric in track_by: - # for the validation set, keep track of best metrics (ppl in this example) - # this is so we can conveniently compare metrics between runs in the grid - if key == 'valid' and metric in sub and comparator( - track_by[metric], sub[metric], best_metrics[metric] - ): - best_metrics[metric] = sub[metric] - - if 'valid' in parts: - parts['valid'].update({f'best_{k}': v for k, v in best_metrics.items()}) - return parts - - -class GenerationEvalExplorer(BaseExplorer): - eval_metrics: tp.List[str] = [] - - def stages(self) -> tp.List[str]: - return ['evaluate'] - - def get_grid_metrics(self): - """Return the metrics that should be displayed in the tracking table.""" - return [ - tt.group( - 'evaluate', - [ - tt.leaf('epoch', '.3f'), - tt.leaf('duration', '.1f'), - tt.leaf('ping'), - tt.leaf('ce', '.4f'), - tt.leaf('ppl', '.3f'), - tt.leaf('fad', '.3f'), - tt.leaf('kld', '.3f'), - tt.leaf('text_consistency', '.3f'), - tt.leaf('chroma_cosine', '.3f'), - ], - align='>', - ), - ] diff --git 
a/spaces/facebook/StyleNeRF/dnnlib/camera.py b/spaces/facebook/StyleNeRF/dnnlib/camera.py deleted file mode 100644 index 8846713c04afa650de92261ea87bdd4be00dac9a..0000000000000000000000000000000000000000 --- a/spaces/facebook/StyleNeRF/dnnlib/camera.py +++ /dev/null @@ -1,687 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - - -import numpy as np -from numpy.lib.function_base import angle -import torch -import torch.nn.functional as F -import math - -from scipy.spatial.transform import Rotation as Rot -HUGE_NUMBER = 1e10 -TINY_NUMBER = 1e-6 # float32 only has 7 decimal digits precision - - -def get_camera_mat(fov=49.13, invert=True): - # fov = 2 * arctan(sensor / (2 * focal)) - # focal = (sensor / 2) * 1 / (tan(0.5 * fov)) - # in our case, sensor = 2 as pixels are in [-1, 1] - focal = 1. / np.tan(0.5 * fov * np.pi/180.) - focal = focal.astype(np.float32) - mat = torch.tensor([ - [focal, 0., 0., 0.], - [0., focal, 0., 0.], - [0., 0., 1, 0.], - [0., 0., 0., 1.] - ]).reshape(1, 4, 4) - if invert: - mat = torch.inverse(mat) - return mat - - -def get_random_pose(range_u, range_v, range_radius, batch_size=32, - invert=False, gaussian=False, angular=False): - loc, (u, v) = sample_on_sphere(range_u, range_v, size=(batch_size), gaussian=gaussian, angular=angular) - radius = range_radius[0] + torch.rand(batch_size) * (range_radius[1] - range_radius[0]) - loc = loc * radius.unsqueeze(-1) - R = look_at(loc) - RT = torch.eye(4).reshape(1, 4, 4).repeat(batch_size, 1, 1) - RT[:, :3, :3] = R - RT[:, :3, -1] = loc - - if invert: - RT = torch.inverse(RT) - - def N(a, range_a): - if range_a[0] == range_a[1]: - return a * 0 - return (a - range_a[0]) / (range_a[1] - range_a[0]) - - val_u, val_v, val_r = N(u, range_u), N(v, range_v), N(radius, range_radius) - return RT, (val_u, val_v, val_r) - - -def get_camera_pose(range_u, range_v, range_r, val_u=0.5, val_v=0.5, val_r=0.5, - batch_size=32, invert=False, gaussian=False, angular=False): - r0, rr = range_r[0], range_r[1] - range_r[0] - r = r0 + val_r * rr - if not gaussian: - u0, ur = range_u[0], range_u[1] - range_u[0] - v0, vr = range_v[0], range_v[1] - range_v[0] - u = u0 + val_u * ur - v = v0 + val_v * vr - else: - mean_u, mean_v = sum(range_u) / 2, sum(range_v) / 2 - vu, vv = mean_u - range_u[0], mean_v - range_v[0] - u = mean_u + vu * val_u - v = mean_v + vv * val_v - - loc, _ = sample_on_sphere((u, u), (v, v), size=(batch_size), angular=angular) - radius = torch.ones(batch_size) * r - loc = loc * radius.unsqueeze(-1) - R = look_at(loc) - RT = torch.eye(4).reshape(1, 4, 4).repeat(batch_size, 1, 1) - RT[:, :3, :3] = R - RT[:, :3, -1] = loc - - if invert: - RT = torch.inverse(RT) - return RT - - -def get_camera_pose_v2(range_u, range_v, range_r, mode, invert=False, gaussian=False, angular=False): - r0, rr = range_r[0], range_r[1] - range_r[0] - val_u, val_v = mode[:,0], mode[:,1] - val_r = torch.ones_like(val_u) * 0.5 - if not gaussian: - u0, ur = range_u[0], range_u[1] - range_u[0] - v0, vr = range_v[0], range_v[1] - range_v[0] - u = u0 + val_u * ur - v = v0 + val_v * vr - else: - mean_u, mean_v = sum(range_u) / 2, sum(range_v) / 2 - vu, vv = mean_u - range_u[0], mean_v - range_v[0] - u = mean_u + vu * val_u - v = mean_v + vv * val_v - - loc = to_sphere(u, v, angular) - radius = r0 + val_r * rr - loc = loc * radius.unsqueeze(-1) - R = look_at(loc) - RT = torch.eye(4).to(R.device).reshape(1, 4, 4).repeat(R.size(0), 1, 1) - RT[:, :3, :3] = R - RT[:, :3, -1] = loc - - if invert: - RT = torch.inverse(RT) - return RT, (val_u, 
val_v, val_r) - - -def to_sphere(u, v, angular=False): - T = torch if isinstance(u, torch.Tensor) else np - if not angular: - theta = 2 * math.pi * u - phi = T.arccos(1 - 2 * v) - else: - theta, phi = u, v - - cx = T.sin(phi) * T.cos(theta) - cy = T.sin(phi) * T.sin(theta) - cz = T.cos(phi) - return T.stack([cx, cy, cz], -1) - - -def sample_on_sphere(range_u=(0, 1), range_v=(0, 1), size=(1,), - to_pytorch=True, gaussian=False, angular=False): - if not gaussian: - u = np.random.uniform(*range_u, size=size) - v = np.random.uniform(*range_v, size=size) - else: - mean_u, mean_v = sum(range_u) / 2, sum(range_v) / 2 - var_u, var_v = mean_u - range_u[0], mean_v - range_v[0] - u = np.random.normal(size=size) * var_u + mean_u - v = np.random.normal(size=size) * var_v + mean_v - - sample = to_sphere(u, v, angular) - if to_pytorch: - sample = torch.tensor(sample).float() - u, v = torch.tensor(u).float(), torch.tensor(v).float() - - return sample, (u, v) - - -def look_at(eye, at=np.array([0, 0, 0]), up=np.array([0, 0, 1]), eps=1e-5, - to_pytorch=True): - if not isinstance(eye, torch.Tensor): - # this is the original code from GRAF - at = at.astype(float).reshape(1, 3) - up = up.astype(float).reshape(1, 3) - eye = eye.reshape(-1, 3) - up = up.repeat(eye.shape[0] // up.shape[0], axis=0) - eps = np.array([eps]).reshape(1, 1).repeat(up.shape[0], axis=0) - z_axis = eye - at - z_axis /= np.max(np.stack([np.linalg.norm(z_axis, - axis=1, keepdims=True), eps])) - x_axis = np.cross(up, z_axis) - x_axis /= np.max(np.stack([np.linalg.norm(x_axis, - axis=1, keepdims=True), eps])) - y_axis = np.cross(z_axis, x_axis) - y_axis /= np.max(np.stack([np.linalg.norm(y_axis, - axis=1, keepdims=True), eps])) - r_mat = np.concatenate( - (x_axis.reshape(-1, 3, 1), y_axis.reshape(-1, 3, 1), z_axis.reshape( - -1, 3, 1)), axis=2) - if to_pytorch: - r_mat = torch.tensor(r_mat).float() - else: - - def normalize(x, axis=-1, order=2): - l2 = x.norm(p=order, dim=axis, keepdim=True).clamp(min=1e-8) - return x / l2 - - at, up = torch.from_numpy(at).float().to(eye.device), torch.from_numpy(up).float().to(eye.device) - z_axis = normalize(eye - at[None, :]) - x_axis = normalize(torch.cross(up[None,:].expand_as(z_axis), z_axis, dim=-1)) - y_axis = normalize(torch.cross(z_axis, x_axis, dim=-1)) - r_mat = torch.stack([x_axis, y_axis, z_axis], dim=-1) - - return r_mat - - -def get_rotation_matrix(axis='z', value=0., batch_size=32): - r = Rot.from_euler(axis, value * 2 * np.pi).as_dcm() - r = torch.from_numpy(r).reshape(1, 3, 3).repeat(batch_size, 1, 1) - return r - - -def get_corner_rays(corner_pixels, camera_matrices, res): - assert (res + 1) * (res + 1) == corner_pixels.size(1) - batch_size = camera_matrices[0].size(0) - rays, origins, _ = get_camera_rays(camera_matrices, corner_pixels) - corner_rays = torch.cat([rays, torch.cross(origins, rays, dim=-1)], -1) - corner_rays = corner_rays.reshape(batch_size, res+1, res+1, 6).permute(0,3,1,2) - corner_rays = torch.cat([corner_rays[..., :-1, :-1], corner_rays[..., 1:, :-1], corner_rays[..., 1:, 1:], corner_rays[..., :-1, 1:]], 1) - return corner_rays - - -def arange_pixels( - resolution=(128, 128), - batch_size=1, - subsample_to=None, - invert_y_axis=False, - margin=0, - corner_aligned=True, - jitter=None - ): - ''' Arranges pixels for given resolution in range image_range. - - The function returns the unscaled pixel locations as integers and the - scaled float values. 
- - Args: - resolution (tuple): image resolution - batch_size (int): batch size - subsample_to (int): if integer and > 0, the points are randomly - subsampled to this value - ''' - h, w = resolution - n_points = resolution[0] * resolution[1] - uh = 1 if corner_aligned else 1 - (1 / h) - uw = 1 if corner_aligned else 1 - (1 / w) - if margin > 0: - uh = uh + (2 / h) * margin - uw = uw + (2 / w) * margin - w, h = w + margin * 2, h + margin * 2 - - x, y = torch.linspace(-uw, uw, w), torch.linspace(-uh, uh, h) - if jitter is not None: - dx = (torch.ones_like(x).uniform_() - 0.5) * 2 / w * jitter - dy = (torch.ones_like(y).uniform_() - 0.5) * 2 / h * jitter - x, y = x + dx, y + dy - x, y = torch.meshgrid(x, y) - pixel_scaled = torch.stack([x, y], -1).permute(1,0,2).reshape(1, -1, 2).repeat(batch_size, 1, 1) - - # Subsample points if subsample_to is not None and > 0 - if (subsample_to is not None and subsample_to > 0 and subsample_to < n_points): - idx = np.random.choice(pixel_scaled.shape[1], size=(subsample_to,), - replace=False) - pixel_scaled = pixel_scaled[:, idx] - - if invert_y_axis: - pixel_scaled[..., -1] *= -1. - - return pixel_scaled - - -def to_pytorch(tensor, return_type=False): - ''' Converts input tensor to pytorch. - - Args: - tensor (tensor): Numpy or Pytorch tensor - return_type (bool): whether to return input type - ''' - is_numpy = False - if type(tensor) == np.ndarray: - tensor = torch.from_numpy(tensor) - is_numpy = True - tensor = tensor.clone() - if return_type: - return tensor, is_numpy - return tensor - - -def transform_to_world(pixels, depth, camera_mat, world_mat, scale_mat=None, - invert=True, use_absolute_depth=True): - ''' Transforms pixel positions p with given depth value d to world coordinates. - - Args: - pixels (tensor): pixel tensor of size B x N x 2 - depth (tensor): depth tensor of size B x N x 1 - camera_mat (tensor): camera matrix - world_mat (tensor): world matrix - scale_mat (tensor): scale matrix - invert (bool): whether to invert matrices (default: true) - ''' - assert(pixels.shape[-1] == 2) - if scale_mat is None: - scale_mat = torch.eye(4).unsqueeze(0).repeat( - camera_mat.shape[0], 1, 1).to(camera_mat.device) - - # Convert to pytorch - pixels, is_numpy = to_pytorch(pixels, True) - depth = to_pytorch(depth) - camera_mat = to_pytorch(camera_mat) - world_mat = to_pytorch(world_mat) - scale_mat = to_pytorch(scale_mat) - - # Invert camera matrices - if invert: - camera_mat = torch.inverse(camera_mat) - world_mat = torch.inverse(world_mat) - scale_mat = torch.inverse(scale_mat) - - # Transform pixels to homogen coordinates - pixels = pixels.permute(0, 2, 1) - pixels = torch.cat([pixels, torch.ones_like(pixels)], dim=1) - - # Project pixels into camera space - if use_absolute_depth: - pixels[:, :2] = pixels[:, :2] * depth.permute(0, 2, 1).abs() - pixels[:, 2:3] = pixels[:, 2:3] * depth.permute(0, 2, 1) - else: - pixels[:, :3] = pixels[:, :3] * depth.permute(0, 2, 1) - - # Transform pixels to world space - p_world = scale_mat @ world_mat @ camera_mat @ pixels - - # Transform p_world back to 3D coordinates - p_world = p_world[:, :3].permute(0, 2, 1) - - if is_numpy: - p_world = p_world.numpy() - return p_world - - -def transform_to_camera_space(p_world, world_mat, camera_mat=None, scale_mat=None): - ''' Transforms world points to camera space. 
- Args: - p_world (tensor): world points tensor of size B x N x 3 - camera_mat (tensor): camera matrix - world_mat (tensor): world matrix - scale_mat (tensor): scale matrix - ''' - batch_size, n_p, _ = p_world.shape - device = p_world.device - - # Transform world points to homogen coordinates - p_world = torch.cat([p_world, torch.ones( - batch_size, n_p, 1).to(device)], dim=-1).permute(0, 2, 1) - - # Apply matrices to transform p_world to camera space - if scale_mat is None: - if camera_mat is None: - p_cam = world_mat @ p_world - else: - p_cam = camera_mat @ world_mat @ p_world - else: - p_cam = camera_mat @ world_mat @ scale_mat @ p_world - - # Transform points back to 3D coordinates - p_cam = p_cam[:, :3].permute(0, 2, 1) - return p_cam - - -def origin_to_world(n_points, camera_mat, world_mat, scale_mat=None, - invert=False): - ''' Transforms origin (camera location) to world coordinates. - - Args: - n_points (int): how often the transformed origin is repeated in the - form (batch_size, n_points, 3) - camera_mat (tensor): camera matrix - world_mat (tensor): world matrix - scale_mat (tensor): scale matrix - invert (bool): whether to invert the matrices (default: true) - ''' - batch_size = camera_mat.shape[0] - device = camera_mat.device - # Create origin in homogen coordinates - p = torch.zeros(batch_size, 4, n_points).to(device) - p[:, -1] = 1. - - if scale_mat is None: - scale_mat = torch.eye(4).unsqueeze( - 0).repeat(batch_size, 1, 1).to(device) - - # Invert matrices - if invert: - camera_mat = torch.inverse(camera_mat) - world_mat = torch.inverse(world_mat) - scale_mat = torch.inverse(scale_mat) - - # Apply transformation - p_world = scale_mat @ world_mat @ camera_mat @ p - - # Transform points back to 3D coordinates - p_world = p_world[:, :3].permute(0, 2, 1) - return p_world - - -def image_points_to_world(image_points, camera_mat, world_mat, scale_mat=None, - invert=False, negative_depth=True): - ''' Transforms points on image plane to world coordinates. - - In contrast to transform_to_world, no depth value is needed as points on - the image plane have a fixed depth of 1. - - Args: - image_points (tensor): image points tensor of size B x N x 2 - camera_mat (tensor): camera matrix - world_mat (tensor): world matrix - scale_mat (tensor): scale matrix - invert (bool): whether to invert matrices - ''' - batch_size, n_pts, dim = image_points.shape - assert(dim == 2) - device = image_points.device - d_image = torch.ones(batch_size, n_pts, 1).to(device) - if negative_depth: - d_image *= -1. - return transform_to_world(image_points, d_image, camera_mat, world_mat, - scale_mat, invert=invert) - - -def image_points_to_camera(image_points, camera_mat, - invert=False, negative_depth=True, use_absolute_depth=True): - batch_size, n_pts, dim = image_points.shape - assert(dim == 2) - device = image_points.device - d_image = torch.ones(batch_size, n_pts, 1).to(device) - if negative_depth: - d_image *= -1. 
- - # Convert to pytorch - pixels, is_numpy = to_pytorch(image_points, True) - depth = to_pytorch(d_image) - camera_mat = to_pytorch(camera_mat) - - # Invert camera matrices - if invert: - camera_mat = torch.inverse(camera_mat) - - # Transform pixels to homogen coordinates - pixels = pixels.permute(0, 2, 1) - pixels = torch.cat([pixels, torch.ones_like(pixels)], dim=1) - - # Project pixels into camera space - if use_absolute_depth: - pixels[:, :2] = pixels[:, :2] * depth.permute(0, 2, 1).abs() - pixels[:, 2:3] = pixels[:, 2:3] * depth.permute(0, 2, 1) - else: - pixels[:, :3] = pixels[:, :3] * depth.permute(0, 2, 1) - - # Transform pixels to world space - p_camera = camera_mat @ pixels - - # Transform p_world back to 3D coordinates - p_camera = p_camera[:, :3].permute(0, 2, 1) - - if is_numpy: - p_camera = p_camera.numpy() - return p_camera - - -def camera_points_to_image(camera_points, camera_mat, - invert=False, negative_depth=True, use_absolute_depth=True): - batch_size, n_pts, dim = camera_points.shape - assert(dim == 3) - device = camera_points.device - - # Convert to pytorch - p_camera, is_numpy = to_pytorch(camera_points, True) - camera_mat = to_pytorch(camera_mat) - - # Invert camera matrices - if invert: - camera_mat = torch.inverse(camera_mat) - - # Transform world camera space to pixels - p_camera = p_camera.permute(0, 2, 1) # B x 3 x N - pixels = camera_mat[:, :3, :3] @ p_camera - - assert use_absolute_depth and negative_depth - pixels, p_depths = pixels[:, :2], pixels[:, 2:3] - p_depths = -p_depths # negative depth - pixels = pixels / p_depths - - pixels = pixels.permute(0, 2, 1) - if is_numpy: - pixels = pixels.numpy() - return pixels - - -def angular_interpolation(res, camera_mat): - batch_size = camera_mat.shape[0] - device = camera_mat.device - input_rays = image_points_to_camera(arange_pixels((res, res), batch_size, - invert_y_axis=True).to(device), camera_mat) - output_rays = image_points_to_camera(arange_pixels((res * 2, res * 2), batch_size, - invert_y_axis=True).to(device), camera_mat) - input_rays = input_rays / input_rays.norm(dim=-1, keepdim=True) - output_rays = output_rays / output_rays.norm(dim=-1, keepdim=True) - - def dir2sph(v): - u = (v[..., :2] ** 2).sum(-1).sqrt() - theta = torch.atan2(u, v[..., 2]) / math.pi - phi = torch.atan2(v[..., 1], v[..., 0]) / math.pi - return torch.stack([theta, phi], 1) - - input_rays = dir2sph(input_rays).reshape(batch_size, 2, res, res) - output_rays = dir2sph(output_rays).reshape(batch_size, 2, res * 2, res * 2) - return input_rays - - -def interpolate_sphere(z1, z2, t): - p = (z1 * z2).sum(dim=-1, keepdim=True) - p = p / z1.pow(2).sum(dim=-1, keepdim=True).sqrt() - p = p / z2.pow(2).sum(dim=-1, keepdim=True).sqrt() - omega = torch.acos(p) - s1 = torch.sin((1-t)*omega)/torch.sin(omega) - s2 = torch.sin(t*omega)/torch.sin(omega) - z = s1 * z1 + s2 * z2 - return z - - -def get_camera_rays(camera_matrices, pixels=None, res=None, margin=0): - device = camera_matrices[0].device - batch_size = camera_matrices[0].shape[0] - if pixels is None: - assert res is not None - pixels = arange_pixels((res, res), batch_size, invert_y_axis=True, margin=margin).to(device) - n_points = pixels.size(1) - pixels_world = image_points_to_world( - pixels, camera_mat=camera_matrices[0], - world_mat=camera_matrices[1]) - camera_world = origin_to_world( - n_points, camera_mat=camera_matrices[0], - world_mat=camera_matrices[1]) - ray_vector = pixels_world - camera_world - ray_vector = ray_vector / ray_vector.norm(dim=-1, keepdim=True) - return ray_vector, 
camera_world, pixels_world - - -def rotation_6d_to_matrix(d6: torch.Tensor) -> torch.Tensor: - """ - Converts 6D rotation representation by Zhou et al. [1] to rotation matrix - using Gram--Schmidt orthogonalization per Section B of [1]. - Args: - d6: 6D rotation representation, of size (*, 6) - - Returns: - batch of rotation matrices of size (*, 3, 3) - - [1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. - On the Continuity of Rotation Representations in Neural Networks. - IEEE Conference on Computer Vision and Pattern Recognition, 2019. - Retrieved from http://arxiv.org/abs/1812.07035 - """ - - a1, a2 = d6[..., :3], d6[..., 3:] - b1 = F.normalize(a1, dim=-1) - b2 = a2 - (b1 * a2).sum(-1, keepdim=True) * b1 - b2 = F.normalize(b2, dim=-1) - b3 = torch.cross(b1, b2, dim=-1) - return torch.stack((b1, b2, b3), dim=-2) - - -def camera_9d_to_16d(d9): - d6, translation = d9[..., :6], d9[..., 6:] - rotation = rotation_6d_to_matrix(d6) - RT = torch.eye(4).to(device=d9.device, dtype=d9.dtype).reshape( - 1, 4, 4).repeat(d6.size(0), 1, 1) - RT[:, :3, :3] = rotation - RT[:, :3, -1] = translation - return RT.reshape(-1, 16) - -def matrix_to_rotation_6d(matrix: torch.Tensor) -> torch.Tensor: - """ - Converts rotation matrices to 6D rotation representation by Zhou et al. [1] - by dropping the last row. Note that 6D representation is not unique. - Args: - matrix: batch of rotation matrices of size (*, 3, 3) - - Returns: - 6D rotation representation, of size (*, 6) - - [1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. - On the Continuity of Rotation Representations in Neural Networks. - IEEE Conference on Computer Vision and Pattern Recognition, 2019. - Retrieved from http://arxiv.org/abs/1812.07035 - """ - return matrix[..., :2, :].clone().reshape(*matrix.size()[:-2], 6) - - -def depth2pts_outside(ray_o, ray_d, depth): - ''' - ray_o, ray_d: [..., 3] - depth: [...]; inverse of distance to sphere origin - ''' - # note: d1 becomes negative if this mid point is behind camera - d1 = -torch.sum(ray_d * ray_o, dim=-1) / torch.sum(ray_d * ray_d, dim=-1) - p_mid = ray_o + d1.unsqueeze(-1) * ray_d - p_mid_norm = torch.norm(p_mid, dim=-1) - ray_d_cos = 1. / torch.norm(ray_d, dim=-1) - d2 = torch.sqrt(1. - p_mid_norm * p_mid_norm) * ray_d_cos - p_sphere = ray_o + (d1 + d2).unsqueeze(-1) * ray_d - - rot_axis = torch.cross(ray_o, p_sphere, dim=-1) - rot_axis = rot_axis / torch.norm(rot_axis, dim=-1, keepdim=True) - phi = torch.asin(p_mid_norm) - theta = torch.asin(p_mid_norm * depth) # depth is inside [0, 1] - rot_angle = (phi - theta).unsqueeze(-1) # [..., 1] - - # now rotate p_sphere - # Rodrigues formula: https://en.wikipedia.org/wiki/Rodrigues%27_rotation_formula - p_sphere_new = p_sphere * torch.cos(rot_angle) + \ - torch.cross(rot_axis, p_sphere, dim=-1) * torch.sin(rot_angle) + \ - rot_axis * torch.sum(rot_axis*p_sphere, dim=-1, keepdim=True) * (1.-torch.cos(rot_angle)) - p_sphere_new = p_sphere_new / torch.norm(p_sphere_new, dim=-1, keepdim=True) - pts = torch.cat((p_sphere_new, depth.unsqueeze(-1)), dim=-1) - - # now calculate conventional depth - depth_real = 1. 
/ (depth + TINY_NUMBER) * torch.cos(theta) * ray_d_cos + d1 - return pts, depth_real - - -def intersect_sphere(ray_o, ray_d, radius=1): - ''' - ray_o, ray_d: [..., 3] - compute the depth of the intersection point between this ray and unit sphere - ''' - # note: d1 becomes negative if this mid point is behind camera - d1 = -torch.sum(ray_d * ray_o, dim=-1) / torch.sum(ray_d * ray_d, dim=-1) - p = ray_o + d1.unsqueeze(-1) * ray_d - # consider the case where the ray does not intersect the sphere - ray_d_cos = 1. / torch.norm(ray_d, dim=-1) - d2 = radius ** 2 - torch.sum(p * p, dim=-1) - mask = (d2 > 0) - d2 = torch.sqrt(d2.clamp(min=1e-6)) * ray_d_cos - d1, d2 = d1.unsqueeze(-1), d2.unsqueeze(-1) - depth_range = [d1 - d2, d1 + d2] - return depth_range, mask - - -def normalize(x, axis=-1, order=2): - if isinstance(x, torch.Tensor): - l2 = x.norm(p=order, dim=axis, keepdim=True) - return x / (l2 + 1e-8), l2 - - else: - l2 = np.linalg.norm(x, order, axis) - l2 = np.expand_dims(l2, axis) - l2[l2==0] = 1 - return x / l2, l2 - - -def sample_pdf(bins, weights, N_importance, det=False, eps=1e-5): - """ - Sample @N_importance samples from @bins with distribution defined by @weights. - Inputs: - bins: (N_rays, N_samples_+1) where N_samples_ is "the number of coarse samples per ray - 2" - weights: (N_rays, N_samples_) - N_importance: the number of samples to draw from the distribution - det: deterministic or not - eps: a small number to prevent division by zero - Outputs: - samples: the sampled samples - Source: https://github.com/kwea123/nerf_pl/blob/master/models/rendering.py - """ - N_rays, N_samples_ = weights.shape - weights = weights + eps # prevent division by zero (don't do inplace op!) - pdf = weights / torch.sum(weights, -1, keepdim=True) # (N_rays, N_samples_) - cdf = torch.cumsum(pdf, -1) # (N_rays, N_samples), cumulative distribution function - cdf = torch.cat([torch.zeros_like(cdf[: ,:1]), cdf], -1) # (N_rays, N_samples_+1) - # padded to 0~1 inclusive - - if det: - u = torch.linspace(0, 1, N_importance, device=bins.device) - u = u.expand(N_rays, N_importance) - else: - u = torch.rand(N_rays, N_importance, device=bins.device) - u = u.contiguous() - - inds = torch.searchsorted(cdf, u) - below = torch.clamp_min(inds-1, 0) - above = torch.clamp_max(inds, N_samples_) - - inds_sampled = torch.stack([below, above], -1).view(N_rays, 2*N_importance) - cdf_g = torch.gather(cdf, 1, inds_sampled) - cdf_g = cdf_g.view(N_rays, N_importance, 2) - bins_g = torch.gather(bins, 1, inds_sampled).view(N_rays, N_importance, 2) - - denom = cdf_g[...,1]-cdf_g[...,0] - denom[denommard full movie hd 1080p amitabh bachchan amrita singh romance

      Download File ►►► https://urlca.com/2uDcJ2



- -After a strange incident, Harry's daughter Ruby and Mard fall in love. Suddenly, their lives are turned upside down when Harry arrives in town: he is very much in the dark about Ruby and Mard's relationship, and about why his daughter has taken a liking to Mard. Both of them return to Bombay hoping to make peace. But things get tricky when Amrita Singh arrives from Bangalore and tries to make Amrita love Mard. Will the truth be revealed? Watch the full movie for free. It's the year 2008, and Nair is the first person to inherit the Indian Institute of Management (IIM) premises. He plans to construct an artificial lake there in order to attract investment and new faculty. The plan is approved and the lake is being built. While discussing the project with the townspeople, Nair begins to realize that it will take a lot of time, energy, and money to build the lake, so he plans to sell his land to fund it; he needs the money badly as his family is facing financial problems. But the good news is that Nair's house has a huge water supply, and he decides to build a dam at the same location. The biggest obstacle he faces is that the land is under the control of a realtor, Suresh, who wants to make as much money from the land as he can. - -Download Free Videos & Download Sms Messages, click here. 4fefd39f24
      -
      -
      -

      diff --git a/spaces/farukozderim/zero-shotts/app.py b/spaces/farukozderim/zero-shotts/app.py deleted file mode 100644 index 996af4595aace2232ad37aa42fc73675bcbc9476..0000000000000000000000000000000000000000 --- a/spaces/farukozderim/zero-shotts/app.py +++ /dev/null @@ -1,4 +0,0 @@ -import gradio as gr -name_list = ['spaces/micole66/fb-zeroshot', 'models/facebook/bart-large-mnli', 'models/NDugar/3epoch-3large', 'models/oigele/Fb_improved_zeroshot'] -interfaces = [gr.Interface.load(name) for name in name_list] -gr.mix.Parallel(*interfaces, title="Zero shot", description="Make a decision").launch() \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Crazy Plinko A Game That Will Make You Go Crazy for Money.md b/spaces/fatiXbelha/sd/Crazy Plinko A Game That Will Make You Go Crazy for Money.md deleted file mode 100644 index 1221d9e89d7b5b32e8af8520b80d800668303784..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Crazy Plinko A Game That Will Make You Go Crazy for Money.md +++ /dev/null @@ -1,122 +0,0 @@ -
      -

      Crazy Plinko: A Fun and Exciting Game of Chance

      -

      If you are a fan of The Price Is Right, you probably know about plinko, one of the most popular and exciting games on the show. Plinko is a game where you drop chips down a board with pegs and hope that they land in slots with high values. It's a simple but thrilling game that relies mostly on luck, but also on some skill and strategy.

      -

      But did you know that there is a variation of plinko called crazy plinko? Crazy plinko is a game where you can choose your own risk level and payouts, and play online with cryptocurrencies like bitcoin. It's a fun and innovative way to enjoy the classic game of plinko, with more options and possibilities.

      -

      crazy plinko


      Download File ····· https://urllie.com/2uNGEC



      -

      In this article, we will explore everything you need to know about crazy plinko, from its history and rules to its strategy and benefits. Whether you are a beginner or an expert, you will find something interesting and useful in this guide. So, let's get started!

      -

      History of Plinko

      -

      Plinko is a game that has a long and fascinating history. It originated from pachinko, a Japanese game of chance that dates back to the early 20th century. Pachinko is similar to pinball, but instead of using flippers, you use a knob to launch balls into a vertical board with pins and pockets. The goal is to get as many balls as possible into the pockets that have prizes or payouts.

      -

      Crazy plinko paypal games for money
      -Crazy plinko earn paypal money fast
      -Crazy plinko free amazon gift cards
      -Crazy plinko apps that give you gift cards
      -Crazy plinko make money app
      -Crazy coin plinko game
      -Crazy coin plinko winner
      -Crazy coin plinko challenge
      -Crazy coin plinko coins rain
      -Crazy coin plinko token lights
      -Crazy plinko android app download
      -Crazy plinko apk free download
      -Crazy plinko appbrain review
      -Crazy plinko content rating everyone
      -Crazy plinko casual game by GGAME Studio
      -Crazy plinko youtube video review
      -Crazy plinko youtube freerewardsgaming channel
      -Crazy plinko youtube moneymaking tips
      -Crazy plinko how to play guide
      -Crazy plinko how to win guide
      -Crazy plinko best strategy guide
      -Crazy plinko tips and tricks guide
      -Crazy plinko cheats and hacks guide
      -Crazy plinko mod apk unlimited money
      -Crazy plinko mod apk no ads
      -Crazy plinko online game play free
      -Crazy plinko online game no download
      -Crazy plinko online game multiplayer mode
      -Crazy plinko online game leaderboard and achievements
      -Crazy plinko online game chat and social features
      -Crazy plinko game for pc download free
      -Crazy plinko game for pc windows 10 compatible
      -Crazy plinko game for pc mac compatible
      -Crazy plinko game for pc bluestacks emulator compatible
      -Crazy plinko game for pc keyboard and mouse controls
      -Crazy plinko game for ios download free
      -Crazy plinko game for ios iphone compatible
      -Crazy plinko game for ios ipad compatible
      -Crazy plinko game for ios touch screen controls
      -Crazy plinko game for ios app store rating and reviews
      -What is crazy plinko game about?
      -What is crazy plinko game goal?
      -What is crazy plinko game genre?
      -What is crazy plinko game theme?
      -What is crazy plinko game graphics style?
      -How to get more coins in crazy plinko game?
      -How to get more tokens in crazy plinko game?
      -How to get more lights in crazy plinko game?
      -How to get more rewards in crazy plinko game?
      -How to get more fun in crazy plinko game?

      -

      Plinko was created by Frank Wayne, an executive producer on The Price Is Right, in 1983. He was inspired by pachinko and wanted to create a game that was easy to understand but hard to predict. He designed a board with pegs and slots, and gave contestants chips to drop down the board. The game was an instant hit, and became one of the most iconic games on the show.

      -

      Plinko has evolved over time, with different variations and modifications. For example, in 2004, the game was upgraded to have LED lights on the board and slots. In 2017, the game celebrated its 35th anniversary by offering a special $35,000 slot in addition to the regular $10,000 slot. In 2020, the game had its first-ever primetime special, where contestants could win up to $1 million.

      -

      Strategy of Plinko

      -

      While plinko is mostly a game of luck, there are some ways to increase your chances of winning or at least have more fun playing it. Here are some tips and tricks to play plinko effectively:

      -
        -
• Drop your chips into the center slots. If you want the best possible chances of your chips landing in the $10,000 slot (or $35,000 or $200,000 depending on the variation), drop each chip through an entry slot located at the center of the board. This way, you have a symmetrical chance of your chips bouncing left or right, and avoiding the $0 slots at the edges. Of course, this is not a guarantee, as the chips can still bounce unpredictably, but it's better than dropping them randomly or from the sides. (A quick simulation sketch follows this list.)
      • -
      • Watch the previous chips and adjust your strategy. If you have more than one chip to drop, you can observe how the previous chips behave and try to adjust your strategy accordingly. For example, if you notice that most of the chips tend to bounce to the left, you can try dropping your next chip slightly to the right of the center slot, or vice versa. This way, you can try to compensate for the bias of the board and aim for the higher-value slots.
      • -
      • Don't get too greedy or too cautious. Plinko is a game that can tempt you to either go for the highest possible payout or settle for the lowest possible risk. However, neither of these extremes is a good idea. If you go for the highest payout, you might end up with nothing or very little. If you go for the lowest risk, you might miss out on a big opportunity or get bored. The best strategy is to find a balance between risk and reward, and enjoy the game for what it is: a fun and exciting game of chance.
      • -
      -
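The center-drop tip can be sanity-checked with a minimal Python sketch. This is an idealized toy model, not the real Price Is Right board: it assumes 9 slots, 12 peg rows, a fair left/right bounce at every row, and side walls that simply clamp the chip, all of which are made-up parameters for illustration.

```python
import random
from collections import Counter

SLOTS = 9           # assumed number of prize slots (toy value)
ROWS = 12           # assumed number of peg rows (toy value)
MIDDLE = SLOTS // 2

def drop_chip(start_slot: float) -> int:
    """Drop one chip: each peg row shifts it half a slot left or right."""
    pos = start_slot
    for _ in range(ROWS):
        pos += random.choice((-0.5, 0.5))
        pos = min(max(pos, 0), SLOTS - 1)   # side walls clamp the chip
    return round(pos)

def middle_slot_rate(start_slot: float, trials: int = 100_000) -> float:
    """Fraction of chips that land in the middle (highest-value) slot."""
    hits = Counter(drop_chip(start_slot) for _ in range(trials))
    return hits[MIDDLE] / trials

if __name__ == "__main__":
    print(f"center drop: middle slot about {middle_slot_rate(MIDDLE):.1%} of the time")
    print(f"edge drop:   middle slot about {middle_slot_rate(0):.1%} of the time")
```

In this toy model the center drop reaches the middle slot several times more often than the edge drop, which is exactly the intuition behind the tip.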

      What is Crazy Plinko?

      -

      Crazy plinko is a variation of plinko that allows you to play online with cryptocurrencies like bitcoin. It's a game that combines the thrill of plinko with the convenience and innovation of online gambling. Here are some of the features and benefits of playing crazy plinko online:

      -
        -
• You can choose your own risk level and payouts. Unlike regular plinko, where the payouts are fixed and predetermined by the show, crazy plinko lets you choose how much you want to bet and how much you want to win. You can adjust the number of rows on the board, the number of slots on each row, and the value of each slot. This way, you can customize your own plinko experience and play according to your preferences and budget. (An expected-payout sketch follows this list.)
      • -
      • You can play with cryptocurrencies like bitcoin. Crazy plinko is one of the many games that you can play with bitcoin and other cryptocurrencies online. This means that you can enjoy fast, secure, and anonymous transactions, as well as low fees and high bonuses. You can also take advantage of the volatility and value of cryptocurrencies, and potentially win more than you expected.
      • -
      • You can play anytime and anywhere. Crazy plinko is available 24/7 on various online platforms and devices. You don't have to wait for The Price Is Right to air on TV or go to a casino to play plinko. You can simply log in to your favorite plinko gambling site, make a deposit, and start playing right away. You can also play on your mobile phone or tablet, and enjoy plinko on the go.
      • -
      -
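Because the rows, slots, and slot values are configurable, a quick way to compare risk settings is to compute the expected return of a single chip. The sketch below assumes an idealized board where a chip dropped over the centre lands in slot k with binomial probability comb(rows, k) / 2**rows; the two payout rows are invented multipliers for illustration only and are not the actual tables used by any real site.

```python
from math import comb

def slot_probabilities(rows: int) -> list:
    """Binomial landing probabilities for a centre drop on an idealized board."""
    total = 2 ** rows
    return [comb(rows, k) / total for k in range(rows + 1)]

def expected_return(multipliers: list, bet: float = 1.0) -> float:
    """Expected payout of one chip, given one payout multiplier per slot."""
    rows = len(multipliers) - 1
    return bet * sum(p * m for p, m in zip(slot_probabilities(rows), multipliers))

if __name__ == "__main__":
    # Hypothetical 8-row payout tables (one multiplier per slot), purely illustrative.
    low_risk = [5, 2, 1.1, 1, 0.5, 1, 1.1, 2, 5]
    high_risk = [29, 4, 1.5, 0.3, 0.2, 0.3, 1.5, 4, 29]
    for name, table in (("low risk", low_risk), ("high risk", high_risk)):
        print(f"{name}: expected return per 1 unit bet = {expected_return(table):.3f}")
```

Both made-up tables return a little less than the stake on average, which is how a house edge typically shows up; the high-risk row just concentrates that return into rarer, larger wins.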

      Conclusion

      -

      Plinko is a game that has captivated millions of people around the world for decades. It's a game that combines simplicity, excitement, and luck in a way that few other games can match. Whether you play it on TV, in a casino, or online, plinko is a game that will always keep you entertained and engaged.

      -

      Crazy plinko is a variation of plinko that takes the game to a whole new level. It's a game that lets you choose your own risk level and payouts, play with cryptocurrencies like bitcoin, and play anytime and anywhere. It's a game that offers more options and possibilities than regular plinko, while still maintaining its core appeal.

      -

      If you are looking for a fun and exciting game of chance that will challenge your luck and skill, look no further than crazy plinko. It's a game that will make you feel like you are on The Price Is Right, but with more control and convenience. So what are you waiting for? Try out crazy plinko online today and see for yourself why it's one of the best games ever!

      -

      FAQs

      -

      What is the highest amount of money ever won on plinko?

      -

      The highest amount of money ever won on plinko was $41,000 by contestant Ryan Glass in 2017. He dropped five chips into the $10,000 slot (including one during his practice round), setting a new record for the most money won on plinko.

      -

      How many times has plinko been played on The Price Is Right?

      -

Plinko has been played over 1,000 times on The Price Is Right since its debut in 1983. It is one of the most frequently played games on the show, and one of the most popular among fans and contestants alike.

      -

      How can I play crazy plinko online for free or for real money?

      -

      There are many online platforms that offer crazy plinko games for free or for real money. You can find them by searching for "crazy plinko" or "plinko gambling" on your web browser. Some of the best plinko gambling sites with bitcoin bonuses are:

Site | Bonus | Features
Stake.com | 10% rakeback + weekly bonuses | Customizable plinko board, provably fair, live chat, VIP program
BC.Game | Up to 1 BTC welcome bonus + daily rewards | Multiple plinko variations, provably fair, live chat, faucet
DuckDice.io | Up to 0.5 BTC first deposit bonus + loyalty program | Classic plinko game, provably fair, live chat, faucet
      -

      What are some of the best plinko gambling sites with bitcoin bonuses?

      -

      Some of the best plinko gambling sites with bitcoin bonuses are Stake.com, BC.Game, and DuckDice.io. These sites offer generous bonuses, customizable plinko boards, provably fair games, live chat support, and more. You can check out the table above for more details.

      -

      How can I make my own plinko game at home?

      -

      If you want to make your own plinko game at home, you will need some materials and tools, such as a plywood board, nails, hammer, drill, paint, and chips. You can follow these steps to make your own plinko game:

      -
        -
      1. Cut the plywood board into a rectangular shape that is about 4 feet tall and 2 feet wide.
      2. -
      3. Paint the board with your desired color and let it dry.
      4. -
      5. Drill holes on the top edge of the board, about 2 inches apart. These will be the entry slots for the chips.
      6. -
      7. Hammer nails into the board in a staggered pattern, leaving about 2 inches of space between each nail. These will be the pegs that will bounce the chips.
      8. -
      9. Cut out slots on the bottom edge of the board, about 4 inches wide and 2 inches deep. These will be the prize slots for the chips. You can paint them with different colors and values.
      10. -
      11. Make some chips out of cardboard or plastic, about 2 inches in diameter. You can paint them with your desired color and number.
      12. -
      13. Your plinko game is ready! You can play it by dropping chips through the entry slots and watching them fall into the prize slots.
      14. -

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download 26 11 Full Movie WORK.md b/spaces/fatiXbelha/sd/Download 26 11 Full Movie WORK.md deleted file mode 100644 index d1e7db3e1064cfe91c09e83329f3dad49d524150..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download 26 11 Full Movie WORK.md +++ /dev/null @@ -1,47 +0,0 @@ - -

      Download 26/11 Full Movie: How to Watch the Thrilling Film Online

      -

      If you are looking for a movie that will keep you on the edge of your seat, then you should download 26/11 full movie. This is a 2013 Hindi-language action thriller film directed by Ram Gopal Varma, based on the book Kasab: The Face of 26/11 by Rommel Rodrigues. It depicts the horrific terrorist attacks that took place in Mumbai on November 26, 2008, and the brave response of the police and security forces.

      -

      download 26 11 full movie


      Download Filehttps://urllie.com/2uNAkM



      -

      What is 26/11 Movie About?

      -

      The movie follows the events of the 26/11 attacks, which lasted for four days and claimed the lives of 166 people and injured over 300. It focuses on the perspective of the Mumbai Police Commissioner Rakesh Maria, who led the investigation and interrogation of the sole surviving terrorist Ajmal Kasab. It also shows the ordeal of the hostages and victims who were trapped in various locations, such as the Taj Mahal Palace Hotel, the Oberoi Trident Hotel, the Chhatrapati Shivaji Terminus railway station, and the Nariman House.

      -

      The Plot

      -

      The movie begins with a group of ten terrorists arriving in Mumbai by sea from Pakistan. They split into pairs and carry out coordinated attacks across the city, armed with AK-47 rifles, grenades, and explosives. They target civilians and security personnel indiscriminately, creating chaos and panic. The Mumbai Police and other agencies try to contain the situation and rescue the hostages, but face many challenges and casualties. The movie also shows the interrogation of Kasab by Maria, who tries to extract information about his motives, background, and handlers. The movie ends with the execution of Kasab by hanging in 2012.

      -

      The Cast

      -

      The movie features Sanjeev Jaiswal in his film debut, playing the role of Ajmal Kasab. He delivers a chilling performance as the cold-blooded terrorist who shows no remorse for his actions. Nana Patekar plays Rakesh Maria, who portrays the calm and composed officer who leads the investigation with determination and professionalism. Other actors include Atul Kulkarni as Inspector Shashank Shinde, Ganesh Yadav as Constable Amar Singh Solanki, Asif Basra as Taxi Driver, Sadh Orhan as Abu Ismail, Farzad Jehani as Self / Owner of Leopold Cafe, and many more.

      -

      The Reception

      -

      The movie received mixed reviews from critics and audiences. Some praised it for its realistic and gripping depiction of the attacks, while others criticized it for its sensationalism and lack of depth. The movie was also controversial for its portrayal of Kasab and his confession. The movie was banned in Pakistan for allegedly showing anti-Pakistan sentiments. The movie was also nominated for several awards, such as Best Actor (Nana Patekar), Best Director (Ram Gopal Varma), Best Editing (Sunil Wadhwani), Best Sound Design (Nihar Ranjan Samal), and Best Screenplay (Rommel Rodrigues).

      -

      Where to Download 26/11 Full Movie?

      -

      If you want to download 26/11 full movie, you have several options to choose from. Here are some of them:

      -

      Eros Now

      -

      Eros Now is a streaming service that offers a wide range of Indian movies, TV shows, music videos, and original content. You can watch 26/11 full movie on Eros Now with a subscription plan that costs ₹49 per month or ₹399 per year. You can also download the movie offline on your device and watch it anytime. You can also enjoy other features like subtitles, HD quality, and ad-free streaming. You can access Eros Now on your web browser, smartphone, tablet, smart TV, or streaming device.

      -

      Voot

      -

      Voot is another streaming service that offers a variety of Indian content, including movies, TV shows, originals, news, and live channels. You can watch 26/11 full movie on Voot with a subscription plan that costs ₹99 per month or ₹499 per year. You can also download the movie offline on your device and watch it later. You can also enjoy other features like subtitles, HD quality, and ad-free streaming. You can access Voot on your web browser, smartphone, tablet, smart TV, or streaming device.

      -

      -

      Google Play Movies, YouTube, Apple TV

      -

      If you prefer to rent or buy the movie instead of subscribing to a streaming service, you can also download 26/11 full movie from Google Play Movies, YouTube, or Apple TV. You can rent the movie for ₹25 or buy it for ₹190 on Google Play Movies or YouTube. You can rent the movie for ₹120 or buy it for ₹490 on Apple TV. You can also download the movie offline on your device and watch it anytime. You can also enjoy other features like subtitles, HD quality, and ad-free streaming. You can access Google Play Movies, YouTube, or Apple TV on your web browser, smartphone, tablet, smart TV, or streaming device.

      -

      Why You Should Download 26/11 Full Movie?

      -

      There are many reasons why you should download 26/11 full movie and watch it at your convenience. Here are some of them:

      -

      It's Based on a True Story

      -

      The movie is based on the book Kasab: The Face of 26/11 by Rommel Rodrigues, which is a factual account of the 26/11 attacks and the investigation that followed. The movie does not fictionalize or dramatize the events, but presents them as they happened. The movie also uses real footage and audio clips from the attacks to create a realistic and authentic experience for the viewers.

      -

      It's a Gripping and Realistic Portrayal

      -

      The movie does not shy away from showing the brutality and horror of the attacks, but also the courage and resilience of the people who faced them. The movie captures the emotions and reactions of the hostages, victims, terrorists, police officers, and other stakeholders in a realistic and compelling way. The movie also shows the challenges and difficulties that the police and security forces faced in dealing with the situation and bringing it to an end.

      -

      It's a Tribute to the Brave Heroes and Victims

      -

      The movie is not only a depiction of the 26/11 attacks, but also a tribute to the brave heroes and victims who lost their lives or suffered injuries in the process. The movie honors the sacrifice and service of the police officers, security personnel, hotel staff, journalists, doctors, and civilians who risked their lives to save others or fight back against the terrorists. The movie also pays respect to the memory and dignity of those who died in the attacks.

      -

      Conclusion

      -

      26/11 full movie is a must-watch for anyone who wants to learn more about one of the most tragic and terrifying events in Indian history. The movie is a realistic and gripping portrayal of the 26/11 attacks and the response that followed. The movie is also a tribute to the brave heroes and victims who showed courage and resilience in the face of adversity. You can download 26/11 full movie from various platforms like Eros Now, Voot, Google Play Movies, YouTube, or Apple TV and watch it at your convenience.

      -

      FAQs

      -

      Here are some frequently asked questions about 26/11 full movie:

      -
        -
      • Is 26/11 full movie available on Netflix?
      • -

        No, 26/11 full movie is not available on Netflix as of now. You can check other platforms like Eros Now, Voot, Google Play Movies, YouTube, or Apple TV to download or stream the movie.

        -
      • Is 26/11 full movie based on a true story?
      • -

        Yes, 26/11 full movie is based on a true story. It is based on the book Kasab: The Face of 26/11 by Rommel Rodrigues, which is a factual account of the 26/11 attacks and the investigation that followed.

        -
      • Who played Ajmal Kasab in 26/11 full movie?
      • -

Ajmal Kasab was played by Sanjeev Jaiswal in his film debut. He delivered a chilling performance as the cold-blooded terrorist who showed no remorse for his actions.
        -
      • Who directed 26/11 full movie?
      • -

        26/11 full movie was directed by Ram Gopal Varma, who is known for his films in various genres like crime, horror, thriller, and drama. He is also the director of other movies like Satya, Company, Sarkar, Bhoot, and Rangeela.

        -
      • How long is 26/11 full movie?
      • -

        26/11 full movie has a runtime of 116 minutes. It was released on March 1, 2013 in India and other countries.

        -

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download MP3 Kizz Daniel and EMPIREs Cough (Odo) - I Want to Flex My Love on TikTok.md b/spaces/fatiXbelha/sd/Download MP3 Kizz Daniel and EMPIREs Cough (Odo) - I Want to Flex My Love on TikTok.md deleted file mode 100644 index 5187e5ee592109671cf6ffd6e0750eb1ec9abbb9..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download MP3 Kizz Daniel and EMPIREs Cough (Odo) - I Want to Flex My Love on TikTok.md +++ /dev/null @@ -1,104 +0,0 @@ - -

      Download MP3 I Want to Flex My Love: How to Enjoy the Viral Song by Kizz Daniel and EMPIRE

      -

      If you are a fan of Afrobeat music, you might have heard of the viral song "I Want to Flex My Love" by Nigerian singer Kizz Daniel and American record label EMPIRE. The song, also known as "Cough" or "Odo Yewu", is a catchy and romantic tune that has taken over TikTok and other social media platforms. In this article, we will tell you what the song is about, how to download it as MP3, and how to enjoy it more.

      -

      What is the song about?

      -

      The song is a love song that expresses the desire of the singer to impress and spoil his lover. He wants to take her away to a place she loves, and show her how much he cares for her. He also warns her not to listen to other people who might try to ruin their relationship. The song is sung in English and Yoruba, a Nigerian language.

      -

      download mp3 i want to flex my love


      Download » https://urllie.com/2uNCn7



      -

      The meaning of "flex my love"

      -

      The phrase "flex my love" is a slang term that means to show off or flaunt one's love for someone. It can also mean to be proud or happy about one's relationship, usually in a way that annoys others. According to Cambridge Dictionary, "flex" can also mean to bend or tighten a muscle, or to change something slightly to make it more suitable for a situation. In the context of the song, the singer wants to flex his love by bending his rules, changing his plans, and tightening his grip on his lover.

      -

      The lyrics and music video of the song

      -

      The lyrics of the song are simple and catchy, with a lot of repetition and rhyme. The chorus goes like this:

      -
      -

      I want to flex my love (eh eh)
      -I wan impress (eh eh)
      -And I want to carry my love away (eh eh)
      -To a place she loves (eh eh)
      -Ah my woman woman (eh eh)
      -I wan impress (eh eh)
      -And I want to carry my love away (eh eh)
      -To a place she loves (eh eh)

      -
      -

      The music video of the song features Kizz Daniel and EMPIRE in various scenes, such as a beach, a club, a car, and a bedroom. They are surrounded by beautiful women, dancing, drinking, and having fun. The video has over 2 million views on YouTube as of June 2021.

      -

      How to download the song as MP3

      -

      If you love the song and want to listen to it offline or on your preferred device, you might want to download it as MP3. MP3 is a popular audio format that compresses sound files without losing much quality. It is compatible with most devices and players.

      -

      Why you might want to download the song as MP3

      -

      There are several reasons why you might want to download the song as MP3. Some of them are:

      -
        -
      • You want to save data or bandwidth by not streaming the song online every time.
      • -
      • You want to have more control over your music library and playlists.
      • -
      • You want to edit or remix the song with other software or tools.
      • -
      • You want to use the song as a ringtone, alarm, or notification sound.
      • -
      • You want to share the song with your friends or family who might not have access to the song online.
      • -
      -

      The best MP3 converters online

      -

      There are many websites and apps that can help you convert the song to MP3. However, not all of them are reliable, safe, or fast. Some of them might have annoying ads, malware, or low-quality output. To help you avoid these problems, we have selected three of the best MP3 converters online that you can use for free. They are:

      -

      YouTube to Mp3 Converter

      -

      This is a simple and easy-to-use website that allows you to convert any YouTube video to MP3 in seconds. All you have to do is copy and paste the URL of the video, choose the quality and format, and click on "Convert". The website will then process the video and provide you with a download link. You can also use this website to download videos from other platforms, such as Facebook, Instagram, Vimeo, and more.

      -

      download mp3 kizz daniel cough odo
      -download mp3 kizz daniel i want to flex my love lyrics
      -download mp3 kizz daniel and empire cough song
      -download mp3 kizz daniel odo yewu
      -download mp3 kizz daniel i want to impress my woman
      -download mp3 kizz daniel i want to carry my love away
      -download mp3 kizz daniel to a place she loves
      -download mp3 kizz daniel cough tiktok song
      -download mp3 kizz daniel afrobeat music
      -download mp3 kizz daniel where we come from album
      -download mp3 kiss daniel i want to flex my love
      -download mp3 kiss daniel cough odo yewu
      -download mp3 kiss daniel and empire viral song
      -download mp3 kiss daniel odo yewu lyrics
      -download mp3 kiss daniel i want to impress my woman woman
      -download mp3 kiss daniel i want to carry my love away away
      -download mp3 kiss daniel to a place she loves loves
      -download mp3 kiss daniel cough trending song
      -download mp3 kiss daniel afro music
      -download mp3 kiss daniel where we come from vol 01
      -free download mp3 kizz daniel i want to flex my love
      -free download mp3 kizz daniel cough odo odoyewu
      -free download mp3 kizz daniel and empire musiclyfer
      -free download mp3 kizz daniel odo yewu eh eh
      -free download mp3 kizz daniel i want to impress eh eh
      -free download mp3 kizz daniel i want to carry my love eh eh
      -free download mp3 kizz daniel to a place she loves eh eh
      -free download mp3 kizz daniel cough lyrics video
      -free download mp3 kizz daniel afrobeat song
      -free download mp3 kizz daniel where we come from 2022
      -free download mp3 kiss daniel i want to flex my love odo yewu
      -free download mp3 kiss daniel cough odo yewu eh eh
      -free download mp3 kiss daniel and empire youtube video
      -free download mp3 kiss daniel odo yewu odoyewu eh eh
      -free download mp3 kiss daniel i want to impress and carry my love away away away away away away away away away away away away away away away away away away away away away away away away away away

      -

      You can access the website here: [YouTube to Mp3 Converter]

      -

      Online Audio Converter

      -

      This is a powerful and versatile website that can convert any audio file to MP3 or other formats. You can upload your file from your computer, Google Drive, Dropbox, or a URL. You can also adjust the settings, such as bitrate, sample rate, channels, and volume. The website supports over 300 different audio formats and can process multiple files at once.

      -

      You can access the website here: [Online Audio Converter]

      -

      CloudConvert

      -

      This is a professional and secure website that can convert any file type to MP3 or other formats. You can upload your file from your computer, Google Drive, Dropbox, OneDrive, Box, or a URL. You can also customize the options, such as codec, bitrate, metadata, and tags. The website uses advanced encryption and deletes your files after conversion.

      -

      You can access the website here: [CloudConvert]
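If you already have the audio as a local file and would rather not upload it to a third-party website, a library such as pydub can do the conversion offline. The snippet below is only an illustrative sketch: it assumes Python with the pydub package installed and an ffmpeg binary on your PATH, and the file names are placeholders rather than real downloads.

```python
from pydub import AudioSegment  # pip install pydub (also requires ffmpeg on your PATH)

def convert_to_mp3(src_path: str, dst_path: str, bitrate: str = "192k") -> None:
    """Convert any ffmpeg-readable audio file to an MP3 at the given bitrate."""
    audio = AudioSegment.from_file(src_path)   # input format is detected automatically
    audio.export(dst_path, format="mp3", bitrate=bitrate)

if __name__ == "__main__":
    # Placeholder file names -- replace them with your own local files.
    convert_to_mp3("my_song.m4a", "my_song.mp3")
```

Note that re-encoding an already lossy file loses a little more quality, so choose a bitrate at least as high as the source.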

      -

      How to enjoy the song more

      -

      Now that you have downloaded the song as MP3, you might want to enjoy it more. Here are some tips on how to do that:

      -

      Listen to it with headphones or speakers

      -

      The song has a great sound quality and production that deserves to be heard with clarity and volume. You can use headphones or speakers to enhance your listening experience and immerse yourself in the music. You can also use an equalizer or a bass booster to adjust the sound to your liking.

      -

      Sing along or dance to it

      -

      The song has a catchy and upbeat melody that makes you want to sing along or dance to it. You can learn the lyrics by reading them online or watching the music video with subtitles. You can also follow some of the dance moves from the video or create your own. Singing along or dancing to the song can make you feel happier and more energetic.

      -

      Share it with your friends or on social media

      -

      The song is a viral hit that has inspired many people to create their own videos and memes using it. You can join the fun by sharing the song with your friends or on social media platforms like TikTok, Instagram, Facebook, Twitter, etc. You can also use hashtags like #iwanttoflexmylove #cough #odoyewu #kizzdaniel #empire to connect with other fans and see what they are doing with the song.

      -

      Conclusion

      -

      In this article, we have shown you how to download MP3 I want to flex my love by Kizz Daniel and EMPIRE. We have also given you some tips on how to enjoy the song more. We hope you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below.

      -

      FAQs

      -
        -
      • Where can I stream the song online?
        -You can stream the song online on platforms like YouTube, Spotify, Apple Music, Deezer, SoundCloud, etc.
      • -
      • Who are Kizz Daniel and EMPIRE?
        -Kizz Daniel is a Nigerian singer-songwriter who rose to fame with his hit songs like "Woju", "Yeba", "One Ticket", etc. He is known for his versatile style and vocal prowess.
        -EMPIRE is an American record label and distribution company that specializes in hip-hop, R&B, Latin, reggae, afrobeat, etc. It has worked with artists like Kendrick Lamar, Snoop Dogg, Ty Dolla $ign, Fat Joe, etc.
      • -
      • What is the genre of the song?
        -The song is a fusion of afrobeat, pop, and dancehall genres. It has a groovy and infectious rhythm that blends African and Caribbean influences.
      • -
      • When was the song released?
        -The song was released on February 14, 2023, as a Valentine's Day gift for the fans. It is part of Kizz Daniel's fourth studio album, King of Love, which was released on June 25, 2023.
      • -
      • How popular is the song?
        -The song is very popular and has received millions of streams, views, likes, comments, and shares across various platforms. It has also topped several charts and playlists in Nigeria and abroad. It has been praised by critics and fans alike for its catchy melody, smooth vocals, and romantic lyrics.
      • -

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/fb700/chatglm-fitness-RLHF/request_llm/bridge_newbing.py b/spaces/fb700/chatglm-fitness-RLHF/request_llm/bridge_newbing.py deleted file mode 100644 index 2136f01beb3edd25b94dd8048c20b63a14ef905e..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/request_llm/bridge_newbing.py +++ /dev/null @@ -1,254 +0,0 @@ -""" -======================================================================== -第一部分:来自EdgeGPT.py -https://github.com/acheong08/EdgeGPT -======================================================================== -""" -from .edge_gpt import NewbingChatbot -load_message = "等待NewBing响应。" - -""" -======================================================================== -第二部分:子进程Worker(调用主体) -======================================================================== -""" -import time -import json -import re -import logging -import asyncio -import importlib -import threading -from toolbox import update_ui, get_conf, trimmed_format_exc -from multiprocessing import Process, Pipe - -def preprocess_newbing_out(s): - pattern = r'\^(\d+)\^' # 匹配^数字^ - sub = lambda m: '('+m.group(1)+')' # 将匹配到的数字作为替换值 - result = re.sub(pattern, sub, s) # 替换操作 - if '[1]' in result: - result += '\n\n```reference\n' + "\n".join([r for r in result.split('\n') if r.startswith('[')]) + '\n```\n' - return result - -def preprocess_newbing_out_simple(result): - if '[1]' in result: - result += '\n\n```reference\n' + "\n".join([r for r in result.split('\n') if r.startswith('[')]) + '\n```\n' - return result - -class NewBingHandle(Process): - def __init__(self): - super().__init__(daemon=True) - self.parent, self.child = Pipe() - self.newbing_model = None - self.info = "" - self.success = True - self.local_history = [] - self.check_dependency() - self.start() - self.threadLock = threading.Lock() - - def check_dependency(self): - try: - self.success = False - import certifi, httpx, rich - self.info = "依赖检测通过,等待NewBing响应。注意目前不能多人同时调用NewBing接口(有线程锁),否则将导致每个人的NewBing问询历史互相渗透。调用NewBing时,会自动使用已配置的代理。" - self.success = True - except: - self.info = "缺少的依赖,如果要使用Newbing,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_newbing.txt`安装Newbing的依赖。" - self.success = False - - def ready(self): - return self.newbing_model is not None - - async def async_run(self): - # 读取配置 - NEWBING_STYLE, = get_conf('NEWBING_STYLE') - from request_llm.bridge_all import model_info - endpoint = model_info['newbing']['endpoint'] - while True: - # 等待 - kwargs = self.child.recv() - question=kwargs['query'] - history=kwargs['history'] - system_prompt=kwargs['system_prompt'] - - # 是否重置 - if len(self.local_history) > 0 and len(history)==0: - await self.newbing_model.reset() - self.local_history = [] - - # 开始问问题 - prompt = "" - if system_prompt not in self.local_history: - self.local_history.append(system_prompt) - prompt += system_prompt + '\n' - - # 追加历史 - for ab in history: - a, b = ab - if a not in self.local_history: - self.local_history.append(a) - prompt += a + '\n' - # if b not in self.local_history: - # self.local_history.append(b) - # prompt += b + '\n' - - # 问题 - prompt += question - self.local_history.append(question) - print('question:', prompt) - # 提交 - async for final, response in self.newbing_model.ask_stream( - prompt=question, - conversation_style=NEWBING_STYLE, # ["creative", "balanced", "precise"] - wss_link=endpoint, # "wss://sydney.bing.com/sydney/ChatHub" - ): - if not final: - print(response) - self.child.send(str(response)) - else: - print('-------- 
receive final ---------') - self.child.send('[Finish]') - # self.local_history.append(response) - - - def run(self): - """ - 这个函数运行在子进程 - """ - # 第一次运行,加载参数 - self.success = False - self.local_history = [] - if (self.newbing_model is None) or (not self.success): - # 代理设置 - proxies, = get_conf('proxies') - if proxies is None: - self.proxies_https = None - else: - self.proxies_https = proxies['https'] - # cookie - NEWBING_COOKIES, = get_conf('NEWBING_COOKIES') - try: - cookies = json.loads(NEWBING_COOKIES) - except: - self.success = False - tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n' - self.child.send(f'[Local Message] 不能加载Newbing组件。NEWBING_COOKIES未填写或有格式错误。') - self.child.send('[Fail]') - self.child.send('[Finish]') - raise RuntimeError(f"不能加载Newbing组件。NEWBING_COOKIES未填写或有格式错误。") - - try: - self.newbing_model = NewbingChatbot(proxy=self.proxies_https, cookies=cookies) - except: - self.success = False - tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n' - self.child.send(f'[Local Message] 不能加载Newbing组件。{tb_str}') - self.child.send('[Fail]') - self.child.send('[Finish]') - raise RuntimeError(f"不能加载Newbing组件。") - - self.success = True - try: - # 进入任务等待状态 - asyncio.run(self.async_run()) - except Exception: - tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n' - self.child.send(f'[Local Message] Newbing失败 {tb_str}.') - self.child.send('[Fail]') - self.child.send('[Finish]') - - def stream_chat(self, **kwargs): - """ - 这个函数运行在主进程 - """ - self.threadLock.acquire() - self.parent.send(kwargs) # 发送请求到子进程 - while True: - res = self.parent.recv() # 等待newbing回复的片段 - if res == '[Finish]': - break # 结束 - elif res == '[Fail]': - self.success = False - break - else: - yield res # newbing回复的片段 - self.threadLock.release() - - -""" -======================================================================== -第三部分:主进程统一调用函数接口 -======================================================================== -""" -global newbing_handle -newbing_handle = None - -def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False): - """ - 多线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - global newbing_handle - if (newbing_handle is None) or (not newbing_handle.success): - newbing_handle = NewBingHandle() - observe_window[0] = load_message + "\n\n" + newbing_handle.info - if not newbing_handle.success: - error = newbing_handle.info - newbing_handle = None - raise RuntimeError(error) - - # 没有 sys_prompt 接口,因此把prompt加入 history - history_feedin = [] - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]] ) - - watch_dog_patience = 5 # 看门狗 (watchdog) 的耐心, 设置5秒即可 - response = "" - observe_window[0] = "[Local Message]: 等待NewBing响应中 ..." 
- for response in newbing_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - observe_window[0] = preprocess_newbing_out_simple(response) - if len(observe_window) >= 2: - if (time.time()-observe_window[1]) > watch_dog_patience: - raise RuntimeError("程序终止。") - return preprocess_newbing_out_simple(response) - -def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None): - """ - 单线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - chatbot.append((inputs, "[Local Message]: 等待NewBing响应中 ...")) - - global newbing_handle - if (newbing_handle is None) or (not newbing_handle.success): - newbing_handle = NewBingHandle() - chatbot[-1] = (inputs, load_message + "\n\n" + newbing_handle.info) - yield from update_ui(chatbot=chatbot, history=[]) - if not newbing_handle.success: - newbing_handle = None - return - - if additional_fn is not None: - import core_functional - importlib.reload(core_functional) # 热更新prompt - core_functional = core_functional.get_core_functions() - if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话) - inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"] - - history_feedin = [] - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]] ) - - chatbot[-1] = (inputs, "[Local Message]: 等待NewBing响应中 ...") - response = "[Local Message]: 等待NewBing响应中 ..." - yield from update_ui(chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。") - for response in newbing_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=system_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - chatbot[-1] = (inputs, preprocess_newbing_out(response)) - yield from update_ui(chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。") - if response == "[Local Message]: 等待NewBing响应中 ...": response = "[Local Message]: NewBing响应异常,请刷新界面重试 ..." 
- history.extend([inputs, response]) - logging.info(f'[raw_input] {inputs}') - logging.info(f'[response] {response}') - yield from update_ui(chatbot=chatbot, history=history, msg="完成全部响应,请提交新问题。") - diff --git a/spaces/fclong/summary/fengshen/examples/classification/demo_classification_afqmc_erlangshen_offload.sh b/spaces/fclong/summary/fengshen/examples/classification/demo_classification_afqmc_erlangshen_offload.sh deleted file mode 100644 index f5ff555aa60e3cebd544b92a18443eb7505f8ae8..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/examples/classification/demo_classification_afqmc_erlangshen_offload.sh +++ /dev/null @@ -1,103 +0,0 @@ -MODEL_NAME="IDEA-CCNL/Erlangshen-MegatronBert-1.3B" - -TEXTA_NAME=sentence1 -TEXTB_NAME=sentence2 -LABEL_NAME=label -ID_NAME=id - -BATCH_SIZE=1 -VAL_BATCH_SIZE=1 -ZERO_STAGE=3 -config_json="./ds_config.json" - -cat < $config_json -{ - "train_micro_batch_size_per_gpu": $BATCH_SIZE, - "steps_per_print": 1000, - "gradient_clipping": 1, - "zero_optimization": { - "stage": ${ZERO_STAGE}, - "offload_optimizer": { - "device": "cpu", - "pin_memory": true - }, - "offload_param": { - "device": "cpu", - "pin_memory": true - }, - "overlap_comm": true, - "contiguous_gradients": true, - "sub_group_size": 1e9, - "stage3_max_live_parameters": 1e9, - "stage3_max_reuse_distance": 1e9 - }, - "zero_allow_untested_optimizer": false, - "fp16": { - "enabled": true, - "loss_scale": 0, - "loss_scale_window": 1000, - "hysteresis": 2, - "min_loss_scale": 1 - }, - "activation_checkpointing": { - "partition_activations": false, - "contiguous_memory_optimization": false - }, - "wall_clock_breakdown": false -} -EOT - -export PL_DEEPSPEED_CONFIG_PATH=$config_json - -DATA_ARGS="\ - --dataset_name IDEA-CCNL/AFQMC \ - --train_batchsize $BATCH_SIZE \ - --valid_batchsize $VAL_BATCH_SIZE \ - --max_length 128 \ - --texta_name $TEXTA_NAME \ - --textb_name $TEXTB_NAME \ - --label_name $LABEL_NAME \ - --id_name $ID_NAME \ - " - -MODEL_ARGS="\ - --learning_rate 1e-5 \ - --weight_decay 1e-1 \ - --warmup_ratio 0.01 \ - --num_labels 2 \ - --model_type huggingface-auto \ - " - -MODEL_CHECKPOINT_ARGS="\ - --monitor val_acc \ - --save_top_k 3 \ - --mode max \ - --every_n_train_steps 0 \ - --save_weights_only True \ - --dirpath . \ - --filename model-{epoch:02d}-{val_acc:.4f} \ - " - - -TRAINER_ARGS="\ - --max_epochs 67 \ - --gpus 1 \ - --num_nodes 1 \ - --strategy deepspeed_stage_${ZERO_STAGE}_offload \ - --gradient_clip_val 1.0 \ - --check_val_every_n_epoch 1 \ - --val_check_interval 1.0 \ - --precision 16 \ - --default_root_dir . 
\ - " - -options=" \ - --pretrained_model_path $MODEL_NAME \ - $DATA_ARGS \ - $MODEL_ARGS \ - $MODEL_CHECKPOINT_ARGS \ - $TRAINER_ARGS \ - " - -python3 finetune_classification.py $options - diff --git a/spaces/fclong/summary/fengshen/examples/clue1.1/data_preprocessing/chid_preprocessing.py b/spaces/fclong/summary/fengshen/examples/clue1.1/data_preprocessing/chid_preprocessing.py deleted file mode 100644 index e55aaf9b1c4ceed02343c5417aa205e570fef26c..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/examples/clue1.1/data_preprocessing/chid_preprocessing.py +++ /dev/null @@ -1,159 +0,0 @@ -import json -from tqdm import tqdm -import os -import re -import argparse - -mask_token='[MASK]' -label_mask='__' - - -def load_schema(train_answer,dev_answer): - with open(train_answer,'r',encoding='utf-8') as f: - train2id = json.loads(''.join(f.readlines())) - - with open(dev_answer,'r',encoding='utf-8') as f: - dev2id = json.loads(''.join(f.readlines())) - for k,v in dev2id.items(): - train2id[k]=v - - return train2id - - -def cut(sentence): - """ - 将一段文本切分成多个句子 - :param sentence: ['虽然BillRoper正忙于全新游戏 - :return: ['虽然BillRoper正..接近。' , '与父母,之首。' , '很多..常见。' , '”一位上..推进。' , ''”一直坚..市场。'' , '如今,...的70%。'] - """ - new_sentence = [] - sen = [] - for i in sentence: # 虽 - sen.append(i) - if i in ['。', '!', '?', '?',',',',']: - new_sentence.append("".join(sen)) #['虽然BillRoper正...接近。' , '与父母,...之首。' , ] - sen = [] - - if len(new_sentence) <= 1: # 一句话超过max_seq_length且没有句号的,用","分割,再长的不考虑了。 - new_sentence = [] - sen = [] - for i in sentence: - sen.append(i) - if i.split(' ')[0] in [',', ','] and len(sen) != 0: - new_sentence.append("".join(sen)) - sen = [] - - if len(sen) > 0: # 若最后一句话无结尾标点,则加入这句话 - new_sentence.append("".join(sen)) - return new_sentence - - -def get_answer_text(text,m): - sent_list=cut(text) - text1='' - text2='' - for i,sent in enumerate(sent_list): - if m in sent: - text1=''.join(sent_list[:i]) - if i+1>len(sent_list)-1: - text2='' - else: - text2=''.join(sent_list[i+1:]) - index_text=sent - return text1,text2,index_text - return '','','' - - - -def load_data(file_path,label2id): - with open(file_path, 'r', encoding='utf8') as f: - lines = f.readlines() - result=[] - for l,line in tqdm(enumerate(lines)): - data = json.loads(line) - choice=data['candidates'] - for s,sent in enumerate(data['content']): - masks=re.findall("#idiom\d{6}#", sent) - for m in masks: - text1,text2,index_text=get_answer_text(sent,m) - - masks1=re.findall("#idiom\d{6}#", text1) - for m1 in masks1: - text1=text1.replace(m1,choice[label2id[m1]]) - - masks2=re.findall("#idiom\d{6}#", text2) - for m2 in masks2: - text2=text2.replace(m2,choice[label2id[m2]]) - - masks3=re.findall("#idiom\d{6}#", index_text) - for m3 in masks3: - if m3!=m: - index_text=index_text.replace(m3,choice[label2id[m3]]) - - choice=[] - for c in data['candidates']: - choice.append(index_text.replace(m,c)) - - if len('.'.join(choice))>400: - choice=data['candidates'] - text1=text1+index_text.split(m)[0] - text2=index_text.split(m)[1]+text2 - - if len(text1)+len(text2)>512-len('.'.join(choice)): - split1=0 - split2=0 - while split1+split2<512-len('.'.join(choice)): - if split1Diablo 3 Download: How to Get the Ultimate Evil Edition of the Game -

      If you are looking for a thrilling and addictive game that will keep you hooked for hours, look no further than Diablo 3. This game is one of the most popular and successful action RPGs ever made, and it offers a lot of content and features that will satisfy any fan of the genre. In this article, we will tell you everything you need to know about Diablo 3 download, including what is the game about, what is the Ultimate Evil Edition, how to get it on different platforms, how to install and play it, and some tips and tricks to help you enjoy it even more.

      -

      diablo 3 download


      DOWNLOADhttps://gohhs.com/2uPrE3



      -

      What is Diablo 3 and why should you play it?

      -

      Diablo 3 is a hack-and-slash action RPG that was developed and published by Blizzard Entertainment in 2012. It is the third installment in the Diablo franchise, which started in 1996. The game is set in the dark fantasy world of Sanctuary, where you take on the role of one of seven hero classes – Barbarian, Crusader, Demon Hunter, Monk, Necromancer, Witch Doctor, or Wizard – and fight against the forces of evil led by Diablo, the Lord of Terror.

      -

      Diablo 3 is a hack-and-slash action RPG with fast-paced combat and loot-driven gameplay

      -

      The core gameplay of Diablo 3 is simple but satisfying. You use your mouse or controller to move your character around and unleash various skills on your enemies. You can customize your skills with runes that modify their effects. You can also equip different weapons, armor, jewelry, and other items that boost your stats and abilities. As you kill enemies, you gain experience points that allow you to level up and unlock new skills. You also collect gold and loot that you can use to buy or craft better gear.

      -

      Diablo 3 features seven classes, four difficulty modes, and a rich story set in the dark fantasy world of Sanctuary

      -

      One of the main attractions of Diablo 3 is its variety of classes. Each class has its own unique style, skills, resource system, strengths, and weaknesses. You can choose from:

      -
        -
      • The Barbarian, a mighty warrior who specializes in melee combat and can use powerful shouts to buff allies or debuff enemies
      • -
      • The Crusader, a holy knight who wields a shield and a flail and can summon divine energy to smite foes or protect allies
      • -
      • The Demon Hunter, a vengeful archer who uses crossbows, traps, and gadgets to hunt down demons and other evil creatures
      • -
      • The Monk, a martial artist who uses fists, staffs, and spirit to unleash swift and deadly attacks and heal or buff allies
      • -
      • The Necromancer, a master of the dead who can raise skeletons, command golems, and manipulate blood and curses
      • -
      • The Witch Doctor, a tribal shaman who can summon zombies, spiders, frogs, and other creatures and cast hexes and spells to harm or control enemies
      • -
      • The Wizard, a wielder of arcane magic who can manipulate time, space, and elements to blast enemies with fire, frost, lightning, and more
      • -
      -

      Another aspect of Diablo 3 that adds replay value is its four difficulty modes. You can choose from:

      -
        -
      • Normal, which is the easiest mode and suitable for beginners
      • -
      • Hard, which is slightly more challenging and offers better rewards
      • -
      • Expert, which is significantly harder and requires more strategy and skill
      • -
      • Master, which is the hardest mode and only recommended for experienced players who want the ultimate challenge
      • -
      -

      Finally, Diablo 3 also has a rich story that spans five acts. You will travel across various regions of Sanctuary, such as New Tristram, Caldeum, Bastion's Keep, Westmarch, and Pandemonium. You will encounter memorable characters, such as Deckard Cain, Leah, Tyrael, Adria, Imperius, Malthael, and of course, Diablo. You will also face many enemies, such as the Skeleton King, Maghda, Belial, Azmodan, Ghom, Siegebreaker Assault Beast, Cydaea, Rakanoth, Izual, Diablo himself (or herself), Urzael, Adria again (but evil), Malthael again (but evil), and many more.

      -

      What is the Ultimate Evil Edition and what does it include?

      -

      If you want to get the most out of Diablo 3 download, you should get the Ultimate Evil Edition, which is the definitive version of the game for consoles and PC. The Ultimate Evil Edition includes the following content and features:

      -

      diablo 3 download full version free
      -diablo 3 download size
      -diablo 3 download pc
      -diablo 3 download mac
      -diablo 3 download free pc game
      -diablo 3 download code
      -diablo 3 download ps4
      -diablo 3 download xbox one
      -diablo 3 download switch
      -diablo 3 download crack
      -diablo 3 download offline
      -diablo 3 download patch
      -diablo 3 download error
      -diablo 3 download slow
      -diablo 3 download stuck
      -diablo 3 download blizzard
      -diablo 3 download battle.net
      -diablo 3 download steam
      -diablo 3 download epic games
      -diablo 3 download google drive
      -diablo 3 download mega.nz
      -diablo 3 download torrent
      -diablo 3 download skidrow
      -diablo 3 download fitgirl repack
      -diablo 3 download highly compressed
      -diablo 3 download reddit
      -diablo 3 download moddb
      -diablo 3 download github
      -diablo 3 download editor
      -diablo 3 download trainer
      -diablo 3 download cheat engine
      -diablo 3 download save file
      -diablo 3 download characters
      -diablo 3 download items
      -diablo 3 download mods
      -diablo 3 download expansion pack
      -diablo 3 download reaper of souls
      -diablo 3 download rise of the necromancer
      -diablo 3 download eternal collection
      -diablo 3 download season pass
      -diablo 3 download season 24
      -diablo 3 download update
      -diablo 3 download latest version
      -diablo 3 download windows 10
      -diablo 3 download linux
      -diablo 3 download android apk
      -diablo 3 download ios app store

      -

      The Ultimate Evil Edition is the definitive version of Diablo 3 for consoles and PC

      -

      The Ultimate Evil Edition was released in 2014 for PS4 and Xbox One, and in 2017 for Nintendo Switch. It is also available for PC as a digital bundle. The Ultimate Evil Edition contains all the updates and patches that have been released for Diablo 3 since its launch, as well as some exclusive features that are only available on consoles.


      It includes the base game, the Reaper of Souls expansion, and the Rise of the Necromancer pack


The Ultimate Evil Edition includes not only the base game of Diablo 3, but also its only expansion, Reaper of Souls, and its only DLC pack, Rise of the Necromancer. Reaper of Souls adds a fifth act to the story, a new class (the Crusader), a new villain (Malthael), a new game mode (Adventure Mode), a higher level cap (70), and a revamped endgame progression system (Paragon 2.0). Rise of the Necromancer adds another new class (the Necromancer), two new zones (the Shrouded Moors and the Temple of the Firstborn), new challenges (Challenge Rifts and Set Dungeons), and more cosmetic items.


      It also adds exclusive features such as Adventure Mode, Paragon system, Legendary items, and more


The Ultimate Evil Edition also adds a number of features that were not available in the original release of Diablo 3, some of which are exclusive to the console versions. These include:

• Adventure Mode, which allows you to explore any region of Sanctuary without following the story, and complete various tasks such as bounties and rifts for rewards
• Paragon system, which allows you to continue leveling up after reaching the level cap, and gain points that you can spend on various attributes such as strength, vitality, critical hit chance, etc.
• Legendary items, which are powerful and unique items that have special effects and can change your gameplay drastically
• Nephalem Glory, which is a buff that you can get by destroying environmental objects or killing enemies, and that increases your damage and speed
• Local co-op, which allows you to play with up to four friends on the same screen or online
• Remote play, which allows you to play Diablo 3 on your PS Vita by streaming it from your PS4
• Cross-save, which allows you to transfer your save data between different platforms (except PC)

      How to download Diablo 3 on different platforms?


      If you want to download Diablo 3 on your preferred platform, you need to follow these steps:

      For PC, you need to create a Blizzard account and download the Battle.net app


      To download Diablo 3 on your PC, you need to have a Blizzard account, which is free to create. You can create one by visiting the official Blizzard website and clicking on the "Create a free account" button. You will need to provide your email address, password, country, and date of birth. You will also need to agree to the terms of service and privacy policy. Once you have created your account, you will need to verify your email address by clicking on the link that Blizzard will send you.


      After you have verified your email address, you will need to download the Battle.net app, which is the launcher for all Blizzard games. You can download it by visiting the official Battle.net website and clicking on the "Download for Windows" or "Download for Mac" button, depending on your operating system. You will need to run the installer and follow the instructions. Once you have installed the Battle.net app, you will need to log in with your Blizzard account.


      For PS4, Xbox One, and Switch, you can buy the game digitally or physically from online or retail stores


      To download Diablo 3 on your PS4, Xbox One, or Switch, you have two options: buying the game digitally or physically. If you want to buy the game digitally, you will need to have an account for the respective platform's online service: PlayStation Network (PSN) for PS4, Xbox Live for Xbox One, or Nintendo eShop for Switch. You will also need to have enough storage space on your console or an external device. You can buy the game digitally by visiting the online store of your platform and searching for "Diablo 3". You will see the Ultimate Evil Edition of the game, which costs $59.99 USD. You can add it to your cart and proceed to checkout. You will need to provide your payment information and confirm your purchase. The game will start downloading automatically after you buy it.


      If you want to buy the game physically, you will need to find a store that sells it. You can either visit a local retail store or order it online from websites such as Amazon, Best Buy, GameStop, Walmart, etc. You will see the Ultimate Evil Edition of the game, which costs $59.99 USD. You can buy it and wait for it to be delivered or pick it up from the store. You will need to insert the disc into your console and follow the instructions.


      For PS3 and Xbox 360, you can only buy the game physically from online or retail stores


To get Diablo 3 for your PS3 or Xbox 360, you only have one option: buying a physical copy, because the game is not sold digitally for these platforms. You will need to find a store that sells it. You can either visit a local retail store or order it online from websites such as Amazon, Best Buy, GameStop, Walmart, etc. You will see the Ultimate Evil Edition of the game, which costs $39.99 USD. You can buy it and wait for it to be delivered or pick it up from the store. You will need to insert the disc into your console and follow the instructions.

      How to install and play Diablo 3 on different platforms?


      After you have downloaded Diablo 3 on your preferred platform, you need to install and play it. The installation and gameplay process may vary slightly depending on the platform, but here are some general steps that you can follow:


      For PC, you need to launch the Battle.net app and install the game from there


      To install Diablo 3 on your PC, you need to launch the Battle.net app that you have downloaded and logged in with your Blizzard account. You will see a list of Blizzard games on the left side of the app. You need to click on Diablo 3 and then click on the "Install" button. The app will start downloading and installing the game on your PC. You can see the progress and the remaining time on the app. You can also pause or resume the download at any time. Once the download and installation are complete, you can click on the "Play" button to launch the game.


      To play Diablo 3 on your PC, you need to have an internet connection and a Blizzard account. You can choose to play solo or with other players online. You can also create or join clans and communities to chat and play with other players. You can use your mouse and keyboard or a controller to control your character. You can access the game settings, options, menus, inventory, skills, quests, map, etc. by pressing various keys or buttons. You can also use the chat window or voice chat to communicate with other players.


      For PS4, Xbox One, and Switch, you need to insert the disc or download the game from the store and follow the instructions


      To install Diablo 3 on your PS4, Xbox One, or Switch, you need to either insert the disc that you have bought physically or download the game that you have bought digitally from the respective platform's online store. If you have bought the game physically, you need to insert the disc into your console and wait for it to be recognized. The console will start installing the game automatically. You may also need to download some updates or patches before you can play the game. If you have bought the game digitally, you need to go to your library or home screen and find the game icon. You need to click on it and wait for it to be downloaded and installed.


      To play Diablo 3 on your PS4, Xbox One, or Switch, you need to have an internet connection if you want to play online with other players or access some online features such as seasons, leaderboards, etc. You can also play offline without an internet connection if you want to play solo or local co-op with up to four friends on the same screen. You can use your controller to control your character. You can access the game settings, options, menus, inventory, skills, quests, map, etc. by pressing various buttons or using the touch screen (for Switch). You can also use the chat window or voice chat to communicate with other players.


      For PS3 and Xbox 360, you need to insert the disc and follow the instructions


      To install Diablo 3 on your PS3 or Xbox 360, you need to insert the disc that you have bought physically into your console and wait for it to be recognized. The console will start installing the game automatically. You may also need to download some updates or patches before you can play the game.


      To play Diablo 3 on your PS3 or Xbox 360, you need to have an internet connection if you want to play online with other players or access some online features such as seasons, leaderboards, etc. You can also play offline without an internet connection if you want to play solo or local co-op with up to four friends on the same screen. You can use your controller to control your character. You can access the game settings, options, menus, inventory, skills, quests, map, etc. by pressing various buttons. You can also use the chat window or voice chat to communicate with other players.


      Tips and tricks for playing Diablo 3


      Now that you have installed and played Diablo 3 on your preferred platform, you may want to learn some tips and tricks that will help you enjoy the game even more. Here are some of them:


      Choose a class that suits your playstyle and preferences


      One of the first decisions that you will make in Diablo 3 is choosing your class. This will determine your skills, abilities, and playstyle for the rest of the game. You should choose a class that suits your preferences and goals. For example, if you like to deal massive damage from afar, you may want to choose the Demon Hunter or the Wizard. If you like to get up close and personal with your enemies, you may want to choose the Barbarian or the Crusader. If you like to summon minions and use dark magic, you may want to choose the Necromancer or the Witch Doctor. If you like to balance offense and defense with speed and agility, you may want to choose the Monk.


      Experiment with different skills, runes, and items to find your optimal build


      One of the fun aspects of Diablo 3 is experimenting with different combinations of skills, runes, and items to find your optimal build. You can change your skills and runes at any time without any cost or penalty. You can also swap your items as often as you like. You should try out different options and see what works best for you. You can also look for synergies between your skills, runes, items, and class passive abilities. For example, if you are playing as a Wizard and you have a skill that deals fire damage, you may want to equip a rune that increases your fire damage or an item that gives you a chance to ignite enemies on hit.


      Use gems, crafting, and enchanting to enhance your gear


Another way to improve your performance in Diablo 3 is to use gems, crafting, and enchanting to enhance your gear. Gems are special items that you can socket into your weapons, armor, and jewelry to give them additional stats or effects. You can find gems as loot, and you can combine lower quality gems into higher quality ones using the Jeweler NPC. Crafting is a process that allows you to create new items using materials that you can find as loot or get by salvaging unwanted gear. You can craft weapons and armor using the Blacksmith NPC, and jewelry using the Jeweler NPC. Enchanting is a process that allows you to modify an existing item by re-rolling one of its properties. You can enchant weapons, armor, and jewelry using the Mystic NPC.


      Play with friends or join online communities for more fun and loot


      One of the best ways to enjoy Diablo 3 is to play with friends or join online communities for more fun and loot. Playing with other players not only makes the game more social and cooperative, but also more rewarding and challenging. You can play with up to three other players in a party, either online or offline (for consoles). You can also join online communities that are based on common interests, such as class, region, language, game mode, etc. You can chat with other members of the community and join their games or invite them to yours. Playing with other players increases the difficulty of the game, but also increases the amount and quality of loot that drops.


      Conclusion


      Diablo 3 is a game that will keep you entertained for hours with its fast-paced combat, loot-driven gameplay, rich story, and varied classes. If you want to get the most out of it, you should download the Ultimate Evil Edition, which is the definitive version of the game for consoles and PC. It includes the base game, the Reaper of Souls expansion, and the Rise of the Necromancer pack, as well as exclusive features such as Adventure Mode, Paragon system, Legendary items, and more. You can download the game on different platforms, such as PC, PS4, Xbox One, Switch, PS3, and Xbox 360, by following the steps that we have explained in this article. You can also install and play the game on different platforms by following the instructions that we have provided. And finally, you can use some tips and tricks that we have shared to enhance your gameplay and have more fun. We hope that this article has helped you with your Diablo 3 download and that you will enjoy playing this amazing game.


      FAQs


      Here are some frequently asked questions about Diablo 3 download:

• Q: How much space does Diablo 3 take on different platforms?
• A: The size of Diablo 3 may vary slightly depending on the platform and the updates or patches that you have installed. However, here are some approximate sizes for each platform:
  • PC: 25 GB
  • PS4: 26 GB
  • Xbox One: 27 GB
  • Switch: 18 GB
  • PS3: 12 GB
  • Xbox 360: 8 GB
• Q: Can I play Diablo 3 offline on PC?
• A: No, you cannot play Diablo 3 offline on PC. You need to have an internet connection and a Blizzard account to play the game on PC. However, you can play Diablo 3 offline on consoles if you want to play solo or local co-op.
• Q: Can I transfer my save data from one platform to another?
• A: Yes, you can transfer your save data from one platform to another, except for PC. You can use the cross-save feature to transfer your save data between PS4, Xbox One, and Switch. You can also transfer your save data from PS3 to PS4 or from Xbox 360 to Xbox One using the cross-generation feature.
• Q: How many players can play Diablo 3 together online or offline?
• A: You can play Diablo 3 with up to four players in a party, either online or offline (for consoles). You can also join online communities and chat and play with other players.
• Q: How long does it take to finish Diablo 3?
• A: The length of Diablo 3 may vary depending on your playstyle, difficulty level, and game mode. However, here are some approximate times for each act of the story mode:
  • Act I: 4 hours
  • Act II: 5 hours
  • Act III: 4 hours
  • Act IV: 3 hours
  • Act V: 5 hours

      \ No newline at end of file diff --git a/spaces/fffiloni/Image-to-MusicGen/audiocraft/models/builders.py b/spaces/fffiloni/Image-to-MusicGen/audiocraft/models/builders.py deleted file mode 100644 index 77ee5f96fea2e3c9e475fe961bc1a5ee473ed8eb..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Image-to-MusicGen/audiocraft/models/builders.py +++ /dev/null @@ -1,218 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -All the functions to build the relevant models and modules -from the Hydra config. -""" - -import typing as tp -import warnings - -import audiocraft -import omegaconf -import torch - -from .encodec import CompressionModel, EncodecModel, FlattenedCompressionModel # noqa -from .lm import LMModel -from ..modules.codebooks_patterns import ( - CodebooksPatternProvider, - DelayedPatternProvider, - ParallelPatternProvider, - UnrolledPatternProvider, - VALLEPattern, - MusicLMPattern, -) -from ..modules.conditioners import ( - BaseConditioner, - ConditioningProvider, - LUTConditioner, - T5Conditioner, - ConditionFuser, - ChromaStemConditioner, -) -from .. import quantization as qt -from ..utils.utils import dict_from_config - - -def get_quantizer(quantizer: str, cfg: omegaconf.DictConfig, dimension: int) -> qt.BaseQuantizer: - klass = { - 'no_quant': qt.DummyQuantizer, - 'rvq': qt.ResidualVectorQuantizer - }[quantizer] - kwargs = dict_from_config(getattr(cfg, quantizer)) - if quantizer != 'no_quant': - kwargs['dimension'] = dimension - return klass(**kwargs) - - -def get_encodec_autoencoder(encoder_name: str, cfg: omegaconf.DictConfig): - if encoder_name == 'seanet': - kwargs = dict_from_config(getattr(cfg, 'seanet')) - encoder_override_kwargs = kwargs.pop('encoder') - decoder_override_kwargs = kwargs.pop('decoder') - encoder_kwargs = {**kwargs, **encoder_override_kwargs} - decoder_kwargs = {**kwargs, **decoder_override_kwargs} - encoder = audiocraft.modules.SEANetEncoder(**encoder_kwargs) - decoder = audiocraft.modules.SEANetDecoder(**decoder_kwargs) - return encoder, decoder - else: - raise KeyError(f'Unexpected compression model {cfg.compression_model}') - - -def get_compression_model(cfg: omegaconf.DictConfig) -> CompressionModel: - """Instantiate a compression model. - """ - if cfg.compression_model == 'encodec': - kwargs = dict_from_config(getattr(cfg, 'encodec')) - encoder_name = kwargs.pop('autoencoder') - quantizer_name = kwargs.pop('quantizer') - encoder, decoder = get_encodec_autoencoder(encoder_name, cfg) - quantizer = get_quantizer(quantizer_name, cfg, encoder.dimension) - frame_rate = kwargs['sample_rate'] // encoder.hop_length - renormalize = kwargs.pop('renormalize', None) - renorm = kwargs.pop('renorm') - if renormalize is None: - renormalize = renorm is not None - warnings.warn("You are using a deprecated EnCodec model. Please migrate to new renormalization.") - return EncodecModel(encoder, decoder, quantizer, - frame_rate=frame_rate, renormalize=renormalize, **kwargs).to(cfg.device) - else: - raise KeyError(f'Unexpected compression model {cfg.compression_model}') - - -def get_lm_model(cfg: omegaconf.DictConfig) -> LMModel: - """Instantiate a transformer LM. 
- """ - if cfg.lm_model == 'transformer_lm': - kwargs = dict_from_config(getattr(cfg, 'transformer_lm')) - n_q = kwargs['n_q'] - q_modeling = kwargs.pop('q_modeling', None) - codebooks_pattern_cfg = getattr(cfg, 'codebooks_pattern') - attribute_dropout = dict_from_config(getattr(cfg, 'attribute_dropout')) - cls_free_guidance = dict_from_config(getattr(cfg, 'classifier_free_guidance')) - cfg_prob, cfg_coef = cls_free_guidance["training_dropout"], cls_free_guidance["inference_coef"] - fuser = get_condition_fuser(cfg) - condition_provider = get_conditioner_provider(kwargs["dim"], cfg).to(cfg.device) - if len(fuser.fuse2cond['cross']) > 0: # enforce cross-att programatically - kwargs['cross_attention'] = True - if codebooks_pattern_cfg.modeling is None: - assert q_modeling is not None, \ - 'LM model should either have a codebook pattern defined or transformer_lm.q_modeling' - codebooks_pattern_cfg = omegaconf.OmegaConf.create( - {'modeling': q_modeling, 'delay': {'delays': list(range(n_q))}} - ) - pattern_provider = get_codebooks_pattern_provider(n_q, codebooks_pattern_cfg) - return LMModel( - pattern_provider=pattern_provider, - condition_provider=condition_provider, - fuser=fuser, - cfg_dropout=cfg_prob, - cfg_coef=cfg_coef, - attribute_dropout=attribute_dropout, - dtype=getattr(torch, cfg.dtype), - device=cfg.device, - **kwargs - ).to(cfg.device) - else: - raise KeyError(f'Unexpected LM model {cfg.lm_model}') - - -def get_conditioner_provider(output_dim: int, cfg: omegaconf.DictConfig) -> ConditioningProvider: - """Instantiate a conditioning model. - """ - device = cfg.device - duration = cfg.dataset.segment_duration - cfg = getattr(cfg, "conditioners") - cfg = omegaconf.OmegaConf.create({}) if cfg is None else cfg - conditioners: tp.Dict[str, BaseConditioner] = {} - with omegaconf.open_dict(cfg): - condition_provider_args = cfg.pop('args', {}) - for cond, cond_cfg in cfg.items(): - model_type = cond_cfg["model"] - model_args = cond_cfg[model_type] - if model_type == "t5": - conditioners[str(cond)] = T5Conditioner(output_dim=output_dim, device=device, **model_args) - elif model_type == "lut": - conditioners[str(cond)] = LUTConditioner(output_dim=output_dim, **model_args) - elif model_type == "chroma_stem": - model_args.pop('cache_path', None) - conditioners[str(cond)] = ChromaStemConditioner( - output_dim=output_dim, - duration=duration, - device=device, - **model_args - ) - else: - raise ValueError(f"unrecognized conditioning model: {model_type}") - conditioner = ConditioningProvider(conditioners, device=device, **condition_provider_args) - return conditioner - - -def get_condition_fuser(cfg: omegaconf.DictConfig) -> ConditionFuser: - """Instantiate a condition fuser object. - """ - fuser_cfg = getattr(cfg, "fuser") - fuser_methods = ["sum", "cross", "prepend", "input_interpolate"] - fuse2cond = {k: fuser_cfg[k] for k in fuser_methods} - kwargs = {k: v for k, v in fuser_cfg.items() if k not in fuser_methods} - fuser = ConditionFuser(fuse2cond=fuse2cond, **kwargs) - return fuser - - -def get_codebooks_pattern_provider(n_q: int, cfg: omegaconf.DictConfig) -> CodebooksPatternProvider: - """Instantiate a codebooks pattern provider object. 
- """ - pattern_providers = { - 'parallel': ParallelPatternProvider, - 'delay': DelayedPatternProvider, - 'unroll': UnrolledPatternProvider, - 'valle': VALLEPattern, - 'musiclm': MusicLMPattern, - } - name = cfg.modeling - kwargs = dict_from_config(cfg.get(name)) if hasattr(cfg, name) else {} - klass = pattern_providers[name] - return klass(n_q, **kwargs) - - -def get_debug_compression_model(device='cpu'): - """Instantiate a debug compression model to be used for unit tests. - """ - seanet_kwargs = { - 'n_filters': 4, - 'n_residual_layers': 1, - 'dimension': 32, - 'ratios': [10, 8, 16] # 25 Hz at 32kHz - } - encoder = audiocraft.modules.SEANetEncoder(**seanet_kwargs) - decoder = audiocraft.modules.SEANetDecoder(**seanet_kwargs) - quantizer = qt.ResidualVectorQuantizer(dimension=32, bins=400, n_q=4) - init_x = torch.randn(8, 32, 128) - quantizer(init_x, 1) # initialize kmeans etc. - compression_model = EncodecModel( - encoder, decoder, quantizer, - frame_rate=25, sample_rate=32000, channels=1).to(device) - return compression_model.eval() - - -def get_debug_lm_model(device='cpu'): - """Instantiate a debug LM to be used for unit tests. - """ - pattern = DelayedPatternProvider(n_q=4) - dim = 16 - providers = { - 'description': LUTConditioner(n_bins=128, dim=dim, output_dim=dim, tokenizer="whitespace"), - } - condition_provider = ConditioningProvider(providers) - fuser = ConditionFuser( - {'cross': ['description'], 'prepend': [], - 'sum': [], 'input_interpolate': []}) - lm = LMModel( - pattern, condition_provider, fuser, - n_q=4, card=400, dim=dim, num_heads=4, custom=True, num_layers=2, - cross_attention=True, causal=True) - return lm.to(device).eval() diff --git a/spaces/fffiloni/audio-to-spectrogram/README.md b/spaces/fffiloni/audio-to-spectrogram/README.md deleted file mode 100644 index 2db9c0dc91778a80d3fbb76685ec46c1d433f86e..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/audio-to-spectrogram/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Audio To Spectrogram -emoji: 🦀 -colorFrom: red -colorTo: purple -python_version: 3.10.12 -sdk: gradio -sdk_version: 3.47.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/fffiloni/coqui-bark-voice-cloning-docker/share_btn.py b/spaces/fffiloni/coqui-bark-voice-cloning-docker/share_btn.py deleted file mode 100644 index b5e2a51361298584f65592c987e7d85a90bd99c8..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/coqui-bark-voice-cloning-docker/share_btn.py +++ /dev/null @@ -1,75 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - - async function getVideoBlobFile(videoEL){ - const res = await fetch(videoEL.src); - const blob = await res.blob(); - const videoId = Date.now() % 200; - const fileName = `ms-image2video-${{videoId}}.mp4`; - const videoBlob = new File([blob], fileName, { type: 'video/mp4' }); - console.log(videoBlob); - return videoBlob; - } - - const gradioEl = document.querySelector("gradio-app").shadowRoot || document.querySelector('body > gradio-app'); - const outputVideo = gradioEl.querySelector('#voice-video-out 
video'); - const ttsprompt = gradioEl.querySelector('#tts-prompt textarea').value; - const charaName = gradioEl.querySelector('#character-name textarea').value; - const voiceDesc = gradioEl.querySelector('#voice-description textarea').value; - - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - if(!outputVideo){ - return; - }; - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - - - const videoOutFile = await getVideoBlobFile(outputVideo); - const dataOutputVid = await uploadFile(videoOutFile); - - const descriptionMd = ` -#### Character name: -${charaName} -#### Voice description: -${voiceDesc} -#### TTS Prompt: -${ttsprompt} -#### Audio speech generated: -${dataOutputVid} -`; - const params = new URLSearchParams({ - title: "Please provide a title :)", - description: descriptionMd, - }); - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/fffiloni/coqui-bark-voice-cloning-docker/discussions/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/flax-community/Multilingual-VQA/sections/acknowledgements.md b/spaces/flax-community/Multilingual-VQA/sections/acknowledgements.md deleted file mode 100644 index 37ce46a41e5e26a7ffee64819d16a483d191923d..0000000000000000000000000000000000000000 --- a/spaces/flax-community/Multilingual-VQA/sections/acknowledgements.md +++ /dev/null @@ -1,5 +0,0 @@ -We thank [Nilakshan Kunananthaseelan](https://huggingface.co/knilakshan20) for helping us whenever he could get a chance. We also thank [Abheesht Sharma](https://huggingface.co/abheesht) for helping in the discussions in the initial phases. [Luke Melas](https://github.com/lukemelas) helped us get the CC-12M data on our TPU-VMs and we are very grateful to him. - -This project would not be possible without the help of [Patrick](https://huggingface.co/patrickvonplaten) and [Suraj](https://huggingface.co/valhalla) who met with us and helped review our approach and guided us throughout the project. - -Lastly, we thank the Google Team for helping answer our queries on the Slack channel, and for providing us TPU-VMs. \ No newline at end of file diff --git a/spaces/flowers-team/SocialAISchool/Dockerfile b/spaces/flowers-team/SocialAISchool/Dockerfile deleted file mode 100644 index 62fe3bbdc0f627a541d7b0a12d8596a7dc250cb3..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/SocialAISchool/Dockerfile +++ /dev/null @@ -1,26 +0,0 @@ -# Dockerfile for the Huggingface spaces Demo -FROM python:3.7 - - -WORKDIR /code - -# Install graphviz -RUN apt-get update && \ - apt-get install -y graphviz && \ - apt-get clean && \ - rm -rf /var/lib/apt/lists/* - -COPY . . - -RUN chmod -R 777 /code - -RUN pip install --upgrade -r web_demo/requirements.txt -RUN pip install -e gym-minigrid - -#EXPOSE 7860 - -CMD ["python", "-m", "web_demo.app"] - - -# docker build -t sai_demo -f web_demo/Dockerfile . 
-# docker run -p 7860:7860 sai_demo diff --git a/spaces/freddyaboulton/gradio_folium/README.md b/spaces/freddyaboulton/gradio_folium/README.md deleted file mode 100644 index ce7b927c923046bf0ec568843d3f539c4c56b01c..0000000000000000000000000000000000000000 --- a/spaces/freddyaboulton/gradio_folium/README.md +++ /dev/null @@ -1,10 +0,0 @@ - ---- -tags: [gradio-custom-component] -title: gradio_folium V0.0.3 -colorFrom: yellow -colorTo: purple -sdk: docker -pinned: false -license: apache-2.0 ---- diff --git a/spaces/furqankassa/Human.Feedback.Dynamic.JSONL.Dataset.Download/app.py b/spaces/furqankassa/Human.Feedback.Dynamic.JSONL.Dataset.Download/app.py deleted file mode 100644 index 666fa01bb1221e6b4fe16cf3d51fd9aac9f285ef..0000000000000000000000000000000000000000 --- a/spaces/furqankassa/Human.Feedback.Dynamic.JSONL.Dataset.Download/app.py +++ /dev/null @@ -1,89 +0,0 @@ -import json -import os -import base64 -import streamlit as st - -FIELDS = [ - "CodeValue", - "CodeType", - "Context", - "Question", - "AnswerText", - "UpVoteCount", - "DownVoteCount", - "VoteComment", -] - -IO_PATTERN = "*.jsonl" - - -def read_jsonl_file(file_path): - if not os.path.exists(file_path): - return [] - with open(file_path, "r") as f: - lines = f.readlines() - records = [json.loads(line) for line in lines] - return records - - -def write_jsonl_file(file_path, records): - with open(file_path, "w") as f: - for record in records: - f.write(json.dumps(record) + "\n") - - -def list_files(): - return [f for f in os.listdir() if f.endswith(".jsonl")] - - -def download_link(file_path): - with open(file_path, "rb") as f: - contents = f.read() - b64 = base64.b64encode(contents).decode() - href = f'Download' - return href - - -def main(): - jsonl_files = list_files() - - if not jsonl_files: - st.warning("No JSONL files found. 
Creating new file.") - jsonl_files.append("data.jsonl") - write_jsonl_file("data.jsonl", []) - - selected_file = st.sidebar.text_input("Enter file name", value=jsonl_files[0]) - if selected_file != jsonl_files[0]: - os.rename(jsonl_files[0], selected_file) - jsonl_files[0] = selected_file - - st.sidebar.write("JSONL files:") - selected_file_index = st.sidebar.selectbox("", range(len(jsonl_files))) - for i, file_name in enumerate(jsonl_files): - if i == selected_file_index: - selected_file = file_name - st.sidebar.write(f"> {file_name}") - else: - st.sidebar.write(file_name) - - st.sidebar.markdown(download_link(selected_file), unsafe_allow_html=True) - - records = read_jsonl_file(selected_file) - - for field in FIELDS: - value = st.text_input(field, key=field) - st.write(f"{field}: {value}") - - if st.button("Add Record"): - record = {field: st.session_state[field] for field in FIELDS} - records.append(record) - write_jsonl_file(selected_file, records) - st.success("Record added!") - - st.write(f"Current contents of {selected_file}:") - for record in records: - st.write(record) - - -if __name__ == "__main__": - main() diff --git a/spaces/gagan3012/T5-Summarization/setup.py b/spaces/gagan3012/T5-Summarization/setup.py deleted file mode 100644 index 236f0ee8a2c50f39488b7e789391fba111d3cad1..0000000000000000000000000000000000000000 --- a/spaces/gagan3012/T5-Summarization/setup.py +++ /dev/null @@ -1,7 +0,0 @@ -from setuptools import find_packages, setup - -setup( - name='src', - packages=find_packages(), - version='0.1.0', -) diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Download Chal Chala Chal 3 in Hindi 720p Everything You Need to Know About the Movie.md b/spaces/gotiQspiryo/whisper-ui/examples/Download Chal Chala Chal 3 in Hindi 720p Everything You Need to Know About the Movie.md deleted file mode 100644 index 25ef75804428f895bb61198aa9bd4aa17cb39f16..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Download Chal Chala Chal 3 in Hindi 720p Everything You Need to Know About the Movie.md +++ /dev/null @@ -1,7 +0,0 @@ -

      download Chal Chala Chal 3 in hindi 720p


DOWNLOAD: https://byltly.com/2uKv3v




Isaidubb is a popular website for leaking Hollywood, Bollywood, South Indian, web series, TV shows, and other dubbed movies for free, so here we can see the impact that downloading movies from such torrent websites has. There are many options on these sites, like Filmyzilla Marathi movie downloads on Isaidubb in HD prints of 300MB, 480p, 720p, and 1080p.


Mp4moviez is another popular website for leaking Hollywood, Bollywood, South Indian, web series, TV shows, and other dubbed movies for free. There are many options on these sites, like HD prints and Marathi movie downloads from Mp4moviez in 300MB, 480p, 720p, and 1080p.

      \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Ice Age Collision Course (English) 2 Tamil Dubbed Movie Torrent Download - Join the Journey of Manny Sid Diego and More.md b/spaces/gotiQspiryo/whisper-ui/examples/Ice Age Collision Course (English) 2 Tamil Dubbed Movie Torrent Download - Join the Journey of Manny Sid Diego and More.md deleted file mode 100644 index 96e87b8a6af79894e103a91d8c9413249f29929e..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Ice Age Collision Course (English) 2 Tamil Dubbed Movie Torrent Download - Join the Journey of Manny Sid Diego and More.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Ice Age: Collision Course (English) 2 Tamil Dubbed Movie Torrent Download


Download File: https://urlgoal.com/2uyMib




      diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Keygen HOTCrackforDETROITDIESELDIAGNOSTICLINK64.md b/spaces/gotiQspiryo/whisper-ui/examples/Keygen HOTCrackforDETROITDIESELDIAGNOSTICLINK64.md deleted file mode 100644 index f299b4556d39c2117d6415a13e761e20b419e903..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Keygen HOTCrackforDETROITDIESELDIAGNOSTICLINK64.md +++ /dev/null @@ -1,20 +0,0 @@ -

      keygenCrackforDETROITDIESELDIAGNOSTICLINK64


Download: https://urlgoal.com/2uyLHS



Run this application, and on the "Add new Diagnostic link" screen, create a new diagnostic link.

      diff --git a/spaces/gradio/HuBERT/fairseq/clib/libbase/balanced_assignment.cpp b/spaces/gradio/HuBERT/fairseq/clib/libbase/balanced_assignment.cpp deleted file mode 100644 index 296f03b6aeb87a11db92e5342d8dab90f1fc3867..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/clib/libbase/balanced_assignment.cpp +++ /dev/null @@ -1,95 +0,0 @@ -/** - * Copyright 2017-present, Facebook, Inc. - * All rights reserved. - * - * This source code is licensed under the license found in the - * LICENSE file in the root directory of this source tree. - */ - -/* -C++ code for solving the linear assignment problem. -Based on the Auction Algorithm from https://dspace.mit.edu/bitstream/handle/1721.1/3265/P-2108-26912652.pdf and the implementation from: -https://github.com/bkj/auction-lap -Adapted to be more efficient when each worker is looking for k jobs instead of 1. -*/ -#include -#include -using namespace torch::indexing; -torch::Tensor balanced_assignment(torch::Tensor job_and_worker_to_score) { - int max_iterations = 100; - torch::Tensor epsilon = (job_and_worker_to_score.max() - job_and_worker_to_score.min()) / 50; - epsilon.clamp_min_(1e-04); - torch::Tensor worker_and_job_to_score = job_and_worker_to_score.detach().transpose(0,1).contiguous(); - int num_workers = worker_and_job_to_score.size(0); - int num_jobs = worker_and_job_to_score.size(1); - auto device = worker_and_job_to_score.device(); - int jobs_per_worker = num_jobs / num_workers; - torch::Tensor value = worker_and_job_to_score.clone(); - int counter = 0; - torch::Tensor max_value = worker_and_job_to_score.max(); - - torch::Tensor bid_indices; - torch::Tensor cost = worker_and_job_to_score.new_zeros({1, num_jobs}); - torch::Tensor bids = worker_and_job_to_score.new_empty({num_workers, num_jobs}); - torch::Tensor bid_increments = worker_and_job_to_score.new_empty({num_workers, jobs_per_worker}); - torch::Tensor top_values = worker_and_job_to_score.new_empty({num_workers, jobs_per_worker + 1}); - torch::Tensor high_bids = worker_and_job_to_score.new_empty({num_jobs}); - - torch::Tensor top_index = top_values.to(torch::kLong); - torch::Tensor high_bidders = top_index.new_empty({num_jobs}); - torch::Tensor have_bids = high_bidders.to(torch::kBool); - torch::Tensor jobs_indices = torch::arange({num_jobs}, torch::dtype(torch::kLong).device(device)); - torch::Tensor true_tensor = torch::ones({1}, torch::dtype(torch::kBool).device(device)); - - while (true) { - bids.zero_(); - torch::topk_out(top_values, top_index, value, jobs_per_worker + 1, 1); - - // Each worker bids the difference in value between that job and the k+1th job - torch::sub_out(bid_increments, - top_values.index({Slice(None, None), Slice(0, jobs_per_worker)}), - top_values.index({Slice(None, None), jobs_per_worker}).unsqueeze(1)); - - bid_increments.add_(epsilon); - bids.scatter_(1, - top_index.index({Slice(None, None),Slice(0, jobs_per_worker)}), - bid_increments); - - if (counter < max_iterations && counter > 0) { - // Put in a minimal bid to retain items from the last round if no-one else bids for them this round - bids.view(-1).index_put_({bid_indices}, epsilon); - } - - // Find the highest bidding worker per job - torch::max_out(high_bids, high_bidders, bids, 0); - torch::gt_out(have_bids, high_bids, 0); - - if (have_bids.all().item()) { - // All jobs were bid for - break; - } - - // Make popular items more expensive - cost.add_(high_bids); - torch::sub_out(value, worker_and_job_to_score, cost); - - bid_indices = ((high_bidders * num_jobs) + 
jobs_indices).index({have_bids}); - - if (counter < max_iterations) { - // Make sure that this item will be in the winning worker's top-k next time. - value.view(-1).index_put_({bid_indices}, max_value); - } - else { - // Suboptimal approximation that converges quickly from current solution - value.view(-1).index_put_({bid_indices}, worker_and_job_to_score.view(-1).index({bid_indices})); - } - - counter += 1; - } - - return top_index.index({Slice(None, None), Slice(0, jobs_per_worker)}).reshape(-1); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("balanced_assignment", &balanced_assignment, "Balanced Assignment"); -} diff --git a/spaces/gradio/automatic-speech-recognition/app.py b/spaces/gradio/automatic-speech-recognition/app.py deleted file mode 100644 index b18231f4c6fca3e51070ccafa222b75254f1c652..0000000000000000000000000000000000000000 --- a/spaces/gradio/automatic-speech-recognition/app.py +++ /dev/null @@ -1,17 +0,0 @@ -import gradio as gr -import os - -# save your HF API token from https:/hf.co/settings/tokens as an env variable to avoid rate limiting -auth_token = os.getenv("auth_token") - -# automatically load the interface from a HF model -# you can remove the api_key parameter if you don't care about rate limiting. -demo = gr.Interface.load( - "huggingface/facebook/wav2vec2-base-960h", - title="Speech-to-text", - inputs="mic", - description="Let me try to guess what you're saying!", - api_key=auth_token -) - -demo.launch() diff --git a/spaces/guardiancc/video-face-swap/roop/processors/frame/core.py b/spaces/guardiancc/video-face-swap/roop/processors/frame/core.py deleted file mode 100644 index c225f9de483a2914a98392ce9de5bd03f2013a2d..0000000000000000000000000000000000000000 --- a/spaces/guardiancc/video-face-swap/roop/processors/frame/core.py +++ /dev/null @@ -1,88 +0,0 @@ -import os -import importlib -import psutil -from concurrent.futures import ThreadPoolExecutor, as_completed -from queue import Queue -from types import ModuleType -from typing import Any, List, Callable -from tqdm import tqdm - -import roop - -FRAME_PROCESSORS_MODULES: List[ModuleType] = [] -FRAME_PROCESSORS_INTERFACE = [ - 'pre_check', - 'pre_start', - 'process_frame', - 'process_frames', - 'process_image', - 'process_video', - 'post_process' -] - - -def load_frame_processor_module(frame_processor: str) -> Any: - try: - frame_processor_module = importlib.import_module(f'roop.processors.frame.{frame_processor}') - for method_name in FRAME_PROCESSORS_INTERFACE: - if not hasattr(frame_processor_module, method_name): - raise NotImplementedError - except (ImportError, NotImplementedError): - quit(f'Frame processor {frame_processor} crashed.') - return frame_processor_module - - -def get_frame_processors_modules(frame_processors: List[str]) -> List[ModuleType]: - global FRAME_PROCESSORS_MODULES - - if not FRAME_PROCESSORS_MODULES: - for frame_processor in frame_processors: - frame_processor_module = load_frame_processor_module(frame_processor) - FRAME_PROCESSORS_MODULES.append(frame_processor_module) - return FRAME_PROCESSORS_MODULES - - -def multi_process_frame(source_path: str, temp_frame_paths: List[str], process_frames: Callable[[str, List[str], Any], None], update: Callable[[], None]) -> None: - with ThreadPoolExecutor(max_workers=roop.globals.execution_threads) as executor: - futures = [] - queue = create_queue(temp_frame_paths) - queue_per_future = len(temp_frame_paths) // roop.globals.execution_threads - while not queue.empty(): - future = executor.submit(process_frames, source_path, 
pick_queue(queue, queue_per_future), update) - futures.append(future) - for future in as_completed(futures): - future.result() - - -def create_queue(temp_frame_paths: List[str]) -> Queue[str]: - queue: Queue[str] = Queue() - for frame_path in temp_frame_paths: - queue.put(frame_path) - return queue - - -def pick_queue(queue: Queue[str], queue_per_future: int) -> List[str]: - queues = [] - for _ in range(queue_per_future): - if not queue.empty(): - queues.append(queue.get()) - return queues - - -def process_video(source_path: str, frame_paths: list[str], process_frames: Callable[[str, List[str], Any], None]) -> None: - progress_bar_format = '{l_bar}{bar}| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}{postfix}]' - total = len(frame_paths) - with tqdm(total=total, desc='Processing', unit='frame', dynamic_ncols=True, bar_format=progress_bar_format) as progress: - multi_process_frame(source_path, frame_paths, process_frames, lambda: update_progress(progress)) - - -def update_progress(progress: Any = None) -> None: - process = psutil.Process(os.getpid()) - memory_usage = process.memory_info().rss / 1024 / 1024 / 1024 - progress.set_postfix({ - 'memory_usage': '{:.2f}'.format(memory_usage).zfill(5) + 'GB', - 'execution_providers': roop.globals.execution_providers, - 'execution_threads': roop.globals.execution_threads - }) - progress.refresh() - progress.update(1) diff --git a/spaces/haakohu/deep_privacy2/sg3_torch_utils/ops/upfirdn2d.py b/spaces/haakohu/deep_privacy2/sg3_torch_utils/ops/upfirdn2d.py deleted file mode 100644 index a0bbd22d245481e7c5a19315e5cb3242b1278787..0000000000000000000000000000000000000000 --- a/spaces/haakohu/deep_privacy2/sg3_torch_utils/ops/upfirdn2d.py +++ /dev/null @@ -1,388 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Custom PyTorch ops for efficient resampling of 2D images.""" - -import os -import warnings -import numpy as np -import torch -import traceback - -from .. import custom_ops -from .. import misc -from . import conv2d_gradfix -from torch.cuda.amp import custom_bwd, custom_fwd - -#---------------------------------------------------------------------------- - -_inited = False -_plugin = None -enabled = False - -def _init(): - global _inited, _plugin - if not _inited: - sources = ['upfirdn2d.cpp', 'upfirdn2d.cu'] - sources = [os.path.join(os.path.dirname(__file__), s) for s in sources] - try: - _plugin = custom_ops.get_plugin('upfirdn2d_plugin', sources=sources, extra_cuda_cflags=['--use_fast_math']) - except: - warnings.warn('Failed to build CUDA kernels for upfirdn2d. Falling back to slow reference implementation. 
Details:\n\n' + traceback.format_exc()) - return _plugin is not None - -def _parse_scaling(scaling): - if isinstance(scaling, int): - scaling = [scaling, scaling] - assert isinstance(scaling, (list, tuple)) - assert all(isinstance(x, int) for x in scaling) - sx, sy = scaling - assert sx >= 1 and sy >= 1 - return sx, sy - -def _parse_padding(padding): - if isinstance(padding, int): - padding = [padding, padding] - assert isinstance(padding, (list, tuple)) - assert all(isinstance(x, int) for x in padding) - if len(padding) == 2: - padx, pady = padding - padding = [padx, padx, pady, pady] - padx0, padx1, pady0, pady1 = padding - return padx0, padx1, pady0, pady1 - -def _get_filter_size(f): - if f is None: - return 1, 1 - assert isinstance(f, torch.Tensor) and f.ndim in [1, 2] - fw = f.shape[-1] - fh = f.shape[0] - with misc.suppress_tracer_warnings(): - fw = int(fw) - fh = int(fh) - misc.assert_shape(f, [fh, fw][:f.ndim]) - assert fw >= 1 and fh >= 1 - return fw, fh - -#---------------------------------------------------------------------------- - -def setup_filter(f, device=torch.device('cpu'), normalize=True, flip_filter=False, gain=1, separable=None): - r"""Convenience function to setup 2D FIR filter for `upfirdn2d()`. - - Args: - f: Torch tensor, numpy array, or python list of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), - `[]` (impulse), or - `None` (identity). - device: Result device (default: cpu). - normalize: Normalize the filter so that it retains the magnitude - for constant input signal (DC)? (default: True). - flip_filter: Flip the filter? (default: False). - gain: Overall scaling factor for signal magnitude (default: 1). - separable: Return a separable filter? (default: select automatically). - - Returns: - Float32 tensor of the shape - `[filter_height, filter_width]` (non-separable) or - `[filter_taps]` (separable). - """ - # Validate. - if f is None: - f = 1 - f = torch.as_tensor(f, dtype=torch.float32) - assert f.ndim in [0, 1, 2] - assert f.numel() > 0 - if f.ndim == 0: - f = f[np.newaxis] - - # Separable? - if separable is None: - separable = (f.ndim == 1 and f.numel() >= 8) - if f.ndim == 1 and not separable: - f = f.ger(f) - assert f.ndim == (1 if separable else 2) - - # Apply normalize, flip, gain, and device. - if normalize: - f /= f.sum() - if flip_filter: - f = f.flip(list(range(f.ndim))) - f = f * (gain ** (f.ndim / 2)) - f = f.to(device=device) - return f - -#---------------------------------------------------------------------------- - -def upfirdn2d(x, f, up=1, down=1, padding=0, flip_filter=False, gain=1, impl='cuda'): - r"""Pad, upsample, filter, and downsample a batch of 2D images. - - Performs the following sequence of operations for each channel: - - 1. Upsample the image by inserting N-1 zeros after each pixel (`up`). - - 2. Pad the image with the specified number of zeros on each side (`padding`). - Negative padding corresponds to cropping the image. - - 3. Convolve the image with the specified 2D FIR filter (`f`), shrinking it - so that the footprint of all output pixels lies within the input image. - - 4. Downsample the image by keeping every Nth pixel (`down`). - - This sequence of operations bears close resemblance to scipy.signal.upfirdn(). - The fused op is considerably more efficient than performing the same calculation - using standard PyTorch ops. It supports gradients of arbitrary order. 
- - Args: - x: Float32/float64/float16 input tensor of the shape - `[batch_size, num_channels, in_height, in_width]`. - f: Float32 FIR filter of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), or - `None` (identity). - up: Integer upsampling factor. Can be a single int or a list/tuple - `[x, y]` (default: 1). - down: Integer downsampling factor. Can be a single int or a list/tuple - `[x, y]` (default: 1). - padding: Padding with respect to the upsampled image. Can be a single number - or a list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]` - (default: 0). - flip_filter: False = convolution, True = correlation (default: False). - gain: Overall scaling factor for signal magnitude (default: 1). - impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`). - - Returns: - Tensor of the shape `[batch_size, num_channels, out_height, out_width]`. - """ - assert isinstance(x, torch.Tensor) - assert impl in ['ref', 'cuda'] - if impl == 'cuda' and x.device.type == 'cuda' and enabled and _init(): - return _upfirdn2d_cuda(up=up, down=down, padding=padding, flip_filter=flip_filter, gain=gain).apply(x, f) - return _upfirdn2d_ref(x, f, up=up, down=down, padding=padding, flip_filter=flip_filter, gain=gain) - -#---------------------------------------------------------------------------- - -@misc.profiled_function -def _upfirdn2d_ref(x, f, up=1, down=1, padding=0, flip_filter=False, gain=1): - """Slow reference implementation of `upfirdn2d()` using standard PyTorch ops. - """ - # Validate arguments. - assert isinstance(x, torch.Tensor) and x.ndim == 4 - if f is None: - f = torch.ones([1, 1], dtype=torch.float32, device=x.device) - assert isinstance(f, torch.Tensor) and f.ndim in [1, 2] - assert f.dtype == torch.float32 and not f.requires_grad - batch_size, num_channels, in_height, in_width = x.shape - upx, upy = _parse_scaling(up) - downx, downy = _parse_scaling(down) - padx0, padx1, pady0, pady1 = _parse_padding(padding) - - # Upsample by inserting zeros. - x = x.reshape([batch_size, num_channels, in_height, 1, in_width, 1]) - x = torch.nn.functional.pad(x, [0, upx - 1, 0, 0, 0, upy - 1]) - x = x.reshape([batch_size, num_channels, in_height * upy, in_width * upx]) - - # Pad or crop. - x = torch.nn.functional.pad(x, [max(padx0, 0), max(padx1, 0), max(pady0, 0), max(pady1, 0)]) - x = x[:, :, max(-pady0, 0) : x.shape[2] - max(-pady1, 0), max(-padx0, 0) : x.shape[3] - max(-padx1, 0)] - - # Setup filter. - f = f * (gain ** (f.ndim / 2)) - f = f.to(x.dtype) - if not flip_filter: - f = f.flip(list(range(f.ndim))) - - # Convolve with the filter. - f = f[np.newaxis, np.newaxis].repeat([num_channels, 1] + [1] * f.ndim) - if f.ndim == 4: - x = conv2d_gradfix.conv2d(input=x, weight=f, groups=num_channels) - else: - x = conv2d_gradfix.conv2d(input=x, weight=f.unsqueeze(2), groups=num_channels) - x = conv2d_gradfix.conv2d(input=x, weight=f.unsqueeze(3), groups=num_channels) - - # Downsample by throwing away pixels. - x = x[:, :, ::downy, ::downx] - return x - -#---------------------------------------------------------------------------- - -_upfirdn2d_cuda_cache = dict() - -def _upfirdn2d_cuda(up=1, down=1, padding=0, flip_filter=False, gain=1): - """Fast CUDA implementation of `upfirdn2d()` using custom ops. - """ - # Parse arguments. - upx, upy = _parse_scaling(up) - downx, downy = _parse_scaling(down) - padx0, padx1, pady0, pady1 = _parse_padding(padding) - - # Lookup from cache. 
- key = (upx, upy, downx, downy, padx0, padx1, pady0, pady1, flip_filter, gain) - if key in _upfirdn2d_cuda_cache: - return _upfirdn2d_cuda_cache[key] - - # Forward op. - class Upfirdn2dCuda(torch.autograd.Function): - @staticmethod - @custom_fwd(cast_inputs=torch.float32) - def forward(ctx, x, f): # pylint: disable=arguments-differ - assert isinstance(x, torch.Tensor) and x.ndim == 4 - if f is None: - f = torch.ones([1, 1], dtype=torch.float32, device=x.device) - assert isinstance(f, torch.Tensor) and f.ndim in [1, 2] - y = x - if f.ndim == 2: - y = _plugin.upfirdn2d(y, f, upx, upy, downx, downy, padx0, padx1, pady0, pady1, flip_filter, gain) - else: - y = _plugin.upfirdn2d(y, f.unsqueeze(0), upx, 1, downx, 1, padx0, padx1, 0, 0, flip_filter, np.sqrt(gain)) - y = _plugin.upfirdn2d(y, f.unsqueeze(1), 1, upy, 1, downy, 0, 0, pady0, pady1, flip_filter, np.sqrt(gain)) - ctx.save_for_backward(f) - ctx.x_shape = x.shape - return y - - @staticmethod - @custom_bwd - def backward(ctx, dy): # pylint: disable=arguments-differ - f, = ctx.saved_tensors - _, _, ih, iw = ctx.x_shape - _, _, oh, ow = dy.shape - fw, fh = _get_filter_size(f) - p = [ - fw - padx0 - 1, - iw * upx - ow * downx + padx0 - upx + 1, - fh - pady0 - 1, - ih * upy - oh * downy + pady0 - upy + 1, - ] - dx = None - df = None - - if ctx.needs_input_grad[0]: - dx = _upfirdn2d_cuda(up=down, down=up, padding=p, flip_filter=(not flip_filter), gain=gain).apply(dy, f) - - assert not ctx.needs_input_grad[1] - return dx, df - - # Add to cache. - _upfirdn2d_cuda_cache[key] = Upfirdn2dCuda - return Upfirdn2dCuda - -#---------------------------------------------------------------------------- - -def filter2d(x, f, padding=0, flip_filter=False, gain=1, impl='cuda'): - r"""Filter a batch of 2D images using the given 2D FIR filter. - - By default, the result is padded so that its shape matches the input. - User-specified padding is applied on top of that, with negative values - indicating cropping. Pixels outside the image are assumed to be zero. - - Args: - x: Float32/float64/float16 input tensor of the shape - `[batch_size, num_channels, in_height, in_width]`. - f: Float32 FIR filter of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), or - `None` (identity). - padding: Padding with respect to the output. Can be a single number or a - list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]` - (default: 0). - flip_filter: False = convolution, True = correlation (default: False). - gain: Overall scaling factor for signal magnitude (default: 1). - impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`). - - Returns: - Tensor of the shape `[batch_size, num_channels, out_height, out_width]`. - """ - padx0, padx1, pady0, pady1 = _parse_padding(padding) - fw, fh = _get_filter_size(f) - p = [ - padx0 + fw // 2, - padx1 + (fw - 1) // 2, - pady0 + fh // 2, - pady1 + (fh - 1) // 2, - ] - return upfirdn2d(x, f, padding=p, flip_filter=flip_filter, gain=gain, impl=impl) - -#---------------------------------------------------------------------------- - -def upsample2d(x, f, up=2, padding=0, flip_filter=False, gain=1, impl='cuda'): - r"""Upsample a batch of 2D images using the given 2D FIR filter. - - By default, the result is padded so that its shape is a multiple of the input. - User-specified padding is applied on top of that, with negative values - indicating cropping. Pixels outside the image are assumed to be zero. 
- - Args: - x: Float32/float64/float16 input tensor of the shape - `[batch_size, num_channels, in_height, in_width]`. - f: Float32 FIR filter of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), or - `None` (identity). - up: Integer upsampling factor. Can be a single int or a list/tuple - `[x, y]` (default: 1). - padding: Padding with respect to the output. Can be a single number or a - list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]` - (default: 0). - flip_filter: False = convolution, True = correlation (default: False). - gain: Overall scaling factor for signal magnitude (default: 1). - impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`). - - Returns: - Tensor of the shape `[batch_size, num_channels, out_height, out_width]`. - """ - upx, upy = _parse_scaling(up) - padx0, padx1, pady0, pady1 = _parse_padding(padding) - fw, fh = _get_filter_size(f) - p = [ - padx0 + (fw + upx - 1) // 2, - padx1 + (fw - upx) // 2, - pady0 + (fh + upy - 1) // 2, - pady1 + (fh - upy) // 2, - ] - return upfirdn2d(x, f, up=up, padding=p, flip_filter=flip_filter, gain=gain*upx*upy, impl=impl) - -#---------------------------------------------------------------------------- - -def downsample2d(x, f, down=2, padding=0, flip_filter=False, gain=1, impl='cuda'): - r"""Downsample a batch of 2D images using the given 2D FIR filter. - - By default, the result is padded so that its shape is a fraction of the input. - User-specified padding is applied on top of that, with negative values - indicating cropping. Pixels outside the image are assumed to be zero. - - Args: - x: Float32/float64/float16 input tensor of the shape - `[batch_size, num_channels, in_height, in_width]`. - f: Float32 FIR filter of the shape - `[filter_height, filter_width]` (non-separable), - `[filter_taps]` (separable), or - `None` (identity). - down: Integer downsampling factor. Can be a single int or a list/tuple - `[x, y]` (default: 1). - padding: Padding with respect to the input. Can be a single number or a - list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]` - (default: 0). - flip_filter: False = convolution, True = correlation (default: False). - gain: Overall scaling factor for signal magnitude (default: 1). - impl: Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`). - - Returns: - Tensor of the shape `[batch_size, num_channels, out_height, out_width]`. - """ - downx, downy = _parse_scaling(down) - padx0, padx1, pady0, pady1 = _parse_padding(padding) - fw, fh = _get_filter_size(f) - p = [ - padx0 + (fw - downx + 1) // 2, - padx1 + (fw - downx) // 2, - pady0 + (fh - downy + 1) // 2, - pady1 + (fh - downy) // 2, - ] - return upfirdn2d(x, f, down=down, padding=p, flip_filter=flip_filter, gain=gain, impl=impl) - -#---------------------------------------------------------------------------- diff --git a/spaces/hamacojr/CAT-Seg/cat_seg/data/dataset_mappers/detr_panoptic_dataset_mapper.py b/spaces/hamacojr/CAT-Seg/cat_seg/data/dataset_mappers/detr_panoptic_dataset_mapper.py deleted file mode 100644 index 4a296f2fbbd24b190b312b464ce2d4c1957b221c..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/CAT-Seg/cat_seg/data/dataset_mappers/detr_panoptic_dataset_mapper.py +++ /dev/null @@ -1,180 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# Modified by Bowen Cheng from https://github.com/facebookresearch/detr/blob/master/d2/detr/dataset_mapper.py -import copy -import logging - -import numpy as np -import torch - -from detectron2.config import configurable -from detectron2.data import detection_utils as utils -from detectron2.data import transforms as T -from detectron2.data.transforms import TransformGen -from detectron2.structures import BitMasks, Instances - -__all__ = ["DETRPanopticDatasetMapper"] - - -def build_transform_gen(cfg, is_train): - """ - Create a list of :class:`TransformGen` from config. - Returns: - list[TransformGen] - """ - if is_train: - min_size = cfg.INPUT.MIN_SIZE_TRAIN - max_size = cfg.INPUT.MAX_SIZE_TRAIN - sample_style = cfg.INPUT.MIN_SIZE_TRAIN_SAMPLING - else: - min_size = cfg.INPUT.MIN_SIZE_TEST - max_size = cfg.INPUT.MAX_SIZE_TEST - sample_style = "choice" - if sample_style == "range": - assert len(min_size) == 2, "more than 2 ({}) min_size(s) are provided for ranges".format( - len(min_size) - ) - - logger = logging.getLogger(__name__) - tfm_gens = [] - if is_train: - tfm_gens.append(T.RandomFlip()) - tfm_gens.append(T.ResizeShortestEdge(min_size, max_size, sample_style)) - if is_train: - logger.info("TransformGens used in training: " + str(tfm_gens)) - return tfm_gens - - -# This is specifically designed for the COCO dataset. -class DETRPanopticDatasetMapper: - """ - A callable which takes a dataset dict in Detectron2 Dataset format, - and map it into a format used by MaskFormer. - - This dataset mapper applies the same transformation as DETR for COCO panoptic segmentation. - - The callable currently does the following: - - 1. Read the image from "file_name" - 2. Applies geometric transforms to the image and annotation - 3. Find and applies suitable cropping to the image and annotation - 4. Prepare image and annotation to Tensors - """ - - @configurable - def __init__( - self, - is_train=True, - *, - crop_gen, - tfm_gens, - image_format, - ): - """ - NOTE: this interface is experimental. - Args: - is_train: for training or inference - augmentations: a list of augmentations or deterministic transforms to apply - crop_gen: crop augmentation - tfm_gens: data augmentation - image_format: an image format supported by :func:`detection_utils.read_image`. - """ - self.crop_gen = crop_gen - self.tfm_gens = tfm_gens - logging.getLogger(__name__).info( - "[DETRPanopticDatasetMapper] Full TransformGens used in training: {}, crop: {}".format( - str(self.tfm_gens), str(self.crop_gen) - ) - ) - - self.img_format = image_format - self.is_train = is_train - - @classmethod - def from_config(cls, cfg, is_train=True): - # Build augmentation - if cfg.INPUT.CROP.ENABLED and is_train: - crop_gen = [ - T.ResizeShortestEdge([400, 500, 600], sample_style="choice"), - T.RandomCrop(cfg.INPUT.CROP.TYPE, cfg.INPUT.CROP.SIZE), - ] - else: - crop_gen = None - - tfm_gens = build_transform_gen(cfg, is_train) - - ret = { - "is_train": is_train, - "crop_gen": crop_gen, - "tfm_gens": tfm_gens, - "image_format": cfg.INPUT.FORMAT, - } - return ret - - def __call__(self, dataset_dict): - """ - Args: - dataset_dict (dict): Metadata of one image, in Detectron2 Dataset format. 
- - Returns: - dict: a format that builtin models in detectron2 accept - """ - dataset_dict = copy.deepcopy(dataset_dict) # it will be modified by code below - image = utils.read_image(dataset_dict["file_name"], format=self.img_format) - utils.check_image_size(dataset_dict, image) - - if self.crop_gen is None: - image, transforms = T.apply_transform_gens(self.tfm_gens, image) - else: - if np.random.rand() > 0.5: - image, transforms = T.apply_transform_gens(self.tfm_gens, image) - else: - image, transforms = T.apply_transform_gens( - self.tfm_gens[:-1] + self.crop_gen + self.tfm_gens[-1:], image - ) - - image_shape = image.shape[:2] # h, w - - # Pytorch's dataloader is efficient on torch.Tensor due to shared-memory, - # but not efficient on large generic data structures due to the use of pickle & mp.Queue. - # Therefore it's important to use torch.Tensor. - dataset_dict["image"] = torch.as_tensor(np.ascontiguousarray(image.transpose(2, 0, 1))) - - if not self.is_train: - # USER: Modify this if you want to keep them for some reason. - dataset_dict.pop("annotations", None) - return dataset_dict - - if "pan_seg_file_name" in dataset_dict: - pan_seg_gt = utils.read_image(dataset_dict.pop("pan_seg_file_name"), "RGB") - segments_info = dataset_dict["segments_info"] - - # apply the same transformation to panoptic segmentation - pan_seg_gt = transforms.apply_segmentation(pan_seg_gt) - - from panopticapi.utils import rgb2id - - pan_seg_gt = rgb2id(pan_seg_gt) - - instances = Instances(image_shape) - classes = [] - masks = [] - for segment_info in segments_info: - class_id = segment_info["category_id"] - if not segment_info["iscrowd"]: - classes.append(class_id) - masks.append(pan_seg_gt == segment_info["id"]) - - classes = np.array(classes) - instances.gt_classes = torch.tensor(classes, dtype=torch.int64) - if len(masks) == 0: - # Some image does not have annotation (all ignored) - instances.gt_masks = torch.zeros((0, pan_seg_gt.shape[-2], pan_seg_gt.shape[-1])) - else: - masks = BitMasks( - torch.stack([torch.from_numpy(np.ascontiguousarray(x.copy())) for x in masks]) - ) - instances.gt_masks = masks.tensor - - dataset_dict["instances"] = instances - - return dataset_dict diff --git a/spaces/hamelcubsfan/AutoGPT/CODE_OF_CONDUCT.md b/spaces/hamelcubsfan/AutoGPT/CODE_OF_CONDUCT.md deleted file mode 100644 index d2331b4c60b9fb27f06953273355dcf53b8d4321..0000000000000000000000000000000000000000 --- a/spaces/hamelcubsfan/AutoGPT/CODE_OF_CONDUCT.md +++ /dev/null @@ -1,40 +0,0 @@ -# Code of Conduct for auto-gpt - -## 1. Purpose - -The purpose of this Code of Conduct is to provide guidelines for contributors to the auto-gpt project on GitHub. We aim to create a positive and inclusive environment where all participants can contribute and collaborate effectively. By participating in this project, you agree to abide by this Code of Conduct. - -## 2. Scope - -This Code of Conduct applies to all contributors, maintainers, and users of the auto-gpt project. It extends to all project spaces, including but not limited to issues, pull requests, code reviews, comments, and other forms of communication within the project. - -## 3. 
Our Standards - -We encourage the following behavior: - -* Being respectful and considerate to others -* Actively seeking diverse perspectives -* Providing constructive feedback and assistance -* Demonstrating empathy and understanding - -We discourage the following behavior: - -* Harassment or discrimination of any kind -* Disrespectful, offensive, or inappropriate language or content -* Personal attacks or insults -* Unwarranted criticism or negativity - -## 4. Reporting and Enforcement - -If you witness or experience any violations of this Code of Conduct, please report them to the project maintainers by email or other appropriate means. The maintainers will investigate and take appropriate action, which may include warnings, temporary or permanent bans, or other measures as necessary. - -Maintainers are responsible for ensuring compliance with this Code of Conduct and may take action to address any violations. - -## 5. Acknowledgements - -This Code of Conduct is adapted from the [Contributor Covenant](https://www.contributor-covenant.org/version/2/0/code_of_conduct.html). - -## 6. Contact - -If you have any questions or concerns, please contact the project maintainers. - diff --git a/spaces/hamelcubsfan/AutoGPT/tests/local_cache_test.py b/spaces/hamelcubsfan/AutoGPT/tests/local_cache_test.py deleted file mode 100644 index bb10862656bb500f319ac231ff5bd5438d6fe7e2..0000000000000000000000000000000000000000 --- a/spaces/hamelcubsfan/AutoGPT/tests/local_cache_test.py +++ /dev/null @@ -1,67 +0,0 @@ -# sourcery skip: snake-case-functions -"""Tests for LocalCache class""" -import os -import sys -import unittest - -import pytest - -from autogpt.memory.local import LocalCache - - -def mock_config() -> dict: - """Mock the Config class""" - return type( - "MockConfig", - (object,), - { - "debug_mode": False, - "continuous_mode": False, - "speak_mode": False, - "memory_index": "auto-gpt", - }, - ) - - -@pytest.mark.integration_test -class TestLocalCache(unittest.TestCase): - """Tests for LocalCache class""" - - def setUp(self) -> None: - """Set up the test environment""" - self.cfg = mock_config() - self.cache = LocalCache(self.cfg) - - def test_add(self) -> None: - """Test adding a text to the cache""" - text = "Sample text" - self.cache.add(text) - self.assertIn(text, self.cache.data.texts) - - def test_clear(self) -> None: - """Test clearing the cache""" - self.cache.clear() - self.assertEqual(self.cache.data.texts, []) - - def test_get(self) -> None: - """Test getting a text from the cache""" - text = "Sample text" - self.cache.add(text) - result = self.cache.get(text) - self.assertEqual(result, [text]) - - def test_get_relevant(self) -> None: - """Test getting relevant texts from the cache""" - text1 = "Sample text 1" - text2 = "Sample text 2" - self.cache.add(text1) - self.cache.add(text2) - result = self.cache.get_relevant(text1, 1) - self.assertEqual(result, [text1]) - - def test_get_stats(self) -> None: - """Test getting the cache stats""" - text = "Sample text" - self.cache.add(text) - stats = self.cache.get_stats() - self.assertEqual(stats, (4, self.cache.data.embeddings.shape)) diff --git a/spaces/hands012/gpt-academic/crazy_functions/test_project/cpp/cppipc/waiter.h b/spaces/hands012/gpt-academic/crazy_functions/test_project/cpp/cppipc/waiter.h deleted file mode 100644 index ee45fe3517be95ac1688a3e3540189edeb0d860c..0000000000000000000000000000000000000000 --- a/spaces/hands012/gpt-academic/crazy_functions/test_project/cpp/cppipc/waiter.h +++ /dev/null @@ -1,83 +0,0 @@ -#pragma once - 
-#include -#include -#include -#include - -#include "libipc/def.h" -#include "libipc/mutex.h" -#include "libipc/condition.h" -#include "libipc/platform/detail.h" - -namespace ipc { -namespace detail { - -class waiter { - ipc::sync::condition cond_; - ipc::sync::mutex lock_; - std::atomic quit_ {false}; - -public: - static void init(); - - waiter() = default; - waiter(char const *name) { - open(name); - } - - ~waiter() { - close(); - } - - bool valid() const noexcept { - return cond_.valid() && lock_.valid(); - } - - bool open(char const *name) noexcept { - quit_.store(false, std::memory_order_relaxed); - if (!cond_.open((std::string{"_waiter_cond_"} + name).c_str())) { - return false; - } - if (!lock_.open((std::string{"_waiter_lock_"} + name).c_str())) { - cond_.close(); - return false; - } - return valid(); - } - - void close() noexcept { - cond_.close(); - lock_.close(); - } - - template - bool wait_if(F &&pred, std::uint64_t tm = ipc::invalid_value) noexcept { - IPC_UNUSED_ std::lock_guard guard {lock_}; - while ([this, &pred] { - return !quit_.load(std::memory_order_relaxed) - && std::forward(pred)(); - }()) { - if (!cond_.wait(lock_, tm)) return false; - } - return true; - } - - bool notify() noexcept { - std::lock_guard{lock_}; // barrier - return cond_.notify(lock_); - } - - bool broadcast() noexcept { - std::lock_guard{lock_}; // barrier - return cond_.broadcast(lock_); - } - - bool quit_waiting() { - quit_.store(true, std::memory_order_release); - return broadcast(); - } -}; - -} // namespace detail -} // namespace ipc diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/checkpoint/__init__.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/checkpoint/__init__.py deleted file mode 100644 index e17a9df03d886b379ffbb1c4ec41e03c5025410f..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/checkpoint/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -# File: - - -from . import catalog as _UNUSED # register the handler -from .detection_checkpoint import DetectionCheckpointer -from fvcore.common.checkpoint import Checkpointer, PeriodicCheckpointer - -__all__ = ["Checkpointer", "PeriodicCheckpointer", "DetectionCheckpointer"] diff --git a/spaces/heiyubili/bingo/src/components/chat-scroll-anchor.tsx b/spaces/heiyubili/bingo/src/components/chat-scroll-anchor.tsx deleted file mode 100644 index ac809f4486a48e134cb69314c3d0dae5e68d614e..0000000000000000000000000000000000000000 --- a/spaces/heiyubili/bingo/src/components/chat-scroll-anchor.tsx +++ /dev/null @@ -1,29 +0,0 @@ -'use client' - -import * as React from 'react' -import { useInView } from 'react-intersection-observer' - -import { useAtBottom } from '@/lib/hooks/use-at-bottom' - -interface ChatScrollAnchorProps { - trackVisibility?: boolean -} - -export function ChatScrollAnchor({ trackVisibility }: ChatScrollAnchorProps) { - const isAtBottom = useAtBottom() - const { ref, entry, inView } = useInView({ - trackVisibility, - delay: 100, - rootMargin: '0px 0px -150px 0px' - }) - - React.useEffect(() => { - if (isAtBottom && trackVisibility && !inView) { - entry?.target.scrollIntoView({ - block: 'start' - }) - } - }, [inView, entry, isAtBottom, trackVisibility]) - - return
      -} diff --git a/spaces/helidem/Projet-L3-Image/app.py b/spaces/helidem/Projet-L3-Image/app.py deleted file mode 100644 index 8557c68982e32498a2cc439cff4e66c9814a4e15..0000000000000000000000000000000000000000 --- a/spaces/helidem/Projet-L3-Image/app.py +++ /dev/null @@ -1,171 +0,0 @@ -import numpy as np -import cv2 as cv -import gradio as gr - -def load_image_cv(name): - return cv.imread(name) - -def redimensionnerImage(image, pourcentage_reduction): - height, width = image.shape[:2] - - # Calculer les nouvelles dimensions de l'image - new_height = int(height * pourcentage_reduction / 100) - new_width = int(width * pourcentage_reduction / 100) - - # Réduire la taille de l'image - img_redimensionnee = cv.resize(image, (new_width, new_height)) - return img_redimensionnee - -def is_white(img): - # Chargement de l'image en niveau de gris - gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY) - - # Binarisation de l'image - _, thresh = cv.threshold(gray, 0, 255, cv.THRESH_BINARY_INV+cv.THRESH_OTSU) - - # Récupération de la hauteur et la largeur de l'image - height, width = thresh.shape[:2] - - # Calcul de la surface rectangulaire au milieu de l'image - x = int(width * 0.25) - y = int(height * 0.25) - w = int(width * 0.5) - h = int(height * 0.5) - - # Extraction de la zone rectangulaire au milieu de l'image - zone = thresh[y:y+h, x:x+w] - - # Comptage des pixels blancs dans la zone rectangulaire - countBlancs = cv.countNonZero(zone) - countNoirs = zone.size - countBlancs - if countBlancs < countNoirs: - return True - -def detect_board(image, filled=False): - # copie de l'image - img = image.copy() - - # Convertion en niveau de gris - gray = cv.cvtColor(image, cv.COLOR_BGR2GRAY) - - # On applique Otsu pour binariser l'image - _, thresh = cv.threshold(gray, 0, 255, cv.THRESH_BINARY_INV+cv.THRESH_OTSU) - - # On cherche les contours de l'image - contours, _ = cv.findContours(thresh, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_NONE) - - # On récupère le plus grand contour - largest_contour = max(contours, key=cv.contourArea) - - # On dessine le contour en rouge - cv.drawContours(image, [largest_contour], 0, (0, 0, 255), 3) - - - - if (filled): - # On remplit le contour en rouge - cv.fillPoly(img, pts=[largest_contour], color=(0, 0, 255)) - - # On récupère les coordonnées du rectangle englobant le contour pour couper l'image - x, y, w, h = cv.boundingRect(largest_contour) - img = img[y:y+h, x:x+w] - - return img - -def detect_text(image): - gray = cv.cvtColor(image, cv.COLOR_BGR2GRAY) - ret, bin = cv.threshold(gray, 0, 255, cv.THRESH_BINARY + cv.THRESH_OTSU) - - kernel = np.ones((5, 5), np.uint8) - dilated = cv.dilate(bin, kernel, iterations=1) - - contours, _ = cv.findContours(dilated, cv.RETR_TREE, cv.CHAIN_APPROX_NONE) - - area_lower_bound = 175 - area_upper_bound = 7500 - - for contour in contours: - area = cv.contourArea(contour) - if area_lower_bound < area < area_upper_bound: - x, y, w, h = cv.boundingRect(contour) - cv.rectangle(image, (x, y), (x + w, y + h), (255, 0, 0), 3) - return image - -def detect_bloc_text(image): - gray = cv.cvtColor(image, cv.COLOR_BGR2GRAY) - ret, bin = cv.threshold(gray, 0, 255, cv.THRESH_BINARY + cv.THRESH_OTSU) - - kernel = np.ones((5, 5), np.uint8) - dilated = cv.dilate(bin, kernel, iterations=1) - - contours, _ = cv.findContours(dilated, cv.RETR_TREE, cv.CHAIN_APPROX_NONE) - - area_lower_bound = 175 - area_upper_bound = 7500 - - # Liste pour stocker les rectangles détectés - rectangles = [] - - for contour in contours: - area = cv.contourArea(contour) - if area_lower_bound < 
area < area_upper_bound: - x, y, w, h = cv.boundingRect(contour) - rectangles.append((x, y, w, h)) # Ajouter le rectangle à la liste - - # Vérifier si les rectangles sont serrés - if len(rectangles) > 1: - # Calculer les coordonnées du rectangle englobant - x_min = min(rect[0] for rect in rectangles) - y_min = min(rect[1] for rect in rectangles) - x_max = max(rect[0] + rect[2] for rect in rectangles) - y_max = max(rect[1] + rect[3] for rect in rectangles) - - # Dessiner le rectangle englobant - cv.rectangle(image, (x_min, y_min), (x_max, y_max), (255, 0, 0), 3) - - # crop image - image = image[y_min:y_max, x_min:x_max] - return image - -def main(image, mode): - if is_white(image): - image = redimensionnerImage(image, 50) - image = cv.bitwise_not(image) - if mode == "detect_board + detect_bloc_text": - image = detect_board(image) - image = detect_bloc_text(image) - elif mode == "detect_bloc_text": - image = detect_bloc_text(image) - elif mode == "detect_board + detect_text": - image = detect_board(image) - image = detect_text(image) - elif mode == "detect_board (filled)": - image = detect_board(image, True) - - image = cv.bitwise_not(image) - cv.putText(image, "white", (int(image.shape[1]/2), 50), cv.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0), 2) - - else: - image = redimensionnerImage(image, 50) - if mode == "detect_board + detect_bloc_text": - image = detect_board(image) - image = detect_bloc_text(image) - elif mode == "detect_bloc_text": - image = detect_bloc_text(image) - elif mode == "detect_board + detect_text": - image = detect_board(image) - image = detect_text(image) - elif mode == "detect_board (filled)": - image = detect_board(image, True) - cv.putText(image, "black", (int(image.shape[1]/2), 50), cv.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2) - - return image - - -demo = gr.Interface(main, - [ - "image", - gr.Radio(["detect_board + detect_bloc_text", "detect_bloc_text", "detect_board + detect_text", "detect_board (filled)"]), - ], - "image",) -demo.launch() \ No newline at end of file diff --git a/spaces/hkunlp/Binder/resources/introduction.md b/spaces/hkunlp/Binder/resources/introduction.md deleted file mode 100644 index 5f409d10a24f2b31fa5176c21164e9ff65b24190..0000000000000000000000000000000000000000 --- a/spaces/hkunlp/Binder/resources/introduction.md +++ /dev/null @@ -1,2 +0,0 @@ -## Introduction -[placeholder, mainly introduce Figure1(better the gif version)] \ No newline at end of file diff --git a/spaces/huak95/personaGPT_custom/frontend/pages/api/hello.ts b/spaces/huak95/personaGPT_custom/frontend/pages/api/hello.ts deleted file mode 100644 index f8bcc7e5caed177cb9ecfa7c02bc9a854b8ad1ff..0000000000000000000000000000000000000000 --- a/spaces/huak95/personaGPT_custom/frontend/pages/api/hello.ts +++ /dev/null @@ -1,13 +0,0 @@ -// Next.js API route support: https://nextjs.org/docs/api-routes/introduction -import type { NextApiRequest, NextApiResponse } from 'next' - -type Data = { - name: string -} - -export default function handler( - req: NextApiRequest, - res: NextApiResponse -) { - res.status(200).json({ name: 'John Doe' }) -} diff --git a/spaces/huang4414/Real-CUGAN/README.md b/spaces/huang4414/Real-CUGAN/README.md deleted file mode 100644 index d673114edadba73e80f33a3c71bc0dbee8758cc8..0000000000000000000000000000000000000000 --- a/spaces/huang4414/Real-CUGAN/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Real CUGAN -emoji: 🐢 -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false -license: gpl-3.0 -duplicated_from: 
DianXian/Real-CUGAN ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/huggan/butterfly-gan/custom_component/frontend/build/index.html b/spaces/huggan/butterfly-gan/custom_component/frontend/build/index.html deleted file mode 100644 index 39d9efe0df82adca33b299df621300503047697b..0000000000000000000000000000000000000000 --- a/spaces/huggan/butterfly-gan/custom_component/frontend/build/index.html +++ /dev/null @@ -1 +0,0 @@ -Streamlit Component
      \ No newline at end of file diff --git a/spaces/huggingchat/chat-ui/src/lib/utils/trimSuffix.ts b/spaces/huggingchat/chat-ui/src/lib/utils/trimSuffix.ts deleted file mode 100644 index 729107942ebaa2d7e1281dd77f8e52e8b135a5ad..0000000000000000000000000000000000000000 --- a/spaces/huggingchat/chat-ui/src/lib/utils/trimSuffix.ts +++ /dev/null @@ -1,6 +0,0 @@ -export function trimSuffix(input: string, end: string): string { - if (input.endsWith(end)) { - return input.slice(0, input.length - end.length); - } - return input; -} diff --git a/spaces/huggingface-projects/stable-diffusion-multiplayer/frontend/src/lib/liveblocks/useStorage.ts b/spaces/huggingface-projects/stable-diffusion-multiplayer/frontend/src/lib/liveblocks/useStorage.ts deleted file mode 100644 index 90f8c9949d192aa3e5025cbe177be21b610890d0..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/stable-diffusion-multiplayer/frontend/src/lib/liveblocks/useStorage.ts +++ /dev/null @@ -1,22 +0,0 @@ -// @ts-nocheck -import type { LiveObject } from "@liveblocks/client"; -import type { Writable } from "svelte/store"; -import { writable } from "svelte/store"; -import { useRoom } from "./useRoom"; - -/** - * No `liveblocks-react` public API equivalent, but useStorage is used internally - */ -export function useStorage(): Writable { - const room = useRoom(); - const rootStore = writable(); - - async function fetchStorage() { - const { root }: { root: LiveObject } = await room!.getStorage(); - rootStore.set(root); - } - - fetchStorage(); - - return rootStore; -} diff --git a/spaces/huggingface-projects/stable-diffusion-multiplayer/stablediffusion-infinity/PyPatchMatch/csrc/inpaint.cpp b/spaces/huggingface-projects/stable-diffusion-multiplayer/stablediffusion-infinity/PyPatchMatch/csrc/inpaint.cpp deleted file mode 100644 index de1f4b0c8bc74a2d4daf712827a903cc1385a2a7..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/stable-diffusion-multiplayer/stablediffusion-infinity/PyPatchMatch/csrc/inpaint.cpp +++ /dev/null @@ -1,234 +0,0 @@ -#include -#include -#include -#include -#include - -#include "inpaint.h" - -namespace { - static std::vector kDistance2Similarity; - - void init_kDistance2Similarity() { - double base[11] = {1.0, 0.99, 0.96, 0.83, 0.38, 0.11, 0.02, 0.005, 0.0006, 0.0001, 0}; - int length = (PatchDistanceMetric::kDistanceScale + 1); - kDistance2Similarity.resize(length); - for (int i = 0; i < length; ++i) { - double t = (double) i / length; - int j = (int) (100 * t); - int k = j + 1; - double vj = (j < 11) ? base[j] : 0; - double vk = (k < 11) ? base[k] : 0; - kDistance2Similarity[i] = vj + (100 * t - j) * (vk - vj); - } - } - - - inline void _weighted_copy(const MaskedImage &source, int ys, int xs, cv::Mat &target, int yt, int xt, double weight) { - if (source.is_masked(ys, xs)) return; - if (source.is_globally_masked(ys, xs)) return; - - auto source_ptr = source.get_image(ys, xs); - auto target_ptr = target.ptr(yt, xt); - -#pragma unroll - for (int c = 0; c < 3; ++c) - target_ptr[c] += static_cast(source_ptr[c]) * weight; - target_ptr[3] += weight; - } -} - -/** - * This algorithme uses a version proposed by Xavier Philippeau. 
- */ - -Inpainting::Inpainting(cv::Mat image, cv::Mat mask, const PatchDistanceMetric *metric) - : m_initial(image, mask), m_distance_metric(metric), m_pyramid(), m_source2target(), m_target2source() { - _initialize_pyramid(); -} - -Inpainting::Inpainting(cv::Mat image, cv::Mat mask, cv::Mat global_mask, const PatchDistanceMetric *metric) - : m_initial(image, mask, global_mask), m_distance_metric(metric), m_pyramid(), m_source2target(), m_target2source() { - _initialize_pyramid(); -} - -void Inpainting::_initialize_pyramid() { - auto source = m_initial; - m_pyramid.push_back(source); - while (source.size().height > m_distance_metric->patch_size() && source.size().width > m_distance_metric->patch_size()) { - source = source.downsample(); - m_pyramid.push_back(source); - } - - if (kDistance2Similarity.size() == 0) { - init_kDistance2Similarity(); - } -} - -cv::Mat Inpainting::run(bool verbose, bool verbose_visualize, unsigned int random_seed) { - srand(random_seed); - const int nr_levels = m_pyramid.size(); - - MaskedImage source, target; - for (int level = nr_levels - 1; level >= 0; --level) { - if (verbose) std::cerr << "Inpainting level: " << level << std::endl; - - source = m_pyramid[level]; - - if (level == nr_levels - 1) { - target = source.clone(); - target.clear_mask(); - m_source2target = NearestNeighborField(source, target, m_distance_metric); - m_target2source = NearestNeighborField(target, source, m_distance_metric); - } else { - m_source2target = NearestNeighborField(source, target, m_distance_metric, m_source2target); - m_target2source = NearestNeighborField(target, source, m_distance_metric, m_target2source); - } - - if (verbose) std::cerr << "Initialization done." << std::endl; - - if (verbose_visualize) { - auto visualize_size = m_initial.size(); - cv::Mat source_visualize(visualize_size, m_initial.image().type()); - cv::resize(source.image(), source_visualize, visualize_size); - cv::imshow("Source", source_visualize); - cv::Mat target_visualize(visualize_size, m_initial.image().type()); - cv::resize(target.image(), target_visualize, visualize_size); - cv::imshow("Target", target_visualize); - cv::waitKey(0); - } - - target = _expectation_maximization(source, target, level, verbose); - } - - return target.image(); -} - -// EM-Like algorithm (see "PatchMatch" - page 6). -// Returns a double sized target image (unless level = 0). -MaskedImage Inpainting::_expectation_maximization(MaskedImage source, MaskedImage target, int level, bool verbose) { - const int nr_iters_em = 1 + 2 * level; - const int nr_iters_nnf = static_cast(std::min(7, 1 + level)); - const int patch_size = m_distance_metric->patch_size(); - - MaskedImage new_source, new_target; - - for (int iter_em = 0; iter_em < nr_iters_em; ++iter_em) { - if (iter_em != 0) { - m_source2target.set_target(new_target); - m_target2source.set_source(new_target); - target = new_target; - } - - if (verbose) std::cerr << "EM Iteration: " << iter_em << std::endl; - - auto size = source.size(); - for (int i = 0; i < size.height; ++i) { - for (int j = 0; j < size.width; ++j) { - if (!source.contains_mask(i, j, patch_size)) { - m_source2target.set_identity(i, j); - m_target2source.set_identity(i, j); - } - } - } - if (verbose) std::cerr << " NNF minimization started." << std::endl; - m_source2target.minimize(nr_iters_nnf); - m_target2source.minimize(nr_iters_nnf); - if (verbose) std::cerr << " NNF minimization finished." << std::endl; - - // Instead of upsizing the final target, we build the last target from the next level source image. 
- // Thus, the final target is less blurry (see "Space-Time Video Completion" - page 5). - bool upscaled = false; - if (level >= 1 && iter_em == nr_iters_em - 1) { - new_source = m_pyramid[level - 1]; - new_target = target.upsample(new_source.size().width, new_source.size().height, m_pyramid[level - 1].global_mask()); - upscaled = true; - } else { - new_source = m_pyramid[level]; - new_target = target.clone(); - } - - auto vote = cv::Mat(new_target.size(), CV_64FC4); - vote.setTo(cv::Scalar::all(0)); - - // Votes for best patch from NNF Source->Target (completeness) and Target->Source (coherence). - _expectation_step(m_source2target, 1, vote, new_source, upscaled); - if (verbose) std::cerr << " Expectation source to target finished." << std::endl; - _expectation_step(m_target2source, 0, vote, new_source, upscaled); - if (verbose) std::cerr << " Expectation target to source finished." << std::endl; - - // Compile votes and update pixel values. - _maximization_step(new_target, vote); - if (verbose) std::cerr << " Minimization step finished." << std::endl; - } - - return new_target; -} - -// Expectation step: vote for best estimations of each pixel. -void Inpainting::_expectation_step( - const NearestNeighborField &nnf, bool source2target, - cv::Mat &vote, const MaskedImage &source, bool upscaled -) { - auto source_size = nnf.source_size(); - auto target_size = nnf.target_size(); - const int patch_size = m_distance_metric->patch_size(); - - for (int i = 0; i < source_size.height; ++i) { - for (int j = 0; j < source_size.width; ++j) { - if (nnf.source().is_globally_masked(i, j)) continue; - int yp = nnf.at(i, j, 0), xp = nnf.at(i, j, 1), dp = nnf.at(i, j, 2); - double w = kDistance2Similarity[dp]; - - for (int di = -patch_size; di <= patch_size; ++di) { - for (int dj = -patch_size; dj <= patch_size; ++dj) { - int ys = i + di, xs = j + dj, yt = yp + di, xt = xp + dj; - if (!(ys >= 0 && ys < source_size.height && xs >= 0 && xs < source_size.width)) continue; - if (nnf.source().is_globally_masked(ys, xs)) continue; - if (!(yt >= 0 && yt < target_size.height && xt >= 0 && xt < target_size.width)) continue; - if (nnf.target().is_globally_masked(yt, xt)) continue; - - if (!source2target) { - std::swap(ys, yt); - std::swap(xs, xt); - } - - if (upscaled) { - for (int uy = 0; uy < 2; ++uy) { - for (int ux = 0; ux < 2; ++ux) { - _weighted_copy(source, 2 * ys + uy, 2 * xs + ux, vote, 2 * yt + uy, 2 * xt + ux, w); - } - } - } else { - _weighted_copy(source, ys, xs, vote, yt, xt, w); - } - } - } - } - } -} - -// Maximization Step: maximum likelihood of target pixel. 
-void Inpainting::_maximization_step(MaskedImage &target, const cv::Mat &vote) { - auto target_size = target.size(); - for (int i = 0; i < target_size.height; ++i) { - for (int j = 0; j < target_size.width; ++j) { - const double *source_ptr = vote.ptr(i, j); - unsigned char *target_ptr = target.get_mutable_image(i, j); - - if (target.is_globally_masked(i, j)) { - continue; - } - - if (source_ptr[3] > 0) { - unsigned char r = cv::saturate_cast(source_ptr[0] / source_ptr[3]); - unsigned char g = cv::saturate_cast(source_ptr[1] / source_ptr[3]); - unsigned char b = cv::saturate_cast(source_ptr[2] / source_ptr[3]); - target_ptr[0] = r, target_ptr[1] = g, target_ptr[2] = b; - } else { - target.set_mask(i, j, 0); - } - } - } -} - diff --git a/spaces/huggingface-projects/wordalle/README.md b/spaces/huggingface-projects/wordalle/README.md deleted file mode 100644 index 57a9e53de895d6f42f6b4f6c7443c936f49702cd..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/wordalle/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Wordalle -emoji: 🥑📚🥑 -colorFrom: blue -colorTo: gray -sdk: gradio -sdk_version: 3.0.17 -app_file: main.py -fullWidth: true -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/huggingface-tools/text-to-image/text_to_image.py b/spaces/huggingface-tools/text-to-image/text_to_image.py deleted file mode 100644 index 3e758a1a6bfd6f0a178e20fea0e8bfac04fc1f3f..0000000000000000000000000000000000000000 --- a/spaces/huggingface-tools/text-to-image/text_to_image.py +++ /dev/null @@ -1,51 +0,0 @@ -from transformers.tools.base import Tool, get_default_device -from transformers.utils import is_accelerate_available -import torch - -from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler - - -TEXT_TO_IMAGE_DESCRIPTION = ( - "This is a tool that creates an image according to a prompt, which is a text description. It takes an input named `prompt` which " - "contains the image description and outputs an image." 
-) - - -class TextToImageTool(Tool): - default_checkpoint = "runwayml/stable-diffusion-v1-5" - description = TEXT_TO_IMAGE_DESCRIPTION - inputs = ['text'] - outputs = ['image'] - - def __init__(self, device=None, **hub_kwargs) -> None: - if not is_accelerate_available(): - raise ImportError("Accelerate should be installed in order to use tools.") - - super().__init__() - - self.device = device - self.pipeline = None - self.hub_kwargs = hub_kwargs - - def setup(self): - if self.device is None: - self.device = get_default_device() - - self.pipeline = DiffusionPipeline.from_pretrained(self.default_checkpoint) - self.pipeline.scheduler = DPMSolverMultistepScheduler.from_config(self.pipeline.scheduler.config) - self.pipeline.to(self.device) - - if self.device.type == "cuda": - self.pipeline.to(torch_dtype=torch.float16) - - self.is_initialized = True - - def __call__(self, prompt): - if not self.is_initialized: - self.setup() - - negative_prompt = "low quality, bad quality, deformed, low resolution" - added_prompt = " , highest quality, highly realistic, very high resolution" - - return self.pipeline(prompt + added_prompt, negative_prompt=negative_prompt, num_inference_steps=25).images[0] - diff --git a/spaces/hunkim/echo/app.py b/spaces/hunkim/echo/app.py deleted file mode 100644 index a4491fa68b763a8a344f905b856e79f8ff7aabf7..0000000000000000000000000000000000000000 --- a/spaces/hunkim/echo/app.py +++ /dev/null @@ -1,4 +0,0 @@ -import streamlit as st - -x = st.slider('Select a value') -st.write(x, 'squared is', x * x) \ No newline at end of file diff --git a/spaces/hzy123/bingo/src/lib/bots/bing/utils.ts b/spaces/hzy123/bingo/src/lib/bots/bing/utils.ts deleted file mode 100644 index 64b4b96452d125346b0fc4436b5f7c18c962df0b..0000000000000000000000000000000000000000 --- a/spaces/hzy123/bingo/src/lib/bots/bing/utils.ts +++ /dev/null @@ -1,87 +0,0 @@ -import { ChatResponseMessage, BingChatResponse } from './types' - -export function convertMessageToMarkdown(message: ChatResponseMessage): string { - if (message.messageType === 'InternalSearchQuery') { - return message.text - } - for (const card of message.adaptiveCards??[]) { - for (const block of card.body) { - if (block.type === 'TextBlock') { - return block.text - } - } - } - return '' -} - -const RecordSeparator = String.fromCharCode(30) - -export const websocketUtils = { - packMessage(data: any) { - return `${JSON.stringify(data)}${RecordSeparator}` - }, - unpackMessage(data: string | ArrayBuffer | Blob) { - if (!data) return {} - return data - .toString() - .split(RecordSeparator) - .filter(Boolean) - .map((s) => { - try { - return JSON.parse(s) - } catch (e) { - return {} - } - }) - }, -} - -export async function createImage(prompt: string, id: string, headers: HeadersInit): Promise { - const { headers: responseHeaders } = await fetch(`https://www.bing.com/images/create?partner=sydney&re=1&showselective=1&sude=1&kseed=7000&SFX=&q=${encodeURIComponent(prompt)}&iframeid=${id}`, - { - method: 'HEAD', - headers, - redirect: 'manual' - }, - ); - - if (!/&id=([^&]+)$/.test(responseHeaders.get('location') || '')) { - throw new Error('请求异常,请检查 cookie 是否有效') - } - - const resultId = RegExp.$1; - let count = 0 - const imageThumbUrl = `https://www.bing.com/images/create/async/results/${resultId}?q=${encodeURIComponent(prompt)}&partner=sydney&showselective=1&IID=images.as`; - - do { - await sleep(3000); - const content = await fetch(imageThumbUrl, { headers, method: 'GET' }) - - // @ts-ignore - if (content.headers.get('content-length') > 1) { - const 
text = await content.text() - return (text?.match(/ target?.split('src="').pop()?.replace(/&/g, '&')) - .map(img => `![${prompt}](${img})`).join(' ') - } - } while(count ++ < 10); -} - - -export async function* streamAsyncIterable(stream: ReadableStream) { - const reader = stream.getReader() - try { - while (true) { - const { done, value } = await reader.read() - if (done) { - return - } - yield value - } - } finally { - reader.releaseLock() - } -} - -export const sleep = (ms: number) => new Promise(resolve => setTimeout(resolve, ms)) - diff --git a/spaces/iamironman4279/SadTalker/src/face3d/visualize.py b/spaces/iamironman4279/SadTalker/src/face3d/visualize.py deleted file mode 100644 index 23a1110806a0ddf37d4aa549c023d1c3f7114e3e..0000000000000000000000000000000000000000 --- a/spaces/iamironman4279/SadTalker/src/face3d/visualize.py +++ /dev/null @@ -1,48 +0,0 @@ -# check the sync of 3dmm feature and the audio -import cv2 -import numpy as np -from src.face3d.models.bfm import ParametricFaceModel -from src.face3d.models.facerecon_model import FaceReconModel -import torch -import subprocess, platform -import scipy.io as scio -from tqdm import tqdm - -# draft -def gen_composed_video(args, device, first_frame_coeff, coeff_path, audio_path, save_path, exp_dim=64): - - coeff_first = scio.loadmat(first_frame_coeff)['full_3dmm'] - - coeff_pred = scio.loadmat(coeff_path)['coeff_3dmm'] - - coeff_full = np.repeat(coeff_first, coeff_pred.shape[0], axis=0) # 257 - - coeff_full[:, 80:144] = coeff_pred[:, 0:64] - coeff_full[:, 224:227] = coeff_pred[:, 64:67] # 3 dim translation - coeff_full[:, 254:] = coeff_pred[:, 67:] # 3 dim translation - - tmp_video_path = '/tmp/face3dtmp.mp4' - - facemodel = FaceReconModel(args) - - video = cv2.VideoWriter(tmp_video_path, cv2.VideoWriter_fourcc(*'mp4v'), 25, (224, 224)) - - for k in tqdm(range(coeff_pred.shape[0]), 'face3d rendering:'): - cur_coeff_full = torch.tensor(coeff_full[k:k+1], device=device) - - facemodel.forward(cur_coeff_full, device) - - predicted_landmark = facemodel.pred_lm # TODO. - predicted_landmark = predicted_landmark.cpu().numpy().squeeze() - - rendered_img = facemodel.pred_face - rendered_img = 255. 
* rendered_img.cpu().numpy().squeeze().transpose(1,2,0) - out_img = rendered_img[:, :, :3].astype(np.uint8) - - video.write(np.uint8(out_img[:,:,::-1])) - - video.release() - - command = 'ffmpeg -v quiet -y -i {} -i {} -strict -2 -q:v 1 {}'.format(audio_path, tmp_video_path, save_path) - subprocess.call(command, shell=platform.system() != 'Windows') - diff --git a/spaces/innat/VideoSwin/labels.py b/spaces/innat/VideoSwin/labels.py deleted file mode 100644 index 767ccf33b2e55d2d8bbb31c100525b95ae902cfd..0000000000000000000000000000000000000000 --- a/spaces/innat/VideoSwin/labels.py +++ /dev/null @@ -1,579 +0,0 @@ -K400_label_map = { - "abseiling": 0, - "air_drumming": 1, - "answering_questions": 2, - "applauding": 3, - "applying_cream": 4, - "archery": 5, - "arm_wrestling": 6, - "arranging_flowers": 7, - "assembling_computer": 8, - "auctioning": 9, - "baby_waking_up": 10, - "baking_cookies": 11, - "balloon_blowing": 12, - "bandaging": 13, - "barbequing": 14, - "bartending": 15, - "beatboxing": 16, - "bee_keeping": 17, - "belly_dancing": 18, - "bench_pressing": 19, - "bending_back": 20, - "bending_metal": 21, - "biking_through_snow": 22, - "blasting_sand": 23, - "blowing_glass": 24, - "blowing_leaves": 25, - "blowing_nose": 26, - "blowing_out_candles": 27, - "bobsledding": 28, - "bookbinding": 29, - "bouncing_on_trampoline": 30, - "bowling": 31, - "braiding_hair": 32, - "breading_or_breadcrumbing": 33, - "breakdancing": 34, - "brush_painting": 35, - "brushing_hair": 36, - "brushing_teeth": 37, - "building_cabinet": 38, - "building_shed": 39, - "bungee_jumping": 40, - "busking": 41, - "canoeing_or_kayaking": 42, - "capoeira": 43, - "carrying_baby": 44, - "cartwheeling": 45, - "carving_pumpkin": 46, - "catching_fish": 47, - "catching_or_throwing_baseball": 48, - "catching_or_throwing_frisbee": 49, - "catching_or_throwing_softball": 50, - "celebrating": 51, - "changing_oil": 52, - "changing_wheel": 53, - "checking_tires": 54, - "cheerleading": 55, - "chopping_wood": 56, - "clapping": 57, - "clay_pottery_making": 58, - "clean_and_jerk": 59, - "cleaning_floor": 60, - "cleaning_gutters": 61, - "cleaning_pool": 62, - "cleaning_shoes": 63, - "cleaning_toilet": 64, - "cleaning_windows": 65, - "climbing_a_rope": 66, - "climbing_ladder": 67, - "climbing_tree": 68, - "contact_juggling": 69, - "cooking_chicken": 70, - "cooking_egg": 71, - "cooking_on_campfire": 72, - "cooking_sausages": 73, - "counting_money": 74, - "country_line_dancing": 75, - "cracking_neck": 76, - "crawling_baby": 77, - "crossing_river": 78, - "crying": 79, - "curling_hair": 80, - "cutting_nails": 81, - "cutting_pineapple": 82, - "cutting_watermelon": 83, - "dancing_ballet": 84, - "dancing_charleston": 85, - "dancing_gangnam_style": 86, - "dancing_macarena": 87, - "deadlifting": 88, - "decorating_the_christmas_tree": 89, - "digging": 90, - "dining": 91, - "disc_golfing": 92, - "diving_cliff": 93, - "dodgeball": 94, - "doing_aerobics": 95, - "doing_laundry": 96, - "doing_nails": 97, - "drawing": 98, - "dribbling_basketball": 99, - "drinking": 100, - "drinking_beer": 101, - "drinking_shots": 102, - "driving_car": 103, - "driving_tractor": 104, - "drop_kicking": 105, - "drumming_fingers": 106, - "dunking_basketball": 107, - "dying_hair": 108, - "eating_burger": 109, - "eating_cake": 110, - "eating_carrots": 111, - "eating_chips": 112, - "eating_doughnuts": 113, - "eating_hotdog": 114, - "eating_ice_cream": 115, - "eating_spaghetti": 116, - "eating_watermelon": 117, - "egg_hunting": 118, - "exercising_arm": 119, - 
"exercising_with_an_exercise_ball": 120, - "extinguishing_fire": 121, - "faceplanting": 122, - "feeding_birds": 123, - "feeding_fish": 124, - "feeding_goats": 125, - "filling_eyebrows": 126, - "finger_snapping": 127, - "fixing_hair": 128, - "flipping_pancake": 129, - "flying_kite": 130, - "folding_clothes": 131, - "folding_napkins": 132, - "folding_paper": 133, - "front_raises": 134, - "frying_vegetables": 135, - "garbage_collecting": 136, - "gargling": 137, - "getting_a_haircut": 138, - "getting_a_tattoo": 139, - "giving_or_receiving_award": 140, - "golf_chipping": 141, - "golf_driving": 142, - "golf_putting": 143, - "grinding_meat": 144, - "grooming_dog": 145, - "grooming_horse": 146, - "gymnastics_tumbling": 147, - "hammer_throw": 148, - "headbanging": 149, - "headbutting": 150, - "high_jump": 151, - "high_kick": 152, - "hitting_baseball": 153, - "hockey_stop": 154, - "holding_snake": 155, - "hopscotch": 156, - "hoverboarding": 157, - "hugging": 158, - "hula_hooping": 159, - "hurdling": 160, - "hurling_(sport)": 161, - "ice_climbing": 162, - "ice_fishing": 163, - "ice_skating": 164, - "ironing": 165, - "javelin_throw": 166, - "jetskiing": 167, - "jogging": 168, - "juggling_balls": 169, - "juggling_fire": 170, - "juggling_soccer_ball": 171, - "jumping_into_pool": 172, - "jumpstyle_dancing": 173, - "kicking_field_goal": 174, - "kicking_soccer_ball": 175, - "kissing": 176, - "kitesurfing": 177, - "knitting": 178, - "krumping": 179, - "laughing": 180, - "laying_bricks": 181, - "long_jump": 182, - "lunge": 183, - "making_a_cake": 184, - "making_a_sandwich": 185, - "making_bed": 186, - "making_jewelry": 187, - "making_pizza": 188, - "making_snowman": 189, - "making_sushi": 190, - "making_tea": 191, - "marching": 192, - "massaging_back": 193, - "massaging_feet": 194, - "massaging_legs": 195, - "massaging_person's_head": 196, - "milking_cow": 197, - "mopping_floor": 198, - "motorcycling": 199, - "moving_furniture": 200, - "mowing_lawn": 201, - "news_anchoring": 202, - "opening_bottle": 203, - "opening_present": 204, - "paragliding": 205, - "parasailing": 206, - "parkour": 207, - "passing_American_football_(in_game)": 208, - "passing_American_football_(not_in_game)": 209, - "peeling_apples": 210, - "peeling_potatoes": 211, - "petting_animal_(not_cat)": 212, - "petting_cat": 213, - "picking_fruit": 214, - "planting_trees": 215, - "plastering": 216, - "playing_accordion": 217, - "playing_badminton": 218, - "playing_bagpipes": 219, - "playing_basketball": 220, - "playing_bass_guitar": 221, - "playing_cards": 222, - "playing_cello": 223, - "playing_chess": 224, - "playing_clarinet": 225, - "playing_controller": 226, - "playing_cricket": 227, - "playing_cymbals": 228, - "playing_didgeridoo": 229, - "playing_drums": 230, - "playing_flute": 231, - "playing_guitar": 232, - "playing_harmonica": 233, - "playing_harp": 234, - "playing_ice_hockey": 235, - "playing_keyboard": 236, - "playing_kickball": 237, - "playing_monopoly": 238, - "playing_organ": 239, - "playing_paintball": 240, - "playing_piano": 241, - "playing_poker": 242, - "playing_recorder": 243, - "playing_saxophone": 244, - "playing_squash_or_racquetball": 245, - "playing_tennis": 246, - "playing_trombone": 247, - "playing_trumpet": 248, - "playing_ukulele": 249, - "playing_violin": 250, - "playing_volleyball": 251, - "playing_xylophone": 252, - "pole_vault": 253, - "presenting_weather_forecast": 254, - "pull_ups": 255, - "pumping_fist": 256, - "pumping_gas": 257, - "punching_bag": 258, - "punching_person_(boxing)": 259, - "push_up": 260, - 
"pushing_car": 261, - "pushing_cart": 262, - "pushing_wheelchair": 263, - "reading_book": 264, - "reading_newspaper": 265, - "recording_music": 266, - "riding_a_bike": 267, - "riding_camel": 268, - "riding_elephant": 269, - "riding_mechanical_bull": 270, - "riding_mountain_bike": 271, - "riding_mule": 272, - "riding_or_walking_with_horse": 273, - "riding_scooter": 274, - "riding_unicycle": 275, - "ripping_paper": 276, - "robot_dancing": 277, - "rock_climbing": 278, - "rock_scissors_paper": 279, - "roller_skating": 280, - "running_on_treadmill": 281, - "sailing": 282, - "salsa_dancing": 283, - "sanding_floor": 284, - "scrambling_eggs": 285, - "scuba_diving": 286, - "setting_table": 287, - "shaking_hands": 288, - "shaking_head": 289, - "sharpening_knives": 290, - "sharpening_pencil": 291, - "shaving_head": 292, - "shaving_legs": 293, - "shearing_sheep": 294, - "shining_shoes": 295, - "shooting_basketball": 296, - "shooting_goal_(soccer)": 297, - "shot_put": 298, - "shoveling_snow": 299, - "shredding_paper": 300, - "shuffling_cards": 301, - "side_kick": 302, - "sign_language_interpreting": 303, - "singing": 304, - "situp": 305, - "skateboarding": 306, - "ski_jumping": 307, - "skiing_(not_slalom_or_crosscountry)": 308, - "skiing_crosscountry": 309, - "skiing_slalom": 310, - "skipping_rope": 311, - "skydiving": 312, - "slacklining": 313, - "slapping": 314, - "sled_dog_racing": 315, - "smoking": 316, - "smoking_hookah": 317, - "snatch_weight_lifting": 318, - "sneezing": 319, - "sniffing": 320, - "snorkeling": 321, - "snowboarding": 322, - "snowkiting": 323, - "snowmobiling": 324, - "somersaulting": 325, - "spinning_poi": 326, - "spray_painting": 327, - "spraying": 328, - "springboard_diving": 329, - "squat": 330, - "sticking_tongue_out": 331, - "stomping_grapes": 332, - "stretching_arm": 333, - "stretching_leg": 334, - "strumming_guitar": 335, - "surfing_crowd": 336, - "surfing_water": 337, - "sweeping_floor": 338, - "swimming_backstroke": 339, - "swimming_breast_stroke": 340, - "swimming_butterfly_stroke": 341, - "swing_dancing": 342, - "swinging_legs": 343, - "swinging_on_something": 344, - "sword_fighting": 345, - "tai_chi": 346, - "taking_a_shower": 347, - "tango_dancing": 348, - "tap_dancing": 349, - "tapping_guitar": 350, - "tapping_pen": 351, - "tasting_beer": 352, - "tasting_food": 353, - "testifying": 354, - "texting": 355, - "throwing_axe": 356, - "throwing_ball": 357, - "throwing_discus": 358, - "tickling": 359, - "tobogganing": 360, - "tossing_coin": 361, - "tossing_salad": 362, - "training_dog": 363, - "trapezing": 364, - "trimming_or_shaving_beard": 365, - "trimming_trees": 366, - "triple_jump": 367, - "tying_bow_tie": 368, - "tying_knot_(not_on_a_tie)": 369, - "tying_tie": 370, - "unboxing": 371, - "unloading_truck": 372, - "using_computer": 373, - "using_remote_controller_(not_gaming)": 374, - "using_segway": 375, - "vault": 376, - "waiting_in_line": 377, - "walking_the_dog": 378, - "washing_dishes": 379, - "washing_feet": 380, - "washing_hair": 381, - "washing_hands": 382, - "water_skiing": 383, - "water_sliding": 384, - "watering_plants": 385, - "waxing_back": 386, - "waxing_chest": 387, - "waxing_eyebrows": 388, - "waxing_legs": 389, - "weaving_basket": 390, - "welding": 391, - "whistling": 392, - "windsurfing": 393, - "wrapping_present": 394, - "wrestling": 395, - "writing": 396, - "yawning": 397, - "yoga": 398, - "zumba": 399, -} -SSv2_label_map = { - "Approaching something with your camera": 0, - "Attaching something to something": 1, - "Bending something so that it 
deforms": 2, - "Bending something until it breaks": 3, - "Burying something in something": 4, - "Closing something": 5, - "Covering something with something": 6, - "Digging something out of something": 7, - "Dropping something behind something": 8, - "Dropping something in front of something": 9, - "Dropping something into something": 10, - "Dropping something next to something": 11, - "Dropping something onto something": 12, - "Failing to put something into something because something does not fit": 13, - "Folding something": 14, - "Hitting something with something": 15, - "Holding something": 16, - "Holding something behind something": 17, - "Holding something in front of something": 18, - "Holding something next to something": 19, - "Holding something over something": 20, - "Laying something on the table on its side, not upright": 21, - "Letting something roll along a flat surface": 22, - "Letting something roll down a slanted surface": 23, - "Letting something roll up a slanted surface, so it rolls back down": 24, - "Lifting a surface with something on it but not enough for it to slide down": 25, - "Lifting a surface with something on it until it starts sliding down": 26, - "Lifting something up completely without letting it drop down": 27, - "Lifting something up completely, then letting it drop down": 28, - "Lifting something with something on it": 29, - "Lifting up one end of something without letting it drop down": 30, - "Lifting up one end of something, then letting it drop down": 31, - "Moving away from something with your camera": 32, - "Moving part of something": 33, - "Moving something across a surface until it falls down": 34, - "Moving something across a surface without it falling down": 35, - "Moving something and something away from each other": 36, - "Moving something and something closer to each other": 37, - "Moving something and something so they collide with each other": 38, - "Moving something and something so they pass each other": 39, - "Moving something away from something": 40, - "Moving something away from the camera": 41, - "Moving something closer to something": 42, - "Moving something down": 43, - "Moving something towards the camera": 44, - "Moving something up": 45, - "Opening something": 46, - "Picking something up": 47, - "Piling something up": 48, - "Plugging something into something": 49, - "Plugging something into something but pulling it right out as you remove your hand": 50, - "Poking a hole into some substance": 51, - "Poking a hole into something soft": 52, - "Poking a stack of something so the stack collapses": 53, - "Poking a stack of something without the stack collapsing": 54, - "Poking something so it slightly moves": 55, - "Poking something so lightly that it doesn't or almost doesn't move": 56, - "Poking something so that it falls over": 57, - "Poking something so that it spins around": 58, - "Pouring something into something": 59, - "Pouring something into something until it overflows": 60, - "Pouring something onto something": 61, - "Pouring something out of something": 62, - "Pretending or failing to wipe something off of something": 63, - "Pretending or trying and failing to twist something": 64, - "Pretending to be tearing something that is not tearable": 65, - "Pretending to close something without actually closing it": 66, - "Pretending to open something without actually opening it": 67, - "Pretending to pick something up": 68, - "Pretending to poke something": 69, - "Pretending to pour something out of something, but something is 
empty": 70, - "Pretending to put something behind something": 71, - "Pretending to put something into something": 72, - "Pretending to put something next to something": 73, - "Pretending to put something on a surface": 74, - "Pretending to put something onto something": 75, - "Pretending to put something underneath something": 76, - "Pretending to scoop something up with something": 77, - "Pretending to spread air onto something": 78, - "Pretending to sprinkle air onto something": 79, - "Pretending to squeeze something": 80, - "Pretending to take something from somewhere": 81, - "Pretending to take something out of something": 82, - "Pretending to throw something": 83, - "Pretending to turn something upside down": 84, - "Pulling something from behind of something": 85, - "Pulling something from left to right": 86, - "Pulling something from right to left": 87, - "Pulling something onto something": 88, - "Pulling something out of something": 89, - "Pulling two ends of something but nothing happens": 90, - "Pulling two ends of something so that it gets stretched": 91, - "Pulling two ends of something so that it separates into two pieces": 92, - "Pushing something from left to right": 93, - "Pushing something from right to left": 94, - "Pushing something off of something": 95, - "Pushing something onto something": 96, - "Pushing something so it spins": 97, - "Pushing something so that it almost falls off but doesn't": 98, - "Pushing something so that it falls off the table": 99, - "Pushing something so that it slightly moves": 100, - "Pushing something with something": 101, - "Putting number of something onto something": 102, - "Putting something and something on the table": 103, - "Putting something behind something": 104, - "Putting something in front of something": 105, - "Putting something into something": 106, - "Putting something next to something": 107, - "Putting something on a flat surface without letting it roll": 108, - "Putting something on a surface": 109, - "Putting something on the edge of something so it is not supported and falls down": 110, - "Putting something onto a slanted surface but it doesn't glide down": 111, - "Putting something onto something": 112, - "Putting something onto something else that cannot support it so it falls down": 113, - "Putting something similar to other things that are already on the table": 114, - "Putting something that can't roll onto a slanted surface, so it slides down": 115, - "Putting something that can't roll onto a slanted surface, so it stays where it is": 116, - "Putting something that cannot actually stand upright upright on the table, so it falls on its side": 117, - "Putting something underneath something": 118, - "Putting something upright on the table": 119, - "Putting something, something and something on the table": 120, - "Removing something, revealing something behind": 121, - "Rolling something on a flat surface": 122, - "Scooping something up with something": 123, - "Showing a photo of something to the camera": 124, - "Showing something behind something": 125, - "Showing something next to something": 126, - "Showing something on top of something": 127, - "Showing something to the camera": 128, - "Showing that something is empty": 129, - "Showing that something is inside something": 130, - "Something being deflected from something": 131, - "Something colliding with something and both are being deflected": 132, - "Something colliding with something and both come to a halt": 133, - "Something falling like a feather or paper": 
134, - "Something falling like a rock": 135, - "Spilling something behind something": 136, - "Spilling something next to something": 137, - "Spilling something onto something": 138, - "Spinning something so it continues spinning": 139, - "Spinning something that quickly stops spinning": 140, - "Spreading something onto something": 141, - "Sprinkling something onto something": 142, - "Squeezing something": 143, - "Stacking number of something": 144, - "Stuffing something into something": 145, - "Taking one of many similar things on the table": 146, - "Taking something from somewhere": 147, - "Taking something out of something": 148, - "Tearing something into two pieces": 149, - "Tearing something just a little bit": 150, - "Throwing something": 151, - "Throwing something against something": 152, - "Throwing something in the air and catching it": 153, - "Throwing something in the air and letting it fall": 154, - "Throwing something onto a surface": 155, - "Tilting something with something on it slightly so it doesn't fall down": 156, - "Tilting something with something on it until it falls off": 157, - "Tipping something over": 158, - "Tipping something with something in it over, so something in it falls out": 159, - "Touching (without moving) part of something": 160, - "Trying but failing to attach something to something because it doesn't stick": 161, - "Trying to bend something unbendable so nothing happens": 162, - "Trying to pour something into something, but missing so it spills next to it": 163, - "Turning something upside down": 164, - "Turning the camera downwards while filming something": 165, - "Turning the camera left while filming something": 166, - "Turning the camera right while filming something": 167, - "Turning the camera upwards while filming something": 168, - "Twisting (wringing) something wet until water comes out": 169, - "Twisting something": 170, - "Uncovering something": 171, - "Unfolding something": 172, - "Wiping something off of something": 173, - "Moving something and something so they overlap each other": 174, -} diff --git a/spaces/innovatorved/whisper.api/app/api/models/ping.py b/spaces/innovatorved/whisper.api/app/api/models/ping.py deleted file mode 100644 index 6a93e550cc25e0deeb0fb234dbbe64e5326ffcb0..0000000000000000000000000000000000000000 --- a/spaces/innovatorved/whisper.api/app/api/models/ping.py +++ /dev/null @@ -1,5 +0,0 @@ -from pydantic import BaseModel - - -class PingResponse(BaseModel): - ping: str diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/1978 Brian Eno Ambient 1 Music For Airports.zip.rar ((FULL)).md b/spaces/inplisQlawa/anything-midjourney-v4-1/1978 Brian Eno Ambient 1 Music For Airports.zip.rar ((FULL)).md deleted file mode 100644 index 2f823c5901fdc8d6d1a7b347105c214670447195..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/1978 Brian Eno Ambient 1 Music For Airports.zip.rar ((FULL)).md +++ /dev/null @@ -1,14 +0,0 @@ -

      1978 Brian Eno Ambient 1 Music For Airports.zip.rar


Download Zip https://urlin.us/2uEvLl



      - -This is a limited version of Brian Eno’s Ambient 1 album. Eno used the name “Ambient” because it was a catchall for an emerging group of electronic music producers. The album uses several instruments including synthesizers and tape loops. The album takes a refreshing approach to sound with undulating passages of synthesized drones that open the LP, and fade into ambient music, the musical equivalent of daydreaming. The album is a musical exploration and technological experiment, a genre-busting LP with a decidedly post-punk edge. Ambient 1 is the first of three albums released under the moniker. Eno worked with Anne Clark, Robert Fripp, and Peter Gabriel to produce the LP. - -Brian Eno (l) and David Byrne (r). - -The Ambient 1 LP is the first part of Eno’s Ambient series, and it was a revelation at the time. As Stephen Holden wrote in the May 1978 issue of New York Magazine, Eno took the term “ambient music” and “solved” it: “The genre is still fresh and exciting. Its appeal to the audience – as well as its future prospects – seem unlimited.” The Ambient 1 LP opens with an enveloping drone that slowly builds and ebbs, and moves seamlessly into the equally hypnotic Music For Airports. These passages are hard to define because they change their rhythm over the course of several minutes. - -Music For Airports opens with long, low drones that mutate into the slowly evolving and bouncing groove of “1st Air.” The track also features a notable example of reverb on bass guitar, while the drums and cymbals dance and swoop in and out of the song. Music For Airports is the first long track on the album. Eno once told Steve Wacks that the long piece of music in Ambient 1 represents what “could be a very large musical event that would take place over a period of a few hours.” It’s an apt way to describe the album, which weaves electronic sounds together into a seamless and mesmerizing piece. - -Byrne often listens to Eno’s music on the road while travelling. In a 1978 interview, Byrne said, “I can’t think of anyone else who’s done as much to change the sound of rock music.” “It’s a powerful 4fefd39f24
      -
      -
      -

      diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/EASEUS Data Recovery Wizard 13.0 With Crack Key.md b/spaces/inplisQlawa/anything-midjourney-v4-1/EASEUS Data Recovery Wizard 13.0 With Crack Key.md deleted file mode 100644 index 027e923e38820e58e96ecb8e2223970cb89fc122..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/EASEUS Data Recovery Wizard 13.0 With Crack Key.md +++ /dev/null @@ -1,9 +0,0 @@ - -

EaseUS Data Recovery Wizard 13.0 Crack is data recovery software for all kinds of users, including home users, office users, students, professionals, enterprises, and companies. It is very easy to use and recovers lost data from all kinds of storage devices, including mobile phones, SD cards, and other drives. With this software you can recover your lost documents, emails, data, multimedia, and much more.

      -

The EaseUS Data Recovery Wizard 13 Crack software is very easy to use. It provides a 30-day trial period so that you can try it before making a final decision. Moreover, the software helps you recover your data even when your system is in an unbootable state. Although the underlying recovery process is quite complicated, the program itself is still very simple to use.

      -

      EASEUS Data Recovery Wizard 13.0 With Crack Key


      Download Zip 🆗 https://urlin.us/2uEwx4



      -

This data recovery software can be used to recover data from almost any storage device, including pen drives, hard drives, floppy drives, flash drives, memory cards, digital cameras, MP3 players, and more.

      -

Moreover, it is easy-to-use software that is compatible with all major platforms, including Windows, macOS, iOS, and Android, and it is one of the best data recovery tools in terms of performance and accuracy.

      -

This is the best software to recover your lost data and files on Windows and Mac, and it provides a complete, easy-to-use recovery workflow. EaseUS Data Recovery Wizard 13 Serial Number assists you in recovering lost data from memory cards, external hard disks, external USB drives, lost PCs, cameras, smartphones, and more. It supports all common file types, such as videos, audio, photos, music, and documents. Its advanced search options present lost files as a list, so users can get their files back with a few simple steps, and its advanced recovery engine can retrieve valuable data that was lost days or months ago.

      -
      -
      \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Hitman2silentassassindownloadfreefullversionpcgames.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Hitman2silentassassindownloadfreefullversionpcgames.md deleted file mode 100644 index 9a58347552d9b3ad9c1d29381b1d0926c0e413c4..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Hitman2silentassassindownloadfreefullversionpcgames.md +++ /dev/null @@ -1,26 +0,0 @@ - -

      How to Download Hitman 2: Silent Assassin for Free on PC

      -

      Hitman 2: Silent Assassin is a stealth action game that lets you play as Agent 47, a genetically-engineered assassin who must kill his targets in various locations around the world. The game offers a lot of freedom and creativity in how you approach each mission, whether you use stealth, disguise, weapons, or gadgets. You can also choose from different difficulty levels and ratings based on your performance.

      -

      hitman2silentassassindownloadfreefullversionpcgames


      Download ->>> https://urlin.us/2uEw4O



      -

      If you want to download Hitman 2: Silent Assassin for free on your PC, you can follow these steps:

      -
        -
1. Click on the link below to go to the download page of Hitman 2: Silent Assassin on GOG Unlocked. This is a website that provides free downloads of DRM-free games.
      2. -
      3. Wait for 5 seconds and click on the blue 'download now' button. You can use a download manager for faster download speeds.
      4. -
      5. Once the game is finished downloading, right click the .zip file and extract it using 7-Zip or another extractor.
      6. -
      7. Double click inside the Hitman 2: Silent Assassin folder and run the setup application. Accept the EULA and install the game.
      8. -
      9. Launch the game through the desktop shortcut and enjoy!
      10. -
      -

      Note: This download is completely free and won't cost you a penny. However, if you love the game and want to support the developers, you can buy it from their official website or other platforms.

      Hitman 2: Silent Assassin - Tips and Tricks for Getting Started

      -

      Hitman 2: Silent Assassin is not an easy game, especially for beginners. You will need to master the art of stealth, disguise, and strategy to complete your missions successfully. Here are some tips and tricks that can help you get started:

      -

      -
        -
      • Save often. The game does not have an autosave feature, so you will need to manually save your progress frequently. You can save up to seven times per mission on normal difficulty, and only once on professional difficulty.
      • -
      • Use the map. The map is a very useful tool that shows you the layout of the area, the location of your targets, and the movement of enemies. You can also use it to plan your route and avoid detection.
      • -
      • Use disguises. Disguises are one of the best ways to blend in and access restricted areas. You can find disguises by knocking out or killing certain enemies and taking their clothes. However, be careful not to raise suspicion by acting out of character or carrying suspicious items.
      • -
      • Use distractions. You can use various items and methods to distract enemies and lure them away from their posts. For example, you can throw coins, turn on radios, shoot fire alarms, or use fiber wire to strangle someone silently.
      • -
      • Use weapons and gadgets wisely. You have a wide range of weapons and gadgets at your disposal, but you should use them sparingly and strategically. Some weapons are loud and will alert nearby enemies, while others are silent and stealthy. You should also avoid using unnecessary force and killing innocent people, as this will lower your rating and reputation.
      • -
      • Aim for a 'Silent Assassin' rating. The highest rating you can achieve in each mission is 'Silent Assassin', which means you completed the mission without being detected, without killing anyone except your targets, and without leaving any evidence behind. This will unlock new weapons and increase your professionalism.
      • -
      -

      Hitman 2: Silent Assassin is a challenging but rewarding game that will test your skills as an assassin. With these tips and tricks, you can improve your gameplay and enjoy the game more.

      -
      -
      \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Code-Name-Jackal-Br-Rip-1080p-Movie-Torrents-FREE.md b/spaces/inreVtussa/clothingai/Code-Name-Jackal-Br-Rip-1080p-Movie-Torrents-FREE.md deleted file mode 100644 index 95bfbcb6c547d1c1025321da81aa1dd8975ba77c..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Code-Name-Jackal-Br-Rip-1080p-Movie-Torrents-FREE.md +++ /dev/null @@ -1,66 +0,0 @@ -## Code Name Jackal Br Rip 1080p Movie Torrents - - - - - - ![Code Name Jackal Br Rip 1080p Movie Torrents !FREE!](https://m.media-amazon.com/images/M/MV5BOWY2ZDFiOTktNzk0OC00NTc5LTkzOWItMTZhZTdjOTBmYzU5XkEyXkFqcGdeQXVyMjUzOTY1NTc@._V1_FMjpg_UX1000_.jpg) - - - - - -**CLICK HERE ✫ [https://jinyurl.com/2txsBU](https://jinyurl.com/2txsBU)** - - - - - - - - - - - - - -# How to Download Code Name Jackal Br Rip 1080p Movie Torrents - - - -If you are looking for a high-quality action thriller movie, you might be interested in Code Name Jackal Br Rip 1080p Movie Torrents. This is a movie that was released in 2012 and stars Song Ji-hyo, Kim Jae-joong, Han Sang-jin, and Kim Sung-ryung. The plot revolves around a mysterious assassin who kidnaps a famous singer and tries to kill him for a hefty ransom. However, things get complicated when a female detective intervenes and tries to stop the assassin. - - - -Code Name Jackal Br Rip 1080p Movie Torrents are files that have been ripped from a Blu-ray source and encoded with high resolution and quality. They are usually between 700MB and 1.5GB in size and offer a better viewing experience than DVD-Rips or BRRips[^3^]. However, they also require more bandwidth and storage space to download and play. - - - -If you want to download Code Name Jackal Br Rip 1080p Movie Torrents, you will need a torrent client such as BitTorrent or uTorrent. A torrent client is a software that allows you to connect to other users who have the same file and download it from them. You will also need a torrent file or a magnet link that contains the information about the file you want to download. You can find these files or links on various torrent websites such as YTS[^1^], SKTorrent[^2^], or UnlimitedElevation[^4^]. However, be careful when visiting these websites as they may contain malware or viruses that can harm your computer. - - - -Once you have the torrent file or the magnet link, you can open it with your torrent client and start downloading the movie. Depending on your internet speed and the number of seeders (users who have the complete file and are sharing it) and leechers (users who are downloading the file but not sharing it), the download may take from a few minutes to several hours. You can monitor the progress of your download on your torrent client interface. - - - -After the download is complete, you can open the movie file with any media player that supports HEVC format such as VLC or MPC-HC. You may also need subtitles if the movie is not in your preferred language. You can find subtitles on websites such as Subscene or OpenSubtitles. You can then enjoy watching Code Name Jackal Br Rip 1080p Movie Torrents on your computer or TV. - - - -Code Name Jackal Br Rip 1080p Movie Torrents are not only a great way to watch a thrilling movie, but also a way to learn more about the Korean culture and language. The movie features many aspects of the Korean entertainment industry, such as the popularity of K-pop singers, the role of managers and agents, and the influence of fans and media. 
The movie also showcases some of the scenic locations in Seoul, such as the Namsan Tower, the Han River, and the Cheonggyecheon Stream. - - - -However, downloading Code Name Jackal Br Rip 1080p Movie Torrents may also have some drawbacks. First of all, downloading and sharing copyrighted content without permission is illegal in many countries and may result in fines or legal action. Therefore, you should always respect the rights of the creators and distributors of the movie and only download it from authorized sources. Secondly, downloading and watching movies from torrent websites may expose you to cyber threats such as hacking, phishing, or ransomware. Therefore, you should always use a VPN (virtual private network) to protect your online privacy and security. A VPN is a service that encrypts your internet traffic and hides your IP address from prying eyes. You can find many VPN providers online such as NordVPN, ExpressVPN, or Surfshark. - - - -In conclusion, Code Name Jackal Br Rip 1080p Movie Torrents are an option for watching a high-quality action thriller movie from Korea. However, you should also be aware of the risks and responsibilities involved in downloading and watching movies from torrent websites. You should always respect the law and the artists' work and use a VPN to safeguard your online safety. - - 1b8d091108 - - - - - diff --git a/spaces/inreVtussa/clothingai/Examples/Adele 21 Zip Download 29.md b/spaces/inreVtussa/clothingai/Examples/Adele 21 Zip Download 29.md deleted file mode 100644 index 6b1b151fe87643395f0dc7ce4b42be52d4805706..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Adele 21 Zip Download 29.md +++ /dev/null @@ -1,86 +0,0 @@ - -

      How to Download Adele 21 Zip File and Enjoy Her Amazing Album

      - -

      Adele 21 is one of the most successful albums of all time. It was released in 2011 and won six Grammy Awards, including Album of the Year. It sold over 30 million copies worldwide and became the best-selling album of the 21st century. It features some of Adele's most iconic songs, such as Rolling in the Deep, Someone Like You, Set Fire to the Rain, and Rumour Has It.

      - -

      If you are a fan of Adele and want to download her 21 album in zip file format, you have come to the right place. In this article, we will show you how to download Adele 21 zip file and enjoy her amazing album on your device. We will also share some of the pros and cons of downloading Adele 21 zip file and compare it with other formats.

      -

      Adele 21 Zip Download 29


      DOWNLOAD ✺✺✺ https://tiurll.com/2uClfN



      - -

      What is a Zip File and Why Download Adele 21 Zip File?

      - -

      A zip file is a compressed file that contains one or more files or folders. It reduces the size of the files and makes them easier to store and transfer. A zip file can be opened with various software programs, such as WinZip, WinRAR, 7-Zip, or PeaZip.
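To make this concrete, here is a minimal Python sketch (standard-library zipfile only) of how a zip archive bundles and compresses files. The track file names are placeholders for whatever files you actually have on disk:

```python
import zipfile

# Placeholder file names - substitute files that actually exist on your disk.
tracks = ["01 Rolling in the Deep.mp3", "02 Rumour Has It.mp3"]

# Pack the files into a single compressed archive.
with zipfile.ZipFile("adele_21.zip", "w", compression=zipfile.ZIP_DEFLATED) as archive:
    for track in tracks:
        archive.write(track)

# Report how much space each entry takes before and after compression.
with zipfile.ZipFile("adele_21.zip") as archive:
    for info in archive.infolist():
        print(f"{info.filename}: {info.file_size} -> {info.compress_size} bytes")
```

Keep in mind that audio formats such as MP3 are already compressed, so zipping them saves only a little extra space; the bigger practical benefit is bundling a whole album into a single file that is easy to store and transfer.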

      - -

      There are several reasons why you might want to download Adele 21 zip file instead of other formats, such as MP3, FLAC, or CD. Here are some of them:

      - -
        -
      • You can save space on your device or storage media. A zip file is smaller than the original files and can fit more songs in less space.
      • -
      • You can download the whole album at once instead of downloading each song individually. This can save you time and bandwidth.
      • -
      • You can keep the original quality of the songs without losing any data or sound quality. A zip file preserves the original files and does not alter them in any way.
      • -
      • You can unzip the file and play the songs on any device or media player that supports them. You can also burn them to a CD or DVD if you want.
      • -
      - -

      How to Download Adele 21 Zip File from Internet Archive?

      - -

One of the best places to download the Adele 21 zip file is the Internet Archive, a non-profit digital library that offers free access to millions of books, movies, music recordings, software titles, and more. It also hosts a collection of Adele's albums, including 21.

      - -

      To download Adele 21 zip file from Internet Archive, follow these steps:

      - -
        -
      1. Go to https://archive.org/details/adele-21
      2. -
      3. Click on the "DOWNLOAD OPTIONS" button on the right side of the page.
      4. -
      5. Select "TORRENT" from the list of options.
      6. -
      7. A torrent file will be downloaded to your device. You will need a torrent client software, such as BitTorrent or uTorrent, to open it.
      8. -
      9. Open the torrent file with your torrent client software and start downloading Adele 21 zip file.
      10. -
      11. Once the download is complete, you can unzip the file and play the songs on your device or media player.
      12. -
      - -

      Note: Downloading Adele 21 zip file from Internet Archive is legal and free. However, we cannot guarantee the safety or quality of the file. We advise you to check the file with any free antivirus before opening it.
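Beyond an antivirus scan, you can also sanity-check the archive itself before opening it. The short Python sketch below (standard-library zipfile only, with a placeholder file name for the download) confirms the file really is a zip archive, verifies every entry's checksum, and only then extracts it:

```python
import zipfile

downloaded = "adele-21.zip"  # placeholder path for the archive you downloaded

# Make sure the file is actually a zip archive and not something else.
if not zipfile.is_zipfile(downloaded):
    raise SystemExit("Not a valid zip archive - the download may be corrupt or mislabeled.")

with zipfile.ZipFile(downloaded) as archive:
    # testzip() checks every entry's CRC and returns the first bad name, or None.
    bad_member = archive.testzip()
    if bad_member is not None:
        raise SystemExit(f"Corrupt entry in archive: {bad_member}")
    print("Archive looks intact. Contents:", archive.namelist())
    archive.extractall("adele_21")  # unpack into a folder for your media player
```

This does not replace a malware scan, but it quickly catches truncated or corrupted downloads before you try to play the files.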

      -

      - -

      How to Download Adele 21 Zip File from SoundCloud?

      - -

      Another place to download Adele 21 zip file is from SoundCloud. SoundCloud is a popular online audio platform that allows users to upload, share, and stream music and podcasts. It also has a repack version of Adele 21 zip file that claims to have the full version and all reports of the album for 64 bit systems.

      - -

      To download Adele 21 zip file from SoundCloud, follow these steps:

      - -
        -
      1. Go to https://soundcloud.com/bactsisvelitt1975/astrology-software-my-star-world-full-ver-all-reports-64-bit-repack
      2. -
      3. Click on the "More" button below the audio player.
      4. -
      5. Select "Download file" from the drop-down menu.
      6. -
      7. A zip file will be downloaded to your device.
      8. -
      9. You can unzip the file and play the songs on your device or media player.
      10. -
      - -

      Note: Downloading Adele 21 zip file from SoundCloud is not legal or authorized by Adele or her record label. We cannot guarantee the safety or legality of this download. We advise you to use it at your own risk and discretion.

      - -
      How to Compare Adele 21 Zip File with Other Formats?
      - -

The Adele 21 zip file is not the only format in which her album is available. Other formats offer different features and functions. Here are some of them:

      - -
        -
      • MP3: This is a common audio format that compresses the sound data and reduces the file size. It is compatible with most devices and media players. However, it also lowers the sound quality and may cause some distortion or loss of data.
      • -
• FLAC: This is a lossless audio format that preserves the original sound quality through lossless compression rather than by discarding data. It offers high fidelity and dynamic range. However, it also increases the file size compared to MP3, so it takes more storage space and bandwidth.

        -
        What Are the Benefits of Listening to Adele 21 Album?
        - -

        Adele 21 is not just a collection of songs, it is a musical journey that explores the themes of love, loss, regret, and redemption. Adele's powerful voice and emotional lyrics touch the hearts and souls of millions of listeners around the world. Listening to Adele 21 can have many benefits for you, such as:

        - -
          -
        • It can inspire you to pursue your dreams and passions. Adele's success story is a testament to her talent and determination. She overcame many challenges and obstacles to achieve her goals and become one of the most influential artists of her generation.
        • -
        • It can help you cope with your emotions and feelings. Adele's songs express a range of emotions, from joy and happiness to sadness and anger. They can help you relate to your own experiences and feelings and find comfort and solace in them.
        • -
        • It can improve your mood and well-being. Adele's songs have a positive and uplifting effect on your mood and well-being. They can make you feel happy, hopeful, empowered, and confident. They can also reduce your stress and anxiety levels and improve your mental health.
        • -
        • It can enhance your creativity and intelligence. Adele's songs are rich in musical and lyrical elements, such as melody, harmony, rhythm, rhyme, metaphor, and symbolism. They can stimulate your brain and enhance your cognitive abilities, such as memory, attention, language, and problem-solving.
        • -
        - -

        As you can see, listening to Adele 21 can have many benefits for you. You can enjoy her amazing album on your device or media player by downloading Adele 21 zip file from Internet Archive or SoundCloud. However, you should also respect Adele's rights and support her work by buying her album or streaming it on legal platforms.

        - -Conclusion - -

        Adele 21 is a remarkable album that can help you enjoy her amazing music and voice. It is one of the most successful albums of all time and has won many awards and accolades. It features some of Adele's most iconic songs, such as Rolling in the Deep, Someone Like You, Set Fire to the Rain, and Rumour Has It.

        - -

        If you want to download Adele 21 zip file, you have two options: Internet Archive or SoundCloud. Internet Archive is a legal and free source that offers a torrent file of Adele 21 zip file. SoundCloud is an illegal and unauthorized source that offers a repack version of Adele 21 zip file for 64 bit systems. You should weigh the pros and cons of these sources before downloading Adele 21 zip file.

        - -

        You should also compare Adele 21 zip file with other formats, such as MP3, FLAC, or CD. Each format has its advantages and disadvantages in terms of price, availability, compatibility, performance, quality, quantity, ease of use, storage, legality, and ethics. You should choose the format that suits your preferences and needs best.

        - -

        We hope this article has been helpful and informative for you. Thank you for reading and have a great day!

        -
        -
        \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/CRACK Autodesk AutoCAD 2018.0.2 Final (x86 X64) REPACK Keygen.md b/spaces/inreVtussa/clothingai/Examples/CRACK Autodesk AutoCAD 2018.0.2 Final (x86 X64) REPACK Keygen.md deleted file mode 100644 index 7b1a9065299a68d5175c2bd724b328d530b2ea2f..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/CRACK Autodesk AutoCAD 2018.0.2 Final (x86 X64) REPACK Keygen.md +++ /dev/null @@ -1,6 +0,0 @@ -

        CRACK Autodesk AutoCAD 2018.0.2 Final (x86 x64) Keygen


        Download Ziphttps://tiurll.com/2uCjNC



        -
        -autodesk autocad 2016 x64 final crack keygen, autodesk autocad 2018.0 2 final x86x64 keygen Autodesk AutoCAD 2018.6.2 Final ... 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/inreVtussa/clothingai/Examples/Corel Draw X7 Serial Number And Activation Code List.md b/spaces/inreVtussa/clothingai/Examples/Corel Draw X7 Serial Number And Activation Code List.md deleted file mode 100644 index 501248916a89cdad0943dfb733ae0b9f94cfb073..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Corel Draw X7 Serial Number And Activation Code List.md +++ /dev/null @@ -1,9 +0,0 @@ - -

CorelDRAW X7 Crack is a nice, easy-to-use graphics program for digital artists. You can easily create professional-looking graphics, images, and photo effects. It works fast, has powerful tools, and supports multiple monitors, layers, and much more. It has a powerful user interface and offers impressive photo-editing tools to enhance your photos, making it one of the best photo-editing packages available.

        -

        corel draw x7 serial number and activation code list


        DOWNLOAD 🌟 https://tiurll.com/2uCks1



        -

Even though CorelDRAW X7 Crack is powerful graphic design software, it is not intended for professional work. It is a fast, simple graphic design program for the average home user, designed for picture editing, graphics, and general design, and it is a good choice for people starting out with CorelDRAW for the first time. CorelDRAW X7 Serial Key Crack is a fairly good program rather than an excellent one: it can do the job, but it is not the best, and there are better programs available.

        -

CorelDRAW 2017 now offers an enhanced interface that includes a helpful to-do list, a progress bar, and a toolbox. In addition, you can use the new drawing canvas, import and use media, and view and edit PDF files. CorelDRAW 2017 lets you edit pictures in several ways and create illustrations, documents, web pages, and presentations.

        -

CorelDRAW Graphics Suite 2017 was made to streamline editing and design work, and your finished results will look fantastic. The application is now multi-threaded. You can import media items and use its tools and other features to create illustrations, graphics, and web pages. The application also has a variety of new features that help you share your ideas more effectively, and new alignment guides make it easier to place objects.

        -

        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/isotope21/Musicgen/README.md b/spaces/isotope21/Musicgen/README.md deleted file mode 100644 index a71e79bc17c2a893732d02c65926d6519dff43c9..0000000000000000000000000000000000000000 --- a/spaces/isotope21/Musicgen/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Grootin Demo -emoji: 👀 -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.44.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jackli888/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/rife/inference_video.py b/spaces/jackli888/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/rife/inference_video.py deleted file mode 100644 index 5f2358610802582f9681de236ea29b4a37186685..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/rife/inference_video.py +++ /dev/null @@ -1,282 +0,0 @@ -# thanks to https://github.com/n00mkrad for the inspiration and a bit of code. Also thanks for https://github.com/XmYx for the initial reorganization of this script -import os, sys -from types import SimpleNamespace -import cv2 -import torch -import argparse -import shutil -import numpy as np -from tqdm import tqdm -from torch.nn import functional as F -import warnings -import _thread -from queue import Queue, Empty -import subprocess -import time -from .model.pytorch_msssim import ssim_matlab - -sys.path.append('../../') -from deforum_helpers.video_audio_utilities import ffmpeg_stitch_video -from deforum_helpers.general_utils import duplicate_pngs_from_folder - -warnings.filterwarnings("ignore") - -def run_rife_new_video_infer( - output=None, - model=None, - fp16=False, - UHD=False, # *Will be received as *True* if imgs/vid resolution is 2K or higher* - scale=1.0, - fps=None, - deforum_models_path=None, - raw_output_imgs_path=None, - img_batch_id=None, - ffmpeg_location=None, - audio_track=None, - interp_x_amount=2, - slow_mo_enabled=False, - slow_mo_x_amount=2, - ffmpeg_crf=17, - ffmpeg_preset='veryslow', - keep_imgs=False, - orig_vid_name = None): - - args = SimpleNamespace() - args.output = output - args.modelDir = model - args.fp16 = fp16 - args.UHD = UHD - args.scale = scale - args.fps = fps - args.deforum_models_path = deforum_models_path - args.raw_output_imgs_path = raw_output_imgs_path - args.img_batch_id = img_batch_id - args.ffmpeg_location = ffmpeg_location - args.audio_track = audio_track - args.interp_x_amount = interp_x_amount - args.slow_mo_enabled = slow_mo_enabled - args.slow_mo_x_amount = slow_mo_x_amount - args.ffmpeg_crf = ffmpeg_crf - args.ffmpeg_preset = ffmpeg_preset - args.keep_imgs = keep_imgs - args.orig_vid_name = orig_vid_name - - if args.UHD and args.scale == 1.0: - args.scale = 0.5 - - start_time = time.time() - - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - torch.set_grad_enabled(False) - if torch.cuda.is_available(): - torch.backends.cudnn.enabled = True - torch.backends.cudnn.benchmark = True - # TODO: Can/ need to handle this? currently it's always False and give errors if True but faster speeds on tensortcore equipped gpus? - if (args.fp16): - torch.set_default_tensor_type(torch.cuda.HalfTensor) - if args.modelDir is not None: - try: - from .rife_new_gen.RIFE_HDv3 import Model - except ImportError as e: - raise ValueError(f"{args.modelDir} could not be found. 
Please contact deforum support {e}") - except Exception as e: - raise ValueError(f"An error occured while trying to import {args.modelDir}: {e}") - else: - print("Got a request to frame-interpolate but no valid frame interpolation engine value provided. Doing... nothing") - return - - model = Model() - if not hasattr(model, 'version'): - model.version = 0 - model.load_model(args.modelDir, -1, deforum_models_path) - model.eval() - model.device() - - print(f"{args.modelDir}.pkl model successfully loaded into memory") - print("Interpolation progress (it's OK if it finishes before 100%):") - - interpolated_path = os.path.join(args.raw_output_imgs_path, 'interpolated_frames_rife') - # set custom name depending on if we interpolate after a run, or interpolate a video (related/unrelated to deforum, we don't know) directly from within the RIFE tab - if args.orig_vid_name is not None: # interpolating a video (deforum or unrelated) - custom_interp_path = "{}_{}".format(interpolated_path, args.orig_vid_name) - else: # interpolating after a deforum run: - custom_interp_path = "{}_{}".format(interpolated_path, args.img_batch_id) - - # In this folder we temporarily keep the original frames (converted/ copy-pasted and img format depends on scenario) - # the convertion case is done to avert a problem with 24 and 32 mixed outputs from the same animation run - temp_convert_raw_png_path = os.path.join(args.raw_output_imgs_path, "tmp_rife_folder") - - duplicate_pngs_from_folder(args.raw_output_imgs_path, temp_convert_raw_png_path, args.img_batch_id, args.orig_vid_name) - - videogen = [] - for f in os.listdir(temp_convert_raw_png_path): - # double check for old _depth_ files, not really needed probably but keeping it for now - if '_depth_' not in f: - videogen.append(f) - tot_frame = len(videogen) - videogen.sort(key= lambda x:int(x.split('.')[0])) - img_path = os.path.join(temp_convert_raw_png_path, videogen[0]) - lastframe = cv2.imdecode(np.fromfile(img_path, dtype=np.uint8), cv2.IMREAD_UNCHANGED)[:, :, ::-1].copy() - videogen = videogen[1:] - h, w, _ = lastframe.shape - vid_out = None - - if not os.path.exists(custom_interp_path): - os.mkdir(custom_interp_path) - - tmp = max(128, int(128 / args.scale)) - ph = ((h - 1) // tmp + 1) * tmp - pw = ((w - 1) // tmp + 1) * tmp - padding = (0, pw - w, 0, ph - h) - pbar = tqdm(total=tot_frame) - - write_buffer = Queue(maxsize=500) - read_buffer = Queue(maxsize=500) - - _thread.start_new_thread(build_read_buffer, (args, read_buffer, videogen, temp_convert_raw_png_path)) - _thread.start_new_thread(clear_write_buffer, (args, write_buffer, custom_interp_path)) - - I1 = torch.from_numpy(np.transpose(lastframe, (2, 0, 1))).to(device, non_blocking=True).unsqueeze(0).float() / 255. - I1 = pad_image(I1, args.fp16, padding) - temp = None # save lastframe when processing static frame - - while True: - if temp is not None: - frame = temp - temp = None - else: - frame = read_buffer.get() - if frame is None: - break - I0 = I1 - I1 = torch.from_numpy(np.transpose(frame, (2, 0, 1))).to(device, non_blocking=True).unsqueeze(0).float() / 255. 
- I1 = pad_image(I1, args.fp16, padding) - I0_small = F.interpolate(I0, (32, 32), mode='bilinear', align_corners=False) - I1_small = F.interpolate(I1, (32, 32), mode='bilinear', align_corners=False) - ssim = ssim_matlab(I0_small[:, :3], I1_small[:, :3]) - - break_flag = False - if ssim > 0.996: - frame = read_buffer.get() # read a new frame - if frame is None: - break_flag = True - frame = lastframe - else: - temp = frame - I1 = torch.from_numpy(np.transpose(frame, (2, 0, 1))).to(device, non_blocking=True).unsqueeze(0).float() / 255. - I1 = pad_image(I1, args.fp16, padding) - I1 = model.inference(I0, I1, args.scale) - I1_small = F.interpolate(I1, (32, 32), mode='bilinear', align_corners=False) - ssim = ssim_matlab(I0_small[:, :3], I1_small[:, :3]) - frame = (I1[0] * 255).byte().cpu().numpy().transpose(1, 2, 0)[:h, :w] - - if ssim < 0.2: - output = [] - for i in range(args.interp_x_amount - 1): - output.append(I0) - else: - output = make_inference(model, I0, I1, args.interp_x_amount - 1, scale) - - write_buffer.put(lastframe) - for mid in output: - mid = (((mid[0] * 255.).byte().cpu().numpy().transpose(1, 2, 0))) - write_buffer.put(mid[:h, :w]) - pbar.update(1) - lastframe = frame - if break_flag: - break - - write_buffer.put(lastframe) - - while (not write_buffer.empty()): - time.sleep(0.1) - pbar.close() - shutil.rmtree(temp_convert_raw_png_path) - - print(f"Interpolation \033[0;32mdone\033[0m in {time.time()-start_time:.2f} seconds!") - # stitch video from interpolated frames, and add audio if needed - try: - print (f"*Passing interpolated frames to ffmpeg...*") - vid_out_path = stitch_video(args.img_batch_id, args.fps, custom_interp_path, args.audio_track, args.ffmpeg_location, args.interp_x_amount, args.slow_mo_enabled, args.slow_mo_x_amount, args.ffmpeg_crf, args.ffmpeg_preset, args.keep_imgs, args.orig_vid_name) - # remove folder with raw (non-interpolated) vid input frames in case of input VID and not PNGs - if orig_vid_name is not None: - shutil.rmtree(raw_output_imgs_path) - except Exception as e: - print(f'Video stitching gone wrong. *Interpolated frames were saved to HD as backup!*. Actual error: {e}') - -def clear_write_buffer(user_args, write_buffer, custom_interp_path): - cnt = 0 - - while True: - item = write_buffer.get() - if item is None: - break - filename = '{}/{:0>7d}.png'.format(custom_interp_path, cnt) - - cv2.imwrite(filename, item[:, :, ::-1]) - - cnt += 1 - -def build_read_buffer(user_args, read_buffer, videogen, temp_convert_raw_png_path): - for frame in videogen: - if not temp_convert_raw_png_path is None: - img_path = os.path.join(temp_convert_raw_png_path, frame) - frame = cv2.imdecode(np.fromfile(img_path, dtype=np.uint8), cv2.IMREAD_UNCHANGED)[:, :, ::-1].copy() - read_buffer.put(frame) - read_buffer.put(None) - -def make_inference(model, I0, I1, n, scale): - if model.version >= 3.9: - res = [] - for i in range(n): - res.append(model.inference(I0, I1, (i + 1) * 1. / (n + 1), scale)) - return res - else: - middle = model.inference(I0, I1, scale) - if n == 1: - return [middle] - first_half = make_inference(model, I0, middle, n=n // 2, scale=scale) - second_half = make_inference(model, middle, I1, n=n // 2, scale=scale) - if n % 2: - return [*first_half, middle, *second_half] - else: - return [*first_half, *second_half] - -def pad_image(img, fp16, padding): - if (fp16): - return F.pad(img, padding).half() - else: - return F.pad(img, padding) - -# TODO: move to fream_interpolation and add FILM to it! 
-def stitch_video(img_batch_id, fps, img_folder_path, audio_path, ffmpeg_location, interp_x_amount, slow_mo_enabled, slow_mo_x_amount, f_crf, f_preset, keep_imgs, orig_vid_name): - parent_folder = os.path.dirname(img_folder_path) - grandparent_folder = os.path.dirname(parent_folder) - if orig_vid_name is not None: - mp4_path = os.path.join(grandparent_folder, str(orig_vid_name) +'_RIFE_' + 'x' + str(interp_x_amount)) - else: - mp4_path = os.path.join(parent_folder, str(img_batch_id) +'_RIFE_' + 'x' + str(interp_x_amount)) - - if slow_mo_enabled: - mp4_path = mp4_path + '_slomo_x' + str(slow_mo_x_amount) - mp4_path = mp4_path + '.mp4' - - t = os.path.join(img_folder_path, "%07d.png") - add_soundtrack = 'None' - if not audio_path is None: - add_soundtrack = 'File' - - exception_raised = False - try: - ffmpeg_stitch_video(ffmpeg_location=ffmpeg_location, fps=fps, outmp4_path=mp4_path, stitch_from_frame=0, stitch_to_frame=1000000, imgs_path=t, add_soundtrack=add_soundtrack, audio_path=audio_path, crf=f_crf, preset=f_preset) - except Exception as e: - exception_raised = True - print(f"An error occurred while stitching the video: {e}") - - if not exception_raised and not keep_imgs: - shutil.rmtree(img_folder_path) - - if (keep_imgs and orig_vid_name is not None) or (orig_vid_name is not None and exception_raised is True): - shutil.move(img_folder_path, grandparent_folder) - - return mp4_path \ No newline at end of file diff --git a/spaces/jaisidhsingh/cluster-summ/summarize.py b/spaces/jaisidhsingh/cluster-summ/summarize.py deleted file mode 100644 index 2c1ab033d7966dbc62d40801e74afef175d7bd84..0000000000000000000000000000000000000000 --- a/spaces/jaisidhsingh/cluster-summ/summarize.py +++ /dev/null @@ -1,82 +0,0 @@ -from utils.sentence_embedding import * -from utils.clustering import * -from models.summarizers import * -from nltk.tokenize import sent_tokenize, word_tokenize -import math -from time import perf_counter -import time - - -def get_summary(model_name, article, max_length, min_length, increment): - start_time = perf_counter() - summarization_model, summarization_tokenizer = load_summarizer(model_name) - summarizer_token_limit = summarization_tokenizer.model_max_length - print("Going Beyong Token limit:", summarizer_token_limit) - - input_word_toks = word_tokenize(article) - num_words = len(input_word_toks) - - if num_words <= summarizer_token_limit and model_name == "t5": - pred_summary = summarize_input(article, summarization_model, summarization_tokenizer) - end_time = perf_counter() - print("Time taken: ", end_time - start_time) - - else: - input_sent_toks = sent_tokenize(article) - embeddings = make_embeddings(input_sent_toks, mean_pooling) - embeddings = embeddings.numpy() - - increment[0] = 20 - - n_clusters_estimate = math.ceil(num_words / summarizer_token_limit) - - clemb = ClusterEmbeddings( - cluster_estimate=n_clusters_estimate, - cluster_fn="agglo", # much better - embeddings=embeddings, - sentences=np.array(input_sent_toks), - words=np.array(input_word_toks) - ) - - increment[0] = 50 - - sentence_clusters = clemb.get_sentence_clusters() - - n = len(sentence_clusters) - summs = "" - for cluster in sentence_clusters: - cluster_summary = summarize_input( - cluster, - summarization_model, - summarization_tokenizer, - max_length=250, - min_length=50, - ) - if type(cluster_summary) == list: - cluster_summary = cluster_summary[0] - summs += cluster_summary + " " - - increment[0] += 40 / n - - pred_summary = summarize_input( - summs, - summarization_model, - 
summarization_tokenizer, - max_length=max_length, - min_length=min_length, - ) - - increment[0] += 100 - - end_time = perf_counter() - time_taken = end_time - start_time - - return pred_summary, time_taken - -def test(): - article = """Recent text-to-image matching models apply contrastive learning to large corpora of uncurated pairs of images and sentences. While such models can provide a powerful score for matching and subsequent zero-shot tasks, they are not capable of generating caption given an image. In this work, we repurpose such models to generate a descriptive text given an image at inference time, without any further training or tuning step. This is done by combining the visual-semantic model with a large language model, benefiting from the knowledge in both web-scale models. The resulting captions are much less restrictive than those obtained by supervised captioning methods. Moreover, as a zero-shot learning method, it is extremely flexible and wedemonstrate its ability to perform image arithmetic in which the inputs can be either images or text and the output is a sentence.""" - model_name = "BART" - summ, time_taken = get_summary(model_name, article, 250, 150) - print(summ) - print(time_taken) - diff --git a/spaces/jasonreisman/primates/README.md b/spaces/jasonreisman/primates/README.md deleted file mode 100644 index 91e95de1ef90bea9f357d8dc3f30d6533d8ec020..0000000000000000000000000000000000000000 --- a/spaces/jasonreisman/primates/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Primates -emoji: 🦀 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.44.3 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jhj0517/Whisper-WebUI-Easy-Subtitle-Generator/modules/youtube_manager.py b/spaces/jhj0517/Whisper-WebUI-Easy-Subtitle-Generator/modules/youtube_manager.py deleted file mode 100644 index e9f11452324a06acddcd4b58ef4ee2e7dc6811a0..0000000000000000000000000000000000000000 --- a/spaces/jhj0517/Whisper-WebUI-Easy-Subtitle-Generator/modules/youtube_manager.py +++ /dev/null @@ -1,11 +0,0 @@ -from pytube import YouTube - -def get_ytdata(link): - return YouTube(link) - -def get_ytmetas(link): - yt = YouTube(link) - return yt.thumbnail_url,yt.title,yt.description - -def get_ytaudio(ytdata:YouTube): - return ytdata.streams.get_audio_only().download(filename="modules/yt_tmp.wav") diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Hash/SHA3_512.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Hash/SHA3_512.py deleted file mode 100644 index de8880c75c473c07d087a2ddab3302cf63bcddfb..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Hash/SHA3_512.py +++ /dev/null @@ -1,174 +0,0 @@ -# -*- coding: utf-8 -*- -# -# =================================================================== -# The contents of this file are dedicated to the public domain. To -# the extent that dedication to the public domain is not available, -# everyone is granted a worldwide, perpetual, royalty-free, -# non-exclusive license to exercise all rights associated with the -# contents of this file for any purpose whatsoever. -# No rights are reserved. 
-# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS -# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN -# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. -# =================================================================== - -from Crypto.Util.py3compat import bord - -from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, - VoidPointer, SmartPointer, - create_string_buffer, - get_raw_buffer, c_size_t, - c_uint8_ptr, c_ubyte) - -from Crypto.Hash.keccak import _raw_keccak_lib - -class SHA3_512_Hash(object): - """A SHA3-512 hash object. - Do not instantiate directly. - Use the :func:`new` function. - - :ivar oid: ASN.1 Object ID - :vartype oid: string - - :ivar digest_size: the size in bytes of the resulting hash - :vartype digest_size: integer - """ - - # The size of the resulting hash in bytes. - digest_size = 64 - - # ASN.1 Object ID - oid = "2.16.840.1.101.3.4.2.10" - - # Input block size for HMAC - block_size = 72 - - def __init__(self, data, update_after_digest): - self._update_after_digest = update_after_digest - self._digest_done = False - self._padding = 0x06 - - state = VoidPointer() - result = _raw_keccak_lib.keccak_init(state.address_of(), - c_size_t(self.digest_size * 2), - c_ubyte(24)) - if result: - raise ValueError("Error %d while instantiating SHA-3/512" - % result) - self._state = SmartPointer(state.get(), - _raw_keccak_lib.keccak_destroy) - if data: - self.update(data) - - def update(self, data): - """Continue hashing of a message by consuming the next chunk of data. - - Args: - data (byte string/byte array/memoryview): The next chunk of the message being hashed. - """ - - if self._digest_done and not self._update_after_digest: - raise TypeError("You can only call 'digest' or 'hexdigest' on this object") - - result = _raw_keccak_lib.keccak_absorb(self._state.get(), - c_uint8_ptr(data), - c_size_t(len(data))) - if result: - raise ValueError("Error %d while updating SHA-3/512" - % result) - return self - - def digest(self): - - """Return the **binary** (non-printable) digest of the message that has been hashed so far. - - :return: The hash digest, computed over the data processed so far. - Binary form. - :rtype: byte string - """ - - self._digest_done = True - - bfr = create_string_buffer(self.digest_size) - result = _raw_keccak_lib.keccak_digest(self._state.get(), - bfr, - c_size_t(self.digest_size), - c_ubyte(self._padding)) - if result: - raise ValueError("Error %d while instantiating SHA-3/512" - % result) - - self._digest_value = get_raw_buffer(bfr) - return self._digest_value - - def hexdigest(self): - """Return the **printable** digest of the message that has been hashed so far. - - :return: The hash digest, computed over the data processed so far. - Hexadecimal encoded. - :rtype: string - """ - - return "".join(["%02x" % bord(x) for x in self.digest()]) - - def copy(self): - """Return a copy ("clone") of the hash object. - - The copy will have the same internal state as the original hash - object. - This can be used to efficiently compute the digests of strings that - share a common initial substring. 
- - :return: A hash object of the same type - """ - - clone = self.new() - result = _raw_keccak_lib.keccak_copy(self._state.get(), - clone._state.get()) - if result: - raise ValueError("Error %d while copying SHA3-512" % result) - return clone - - def new(self, data=None): - """Create a fresh SHA3-521 hash object.""" - - return type(self)(data, self._update_after_digest) - - -def new(*args, **kwargs): - """Create a new hash object. - - Args: - data (byte string/byte array/memoryview): - The very first chunk of the message to hash. - It is equivalent to an early call to :meth:`update`. - update_after_digest (boolean): - Whether :meth:`digest` can be followed by another :meth:`update` - (default: ``False``). - - :Return: A :class:`SHA3_512_Hash` hash object - """ - - data = kwargs.pop("data", None) - update_after_digest = kwargs.pop("update_after_digest", False) - if len(args) == 1: - if data: - raise ValueError("Initial data for hash specified twice") - data = args[0] - - if kwargs: - raise TypeError("Unknown parameters: " + str(kwargs)) - - return SHA3_512_Hash(data, update_after_digest) - -# The size of the resulting hash in bytes. -digest_size = SHA3_512_Hash.digest_size - -# Input block size for HMAC -block_size = 72 diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/Cipher/common.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/Cipher/common.py deleted file mode 100644 index c5bc755abb555148ab7c46f1e82fcfeacac31366..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/Cipher/common.py +++ /dev/null @@ -1,510 +0,0 @@ -# -*- coding: utf-8 -*- -# -# SelfTest/Hash/common.py: Common code for Crypto.SelfTest.Hash -# -# Written in 2008 by Dwayne C. Litzenberger -# -# =================================================================== -# The contents of this file are dedicated to the public domain. To -# the extent that dedication to the public domain is not available, -# everyone is granted a worldwide, perpetual, royalty-free, -# non-exclusive license to exercise all rights associated with the -# contents of this file for any purpose whatsoever. -# No rights are reserved. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS -# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN -# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. 
-# =================================================================== - -"""Self-testing for PyCrypto hash modules""" - -import unittest -from binascii import a2b_hex, b2a_hex, hexlify - -from Crypto.Util.py3compat import b -from Crypto.Util.strxor import strxor_c - -class _NoDefault: pass # sentinel object -def _extract(d, k, default=_NoDefault): - """Get an item from a dictionary, and remove it from the dictionary.""" - try: - retval = d[k] - except KeyError: - if default is _NoDefault: - raise - return default - del d[k] - return retval - -# Generic cipher test case -class CipherSelfTest(unittest.TestCase): - - def __init__(self, module, params): - unittest.TestCase.__init__(self) - self.module = module - - # Extract the parameters - params = params.copy() - self.description = _extract(params, 'description') - self.key = b(_extract(params, 'key')) - self.plaintext = b(_extract(params, 'plaintext')) - self.ciphertext = b(_extract(params, 'ciphertext')) - self.module_name = _extract(params, 'module_name', None) - self.assoc_data = _extract(params, 'assoc_data', None) - self.mac = _extract(params, 'mac', None) - if self.assoc_data: - self.mac = b(self.mac) - - mode = _extract(params, 'mode', None) - self.mode_name = str(mode) - - if mode is not None: - # Block cipher - self.mode = getattr(self.module, "MODE_" + mode) - - self.iv = _extract(params, 'iv', None) - if self.iv is None: - self.iv = _extract(params, 'nonce', None) - if self.iv is not None: - self.iv = b(self.iv) - - else: - # Stream cipher - self.mode = None - self.iv = _extract(params, 'iv', None) - if self.iv is not None: - self.iv = b(self.iv) - - self.extra_params = params - - def shortDescription(self): - return self.description - - def _new(self): - params = self.extra_params.copy() - key = a2b_hex(self.key) - - old_style = [] - if self.mode is not None: - old_style = [ self.mode ] - if self.iv is not None: - old_style += [ a2b_hex(self.iv) ] - - return self.module.new(key, *old_style, **params) - - def isMode(self, name): - if not hasattr(self.module, "MODE_"+name): - return False - return self.mode == getattr(self.module, "MODE_"+name) - - def runTest(self): - plaintext = a2b_hex(self.plaintext) - ciphertext = a2b_hex(self.ciphertext) - assoc_data = [] - if self.assoc_data: - assoc_data = [ a2b_hex(b(x)) for x in self.assoc_data] - - ct = None - pt = None - - # - # Repeat the same encryption or decryption twice and verify - # that the result is always the same - # - for i in range(2): - cipher = self._new() - decipher = self._new() - - # Only AEAD modes - for comp in assoc_data: - cipher.update(comp) - decipher.update(comp) - - ctX = b2a_hex(cipher.encrypt(plaintext)) - ptX = b2a_hex(decipher.decrypt(ciphertext)) - - if ct: - self.assertEqual(ct, ctX) - self.assertEqual(pt, ptX) - ct, pt = ctX, ptX - - self.assertEqual(self.ciphertext, ct) # encrypt - self.assertEqual(self.plaintext, pt) # decrypt - - if self.mac: - mac = b2a_hex(cipher.digest()) - self.assertEqual(self.mac, mac) - decipher.verify(a2b_hex(self.mac)) - -class CipherStreamingSelfTest(CipherSelfTest): - - def shortDescription(self): - desc = self.module_name - if self.mode is not None: - desc += " in %s mode" % (self.mode_name,) - return "%s should behave like a stream cipher" % (desc,) - - def runTest(self): - plaintext = a2b_hex(self.plaintext) - ciphertext = a2b_hex(self.ciphertext) - - # The cipher should work like a stream cipher - - # Test counter mode encryption, 3 bytes at a time - ct3 = [] - cipher = self._new() - for i in range(0, len(plaintext), 
3): - ct3.append(cipher.encrypt(plaintext[i:i+3])) - ct3 = b2a_hex(b("").join(ct3)) - self.assertEqual(self.ciphertext, ct3) # encryption (3 bytes at a time) - - # Test counter mode decryption, 3 bytes at a time - pt3 = [] - cipher = self._new() - for i in range(0, len(ciphertext), 3): - pt3.append(cipher.encrypt(ciphertext[i:i+3])) - # PY3K: This is meant to be text, do not change to bytes (data) - pt3 = b2a_hex(b("").join(pt3)) - self.assertEqual(self.plaintext, pt3) # decryption (3 bytes at a time) - - -class RoundtripTest(unittest.TestCase): - def __init__(self, module, params): - from Crypto import Random - unittest.TestCase.__init__(self) - self.module = module - self.iv = Random.get_random_bytes(module.block_size) - self.key = b(params['key']) - self.plaintext = 100 * b(params['plaintext']) - self.module_name = params.get('module_name', None) - - def shortDescription(self): - return """%s .decrypt() output of .encrypt() should not be garbled""" % (self.module_name,) - - def runTest(self): - - ## ECB mode - mode = self.module.MODE_ECB - encryption_cipher = self.module.new(a2b_hex(self.key), mode) - ciphertext = encryption_cipher.encrypt(self.plaintext) - decryption_cipher = self.module.new(a2b_hex(self.key), mode) - decrypted_plaintext = decryption_cipher.decrypt(ciphertext) - self.assertEqual(self.plaintext, decrypted_plaintext) - - -class IVLengthTest(unittest.TestCase): - def __init__(self, module, params): - unittest.TestCase.__init__(self) - self.module = module - self.key = b(params['key']) - - def shortDescription(self): - return "Check that all modes except MODE_ECB and MODE_CTR require an IV of the proper length" - - def runTest(self): - self.assertRaises(TypeError, self.module.new, a2b_hex(self.key), - self.module.MODE_ECB, b("")) - - def _dummy_counter(self): - return "\0" * self.module.block_size - - -class NoDefaultECBTest(unittest.TestCase): - def __init__(self, module, params): - unittest.TestCase.__init__(self) - self.module = module - self.key = b(params['key']) - - def runTest(self): - self.assertRaises(TypeError, self.module.new, a2b_hex(self.key)) - - -class BlockSizeTest(unittest.TestCase): - def __init__(self, module, params): - unittest.TestCase.__init__(self) - self.module = module - self.key = a2b_hex(b(params['key'])) - - def runTest(self): - cipher = self.module.new(self.key, self.module.MODE_ECB) - self.assertEqual(cipher.block_size, self.module.block_size) - - -class ByteArrayTest(unittest.TestCase): - """Verify we can use bytearray's for encrypting and decrypting""" - - def __init__(self, module, params): - unittest.TestCase.__init__(self) - self.module = module - - # Extract the parameters - params = params.copy() - self.description = _extract(params, 'description') - self.key = b(_extract(params, 'key')) - self.plaintext = b(_extract(params, 'plaintext')) - self.ciphertext = b(_extract(params, 'ciphertext')) - self.module_name = _extract(params, 'module_name', None) - self.assoc_data = _extract(params, 'assoc_data', None) - self.mac = _extract(params, 'mac', None) - if self.assoc_data: - self.mac = b(self.mac) - - mode = _extract(params, 'mode', None) - self.mode_name = str(mode) - - if mode is not None: - # Block cipher - self.mode = getattr(self.module, "MODE_" + mode) - - self.iv = _extract(params, 'iv', None) - if self.iv is None: - self.iv = _extract(params, 'nonce', None) - if self.iv is not None: - self.iv = b(self.iv) - else: - # Stream cipher - self.mode = None - self.iv = _extract(params, 'iv', None) - if self.iv is not None: - self.iv = 
b(self.iv) - - self.extra_params = params - - def _new(self): - params = self.extra_params.copy() - key = a2b_hex(self.key) - - old_style = [] - if self.mode is not None: - old_style = [ self.mode ] - if self.iv is not None: - old_style += [ a2b_hex(self.iv) ] - - return self.module.new(key, *old_style, **params) - - def runTest(self): - - plaintext = a2b_hex(self.plaintext) - ciphertext = a2b_hex(self.ciphertext) - assoc_data = [] - if self.assoc_data: - assoc_data = [ bytearray(a2b_hex(b(x))) for x in self.assoc_data] - - cipher = self._new() - decipher = self._new() - - # Only AEAD modes - for comp in assoc_data: - cipher.update(comp) - decipher.update(comp) - - ct = b2a_hex(cipher.encrypt(bytearray(plaintext))) - pt = b2a_hex(decipher.decrypt(bytearray(ciphertext))) - - self.assertEqual(self.ciphertext, ct) # encrypt - self.assertEqual(self.plaintext, pt) # decrypt - - if self.mac: - mac = b2a_hex(cipher.digest()) - self.assertEqual(self.mac, mac) - decipher.verify(bytearray(a2b_hex(self.mac))) - - -class MemoryviewTest(unittest.TestCase): - """Verify we can use memoryviews for encrypting and decrypting""" - - def __init__(self, module, params): - unittest.TestCase.__init__(self) - self.module = module - - # Extract the parameters - params = params.copy() - self.description = _extract(params, 'description') - self.key = b(_extract(params, 'key')) - self.plaintext = b(_extract(params, 'plaintext')) - self.ciphertext = b(_extract(params, 'ciphertext')) - self.module_name = _extract(params, 'module_name', None) - self.assoc_data = _extract(params, 'assoc_data', None) - self.mac = _extract(params, 'mac', None) - if self.assoc_data: - self.mac = b(self.mac) - - mode = _extract(params, 'mode', None) - self.mode_name = str(mode) - - if mode is not None: - # Block cipher - self.mode = getattr(self.module, "MODE_" + mode) - - self.iv = _extract(params, 'iv', None) - if self.iv is None: - self.iv = _extract(params, 'nonce', None) - if self.iv is not None: - self.iv = b(self.iv) - else: - # Stream cipher - self.mode = None - self.iv = _extract(params, 'iv', None) - if self.iv is not None: - self.iv = b(self.iv) - - self.extra_params = params - - def _new(self): - params = self.extra_params.copy() - key = a2b_hex(self.key) - - old_style = [] - if self.mode is not None: - old_style = [ self.mode ] - if self.iv is not None: - old_style += [ a2b_hex(self.iv) ] - - return self.module.new(key, *old_style, **params) - - def runTest(self): - - plaintext = a2b_hex(self.plaintext) - ciphertext = a2b_hex(self.ciphertext) - assoc_data = [] - if self.assoc_data: - assoc_data = [ memoryview(a2b_hex(b(x))) for x in self.assoc_data] - - cipher = self._new() - decipher = self._new() - - # Only AEAD modes - for comp in assoc_data: - cipher.update(comp) - decipher.update(comp) - - ct = b2a_hex(cipher.encrypt(memoryview(plaintext))) - pt = b2a_hex(decipher.decrypt(memoryview(ciphertext))) - - self.assertEqual(self.ciphertext, ct) # encrypt - self.assertEqual(self.plaintext, pt) # decrypt - - if self.mac: - mac = b2a_hex(cipher.digest()) - self.assertEqual(self.mac, mac) - decipher.verify(memoryview(a2b_hex(self.mac))) - - -def make_block_tests(module, module_name, test_data, additional_params=dict()): - tests = [] - extra_tests_added = False - for i in range(len(test_data)): - row = test_data[i] - - # Build the "params" dictionary with - # - plaintext - # - ciphertext - # - key - # - mode (default is ECB) - # - (optionally) description - # - (optionally) any other parameter that this cipher mode requires - params = 
{} - if len(row) == 3: - (params['plaintext'], params['ciphertext'], params['key']) = row - elif len(row) == 4: - (params['plaintext'], params['ciphertext'], params['key'], params['description']) = row - elif len(row) == 5: - (params['plaintext'], params['ciphertext'], params['key'], params['description'], extra_params) = row - params.update(extra_params) - else: - raise AssertionError("Unsupported tuple size %d" % (len(row),)) - - if not "mode" in params: - params["mode"] = "ECB" - - # Build the display-name for the test - p2 = params.copy() - p_key = _extract(p2, 'key') - p_plaintext = _extract(p2, 'plaintext') - p_ciphertext = _extract(p2, 'ciphertext') - p_mode = _extract(p2, 'mode') - p_description = _extract(p2, 'description', None) - - if p_description is not None: - description = p_description - elif p_mode == 'ECB' and not p2: - description = "p=%s, k=%s" % (p_plaintext, p_key) - else: - description = "p=%s, k=%s, %r" % (p_plaintext, p_key, p2) - name = "%s #%d: %s" % (module_name, i+1, description) - params['description'] = name - params['module_name'] = module_name - params.update(additional_params) - - # Add extra test(s) to the test suite before the current test - if not extra_tests_added: - tests += [ - RoundtripTest(module, params), - IVLengthTest(module, params), - NoDefaultECBTest(module, params), - ByteArrayTest(module, params), - BlockSizeTest(module, params), - ] - extra_tests_added = True - - # Add the current test to the test suite - tests.append(CipherSelfTest(module, params)) - - return tests - -def make_stream_tests(module, module_name, test_data): - tests = [] - extra_tests_added = False - for i in range(len(test_data)): - row = test_data[i] - - # Build the "params" dictionary - params = {} - if len(row) == 3: - (params['plaintext'], params['ciphertext'], params['key']) = row - elif len(row) == 4: - (params['plaintext'], params['ciphertext'], params['key'], params['description']) = row - elif len(row) == 5: - (params['plaintext'], params['ciphertext'], params['key'], params['description'], extra_params) = row - params.update(extra_params) - else: - raise AssertionError("Unsupported tuple size %d" % (len(row),)) - - # Build the display-name for the test - p2 = params.copy() - p_key = _extract(p2, 'key') - p_plaintext = _extract(p2, 'plaintext') - p_ciphertext = _extract(p2, 'ciphertext') - p_description = _extract(p2, 'description', None) - - if p_description is not None: - description = p_description - elif not p2: - description = "p=%s, k=%s" % (p_plaintext, p_key) - else: - description = "p=%s, k=%s, %r" % (p_plaintext, p_key, p2) - name = "%s #%d: %s" % (module_name, i+1, description) - params['description'] = name - params['module_name'] = module_name - - # Add extra test(s) to the test suite before the current test - if not extra_tests_added: - tests += [ - ByteArrayTest(module, params), - ] - - tests.append(MemoryviewTest(module, params)) - extra_tests_added = True - - # Add the test to the test suite - tests.append(CipherSelfTest(module, params)) - tests.append(CipherStreamingSelfTest(module, params)) - return tests - -# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/ImageWin.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/ImageWin.py deleted file mode 100644 index ca9b14c8adf7a7a05309e69e86465b3ddad30811..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/ImageWin.py +++ /dev/null @@ 
-1,230 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# a Windows DIB display interface -# -# History: -# 1996-05-20 fl Created -# 1996-09-20 fl Fixed subregion exposure -# 1997-09-21 fl Added draw primitive (for tzPrint) -# 2003-05-21 fl Added experimental Window/ImageWindow classes -# 2003-09-05 fl Added fromstring/tostring methods -# -# Copyright (c) Secret Labs AB 1997-2003. -# Copyright (c) Fredrik Lundh 1996-2003. -# -# See the README file for information on usage and redistribution. -# - -from . import Image - - -class HDC: - """ - Wraps an HDC integer. The resulting object can be passed to the - :py:meth:`~PIL.ImageWin.Dib.draw` and :py:meth:`~PIL.ImageWin.Dib.expose` - methods. - """ - - def __init__(self, dc): - self.dc = dc - - def __int__(self): - return self.dc - - -class HWND: - """ - Wraps an HWND integer. The resulting object can be passed to the - :py:meth:`~PIL.ImageWin.Dib.draw` and :py:meth:`~PIL.ImageWin.Dib.expose` - methods, instead of a DC. - """ - - def __init__(self, wnd): - self.wnd = wnd - - def __int__(self): - return self.wnd - - -class Dib: - """ - A Windows bitmap with the given mode and size. The mode can be one of "1", - "L", "P", or "RGB". - - If the display requires a palette, this constructor creates a suitable - palette and associates it with the image. For an "L" image, 128 greylevels - are allocated. For an "RGB" image, a 6x6x6 colour cube is used, together - with 20 greylevels. - - To make sure that palettes work properly under Windows, you must call the - ``palette`` method upon certain events from Windows. - - :param image: Either a PIL image, or a mode string. If a mode string is - used, a size must also be given. The mode can be one of "1", - "L", "P", or "RGB". - :param size: If the first argument is a mode string, this - defines the size of the image. - """ - - def __init__(self, image, size=None): - if hasattr(image, "mode") and hasattr(image, "size"): - mode = image.mode - size = image.size - else: - mode = image - image = None - if mode not in ["1", "L", "P", "RGB"]: - mode = Image.getmodebase(mode) - self.image = Image.core.display(mode, size) - self.mode = mode - self.size = size - if image: - self.paste(image) - - def expose(self, handle): - """ - Copy the bitmap contents to a device context. - - :param handle: Device context (HDC), cast to a Python integer, or an - HDC or HWND instance. In PythonWin, you can use - ``CDC.GetHandleAttrib()`` to get a suitable handle. - """ - if isinstance(handle, HWND): - dc = self.image.getdc(handle) - try: - result = self.image.expose(dc) - finally: - self.image.releasedc(handle, dc) - else: - result = self.image.expose(handle) - return result - - def draw(self, handle, dst, src=None): - """ - Same as expose, but allows you to specify where to draw the image, and - what part of it to draw. - - The destination and source areas are given as 4-tuple rectangles. If - the source is omitted, the entire image is copied. If the source and - the destination have different sizes, the image is resized as - necessary. - """ - if not src: - src = (0, 0) + self.size - if isinstance(handle, HWND): - dc = self.image.getdc(handle) - try: - result = self.image.draw(dc, dst, src) - finally: - self.image.releasedc(handle, dc) - else: - result = self.image.draw(handle, dst, src) - return result - - def query_palette(self, handle): - """ - Installs the palette associated with the image in the given device - context. - - This method should be called upon **QUERYNEWPALETTE** and - **PALETTECHANGED** events from Windows. 
If this method returns a - non-zero value, one or more display palette entries were changed, and - the image should be redrawn. - - :param handle: Device context (HDC), cast to a Python integer, or an - HDC or HWND instance. - :return: A true value if one or more entries were changed (this - indicates that the image should be redrawn). - """ - if isinstance(handle, HWND): - handle = self.image.getdc(handle) - try: - result = self.image.query_palette(handle) - finally: - self.image.releasedc(handle, handle) - else: - result = self.image.query_palette(handle) - return result - - def paste(self, im, box=None): - """ - Paste a PIL image into the bitmap image. - - :param im: A PIL image. The size must match the target region. - If the mode does not match, the image is converted to the - mode of the bitmap image. - :param box: A 4-tuple defining the left, upper, right, and - lower pixel coordinate. See :ref:`coordinate-system`. If - None is given instead of a tuple, all of the image is - assumed. - """ - im.load() - if self.mode != im.mode: - im = im.convert(self.mode) - if box: - self.image.paste(im.im, box) - else: - self.image.paste(im.im) - - def frombytes(self, buffer): - """ - Load display memory contents from byte data. - - :param buffer: A buffer containing display data (usually - data returned from :py:func:`~PIL.ImageWin.Dib.tobytes`) - """ - return self.image.frombytes(buffer) - - def tobytes(self): - """ - Copy display memory contents to bytes object. - - :return: A bytes object containing display data. - """ - return self.image.tobytes() - - -class Window: - """Create a Window with the given title size.""" - - def __init__(self, title="PIL", width=None, height=None): - self.hwnd = Image.core.createwindow( - title, self.__dispatcher, width or 0, height or 0 - ) - - def __dispatcher(self, action, *args): - return getattr(self, "ui_handle_" + action)(*args) - - def ui_handle_clear(self, dc, x0, y0, x1, y1): - pass - - def ui_handle_damage(self, x0, y0, x1, y1): - pass - - def ui_handle_destroy(self): - pass - - def ui_handle_repair(self, dc, x0, y0, x1, y1): - pass - - def ui_handle_resize(self, width, height): - pass - - def mainloop(self): - Image.core.eventloop() - - -class ImageWindow(Window): - """Create an image window which displays the given image.""" - - def __init__(self, image, title="PIL"): - if not isinstance(image, Dib): - image = Dib(image) - self.image = image - width, height = image.size - super().__init__(title, width=width, height=height) - - def ui_handle_repair(self, dc, x0, y0, x1, y1): - self.image.draw(dc, (x0, y0, x1, y1)) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/anyio/_core/_sockets.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/anyio/_core/_sockets.py deleted file mode 100644 index e6970bee2701e1d9391abb376e52a4d1a8ec7b68..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/anyio/_core/_sockets.py +++ /dev/null @@ -1,607 +0,0 @@ -from __future__ import annotations - -import socket -import ssl -import sys -from ipaddress import IPv6Address, ip_address -from os import PathLike, chmod -from pathlib import Path -from socket import AddressFamily, SocketKind -from typing import Awaitable, List, Tuple, cast, overload - -from .. 
import to_thread -from ..abc import ( - ConnectedUDPSocket, - IPAddressType, - IPSockAddrType, - SocketListener, - SocketStream, - UDPSocket, - UNIXSocketStream, -) -from ..streams.stapled import MultiListener -from ..streams.tls import TLSStream -from ._eventloop import get_asynclib -from ._resources import aclose_forcefully -from ._synchronization import Event -from ._tasks import create_task_group, move_on_after - -if sys.version_info >= (3, 8): - from typing import Literal -else: - from typing_extensions import Literal - -IPPROTO_IPV6 = getattr(socket, "IPPROTO_IPV6", 41) # https://bugs.python.org/issue29515 - -GetAddrInfoReturnType = List[ - Tuple[AddressFamily, SocketKind, int, str, Tuple[str, int]] -] -AnyIPAddressFamily = Literal[ - AddressFamily.AF_UNSPEC, AddressFamily.AF_INET, AddressFamily.AF_INET6 -] -IPAddressFamily = Literal[AddressFamily.AF_INET, AddressFamily.AF_INET6] - - -# tls_hostname given -@overload -async def connect_tcp( - remote_host: IPAddressType, - remote_port: int, - *, - local_host: IPAddressType | None = ..., - ssl_context: ssl.SSLContext | None = ..., - tls_standard_compatible: bool = ..., - tls_hostname: str, - happy_eyeballs_delay: float = ..., -) -> TLSStream: - ... - - -# ssl_context given -@overload -async def connect_tcp( - remote_host: IPAddressType, - remote_port: int, - *, - local_host: IPAddressType | None = ..., - ssl_context: ssl.SSLContext, - tls_standard_compatible: bool = ..., - tls_hostname: str | None = ..., - happy_eyeballs_delay: float = ..., -) -> TLSStream: - ... - - -# tls=True -@overload -async def connect_tcp( - remote_host: IPAddressType, - remote_port: int, - *, - local_host: IPAddressType | None = ..., - tls: Literal[True], - ssl_context: ssl.SSLContext | None = ..., - tls_standard_compatible: bool = ..., - tls_hostname: str | None = ..., - happy_eyeballs_delay: float = ..., -) -> TLSStream: - ... - - -# tls=False -@overload -async def connect_tcp( - remote_host: IPAddressType, - remote_port: int, - *, - local_host: IPAddressType | None = ..., - tls: Literal[False], - ssl_context: ssl.SSLContext | None = ..., - tls_standard_compatible: bool = ..., - tls_hostname: str | None = ..., - happy_eyeballs_delay: float = ..., -) -> SocketStream: - ... - - -# No TLS arguments -@overload -async def connect_tcp( - remote_host: IPAddressType, - remote_port: int, - *, - local_host: IPAddressType | None = ..., - happy_eyeballs_delay: float = ..., -) -> SocketStream: - ... - - -async def connect_tcp( - remote_host: IPAddressType, - remote_port: int, - *, - local_host: IPAddressType | None = None, - tls: bool = False, - ssl_context: ssl.SSLContext | None = None, - tls_standard_compatible: bool = True, - tls_hostname: str | None = None, - happy_eyeballs_delay: float = 0.25, -) -> SocketStream | TLSStream: - """ - Connect to a host using the TCP protocol. - - This function implements the stateless version of the Happy Eyeballs algorithm (RFC - 6555). If ``remote_host`` is a host name that resolves to multiple IP addresses, - each one is tried until one connection attempt succeeds. If the first attempt does - not connected within 250 milliseconds, a second attempt is started using the next - address in the list, and so on. On IPv6 enabled systems, an IPv6 address (if - available) is tried first. - - When the connection has been established, a TLS handshake will be done if either - ``ssl_context`` or ``tls_hostname`` is not ``None``, or if ``tls`` is ``True``. 
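    A minimal usage sketch (illustrative only: the host, port and request bytes are
    examples, and a top-level ``anyio`` import is assumed)::

        import anyio

        async def main():
            # TLS handshake is requested explicitly via tls=True
            async with await anyio.connect_tcp("example.com", 443, tls=True) as stream:
                await stream.send(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
                print(await stream.receive())

        anyio.run(main)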
- - :param remote_host: the IP address or host name to connect to - :param remote_port: port on the target host to connect to - :param local_host: the interface address or name to bind the socket to before connecting - :param tls: ``True`` to do a TLS handshake with the connected stream and return a - :class:`~anyio.streams.tls.TLSStream` instead - :param ssl_context: the SSL context object to use (if omitted, a default context is created) - :param tls_standard_compatible: If ``True``, performs the TLS shutdown handshake before closing - the stream and requires that the server does this as well. Otherwise, - :exc:`~ssl.SSLEOFError` may be raised during reads from the stream. - Some protocols, such as HTTP, require this option to be ``False``. - See :meth:`~ssl.SSLContext.wrap_socket` for details. - :param tls_hostname: host name to check the server certificate against (defaults to the value - of ``remote_host``) - :param happy_eyeballs_delay: delay (in seconds) before starting the next connection attempt - :return: a socket stream object if no TLS handshake was done, otherwise a TLS stream - :raises OSError: if the connection attempt fails - - """ - # Placed here due to https://github.com/python/mypy/issues/7057 - connected_stream: SocketStream | None = None - - async def try_connect(remote_host: str, event: Event) -> None: - nonlocal connected_stream - try: - stream = await asynclib.connect_tcp(remote_host, remote_port, local_address) - except OSError as exc: - oserrors.append(exc) - return - else: - if connected_stream is None: - connected_stream = stream - tg.cancel_scope.cancel() - else: - await stream.aclose() - finally: - event.set() - - asynclib = get_asynclib() - local_address: IPSockAddrType | None = None - family = socket.AF_UNSPEC - if local_host: - gai_res = await getaddrinfo(str(local_host), None) - family, *_, local_address = gai_res[0] - - target_host = str(remote_host) - try: - addr_obj = ip_address(remote_host) - except ValueError: - # getaddrinfo() will raise an exception if name resolution fails - gai_res = await getaddrinfo( - target_host, remote_port, family=family, type=socket.SOCK_STREAM - ) - - # Organize the list so that the first address is an IPv6 address (if available) and the - # second one is an IPv4 addresses. The rest can be in whatever order. 
- v6_found = v4_found = False - target_addrs: list[tuple[socket.AddressFamily, str]] = [] - for af, *rest, sa in gai_res: - if af == socket.AF_INET6 and not v6_found: - v6_found = True - target_addrs.insert(0, (af, sa[0])) - elif af == socket.AF_INET and not v4_found and v6_found: - v4_found = True - target_addrs.insert(1, (af, sa[0])) - else: - target_addrs.append((af, sa[0])) - else: - if isinstance(addr_obj, IPv6Address): - target_addrs = [(socket.AF_INET6, addr_obj.compressed)] - else: - target_addrs = [(socket.AF_INET, addr_obj.compressed)] - - oserrors: list[OSError] = [] - async with create_task_group() as tg: - for i, (af, addr) in enumerate(target_addrs): - event = Event() - tg.start_soon(try_connect, addr, event) - with move_on_after(happy_eyeballs_delay): - await event.wait() - - if connected_stream is None: - cause = oserrors[0] if len(oserrors) == 1 else asynclib.ExceptionGroup(oserrors) - raise OSError("All connection attempts failed") from cause - - if tls or tls_hostname or ssl_context: - try: - return await TLSStream.wrap( - connected_stream, - server_side=False, - hostname=tls_hostname or str(remote_host), - ssl_context=ssl_context, - standard_compatible=tls_standard_compatible, - ) - except BaseException: - await aclose_forcefully(connected_stream) - raise - - return connected_stream - - -async def connect_unix(path: str | PathLike[str]) -> UNIXSocketStream: - """ - Connect to the given UNIX socket. - - Not available on Windows. - - :param path: path to the socket - :return: a socket stream object - - """ - path = str(Path(path)) - return await get_asynclib().connect_unix(path) - - -async def create_tcp_listener( - *, - local_host: IPAddressType | None = None, - local_port: int = 0, - family: AnyIPAddressFamily = socket.AddressFamily.AF_UNSPEC, - backlog: int = 65536, - reuse_port: bool = False, -) -> MultiListener[SocketStream]: - """ - Create a TCP socket listener. - - :param local_port: port number to listen on - :param local_host: IP address of the interface to listen on. If omitted, listen on - all IPv4 and IPv6 interfaces. To listen on all interfaces on a specific address - family, use ``0.0.0.0`` for IPv4 or ``::`` for IPv6. - :param family: address family (used if ``local_host`` was omitted) - :param backlog: maximum number of queued incoming connections (up to a maximum of - 2**16, or 65536) - :param reuse_port: ``True`` to allow multiple sockets to bind to the same - address/port (not supported on Windows) - :return: a list of listener objects - - """ - asynclib = get_asynclib() - backlog = min(backlog, 65536) - local_host = str(local_host) if local_host is not None else None - gai_res = await getaddrinfo( - local_host, # type: ignore[arg-type] - local_port, - family=family, - type=socket.SocketKind.SOCK_STREAM if sys.platform == "win32" else 0, - flags=socket.AI_PASSIVE | socket.AI_ADDRCONFIG, - ) - listeners: list[SocketListener] = [] - try: - # The set() is here to work around a glibc bug: - # https://sourceware.org/bugzilla/show_bug.cgi?id=14969 - sockaddr: tuple[str, int] | tuple[str, int, int, int] - for fam, kind, *_, sockaddr in sorted(set(gai_res)): - # Workaround for an uvloop bug where we don't get the correct scope ID for - # IPv6 link-local addresses when passing type=socket.SOCK_STREAM to - # getaddrinfo(): https://github.com/MagicStack/uvloop/issues/539 - if sys.platform != "win32" and kind is not SocketKind.SOCK_STREAM: - continue - - raw_socket = socket.socket(fam) - raw_socket.setblocking(False) - - # For Windows, enable exclusive address use. 
For others, enable address reuse. - if sys.platform == "win32": - raw_socket.setsockopt(socket.SOL_SOCKET, socket.SO_EXCLUSIVEADDRUSE, 1) - else: - raw_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) - - if reuse_port: - raw_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1) - - # If only IPv6 was requested, disable dual stack operation - if fam == socket.AF_INET6: - raw_socket.setsockopt(IPPROTO_IPV6, socket.IPV6_V6ONLY, 1) - - # Workaround for #554 - if "%" in sockaddr[0]: - addr, scope_id = sockaddr[0].split("%", 1) - sockaddr = (addr, sockaddr[1], 0, int(scope_id)) - - raw_socket.bind(sockaddr) - raw_socket.listen(backlog) - listener = asynclib.TCPSocketListener(raw_socket) - listeners.append(listener) - except BaseException: - for listener in listeners: - await listener.aclose() - - raise - - return MultiListener(listeners) - - -async def create_unix_listener( - path: str | PathLike[str], - *, - mode: int | None = None, - backlog: int = 65536, -) -> SocketListener: - """ - Create a UNIX socket listener. - - Not available on Windows. - - :param path: path of the socket - :param mode: permissions to set on the socket - :param backlog: maximum number of queued incoming connections (up to a maximum of 2**16, or - 65536) - :return: a listener object - - .. versionchanged:: 3.0 - If a socket already exists on the file system in the given path, it will be removed first. - - """ - path_str = str(path) - path = Path(path) - if path.is_socket(): - path.unlink() - - backlog = min(backlog, 65536) - raw_socket = socket.socket(socket.AF_UNIX) - raw_socket.setblocking(False) - try: - await to_thread.run_sync(raw_socket.bind, path_str, cancellable=True) - if mode is not None: - await to_thread.run_sync(chmod, path_str, mode, cancellable=True) - - raw_socket.listen(backlog) - return get_asynclib().UNIXSocketListener(raw_socket) - except BaseException: - raw_socket.close() - raise - - -async def create_udp_socket( - family: AnyIPAddressFamily = AddressFamily.AF_UNSPEC, - *, - local_host: IPAddressType | None = None, - local_port: int = 0, - reuse_port: bool = False, -) -> UDPSocket: - """ - Create a UDP socket. - - If ``local_port`` has been given, the socket will be bound to this port on the local - machine, making this socket suitable for providing UDP based services. 
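    A minimal UDP echo-server sketch (illustrative only; either ``family`` or
    ``local_host`` must be supplied, as enforced further down in this function)::

        import socket
        import anyio

        async def main():
            async with await anyio.create_udp_socket(
                family=socket.AF_INET, local_port=9999
            ) as udp:
                # UDPSocket is an async iterator of (payload, (host, port)) pairs
                async for packet, (host, port) in udp:
                    await udp.sendto(packet, host, port)

        anyio.run(main)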
- - :param family: address family (``AF_INET`` or ``AF_INET6``) – automatically determined from - ``local_host`` if omitted - :param local_host: IP address or host name of the local interface to bind to - :param local_port: local port to bind to - :param reuse_port: ``True`` to allow multiple sockets to bind to the same address/port - (not supported on Windows) - :return: a UDP socket - - """ - if family is AddressFamily.AF_UNSPEC and not local_host: - raise ValueError('Either "family" or "local_host" must be given') - - if local_host: - gai_res = await getaddrinfo( - str(local_host), - local_port, - family=family, - type=socket.SOCK_DGRAM, - flags=socket.AI_PASSIVE | socket.AI_ADDRCONFIG, - ) - family = cast(AnyIPAddressFamily, gai_res[0][0]) - local_address = gai_res[0][-1] - elif family is AddressFamily.AF_INET6: - local_address = ("::", 0) - else: - local_address = ("0.0.0.0", 0) - - return await get_asynclib().create_udp_socket( - family, local_address, None, reuse_port - ) - - -async def create_connected_udp_socket( - remote_host: IPAddressType, - remote_port: int, - *, - family: AnyIPAddressFamily = AddressFamily.AF_UNSPEC, - local_host: IPAddressType | None = None, - local_port: int = 0, - reuse_port: bool = False, -) -> ConnectedUDPSocket: - """ - Create a connected UDP socket. - - Connected UDP sockets can only communicate with the specified remote host/port, and any packets - sent from other sources are dropped. - - :param remote_host: remote host to set as the default target - :param remote_port: port on the remote host to set as the default target - :param family: address family (``AF_INET`` or ``AF_INET6``) – automatically determined from - ``local_host`` or ``remote_host`` if omitted - :param local_host: IP address or host name of the local interface to bind to - :param local_port: local port to bind to - :param reuse_port: ``True`` to allow multiple sockets to bind to the same address/port - (not supported on Windows) - :return: a connected UDP socket - - """ - local_address = None - if local_host: - gai_res = await getaddrinfo( - str(local_host), - local_port, - family=family, - type=socket.SOCK_DGRAM, - flags=socket.AI_PASSIVE | socket.AI_ADDRCONFIG, - ) - family = cast(AnyIPAddressFamily, gai_res[0][0]) - local_address = gai_res[0][-1] - - gai_res = await getaddrinfo( - str(remote_host), remote_port, family=family, type=socket.SOCK_DGRAM - ) - family = cast(AnyIPAddressFamily, gai_res[0][0]) - remote_address = gai_res[0][-1] - - return await get_asynclib().create_udp_socket( - family, local_address, remote_address, reuse_port - ) - - -async def getaddrinfo( - host: bytearray | bytes | str, - port: str | int | None, - *, - family: int | AddressFamily = 0, - type: int | SocketKind = 0, - proto: int = 0, - flags: int = 0, -) -> GetAddrInfoReturnType: - """ - Look up a numeric IP address given a host name. - - Internationalized domain names are translated according to the (non-transitional) IDNA 2008 - standard. - - .. note:: 4-tuple IPv6 socket addresses are automatically converted to 2-tuples of - (host, port), unlike what :func:`socket.getaddrinfo` does. - - :param host: host name - :param port: port number - :param family: socket family (`'AF_INET``, ...) - :param type: socket type (``SOCK_STREAM``, ...) - :param proto: protocol number - :param flags: flags to pass to upstream ``getaddrinfo()`` - :return: list of tuples containing (family, type, proto, canonname, sockaddr) - - .. 
seealso:: :func:`socket.getaddrinfo` - - """ - # Handle unicode hostnames - if isinstance(host, str): - try: - encoded_host = host.encode("ascii") - except UnicodeEncodeError: - import idna - - encoded_host = idna.encode(host, uts46=True) - else: - encoded_host = host - - gai_res = await get_asynclib().getaddrinfo( - encoded_host, port, family=family, type=type, proto=proto, flags=flags - ) - return [ - (family, type, proto, canonname, convert_ipv6_sockaddr(sockaddr)) - for family, type, proto, canonname, sockaddr in gai_res - ] - - -def getnameinfo(sockaddr: IPSockAddrType, flags: int = 0) -> Awaitable[tuple[str, str]]: - """ - Look up the host name of an IP address. - - :param sockaddr: socket address (e.g. (ipaddress, port) for IPv4) - :param flags: flags to pass to upstream ``getnameinfo()`` - :return: a tuple of (host name, service name) - - .. seealso:: :func:`socket.getnameinfo` - - """ - return get_asynclib().getnameinfo(sockaddr, flags) - - -def wait_socket_readable(sock: socket.socket) -> Awaitable[None]: - """ - Wait until the given socket has data to be read. - - This does **NOT** work on Windows when using the asyncio backend with a proactor event loop - (default on py3.8+). - - .. warning:: Only use this on raw sockets that have not been wrapped by any higher level - constructs like socket streams! - - :param sock: a socket object - :raises ~anyio.ClosedResourceError: if the socket was closed while waiting for the - socket to become readable - :raises ~anyio.BusyResourceError: if another task is already waiting for the socket - to become readable - - """ - return get_asynclib().wait_socket_readable(sock) - - -def wait_socket_writable(sock: socket.socket) -> Awaitable[None]: - """ - Wait until the given socket can be written to. - - This does **NOT** work on Windows when using the asyncio backend with a proactor event loop - (default on py3.8+). - - .. warning:: Only use this on raw sockets that have not been wrapped by any higher level - constructs like socket streams! - - :param sock: a socket object - :raises ~anyio.ClosedResourceError: if the socket was closed while waiting for the - socket to become writable - :raises ~anyio.BusyResourceError: if another task is already waiting for the socket - to become writable - - """ - return get_asynclib().wait_socket_writable(sock) - - -# -# Private API -# - - -def convert_ipv6_sockaddr( - sockaddr: tuple[str, int, int, int] | tuple[str, int] -) -> tuple[str, int]: - """ - Convert a 4-tuple IPv6 socket address to a 2-tuple (address, port) format. - - If the scope ID is nonzero, it is added to the address, separated with ``%``. - Otherwise the flow id and scope id are simply cut off from the tuple. - Any other kinds of socket addresses are returned as-is. 
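    For example (illustrative values, mirroring the logic below)::

        convert_ipv6_sockaddr(("fe80::1%eth0", 80, 0, 3))  # -> ("fe80::1%3", 80)
        convert_ipv6_sockaddr(("::1", 80, 0, 0))           # -> ("::1", 80)
        convert_ipv6_sockaddr(("127.0.0.1", 80))           # 2-tuples pass through unchanged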
- - :param sockaddr: the result of :meth:`~socket.socket.getsockname` - :return: the converted socket address - - """ - # This is more complicated than it should be because of MyPy - if isinstance(sockaddr, tuple) and len(sockaddr) == 4: - host, port, flowinfo, scope_id = cast(Tuple[str, int, int, int], sockaddr) - if scope_id: - # PyPy (as of v7.3.11) leaves the interface name in the result, so - # we discard it and only get the scope ID from the end - # (https://foss.heptapod.net/pypy/pypy/-/issues/3938) - host = host.split("%")[0] - - # Add scope_id to the address - return f"{host}%{scope_id}", port - else: - return host, port - else: - return cast(Tuple[str, int], sockaddr) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dateutil/easter.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dateutil/easter.py deleted file mode 100644 index f74d1f7442473997245ac683b8a269a3574d1ba4..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dateutil/easter.py +++ /dev/null @@ -1,89 +0,0 @@ -# -*- coding: utf-8 -*- -""" -This module offers a generic Easter computing method for any given year, using -Western, Orthodox or Julian algorithms. -""" - -import datetime - -__all__ = ["easter", "EASTER_JULIAN", "EASTER_ORTHODOX", "EASTER_WESTERN"] - -EASTER_JULIAN = 1 -EASTER_ORTHODOX = 2 -EASTER_WESTERN = 3 - - -def easter(year, method=EASTER_WESTERN): - """ - This method was ported from the work done by GM Arts, - on top of the algorithm by Claus Tondering, which was - based in part on the algorithm of Ouding (1940), as - quoted in "Explanatory Supplement to the Astronomical - Almanac", P. Kenneth Seidelmann, editor. - - This algorithm implements three different Easter - calculation methods: - - 1. Original calculation in Julian calendar, valid in - dates after 326 AD - 2. Original method, with date converted to Gregorian - calendar, valid in years 1583 to 4099 - 3. Revised method, in Gregorian calendar, valid in - years 1583 to 4099 as well - - These methods are represented by the constants: - - * ``EASTER_JULIAN = 1`` - * ``EASTER_ORTHODOX = 2`` - * ``EASTER_WESTERN = 3`` - - The default method is method 3. 
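    For example (illustrative; in 2024 Western Easter fell on 31 March and
    Orthodox Easter on 5 May)::

        >>> from dateutil.easter import easter, EASTER_ORTHODOX
        >>> easter(2024)
        datetime.date(2024, 3, 31)
        >>> easter(2024, EASTER_ORTHODOX)
        datetime.date(2024, 5, 5)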
- - More about the algorithm may be found at: - - `GM Arts: Easter Algorithms `_ - - and - - `The Calendar FAQ: Easter `_ - - """ - - if not (1 <= method <= 3): - raise ValueError("invalid method") - - # g - Golden year - 1 - # c - Century - # h - (23 - Epact) mod 30 - # i - Number of days from March 21 to Paschal Full Moon - # j - Weekday for PFM (0=Sunday, etc) - # p - Number of days from March 21 to Sunday on or before PFM - # (-6 to 28 methods 1 & 3, to 56 for method 2) - # e - Extra days to add for method 2 (converting Julian - # date to Gregorian date) - - y = year - g = y % 19 - e = 0 - if method < 3: - # Old method - i = (19*g + 15) % 30 - j = (y + y//4 + i) % 7 - if method == 2: - # Extra dates to convert Julian to Gregorian date - e = 10 - if y > 1600: - e = e + y//100 - 16 - (y//100 - 16)//4 - else: - # New method - c = y//100 - h = (c - c//4 - (8*c + 13)//25 + 19*g + 15) % 30 - i = h - (h//28)*(1 - (h//28)*(29//(h + 1))*((21 - g)//11)) - j = (y + y//4 + i + 2 - c + c//4) % 7 - - # p can be from -6 to 56 corresponding to dates 22 March to 23 May - # (later dates apply to method 2, although 23 May never actually occurs) - p = i - j + e - d = 1 + (p + 27 + (p + 6)//40) % 31 - m = 3 + (p + 26)//30 - return datetime.date(int(y), int(m), int(d)) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdata.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdata.py deleted file mode 100644 index 0d262e8d85b362a29ee4e34416afeb5108b0da45..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdata.py +++ /dev/null @@ -1,889 +0,0 @@ -# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license - -# Copyright (C) 2001-2017 Nominum, Inc. -# -# Permission to use, copy, modify, and distribute this software and its -# documentation for any purpose with or without fee is hereby granted, -# provided that the above copyright notice and this permission notice -# appear in all copies. -# -# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES -# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF -# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR -# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES -# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN -# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT -# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. - -"""DNS rdata.""" - -import base64 -import binascii -import inspect -import io -import itertools -import random -from importlib import import_module -from typing import Any, Dict, Optional, Tuple, Union - -import dns.exception -import dns.immutable -import dns.ipv4 -import dns.ipv6 -import dns.name -import dns.rdataclass -import dns.rdatatype -import dns.tokenizer -import dns.ttl -import dns.wire - -_chunksize = 32 - -# We currently allow comparisons for rdata with relative names for backwards -# compatibility, but in the future we will not, as these kinds of comparisons -# can lead to subtle bugs if code is not carefully written. -# -# This switch allows the future behavior to be turned on so code can be -# tested with it. -_allow_relative_comparisons = True - - -class NoRelativeRdataOrdering(dns.exception.DNSException): - """An attempt was made to do an ordered comparison of one or more - rdata with relative names. 
The only reliable way of sorting rdata - is to use non-relativized rdata. - - """ - - -def _wordbreak(data, chunksize=_chunksize, separator=b" "): - """Break a binary string into chunks of chunksize characters separated by - a space. - """ - - if not chunksize: - return data.decode() - return separator.join( - [data[i : i + chunksize] for i in range(0, len(data), chunksize)] - ).decode() - - -# pylint: disable=unused-argument - - -def _hexify(data, chunksize=_chunksize, separator=b" ", **kw): - """Convert a binary string into its hex encoding, broken up into chunks - of chunksize characters separated by a separator. - """ - - return _wordbreak(binascii.hexlify(data), chunksize, separator) - - -def _base64ify(data, chunksize=_chunksize, separator=b" ", **kw): - """Convert a binary string into its base64 encoding, broken up into chunks - of chunksize characters separated by a separator. - """ - - return _wordbreak(base64.b64encode(data), chunksize, separator) - - -# pylint: enable=unused-argument - -__escaped = b'"\\' - - -def _escapify(qstring): - """Escape the characters in a quoted string which need it.""" - - if isinstance(qstring, str): - qstring = qstring.encode() - if not isinstance(qstring, bytearray): - qstring = bytearray(qstring) - - text = "" - for c in qstring: - if c in __escaped: - text += "\\" + chr(c) - elif c >= 0x20 and c < 0x7F: - text += chr(c) - else: - text += "\\%03d" % c - return text - - -def _truncate_bitmap(what): - """Determine the index of greatest byte that isn't all zeros, and - return the bitmap that contains all the bytes less than that index. - """ - - for i in range(len(what) - 1, -1, -1): - if what[i] != 0: - return what[0 : i + 1] - return what[0:1] - - -# So we don't have to edit all the rdata classes... -_constify = dns.immutable.constify - - -@dns.immutable.immutable -class Rdata: - """Base class for all DNS rdata types.""" - - __slots__ = ["rdclass", "rdtype", "rdcomment"] - - def __init__(self, rdclass, rdtype): - """Initialize an rdata. - - *rdclass*, an ``int`` is the rdataclass of the Rdata. - - *rdtype*, an ``int`` is the rdatatype of the Rdata. - """ - - self.rdclass = self._as_rdataclass(rdclass) - self.rdtype = self._as_rdatatype(rdtype) - self.rdcomment = None - - def _get_all_slots(self): - return itertools.chain.from_iterable( - getattr(cls, "__slots__", []) for cls in self.__class__.__mro__ - ) - - def __getstate__(self): - # We used to try to do a tuple of all slots here, but it - # doesn't work as self._all_slots isn't available at - # __setstate__() time. Before that we tried to store a tuple - # of __slots__, but that didn't work as it didn't store the - # slots defined by ancestors. This older way didn't fail - # outright, but ended up with partially broken objects, e.g. - # if you unpickled an A RR it wouldn't have rdclass and rdtype - # attributes, and would compare badly. - state = {} - for slot in self._get_all_slots(): - state[slot] = getattr(self, slot) - return state - - def __setstate__(self, state): - for slot, val in state.items(): - object.__setattr__(self, slot, val) - if not hasattr(self, "rdcomment"): - # Pickled rdata from 2.0.x might not have a rdcomment, so add - # it if needed. - object.__setattr__(self, "rdcomment", None) - - def covers(self) -> dns.rdatatype.RdataType: - """Return the type a Rdata covers. - - DNS SIG/RRSIG rdatas apply to a specific type; this type is - returned by the covers() function. If the rdata type is not - SIG or RRSIG, dns.rdatatype.NONE is returned. 
This is useful when - creating rdatasets, allowing the rdataset to contain only RRSIGs - of a particular type, e.g. RRSIG(NS). - - Returns a ``dns.rdatatype.RdataType``. - """ - - return dns.rdatatype.NONE - - def extended_rdatatype(self) -> int: - """Return a 32-bit type value, the least significant 16 bits of - which are the ordinary DNS type, and the upper 16 bits of which are - the "covered" type, if any. - - Returns an ``int``. - """ - - return self.covers() << 16 | self.rdtype - - def to_text( - self, - origin: Optional[dns.name.Name] = None, - relativize: bool = True, - **kw: Dict[str, Any] - ) -> str: - """Convert an rdata to text format. - - Returns a ``str``. - """ - - raise NotImplementedError # pragma: no cover - - def _to_wire( - self, - file: Optional[Any], - compress: Optional[dns.name.CompressType] = None, - origin: Optional[dns.name.Name] = None, - canonicalize: bool = False, - ) -> bytes: - raise NotImplementedError # pragma: no cover - - def to_wire( - self, - file: Optional[Any] = None, - compress: Optional[dns.name.CompressType] = None, - origin: Optional[dns.name.Name] = None, - canonicalize: bool = False, - ) -> bytes: - """Convert an rdata to wire format. - - Returns a ``bytes`` or ``None``. - """ - - if file: - return self._to_wire(file, compress, origin, canonicalize) - else: - f = io.BytesIO() - self._to_wire(f, compress, origin, canonicalize) - return f.getvalue() - - def to_generic( - self, origin: Optional[dns.name.Name] = None - ) -> "dns.rdata.GenericRdata": - """Creates a dns.rdata.GenericRdata equivalent of this rdata. - - Returns a ``dns.rdata.GenericRdata``. - """ - return dns.rdata.GenericRdata( - self.rdclass, self.rdtype, self.to_wire(origin=origin) - ) - - def to_digestable(self, origin: Optional[dns.name.Name] = None) -> bytes: - """Convert rdata to a format suitable for digesting in hashes. This - is also the DNSSEC canonical form. - - Returns a ``bytes``. - """ - - return self.to_wire(origin=origin, canonicalize=True) - - def __repr__(self): - covers = self.covers() - if covers == dns.rdatatype.NONE: - ctext = "" - else: - ctext = "(" + dns.rdatatype.to_text(covers) + ")" - return ( - "" - ) - - def __str__(self): - return self.to_text() - - def _cmp(self, other): - """Compare an rdata with another rdata of the same rdtype and - rdclass. - - For rdata with only absolute names: - Return < 0 if self < other in the DNSSEC ordering, 0 if self - == other, and > 0 if self > other. - For rdata with at least one relative names: - The rdata sorts before any rdata with only absolute names. - When compared with another relative rdata, all names are - made absolute as if they were relative to the root, as the - proper origin is not available. While this creates a stable - ordering, it is NOT guaranteed to be the DNSSEC ordering. - In the future, all ordering comparisons for rdata with - relative names will be disallowed. - """ - try: - our = self.to_digestable() - our_relative = False - except dns.name.NeedAbsoluteNameOrOrigin: - if _allow_relative_comparisons: - our = self.to_digestable(dns.name.root) - our_relative = True - try: - their = other.to_digestable() - their_relative = False - except dns.name.NeedAbsoluteNameOrOrigin: - if _allow_relative_comparisons: - their = other.to_digestable(dns.name.root) - their_relative = True - if _allow_relative_comparisons: - if our_relative != their_relative: - # For the purpose of comparison, all rdata with at least one - # relative name is less than an rdata with only absolute names. 
- if our_relative: - return -1 - else: - return 1 - elif our_relative or their_relative: - raise NoRelativeRdataOrdering - if our == their: - return 0 - elif our > their: - return 1 - else: - return -1 - - def __eq__(self, other): - if not isinstance(other, Rdata): - return False - if self.rdclass != other.rdclass or self.rdtype != other.rdtype: - return False - our_relative = False - their_relative = False - try: - our = self.to_digestable() - except dns.name.NeedAbsoluteNameOrOrigin: - our = self.to_digestable(dns.name.root) - our_relative = True - try: - their = other.to_digestable() - except dns.name.NeedAbsoluteNameOrOrigin: - their = other.to_digestable(dns.name.root) - their_relative = True - if our_relative != their_relative: - return False - return our == their - - def __ne__(self, other): - if not isinstance(other, Rdata): - return True - if self.rdclass != other.rdclass or self.rdtype != other.rdtype: - return True - return not self.__eq__(other) - - def __lt__(self, other): - if ( - not isinstance(other, Rdata) - or self.rdclass != other.rdclass - or self.rdtype != other.rdtype - ): - return NotImplemented - return self._cmp(other) < 0 - - def __le__(self, other): - if ( - not isinstance(other, Rdata) - or self.rdclass != other.rdclass - or self.rdtype != other.rdtype - ): - return NotImplemented - return self._cmp(other) <= 0 - - def __ge__(self, other): - if ( - not isinstance(other, Rdata) - or self.rdclass != other.rdclass - or self.rdtype != other.rdtype - ): - return NotImplemented - return self._cmp(other) >= 0 - - def __gt__(self, other): - if ( - not isinstance(other, Rdata) - or self.rdclass != other.rdclass - or self.rdtype != other.rdtype - ): - return NotImplemented - return self._cmp(other) > 0 - - def __hash__(self): - return hash(self.to_digestable(dns.name.root)) - - @classmethod - def from_text( - cls, - rdclass: dns.rdataclass.RdataClass, - rdtype: dns.rdatatype.RdataType, - tok: dns.tokenizer.Tokenizer, - origin: Optional[dns.name.Name] = None, - relativize: bool = True, - relativize_to: Optional[dns.name.Name] = None, - ) -> "Rdata": - raise NotImplementedError # pragma: no cover - - @classmethod - def from_wire_parser( - cls, - rdclass: dns.rdataclass.RdataClass, - rdtype: dns.rdatatype.RdataType, - parser: dns.wire.Parser, - origin: Optional[dns.name.Name] = None, - ) -> "Rdata": - raise NotImplementedError # pragma: no cover - - def replace(self, **kwargs: Any) -> "Rdata": - """ - Create a new Rdata instance based on the instance replace was - invoked on. It is possible to pass different parameters to - override the corresponding properties of the base Rdata. - - Any field specific to the Rdata type can be replaced, but the - *rdtype* and *rdclass* fields cannot. - - Returns an instance of the same Rdata subclass as *self*. - """ - - # Get the constructor parameters. - parameters = inspect.signature(self.__init__).parameters # type: ignore - - # Ensure that all of the arguments correspond to valid fields. - # Don't allow rdclass or rdtype to be changed, though. - for key in kwargs: - if key == "rdcomment": - continue - if key not in parameters: - raise AttributeError( - "'{}' object has no attribute '{}'".format( - self.__class__.__name__, key - ) - ) - if key in ("rdclass", "rdtype"): - raise AttributeError( - "Cannot overwrite '{}' attribute '{}'".format( - self.__class__.__name__, key - ) - ) - - # Construct the parameter list. For each field, use the value in - # kwargs if present, and the current value otherwise. 
- args = (kwargs.get(key, getattr(self, key)) for key in parameters) - - # Create, validate, and return the new object. - rd = self.__class__(*args) - # The comment is not set in the constructor, so give it special - # handling. - rdcomment = kwargs.get("rdcomment", self.rdcomment) - if rdcomment is not None: - object.__setattr__(rd, "rdcomment", rdcomment) - return rd - - # Type checking and conversion helpers. These are class methods as - # they don't touch object state and may be useful to others. - - @classmethod - def _as_rdataclass(cls, value): - return dns.rdataclass.RdataClass.make(value) - - @classmethod - def _as_rdatatype(cls, value): - return dns.rdatatype.RdataType.make(value) - - @classmethod - def _as_bytes( - cls, - value: Any, - encode: bool = False, - max_length: Optional[int] = None, - empty_ok: bool = True, - ) -> bytes: - if encode and isinstance(value, str): - bvalue = value.encode() - elif isinstance(value, bytearray): - bvalue = bytes(value) - elif isinstance(value, bytes): - bvalue = value - else: - raise ValueError("not bytes") - if max_length is not None and len(bvalue) > max_length: - raise ValueError("too long") - if not empty_ok and len(bvalue) == 0: - raise ValueError("empty bytes not allowed") - return bvalue - - @classmethod - def _as_name(cls, value): - # Note that proper name conversion (e.g. with origin and IDNA - # awareness) is expected to be done via from_text. This is just - # a simple thing for people invoking the constructor directly. - if isinstance(value, str): - return dns.name.from_text(value) - elif not isinstance(value, dns.name.Name): - raise ValueError("not a name") - return value - - @classmethod - def _as_uint8(cls, value): - if not isinstance(value, int): - raise ValueError("not an integer") - if value < 0 or value > 255: - raise ValueError("not a uint8") - return value - - @classmethod - def _as_uint16(cls, value): - if not isinstance(value, int): - raise ValueError("not an integer") - if value < 0 or value > 65535: - raise ValueError("not a uint16") - return value - - @classmethod - def _as_uint32(cls, value): - if not isinstance(value, int): - raise ValueError("not an integer") - if value < 0 or value > 4294967295: - raise ValueError("not a uint32") - return value - - @classmethod - def _as_uint48(cls, value): - if not isinstance(value, int): - raise ValueError("not an integer") - if value < 0 or value > 281474976710655: - raise ValueError("not a uint48") - return value - - @classmethod - def _as_int(cls, value, low=None, high=None): - if not isinstance(value, int): - raise ValueError("not an integer") - if low is not None and value < low: - raise ValueError("value too small") - if high is not None and value > high: - raise ValueError("value too large") - return value - - @classmethod - def _as_ipv4_address(cls, value): - if isinstance(value, str): - # call to check validity - dns.ipv4.inet_aton(value) - return value - elif isinstance(value, bytes): - return dns.ipv4.inet_ntoa(value) - else: - raise ValueError("not an IPv4 address") - - @classmethod - def _as_ipv6_address(cls, value): - if isinstance(value, str): - # call to check validity - dns.ipv6.inet_aton(value) - return value - elif isinstance(value, bytes): - return dns.ipv6.inet_ntoa(value) - else: - raise ValueError("not an IPv6 address") - - @classmethod - def _as_bool(cls, value): - if isinstance(value, bool): - return value - else: - raise ValueError("not a boolean") - - @classmethod - def _as_ttl(cls, value): - if isinstance(value, int): - return cls._as_int(value, 0, 
dns.ttl.MAX_TTL) - elif isinstance(value, str): - return dns.ttl.from_text(value) - else: - raise ValueError("not a TTL") - - @classmethod - def _as_tuple(cls, value, as_value): - try: - # For user convenience, if value is a singleton of the list - # element type, wrap it in a tuple. - return (as_value(value),) - except Exception: - # Otherwise, check each element of the iterable *value* - # against *as_value*. - return tuple(as_value(v) for v in value) - - # Processing order - - @classmethod - def _processing_order(cls, iterable): - items = list(iterable) - random.shuffle(items) - return items - - -@dns.immutable.immutable -class GenericRdata(Rdata): - - """Generic Rdata Class - - This class is used for rdata types for which we have no better - implementation. It implements the DNS "unknown RRs" scheme. - """ - - __slots__ = ["data"] - - def __init__(self, rdclass, rdtype, data): - super().__init__(rdclass, rdtype) - self.data = data - - def to_text( - self, - origin: Optional[dns.name.Name] = None, - relativize: bool = True, - **kw: Dict[str, Any] - ) -> str: - return r"\# %d " % len(self.data) + _hexify(self.data, **kw) - - @classmethod - def from_text( - cls, rdclass, rdtype, tok, origin=None, relativize=True, relativize_to=None - ): - token = tok.get() - if not token.is_identifier() or token.value != r"\#": - raise dns.exception.SyntaxError(r"generic rdata does not start with \#") - length = tok.get_int() - hex = tok.concatenate_remaining_identifiers(True).encode() - data = binascii.unhexlify(hex) - if len(data) != length: - raise dns.exception.SyntaxError("generic rdata hex data has wrong length") - return cls(rdclass, rdtype, data) - - def _to_wire(self, file, compress=None, origin=None, canonicalize=False): - file.write(self.data) - - @classmethod - def from_wire_parser(cls, rdclass, rdtype, parser, origin=None): - return cls(rdclass, rdtype, parser.get_remaining()) - - -_rdata_classes: Dict[ - Tuple[dns.rdataclass.RdataClass, dns.rdatatype.RdataType], Any -] = {} -_module_prefix = "dns.rdtypes" - - -def get_rdata_class(rdclass, rdtype): - cls = _rdata_classes.get((rdclass, rdtype)) - if not cls: - cls = _rdata_classes.get((dns.rdatatype.ANY, rdtype)) - if not cls: - rdclass_text = dns.rdataclass.to_text(rdclass) - rdtype_text = dns.rdatatype.to_text(rdtype) - rdtype_text = rdtype_text.replace("-", "_") - try: - mod = import_module( - ".".join([_module_prefix, rdclass_text, rdtype_text]) - ) - cls = getattr(mod, rdtype_text) - _rdata_classes[(rdclass, rdtype)] = cls - except ImportError: - try: - mod = import_module(".".join([_module_prefix, "ANY", rdtype_text])) - cls = getattr(mod, rdtype_text) - _rdata_classes[(dns.rdataclass.ANY, rdtype)] = cls - _rdata_classes[(rdclass, rdtype)] = cls - except ImportError: - pass - if not cls: - cls = GenericRdata - _rdata_classes[(rdclass, rdtype)] = cls - return cls - - -def from_text( - rdclass: Union[dns.rdataclass.RdataClass, str], - rdtype: Union[dns.rdatatype.RdataType, str], - tok: Union[dns.tokenizer.Tokenizer, str], - origin: Optional[dns.name.Name] = None, - relativize: bool = True, - relativize_to: Optional[dns.name.Name] = None, - idna_codec: Optional[dns.name.IDNACodec] = None, -) -> Rdata: - """Build an rdata object from text format. - - This function attempts to dynamically load a class which - implements the specified rdata class and type. If there is no - class-and-type-specific implementation, the GenericRdata class - is used. 
- - Once a class is chosen, its from_text() class method is called - with the parameters to this function. - - If *tok* is a ``str``, then a tokenizer is created and the string - is used as its input. - - *rdclass*, a ``dns.rdataclass.RdataClass`` or ``str``, the rdataclass. - - *rdtype*, a ``dns.rdatatype.RdataType`` or ``str``, the rdatatype. - - *tok*, a ``dns.tokenizer.Tokenizer`` or a ``str``. - - *origin*, a ``dns.name.Name`` (or ``None``), the - origin to use for relative names. - - *relativize*, a ``bool``. If true, name will be relativized. - - *relativize_to*, a ``dns.name.Name`` (or ``None``), the origin to use - when relativizing names. If not set, the *origin* value will be used. - - *idna_codec*, a ``dns.name.IDNACodec``, specifies the IDNA - encoder/decoder to use if a tokenizer needs to be created. If - ``None``, the default IDNA 2003 encoder/decoder is used. If a - tokenizer is not created, then the codec associated with the tokenizer - is the one that is used. - - Returns an instance of the chosen Rdata subclass. - - """ - if isinstance(tok, str): - tok = dns.tokenizer.Tokenizer(tok, idna_codec=idna_codec) - rdclass = dns.rdataclass.RdataClass.make(rdclass) - rdtype = dns.rdatatype.RdataType.make(rdtype) - cls = get_rdata_class(rdclass, rdtype) - with dns.exception.ExceptionWrapper(dns.exception.SyntaxError): - rdata = None - if cls != GenericRdata: - # peek at first token - token = tok.get() - tok.unget(token) - if token.is_identifier() and token.value == r"\#": - # - # Known type using the generic syntax. Extract the - # wire form from the generic syntax, and then run - # from_wire on it. - # - grdata = GenericRdata.from_text( - rdclass, rdtype, tok, origin, relativize, relativize_to - ) - rdata = from_wire( - rdclass, rdtype, grdata.data, 0, len(grdata.data), origin - ) - # - # If this comparison isn't equal, then there must have been - # compressed names in the wire format, which is an error, - # there being no reasonable context to decompress with. - # - rwire = rdata.to_wire() - if rwire != grdata.data: - raise dns.exception.SyntaxError( - "compressed data in " - "generic syntax form " - "of known rdatatype" - ) - if rdata is None: - rdata = cls.from_text( - rdclass, rdtype, tok, origin, relativize, relativize_to - ) - token = tok.get_eol_as_token() - if token.comment is not None: - object.__setattr__(rdata, "rdcomment", token.comment) - return rdata - - -def from_wire_parser( - rdclass: Union[dns.rdataclass.RdataClass, str], - rdtype: Union[dns.rdatatype.RdataType, str], - parser: dns.wire.Parser, - origin: Optional[dns.name.Name] = None, -) -> Rdata: - """Build an rdata object from wire format - - This function attempts to dynamically load a class which - implements the specified rdata class and type. If there is no - class-and-type-specific implementation, the GenericRdata class - is used. - - Once a class is chosen, its from_wire() class method is called - with the parameters to this function. - - *rdclass*, a ``dns.rdataclass.RdataClass`` or ``str``, the rdataclass. - - *rdtype*, a ``dns.rdatatype.RdataType`` or ``str``, the rdatatype. - - *parser*, a ``dns.wire.Parser``, the parser, which should be - restricted to the rdata length. - - *origin*, a ``dns.name.Name`` (or ``None``). If not ``None``, - then names will be relativized to this origin. - - Returns an instance of the chosen Rdata subclass. 
- """ - - rdclass = dns.rdataclass.RdataClass.make(rdclass) - rdtype = dns.rdatatype.RdataType.make(rdtype) - cls = get_rdata_class(rdclass, rdtype) - with dns.exception.ExceptionWrapper(dns.exception.FormError): - return cls.from_wire_parser(rdclass, rdtype, parser, origin) - - -def from_wire( - rdclass: Union[dns.rdataclass.RdataClass, str], - rdtype: Union[dns.rdatatype.RdataType, str], - wire: bytes, - current: int, - rdlen: int, - origin: Optional[dns.name.Name] = None, -) -> Rdata: - """Build an rdata object from wire format - - This function attempts to dynamically load a class which - implements the specified rdata class and type. If there is no - class-and-type-specific implementation, the GenericRdata class - is used. - - Once a class is chosen, its from_wire() class method is called - with the parameters to this function. - - *rdclass*, an ``int``, the rdataclass. - - *rdtype*, an ``int``, the rdatatype. - - *wire*, a ``bytes``, the wire-format message. - - *current*, an ``int``, the offset in wire of the beginning of - the rdata. - - *rdlen*, an ``int``, the length of the wire-format rdata - - *origin*, a ``dns.name.Name`` (or ``None``). If not ``None``, - then names will be relativized to this origin. - - Returns an instance of the chosen Rdata subclass. - """ - parser = dns.wire.Parser(wire, current) - with parser.restrict_to(rdlen): - return from_wire_parser(rdclass, rdtype, parser, origin) - - -class RdatatypeExists(dns.exception.DNSException): - """DNS rdatatype already exists.""" - - supp_kwargs = {"rdclass", "rdtype"} - fmt = ( - "The rdata type with class {rdclass:d} and rdtype {rdtype:d} " - + "already exists." - ) - - -def register_type( - implementation: Any, - rdtype: int, - rdtype_text: str, - is_singleton: bool = False, - rdclass: dns.rdataclass.RdataClass = dns.rdataclass.IN, -) -> None: - """Dynamically register a module to handle an rdatatype. - - *implementation*, a module implementing the type in the usual dnspython - way. - - *rdtype*, an ``int``, the rdatatype to register. - - *rdtype_text*, a ``str``, the textual form of the rdatatype. - - *is_singleton*, a ``bool``, indicating if the type is a singleton (i.e. - RRsets of the type can have only one member.) - - *rdclass*, the rdataclass of the type, or ``dns.rdataclass.ANY`` if - it applies to all classes. 
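-
-    A hedged usage sketch (``myrdtypes`` and the ``MYTYPE`` name below are
-    assumptions for illustration; 65280 simply falls in the private-use
-    rdatatype range)::
-
-        import dns.rdata
-        import myrdtypes  # hypothetical module implementing MYTYPE the dnspython way
-
-        dns.rdata.register_type(myrdtypes, 65280, "MYTYPE")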
- """ - - rdtype = dns.rdatatype.RdataType.make(rdtype) - existing_cls = get_rdata_class(rdclass, rdtype) - if existing_cls != GenericRdata or dns.rdatatype.is_metatype(rdtype): - raise RdatatypeExists(rdclass=rdclass, rdtype=rdtype) - _rdata_classes[(rdclass, rdtype)] = getattr( - implementation, rdtype_text.replace("-", "_") - ) - dns.rdatatype.register_type(rdtype, rdtype_text, is_singleton) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/readers/make_com/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/readers/make_com/__init__.py deleted file mode 100644 index c637335013c599b07de054fba07b47ecb86ad3e8..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/readers/make_com/__init__.py +++ /dev/null @@ -1 +0,0 @@ -"""Init params.""" diff --git a/spaces/johanmichel/stabilityai-stablecode-instruct-alpha-3b-2/README.md b/spaces/johanmichel/stabilityai-stablecode-instruct-alpha-3b-2/README.md deleted file mode 100644 index 45576088d6143de7a89cb34312f9b3ca80ddec05..0000000000000000000000000000000000000000 --- a/spaces/johanmichel/stabilityai-stablecode-instruct-alpha-3b-2/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Stabilityai Stablecode Instruct Alpha 3b 2 -emoji: 📉 -colorFrom: purple -colorTo: blue -sdk: gradio -sdk_version: 3.41.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/johnowhitaker/waterface/app.py b/spaces/johnowhitaker/waterface/app.py deleted file mode 100644 index 8fdc3da3e53444c7cca8f452f0b4d06118ebb557..0000000000000000000000000000000000000000 --- a/spaces/johnowhitaker/waterface/app.py +++ /dev/null @@ -1,270 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.autograd import Variable -import torch.optim as optim - -from imstack.core import ImStack -from tqdm.notebook import tqdm - -import kornia.augmentation as K -from CLIP import clip -from torchvision import transforms - -from PIL import Image -import numpy as np -import math - -from matplotlib import pyplot as plt -from fastprogress.fastprogress import master_bar, progress_bar -from IPython.display import HTML -from base64 import b64encode - -import warnings -warnings.filterwarnings('ignore') # Some pytorch functions give warnings about behaviour changes that I don't want to see over and over again :) - -device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') - -def sinc(x): - return torch.where(x != 0, torch.sin(math.pi * x) / (math.pi * x), x.new_ones([])) - - -def lanczos(x, a): - cond = torch.logical_and(-a < x, x < a) - out = torch.where(cond, sinc(x) * sinc(x/a), x.new_zeros([])) - return out / out.sum() - - -def ramp(ratio, width): - n = math.ceil(width / ratio + 1) - out = torch.empty([n]) - cur = 0 - for i in range(out.shape[0]): - out[i] = cur - cur += ratio - return torch.cat([-out[1:].flip([0]), out])[1:-1] - -class Prompt(nn.Module): - def __init__(self, embed, weight=1., stop=float('-inf')): - super().__init__() - self.register_buffer('embed', embed) - self.register_buffer('weight', torch.as_tensor(weight)) - self.register_buffer('stop', torch.as_tensor(stop)) - - def forward(self, input): - input_normed = F.normalize(input.unsqueeze(1), dim=2) - embed_normed = F.normalize(self.embed.unsqueeze(0), dim=2) - dists = 
input_normed.sub(embed_normed).norm(dim=2).div(2).arcsin().pow(2).mul(2) - dists = dists * self.weight.sign() - return self.weight.abs() * replace_grad(dists, torch.maximum(dists, self.stop)).mean() - -class MakeCutouts(nn.Module): - def __init__(self, cut_size, cutn, cut_pow=1.): - super().__init__() - self.cut_size = cut_size - self.cutn = cutn - self.cut_pow = cut_pow - self.augs = nn.Sequential( - K.RandomHorizontalFlip(p=0.5), - K.RandomSharpness(0.3,p=0.4), - K.RandomAffine(degrees=30, translate=0.1, p=0.8, padding_mode='border'), - K.RandomPerspective(0.2,p=0.4), - K.ColorJitter(hue=0.01, saturation=0.01, p=0.7)) - self.noise_fac = 0.1 - - def forward(self, input): - sideY, sideX = input.shape[2:4] - max_size = min(sideX, sideY) - min_size = min(sideX, sideY, self.cut_size) - cutouts = [] - for _ in range(self.cutn): - size = int(torch.rand([])**self.cut_pow * (max_size - min_size) + min_size) - offsetx = torch.randint(0, sideX - size + 1, ()) - offsety = torch.randint(0, sideY - size + 1, ()) - cutout = input[:, :, offsety:offsety + size, offsetx:offsetx + size] - cutouts.append(resample(cutout, (self.cut_size, self.cut_size))) - batch = self.augs(torch.cat(cutouts, dim=0)) - if self.noise_fac: - facs = batch.new_empty([self.cutn, 1, 1, 1]).uniform_(0, self.noise_fac) - batch = batch + facs * torch.randn_like(batch) - return batch - -def resample(input, size, align_corners=True): - n, c, h, w = input.shape - dh, dw = size - - input = input.view([n * c, 1, h, w]) - - if dh < h: - kernel_h = lanczos(ramp(dh / h, 2), 2).to(input.device, input.dtype) - pad_h = (kernel_h.shape[0] - 1) // 2 - input = F.pad(input, (0, 0, pad_h, pad_h), 'reflect') - input = F.conv2d(input, kernel_h[None, None, :, None]) - - if dw < w: - kernel_w = lanczos(ramp(dw / w, 2), 2).to(input.device, input.dtype) - pad_w = (kernel_w.shape[0] - 1) // 2 - input = F.pad(input, (pad_w, pad_w, 0, 0), 'reflect') - input = F.conv2d(input, kernel_w[None, None, None, :]) - - input = input.view([n, c, h, w]) - return F.interpolate(input, size, mode='bicubic', align_corners=align_corners) - -class ReplaceGrad(torch.autograd.Function): - @staticmethod - def forward(ctx, x_forward, x_backward): - ctx.shape = x_backward.shape - return x_forward - - @staticmethod - def backward(ctx, grad_in): - return None, grad_in.sum_to_size(ctx.shape) - - -replace_grad = ReplaceGrad.apply - -#Load CLOOB model -import sys -sys.path.append('./cloob-training') -sys.path.append('./clip') -# git isn't pulling the submodules for cloob-training so we need to add a path to clip -# I hate this :D -with open('./cloob-training/cloob_training/model_pt.py', 'r+') as f: - content = f.read() - f.seek(0, 0) - f.write("import sys\n" + "sys.path.append('../../../clip')\n" + '\n' + content.replace("import clip", "from CLIP import clip")) - -from cloob_training import model_pt, pretrained - -config = pretrained.get_config('cloob_laion_400m_vit_b_16_16_epochs') -cloob = model_pt.get_pt_model(config) -checkpoint = pretrained.download_checkpoint(config) -cloob.load_state_dict(model_pt.get_pt_params(config, checkpoint)) -cloob.eval().requires_grad_(False).to(device) -print('done') - -# Load fastai model - -import gradio as gr -from fastai.vision.all import * -from os.path import exists -import requests - -model_fn = 'quick_224px' -url = 'https://huggingface.co/johnowhitaker/sketchy_unet_rn34/resolve/main/quick_224px' - -if not exists(model_fn): - print('starting download') - with requests.get(url, stream=True) as r: - r.raise_for_status() - with open(model_fn, 'wb') 
as f: - for chunk in r.iter_content(chunk_size=8192): - f.write(chunk) - print('done') -else: - print('file exists') - -def get_x(item):return None -def get_y(item):return None -sketch_model = load_learner(model_fn) - -# Cutouts -cutn=16 -cut_pow=1 -make_cutouts = MakeCutouts(cloob.config['image_encoder']['image_size'], cutn, cut_pow) - -def process_im(image_path, - sketchify_first=True, - prompt='A watercolor painting of a face', - lr=0.03, - n_iter=10 - ): - - n_iter = int(n_iter) - - pil_im = None - - if sketchify_first: - pred = sketch_model.predict(image_path) - np_im = pred[0].permute(1, 2, 0).numpy() - pil_im = Image.fromarray(np_im.astype(np.uint8)) - else: - pil_im = Image.open(image_path).resize((540, 540)) - - - prompt_texts = [prompt] - weight_decay=1e-4 - - out_size=540 - base_size=8 - n_layers=5 - scale=3 - layer_decay = 0.3 - - - # The prompts - p_prompts = [] - for pr in prompt_texts: - embed = cloob.text_encoder(cloob.tokenize(pr).to(device)).float() - p_prompts.append(Prompt(embed, 1, float('-inf')).to(device)) # 1 is the weight - - # Some negative prompts - n_prompts = [] - for pr in ["Random noise", 'saturated rainbow RGB deep dream']: - embed = cloob.text_encoder(cloob.tokenize(pr).to(device)).float() - n_prompts.append(Prompt(embed, 0.5, float('-inf')).to(device)) # 0.5 is the weight - - # The ImageStack - trying a different scale and n_layers - ims = ImStack(base_size=base_size, - scale=scale, - n_layers=n_layers, - out_size=out_size, - decay=layer_decay, - init_image = pil_im) - - # desaturate starting image - desat = 0.6#@param - - if desat != 1: - for i in range(n_layers): - ims.layers[i] = ims.layers[i].detach()*desat - ims.layers[i].requires_grad = True - - - optimizer = optim.Adam(ims.layers, lr=lr, weight_decay=weight_decay) - losses = [] - - for i in tqdm(range(n_iter)): - optimizer.zero_grad() - - im = ims() - batch = cloob.normalize(make_cutouts(im)) - iii = cloob.image_encoder(batch).float() - - l = 0 - for prompt in p_prompts: - l += prompt(iii) - for prompt in n_prompts: - l -= prompt(iii) - - losses.append(float(l.detach().cpu())) - l.backward() # Backprop - optimizer.step() # Update - - return ims.to_pil() - -from gradio.inputs import Checkbox -iface = gr.Interface(fn=process_im, - inputs=[ - gr.inputs.Image(label="Input Image", shape=(512, 512), type="filepath"), - gr.inputs.Checkbox(label='Sketchify First', default=True), - gr.inputs.Textbox(default="A charcoal and watercolor sketch of a person", label="Prompt"), - gr.inputs.Number(default=0.03, label='LR'), - gr.inputs.Number(default=10, label='num_steps'), - - ], - outputs=[gr.outputs.Image(type="pil", label="Model Output")], - title = 'Sketchy ImStack + CLOOB', description = "Stylize an image with ImStack+CLOOB after a Sketchy Unet", - article = "An input image is sketchified with a unet - see https://huggingface.co/spaces/johnowhitaker/sketchy_unet_demo and links from there to training and blog post. It is then loaded into an imstack (https://johnowhitaker.github.io/imstack/) which is optimized towards a CLOOB prompt for n_steps. 
Feel free to reach me @johnowhitaker with questions :)" -) -iface.launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/jonybepary/teknium-CollectiveCognition-v1.1-Mistral-7B/app.py b/spaces/jonybepary/teknium-CollectiveCognition-v1.1-Mistral-7B/app.py deleted file mode 100644 index 341224b8a18fb762cf7ba380d975b88b3bd55413..0000000000000000000000000000000000000000 --- a/spaces/jonybepary/teknium-CollectiveCognition-v1.1-Mistral-7B/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/teknium/CollectiveCognition-v1.1-Mistral-7B").launch() \ No newline at end of file diff --git a/spaces/joshen/gpt-academic/crazy_functions/test_project/cpp/libJPG/jpge.cpp b/spaces/joshen/gpt-academic/crazy_functions/test_project/cpp/libJPG/jpge.cpp deleted file mode 100644 index 2e26b71ed5aad0d46478fdbcd3a880be1401f946..0000000000000000000000000000000000000000 --- a/spaces/joshen/gpt-academic/crazy_functions/test_project/cpp/libJPG/jpge.cpp +++ /dev/null @@ -1,1049 +0,0 @@ -// jpge.cpp - C++ class for JPEG compression. -// Public domain, Rich Geldreich -// v1.01, Dec. 18, 2010 - Initial release -// v1.02, Apr. 6, 2011 - Removed 2x2 ordered dither in H2V1 chroma subsampling method load_block_16_8_8(). (The rounding factor was 2, when it should have been 1. Either way, it wasn't helping.) -// v1.03, Apr. 16, 2011 - Added support for optimized Huffman code tables, optimized dynamic memory allocation down to only 1 alloc. -// Also from Alex Evans: Added RGBA support, linear memory allocator (no longer needed in v1.03). -// v1.04, May. 19, 2012: Forgot to set m_pFile ptr to NULL in cfile_stream::close(). Thanks to Owen Kaluza for reporting this bug. -// Code tweaks to fix VS2008 static code analysis warnings (all looked harmless). -// Code review revealed method load_block_16_8_8() (used for the non-default H2V1 sampling mode to downsample chroma) somehow didn't get the rounding factor fix from v1.02. - -#include "jpge.h" - -#include -#include -#if PLATFORM_WINDOWS -#include -#endif - -#define JPGE_MAX(a,b) (((a)>(b))?(a):(b)) -#define JPGE_MIN(a,b) (((a)<(b))?(a):(b)) - -namespace jpge { - -static inline void *jpge_malloc(size_t nSize) { return FMemory::Malloc(nSize); } -static inline void jpge_free(void *p) { FMemory::Free(p);; } - -// Various JPEG enums and tables. 
-enum { M_SOF0 = 0xC0, M_DHT = 0xC4, M_SOI = 0xD8, M_EOI = 0xD9, M_SOS = 0xDA, M_DQT = 0xDB, M_APP0 = 0xE0 }; -enum { DC_LUM_CODES = 12, AC_LUM_CODES = 256, DC_CHROMA_CODES = 12, AC_CHROMA_CODES = 256, MAX_HUFF_SYMBOLS = 257, MAX_HUFF_CODESIZE = 32 }; - -static uint8 s_zag[64] = { 0,1,8,16,9,2,3,10,17,24,32,25,18,11,4,5,12,19,26,33,40,48,41,34,27,20,13,6,7,14,21,28,35,42,49,56,57,50,43,36,29,22,15,23,30,37,44,51,58,59,52,45,38,31,39,46,53,60,61,54,47,55,62,63 }; -static int16 s_std_lum_quant[64] = { 16,11,12,14,12,10,16,14,13,14,18,17,16,19,24,40,26,24,22,22,24,49,35,37,29,40,58,51,61,60,57,51,56,55,64,72,92,78,64,68,87,69,55,56,80,109,81,87,95,98,103,104,103,62,77,113,121,112,100,120,92,101,103,99 }; -static int16 s_std_croma_quant[64] = { 17,18,18,24,21,24,47,26,26,47,99,66,56,66,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99 }; -static uint8 s_dc_lum_bits[17] = { 0,0,1,5,1,1,1,1,1,1,0,0,0,0,0,0,0 }; -static uint8 s_dc_lum_val[DC_LUM_CODES] = { 0,1,2,3,4,5,6,7,8,9,10,11 }; -static uint8 s_ac_lum_bits[17] = { 0,0,2,1,3,3,2,4,3,5,5,4,4,0,0,1,0x7d }; -static uint8 s_ac_lum_val[AC_LUM_CODES] = -{ - 0x01,0x02,0x03,0x00,0x04,0x11,0x05,0x12,0x21,0x31,0x41,0x06,0x13,0x51,0x61,0x07,0x22,0x71,0x14,0x32,0x81,0x91,0xa1,0x08,0x23,0x42,0xb1,0xc1,0x15,0x52,0xd1,0xf0, - 0x24,0x33,0x62,0x72,0x82,0x09,0x0a,0x16,0x17,0x18,0x19,0x1a,0x25,0x26,0x27,0x28,0x29,0x2a,0x34,0x35,0x36,0x37,0x38,0x39,0x3a,0x43,0x44,0x45,0x46,0x47,0x48,0x49, - 0x4a,0x53,0x54,0x55,0x56,0x57,0x58,0x59,0x5a,0x63,0x64,0x65,0x66,0x67,0x68,0x69,0x6a,0x73,0x74,0x75,0x76,0x77,0x78,0x79,0x7a,0x83,0x84,0x85,0x86,0x87,0x88,0x89, - 0x8a,0x92,0x93,0x94,0x95,0x96,0x97,0x98,0x99,0x9a,0xa2,0xa3,0xa4,0xa5,0xa6,0xa7,0xa8,0xa9,0xaa,0xb2,0xb3,0xb4,0xb5,0xb6,0xb7,0xb8,0xb9,0xba,0xc2,0xc3,0xc4,0xc5, - 0xc6,0xc7,0xc8,0xc9,0xca,0xd2,0xd3,0xd4,0xd5,0xd6,0xd7,0xd8,0xd9,0xda,0xe1,0xe2,0xe3,0xe4,0xe5,0xe6,0xe7,0xe8,0xe9,0xea,0xf1,0xf2,0xf3,0xf4,0xf5,0xf6,0xf7,0xf8, - 0xf9,0xfa -}; -static uint8 s_dc_chroma_bits[17] = { 0,0,3,1,1,1,1,1,1,1,1,1,0,0,0,0,0 }; -static uint8 s_dc_chroma_val[DC_CHROMA_CODES] = { 0,1,2,3,4,5,6,7,8,9,10,11 }; -static uint8 s_ac_chroma_bits[17] = { 0,0,2,1,2,4,4,3,4,7,5,4,4,0,1,2,0x77 }; -static uint8 s_ac_chroma_val[AC_CHROMA_CODES] = -{ - 0x00,0x01,0x02,0x03,0x11,0x04,0x05,0x21,0x31,0x06,0x12,0x41,0x51,0x07,0x61,0x71,0x13,0x22,0x32,0x81,0x08,0x14,0x42,0x91,0xa1,0xb1,0xc1,0x09,0x23,0x33,0x52,0xf0, - 0x15,0x62,0x72,0xd1,0x0a,0x16,0x24,0x34,0xe1,0x25,0xf1,0x17,0x18,0x19,0x1a,0x26,0x27,0x28,0x29,0x2a,0x35,0x36,0x37,0x38,0x39,0x3a,0x43,0x44,0x45,0x46,0x47,0x48, - 0x49,0x4a,0x53,0x54,0x55,0x56,0x57,0x58,0x59,0x5a,0x63,0x64,0x65,0x66,0x67,0x68,0x69,0x6a,0x73,0x74,0x75,0x76,0x77,0x78,0x79,0x7a,0x82,0x83,0x84,0x85,0x86,0x87, - 0x88,0x89,0x8a,0x92,0x93,0x94,0x95,0x96,0x97,0x98,0x99,0x9a,0xa2,0xa3,0xa4,0xa5,0xa6,0xa7,0xa8,0xa9,0xaa,0xb2,0xb3,0xb4,0xb5,0xb6,0xb7,0xb8,0xb9,0xba,0xc2,0xc3, - 0xc4,0xc5,0xc6,0xc7,0xc8,0xc9,0xca,0xd2,0xd3,0xd4,0xd5,0xd6,0xd7,0xd8,0xd9,0xda,0xe2,0xe3,0xe4,0xe5,0xe6,0xe7,0xe8,0xe9,0xea,0xf2,0xf3,0xf4,0xf5,0xf6,0xf7,0xf8, - 0xf9,0xfa -}; - -// Low-level helper functions. 
-template <class T> inline void clear_obj(T &obj) { memset(&obj, 0, sizeof(obj)); }
-
-const int YR = 19595, YG = 38470, YB = 7471, CB_R = -11059, CB_G = -21709, CB_B = 32768, CR_R = 32768, CR_G = -27439, CR_B = -5329;
-static inline uint8 clamp(int i) { if (static_cast<uint>(i) > 255U) { if (i < 0) i = 0; else if (i > 255) i = 255; } return static_cast<uint8>(i); }
-
-static void RGB_to_YCC(uint8* pDst, const uint8 *pSrc, int num_pixels)
-{
-  for ( ; num_pixels; pDst += 3, pSrc += 3, num_pixels--)
-  {
-    const int r = pSrc[0], g = pSrc[1], b = pSrc[2];
-    pDst[0] = static_cast<uint8>((r * YR + g * YG + b * YB + 32768) >> 16);
-    pDst[1] = clamp(128 + ((r * CB_R + g * CB_G + b * CB_B + 32768) >> 16));
-    pDst[2] = clamp(128 + ((r * CR_R + g * CR_G + b * CR_B + 32768) >> 16));
-  }
-}
-
-static void RGB_to_Y(uint8* pDst, const uint8 *pSrc, int num_pixels)
-{
-  for ( ; num_pixels; pDst++, pSrc += 3, num_pixels--)
-    pDst[0] = static_cast<uint8>((pSrc[0] * YR + pSrc[1] * YG + pSrc[2] * YB + 32768) >> 16);
-}
-
-static void RGBA_to_YCC(uint8* pDst, const uint8 *pSrc, int num_pixels)
-{
-  for ( ; num_pixels; pDst += 3, pSrc += 4, num_pixels--)
-  {
-    const int r = pSrc[0], g = pSrc[1], b = pSrc[2];
-    pDst[0] = static_cast<uint8>((r * YR + g * YG + b * YB + 32768) >> 16);
-    pDst[1] = clamp(128 + ((r * CB_R + g * CB_G + b * CB_B + 32768) >> 16));
-    pDst[2] = clamp(128 + ((r * CR_R + g * CR_G + b * CR_B + 32768) >> 16));
-  }
-}
-
-static void RGBA_to_Y(uint8* pDst, const uint8 *pSrc, int num_pixels)
-{
-  for ( ; num_pixels; pDst++, pSrc += 4, num_pixels--)
-    pDst[0] = static_cast<uint8>((pSrc[0] * YR + pSrc[1] * YG + pSrc[2] * YB + 32768) >> 16);
-}
-
-static void Y_to_YCC(uint8* pDst, const uint8* pSrc, int num_pixels)
-{
-  for( ; num_pixels; pDst += 3, pSrc++, num_pixels--) { pDst[0] = pSrc[0]; pDst[1] = 128; pDst[2] = 128; }
-}
-
-// Forward DCT - DCT derived from jfdctint.
-#define CONST_BITS 13 -#define ROW_BITS 2 -#define DCT_DESCALE(x, n) (((x) + (((int32)1) << ((n) - 1))) >> (n)) -#define DCT_MUL(var, c) (static_cast(var) * static_cast(c)) -#define DCT1D(s0, s1, s2, s3, s4, s5, s6, s7) \ - int32 t0 = s0 + s7, t7 = s0 - s7, t1 = s1 + s6, t6 = s1 - s6, t2 = s2 + s5, t5 = s2 - s5, t3 = s3 + s4, t4 = s3 - s4; \ - int32 t10 = t0 + t3, t13 = t0 - t3, t11 = t1 + t2, t12 = t1 - t2; \ - int32 u1 = DCT_MUL(t12 + t13, 4433); \ - s2 = u1 + DCT_MUL(t13, 6270); \ - s6 = u1 + DCT_MUL(t12, -15137); \ - u1 = t4 + t7; \ - int32 u2 = t5 + t6, u3 = t4 + t6, u4 = t5 + t7; \ - int32 z5 = DCT_MUL(u3 + u4, 9633); \ - t4 = DCT_MUL(t4, 2446); t5 = DCT_MUL(t5, 16819); \ - t6 = DCT_MUL(t6, 25172); t7 = DCT_MUL(t7, 12299); \ - u1 = DCT_MUL(u1, -7373); u2 = DCT_MUL(u2, -20995); \ - u3 = DCT_MUL(u3, -16069); u4 = DCT_MUL(u4, -3196); \ - u3 += z5; u4 += z5; \ - s0 = t10 + t11; s1 = t7 + u1 + u4; s3 = t6 + u2 + u3; s4 = t10 - t11; s5 = t5 + u2 + u4; s7 = t4 + u1 + u3; - -static void DCT2D(int32 *p) -{ - int32 c, *q = p; - for (c = 7; c >= 0; c--, q += 8) - { - int32 s0 = q[0], s1 = q[1], s2 = q[2], s3 = q[3], s4 = q[4], s5 = q[5], s6 = q[6], s7 = q[7]; - DCT1D(s0, s1, s2, s3, s4, s5, s6, s7); - q[0] = s0 << ROW_BITS; q[1] = DCT_DESCALE(s1, CONST_BITS-ROW_BITS); q[2] = DCT_DESCALE(s2, CONST_BITS-ROW_BITS); q[3] = DCT_DESCALE(s3, CONST_BITS-ROW_BITS); - q[4] = s4 << ROW_BITS; q[5] = DCT_DESCALE(s5, CONST_BITS-ROW_BITS); q[6] = DCT_DESCALE(s6, CONST_BITS-ROW_BITS); q[7] = DCT_DESCALE(s7, CONST_BITS-ROW_BITS); - } - for (q = p, c = 7; c >= 0; c--, q++) - { - int32 s0 = q[0*8], s1 = q[1*8], s2 = q[2*8], s3 = q[3*8], s4 = q[4*8], s5 = q[5*8], s6 = q[6*8], s7 = q[7*8]; - DCT1D(s0, s1, s2, s3, s4, s5, s6, s7); - q[0*8] = DCT_DESCALE(s0, ROW_BITS+3); q[1*8] = DCT_DESCALE(s1, CONST_BITS+ROW_BITS+3); q[2*8] = DCT_DESCALE(s2, CONST_BITS+ROW_BITS+3); q[3*8] = DCT_DESCALE(s3, CONST_BITS+ROW_BITS+3); - q[4*8] = DCT_DESCALE(s4, ROW_BITS+3); q[5*8] = DCT_DESCALE(s5, CONST_BITS+ROW_BITS+3); q[6*8] = DCT_DESCALE(s6, CONST_BITS+ROW_BITS+3); q[7*8] = DCT_DESCALE(s7, CONST_BITS+ROW_BITS+3); - } -} - -struct sym_freq { uint m_key, m_sym_index; }; - -// Radix sorts sym_freq[] array by 32-bit key m_key. Returns ptr to sorted values. -static inline sym_freq* radix_sort_syms(uint num_syms, sym_freq* pSyms0, sym_freq* pSyms1) -{ - const uint cMaxPasses = 4; - uint32 hist[256 * cMaxPasses]; clear_obj(hist); - for (uint i = 0; i < num_syms; i++) { uint freq = pSyms0[i].m_key; hist[freq & 0xFF]++; hist[256 + ((freq >> 8) & 0xFF)]++; hist[256*2 + ((freq >> 16) & 0xFF)]++; hist[256*3 + ((freq >> 24) & 0xFF)]++; } - sym_freq* pCur_syms = pSyms0, *pNew_syms = pSyms1; - uint total_passes = cMaxPasses; while ((total_passes > 1) && (num_syms == hist[(total_passes - 1) * 256])) total_passes--; - for (uint pass_shift = 0, pass = 0; pass < total_passes; pass++, pass_shift += 8) - { - const uint32* pHist = &hist[pass << 8]; - uint offsets[256], cur_ofs = 0; - for (uint i = 0; i < 256; i++) { offsets[i] = cur_ofs; cur_ofs += pHist[i]; } - for (uint i = 0; i < num_syms; i++) - pNew_syms[offsets[(pCur_syms[i].m_key >> pass_shift) & 0xFF]++] = pCur_syms[i]; - sym_freq* t = pCur_syms; pCur_syms = pNew_syms; pNew_syms = t; - } - return pCur_syms; -} - -// calculate_minimum_redundancy() originally written by: Alistair Moffat, alistair@cs.mu.oz.au, Jyrki Katajainen, jyrki@diku.dk, November 1996. 
-static void calculate_minimum_redundancy(sym_freq *A, int n)
-{
-  int root, leaf, next, avbl, used, dpth;
-  if (n==0) return; else if (n==1) { A[0].m_key = 1; return; }
-  A[0].m_key += A[1].m_key; root = 0; leaf = 2;
-  for (next=1; next < n-1; next++)
-  {
-    if (leaf>=n || A[root].m_key<A[leaf].m_key) { A[next].m_key = A[root].m_key; A[root++].m_key = next; } else A[next].m_key = A[leaf++].m_key;
-    if (leaf>=n || (root<next && A[root].m_key<A[leaf].m_key)) { A[next].m_key += A[root].m_key; A[root++].m_key = next; } else A[next].m_key += A[leaf++].m_key;
-  }
-  A[n-2].m_key = 0;
-  for (next=n-3; next>=0; next--) A[next].m_key = A[A[next].m_key].m_key+1;
-  avbl = 1; used = dpth = 0; root = n-2; next = n-1;
-  while (avbl>0)
-  {
-    while (root>=0 && (int)A[root].m_key==dpth) { used++; root--; }
-    while (avbl>used) { A[next--].m_key = dpth; avbl--; }
-    avbl = 2*used; dpth++; used = 0;
-  }
-}
-
-// Limits canonical Huffman code table's max code size to max_code_size.
-static void huffman_enforce_max_code_size(int *pNum_codes, int code_list_len, int max_code_size)
-{
-  if (code_list_len <= 1) return;
-
-  for (int i = max_code_size + 1; i <= MAX_HUFF_CODESIZE; i++) pNum_codes[max_code_size] += pNum_codes[i];
-
-  uint32 total = 0;
-  for (int i = max_code_size; i > 0; i--)
-    total += (((uint32)pNum_codes[i]) << (max_code_size - i));
-
-  while (total != (1UL << max_code_size))
-  {
-    pNum_codes[max_code_size]--;
-    for (int i = max_code_size - 1; i > 0; i--)
-    {
-      if (pNum_codes[i]) { pNum_codes[i]--; pNum_codes[i + 1] += 2; break; }
-    }
-    total--;
-  }
-}
-
-// Generates an optimized Huffman table.
-void jpeg_encoder::optimize_huffman_table(int table_num, int table_len)
-{
-  sym_freq syms0[MAX_HUFF_SYMBOLS], syms1[MAX_HUFF_SYMBOLS];
-  syms0[0].m_key = 1; syms0[0].m_sym_index = 0; // dummy symbol, assures that no valid code contains all 1's
-  int num_used_syms = 1;
-  const uint32 *pSym_count = &m_huff_count[table_num][0];
-  for (int i = 0; i < table_len; i++)
-    if (pSym_count[i]) { syms0[num_used_syms].m_key = pSym_count[i]; syms0[num_used_syms++].m_sym_index = i + 1; }
-  sym_freq* pSyms = radix_sort_syms(num_used_syms, syms0, syms1);
-  calculate_minimum_redundancy(pSyms, num_used_syms);
-
-  // Count the # of symbols of each code size.
-  int num_codes[1 + MAX_HUFF_CODESIZE]; clear_obj(num_codes);
-  for (int i = 0; i < num_used_syms; i++)
-    num_codes[pSyms[i].m_key]++;
-
-  const uint JPGE_CODE_SIZE_LIMIT = 16; // the maximum possible size of a JPEG Huffman code (valid range is [9,16] - 9 vs. 8 because of the dummy symbol)
-  huffman_enforce_max_code_size(num_codes, num_used_syms, JPGE_CODE_SIZE_LIMIT);
-
-  // Compute m_huff_bits array, which contains the # of symbols per code size.
-  clear_obj(m_huff_bits[table_num]);
-  for (int i = 1; i <= (int)JPGE_CODE_SIZE_LIMIT; i++)
-    m_huff_bits[table_num][i] = static_cast<uint8>(num_codes[i]);
-
-  // Remove the dummy symbol added above, which must be in largest bucket.
-  for (int i = JPGE_CODE_SIZE_LIMIT; i >= 1; i--)
-  {
-    if (m_huff_bits[table_num][i]) { m_huff_bits[table_num][i]--; break; }
-  }
-
-  // Compute the m_huff_val array, which contains the symbol indices sorted by code size (smallest to largest).
-  for (int i = num_used_syms - 1; i >= 1; i--)
-    m_huff_val[table_num][num_used_syms - 1 - i] = static_cast<uint8>(pSyms[i].m_sym_index - 1);
-}
-
-// JPEG marker generation.
-void jpeg_encoder::emit_byte(uint8 i) -{ - m_all_stream_writes_succeeded = m_all_stream_writes_succeeded && m_pStream->put_obj(i); -} - -void jpeg_encoder::emit_word(uint i) -{ - emit_byte(uint8(i >> 8)); emit_byte(uint8(i & 0xFF)); -} - -void jpeg_encoder::emit_marker(int marker) -{ - emit_byte(uint8(0xFF)); emit_byte(uint8(marker)); -} - -// Emit JFIF marker -void jpeg_encoder::emit_jfif_app0() -{ - emit_marker(M_APP0); - emit_word(2 + 4 + 1 + 2 + 1 + 2 + 2 + 1 + 1); - emit_byte(0x4A); emit_byte(0x46); emit_byte(0x49); emit_byte(0x46); /* Identifier: ASCII "JFIF" */ - emit_byte(0); - emit_byte(1); /* Major version */ - emit_byte(1); /* Minor version */ - emit_byte(0); /* Density unit */ - emit_word(1); - emit_word(1); - emit_byte(0); /* No thumbnail image */ - emit_byte(0); -} - -// Emit quantization tables -void jpeg_encoder::emit_dqt() -{ - for (int i = 0; i < ((m_num_components == 3) ? 2 : 1); i++) - { - emit_marker(M_DQT); - emit_word(64 + 1 + 2); - emit_byte(static_cast(i)); - for (int j = 0; j < 64; j++) - emit_byte(static_cast(m_quantization_tables[i][j])); - } -} - -// Emit start of frame marker -void jpeg_encoder::emit_sof() -{ - emit_marker(M_SOF0); /* baseline */ - emit_word(3 * m_num_components + 2 + 5 + 1); - emit_byte(8); /* precision */ - emit_word(m_image_y); - emit_word(m_image_x); - emit_byte(m_num_components); - for (int i = 0; i < m_num_components; i++) - { - emit_byte(static_cast(i + 1)); /* component ID */ - emit_byte((m_comp_h_samp[i] << 4) + m_comp_v_samp[i]); /* h and v sampling */ - emit_byte(i > 0); /* quant. table num */ - } -} - -// Emit Huffman table. -void jpeg_encoder::emit_dht(uint8 *bits, uint8 *val, int index, bool ac_flag) -{ - emit_marker(M_DHT); - - int length = 0; - for (int i = 1; i <= 16; i++) - length += bits[i]; - - emit_word(length + 2 + 1 + 16); - emit_byte(static_cast(index + (ac_flag << 4))); - - for (int i = 1; i <= 16; i++) - emit_byte(bits[i]); - - for (int i = 0; i < length; i++) - emit_byte(val[i]); -} - -// Emit all Huffman tables. -void jpeg_encoder::emit_dhts() -{ - emit_dht(m_huff_bits[0+0], m_huff_val[0+0], 0, false); - emit_dht(m_huff_bits[2+0], m_huff_val[2+0], 0, true); - if (m_num_components == 3) - { - emit_dht(m_huff_bits[0+1], m_huff_val[0+1], 1, false); - emit_dht(m_huff_bits[2+1], m_huff_val[2+1], 1, true); - } -} - -// emit start of scan -void jpeg_encoder::emit_sos() -{ - emit_marker(M_SOS); - emit_word(2 * m_num_components + 2 + 1 + 3); - emit_byte(m_num_components); - for (int i = 0; i < m_num_components; i++) - { - emit_byte(static_cast(i + 1)); - if (i == 0) - emit_byte((0 << 4) + 0); - else - emit_byte((1 << 4) + 1); - } - emit_byte(0); /* spectral selection */ - emit_byte(63); - emit_byte(0); -} - -// Emit all markers at beginning of image file. -void jpeg_encoder::emit_markers() -{ - emit_marker(M_SOI); - emit_jfif_app0(); - emit_dqt(); - emit_sof(); - emit_dhts(); - emit_sos(); -} - -// Compute the actual canonical Huffman codes/code sizes given the JPEG huff bits and val arrays. 
-void jpeg_encoder::compute_huffman_table(uint *codes, uint8 *code_sizes, uint8 *bits, uint8 *val) -{ - int i, l, last_p, si; - uint8 huff_size[257]; - uint huff_code[257]; - uint code; - - int p = 0; - for (l = 1; l <= 16; l++) - for (i = 1; i <= bits[l]; i++) - huff_size[p++] = (char)l; - - huff_size[p] = 0; last_p = p; // write sentinel - - code = 0; si = huff_size[0]; p = 0; - - while (huff_size[p]) - { - while (huff_size[p] == si) - huff_code[p++] = code++; - code <<= 1; - si++; - } - - memset(codes, 0, sizeof(codes[0])*256); - memset(code_sizes, 0, sizeof(code_sizes[0])*256); - for (p = 0; p < last_p; p++) - { - codes[val[p]] = huff_code[p]; - code_sizes[val[p]] = huff_size[p]; - } -} - -// Quantization table generation. -void jpeg_encoder::compute_quant_table(int32 *pDst, int16 *pSrc) -{ - int32 q; - if (m_params.m_quality < 50) - q = 5000 / m_params.m_quality; - else - q = 200 - m_params.m_quality * 2; - for (int i = 0; i < 64; i++) - { - int32 j = *pSrc++; j = (j * q + 50L) / 100L; - *pDst++ = JPGE_MIN(JPGE_MAX(j, 1), 255); - } -} - -// Higher-level methods. -void jpeg_encoder::first_pass_init() -{ - m_bit_buffer = 0; m_bits_in = 0; - memset(m_last_dc_val, 0, 3 * sizeof(m_last_dc_val[0])); - m_mcu_y_ofs = 0; - m_pass_num = 1; -} - -bool jpeg_encoder::second_pass_init() -{ - compute_huffman_table(&m_huff_codes[0+0][0], &m_huff_code_sizes[0+0][0], m_huff_bits[0+0], m_huff_val[0+0]); - compute_huffman_table(&m_huff_codes[2+0][0], &m_huff_code_sizes[2+0][0], m_huff_bits[2+0], m_huff_val[2+0]); - if (m_num_components > 1) - { - compute_huffman_table(&m_huff_codes[0+1][0], &m_huff_code_sizes[0+1][0], m_huff_bits[0+1], m_huff_val[0+1]); - compute_huffman_table(&m_huff_codes[2+1][0], &m_huff_code_sizes[2+1][0], m_huff_bits[2+1], m_huff_val[2+1]); - } - first_pass_init(); - emit_markers(); - m_pass_num = 2; - return true; -} - -bool jpeg_encoder::jpg_open(int p_x_res, int p_y_res, int src_channels) -{ - m_num_components = 3; - switch (m_params.m_subsampling) - { - case Y_ONLY: - { - m_num_components = 1; - m_comp_h_samp[0] = 1; m_comp_v_samp[0] = 1; - m_mcu_x = 8; m_mcu_y = 8; - break; - } - case H1V1: - { - m_comp_h_samp[0] = 1; m_comp_v_samp[0] = 1; - m_comp_h_samp[1] = 1; m_comp_v_samp[1] = 1; - m_comp_h_samp[2] = 1; m_comp_v_samp[2] = 1; - m_mcu_x = 8; m_mcu_y = 8; - break; - } - case H2V1: - { - m_comp_h_samp[0] = 2; m_comp_v_samp[0] = 1; - m_comp_h_samp[1] = 1; m_comp_v_samp[1] = 1; - m_comp_h_samp[2] = 1; m_comp_v_samp[2] = 1; - m_mcu_x = 16; m_mcu_y = 8; - break; - } - case H2V2: - { - m_comp_h_samp[0] = 2; m_comp_v_samp[0] = 2; - m_comp_h_samp[1] = 1; m_comp_v_samp[1] = 1; - m_comp_h_samp[2] = 1; m_comp_v_samp[2] = 1; - m_mcu_x = 16; m_mcu_y = 16; - } - } - - m_image_x = p_x_res; m_image_y = p_y_res; - m_image_bpp = src_channels; - m_image_bpl = m_image_x * src_channels; - m_image_x_mcu = (m_image_x + m_mcu_x - 1) & (~(m_mcu_x - 1)); - m_image_y_mcu = (m_image_y + m_mcu_y - 1) & (~(m_mcu_y - 1)); - m_image_bpl_xlt = m_image_x * m_num_components; - m_image_bpl_mcu = m_image_x_mcu * m_num_components; - m_mcus_per_row = m_image_x_mcu / m_mcu_x; - - if ((m_mcu_lines[0] = static_cast(jpge_malloc(m_image_bpl_mcu * m_mcu_y))) == NULL) return false; - for (int i = 1; i < m_mcu_y; i++) - m_mcu_lines[i] = m_mcu_lines[i-1] + m_image_bpl_mcu; - - compute_quant_table(m_quantization_tables[0], s_std_lum_quant); - compute_quant_table(m_quantization_tables[1], m_params.m_no_chroma_discrim_flag ? 
s_std_lum_quant : s_std_croma_quant); - - m_out_buf_left = JPGE_OUT_BUF_SIZE; - m_pOut_buf = m_out_buf; - - if (m_params.m_two_pass_flag) - { - clear_obj(m_huff_count); - first_pass_init(); - } - else - { - memcpy(m_huff_bits[0+0], s_dc_lum_bits, 17); memcpy(m_huff_val [0+0], s_dc_lum_val, DC_LUM_CODES); - memcpy(m_huff_bits[2+0], s_ac_lum_bits, 17); memcpy(m_huff_val [2+0], s_ac_lum_val, AC_LUM_CODES); - memcpy(m_huff_bits[0+1], s_dc_chroma_bits, 17); memcpy(m_huff_val [0+1], s_dc_chroma_val, DC_CHROMA_CODES); - memcpy(m_huff_bits[2+1], s_ac_chroma_bits, 17); memcpy(m_huff_val [2+1], s_ac_chroma_val, AC_CHROMA_CODES); - if (!second_pass_init()) return false; // in effect, skip over the first pass - } - return m_all_stream_writes_succeeded; -} - -void jpeg_encoder::load_block_8_8_grey(int x) -{ - uint8 *pSrc; - sample_array_t *pDst = m_sample_array; - x <<= 3; - for (int i = 0; i < 8; i++, pDst += 8) - { - pSrc = m_mcu_lines[i] + x; - pDst[0] = pSrc[0] - 128; pDst[1] = pSrc[1] - 128; pDst[2] = pSrc[2] - 128; pDst[3] = pSrc[3] - 128; - pDst[4] = pSrc[4] - 128; pDst[5] = pSrc[5] - 128; pDst[6] = pSrc[6] - 128; pDst[7] = pSrc[7] - 128; - } -} - -void jpeg_encoder::load_block_8_8(int x, int y, int c) -{ - uint8 *pSrc; - sample_array_t *pDst = m_sample_array; - x = (x * (8 * 3)) + c; - y <<= 3; - for (int i = 0; i < 8; i++, pDst += 8) - { - pSrc = m_mcu_lines[y + i] + x; - pDst[0] = pSrc[0 * 3] - 128; pDst[1] = pSrc[1 * 3] - 128; pDst[2] = pSrc[2 * 3] - 128; pDst[3] = pSrc[3 * 3] - 128; - pDst[4] = pSrc[4 * 3] - 128; pDst[5] = pSrc[5 * 3] - 128; pDst[6] = pSrc[6 * 3] - 128; pDst[7] = pSrc[7 * 3] - 128; - } -} - -void jpeg_encoder::load_block_16_8(int x, int c) -{ - uint8 *pSrc1, *pSrc2; - sample_array_t *pDst = m_sample_array; - x = (x * (16 * 3)) + c; - int a = 0, b = 2; - for (int i = 0; i < 16; i += 2, pDst += 8) - { - pSrc1 = m_mcu_lines[i + 0] + x; - pSrc2 = m_mcu_lines[i + 1] + x; - pDst[0] = ((pSrc1[ 0 * 3] + pSrc1[ 1 * 3] + pSrc2[ 0 * 3] + pSrc2[ 1 * 3] + a) >> 2) - 128; pDst[1] = ((pSrc1[ 2 * 3] + pSrc1[ 3 * 3] + pSrc2[ 2 * 3] + pSrc2[ 3 * 3] + b) >> 2) - 128; - pDst[2] = ((pSrc1[ 4 * 3] + pSrc1[ 5 * 3] + pSrc2[ 4 * 3] + pSrc2[ 5 * 3] + a) >> 2) - 128; pDst[3] = ((pSrc1[ 6 * 3] + pSrc1[ 7 * 3] + pSrc2[ 6 * 3] + pSrc2[ 7 * 3] + b) >> 2) - 128; - pDst[4] = ((pSrc1[ 8 * 3] + pSrc1[ 9 * 3] + pSrc2[ 8 * 3] + pSrc2[ 9 * 3] + a) >> 2) - 128; pDst[5] = ((pSrc1[10 * 3] + pSrc1[11 * 3] + pSrc2[10 * 3] + pSrc2[11 * 3] + b) >> 2) - 128; - pDst[6] = ((pSrc1[12 * 3] + pSrc1[13 * 3] + pSrc2[12 * 3] + pSrc2[13 * 3] + a) >> 2) - 128; pDst[7] = ((pSrc1[14 * 3] + pSrc1[15 * 3] + pSrc2[14 * 3] + pSrc2[15 * 3] + b) >> 2) - 128; - int temp = a; a = b; b = temp; - } -} - -void jpeg_encoder::load_block_16_8_8(int x, int c) -{ - uint8 *pSrc1; - sample_array_t *pDst = m_sample_array; - x = (x * (16 * 3)) + c; - for (int i = 0; i < 8; i++, pDst += 8) - { - pSrc1 = m_mcu_lines[i + 0] + x; - pDst[0] = ((pSrc1[ 0 * 3] + pSrc1[ 1 * 3]) >> 1) - 128; pDst[1] = ((pSrc1[ 2 * 3] + pSrc1[ 3 * 3]) >> 1) - 128; - pDst[2] = ((pSrc1[ 4 * 3] + pSrc1[ 5 * 3]) >> 1) - 128; pDst[3] = ((pSrc1[ 6 * 3] + pSrc1[ 7 * 3]) >> 1) - 128; - pDst[4] = ((pSrc1[ 8 * 3] + pSrc1[ 9 * 3]) >> 1) - 128; pDst[5] = ((pSrc1[10 * 3] + pSrc1[11 * 3]) >> 1) - 128; - pDst[6] = ((pSrc1[12 * 3] + pSrc1[13 * 3]) >> 1) - 128; pDst[7] = ((pSrc1[14 * 3] + pSrc1[15 * 3]) >> 1) - 128; - } -} - -void jpeg_encoder::load_quantized_coefficients(int component_num) -{ - int32 *q = m_quantization_tables[component_num > 0]; - int16 *pDst = m_coefficient_array; - for 
(int i = 0; i < 64; i++) - { - sample_array_t j = m_sample_array[s_zag[i]]; - if (j < 0) - { - if ((j = -j + (*q >> 1)) < *q) - *pDst++ = 0; - else - *pDst++ = static_cast(-(j / *q)); - } - else - { - if ((j = j + (*q >> 1)) < *q) - *pDst++ = 0; - else - *pDst++ = static_cast((j / *q)); - } - q++; - } -} - -void jpeg_encoder::flush_output_buffer() -{ - if (m_out_buf_left != JPGE_OUT_BUF_SIZE) - m_all_stream_writes_succeeded = m_all_stream_writes_succeeded && m_pStream->put_buf(m_out_buf, JPGE_OUT_BUF_SIZE - m_out_buf_left); - m_pOut_buf = m_out_buf; - m_out_buf_left = JPGE_OUT_BUF_SIZE; -} - -void jpeg_encoder::put_bits(uint bits, uint len) -{ - m_bit_buffer |= ((uint32)bits << (24 - (m_bits_in += len))); - while (m_bits_in >= 8) - { - uint8 c; - #define JPGE_PUT_BYTE(c) { *m_pOut_buf++ = (c); if (--m_out_buf_left == 0) flush_output_buffer(); } - JPGE_PUT_BYTE(c = (uint8)((m_bit_buffer >> 16) & 0xFF)); - if (c == 0xFF) JPGE_PUT_BYTE(0); - m_bit_buffer <<= 8; - m_bits_in -= 8; - } -} - -void jpeg_encoder::code_coefficients_pass_one(int component_num) -{ - if (component_num >= 3) return; // just to shut up static analysis - int i, run_len, nbits, temp1; - int16 *src = m_coefficient_array; - uint32 *dc_count = component_num ? m_huff_count[0 + 1] : m_huff_count[0 + 0], *ac_count = component_num ? m_huff_count[2 + 1] : m_huff_count[2 + 0]; - - temp1 = src[0] - m_last_dc_val[component_num]; - m_last_dc_val[component_num] = src[0]; - if (temp1 < 0) temp1 = -temp1; - - nbits = 0; - while (temp1) - { - nbits++; temp1 >>= 1; - } - - dc_count[nbits]++; - for (run_len = 0, i = 1; i < 64; i++) - { - if ((temp1 = m_coefficient_array[i]) == 0) - run_len++; - else - { - while (run_len >= 16) - { - ac_count[0xF0]++; - run_len -= 16; - } - if (temp1 < 0) temp1 = -temp1; - nbits = 1; - while (temp1 >>= 1) nbits++; - ac_count[(run_len << 4) + nbits]++; - run_len = 0; - } - } - if (run_len) ac_count[0]++; -} - -void jpeg_encoder::code_coefficients_pass_two(int component_num) -{ - int i, j, run_len, nbits, temp1, temp2; - int16 *pSrc = m_coefficient_array; - uint *codes[2]; - uint8 *code_sizes[2]; - - if (component_num == 0) - { - codes[0] = m_huff_codes[0 + 0]; codes[1] = m_huff_codes[2 + 0]; - code_sizes[0] = m_huff_code_sizes[0 + 0]; code_sizes[1] = m_huff_code_sizes[2 + 0]; - } - else - { - codes[0] = m_huff_codes[0 + 1]; codes[1] = m_huff_codes[2 + 1]; - code_sizes[0] = m_huff_code_sizes[0 + 1]; code_sizes[1] = m_huff_code_sizes[2 + 1]; - } - - temp1 = temp2 = pSrc[0] - m_last_dc_val[component_num]; - m_last_dc_val[component_num] = pSrc[0]; - - if (temp1 < 0) - { - temp1 = -temp1; temp2--; - } - - nbits = 0; - while (temp1) - { - nbits++; temp1 >>= 1; - } - - put_bits(codes[0][nbits], code_sizes[0][nbits]); - if (nbits) put_bits(temp2 & ((1 << nbits) - 1), nbits); - - for (run_len = 0, i = 1; i < 64; i++) - { - if ((temp1 = m_coefficient_array[i]) == 0) - run_len++; - else - { - while (run_len >= 16) - { - put_bits(codes[1][0xF0], code_sizes[1][0xF0]); - run_len -= 16; - } - if ((temp2 = temp1) < 0) - { - temp1 = -temp1; - temp2--; - } - nbits = 1; - while (temp1 >>= 1) - nbits++; - j = (run_len << 4) + nbits; - put_bits(codes[1][j], code_sizes[1][j]); - put_bits(temp2 & ((1 << nbits) - 1), nbits); - run_len = 0; - } - } - if (run_len) - put_bits(codes[1][0], code_sizes[1][0]); -} - -void jpeg_encoder::code_block(int component_num) -{ - DCT2D(m_sample_array); - load_quantized_coefficients(component_num); - if (m_pass_num == 1) - code_coefficients_pass_one(component_num); - else - 
code_coefficients_pass_two(component_num); -} - -void jpeg_encoder::process_mcu_row() -{ - if (m_num_components == 1) - { - for (int i = 0; i < m_mcus_per_row; i++) - { - load_block_8_8_grey(i); code_block(0); - } - } - else if ((m_comp_h_samp[0] == 1) && (m_comp_v_samp[0] == 1)) - { - for (int i = 0; i < m_mcus_per_row; i++) - { - load_block_8_8(i, 0, 0); code_block(0); load_block_8_8(i, 0, 1); code_block(1); load_block_8_8(i, 0, 2); code_block(2); - } - } - else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 1)) - { - for (int i = 0; i < m_mcus_per_row; i++) - { - load_block_8_8(i * 2 + 0, 0, 0); code_block(0); load_block_8_8(i * 2 + 1, 0, 0); code_block(0); - load_block_16_8_8(i, 1); code_block(1); load_block_16_8_8(i, 2); code_block(2); - } - } - else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 2)) - { - for (int i = 0; i < m_mcus_per_row; i++) - { - load_block_8_8(i * 2 + 0, 0, 0); code_block(0); load_block_8_8(i * 2 + 1, 0, 0); code_block(0); - load_block_8_8(i * 2 + 0, 1, 0); code_block(0); load_block_8_8(i * 2 + 1, 1, 0); code_block(0); - load_block_16_8(i, 1); code_block(1); load_block_16_8(i, 2); code_block(2); - } - } -} - -bool jpeg_encoder::terminate_pass_one() -{ - optimize_huffman_table(0+0, DC_LUM_CODES); optimize_huffman_table(2+0, AC_LUM_CODES); - if (m_num_components > 1) - { - optimize_huffman_table(0+1, DC_CHROMA_CODES); optimize_huffman_table(2+1, AC_CHROMA_CODES); - } - return second_pass_init(); -} - -bool jpeg_encoder::terminate_pass_two() -{ - put_bits(0x7F, 7); - flush_output_buffer(); - emit_marker(M_EOI); - m_pass_num++; // purposely bump up m_pass_num, for debugging - return true; -} - -bool jpeg_encoder::process_end_of_image() -{ - if (m_mcu_y_ofs) - { - if (m_mcu_y_ofs < 16) // check here just to shut up static analysis - { - for (int i = m_mcu_y_ofs; i < m_mcu_y; i++) - memcpy(m_mcu_lines[i], m_mcu_lines[m_mcu_y_ofs - 1], m_image_bpl_mcu); - } - - process_mcu_row(); - } - - if (m_pass_num == 1) - return terminate_pass_one(); - else - return terminate_pass_two(); -} - -void jpeg_encoder::load_mcu(const void *pSrc) -{ - const uint8* Psrc = reinterpret_cast(pSrc); - - uint8* pDst = m_mcu_lines[m_mcu_y_ofs]; // OK to write up to m_image_bpl_xlt bytes to pDst - - if (m_num_components == 1) - { - if (m_image_bpp == 4) - RGBA_to_Y(pDst, Psrc, m_image_x); - else if (m_image_bpp == 3) - RGB_to_Y(pDst, Psrc, m_image_x); - else - memcpy(pDst, Psrc, m_image_x); - } - else - { - if (m_image_bpp == 4) - RGBA_to_YCC(pDst, Psrc, m_image_x); - else if (m_image_bpp == 3) - RGB_to_YCC(pDst, Psrc, m_image_x); - else - Y_to_YCC(pDst, Psrc, m_image_x); - } - - // Possibly duplicate pixels at end of scanline if not a multiple of 8 or 16 - if (m_num_components == 1) - memset(m_mcu_lines[m_mcu_y_ofs] + m_image_bpl_xlt, pDst[m_image_bpl_xlt - 1], m_image_x_mcu - m_image_x); - else - { - const uint8 y = pDst[m_image_bpl_xlt - 3 + 0], cb = pDst[m_image_bpl_xlt - 3 + 1], cr = pDst[m_image_bpl_xlt - 3 + 2]; - uint8 *q = m_mcu_lines[m_mcu_y_ofs] + m_image_bpl_xlt; - for (int i = m_image_x; i < m_image_x_mcu; i++) - { - *q++ = y; *q++ = cb; *q++ = cr; - } - } - - if (++m_mcu_y_ofs == m_mcu_y) - { - process_mcu_row(); - m_mcu_y_ofs = 0; - } -} - -void jpeg_encoder::clear() -{ - m_mcu_lines[0] = NULL; - m_pass_num = 0; - m_all_stream_writes_succeeded = true; -} - -jpeg_encoder::jpeg_encoder() -{ - clear(); -} - -jpeg_encoder::~jpeg_encoder() -{ - deinit(); -} - -bool jpeg_encoder::init(output_stream *pStream, int64_t width, int64_t height, int64_t src_channels, const params 
&comp_params) -{ - deinit(); - if (((!pStream) || (width < 1) || (height < 1)) || ((src_channels != 1) && (src_channels != 3) && (src_channels != 4)) || (!comp_params.check_valid())) return false; - m_pStream = pStream; - m_params = comp_params; - return jpg_open(width, height, src_channels); -} - -void jpeg_encoder::deinit() -{ - jpge_free(m_mcu_lines[0]); - clear(); -} - -bool jpeg_encoder::process_scanline(const void* pScanline) -{ - if ((m_pass_num < 1) || (m_pass_num > 2)) return false; - if (m_all_stream_writes_succeeded) - { - if (!pScanline) - { - if (!process_end_of_image()) return false; - } - else - { - load_mcu(pScanline); - } - } - return m_all_stream_writes_succeeded; -} - -// Higher level wrappers/examples (optional). -#include - -class cfile_stream : public output_stream -{ - cfile_stream(const cfile_stream &); - cfile_stream &operator= (const cfile_stream &); - - FILE* m_pFile; - bool m_bStatus; - -public: - cfile_stream() : m_pFile(NULL), m_bStatus(false) { } - - virtual ~cfile_stream() - { - close(); - } - - bool open(const char *pFilename) - { - close(); -#if defined(_MSC_VER) - if (fopen_s(&m_pFile, pFilename, "wb") != 0) - { - return false; - } -#else - m_pFile = fopen(pFilename, "wb"); -#endif - m_bStatus = (m_pFile != NULL); - return m_bStatus; - } - - bool close() - { - if (m_pFile) - { - if (fclose(m_pFile) == EOF) - { - m_bStatus = false; - } - m_pFile = NULL; - } - return m_bStatus; - } - - virtual bool put_buf(const void* pBuf, int64_t len) - { - m_bStatus = m_bStatus && (fwrite(pBuf, len, 1, m_pFile) == 1); - return m_bStatus; - } - - uint get_size() const - { - return m_pFile ? ftell(m_pFile) : 0; - } -}; - -// Writes JPEG image to file. -bool compress_image_to_jpeg_file(const char *pFilename, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params) -{ - cfile_stream dst_stream; - if (!dst_stream.open(pFilename)) - return false; - - jpge::jpeg_encoder dst_image; - if (!dst_image.init(&dst_stream, width, height, num_channels, comp_params)) - return false; - - for (uint pass_index = 0; pass_index < dst_image.get_total_passes(); pass_index++) - { - for (int64_t i = 0; i < height; i++) - { - // i, width, and num_channels are all 64bit - const uint8* pBuf = pImage_data + i * width * num_channels; - if (!dst_image.process_scanline(pBuf)) - return false; - } - if (!dst_image.process_scanline(NULL)) - return false; - } - - dst_image.deinit(); - - return dst_stream.close(); -} - -class memory_stream : public output_stream -{ - memory_stream(const memory_stream &); - memory_stream &operator= (const memory_stream &); - - uint8 *m_pBuf; - uint64_t m_buf_size, m_buf_ofs; - -public: - memory_stream(void *pBuf, uint64_t buf_size) : m_pBuf(static_cast(pBuf)), m_buf_size(buf_size), m_buf_ofs(0) { } - - virtual ~memory_stream() { } - - virtual bool put_buf(const void* pBuf, int64_t len) - { - uint64_t buf_remaining = m_buf_size - m_buf_ofs; - if ((uint64_t)len > buf_remaining) - return false; - memcpy(m_pBuf + m_buf_ofs, pBuf, len); - m_buf_ofs += len; - return true; - } - - uint64_t get_size() const - { - return m_buf_ofs; - } -}; - -bool compress_image_to_jpeg_file_in_memory(void *pDstBuf, int64_t &buf_size, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params) -{ - if ((!pDstBuf) || (!buf_size)) - return false; - - memory_stream dst_stream(pDstBuf, buf_size); - - buf_size = 0; - - jpge::jpeg_encoder dst_image; - if (!dst_image.init(&dst_stream, width, height, num_channels, 
comp_params)) - return false; - - for (uint pass_index = 0; pass_index < dst_image.get_total_passes(); pass_index++) - { - for (int64_t i = 0; i < height; i++) - { - const uint8* pScanline = pImage_data + i * width * num_channels; - if (!dst_image.process_scanline(pScanline)) - return false; - } - if (!dst_image.process_scanline(NULL)) - return false; - } - - dst_image.deinit(); - - buf_size = dst_stream.get_size(); - return true; -} - -} // namespace jpge \ No newline at end of file diff --git a/spaces/juancopi81/youtube-music-transcribe/t5x/models.py b/spaces/juancopi81/youtube-music-transcribe/t5x/models.py deleted file mode 100644 index d4ec78aee68c57612576b2282a41f8eddf3bd28d..0000000000000000000000000000000000000000 --- a/spaces/juancopi81/youtube-music-transcribe/t5x/models.py +++ /dev/null @@ -1,1178 +0,0 @@ -# Copyright 2022 The T5X Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""T5X Models. - -This module uses layers.py to build a higher-level model structure and define -methods for the loss computation as well as a train, prediction, and evaluation -steps. -""" - -import abc -import functools -from typing import Any, Callable, Mapping, MutableMapping, Optional, Tuple, Union - -import clu.metrics as clu_metrics -from flax import core as flax_core -from flax import linen as nn -from flax.core import scope as flax_scope -from flax.training import common_utils -import jax -import jax.numpy as jnp -import numpy as np -import seqio -from t5x import decoding -from t5x import losses -from t5x import metrics as metrics_lib -from t5x import optimizers -import tensorflow as tf -import typing_extensions - -Array = Union[np.ndarray, jnp.ndarray, jax.pxla.ShardedDeviceArray, tf.Tensor] -MetricsMap = metrics_lib.MetricsMap -PyTreeDef = type(jax.tree_structure(None)) - - -class TokensIdsToLogitsCallable(typing_extensions.Protocol): - """Token ids to logits mapping call signature.""" - - def __call__( - self, token_ids: jnp.ndarray, cache: Mapping[str, jnp.ndarray] - ) -> Tuple[jnp.ndarray, Mapping[str, jnp.ndarray]]: - """Performs forward pass to convert token ids to logits. - - Args: - token_ids: [batch_size, 1] int32 tokens for single position used during - incremental decoding. Non-0 prefix tokens to be used as a forced prompt. - cache: flax attention cache. - - Returns: - a tuple of logits with a shape [batch_size, vocab_size] and an updated - cache. - """ - ... - - -class DecodeFnCallable(typing_extensions.Protocol): - """Decoding function call signature.""" - - def __call__(self, *, inputs: jnp.ndarray, cache: Mapping[str, jnp.ndarray], - tokens_to_logits: TokensIdsToLogitsCallable, eos_id: int, - num_decodes: int, decode_rng: Optional[jax.random.KeyArray], - cache_offset: int, **kwargs) -> Tuple[jnp.ndarray, jnp.ndarray]: - """Decoding function interface. - - Args: - inputs: [batch_size, max_decode_len] int32 sequence of tokens, with non-0 - prefix tokens to be used as a forced prompt. - cache: flax attention cache. 
- tokens_to_logits: fast autoregressive decoder function taking single token - slices and cache and returning next-token logits and updated cache. - eos_id: end-of-sentence token for target vocabulary. - num_decodes: number of decoded sequences to be returned. - decode_rng: an optional JAX PRNG Key for stochastic sampling routines. - cache_offset: axis offset for cache, arising from scanned layers. - **kwargs: an optional kwargs. One common usecase of this is passing - decoding parameters at the callsite. - - Returns: - decodes: Array of sequences: [batch_size, num_decodes, max_decode_len]. - The `num_decodes` dimension is expected to be sorted by the `scores`, - i.e., `decodes[:, -1, :] has the highest scores among `num_decodes` - decoded sequences. - scores: Array of log likelihood scores: [batch_size, num_decodes] - """ - ... - - -class BaseModel(abc.ABC): - """Abstract base class for models. - - Wraps a flax module to provide a basic interface for computing loss, - evaluation metrics, prediction, and scoring. - - Subclasses must implement the abstract methods. Any additional arguments added - to these methods must have defaults or be bound at run time to fit the - interface expected by the standard training, inference, and evaluation - functions. - """ - - FEATURE_CONVERTER_CLS: Callable[..., seqio.FeatureConverter] - - def __init__(self, optimizer_def: optimizers.OptimizerDefType): - # TODO(jbulian): Move the optimizer out of the model and make it a training - # parameter. - self.optimizer_def = optimizer_def - - @abc.abstractmethod - def loss_fn( - self, - params: PyTreeDef, - batch: Mapping[str, jnp.ndarray], - dropout_rng: Optional[jax.random.KeyArray], - ) -> Tuple[jnp.ndarray, MetricsMap]: - """Computes loss and metrics. - - Args: - params: model parameters. - batch: a batch of inputs. - dropout_rng: rng to use for dropout, or None for deterministic mode. - - Returns: - loss: the loss computed for the given inputs and parameters. - aux: - weight_sum: sum of the per-token weights applied to the loss. - metrics: a mapping of metrics computed for this batch. - """ - pass - - def eval_fn( - self, - params: PyTreeDef, - batch: Mapping[str, jnp.ndarray], - ) -> Tuple[jnp.ndarray, MetricsMap]: - """Computes loss and metrics during the evaluation. - - Args: - params: model parameters. - batch: a batch of inputs. - - Returns: - loss: the loss computed for the given inputs and parameters. - aux: - weight_sum: sum of the per-token weights applied to the loss. - metrics: a mapping of metrics computed for this batch. - """ - return self.loss_fn( - params=params, - batch=batch, - dropout_rng=None, - ) - - def predict_batch(self, - params: PyTreeDef, - batch: Mapping[str, jnp.ndarray], - rng: Optional[jax.random.KeyArray] = None) -> jnp.ndarray: - """Predicts a batch of outputs from the model. - - Args: - params: model parameters. - batch: a batch of inputs. - rng: an optional RNG to use during prediction (e.g., for decoding). - - Returns: - The model predictions. - """ - return self.predict_batch_with_aux(params=params, batch=batch, rng=rng)[0] - - @abc.abstractmethod - def predict_batch_with_aux( - self, - params: PyTreeDef, - batch: Mapping[str, jnp.ndarray], - rng: Optional[jax.random.KeyArray] = None, - ) -> Tuple[jnp.ndarray, Mapping[str, jnp.ndarray]]: - """Predict a batch from the modelwith auxiliary outputs. - - Args: - params: model parameters. - batch: a batch of inputs. - rng: an optional RNG key to use during prediction (e.g., for decoding). 
- - Returns: - predictions: the model predictions - aux: auxiliary data - """ - pass - - @abc.abstractmethod - def score_batch(self, - params: PyTreeDef, - batch: Mapping[str, jnp.ndarray], - return_intermediates: bool = False) -> jnp.ndarray: - """Computes scores for batch.""" - pass - - @abc.abstractmethod - def get_initial_variables( - self, - rng: jax.random.KeyArray, - input_shapes: Mapping[str, Array], - input_types: Optional[Mapping[str, jnp.dtype]] = None - ) -> flax_scope.FrozenVariableDict: - """Returns the initial variables of the model.""" - pass - - -class BaseTransformerModel(BaseModel): - """Abstract base class for Transformer models. - - Subclasses must implement `predict_batch_with_aux`, `score_batch`, - `get_initial_variables` from `BaseModel` as well as `_compute_logits`. - """ - - def __init__( - self, - module: nn.Module, - input_vocabulary: seqio.Vocabulary, - output_vocabulary: seqio.Vocabulary, - optimizer_def: optimizers.OptimizerDefType, - decode_fn: Optional[DecodeFnCallable] = None, - label_smoothing: float = 0.0, - z_loss: float = 0.0, - loss_normalizing_factor: Optional[Union[ - float, int, str, losses.SpecialLossNormalizingFactor]] = None, - ): - self.module = module - self._input_vocabulary = input_vocabulary - self._output_vocabulary = output_vocabulary - self._decode_fn = decode_fn - self._label_smoothing = label_smoothing - self._z_loss = z_loss - self._loss_normalizing_factor = loss_normalizing_factor - - super().__init__(optimizer_def=optimizer_def) - - @property - def input_vocabulary(self): - return self._input_vocabulary - - @property - def output_vocabulary(self): - return self._output_vocabulary - - @property - def decode_fn(self): - return self._decode_fn - - @abc.abstractmethod - def _compute_logits( - self, - params: PyTreeDef, - batch: Mapping[str, jnp.ndarray], - dropout_rng: Optional[jax.random.KeyArray] = None) -> jnp.ndarray: - """Computes logits via a forward pass of the model.""" - pass - - def loss_fn( - self, - params: PyTreeDef, - batch: Mapping[str, jnp.ndarray], - dropout_rng: Optional[jax.random.KeyArray], - ) -> Tuple[jnp.ndarray, MetricsMap]: - """Loss function used for training with a cross-entropy loss.""" - logits = self._compute_logits(params, batch, dropout_rng) - - loss_normalizing_factor: Optional[Union[ - float, int, str, losses.SpecialLossNormalizingFactor]] - (loss_normalizing_factor, - weights) = losses.get_loss_normalizing_factor_and_weights( - self._loss_normalizing_factor, batch) - - loss, z_loss, _ = losses.compute_weighted_cross_entropy( - logits, - targets=batch['decoder_target_tokens'], - weights=weights, - label_smoothing=self._label_smoothing, - z_loss=self._z_loss, - loss_normalizing_factor=loss_normalizing_factor) - metrics = self._compute_metrics( - logits=logits, - targets=batch['decoder_target_tokens'], - mask=weights, - loss=loss, - z_loss=z_loss) - return loss, metrics - - def _compute_metrics( - self, - logits: jnp.ndarray, - targets: jnp.ndarray, - mask: jnp.ndarray, - loss: jnp.ndarray, - z_loss: Optional[jnp.ndarray] = None, - ) -> MetricsMap: - return compute_base_metrics( - logits=logits, targets=targets, mask=mask, loss=loss, z_loss=z_loss) - - -class EncoderDecoderModel(BaseTransformerModel): - """Wrapper class for the models.Transformer nn.module.""" - - FEATURE_CONVERTER_CLS = seqio.EncDecFeatureConverter - - def __init__( - self, - module: nn.Module, - input_vocabulary: seqio.Vocabulary, - output_vocabulary: seqio.Vocabulary, - optimizer_def: optimizers.OptimizerDefType, - decode_fn: 
DecodeFnCallable = decoding.beam_search, - feature_converter_cls: Optional[Callable[..., - seqio.FeatureConverter]] = None, - label_smoothing: float = 0.0, - z_loss: float = 0.0, - loss_normalizing_factor: Optional[float] = None, - ): - if feature_converter_cls is not None: - self.FEATURE_CONVERTER_CLS = feature_converter_cls # pylint: disable=invalid-name - super().__init__( - module=module, - input_vocabulary=input_vocabulary, - output_vocabulary=output_vocabulary, - optimizer_def=optimizer_def, - decode_fn=decode_fn, - label_smoothing=label_smoothing, - z_loss=z_loss, - loss_normalizing_factor=loss_normalizing_factor, - ) - - def get_initial_variables( - self, - rng: jax.random.KeyArray, - input_shapes: Mapping[str, Array], - input_types: Optional[Mapping[str, jnp.dtype]] = None - ) -> flax_scope.FrozenVariableDict: - """Get the initial variables for an encoder-decoder model.""" - input_types = {} if input_types is None else input_types - encoder_shape = input_shapes['encoder_input_tokens'] - encoder_type = input_types.get('encoder_input_tokens', jnp.float32) - decoder_shape = input_shapes['decoder_input_tokens'] - decoder_type = input_types.get('decoder_input_tokens', jnp.float32) - if 'encoder_positions' in input_shapes: - encoder_positions = jnp.ones( - input_shapes['encoder_positions'], - input_types.get('encoder_positions', jnp.int32)) - else: - encoder_positions = None - if 'decoder_positions' in input_shapes: - decoder_positions = jnp.ones( - input_shapes['decoder_positions'], - input_types.get('decoder_positions', jnp.int32)) - else: - decoder_positions = None - if 'encoder_segment_ids' in input_shapes: - encoder_segment_ids = jnp.ones( - input_shapes['encoder_segment_ids'], - input_types.get('encoder_segment_ids', jnp.int32)) - else: - encoder_segment_ids = None - if 'decoder_segment_ids' in input_shapes: - decoder_segment_ids = jnp.ones( - input_shapes['decoder_segment_ids'], - input_types.get('decoder_segment_ids', jnp.int32)) - else: - decoder_segment_ids = None - initial_variables = self.module.init( - rng, - jnp.ones(encoder_shape, encoder_type), - jnp.ones(decoder_shape, decoder_type), - jnp.ones(decoder_shape, decoder_type), - encoder_positions=encoder_positions, - decoder_positions=decoder_positions, - encoder_segment_ids=encoder_segment_ids, - decoder_segment_ids=decoder_segment_ids, - decode=False, - enable_dropout=False) - return initial_variables - - def _compute_logits( - self, - params: PyTreeDef, - batch: Mapping[str, jnp.ndarray], - dropout_rng: Optional[jax.random.KeyArray] = None, - mutable: flax_scope.CollectionFilter = False, - other_variables: Optional[PyTreeDef] = None, - ) -> Union[jnp.ndarray, Tuple[jnp.ndarray, flax_scope.FrozenVariableDict]]: - """Computes logits via a forward pass of `self.module_cls`.""" - # Dropout is provided only for the training mode. 
- rngs = {'dropout': dropout_rng} if dropout_rng is not None else None - if other_variables is None: - other_variables = {} - return self.module.apply( - { - 'params': params, - **other_variables - }, - batch['encoder_input_tokens'], - batch['decoder_input_tokens'], - batch['decoder_target_tokens'], - encoder_segment_ids=batch.get('encoder_segment_ids', None), - decoder_segment_ids=batch.get('decoder_segment_ids', None), - encoder_positions=batch.get('encoder_positions', None), - decoder_positions=batch.get('decoder_positions', None), - decode=False, - enable_dropout=rngs is not None, - rngs=rngs, - mutable=mutable) - - def _compute_logits_from_slice( - self, flat_ids: jnp.ndarray, flat_cache: Mapping[str, jnp.ndarray], - params: PyTreeDef, encoded_inputs: jnp.ndarray, raw_inputs: jnp.ndarray, - max_decode_length: int) -> Tuple[jnp.ndarray, Mapping[str, jnp.ndarray]]: - """Token slice to logits from decoder model.""" - # flat_ids: [batch * beam, seq_len=1] - # cache is expanded inside beam_search to become flat_cache - # flat_cache: [batch * beam, num_heads, depth_per_head, max_decode_len] - # flat_logits: [batch * beam, seq_len=1, vocab] - flat_logits, new_vars = self.module.apply( - { - 'params': params, - 'cache': flat_cache - }, - encoded_inputs, - raw_inputs, # only needed for encoder padding mask - flat_ids, - flat_ids, - enable_dropout=False, - decode=True, - max_decode_length=max_decode_length, - mutable=['cache'], - method=self.module.decode) - # Remove sequence length dimension since it's always 1 during decoding. - flat_logits = jnp.squeeze(flat_logits, axis=1) - new_flat_cache = new_vars['cache'] - return flat_logits, new_flat_cache - - def predict_batch_with_aux( - self, - params: PyTreeDef, - batch: Mapping[str, jnp.ndarray], - rng: Optional[jax.random.KeyArray] = None, - decoder_params: Optional[MutableMapping[str, Any]] = None, - return_all_decodes: bool = False, - num_decodes: int = 1, - prompt_with_targets: bool = False - ) -> Tuple[jnp.ndarray, Mapping[str, jnp.ndarray]]: - """Predict with fast decoding beam search on a batch. - - Here we refer to "parameters" for values that can be compiled into the - model dynamically, as opposed to static configuration settings that require - a recompile. For example, the model weights and the decoder brevity-penalty - are parameters and can be modified without requiring a recompile. The number - of layers, the batch size and the decoder beam size are configuration - options that require recompilation if changed. - - This method can be used with a customizable decoding function as long as it - follows the signature of `DecodeFnCallable`. In order to provide a unified - interface for the decoding functions, we use a generic names. For example, a - beam size is a concept unique to beam search. Conceptually, it corresponds - to the number of sequences returned by the beam search. Therefore, the - generic argument `num_decodes` corresponds to the beam size if - `self._decode_fn` is a beam search. For temperature sampling, `num_decodes` - corresponds to the number of independent sequences to be sampled. Typically - `num_decodes = 1` is used for temperature sampling. - - If `return_all_decodes = True`, the return tuple contains the predictions - with a shape [batch, num_decodes, max_decode_len] and the scores (i.e., log - probability of the generated sequence) with a shape [batch, num_decodes]. 
- - If `return_all_decodes = False`, the return tuple contains the predictions - with a shape [batch, max_decode_len] and the scores with a shape [batch]. - - `decoder_params` can be used to pass dynamic configurations to - `self.decode_fn`. An example usage is to pass different random seed (i.e., - `jax.random.PRNGKey(seed)` with different `seed` value). This can be done by - setting `decoder_params['decode_rng'] = jax.random.PRNGKey(seed)`. - - If `prompt_with_targets = True`, then `decoder_prompt_inputs` is initialized - from the batch's `decoder_input_tokens`. The EOS is stripped to avoid - decoding to stop after the prompt by matching to `output_vocabulary.eos_id`. - - Args: - params: model parameters. - batch: a batch of inputs. - rng: an optional RNG key to use during prediction, which is passed as - 'decode_rng' to the decoding function. - decoder_params: additional (model-independent) parameters for the decoder. - return_all_decodes: whether to return the entire beam or just the top-1. - num_decodes: the number of beams to use in beam search. - prompt_with_targets: Whether the force decode decoder_inputs. - - Returns: - A tuple containing: - the batch of predictions, with the entire beam if requested - an auxiliary dictionary of decoder scores - """ - # Prepare zeroed-out autoregressive cache. - # [batch, input_len] - inputs = batch['encoder_input_tokens'] - # [batch, target_len] - target_shape = batch['decoder_input_tokens'].shape - target_type = batch['decoder_input_tokens'].dtype - _, variables_with_cache = self.module.apply( - {'params': params}, - jnp.ones(inputs.shape, inputs.dtype), - jnp.ones(target_shape, target_type), - jnp.ones(target_shape, target_type), - decode=True, - enable_dropout=False, - mutable=['cache']) - - cache = variables_with_cache['cache'] - - # Prepare transformer fast-decoder call for beam search: for beam search, we - # need to set up our decoder model to handle a batch size equal to - # batch_size * num_decodes, where each batch item's data is expanded - # in-place rather than tiled. - # i.e. if we denote each batch element subtensor as el[n]: - # [el0, el1, el2] --> beamsize=2 --> [el0,el0,el1,el1,el2,el2] - # [batch * num_decodes, input_len, emb_dim] - encoded_inputs = decoding.flat_batch_beam_expand( - self.module.apply({'params': params}, - inputs, - enable_dropout=False, - method=self.module.encode), num_decodes) - - # [batch * num_decodes, input_len] - raw_inputs = decoding.flat_batch_beam_expand(inputs, num_decodes) - - tokens_ids_to_logits = functools.partial( - self._compute_logits_from_slice, - params=params, - encoded_inputs=encoded_inputs, - raw_inputs=raw_inputs, - max_decode_length=target_shape[1]) - - if decoder_params is None: - decoder_params = {} - if rng is not None: - if decoder_params.get('decode_rng') is not None: - raise ValueError( - f'Got RNG both from the `rng` argument ({rng}) and ' - f"`decoder_params['decode_rng']` ({decoder_params['decode_rng']}). " - 'Please specify one or the other.') - decoder_params['decode_rng'] = rng - - # `decoder_prompt_inputs` is initialized from the batch's - # `decoder_input_tokens`. The EOS is stripped to avoid decoding to stop - # after the prompt by matching to `output_vocabulary.eos_id`. - # These inputs are ignored by the beam search decode fn. 
- if prompt_with_targets: - decoder_prompt_inputs = batch['decoder_input_tokens'] - decoder_prompt_inputs = decoder_prompt_inputs * ( - decoder_prompt_inputs != self.output_vocabulary.eos_id) - else: - decoder_prompt_inputs = jnp.zeros_like(batch['decoder_input_tokens']) - - # TODO(hwchung): rename the returned value names to more generic ones. - # Using the above-defined single-step decoder function, run a - # beam search over possible sequences given input encoding. - # decodes: [batch, num_decodes, max_decode_len + 1] - # scores: [batch, num_decodes] - scanned = hasattr(self.module, 'scan_layers') and self.module.scan_layers - decodes, scores = self._decode_fn( - inputs=decoder_prompt_inputs, - cache=cache, - tokens_to_logits=tokens_ids_to_logits, - eos_id=self.output_vocabulary.eos_id, - num_decodes=num_decodes, - cache_offset=1 if scanned else 0, - **decoder_params) - - # Beam search returns [n_batch, n_beam, n_length] with beam dimension sorted - # in increasing order of log-probability. - # Return the highest scoring beam sequence. - if return_all_decodes: - return decodes, {'scores': scores} - else: - return decodes[:, -1, :], {'scores': scores[:, -1]} - - def score_batch( - self, - params: PyTreeDef, - batch: Mapping[str, jnp.ndarray], - return_intermediates: bool = False, - ) -> Union[jnp.ndarray, Tuple[jnp.ndarray, Mapping[str, Any]]]: - """Compute log likelihood score on a batch.""" - weights = batch['decoder_loss_weights'] - target_tokens = batch['decoder_target_tokens'] - - if return_intermediates: - logits, modified_variables = self._compute_logits( - params=params, batch=batch, mutable=['intermediates']) - - # Inside self.module, we called nn.Module.sow to track various - # intermediate values. We extract them here. - intermediates = flax_core.unfreeze( - modified_variables.get('intermediates', {})) - - # Track per-token labels and loss weights as well. These are not - # intermediate values of logit computation, so we manually add them here. - intermediates.setdefault('decoder', {}) - intermediates['decoder']['target_tokens'] = (target_tokens,) - intermediates['decoder']['loss_weights'] = (weights,) - # Note that the values are singleton tuples. This is because values inside - # `intermediates` should be tuples tracking all instantiations of a value. - # These values each have just one instantiation, hence singletons. - else: - logits = self._compute_logits(params, batch) # type: jnp.ndarray - - # Purposefully don't use config.z_loss because that term is for training - # stability and shouldn't affect our reported scores. - token_scores = -losses.cross_entropy_with_logits( - logits, - common_utils.onehot( - target_tokens, logits.shape[-1], on_value=1, off_value=0), - z_loss=0.0)[0] * weights - - sequence_scores = token_scores.sum(-1) - - if return_intermediates: - return sequence_scores, intermediates - - return sequence_scores - - -class DecoderOnlyModel(BaseTransformerModel): - """Model class for the decoder-only modules. - - It accepts inputs made out of only 'targets' or both 'inputs' - and 'targets'. If both 'inputs' and 'targets' are present, the loss will - be computed only on 'targets'. - - By default the self-attention is fully causal and a given position only - attends to the time steps before and itself. If - `inputs_bidirectional_attention = True`, the attention in the "inputs" region - is bidirectional. This architecture was referred to as "Prefix LM" in Raffel - et al. 2019 (https://arxiv.org/abs/1910.10683). 
- """ - - FEATURE_CONVERTER_CLS = seqio.DecoderFeatureConverter - - def __init__( - self, - module: nn.Module, - vocabulary: seqio.Vocabulary, - optimizer_def: optimizers.OptimizerDefType, - decode_fn: DecodeFnCallable = decoding.temperature_sample, - inputs_bidirectional_attention: bool = False, - feature_converter_cls: Optional[Callable[..., - seqio.FeatureConverter]] = None, - label_smoothing: float = 0.0, - z_loss: float = 0.0, - loss_normalizing_factor: Optional[float] = None, - ): - if feature_converter_cls is not None: - self.FEATURE_CONVERTER_CLS = feature_converter_cls # pylint: disable=invalid-name - self._inputs_bidirectional_attention = inputs_bidirectional_attention - super().__init__( - module, - input_vocabulary=vocabulary, - output_vocabulary=vocabulary, - optimizer_def=optimizer_def, - decode_fn=decode_fn, - label_smoothing=label_smoothing, - z_loss=z_loss, - loss_normalizing_factor=loss_normalizing_factor, - ) - - def get_initial_variables( - self, - rng: jax.random.KeyArray, - input_shapes: Mapping[str, Array], - input_types: Optional[Mapping[str, jnp.dtype]] = None - ) -> flax_scope.FrozenVariableDict: - """Get the initial variables.""" - input_types = {} if input_types is None else input_types - decoder_shape = input_shapes['decoder_input_tokens'] - decoder_type = input_types.get('decoder_input_tokens', jnp.float32) - initial_variables = self.module.init( - rng, - jnp.ones(decoder_shape, decoder_type), - jnp.ones(decoder_shape, decoder_type), - enable_dropout=False) - return initial_variables - - def _get_decoder_causal_attention(self, batch): - """Returns decoder causal attention from the batch or None.""" - if self._inputs_bidirectional_attention: - if 'decoder_causal_attention' not in batch: - raise ValueError('`inputs_bidirectional_attention` mode requires ' - '"decoder_causal_attention" feature in the batch') - decoder_causal_attention = batch['decoder_causal_attention'] - else: - decoder_causal_attention = None - - return decoder_causal_attention - - def _compute_logits( - self, - params: PyTreeDef, - batch: Mapping[str, jnp.ndarray], - dropout_rng: Optional[jax.random.KeyArray] = None, - mutable: flax_scope.CollectionFilter = False) -> jnp.ndarray: - """Computes logits via a forward pass of `self.module`.""" - rngs = {'dropout': dropout_rng} if dropout_rng is not None else None - decoder_causal_attention = self._get_decoder_causal_attention(batch) - - return self.module.apply( - {'params': params}, - batch['decoder_input_tokens'], - batch['decoder_target_tokens'], - decoder_segment_ids=batch.get('decoder_segment_ids', None), - decoder_positions=batch.get('decoder_positions', None), - decoder_causal_attention=decoder_causal_attention, - rngs=rngs, - decode=False, - enable_dropout=rngs is not None, - mutable=mutable) - - def _compute_logits_from_slice( - self, - flat_ids: jnp.ndarray, - flat_cache: Mapping[str, jnp.ndarray], - params: PyTreeDef, - max_decode_length: int, - ) -> Tuple[jnp.ndarray, Mapping[str, jnp.ndarray]]: - """Token slice to logits from decoder model.""" - # flat_ids: [batch, seq_len=1] - # flat_cache['cached_(keys|values)']: - # [batch, num_heads, depth_per_head, max_decode_length] - # flat_cache['cache_index']: [batch] - # flat_logits: [batch, seq_len=1, vocab] - flat_logits, new_vars = self.module.apply( - { - 'params': params, - 'cache': flat_cache - }, - flat_ids, - flat_ids, - enable_dropout=False, - decode=True, - max_decode_length=max_decode_length, - mutable=['cache']) - # Remove sequence length dimension since it's always 1 during 
decoding. - flat_logits = jnp.squeeze(flat_logits, axis=1) - new_flat_cache = new_vars['cache'] - return flat_logits, new_flat_cache - - def score_batch(self, - params: PyTreeDef, - batch: Mapping[str, jnp.ndarray], - return_intermediates: bool = False) -> jnp.ndarray: - """Compute log likelihood score on a batch.""" - - decoder_target_tokens = batch['decoder_target_tokens'] - weights = batch['decoder_loss_weights'] - - if return_intermediates: - logits, modified_variables = self._compute_logits( - params=params, - batch=batch, - dropout_rng=None, - mutable=['intermediates']) - - # Inside self.module, we called nn.Module.sow to track various - # intermediate values. We extract them here. - intermediates = flax_core.unfreeze( - modified_variables.get('intermediates', {})) - - # Track per-token labels and loss weights as well. These are not - # intermediate values of logit computation, so we manually add them here. - intermediates.setdefault('decoder', {}) - intermediates['decoder']['target_tokens'] = (decoder_target_tokens,) - intermediates['decoder']['loss_weights'] = (weights,) - # Note that the values are singleton tuples. This is because values inside - # `intermediates` should be tuples tracking all instantiations of a value. - # These values each have just one instantiation, hence singletons. - else: - logits = self._compute_logits( - params=params, batch=batch, dropout_rng=None) - - token_scores = -losses.cross_entropy_with_logits( - logits, - common_utils.onehot( - decoder_target_tokens, logits.shape[-1], on_value=1, off_value=0), - z_loss=0.0)[0] * weights - sequence_scores = token_scores.sum(-1) - - if return_intermediates: - return sequence_scores, intermediates - - return sequence_scores - - def _compute_kv_cache( - self, - params: PyTreeDef, - inputs: jnp.ndarray, - inputs_lengths: jnp.ndarray, - decoder_causal_attention: jnp.ndarray, - ) -> PyTreeDef: - """Compute the key/value cache on the input prefix.""" - _, variables_with_cache = self.module.apply({'params': params}, - jnp.ones_like(inputs), - jnp.ones_like(inputs), - enable_dropout=False, - decode=True, - mutable=['cache']) - cache = variables_with_cache['cache'] - - # Prefill our cache with all the inputs. `inputs_lengths` is the index of - # the last input token. The cache will be filled for all the input - # positions, save the last input token. The cache index will point to the - # index of this last input token which is considered during prefilling but - # not cached. This re-computation is required as the logits for this - # position are required for selecting the first output token. - # - # The cache is still `[B, ..., max_decode_len]` but any position less than - # the `inputs_length` will be non-zero, that is - # `cached_key[b, ..., i < inputs_lengths[b]] != 0`. - # - # The cache index is now a vector of size [B] = input_lengths - - # If `self._inputs_bidirectional_attention = False`, we should not pass - # batch['decoder_causal_attention'] to `module.apply` during cache prefill - # and pass None instead. 
- maybe_decoder_causal_attention = self._get_decoder_causal_attention( - {'decoder_causal_attention': decoder_causal_attention}) - - _, variables_with_cache = self.module.apply( - { - 'params': params, - 'cache': cache - }, - decoder_input_tokens=inputs, - # Use the `decoder_causal_attention`, which has 1 for all input - # positions, including the BOS token, as the targets so when the - # decoder attention mask is built, it will correctly cover the whole - # input, Using something like the inputs will cause the first input - # token (the 0 for BOS) will not be included in the mask. This also - # restricts the mask to not include any target positions like it would - # if you used `decoder_target_tokens`. - decoder_target_tokens=decoder_causal_attention, - decoder_causal_attention=maybe_decoder_causal_attention, - mutable=['cache'], - enable_dropout=False, - prefill=True, - prefill_lengths=inputs_lengths) - return variables_with_cache['cache'] - - def predict_batch_with_aux( - self, - params: PyTreeDef, - batch: Mapping[str, jnp.ndarray], - rng: Optional[jax.random.KeyArray] = None, - *, - return_all_decodes: bool = False, - num_decodes: int = 1, - decoder_params: Optional[MutableMapping[str, Any]] = None, - ) -> Tuple[jnp.ndarray, Mapping[str, jnp.ndarray]]: - """Predict with prefix. - - `decoder_params` can be used to pass dynamic configurations to - `self.decode_fn`. An example usage is to pass different random seed (i.e., - `jax.random.PRNGKey(seed)` with different `seed` value). This can be done by - setting `decoder_params['decode_rng'] = jax.random.PRNGKey(seed)`. - - Although this method is short, there are a few subtle points that. We use a - running example to make these points clear. - - ``` - Example - inputs = [9, 4, 6, 1] - targets = [3, 9, 1] - - seqio.DecoderFeatureConverter will generate these set of features - - decoder_target_tokens = [9, 4, 6, 1, 3, 9, 1, 0, 0] - decoder_input_tokens = [0, 9, 4, 6, 1, 3, 9, 1, 0] - decoder_causal_attention = [1, 1, 1, 1, 1, 0, 0, 0, 0] - - The output of this function is (a` through `e` are the sampled token ids): - - sampled_sequences = [9, 4, 6, 1, a, b, c, d, e]. - ``` - - Given these set of features, we make a few important observation. - - 1) When a decoder-only model is used for a supervised learning with "inputs" - and "targets", one way to handle this is to concatenate the "inputs" and - "targets". For training, we use teacher forcing for the entire - concatenated sequence. For inference, on the other hand, we don't have - the targets. This requires that we use teacher forcing on the "inputs" - portion while using the generated token as the input token for the next - decoding step. For evaluation, we do have "targets" but we only want to - use them for computing metrics, i.e., by comparing to the sequence - generated by the model. - - This function is currently used for evaluation mode, but by ignoring - "targets", it can be extended for the inference mode. - - 2) During evaluation mode, the targets portion is zeroed out and they are - filled with the sampled token ids. The inputs portion is kept intact. - - 3) Note that `decoder_causal_attention` has an additional 1 after the final - "inputs" token. This is because the position where the last "inputs" - token (in this case 1) is input and the output is the first "target" - token (in this case 3) can be included in the non-causal attention - region. 
- - This results in an alignment between `decoder_input_tokens` and - `decoder_causal_attention` because the former is shifted to the right by - one position. So we use `decoder_causal_attention` as a binary mask to - zero out the target tokens in `decoder_input_tokens`. - - Note: - In order to use a custom self._decode_fn with this model it must support: - - 1) Decoding from a partially decoded state by accepting a vector of - `initial_indices` that specify where in the input to start decoding - from. - 2) Using a vector as the loop counter to support different examples being - a different number of steps into their decoding loop. - 3) Be able to handle one batch element reaching `max_decode_length` - before the others without it causing the model to prematurely stop - decoding. - - Args: - params: model parameters. - batch: batch element with the model features specified in - seqio.DecoderFeatureConverter. - rng: an optional RNG key to use during prediction, which is passed as - 'decode_rng' to the decoding function. - return_all_decodes: if True, will return all batch_size * num_decodes - samples from the model as an array of shape [batch_size, num_decodes, - sequence_length]. Otherwise returns only the most likely samples as an - array of shape [batch_size, sequence_length]. - num_decodes: number of decoded sequences to be returned. - decoder_params: additional (model-independent) parameters for the decoder. - - Returns: - sampled_sequences: an array of shape [batch, max_decode_length]. - """ - if 'decoder_causal_attention' not in batch: - raise ValueError( - 'Batch does not have the right format for text generation: probably ' - 'because `task_feature_lengths` passed to the feature converter does ' - 'not have both `inputs` and `targets`.') - # We can use the decoder causal attention mask to tell how long the inputs - # are. The causal mask has a 1 for all the input tokens (and one more to - # cover the original BOS token, created by shifting the inputs one to the - # right) so we need to delete one. - inputs_lengths = jnp.sum(batch['decoder_causal_attention'], axis=1) - 1 - - # since decoder_input_tokens is shifted to the right and - # `decoder_causal_attention` has one more 1 than the number of inputs - # tokens, this masks out targets portion of the decoder_input_tokens. - inputs = batch['decoder_input_tokens'] * batch['decoder_causal_attention'] - - prefilled_cache = self._compute_kv_cache(params, inputs, inputs_lengths, - batch['decoder_causal_attention']) - - target_shape = batch['decoder_input_tokens'].shape - max_decode_length = target_shape[1] - - tokens_ids_to_logits = functools.partial( - self._compute_logits_from_slice, - params=params, - max_decode_length=max_decode_length) - - if decoder_params is None: - decoder_params = {} - if rng is not None: - if decoder_params.get('decode_rng') is not None: - raise ValueError( - f'Got RNG both from the `rng` argument ({rng}) and ' - f"`decoder_params['decode_rng']` ({decoder_params['decode_rng']}). " - 'Please specify one or the other.') - decoder_params['decode_rng'] = rng - - # Using the above-defined single-step decoder function, run temperature - # sampling with the prefix. 
- # [batch, max_decode_length] - scanned = hasattr(self.module, 'scan_layers') and self.module.scan_layers - decoded_sequences, scores = self._decode_fn( - inputs=inputs, - cache=prefilled_cache, - tokens_to_logits=tokens_ids_to_logits, - eos_id=self.output_vocabulary.eos_id, - num_decodes=num_decodes, - initial_index=inputs_lengths, - cache_offset=1 if scanned else 0, - **decoder_params) - - if not return_all_decodes: - # Search returns [n_batch, n_beam/decodes, n_length] with the beam/decode - # dimension sorted in increasing order of log-probability. - # `scores` is [batch, beam/decode_size] - # We take the highest scoring sequence (-1) and its score - decoded_sequences = decoded_sequences[:, -1, :] - # Beam search returns [] - aux = {'scores': scores[:, -1]} - else: - # We return all samples and scores, rather than just the top ones. - aux = {'scores': scores} - - return remove_prefix(decoded_sequences, inputs_lengths), aux - - -@jax.vmap -def remove_prefix(sequence: jnp.ndarray, - prefix_length: jnp.ndarray) -> jnp.ndarray: - """Remove the prefix portion and shift to the left by the prefix length. - - The example below uses non-decorated function definition, i.e., arrays do not - have batch dimension. `jax.vmap` internally inserts the batch dimension at - axis=0. The shape annotations do not include the batch dimension either. - - Example: - ```python - sequence = [1, 2, 3, 4, 5, 6, 7, 0] - prefix_length = 2 - remove_prefix(sequence, prefix_length) = [3, 4, 5, 6, 7, 0, 0, 0] - ``` - - Note that this function assumes that the padding token has an id of 0. - - Args: - sequence: [length] array. - prefix_length: scalar, i.e., rank 0 array. - - Returns: - [length] array with the prefix removed and the suffix shifted. - """ - length = sequence.shape[-1] - # A binary mask with 1 at inputs. - inputs_mask = (jnp.arange(length) < prefix_length) - # A binary mask with 1 at the targets and padding positions. - targets_and_padding_mask = jnp.logical_not(inputs_mask).astype(sequence.dtype) - # Since padding id = 0, the padding mask is zeroed out. - targets = sequence * targets_and_padding_mask - # Shift to the left by prefix length. Wrapped elements are already zeroed. - return jnp.roll(targets, -prefix_length, axis=-1) - - -# TODO(cpgaffney) Remove this method when dependencies no longer use - rely on -# WeightedAccuracy Metric instead. -def compute_weighted_accuracy( - logits: jnp.ndarray, - targets: jnp.ndarray, - weights: Optional[jnp.ndarray] = None) -> Tuple[jnp.ndarray, jnp.ndarray]: - """Compute weighted accuracy for log probs and targets. - - Args: - logits: [batch, length, num_classes] float array. - targets: categorical targets [batch, length] int array of categories. - weights: None or array of shape [batch, length] - - Returns: - Scalar accuracy. - """ - if logits.ndim != targets.ndim + 1: - raise ValueError('Incorrect shapes. 
Got shape %s logits and %s targets' % - (str(logits.shape), str(targets.shape))) - accuracy = jnp.equal(jnp.argmax(logits, axis=-1), targets) - if weights is not None: - accuracy = accuracy * weights - - return jnp.sum(accuracy) - - -# TODO(cpgaffney) remove when users rely on compute_base_metrics -def compute_metrics(logits: jnp.ndarray, targets: jnp.ndarray, - weights: jnp.ndarray, loss: jnp.ndarray, - weight_sum: jnp.ndarray, - additional_metrics: MetricsMap) -> MetricsMap: - """Compute summary metrics.""" - accuracy = compute_weighted_accuracy(logits, targets, weights) - metrics = { - 'loss': loss, - 'accuracy': accuracy, - 'weight_sum': weight_sum, - 'num_examples': targets.shape[0], - 'num_tokens': targets.size - } - metrics = metrics_lib.create_metrics_dict(metrics) - metrics.update(additional_metrics) - return metrics - - -def compute_base_metrics( - logits: jnp.ndarray, - targets: jnp.ndarray, - mask: jnp.ndarray, - loss: jnp.ndarray, - z_loss: Optional[jnp.ndarray] = None, -) -> MetricsMap: - """Compute summary metrics. - - Args: - logits: [batch, length, num_classes] float array. - targets: categorical targets [batch, length] int array of categories. - mask: None or array of shape [batch, length]. Note: must consist of boolean - values (float-valued weights not supported). - loss: loss (float) - z_loss: z_loss (float) - - Returns: - Dict of metrics. - """ - num_examples = targets.shape[0] - num_tokens = targets.size - num_devices = jax.device_count() - assert num_devices, 'JAX is reporting no devices, but it should.' - # Note: apply mask again even though mask has already been applied to loss. - # This is needed to divide by mask sum, but should not affect correctness of - # the numerator. - nonpadding_tokens = jnp.sum(mask) if mask is not None else targets.size - metrics = { - 'accuracy': - clu_metrics.Accuracy.from_model_output( - logits=logits, labels=targets.astype(jnp.int32), mask=mask), - 'loss': - metrics_lib.AveragePerStep(total=loss), - 'loss_per_nonpadding_target_token': - clu_metrics.Average(total=loss, count=nonpadding_tokens), - 'loss_per_all_target_tokens': - clu_metrics.Average(total=loss, count=num_tokens), - 'timing/seqs_per_second': - metrics_lib.TimeRate.from_model_output(numerator=num_examples), - 'timing/steps_per_second': - metrics_lib.StepsPerTime.from_model_output(), - 'timing/seconds': - metrics_lib.Time(), - 'timing/seqs': - metrics_lib.Sum(num_examples), - 'timing/seqs_per_second_per_core': - metrics_lib.TimeRate.from_model_output(numerator=num_examples / - num_devices), - 'timing/target_tokens_per_second': - metrics_lib.TimeRate.from_model_output(numerator=num_tokens), - 'timing/target_tokens_per_second_per_core': - metrics_lib.TimeRate.from_model_output(numerator=num_tokens / - num_devices), - 'nonpadding_fraction': - clu_metrics.Average(total=nonpadding_tokens, count=num_tokens), - } - if z_loss is not None: - metrics.update({ - 'z_loss': - metrics_lib.AveragePerStep(total=z_loss), - 'z_loss_per_all_target_tokens': - clu_metrics.Average(total=z_loss, count=num_tokens), - 'cross_ent_loss': - metrics_lib.AveragePerStep(total=loss - z_loss), - 'cross_ent_loss_per_all_target_tokens': - clu_metrics.Average(total=jnp.sum(loss - z_loss), count=num_tokens) - }) - return metrics - - -def get_input_vocabulary(model: BaseTransformerModel) -> seqio.Vocabulary: - return model.input_vocabulary - - -def get_output_vocabulary(model: BaseTransformerModel) -> seqio.Vocabulary: - return model.output_vocabulary diff --git 
a/spaces/kabita-choudhary/speaker_Diarization/README.md b/spaces/kabita-choudhary/speaker_Diarization/README.md deleted file mode 100644 index ea3a7523995977b3dd3179cc384dbff94c3ef2a3..0000000000000000000000000000000000000000 --- a/spaces/kabita-choudhary/speaker_Diarization/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Speaker Diarization -emoji: 🌖 -colorFrom: green -colorTo: yellow -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kashif/probabilistic-forecast/README.md b/spaces/kashif/probabilistic-forecast/README.md deleted file mode 100644 index 7ab5d4bb28a1676929a822908a5179c820f93184..0000000000000000000000000000000000000000 --- a/spaces/kashif/probabilistic-forecast/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Probablistic Forecasting -emoji: 🐨 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kboaten/MIDI-Audio-Extension/MIDI-song-extender/musicautobot/utils/midifile.py b/spaces/kboaten/MIDI-Audio-Extension/MIDI-song-extender/musicautobot/utils/midifile.py deleted file mode 100644 index 2ff931a76a477f1a0ee6ff8f013519259e09bb94..0000000000000000000000000000000000000000 --- a/spaces/kboaten/MIDI-Audio-Extension/MIDI-song-extender/musicautobot/utils/midifile.py +++ /dev/null @@ -1,107 +0,0 @@ -"Transform functions for raw midi files" -from enum import Enum -import music21 - -PIANO_TYPES = list(range(24)) + list(range(80, 96)) # Piano, Synths -PLUCK_TYPES = list(range(24, 40)) + list(range(104, 112)) # Guitar, Bass, Ethnic -BRIGHT_TYPES = list(range(40, 56)) + list(range(56, 80)) - -PIANO_RANGE = (21, 109) # https://en.wikipedia.org/wiki/Scientific_pitch_notation - -class Track(Enum): - PIANO = 0 # discrete instruments - keyboard, woodwinds - PLUCK = 1 # continuous instruments with pitch bend: violin, trombone, synths - BRIGHT = 2 - PERC = 3 - UNDEF = 4 - -type2inst = { - # use print_music21_instruments() to see supported types - Track.PIANO: 0, # Piano - Track.PLUCK: 24, # Guitar - Track.BRIGHT: 40, # Violin - Track.PERC: 114, # Steel Drum -} - -# INFO_TYPES = set(['TIME_SIGNATURE', 'KEY_SIGNATURE']) -INFO_TYPES = set(['TIME_SIGNATURE', 'KEY_SIGNATURE', 'SET_TEMPO']) - -def file2mf(fp): - mf = music21.midi.MidiFile() - if isinstance(fp, bytes): - mf.readstr(fp) - else: - mf.open(fp) - mf.read() - mf.close() - return mf - -def mf2stream(mf): return music21.midi.translate.midiFileToStream(mf) - -def is_empty_midi(fp): - if fp is None: return False - mf = file2mf(fp) - return not any([t.hasNotes() for t in mf.tracks]) - -def num_piano_tracks(fp): - music_file = file2mf(fp) - note_tracks = [t for t in music_file.tracks if t.hasNotes() and get_track_type(t) == Track.PIANO] - return len(note_tracks) - -def is_channel(t, c_val): - return any([c == c_val for c in t.getChannels()]) - -def track_sort(t): # sort by 1. variation of pitch, 2. 
number of notes - return len(unique_track_notes(t)), len(t.events) - -def is_piano_note(pitch): - return (pitch >= PIANO_RANGE[0]) and (pitch < PIANO_RANGE[1]) - -def unique_track_notes(t): - return { e.pitch for e in t.events if e.pitch is not None } - -def compress_midi_file(fp, cutoff=6, min_variation=3, supported_types=set([Track.PIANO, Track.PLUCK, Track.BRIGHT])): - music_file = file2mf(fp) - - info_tracks = [t for t in music_file.tracks if not t.hasNotes()] - note_tracks = [t for t in music_file.tracks if t.hasNotes()] - - if len(note_tracks) > cutoff: - note_tracks = sorted(note_tracks, key=track_sort, reverse=True) - - supported_tracks = [] - for idx,t in enumerate(note_tracks): - if len(supported_tracks) >= cutoff: break - track_type = get_track_type(t) - if track_type not in supported_types: continue - pitch_set = unique_track_notes(t) - if (len(pitch_set) < min_variation): continue # must have more than x unique notes - if not all(map(is_piano_note, pitch_set)): continue # must not contain midi notes outside of piano range -# if track_type == Track.UNDEF: print('Could not designate track:', fp, t) - change_track_instrument(t, type2inst[track_type]) - supported_tracks.append(t) - if not supported_tracks: return None - music_file.tracks = info_tracks + supported_tracks - return music_file - -def get_track_type(t): - if is_channel(t, 10): return Track.PERC - i = get_track_instrument(t) - if i in PIANO_TYPES: return Track.PIANO - if i in PLUCK_TYPES: return Track.PLUCK - if i in BRIGHT_TYPES: return Track.BRIGHT - return Track.UNDEF - -def get_track_instrument(t): - for idx,e in enumerate(t.events): - if e.type == 'PROGRAM_CHANGE': return e.data - return None - -def change_track_instrument(t, value): - for idx,e in enumerate(t.events): - if e.type == 'PROGRAM_CHANGE': e.data = value - -def print_music21_instruments(): - for i in range(200): - try: print(i, music21.instrument.instrumentFromMidiProgram(i)) - except: pass \ No newline at end of file diff --git a/spaces/kcagle/AutoGPT/autogpt/commands/web_requests.py b/spaces/kcagle/AutoGPT/autogpt/commands/web_requests.py deleted file mode 100644 index 406338f46fc7b2381e0b1634c628b123ef20b685..0000000000000000000000000000000000000000 --- a/spaces/kcagle/AutoGPT/autogpt/commands/web_requests.py +++ /dev/null @@ -1,190 +0,0 @@ -"""Browse a webpage and summarize it using the LLM model""" -from __future__ import annotations - -from urllib.parse import urljoin, urlparse - -import requests -from bs4 import BeautifulSoup -from requests import Response -from requests.compat import urljoin - -from autogpt.config import Config -from autogpt.memory import get_memory -from autogpt.processing.html import extract_hyperlinks, format_hyperlinks - -CFG = Config() -memory = get_memory(CFG) - -session = requests.Session() -session.headers.update({"User-Agent": CFG.user_agent}) - - -def is_valid_url(url: str) -> bool: - """Check if the URL is valid - - Args: - url (str): The URL to check - - Returns: - bool: True if the URL is valid, False otherwise - """ - try: - result = urlparse(url) - return all([result.scheme, result.netloc]) - except ValueError: - return False - - -def sanitize_url(url: str) -> str: - """Sanitize the URL - - Args: - url (str): The URL to sanitize - - Returns: - str: The sanitized URL - """ - return urljoin(url, urlparse(url).path) - - -def check_local_file_access(url: str) -> bool: - """Check if the URL is a local file - - Args: - url (str): The URL to check - - Returns: - bool: True if the URL is a local file, False otherwise - 
""" - local_prefixes = [ - "file:///", - "file://localhost/", - "file://localhost", - "http://localhost", - "http://localhost/", - "https://localhost", - "https://localhost/", - "http://2130706433", - "http://2130706433/", - "https://2130706433", - "https://2130706433/", - "http://127.0.0.1/", - "http://127.0.0.1", - "https://127.0.0.1/", - "https://127.0.0.1", - "https://0.0.0.0/", - "https://0.0.0.0", - "http://0.0.0.0/", - "http://0.0.0.0", - "http://0000", - "http://0000/", - "https://0000", - "https://0000/", - ] - return any(url.startswith(prefix) for prefix in local_prefixes) - - -def get_response( - url: str, timeout: int = 10 -) -> tuple[None, str] | tuple[Response, None]: - """Get the response from a URL - - Args: - url (str): The URL to get the response from - timeout (int): The timeout for the HTTP request - - Returns: - tuple[None, str] | tuple[Response, None]: The response and error message - - Raises: - ValueError: If the URL is invalid - requests.exceptions.RequestException: If the HTTP request fails - """ - try: - # Restrict access to local files - if check_local_file_access(url): - raise ValueError("Access to local files is restricted") - - # Most basic check if the URL is valid: - if not url.startswith("http://") and not url.startswith("https://"): - raise ValueError("Invalid URL format") - - sanitized_url = sanitize_url(url) - - response = session.get(sanitized_url, timeout=timeout) - - # Check if the response contains an HTTP error - if response.status_code >= 400: - return None, f"Error: HTTP {str(response.status_code)} error" - - return response, None - except ValueError as ve: - # Handle invalid URL format - return None, f"Error: {str(ve)}" - - except requests.exceptions.RequestException as re: - # Handle exceptions related to the HTTP request - # (e.g., connection errors, timeouts, etc.) 
- return None, f"Error: {str(re)}" - - -def scrape_text(url: str) -> str: - """Scrape text from a webpage - - Args: - url (str): The URL to scrape text from - - Returns: - str: The scraped text - """ - response, error_message = get_response(url) - if error_message: - return error_message - if not response: - return "Error: Could not get response" - - soup = BeautifulSoup(response.text, "html.parser") - - for script in soup(["script", "style"]): - script.extract() - - text = soup.get_text() - lines = (line.strip() for line in text.splitlines()) - chunks = (phrase.strip() for line in lines for phrase in line.split(" ")) - text = "\n".join(chunk for chunk in chunks if chunk) - - return text - - -def scrape_links(url: str) -> str | list[str]: - """Scrape links from a webpage - - Args: - url (str): The URL to scrape links from - - Returns: - str | list[str]: The scraped links - """ - response, error_message = get_response(url) - if error_message: - return error_message - if not response: - return "Error: Could not get response" - soup = BeautifulSoup(response.text, "html.parser") - - for script in soup(["script", "style"]): - script.extract() - - hyperlinks = extract_hyperlinks(soup, url) - - return format_hyperlinks(hyperlinks) - - -def create_message(chunk, question): - """Create a message for the user to summarize a chunk of text""" - return { - "role": "user", - "content": f'"""{chunk}""" Using the above text, answer the following' - f' question: "{question}" -- if the question cannot be answered using the' - " text, summarize the text.", - } diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/models/arcface_torch/docs/install.md b/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/models/arcface_torch/docs/install.md deleted file mode 100644 index 6314a40441285e9236438e468caf8b71a407531a..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/models/arcface_torch/docs/install.md +++ /dev/null @@ -1,51 +0,0 @@ -## v1.8.0 -### Linux and Windows -```shell -# CUDA 11.0 -pip --default-timeout=100 install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html - -# CUDA 10.2 -pip --default-timeout=100 install torch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 - -# CPU only -pip --default-timeout=100 install torch==1.8.0+cpu torchvision==0.9.0+cpu torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html - -``` - - -## v1.7.1 -### Linux and Windows -```shell -# CUDA 11.0 -pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html - -# CUDA 10.2 -pip install torch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 - -# CUDA 10.1 -pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html - -# CUDA 9.2 -pip install torch==1.7.1+cu92 torchvision==0.8.2+cu92 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html - -# CPU only -pip install torch==1.7.1+cpu torchvision==0.8.2+cpu torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html -``` - - -## v1.6.0 - -### Linux and Windows -```shell -# CUDA 10.2 -pip install torch==1.6.0 torchvision==0.7.0 - -# CUDA 10.1 -pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html - -# CUDA 9.2 -pip install torch==1.6.0+cu92 torchvision==0.7.0+cu92 -f https://download.pytorch.org/whl/torch_stable.html - -# CPU only -pip 
install torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html -``` \ No newline at end of file diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/models/template_model.py b/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/models/template_model.py deleted file mode 100644 index dac7b33d5889777eb63c9882a3b9fa094dcab293..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/models/template_model.py +++ /dev/null @@ -1,100 +0,0 @@ -"""Model class template - -This module provides a template for users to implement custom models. -You can specify '--model template' to use this model. -The class name should be consistent with both the filename and its model option. -The filename should be _dataset.py -The class name should be Dataset.py -It implements a simple image-to-image translation baseline based on regression loss. -Given input-output pairs (data_A, data_B), it learns a network netG that can minimize the following L1 loss: - min_ ||netG(data_A) - data_B||_1 -You need to implement the following functions: - : Add model-specific options and rewrite default values for existing options. - <__init__>: Initialize this model class. - : Unpack input data and perform data pre-processing. - : Run forward pass. This will be called by both and . - : Update network weights; it will be called in every training iteration. -""" -import numpy as np -import torch -from .base_model import BaseModel -from . import networks - - -class TemplateModel(BaseModel): - @staticmethod - def modify_commandline_options(parser, is_train=True): - """Add new model-specific options and rewrite default values for existing options. - - Parameters: - parser -- the option parser - is_train -- if it is training phase or test phase. You can use this flag to add training-specific or test-specific options. - - Returns: - the modified parser. - """ - parser.set_defaults(dataset_mode='aligned') # You can rewrite default values for this model. For example, this model usually uses aligned dataset as its dataset. - if is_train: - parser.add_argument('--lambda_regression', type=float, default=1.0, help='weight for the regression loss') # You can define new arguments for this model. - - return parser - - def __init__(self, opt): - """Initialize this model class. - - Parameters: - opt -- training/test options - - A few things can be done here. - - (required) call the initialization function of BaseModel - - define loss function, visualization images, model names, and optimizers - """ - BaseModel.__init__(self, opt) # call the initialization method of BaseModel - # specify the training losses you want to print out. The program will call base_model.get_current_losses to plot the losses to the console and save them to the disk. - self.loss_names = ['loss_G'] - # specify the images you want to save and display. The program will call base_model.get_current_visuals to save and display these images. - self.visual_names = ['data_A', 'data_B', 'output'] - # specify the models you want to save to the disk. The program will call base_model.save_networks and base_model.load_networks to save and load networks. - # you can use opt.isTrain to specify different behaviors for training and test. For example, some networks will not be used during test, and you don't need to load them. - self.model_names = ['G'] - # define networks; you can use opt.isTrain to specify different behaviors for training and test. 
- self.netG = networks.define_G(opt.input_nc, opt.output_nc, opt.ngf, opt.netG, gpu_ids=self.gpu_ids) - if self.isTrain: # only defined during training time - # define your loss functions. You can use losses provided by torch.nn such as torch.nn.L1Loss. - # We also provide a GANLoss class "networks.GANLoss". self.criterionGAN = networks.GANLoss().to(self.device) - self.criterionLoss = torch.nn.L1Loss() - # define and initialize optimizers. You can define one optimizer for each network. - # If two networks are updated at the same time, you can use itertools.chain to group them. See cycle_gan_model.py for an example. - self.optimizer = torch.optim.Adam(self.netG.parameters(), lr=opt.lr, betas=(opt.beta1, 0.999)) - self.optimizers = [self.optimizer] - - # Our program will automatically call to define schedulers, load networks, and print networks - - def set_input(self, input): - """Unpack input data from the dataloader and perform necessary pre-processing steps. - - Parameters: - input: a dictionary that contains the data itself and its metadata information. - """ - AtoB = self.opt.direction == 'AtoB' # use to swap data_A and data_B - self.data_A = input['A' if AtoB else 'B'].to(self.device) # get image data A - self.data_B = input['B' if AtoB else 'A'].to(self.device) # get image data B - self.image_paths = input['A_paths' if AtoB else 'B_paths'] # get image paths - - def forward(self): - """Run forward pass. This will be called by both functions and .""" - self.output = self.netG(self.data_A) # generate output image given the input data_A - - def backward(self): - """Calculate losses, gradients, and update network weights; called in every training iteration""" - # calculate the intermediate results if necessary; here self.output has been computed during function - # calculate loss given the input and intermediate results - self.loss_G = self.criterionLoss(self.output, self.data_B) * self.opt.lambda_regression - self.loss_G.backward() # calculate gradients of network G w.r.t.
loss_G - - def optimize_parameters(self): - """Update network weights; it will be called in every training iteration.""" - self.forward() # first call forward to calculate intermediate results - self.optimizer.zero_grad() # clear network G's existing gradients - self.backward() # calculate gradients for network G - self.optimizer.step() # update gradients for network G diff --git a/spaces/khanguyen/voice-password-app/Final_project.py b/spaces/khanguyen/voice-password-app/Final_project.py deleted file mode 100644 index ed094a8efdeec3c0e6b5b58038f9df29b434b723..0000000000000000000000000000000000000000 --- a/spaces/khanguyen/voice-password-app/Final_project.py +++ /dev/null @@ -1,434 +0,0 @@ -### AUDIO RECORDER - -import os -import streamlit as st -import streamlit.components.v1 as components - -import io -import librosa -import numpy as np - -import torch -from speechbrain.pretrained import EncoderDecoderASR -from speechbrain.pretrained import SpeakerRecognition - -import soundfile -import hnswlib -import time -from datetime import datetime - -#st.set_page_config(layout="wide") -#padding_top = 0 -#st.markdown(f""" -# """, -# unsafe_allow_html=True,) - -## DESIGN implement changes to the standard streamlit UI/UX -st.set_page_config(page_title="VOICE PASSWORD") -## Design move app further up and remove top padding -st.markdown('''''', - unsafe_allow_html=True) -## Design change st.Audio to fixed height of 45 pixels -st.markdown('''''', - unsafe_allow_html=True) -## Design change hyperlink href link color -st.markdown('''''', - unsafe_allow_html=True) # darkmode -st.markdown('''''', - unsafe_allow_html=True) # lightmode - - - -primaryColor = "#919E8B" # green -backgroundColor = "#FBF6F1" # sepia yellow -secondaryBackgroundColor = "#EBD2B9" # wheat -textColor = "#5D6169" # grey - - - -def save_audio(file): - if file.size > 4000000: - return 1 - # if not os.path.exists("audio"): - # os.makedirs("audio") - folder = "audio" - datetoday = datetime.now().strftime("%d/%m/%Y %H:%M:%S") - # clear the folder to avoid storage overload - for filename in os.listdir(folder): - file_path = os.path.join(folder, filename) - try: - if os.path.isfile(file_path) or os.path.islink(file_path): - os.unlink(file_path) - except Exception as e: - print('Failed to delete %s. Reason: %s' % (file_path, e)) - try: - with open("log0.txt", "a") as f: - f.write(f"{file.name} - {file.size} - {datetoday};\n") - except: - pass - - with open(os.path.join(folder, file.name), "wb") as f: - f.write(file.getbuffer()) - return 0 - - - -###CREATING SIDEBAR -# Using object notation -st.sidebar.subheader("Menu") -add_selectbox = st.sidebar.selectbox( - "Please select", - ("Home", "Tutorial", "About"), key= 'sidebar') - - -with st.sidebar: - st.write('##') - st.write('##') - st.write('##') - st.write('##') - - - #rate = st.select_slider( - # 'Wanna rate this app? 
😎 ', - # options=['awful', 'bad', 'okay', 'good', 'great']) - - #if rate == 'awful' or rate == 'bad' or rate =='okay': - # title = st.text_input('Feedback', '') - # if title != '': - # time.sleep(3) - # st.write('Thank you for your feedback!') - - #if rate =='good' or rate=='great': - # txt = st.text_input('Feedback', '') - # if txt != '': - # time.sleep(3) - # st.write('Thank you for your support!') - - -if st.session_state.sidebar == 'Home': - - def audiorec_demo_app(): - - parent_dir = os.path.dirname(os.path.abspath(__file__)) - # Custom REACT-based component for recording client audio in browser - build_dir = os.path.join(parent_dir, "st_audiorec/frontend/build") - # specify directory and initialize st_audiorec object functionality - st_audiorec = components.declare_component("st_audiorec", path=build_dir) - - # TITLE and Creator information - st.title('Voice password') - st.markdown('Audio recorder implemented by ' - '[Stefan Rummer](https://www.linkedin.com/in/stefanrmmr/) - ' - 'view project source code on ' - '[GitHub](https://github.com/stefanrmmr/streamlit_audio_recorder)') - st.write('\n\n') - - # STREAMLIT AUDIO RECORDER Instance - st_audiorec() - - if __name__ == '__main__': - - # call main function - audiorec_demo_app() - - - - - - - # Print the current working directory - # st.write("Current working directory: {0}".format(os.getcwd())) - - ## Change the current working directory - # E:/Finalproject - - # Print the current working directory - # st.write("New Current working directory: {0}".format(os.getcwd())) - - asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-transformer-transformerlm-librispeech", - savedir="pretrained_models/asr-transformer-transformerlm-librispeech", - run_opts={"device":"cpu"}) - - ### UPLOAD RECORDED AUDIO - - uploaded_file = st.file_uploader("Choose a file") - - if uploaded_file is not None: - - ### SPEECH_TO_TEXT - #st.write(uploaded_file) - st.write("#") - - if not os.path.exists("audio"): - os.makedirs("audio") - path = os.path.join("audio", uploaded_file.name) - if_save_audio = save_audio(uploaded_file) - spoken = asr_model.transcribe_file(path) - - with st.spinner('Processing...'): - time.sleep(3) - - st.write('You said:') - st.info(spoken) - - - - - ### SPEAKER RECOGNITION - ## Upload pretrained model - - verifier = SpeakerRecognition.from_hparams(source="speechbrain/spkrec-ecapa-voxceleb", run_opts={"device":"cpu"}) - - - ### Base_audio processing - ## Upload sample voice - # Change the current working directory - os.chdir('E:/Finalproject') - cur = os.getcwd() - - - def audio_to_numpy(filenames): - x, sr = librosa.load(filenames, sr=30000) - if x.shape[0] <= 30000: - x = np.pad(x, (0, 30000-x.shape[0]), 'constant', constant_values=(0, 0)) - if len(x.shape) == 1: - x = x[..., None] - return x - - - voice_1 = os.path.join(cur, 'An.wav') - g = audio_to_numpy(voice_1) - my_embeddings1 = np.squeeze( - verifier.encode_batch(torch.tensor(g)).detach().cpu().numpy()) - #st.write(my_embeddings1.shape) - #st.write(g.shape) - - - voice_2 = os.path.join(cur, 'SampleVoice_kha.wav') - k = audio_to_numpy(voice_2) - my_embeddings2 = np.squeeze( - verifier.encode_batch(torch.tensor(k)).detach().cpu().numpy()) - #st.write(my_embeddings2.shape) - #st.write(k.shape) - - - voice_3 = os.path.join(cur, 'Tan.wav') - m = audio_to_numpy(voice_3) - my_embeddings3 = np.squeeze( - verifier.encode_batch(torch.tensor(m)).detach().cpu().numpy()) - - - voice_4 = os.path.join(cur, 'Phu.wav') - n = audio_to_numpy(voice_4) - my_embeddings4 = np.squeeze( - 
verifier.encode_batch(torch.tensor(n)).detach().cpu().numpy()) - - - os.chdir('C:/Users/Administrator/Downloads') - - q = audio_to_numpy(uploaded_file.name) - my_embeddings = np.squeeze( - verifier.encode_batch(torch.tensor(q)).detach().cpu().numpy()) - - - #st.write(my_embeddings.shape) - #st.write(q.shape) - - - my_id_1 = 1 - my_id_2 = 2 - my_id_3 = 3 - my_id_4 = 4 - - - p = hnswlib.Index(space = 'cosine', dim = 192) - p.init_index(max_elements = 1000, ef_construction = 200, M = 16) - # where my_embedding is your voice embedding - # and my_id is your id in the database (e.g. my_id=0) - p.add_items(my_embeddings1, my_id_1) - p.add_items(my_embeddings2, my_id_2) - p.add_items(my_embeddings3, my_id_3) - p.add_items(my_embeddings4, my_id_4) - - - # we perform the search with the following line of code - # where labels is an array containing the k ids most similar to target_embed - target_embed = my_embeddings - labels, distances = p.knn_query(target_embed, k = 4) - - st.write("#") - - if spoken == 'TWO SIX ZERO SIX': # labels[0][0] == 2: # - st.success('Password Correct') - if labels[0][0] == 2 and distances[0][0] <0.3: - st.balloons() - st.snow() - st.write('Welcome to my Youtube channel. Please click the following link: https://www.youtube.com/channel/UCViAzz3Qtz8IQdUI9DiJ3WA/featured') - else: - st.error('Invalid speaker. Please try again!') - - else: - st.error('Incorrect password. Please try again!') - - - - with st.sidebar: - - st.sidebar.subheader("Voice label names") - col1, col2, col3, col4 = st.columns(4) - with col1: - st.markdown("Ân - 1") - with col2: - st.markdown("Kha - 2") - with col3: - st.markdown("Tân - 3") - with col4: - st.markdown("Phú - 4") - st.write(labels) - - st.write('#') - - st.sidebar.subheader("Distance to each label") - st.write(distances) - - st.write('#') - - st.sidebar.subheader("Recorded audio file") - file_details = {"Filename": uploaded_file.name, "FileSize": uploaded_file.size} - st.sidebar.write(file_details) - - - -if st.session_state.sidebar == 'Tutorial': - - st.title('Tutorial') - - st.write('This is the `tutorial page` of this application') - st.write('#') - # Step1 - st.markdown('##### Step 1: Voice recording') - st.markdown('- Press `Start Recording` to record your voice password') - st.markdown('- Click `Stop` to end the audio') - st.markdown('- If you want to record again, click `Reset` to reset the audio') - - - # Step2 - st.markdown('##### Step 2: Audio download') - st.markdown('- Press `Download` to download the audio') - st.markdown('- The recorded audio will be downloaded to Downloads Folder on your desktop') - - # Step3 - st.markdown('##### Step 3: Audio upload') - st.markdown('- Click `Browse files` to upload the audio') - st.markdown('- Choose your recorded audio in the Downloads Folder') - - # Step4 - st.markdown('##### Step 4: Finish') - st.markdown('- It will take about 15 sec to process the data') - st.markdown('- In case of `incorrect password` or `invalid speaker`, click `Χ` next to the uploaded file to delete the audio and record again from step 1') - - - -if st.session_state.sidebar == 'About': - - st.title('About my project') - - st.markdown('### Project Title: **Application of voice password and speaker verification**') - st.markdown('#### Project Description') - - st.markdown(''' - - As digital technology advances in today's world, the potential for privacy violation has become a threat to users' information - - Thus, this AI application is designed to be capable of verifying the user's identity, based on voice characteristics such as tones, features,
while at the same time integrating voice password authentication. - ''') - - st.markdown('''- ###### [GitHub repository of the web-application](https://github.com/Kha1135123/VoiceAuthentication_Finalproject)''') - - - st.markdown("##### Theory") - with st.expander("See Wikipedia definition_Speech Recognition"): - components.iframe("https://en.wikipedia.org/wiki/Speech_recognition", - height=320, scrolling=True) - with st.expander("See Wikipedia definition_Speaker Recognition"): - components.iframe("https://en.wikipedia.org/wiki/Speaker_recognition", - height=320, scrolling=True) - - - st.markdown('#### *Project goals*') - st.markdown(''' - - Build a security system using voice password authentication combined with speaker recognition as follows: - - First, with the audio input, the system will verify the voice password before continuing to run the Speaker Recognition Model to identify the user. - - If the input matches both the correct password and the target user's voice, the system will navigate the user to, or give the user a link to, a private website. - - The main tasks this AI model needs to perform are extracting features of the speaker's voice to verify it, and transcribing the audio to text. - ''') - - - st.markdown('#### **Scope of work**') - st.markdown(''' - - Find appropriate pretrained models for speech recognition and speaker recognition - - Process recorded audio on the Streamlit platform. - - A complete Streamlit application will be built after accomplishing the basic objectives. - - After this project, I will be more experienced in data processing related to audio and in deploying an application on Streamlit. - ''') - - st.markdown(''' - #### *A brief introduction about the project* - - ##### *Model* - - Speech to text Pretrained Model: [speechbrain/ASR-Wav2Vec2 model -- Commonvoice-en](https://huggingface.co/speechbrain/asr-wav2vec2-commonvoice-en) - - Speaker Verification: [speechbrain/ECAPA-TDNN model -- Voxceleb](https://huggingface.co/speechbrain/spkrec-ecapa-voxceleb) - ##### *Methods* - - Applying an ASR pretrained model to transcribe speech to text. - - Converting the audio file into a numpy array with the librosa module. - - Using cosine similarity between user embeddings extracted from the audio by the ECAPA-TDNN model to identify voices (see the sketch below).
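The verification step described in the Methods above reduces to a short, self-contained sketch. This is only an illustration under stated assumptions: it assumes the SpeechBrain EncoderClassifier interface for the pretrained speechbrain/spkrec-ecapa-voxceleb model, 16 kHz mono input loaded with librosa, and placeholder file names; the acceptance threshold is an assumption (the app above accepts a cosine distance below 0.3, i.e. a similarity above roughly 0.7) and should be tuned on real recordings.

import librosa
import torch
import torch.nn.functional as F
from speechbrain.pretrained import EncoderClassifier

# pretrained ECAPA-TDNN speaker encoder (192-dim embeddings)
encoder = EncoderClassifier.from_hparams(source="speechbrain/spkrec-ecapa-voxceleb")

def embed(path):
    # load the recording as a float32 waveform and encode it to a single embedding vector
    signal, _ = librosa.load(path, sr=16000)
    return encoder.encode_batch(torch.tensor(signal).unsqueeze(0)).squeeze()

enrolled = embed("enrolled_speaker.wav")   # placeholder: a known recording of the target user
probe = embed("new_recording.wav")         # placeholder: the audio to verify
similarity = F.cosine_similarity(enrolled, probe, dim=0).item()
print("same speaker" if similarity > 0.7 else "different speaker")  # threshold is an assumption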
- ##### *Note* - - **Reference**: - - Streamlit audio recorder: https://github.com/stefanrmmr/streamlit_audio_recorder - - Streamlit API reference: https://docs.streamlit.io/library/api-reference - - To set up the audio recorder component, read and follow the instructions [here](https://github.com/stefanrmmr/streamlit_audio_recorder#readme) ''') - st.write("#") - st.markdown(''' - If you want to try it, we recommend cloning our GitHub repo''') - st.code("git clone https://github.com/Kha1135123/VoiceAuthentication_Finalproject.git", language='bash') - - st.markdown(''' - After that, just change the following relevant sections in the Final_project.py file to use this model: - - Change the current working directory to the Downloads folder of your computer so that the application can detect the recorded audio file: ''') - st.code( "os.chdir('C:/Users/Administrator/Downloads')", language='python') - - - st.markdown(''' - - Afterwards, change the working directory back to the directory of your Streamlit project with: - ''') - st.code("os.chdir('/home/ _Your_project_folder_')", language='python') - - - st.markdown(''' - - To verify a speaker, you will need at least 2 audio recordings from different people, including the target audio that you want the application to recognize. Put those audio files in your project folder, and then use the code below to get the path of each audio file on your computer. ''') - sp = ''' - cur = os.getcwd() -voice_1 = os.path.join(cur, '_SampleVoice_audio.wav') - ''' - st.code(sp, language='python') - - - - - - st.write('#') - st.markdown(''' - #### *Author* - - Nguyễn Mạnh Kha _ Class of 2022 _ Le Hong Phong High School for the Gifted, Hochiminh City, Vietnam ''') - - - -st.write('#') - -st.caption('Made by @khanguyen') - - - - - diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/apis/train.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/apis/train.py deleted file mode 100644 index 63f319a919ff023931a6a663e668f27dd1a07a2e..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/apis/train.py +++ /dev/null @@ -1,116 +0,0 @@ -import random -import warnings - -import numpy as np -import torch -from annotator.uniformer.mmcv.parallel import MMDataParallel, MMDistributedDataParallel -from annotator.uniformer.mmcv.runner import build_optimizer, build_runner - -from annotator.uniformer.mmseg.core import DistEvalHook, EvalHook -from annotator.uniformer.mmseg.datasets import build_dataloader, build_dataset -from annotator.uniformer.mmseg.utils import get_root_logger - - -def set_random_seed(seed, deterministic=False): - """Set random seed. - - Args: - seed (int): Seed to be used. - deterministic (bool): Whether to set the deterministic option for - CUDNN backend, i.e., set `torch.backends.cudnn.deterministic` - to True and `torch.backends.cudnn.benchmark` to False. - Default: False.
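    Example (usage sketch):
        >>> set_random_seed(0, deterministic=True)  # seeds random/numpy/torch for reproducible runs; disables cudnn benchmarking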
- """ - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - if deterministic: - torch.backends.cudnn.deterministic = True - torch.backends.cudnn.benchmark = False - - -def train_segmentor(model, - dataset, - cfg, - distributed=False, - validate=False, - timestamp=None, - meta=None): - """Launch segmentor training.""" - logger = get_root_logger(cfg.log_level) - - # prepare data loaders - dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset] - data_loaders = [ - build_dataloader( - ds, - cfg.data.samples_per_gpu, - cfg.data.workers_per_gpu, - # cfg.gpus will be ignored if distributed - len(cfg.gpu_ids), - dist=distributed, - seed=cfg.seed, - drop_last=True) for ds in dataset - ] - - # put model on gpus - if distributed: - find_unused_parameters = cfg.get('find_unused_parameters', False) - # Sets the `find_unused_parameters` parameter in - # torch.nn.parallel.DistributedDataParallel - model = MMDistributedDataParallel( - model.cuda(), - device_ids=[torch.cuda.current_device()], - broadcast_buffers=False, - find_unused_parameters=find_unused_parameters) - else: - model = MMDataParallel( - model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids) - - # build runner - optimizer = build_optimizer(model, cfg.optimizer) - - if cfg.get('runner') is None: - cfg.runner = {'type': 'IterBasedRunner', 'max_iters': cfg.total_iters} - warnings.warn( - 'config is now expected to have a `runner` section, ' - 'please set `runner` in your config.', UserWarning) - - runner = build_runner( - cfg.runner, - default_args=dict( - model=model, - batch_processor=None, - optimizer=optimizer, - work_dir=cfg.work_dir, - logger=logger, - meta=meta)) - - # register hooks - runner.register_training_hooks(cfg.lr_config, cfg.optimizer_config, - cfg.checkpoint_config, cfg.log_config, - cfg.get('momentum_config', None)) - - # an ugly walkaround to make the .log and .log.json filenames the same - runner.timestamp = timestamp - - # register eval hooks - if validate: - val_dataset = build_dataset(cfg.data.val, dict(test_mode=True)) - val_dataloader = build_dataloader( - val_dataset, - samples_per_gpu=1, - workers_per_gpu=cfg.data.workers_per_gpu, - dist=distributed, - shuffle=False) - eval_cfg = cfg.get('evaluation', {}) - eval_cfg['by_epoch'] = cfg.runner['type'] != 'IterBasedRunner' - eval_hook = DistEvalHook if distributed else EvalHook - runner.register_hook(eval_hook(val_dataloader, **eval_cfg), priority='LOW') - - if cfg.resume_from: - runner.resume(cfg.resume_from) - elif cfg.load_from: - runner.load_checkpoint(cfg.load_from) - runner.run(data_loaders, cfg.workflow) diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/discriminative_reranking_nmt/scripts/prep_data.py b/spaces/koajoel/PolyFormer/fairseq/examples/discriminative_reranking_nmt/scripts/prep_data.py deleted file mode 100644 index 7aa7d37edc2c3e4c1d293911b753abf2ef597a7e..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/discriminative_reranking_nmt/scripts/prep_data.py +++ /dev/null @@ -1,136 +0,0 @@ -#!/usr/bin/env python - -import argparse -from multiprocessing import Pool -from pathlib import Path - -import sacrebleu -import sentencepiece as spm - - -def read_text_file(filename): - with open(filename, "r") as f: - output = [line.strip() for line in f] - - return output - - -def get_bleu(in_sent, target_sent): - bleu = sacrebleu.corpus_bleu([in_sent], [[target_sent]]) - out = " ".join( - map(str, [bleu.score, bleu.sys_len, bleu.ref_len] + bleu.counts + 
bleu.totals) - ) - return out - - -def get_ter(in_sent, target_sent): - ter = sacrebleu.corpus_ter([in_sent], [[target_sent]]) - out = " ".join(map(str, [ter.score, ter.num_edits, ter.ref_length])) - return out - - -def init(sp_model): - global sp - sp = spm.SentencePieceProcessor() - sp.Load(sp_model) - - -def process(source_sent, target_sent, hypo_sent, metric): - source_bpe = " ".join(sp.EncodeAsPieces(source_sent)) - hypo_bpe = [" ".join(sp.EncodeAsPieces(h)) for h in hypo_sent] - - if metric == "bleu": - score_str = [get_bleu(h, target_sent) for h in hypo_sent] - else: # ter - score_str = [get_ter(h, target_sent) for h in hypo_sent] - - return source_bpe, hypo_bpe, score_str - - -def main(args): - assert ( - args.split.startswith("train") or args.num_shards == 1 - ), "--num-shards should be set to 1 for valid and test sets" - assert ( - args.split.startswith("train") - or args.split.startswith("valid") - or args.split.startswith("test") - ), "--split should be set to train[n]/valid[n]/test[n]" - - source_sents = read_text_file(args.input_source) - target_sents = read_text_file(args.input_target) - - num_sents = len(source_sents) - assert num_sents == len( - target_sents - ), f"{args.input_source} and {args.input_target} should have the same number of sentences." - - hypo_sents = read_text_file(args.input_hypo) - assert ( - len(hypo_sents) % args.beam == 0 - ), f"Number of hypotheses ({len(hypo_sents)}) cannot be divided by beam size ({args.beam})." - - hypo_sents = [ - hypo_sents[i : i + args.beam] for i in range(0, len(hypo_sents), args.beam) - ] - assert num_sents == len( - hypo_sents - ), f"{args.input_hypo} should contain {num_sents * args.beam} hypotheses but only has {len(hypo_sents) * args.beam}. (--beam={args.beam})" - - output_dir = args.output_dir / args.metric - for ns in range(args.num_shards): - print(f"processing shard {ns+1}/{args.num_shards}") - shard_output_dir = output_dir / f"split{ns+1}" - source_output_dir = shard_output_dir / "input_src" - hypo_output_dir = shard_output_dir / "input_tgt" - metric_output_dir = shard_output_dir / args.metric - - source_output_dir.mkdir(parents=True, exist_ok=True) - hypo_output_dir.mkdir(parents=True, exist_ok=True) - metric_output_dir.mkdir(parents=True, exist_ok=True) - - if args.n_proc > 1: - with Pool( - args.n_proc, initializer=init, initargs=(args.sentencepiece_model,) - ) as p: - output = p.starmap( - process, - [ - (source_sents[i], target_sents[i], hypo_sents[i], args.metric) - for i in range(ns, num_sents, args.num_shards) - ], - ) - else: - init(args.sentencepiece_model) - output = [ - process(source_sents[i], target_sents[i], hypo_sents[i], args.metric) - for i in range(ns, num_sents, args.num_shards) - ] - - with open(source_output_dir / f"{args.split}.bpe", "w") as s_o, open( - hypo_output_dir / f"{args.split}.bpe", "w" - ) as h_o, open(metric_output_dir / f"{args.split}.{args.metric}", "w") as m_o: - for source_bpe, hypo_bpe, score_str in output: - assert len(hypo_bpe) == len(score_str) - for h, m in zip(hypo_bpe, score_str): - s_o.write(f"{source_bpe}\n") - h_o.write(f"{h}\n") - m_o.write(f"{m}\n") - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--input-source", type=Path, required=True) - parser.add_argument("--input-target", type=Path, required=True) - parser.add_argument("--input-hypo", type=Path, required=True) - parser.add_argument("--output-dir", type=Path, required=True) - parser.add_argument("--split", type=str, required=True) - parser.add_argument("--beam", 
type=int, required=True) - parser.add_argument("--sentencepiece-model", type=str, required=True) - parser.add_argument("--metric", type=str, choices=["bleu", "ter"], default="bleu") - parser.add_argument("--num-shards", type=int, default=1) - parser.add_argument("--n-proc", type=int, default=8) - - args = parser.parse_args() - - main(args) diff --git a/spaces/koby-Jason/Music_recommend/README.md b/spaces/koby-Jason/Music_recommend/README.md deleted file mode 100644 index 6c0ce2db03d4235cc96105120be59c5ddb8b2994..0000000000000000000000000000000000000000 --- a/spaces/koby-Jason/Music_recommend/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Music Recommend -emoji: 👀 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kouenYoung/anime-tts/attentions.py b/spaces/kouenYoung/anime-tts/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/kouenYoung/anime-tts/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, 
n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." 
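            # Windowed relative-position attention (in the style of Shaw et al.'s relative
            # position representations): learned embeddings for offsets clipped to
            # [-window_size, window_size] are matched against the queries, and the resulting
            # (t, 2t-1) relative logits are shifted into an absolute (t, t) matrix before
            # being added to the content-based scores.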
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. 
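            # Worked example for length = 3: (b, h, 3, 5) --pad last dim--> (b, h, 3, 6)
            # --flatten--> (b, h, 18) --pad--> (b, h, 20) --view--> (b, h, 4, 5), and the
            # slice [:, :, :3, 2:] below recovers the absolute (3, 3) score matrix.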
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/ktonggg/webui/app.py b/spaces/ktonggg/webui/app.py deleted file mode 100644 index aaf559614d247ea2b80987bd77ad5f04ec2913c1..0000000000000000000000000000000000000000 --- a/spaces/ktonggg/webui/app.py +++ /dev/null @@ -1,89 +0,0 @@ -import os -from subprocess import getoutput - -gpu_info = getoutput('nvidia-smi') -if("A10G" in gpu_info): - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl") -elif("T4" in gpu_info): - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl") - -os.system(f"git clone https://github.com/camenduru/stable-diffusion-webui /home/user/app/stable-diffusion-webui") -os.chdir("/home/user/app/stable-diffusion-webui") - -os.system(f"wget -q https://github.com/camenduru/webui/raw/main/env_patch.py -O /home/user/app/env_patch.py") -os.system(f"sed -i -e '/import image_from_url_text/r 
/home/user/app/env_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/(modelmerger_interface, \"Checkpoint Merger\", \"modelmerger\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/(train_interface, \"Train\", \"ti\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/extensions_interface, \"Extensions\", \"extensions\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/settings_interface, \"Settings\", \"settings\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f'''sed -i -e "s/document.getElementsByTagName('gradio-app')\[0\].shadowRoot/!!document.getElementsByTagName('gradio-app')[0].shadowRoot ? document.getElementsByTagName('gradio-app')[0].shadowRoot : document/g" /home/user/app/stable-diffusion-webui/script.js''') -os.system(f"sed -i -e 's/ show_progress=False,/ show_progress=True,/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e 's/shared.demo.launch/shared.demo.queue().launch/g' /home/user/app/stable-diffusion-webui/webui.py") -os.system(f"sed -i -e 's/ outputs=\[/queue=False, &/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e 's/ queue=False, / /g' /home/user/app/stable-diffusion-webui/modules/ui.py") - -# ----------------------------Please duplicate this space and delete this block if you don't want to see the extra header---------------------------- -os.system(f"wget -q https://github.com/camenduru/webui/raw/main/header_patch.py -O /home/user/app/header_patch.py") -os.system(f"sed -i -e '/demo:/r /home/user/app/header_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py") -# --------------------------------------------------------------------------------------------------------------------------------------------------- - -if "IS_SHARED_UI" in os.environ: - os.system(f"rm -rfv /home/user/app/stable-diffusion-webui/scripts/") - - os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-config.json -O /home/user/app/shared-config.json") - os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-ui-config.json -O /home/user/app/shared-ui-config.json") - - os.system(f"wget -q {os.getenv('MODEL_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME')}") - os.system(f"wget -q {os.getenv('VAE_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('VAE_NAME')}") - os.system(f"wget -q {os.getenv('YAML_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('YAML_NAME')}") - - os.system(f"python launch.py --force-enable-xformers --disable-console-progressbars --enable-console-prompts --ui-config-file /home/user/app/shared-ui-config.json --ui-settings-file /home/user/app/shared-config.json --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding") -else: - # Please duplicate this space and delete # character in front of the custom script you want to use or add here more custom scripts with same structure os.system(f"wget -q https://CUSTOM_SCRIPT_URL -O /home/user/app/stable-diffusion-webui/scripts/CUSTOM_SCRIPT_NAME.py") - os.system(f"wget -q https://gist.github.com/camenduru/9ec5f8141db9902e375967e93250860f/raw/d0bcf01786f20107c329c03f8968584ee67be12a/run_n_times.py -O /home/user/app/stable-diffusion-webui/scripts/run_n_times.py") - - # Please duplicate this space and delete # character in front of the extension you want to use 
or add here more extensions with same structure os.system(f"git clone https://EXTENSION_GIT_URL /home/user/app/stable-diffusion-webui/extensions/EXTENSION_NAME") - #os.system(f"git clone https://github.com/camenduru/stable-diffusion-webui-artists-to-study /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-artists-to-study") - os.system(f"git clone https://github.com/yfszzx/stable-diffusion-webui-images-browser /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser") - os.system(f"git clone https://github.com/deforum-art/deforum-for-automatic1111-webui /home/user/app/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui") - - # Please duplicate this space and delete # character in front of the model you want to use or add here more ckpts with same structure os.system(f"wget -q https://CKPT_URL -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/CKPT_NAME.ckpt") - #os.system(f"wget -q https://huggingface.co/nitrosocke/Arcane-Diffusion/resolve/main/arcane-diffusion-v3.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/arcane-diffusion-v3.ckpt") - #os.system(f"wget -q https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/Cyberpunk-Anime-Diffusion.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Cyberpunk-Anime-Diffusion.ckpt") - #os.system(f"wget -q https://huggingface.co/prompthero/midjourney-v4-diffusion/resolve/main/mdjrny-v4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/mdjrny-v4.ckpt") - #os.system(f"wget -q https://huggingface.co/nitrosocke/mo-di-diffusion/resolve/main/moDi-v1-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/moDi-v1-pruned.ckpt") - #os.system(f"wget -q https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/resolve/main/PaperCut_v1.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/PaperCut_v1.ckpt") - #os.system(f"wget -q https://huggingface.co/lilpotat/sa/resolve/main/samdoesarts_style.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/samdoesarts_style.ckpt") - #os.system(f"wget -q https://huggingface.co/hakurei/waifu-diffusion-v1-3/resolve/main/wd-v1-3-float32.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/wd-v1-3-float32.ckpt") - #os.system(f"wget -q https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt") - #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt") - #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-5-inpainting.ckpt") - - #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.ckpt") - #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0.vae.pt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.vae.pt") - - #os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2/resolve/main/768-v-ema.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.ckpt") - 
#os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.yaml") - - os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/v2-1_768-ema-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v2-1_768-ema-pruned.ckpt") - os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v2-1_768-ema-pruned.yaml") - - os.system(f"wget -q {os.getenv('MODEL_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME')}") - os.system(f"wget -q {os.getenv('VAE_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('VAE_NAME')}") - os.system(f"wget -q {os.getenv('YAML_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('YAML_NAME')}") - os.system(f"wget -q {os.getenv('EMBED_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('EMBED_NAME')}") - - - os.system(f"wget -q {os.getenv('EMBED_LINK1')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('EMBED_NAME1')}") - os.system(f"wget -q {os.getenv('EMBED_LINK2')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('EMBED_NAME2')}") - os.system(f"wget -q {os.getenv('EMBED_LINK3')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('EMBED_NAME3')}") - os.system(f"wget -q {os.getenv('EMBED_LINK4')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('EMBED_NAME4')}") - - os.system(f"wget -q {os.getenv('MODEL_LINK1')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME1')}") - os.system(f"wget -q {os.getenv('MODEL_LINK2')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME2')}") - os.system(f"wget -q {os.getenv('MODEL_LINK3')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME3')}") - os.system(f"wget -q {os.getenv('MODEL_LINK4')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME4')}") - - - os.system(f"python launch.py --force-enable-xformers --ui-config-file /home/user/app/ui-config.json --ui-settings-file /home/user/app/config.json --disable-console-progressbars --enable-console-prompts --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding --api --skip-torch-cuda-test") - \ No newline at end of file diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/anyio/abc/_subprocesses.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/anyio/abc/_subprocesses.py deleted file mode 100644 index 704b44a2dda9e21997acf52c268e414d01bd2eb5..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/anyio/abc/_subprocesses.py +++ /dev/null @@ -1,79 +0,0 @@ -from __future__ import annotations - -from abc import abstractmethod -from signal import Signals - -from ._resources import AsyncResource -from ._streams import ByteReceiveStream, ByteSendStream - - -class Process(AsyncResource): - """An asynchronous version of :class:`subprocess.Popen`.""" - - @abstractmethod - async def wait(self) -> int: - """ - Wait until the process exits. 
- - :return: the exit code of the process - """ - - @abstractmethod - def terminate(self) -> None: - """ - Terminates the process, gracefully if possible. - - On Windows, this calls ``TerminateProcess()``. - On POSIX systems, this sends ``SIGTERM`` to the process. - - .. seealso:: :meth:`subprocess.Popen.terminate` - """ - - @abstractmethod - def kill(self) -> None: - """ - Kills the process. - - On Windows, this calls ``TerminateProcess()``. - On POSIX systems, this sends ``SIGKILL`` to the process. - - .. seealso:: :meth:`subprocess.Popen.kill` - """ - - @abstractmethod - def send_signal(self, signal: Signals) -> None: - """ - Send a signal to the subprocess. - - .. seealso:: :meth:`subprocess.Popen.send_signal` - - :param signal: the signal number (e.g. :data:`signal.SIGHUP`) - """ - - @property - @abstractmethod - def pid(self) -> int: - """The process ID of the process.""" - - @property - @abstractmethod - def returncode(self) -> int | None: - """ - The return code of the process. If the process has not yet terminated, this will be - ``None``. - """ - - @property - @abstractmethod - def stdin(self) -> ByteSendStream | None: - """The stream for the standard input of the process.""" - - @property - @abstractmethod - def stdout(self) -> ByteReceiveStream | None: - """The stream for the standard output of the process.""" - - @property - @abstractmethod - def stderr(self) -> ByteReceiveStream | None: - """The stream for the standard error output of the process.""" diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/pens/momentsPen.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/pens/momentsPen.py deleted file mode 100644 index dab0d10e2c63b2552cf44005fdd5d2ecea3dfe12..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/pens/momentsPen.py +++ /dev/null @@ -1,882 +0,0 @@ -from fontTools.pens.basePen import BasePen, OpenContourError - -try: - import cython - - COMPILED = cython.compiled -except (AttributeError, ImportError): - # if cython not installed, use mock module with no-op decorators and types - from fontTools.misc import cython - - COMPILED = False - - -__all__ = ["MomentsPen"] - - -class MomentsPen(BasePen): - def __init__(self, glyphset=None): - BasePen.__init__(self, glyphset) - - self.area = 0 - self.momentX = 0 - self.momentY = 0 - self.momentXX = 0 - self.momentXY = 0 - self.momentYY = 0 - - def _moveTo(self, p0): - self.__startPoint = p0 - - def _closePath(self): - p0 = self._getCurrentPoint() - if p0 != self.__startPoint: - self._lineTo(self.__startPoint) - - def _endPath(self): - p0 = self._getCurrentPoint() - if p0 != self.__startPoint: - # Green theorem is not defined on open contours. 
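        # (The pen accumulates area and low-order moments as line integrals over the closed
        # outline via Green's theorem, e.g. area = one half of the closed line integral of
        # (x dy - y dx); an open contour has no well-defined interior, hence the error below.)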
- raise OpenContourError("Green theorem is not defined on open contours.") - - @cython.locals(r0=cython.double) - @cython.locals(r1=cython.double) - @cython.locals(r2=cython.double) - @cython.locals(r3=cython.double) - @cython.locals(r4=cython.double) - @cython.locals(r5=cython.double) - @cython.locals(r6=cython.double) - @cython.locals(r7=cython.double) - @cython.locals(r8=cython.double) - @cython.locals(r9=cython.double) - @cython.locals(r10=cython.double) - @cython.locals(r11=cython.double) - @cython.locals(r12=cython.double) - @cython.locals(x0=cython.double, y0=cython.double) - @cython.locals(x1=cython.double, y1=cython.double) - def _lineTo(self, p1): - x0, y0 = self._getCurrentPoint() - x1, y1 = p1 - - r0 = x1 * y0 - r1 = x1 * y1 - r2 = x1**2 - r3 = r2 * y1 - r4 = y0 - y1 - r5 = r4 * x0 - r6 = x0**2 - r7 = 2 * y0 - r8 = y0**2 - r9 = y1**2 - r10 = x1**3 - r11 = y0**3 - r12 = y1**3 - - self.area += -r0 / 2 - r1 / 2 + x0 * (y0 + y1) / 2 - self.momentX += -r2 * y0 / 6 - r3 / 3 - r5 * x1 / 6 + r6 * (r7 + y1) / 6 - self.momentY += ( - -r0 * y1 / 6 - r8 * x1 / 6 - r9 * x1 / 6 + x0 * (r8 + r9 + y0 * y1) / 6 - ) - self.momentXX += ( - -r10 * y0 / 12 - - r10 * y1 / 4 - - r2 * r5 / 12 - - r4 * r6 * x1 / 12 - + x0**3 * (3 * y0 + y1) / 12 - ) - self.momentXY += ( - -r2 * r8 / 24 - - r2 * r9 / 8 - - r3 * r7 / 24 - + r6 * (r7 * y1 + 3 * r8 + r9) / 24 - - x0 * x1 * (r8 - r9) / 12 - ) - self.momentYY += ( - -r0 * r9 / 12 - - r1 * r8 / 12 - - r11 * x1 / 12 - - r12 * x1 / 12 - + x0 * (r11 + r12 + r8 * y1 + r9 * y0) / 12 - ) - - @cython.locals(r0=cython.double) - @cython.locals(r1=cython.double) - @cython.locals(r2=cython.double) - @cython.locals(r3=cython.double) - @cython.locals(r4=cython.double) - @cython.locals(r5=cython.double) - @cython.locals(r6=cython.double) - @cython.locals(r7=cython.double) - @cython.locals(r8=cython.double) - @cython.locals(r9=cython.double) - @cython.locals(r10=cython.double) - @cython.locals(r11=cython.double) - @cython.locals(r12=cython.double) - @cython.locals(r13=cython.double) - @cython.locals(r14=cython.double) - @cython.locals(r15=cython.double) - @cython.locals(r16=cython.double) - @cython.locals(r17=cython.double) - @cython.locals(r18=cython.double) - @cython.locals(r19=cython.double) - @cython.locals(r20=cython.double) - @cython.locals(r21=cython.double) - @cython.locals(r22=cython.double) - @cython.locals(r23=cython.double) - @cython.locals(r24=cython.double) - @cython.locals(r25=cython.double) - @cython.locals(r26=cython.double) - @cython.locals(r27=cython.double) - @cython.locals(r28=cython.double) - @cython.locals(r29=cython.double) - @cython.locals(r30=cython.double) - @cython.locals(r31=cython.double) - @cython.locals(r32=cython.double) - @cython.locals(r33=cython.double) - @cython.locals(r34=cython.double) - @cython.locals(r35=cython.double) - @cython.locals(r36=cython.double) - @cython.locals(r37=cython.double) - @cython.locals(r38=cython.double) - @cython.locals(r39=cython.double) - @cython.locals(r40=cython.double) - @cython.locals(r41=cython.double) - @cython.locals(r42=cython.double) - @cython.locals(r43=cython.double) - @cython.locals(r44=cython.double) - @cython.locals(r45=cython.double) - @cython.locals(r46=cython.double) - @cython.locals(r47=cython.double) - @cython.locals(r48=cython.double) - @cython.locals(r49=cython.double) - @cython.locals(r50=cython.double) - @cython.locals(r51=cython.double) - @cython.locals(r52=cython.double) - @cython.locals(r53=cython.double) - @cython.locals(x0=cython.double, y0=cython.double) - 
@cython.locals(x1=cython.double, y1=cython.double) - @cython.locals(x2=cython.double, y2=cython.double) - def _qCurveToOne(self, p1, p2): - x0, y0 = self._getCurrentPoint() - x1, y1 = p1 - x2, y2 = p2 - - r0 = 2 * y1 - r1 = r0 * x2 - r2 = x2 * y2 - r3 = 3 * r2 - r4 = 2 * x1 - r5 = 3 * y0 - r6 = x1**2 - r7 = x2**2 - r8 = 4 * y1 - r9 = 10 * y2 - r10 = 2 * y2 - r11 = r4 * x2 - r12 = x0**2 - r13 = 10 * y0 - r14 = r4 * y2 - r15 = x2 * y0 - r16 = 4 * x1 - r17 = r0 * x1 + r2 - r18 = r2 * r8 - r19 = y1**2 - r20 = 2 * r19 - r21 = y2**2 - r22 = r21 * x2 - r23 = 5 * r22 - r24 = y0**2 - r25 = y0 * y2 - r26 = 5 * r24 - r27 = x1**3 - r28 = x2**3 - r29 = 30 * y1 - r30 = 6 * y1 - r31 = 10 * r7 * x1 - r32 = 5 * y2 - r33 = 12 * r6 - r34 = 30 * x1 - r35 = x1 * y1 - r36 = r3 + 20 * r35 - r37 = 12 * x1 - r38 = 20 * r6 - r39 = 8 * r6 * y1 - r40 = r32 * r7 - r41 = 60 * y1 - r42 = 20 * r19 - r43 = 4 * r19 - r44 = 15 * r21 - r45 = 12 * x2 - r46 = 12 * y2 - r47 = 6 * x1 - r48 = 8 * r19 * x1 + r23 - r49 = 8 * y1**3 - r50 = y2**3 - r51 = y0**3 - r52 = 10 * y1 - r53 = 12 * y1 - - self.area += ( - -r1 / 6 - - r3 / 6 - + x0 * (r0 + r5 + y2) / 6 - + x1 * y2 / 3 - - y0 * (r4 + x2) / 6 - ) - self.momentX += ( - -r11 * (-r10 + y1) / 30 - + r12 * (r13 + r8 + y2) / 30 - + r6 * y2 / 15 - - r7 * r8 / 30 - - r7 * r9 / 30 - + x0 * (r14 - r15 - r16 * y0 + r17) / 30 - - y0 * (r11 + 2 * r6 + r7) / 30 - ) - self.momentY += ( - -r18 / 30 - - r20 * x2 / 30 - - r23 / 30 - - r24 * (r16 + x2) / 30 - + x0 * (r0 * y2 + r20 + r21 + r25 + r26 + r8 * y0) / 30 - + x1 * y2 * (r10 + y1) / 15 - - y0 * (r1 + r17) / 30 - ) - self.momentXX += ( - r12 * (r1 - 5 * r15 - r34 * y0 + r36 + r9 * x1) / 420 - + 2 * r27 * y2 / 105 - - r28 * r29 / 420 - - r28 * y2 / 4 - - r31 * (r0 - 3 * y2) / 420 - - r6 * x2 * (r0 - r32) / 105 - + x0**3 * (r30 + 21 * y0 + y2) / 84 - - x0 - * ( - r0 * r7 - + r15 * r37 - - r2 * r37 - - r33 * y2 - + r38 * y0 - - r39 - - r40 - + r5 * r7 - ) - / 420 - - y0 * (8 * r27 + 5 * r28 + r31 + r33 * x2) / 420 - ) - self.momentXY += ( - r12 * (r13 * y2 + 3 * r21 + 105 * r24 + r41 * y0 + r42 + r46 * y1) / 840 - - r16 * x2 * (r43 - r44) / 840 - - r21 * r7 / 8 - - r24 * (r38 + r45 * x1 + 3 * r7) / 840 - - r41 * r7 * y2 / 840 - - r42 * r7 / 840 - + r6 * y2 * (r32 + r8) / 210 - + x0 - * ( - -r15 * r8 - + r16 * r25 - + r18 - + r21 * r47 - - r24 * r34 - - r26 * x2 - + r35 * r46 - + r48 - ) - / 420 - - y0 * (r16 * r2 + r30 * r7 + r35 * r45 + r39 + r40) / 420 - ) - self.momentYY += ( - -r2 * r42 / 420 - - r22 * r29 / 420 - - r24 * (r14 + r36 + r52 * x2) / 420 - - r49 * x2 / 420 - - r50 * x2 / 12 - - r51 * (r47 + x2) / 84 - + x0 - * ( - r19 * r46 - + r21 * r5 - + r21 * r52 - + r24 * r29 - + r25 * r53 - + r26 * y2 - + r42 * y0 - + r49 - + 5 * r50 - + 35 * r51 - ) - / 420 - + x1 * y2 * (r43 + r44 + r9 * y1) / 210 - - y0 * (r19 * r45 + r2 * r53 - r21 * r4 + r48) / 420 - ) - - @cython.locals(r0=cython.double) - @cython.locals(r1=cython.double) - @cython.locals(r2=cython.double) - @cython.locals(r3=cython.double) - @cython.locals(r4=cython.double) - @cython.locals(r5=cython.double) - @cython.locals(r6=cython.double) - @cython.locals(r7=cython.double) - @cython.locals(r8=cython.double) - @cython.locals(r9=cython.double) - @cython.locals(r10=cython.double) - @cython.locals(r11=cython.double) - @cython.locals(r12=cython.double) - @cython.locals(r13=cython.double) - @cython.locals(r14=cython.double) - @cython.locals(r15=cython.double) - @cython.locals(r16=cython.double) - @cython.locals(r17=cython.double) - @cython.locals(r18=cython.double) - 
@cython.locals(r19=cython.double) - @cython.locals(r20=cython.double) - @cython.locals(r21=cython.double) - @cython.locals(r22=cython.double) - @cython.locals(r23=cython.double) - @cython.locals(r24=cython.double) - @cython.locals(r25=cython.double) - @cython.locals(r26=cython.double) - @cython.locals(r27=cython.double) - @cython.locals(r28=cython.double) - @cython.locals(r29=cython.double) - @cython.locals(r30=cython.double) - @cython.locals(r31=cython.double) - @cython.locals(r32=cython.double) - @cython.locals(r33=cython.double) - @cython.locals(r34=cython.double) - @cython.locals(r35=cython.double) - @cython.locals(r36=cython.double) - @cython.locals(r37=cython.double) - @cython.locals(r38=cython.double) - @cython.locals(r39=cython.double) - @cython.locals(r40=cython.double) - @cython.locals(r41=cython.double) - @cython.locals(r42=cython.double) - @cython.locals(r43=cython.double) - @cython.locals(r44=cython.double) - @cython.locals(r45=cython.double) - @cython.locals(r46=cython.double) - @cython.locals(r47=cython.double) - @cython.locals(r48=cython.double) - @cython.locals(r49=cython.double) - @cython.locals(r50=cython.double) - @cython.locals(r51=cython.double) - @cython.locals(r52=cython.double) - @cython.locals(r53=cython.double) - @cython.locals(r54=cython.double) - @cython.locals(r55=cython.double) - @cython.locals(r56=cython.double) - @cython.locals(r57=cython.double) - @cython.locals(r58=cython.double) - @cython.locals(r59=cython.double) - @cython.locals(r60=cython.double) - @cython.locals(r61=cython.double) - @cython.locals(r62=cython.double) - @cython.locals(r63=cython.double) - @cython.locals(r64=cython.double) - @cython.locals(r65=cython.double) - @cython.locals(r66=cython.double) - @cython.locals(r67=cython.double) - @cython.locals(r68=cython.double) - @cython.locals(r69=cython.double) - @cython.locals(r70=cython.double) - @cython.locals(r71=cython.double) - @cython.locals(r72=cython.double) - @cython.locals(r73=cython.double) - @cython.locals(r74=cython.double) - @cython.locals(r75=cython.double) - @cython.locals(r76=cython.double) - @cython.locals(r77=cython.double) - @cython.locals(r78=cython.double) - @cython.locals(r79=cython.double) - @cython.locals(r80=cython.double) - @cython.locals(r81=cython.double) - @cython.locals(r82=cython.double) - @cython.locals(r83=cython.double) - @cython.locals(r84=cython.double) - @cython.locals(r85=cython.double) - @cython.locals(r86=cython.double) - @cython.locals(r87=cython.double) - @cython.locals(r88=cython.double) - @cython.locals(r89=cython.double) - @cython.locals(r90=cython.double) - @cython.locals(r91=cython.double) - @cython.locals(r92=cython.double) - @cython.locals(r93=cython.double) - @cython.locals(r94=cython.double) - @cython.locals(r95=cython.double) - @cython.locals(r96=cython.double) - @cython.locals(r97=cython.double) - @cython.locals(r98=cython.double) - @cython.locals(r99=cython.double) - @cython.locals(r100=cython.double) - @cython.locals(r101=cython.double) - @cython.locals(r102=cython.double) - @cython.locals(r103=cython.double) - @cython.locals(r104=cython.double) - @cython.locals(r105=cython.double) - @cython.locals(r106=cython.double) - @cython.locals(r107=cython.double) - @cython.locals(r108=cython.double) - @cython.locals(r109=cython.double) - @cython.locals(r110=cython.double) - @cython.locals(r111=cython.double) - @cython.locals(r112=cython.double) - @cython.locals(r113=cython.double) - @cython.locals(r114=cython.double) - @cython.locals(r115=cython.double) - @cython.locals(r116=cython.double) - 
@cython.locals(r117=cython.double) - @cython.locals(r118=cython.double) - @cython.locals(r119=cython.double) - @cython.locals(r120=cython.double) - @cython.locals(r121=cython.double) - @cython.locals(r122=cython.double) - @cython.locals(r123=cython.double) - @cython.locals(r124=cython.double) - @cython.locals(r125=cython.double) - @cython.locals(r126=cython.double) - @cython.locals(r127=cython.double) - @cython.locals(r128=cython.double) - @cython.locals(r129=cython.double) - @cython.locals(r130=cython.double) - @cython.locals(r131=cython.double) - @cython.locals(r132=cython.double) - @cython.locals(x0=cython.double, y0=cython.double) - @cython.locals(x1=cython.double, y1=cython.double) - @cython.locals(x2=cython.double, y2=cython.double) - @cython.locals(x3=cython.double, y3=cython.double) - def _curveToOne(self, p1, p2, p3): - x0, y0 = self._getCurrentPoint() - x1, y1 = p1 - x2, y2 = p2 - x3, y3 = p3 - - r0 = 6 * y2 - r1 = r0 * x3 - r2 = 10 * y3 - r3 = r2 * x3 - r4 = 3 * y1 - r5 = 6 * x1 - r6 = 3 * x2 - r7 = 6 * y1 - r8 = 3 * y2 - r9 = x2**2 - r10 = 45 * r9 - r11 = r10 * y3 - r12 = x3**2 - r13 = r12 * y2 - r14 = r12 * y3 - r15 = 7 * y3 - r16 = 15 * x3 - r17 = r16 * x2 - r18 = x1**2 - r19 = 9 * r18 - r20 = x0**2 - r21 = 21 * y1 - r22 = 9 * r9 - r23 = r7 * x3 - r24 = 9 * y2 - r25 = r24 * x2 + r3 - r26 = 9 * x2 - r27 = x2 * y3 - r28 = -r26 * y1 + 15 * r27 - r29 = 3 * x1 - r30 = 45 * x1 - r31 = 12 * x3 - r32 = 45 * r18 - r33 = 5 * r12 - r34 = r8 * x3 - r35 = 105 * y0 - r36 = 30 * y0 - r37 = r36 * x2 - r38 = 5 * x3 - r39 = 15 * y3 - r40 = 5 * y3 - r41 = r40 * x3 - r42 = x2 * y2 - r43 = 18 * r42 - r44 = 45 * y1 - r45 = r41 + r43 + r44 * x1 - r46 = y2 * y3 - r47 = r46 * x3 - r48 = y2**2 - r49 = 45 * r48 - r50 = r49 * x3 - r51 = y3**2 - r52 = r51 * x3 - r53 = y1**2 - r54 = 9 * r53 - r55 = y0**2 - r56 = 21 * x1 - r57 = 6 * x2 - r58 = r16 * y2 - r59 = r39 * y2 - r60 = 9 * r48 - r61 = r6 * y3 - r62 = 3 * y3 - r63 = r36 * y2 - r64 = y1 * y3 - r65 = 45 * r53 - r66 = 5 * r51 - r67 = x2**3 - r68 = x3**3 - r69 = 630 * y2 - r70 = 126 * x3 - r71 = x1**3 - r72 = 126 * x2 - r73 = 63 * r9 - r74 = r73 * x3 - r75 = r15 * x3 + 15 * r42 - r76 = 630 * x1 - r77 = 14 * x3 - r78 = 21 * r27 - r79 = 42 * x1 - r80 = 42 * x2 - r81 = x1 * y2 - r82 = 63 * r42 - r83 = x1 * y1 - r84 = r41 + r82 + 378 * r83 - r85 = x2 * x3 - r86 = r85 * y1 - r87 = r27 * x3 - r88 = 27 * r9 - r89 = r88 * y2 - r90 = 42 * r14 - r91 = 90 * x1 - r92 = 189 * r18 - r93 = 378 * r18 - r94 = r12 * y1 - r95 = 252 * x1 * x2 - r96 = r79 * x3 - r97 = 30 * r85 - r98 = r83 * x3 - r99 = 30 * x3 - r100 = 42 * x3 - r101 = r42 * x1 - r102 = r10 * y2 + 14 * r14 + 126 * r18 * y1 + r81 * r99 - r103 = 378 * r48 - r104 = 18 * y1 - r105 = r104 * y2 - r106 = y0 * y1 - r107 = 252 * y2 - r108 = r107 * y0 - r109 = y0 * y3 - r110 = 42 * r64 - r111 = 378 * r53 - r112 = 63 * r48 - r113 = 27 * x2 - r114 = r27 * y2 - r115 = r113 * r48 + 42 * r52 - r116 = x3 * y3 - r117 = 54 * r42 - r118 = r51 * x1 - r119 = r51 * x2 - r120 = r48 * x1 - r121 = 21 * x3 - r122 = r64 * x1 - r123 = r81 * y3 - r124 = 30 * r27 * y1 + r49 * x2 + 14 * r52 + 126 * r53 * x1 - r125 = y2**3 - r126 = y3**3 - r127 = y1**3 - r128 = y0**3 - r129 = r51 * y2 - r130 = r112 * y3 + r21 * r51 - r131 = 189 * r53 - r132 = 90 * y2 - - self.area += ( - -r1 / 20 - - r3 / 20 - - r4 * (x2 + x3) / 20 - + x0 * (r7 + r8 + 10 * y0 + y3) / 20 - + 3 * x1 * (y2 + y3) / 20 - + 3 * x2 * y3 / 10 - - y0 * (r5 + r6 + x3) / 20 - ) - self.momentX += ( - r11 / 840 - - r13 / 8 - - r14 / 3 - - r17 * (-r15 + r8) / 840 - + r19 * (r8 + 2 * y3) 
/ 840 - + r20 * (r0 + r21 + 56 * y0 + y3) / 168 - + r29 * (-r23 + r25 + r28) / 840 - - r4 * (10 * r12 + r17 + r22) / 840 - + x0 - * ( - 12 * r27 - + r30 * y2 - + r34 - - r35 * x1 - - r37 - - r38 * y0 - + r39 * x1 - - r4 * x3 - + r45 - ) - / 840 - - y0 * (r17 + r30 * x2 + r31 * x1 + r32 + r33 + 18 * r9) / 840 - ) - self.momentY += ( - -r4 * (r25 + r58) / 840 - - r47 / 8 - - r50 / 840 - - r52 / 6 - - r54 * (r6 + 2 * x3) / 840 - - r55 * (r56 + r57 + x3) / 168 - + x0 - * ( - r35 * y1 - + r40 * y0 - + r44 * y2 - + 18 * r48 - + 140 * r55 - + r59 - + r63 - + 12 * r64 - + r65 - + r66 - ) - / 840 - + x1 * (r24 * y1 + 10 * r51 + r59 + r60 + r7 * y3) / 280 - + x2 * y3 * (r15 + r8) / 56 - - y0 * (r16 * y1 + r31 * y2 + r44 * x2 + r45 + r61 - r62 * x1) / 840 - ) - self.momentXX += ( - -r12 * r72 * (-r40 + r8) / 9240 - + 3 * r18 * (r28 + r34 - r38 * y1 + r75) / 3080 - + r20 - * ( - r24 * x3 - - r72 * y0 - - r76 * y0 - - r77 * y0 - + r78 - + r79 * y3 - + r80 * y1 - + 210 * r81 - + r84 - ) - / 9240 - - r29 - * ( - r12 * r21 - + 14 * r13 - + r44 * r9 - - r73 * y3 - + 54 * r86 - - 84 * r87 - - r89 - - r90 - ) - / 9240 - - r4 * (70 * r12 * x2 + 27 * r67 + 42 * r68 + r74) / 9240 - + 3 * r67 * y3 / 220 - - r68 * r69 / 9240 - - r68 * y3 / 4 - - r70 * r9 * (-r62 + y2) / 9240 - + 3 * r71 * (r24 + r40) / 3080 - + x0**3 * (r24 + r44 + 165 * y0 + y3) / 660 - + x0 - * ( - r100 * r27 - + 162 * r101 - + r102 - + r11 - + 63 * r18 * y3 - + r27 * r91 - - r33 * y0 - - r37 * x3 - + r43 * x3 - - r73 * y0 - - r88 * y1 - + r92 * y2 - - r93 * y0 - - 9 * r94 - - r95 * y0 - - r96 * y0 - - r97 * y1 - - 18 * r98 - + r99 * x1 * y3 - ) - / 9240 - - y0 - * ( - r12 * r56 - + r12 * r80 - + r32 * x3 - + 45 * r67 - + 14 * r68 - + 126 * r71 - + r74 - + r85 * r91 - + 135 * r9 * x1 - + r92 * x2 - ) - / 9240 - ) - self.momentXY += ( - -r103 * r12 / 18480 - - r12 * r51 / 8 - - 3 * r14 * y2 / 44 - + 3 * r18 * (r105 + r2 * y1 + 18 * r46 + 15 * r48 + 7 * r51) / 6160 - + r20 - * ( - 1260 * r106 - + r107 * y1 - + r108 - + 28 * r109 - + r110 - + r111 - + r112 - + 30 * r46 - + 2310 * r55 - + r66 - ) - / 18480 - - r54 * (7 * r12 + 18 * r85 + 15 * r9) / 18480 - - r55 * (r33 + r73 + r93 + r95 + r96 + r97) / 18480 - - r7 * (42 * r13 + r82 * x3 + 28 * r87 + r89 + r90) / 18480 - - 3 * r85 * (r48 - r66) / 220 - + 3 * r9 * y3 * (r62 + 2 * y2) / 440 - + x0 - * ( - -r1 * y0 - - 84 * r106 * x2 - + r109 * r56 - + 54 * r114 - + r117 * y1 - + 15 * r118 - + 21 * r119 - + 81 * r120 - + r121 * r46 - + 54 * r122 - + 60 * r123 - + r124 - - r21 * x3 * y0 - + r23 * y3 - - r54 * x3 - - r55 * r72 - - r55 * r76 - - r55 * r77 - + r57 * y0 * y3 - + r60 * x3 - + 84 * r81 * y0 - + 189 * r81 * y1 - ) - / 9240 - + x1 - * ( - r104 * r27 - - r105 * x3 - - r113 * r53 - + 63 * r114 - + r115 - - r16 * r53 - + 28 * r47 - + r51 * r80 - ) - / 3080 - - y0 - * ( - 54 * r101 - + r102 - + r116 * r5 - + r117 * x3 - + 21 * r13 - - r19 * y3 - + r22 * y3 - + r78 * x3 - + 189 * r83 * x2 - + 60 * r86 - + 81 * r9 * y1 - + 15 * r94 - + 54 * r98 - ) - / 9240 - ) - self.momentYY += ( - -r103 * r116 / 9240 - - r125 * r70 / 9240 - - r126 * x3 / 12 - - 3 * r127 * (r26 + r38) / 3080 - - r128 * (r26 + r30 + x3) / 660 - - r4 * (r112 * x3 + r115 - 14 * r119 + 84 * r47) / 9240 - - r52 * r69 / 9240 - - r54 * (r58 + r61 + r75) / 9240 - - r55 - * (r100 * y1 + r121 * y2 + r26 * y3 + r79 * y2 + r84 + 210 * x2 * y1) - / 9240 - + x0 - * ( - r108 * y1 - + r110 * y0 - + r111 * y0 - + r112 * y0 - + 45 * r125 - + 14 * r126 - + 126 * r127 - + 770 * r128 - + 42 * r129 - + r130 - + r131 * y2 - + r132 * r64 - + 135 * r48 * 
y1 - + 630 * r55 * y1 - + 126 * r55 * y2 - + 14 * r55 * y3 - + r63 * y3 - + r65 * y3 - + r66 * y0 - ) - / 9240 - + x1 - * ( - 27 * r125 - + 42 * r126 - + 70 * r129 - + r130 - + r39 * r53 - + r44 * r48 - + 27 * r53 * y2 - + 54 * r64 * y2 - ) - / 3080 - + 3 * x2 * y3 * (r48 + r66 + r8 * y3) / 220 - - y0 - * ( - r100 * r46 - + 18 * r114 - - 9 * r118 - - 27 * r120 - - 18 * r122 - - 30 * r123 - + r124 - + r131 * x2 - + r132 * x3 * y1 - + 162 * r42 * y1 - + r50 - + 63 * r53 * x3 - + r64 * r99 - ) - / 9240 - ) - - -if __name__ == "__main__": - from fontTools.misc.symfont import x, y, printGreenPen - - printGreenPen( - "MomentsPen", - [ - ("area", 1), - ("momentX", x), - ("momentY", y), - ("momentXX", x**2), - ("momentXY", x * y), - ("momentYY", y**2), - ], - ) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/compression.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/compression.py deleted file mode 100644 index afa0f41156e16f35f0062e78973d9ddd2de8bc01..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fsspec/compression.py +++ /dev/null @@ -1,171 +0,0 @@ -"""Helper functions for a standard streaming compression API""" -from bz2 import BZ2File -from zipfile import ZipFile - -import fsspec.utils -from fsspec.spec import AbstractBufferedFile - - -def noop_file(file, mode, **kwargs): - return file - - -# TODO: files should also be available as contexts -# should be functions of the form func(infile, mode=, **kwargs) -> file-like -compr = {None: noop_file} - - -def register_compression(name, callback, extensions, force=False): - """Register an "inferable" file compression type. - - Registers transparent file compression type for use with fsspec.open. - Compression can be specified by name in open, or "infer"-ed for any files - ending with the given extensions. - - Args: - name: (str) The compression type name. Eg. "gzip". - callback: A callable of form (infile, mode, **kwargs) -> file-like. - Accepts an input file-like object, the target mode and kwargs. - Returns a wrapped file-like object. - extensions: (str, Iterable[str]) A file extension, or list of file - extensions for which to infer this compression scheme. Eg. "gz". - force: (bool) Force re-registration of compression type or extensions. - - Raises: - ValueError: If name or extensions already registered, and not force. 
- - """ - if isinstance(extensions, str): - extensions = [extensions] - - # Validate registration - if name in compr and not force: - raise ValueError("Duplicate compression registration: %s" % name) - - for ext in extensions: - if ext in fsspec.utils.compressions and not force: - raise ValueError( - "Duplicate compression file extension: %s (%s)" % (ext, name) - ) - - compr[name] = callback - - for ext in extensions: - fsspec.utils.compressions[ext] = name - - -def unzip(infile, mode="rb", filename=None, **kwargs): - if "r" not in mode: - filename = filename or "file" - z = ZipFile(infile, mode="w", **kwargs) - fo = z.open(filename, mode="w") - fo.close = lambda closer=fo.close: closer() or z.close() - return fo - z = ZipFile(infile) - if filename is None: - filename = z.namelist()[0] - return z.open(filename, mode="r", **kwargs) - - -register_compression("zip", unzip, "zip") -register_compression("bz2", BZ2File, "bz2") - -try: # pragma: no cover - from isal import igzip - - def isal(infile, mode="rb", **kwargs): - return igzip.IGzipFile(fileobj=infile, mode=mode, **kwargs) - - register_compression("gzip", isal, "gz") -except ImportError: - from gzip import GzipFile - - register_compression( - "gzip", lambda f, **kwargs: GzipFile(fileobj=f, **kwargs), "gz" - ) - -try: - from lzma import LZMAFile - - register_compression("lzma", LZMAFile, "xz") - register_compression("xz", LZMAFile, "xz", force=True) -except ImportError: - pass - -try: - import lzmaffi - - register_compression("lzma", lzmaffi.LZMAFile, "xz", force=True) - register_compression("xz", lzmaffi.LZMAFile, "xz", force=True) -except ImportError: - pass - - -class SnappyFile(AbstractBufferedFile): - def __init__(self, infile, mode, **kwargs): - import snappy - - super().__init__( - fs=None, path="snappy", mode=mode.strip("b") + "b", size=999999999, **kwargs - ) - self.infile = infile - if "r" in mode: - self.codec = snappy.StreamDecompressor() - else: - self.codec = snappy.StreamCompressor() - - def _upload_chunk(self, final=False): - self.buffer.seek(0) - out = self.codec.add_chunk(self.buffer.read()) - self.infile.write(out) - return True - - def seek(self, loc, whence=0): - raise NotImplementedError("SnappyFile is not seekable") - - def seekable(self): - return False - - def _fetch_range(self, start, end): - """Get the specified set of bytes from remote""" - data = self.infile.read(end - start) - return self.codec.decompress(data) - - -try: - import snappy - - snappy.compress - # Snappy may use the .sz file extension, but this is not part of the - # standard implementation. 
- register_compression("snappy", SnappyFile, []) - -except (ImportError, NameError, AttributeError): - pass - -try: - import lz4.frame - - register_compression("lz4", lz4.frame.open, "lz4") -except ImportError: - pass - -try: - import zstandard as zstd - - def zstandard_file(infile, mode="rb"): - if "r" in mode: - cctx = zstd.ZstdDecompressor() - return cctx.stream_reader(infile) - else: - cctx = zstd.ZstdCompressor(level=10) - return cctx.stream_writer(infile) - - register_compression("zstd", zstandard_file, "zst") -except ImportError: - pass - - -def available_compressions(): - """Return a list of the implemented compressions.""" - return list(compr) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/common/entities.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/common/entities.py deleted file mode 100644 index 6bb2d343c2338de4232378bf98d6c034fbe808c0..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/common/entities.py +++ /dev/null @@ -1,4 +0,0 @@ -"""HTML5 entities map: { name -> characters }.""" -import html.entities - -entities = {name.rstrip(";"): chars for name, chars in html.entities.html5.items()} diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/artist.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/artist.py deleted file mode 100644 index 44c128232235c3ec635439acb024c789b57e5974..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/artist.py +++ /dev/null @@ -1,1864 +0,0 @@ -from collections import namedtuple -import contextlib -from functools import lru_cache, wraps -import inspect -from inspect import Signature, Parameter -import logging -from numbers import Number -import re -import warnings - -import numpy as np - -import matplotlib as mpl -from . import _api, cbook -from .colors import BoundaryNorm -from .cm import ScalarMappable -from .path import Path -from .transforms import (Bbox, IdentityTransform, Transform, TransformedBbox, - TransformedPatchPath, TransformedPath) - -_log = logging.getLogger(__name__) - - -def _prevent_rasterization(draw): - # We assume that by default artists are not allowed to rasterize (unless - # its draw method is explicitly decorated). If it is being drawn after a - # rasterized artist and it has reached a raster_depth of 0, we stop - # rasterization so that it does not affect the behavior of normal artist - # (e.g., change in dpi). - - @wraps(draw) - def draw_wrapper(artist, renderer, *args, **kwargs): - if renderer._raster_depth == 0 and renderer._rasterizing: - # Only stop when we are not in a rasterized parent - # and something has been rasterized since last stop. - renderer.stop_rasterizing() - renderer._rasterizing = False - - return draw(artist, renderer, *args, **kwargs) - - draw_wrapper._supports_rasterization = False - return draw_wrapper - - -def allow_rasterization(draw): - """ - Decorator for Artist.draw method. Provides routines - that run before and after the draw call. The before and after functions - are useful for changing artist-dependent renderer attributes or making - other setup function calls, such as starting and flushing a mixed-mode - renderer. 
- """ - - @wraps(draw) - def draw_wrapper(artist, renderer): - try: - if artist.get_rasterized(): - if renderer._raster_depth == 0 and not renderer._rasterizing: - renderer.start_rasterizing() - renderer._rasterizing = True - renderer._raster_depth += 1 - else: - if renderer._raster_depth == 0 and renderer._rasterizing: - # Only stop when we are not in a rasterized parent - # and something has be rasterized since last stop - renderer.stop_rasterizing() - renderer._rasterizing = False - - if artist.get_agg_filter() is not None: - renderer.start_filter() - - return draw(artist, renderer) - finally: - if artist.get_agg_filter() is not None: - renderer.stop_filter(artist.get_agg_filter()) - if artist.get_rasterized(): - renderer._raster_depth -= 1 - if (renderer._rasterizing and artist.figure and - artist.figure.suppressComposite): - # restart rasterizing to prevent merging - renderer.stop_rasterizing() - renderer.start_rasterizing() - - draw_wrapper._supports_rasterization = True - return draw_wrapper - - -def _finalize_rasterization(draw): - """ - Decorator for Artist.draw method. Needed on the outermost artist, i.e. - Figure, to finish up if the render is still in rasterized mode. - """ - @wraps(draw) - def draw_wrapper(artist, renderer, *args, **kwargs): - result = draw(artist, renderer, *args, **kwargs) - if renderer._rasterizing: - renderer.stop_rasterizing() - renderer._rasterizing = False - return result - return draw_wrapper - - -def _stale_axes_callback(self, val): - if self.axes: - self.axes.stale = val - - -_XYPair = namedtuple("_XYPair", "x y") - - -class _Unset: - def __repr__(self): - return "" -_UNSET = _Unset() - - -class Artist: - """ - Abstract base class for objects that render into a FigureCanvas. - - Typically, all visible elements in a figure are subclasses of Artist. - """ - - zorder = 0 - - def __init_subclass__(cls): - - # Decorate draw() method so that all artists are able to stop - # rastrization when necessary. If the artist's draw method is already - # decorated (has a `_supports_rasterization` attribute), it won't be - # decorated. - - if not hasattr(cls.draw, "_supports_rasterization"): - cls.draw = _prevent_rasterization(cls.draw) - - # Inject custom set() methods into the subclass with signature and - # docstring based on the subclasses' properties. - - if not hasattr(cls.set, '_autogenerated_signature'): - # Don't overwrite cls.set if the subclass or one of its parents - # has defined a set method set itself. - # If there was no explicit definition, cls.set is inherited from - # the hierarchy of auto-generated set methods, which hold the - # flag _autogenerated_signature. - return - - cls.set = lambda self, **kwargs: Artist.set(self, **kwargs) - cls.set.__name__ = "set" - cls.set.__qualname__ = f"{cls.__qualname__}.set" - cls._update_set_signature_and_docstring() - - _PROPERTIES_EXCLUDED_FROM_SET = [ - 'navigate_mode', # not a user-facing function - 'figure', # changing the figure is such a profound operation - # that we don't want this in set() - '3d_properties', # cannot be used as a keyword due to leading digit - ] - - @classmethod - def _update_set_signature_and_docstring(cls): - """ - Update the signature of the set function to list all properties - as keyword arguments. - - Property aliases are not listed in the signature for brevity, but - are still accepted as keyword arguments. 
- """ - cls.set.__signature__ = Signature( - [Parameter("self", Parameter.POSITIONAL_OR_KEYWORD), - *[Parameter(prop, Parameter.KEYWORD_ONLY, default=_UNSET) - for prop in ArtistInspector(cls).get_setters() - if prop not in Artist._PROPERTIES_EXCLUDED_FROM_SET]]) - cls.set._autogenerated_signature = True - - cls.set.__doc__ = ( - "Set multiple properties at once.\n\n" - "Supported properties are\n\n" - + kwdoc(cls)) - - def __init__(self): - self._stale = True - self.stale_callback = None - self._axes = None - self.figure = None - - self._transform = None - self._transformSet = False - self._visible = True - self._animated = False - self._alpha = None - self.clipbox = None - self._clippath = None - self._clipon = True - self._label = '' - self._picker = None - self._rasterized = False - self._agg_filter = None - # Normally, artist classes need to be queried for mouseover info if and - # only if they override get_cursor_data. - self._mouseover = type(self).get_cursor_data != Artist.get_cursor_data - self._callbacks = cbook.CallbackRegistry(signals=["pchanged"]) - try: - self.axes = None - except AttributeError: - # Handle self.axes as a read-only property, as in Figure. - pass - self._remove_method = None - self._url = None - self._gid = None - self._snap = None - self._sketch = mpl.rcParams['path.sketch'] - self._path_effects = mpl.rcParams['path.effects'] - self._sticky_edges = _XYPair([], []) - self._in_layout = True - - def __getstate__(self): - d = self.__dict__.copy() - # remove the unpicklable remove method, this will get re-added on load - # (by the Axes) if the artist lives on an Axes. - d['stale_callback'] = None - return d - - def remove(self): - """ - Remove the artist from the figure if possible. - - The effect will not be visible until the figure is redrawn, e.g., - with `.FigureCanvasBase.draw_idle`. Call `~.axes.Axes.relim` to - update the axes limits if desired. - - Note: `~.axes.Axes.relim` will not see collections even if the - collection was added to the axes with *autolim* = True. - - Note: there is no support for removing the artist's legend entry. - """ - - # There is no method to set the callback. Instead, the parent should - # set the _remove_method attribute directly. This would be a - # protected attribute if Python supported that sort of thing. The - # callback has one parameter, which is the child to be removed. - if self._remove_method is not None: - self._remove_method(self) - # clear stale callback - self.stale_callback = None - _ax_flag = False - if hasattr(self, 'axes') and self.axes: - # remove from the mouse hit list - self.axes._mouseover_set.discard(self) - self.axes.stale = True - self.axes = None # decouple the artist from the Axes - _ax_flag = True - - if self.figure: - self.figure = None - if not _ax_flag: - self.figure = True - - else: - raise NotImplementedError('cannot remove artist') - # TODO: the fix for the collections relim problem is to move the - # limits calculation into the artist itself, including the property of - # whether or not the artist should affect the limits. Then there will - # be no distinction between axes.add_line, axes.add_patch, etc. - # TODO: add legend support - - def have_units(self): - """Return whether units are set on any axis.""" - ax = self.axes - return ax and any(axis.have_units() for axis in ax._axis_map.values()) - - def convert_xunits(self, x): - """ - Convert *x* using the unit type of the xaxis. - - If the artist is not contained in an Axes or if the xaxis does not - have units, *x* itself is returned. 
- """ - ax = getattr(self, 'axes', None) - if ax is None or ax.xaxis is None: - return x - return ax.xaxis.convert_units(x) - - def convert_yunits(self, y): - """ - Convert *y* using the unit type of the yaxis. - - If the artist is not contained in an Axes or if the yaxis does not - have units, *y* itself is returned. - """ - ax = getattr(self, 'axes', None) - if ax is None or ax.yaxis is None: - return y - return ax.yaxis.convert_units(y) - - @property - def axes(self): - """The `~.axes.Axes` instance the artist resides in, or *None*.""" - return self._axes - - @axes.setter - def axes(self, new_axes): - if (new_axes is not None and self._axes is not None - and new_axes != self._axes): - raise ValueError("Can not reset the axes. You are probably " - "trying to re-use an artist in more than one " - "Axes which is not supported") - self._axes = new_axes - if new_axes is not None and new_axes is not self: - self.stale_callback = _stale_axes_callback - - @property - def stale(self): - """ - Whether the artist is 'stale' and needs to be re-drawn for the output - to match the internal state of the artist. - """ - return self._stale - - @stale.setter - def stale(self, val): - self._stale = val - - # if the artist is animated it does not take normal part in the - # draw stack and is not expected to be drawn as part of the normal - # draw loop (when not saving) so do not propagate this change - if self.get_animated(): - return - - if val and self.stale_callback is not None: - self.stale_callback(self, val) - - def get_window_extent(self, renderer=None): - """ - Get the artist's bounding box in display space. - - The bounding box' width and height are nonnegative. - - Subclasses should override for inclusion in the bounding box - "tight" calculation. Default is to return an empty bounding - box at 0, 0. - - Be careful when using this function, the results will not update - if the artist window extent of the artist changes. The extent - can change due to any changes in the transform stack, such as - changing the axes limits, the figure size, or the canvas used - (as is done when saving a figure). This can lead to unexpected - behavior where interactive figures will look fine on the screen, - but will save incorrectly. - """ - return Bbox([[0, 0], [0, 0]]) - - def get_tightbbox(self, renderer=None): - """ - Like `.Artist.get_window_extent`, but includes any clipping. - - Parameters - ---------- - renderer : `.RendererBase` subclass - renderer that will be used to draw the figures (i.e. - ``fig.canvas.get_renderer()``) - - Returns - ------- - `.Bbox` - The enclosing bounding box (in figure pixel coordinates). - """ - bbox = self.get_window_extent(renderer) - if self.get_clip_on(): - clip_box = self.get_clip_box() - if clip_box is not None: - bbox = Bbox.intersection(bbox, clip_box) - clip_path = self.get_clip_path() - if clip_path is not None: - clip_path = clip_path.get_fully_transformed_path() - bbox = Bbox.intersection(bbox, clip_path.get_extents()) - return bbox - - def add_callback(self, func): - """ - Add a callback function that will be called whenever one of the - `.Artist`'s properties changes. - - Parameters - ---------- - func : callable - The callback function. It must have the signature:: - - def func(artist: Artist) -> Any - - where *artist* is the calling `.Artist`. Return values may exist - but are ignored. - - Returns - ------- - int - The observer id associated with the callback. This id can be - used for removing the callback with `.remove_callback` later. 
- - See Also - -------- - remove_callback - """ - # Wrapping func in a lambda ensures it can be connected multiple times - # and never gets weakref-gc'ed. - return self._callbacks.connect("pchanged", lambda: func(self)) - - def remove_callback(self, oid): - """ - Remove a callback based on its observer id. - - See Also - -------- - add_callback - """ - self._callbacks.disconnect(oid) - - def pchanged(self): - """ - Call all of the registered callbacks. - - This function is triggered internally when a property is changed. - - See Also - -------- - add_callback - remove_callback - """ - self._callbacks.process("pchanged") - - def is_transform_set(self): - """ - Return whether the Artist has an explicitly set transform. - - This is *True* after `.set_transform` has been called. - """ - return self._transformSet - - def set_transform(self, t): - """ - Set the artist transform. - - Parameters - ---------- - t : `.Transform` - """ - self._transform = t - self._transformSet = True - self.pchanged() - self.stale = True - - def get_transform(self): - """Return the `.Transform` instance used by this artist.""" - if self._transform is None: - self._transform = IdentityTransform() - elif (not isinstance(self._transform, Transform) - and hasattr(self._transform, '_as_mpl_transform')): - self._transform = self._transform._as_mpl_transform(self.axes) - return self._transform - - def get_children(self): - r"""Return a list of the child `.Artist`\s of this `.Artist`.""" - return [] - - def _default_contains(self, mouseevent, figure=None): - """ - Base impl. for checking whether a mouseevent happened in an artist. - - 1. If the artist figure is known and the event did not occur in that - figure (by checking its ``canvas`` attribute), reject it. - 2. Otherwise, return `None, {}`, indicating that the subclass' - implementation should be used. - - Subclasses should start their definition of `contains` as follows: - - inside, info = self._default_contains(mouseevent) - if inside is not None: - return inside, info - # subclass-specific implementation follows - - The *figure* kwarg is provided for the implementation of - `.Figure.contains`. - """ - if figure is not None and mouseevent.canvas is not figure.canvas: - return False, {} - return None, {} - - def contains(self, mouseevent): - """ - Test whether the artist contains the mouse event. - - Parameters - ---------- - mouseevent : `matplotlib.backend_bases.MouseEvent` - - Returns - ------- - contains : bool - Whether any values are within the radius. - details : dict - An artist-specific dictionary of details of the event context, - such as which points are contained in the pick radius. See the - individual Artist subclasses for details. - """ - inside, info = self._default_contains(mouseevent) - if inside is not None: - return inside, info - _log.warning("%r needs 'contains' method", self.__class__.__name__) - return False, {} - - def pickable(self): - """ - Return whether the artist is pickable. - - See Also - -------- - set_picker, get_picker, pick - """ - return self.figure is not None and self._picker is not None - - def pick(self, mouseevent): - """ - Process a pick event. - - Each child artist will fire a pick event if *mouseevent* is over - the artist and the artist has picker set. - - See Also - -------- - set_picker, get_picker, pickable - """ - from .backend_bases import PickEvent # Circular import. 
- # Pick self - if self.pickable(): - picker = self.get_picker() - if callable(picker): - inside, prop = picker(self, mouseevent) - else: - inside, prop = self.contains(mouseevent) - if inside: - PickEvent("pick_event", self.figure.canvas, - mouseevent, self, **prop)._process() - - # Pick children - for a in self.get_children(): - # make sure the event happened in the same Axes - ax = getattr(a, 'axes', None) - if (mouseevent.inaxes is None or ax is None - or mouseevent.inaxes == ax): - # we need to check if mouseevent.inaxes is None - # because some objects associated with an Axes (e.g., a - # tick label) can be outside the bounding box of the - # Axes and inaxes will be None - # also check that ax is None so that it traverse objects - # which do not have an axes property but children might - a.pick(mouseevent) - - def set_picker(self, picker): - """ - Define the picking behavior of the artist. - - Parameters - ---------- - picker : None or bool or float or callable - This can be one of the following: - - - *None*: Picking is disabled for this artist (default). - - - A boolean: If *True* then picking will be enabled and the - artist will fire a pick event if the mouse event is over - the artist. - - - A float: If picker is a number it is interpreted as an - epsilon tolerance in points and the artist will fire - off an event if its data is within epsilon of the mouse - event. For some artists like lines and patch collections, - the artist may provide additional data to the pick event - that is generated, e.g., the indices of the data within - epsilon of the pick event - - - A function: If picker is callable, it is a user supplied - function which determines whether the artist is hit by the - mouse event:: - - hit, props = picker(artist, mouseevent) - - to determine the hit test. if the mouse event is over the - artist, return *hit=True* and props is a dictionary of - properties you want added to the PickEvent attributes. - """ - self._picker = picker - - def get_picker(self): - """ - Return the picking behavior of the artist. - - The possible values are described in `.set_picker`. - - See Also - -------- - set_picker, pickable, pick - """ - return self._picker - - def get_url(self): - """Return the url.""" - return self._url - - def set_url(self, url): - """ - Set the url for the artist. - - Parameters - ---------- - url : str - """ - self._url = url - - def get_gid(self): - """Return the group id.""" - return self._gid - - def set_gid(self, gid): - """ - Set the (group) id for the artist. - - Parameters - ---------- - gid : str - """ - self._gid = gid - - def get_snap(self): - """ - Return the snap setting. - - See `.set_snap` for details. - """ - if mpl.rcParams['path.snap']: - return self._snap - else: - return False - - def set_snap(self, snap): - """ - Set the snapping behavior. - - Snapping aligns positions with the pixel grid, which results in - clearer images. For example, if a black line of 1px width was - defined at a position in between two pixels, the resulting image - would contain the interpolated value of that line in the pixel grid, - which would be a grey value on both adjacent pixel positions. In - contrast, snapping will move the line to the nearest integer pixel - value, so that the resulting image will really contain a 1px wide - black line. - - Snapping is currently only supported by the Agg and MacOSX backends. - - Parameters - ---------- - snap : bool or None - Possible values: - - - *True*: Snap vertices to the nearest pixel center. 
- - *False*: Do not modify vertex positions. - - *None*: (auto) If the path contains only rectilinear line - segments, round to the nearest pixel center. - """ - self._snap = snap - self.stale = True - - def get_sketch_params(self): - """ - Return the sketch parameters for the artist. - - Returns - ------- - tuple or None - - A 3-tuple with the following elements: - - - *scale*: The amplitude of the wiggle perpendicular to the - source line. - - *length*: The length of the wiggle along the line. - - *randomness*: The scale factor by which the length is - shrunken or expanded. - - Returns *None* if no sketch parameters were set. - """ - return self._sketch - - def set_sketch_params(self, scale=None, length=None, randomness=None): - """ - Set the sketch parameters. - - Parameters - ---------- - scale : float, optional - The amplitude of the wiggle perpendicular to the source - line, in pixels. If scale is `None`, or not provided, no - sketch filter will be provided. - length : float, optional - The length of the wiggle along the line, in pixels - (default 128.0) - randomness : float, optional - The scale factor by which the length is shrunken or - expanded (default 16.0) - - The PGF backend uses this argument as an RNG seed and not as - described above. Using the same seed yields the same random shape. - - .. ACCEPTS: (scale: float, length: float, randomness: float) - """ - if scale is None: - self._sketch = None - else: - self._sketch = (scale, length or 128.0, randomness or 16.0) - self.stale = True - - def set_path_effects(self, path_effects): - """ - Set the path effects. - - Parameters - ---------- - path_effects : `.AbstractPathEffect` - """ - self._path_effects = path_effects - self.stale = True - - def get_path_effects(self): - return self._path_effects - - def get_figure(self): - """Return the `.Figure` instance the artist belongs to.""" - return self.figure - - def set_figure(self, fig): - """ - Set the `.Figure` instance the artist belongs to. - - Parameters - ---------- - fig : `.Figure` - """ - # if this is a no-op just return - if self.figure is fig: - return - # if we currently have a figure (the case of both `self.figure` - # and *fig* being none is taken care of above) we then user is - # trying to change the figure an artist is associated with which - # is not allowed for the same reason as adding the same instance - # to more than one Axes - if self.figure is not None: - raise RuntimeError("Can not put single artist in " - "more than one figure") - self.figure = fig - if self.figure and self.figure is not self: - self.pchanged() - self.stale = True - - def set_clip_box(self, clipbox): - """ - Set the artist's clip `.Bbox`. - - Parameters - ---------- - clipbox : `.Bbox` - - Typically would be created from a `.TransformedBbox`. For - instance ``TransformedBbox(Bbox([[0, 0], [1, 1]]), ax.transAxes)`` - is the default clipping for an artist added to an Axes. - - """ - self.clipbox = clipbox - self.pchanged() - self.stale = True - - def set_clip_path(self, path, transform=None): - """ - Set the artist's clip path. - - Parameters - ---------- - path : `.Patch` or `.Path` or `.TransformedPath` or None - The clip path. If given a `.Path`, *transform* must be provided as - well. If *None*, a previously set clip path is removed. - transform : `~matplotlib.transforms.Transform`, optional - Only used if *path* is a `.Path`, in which case the given `.Path` - is converted to a `.TransformedPath` using *transform*. 
- - Notes - ----- - For efficiency, if *path* is a `.Rectangle` this method will set the - clipping box to the corresponding rectangle and set the clipping path - to ``None``. - - For technical reasons (support of `~.Artist.set`), a tuple - (*path*, *transform*) is also accepted as a single positional - parameter. - - .. ACCEPTS: Patch or (Path, Transform) or None - """ - from matplotlib.patches import Patch, Rectangle - - success = False - if transform is None: - if isinstance(path, Rectangle): - self.clipbox = TransformedBbox(Bbox.unit(), - path.get_transform()) - self._clippath = None - success = True - elif isinstance(path, Patch): - self._clippath = TransformedPatchPath(path) - success = True - elif isinstance(path, tuple): - path, transform = path - - if path is None: - self._clippath = None - success = True - elif isinstance(path, Path): - self._clippath = TransformedPath(path, transform) - success = True - elif isinstance(path, TransformedPatchPath): - self._clippath = path - success = True - elif isinstance(path, TransformedPath): - self._clippath = path - success = True - - if not success: - raise TypeError( - "Invalid arguments to set_clip_path, of type {} and {}" - .format(type(path).__name__, type(transform).__name__)) - # This may result in the callbacks being hit twice, but guarantees they - # will be hit at least once. - self.pchanged() - self.stale = True - - def get_alpha(self): - """ - Return the alpha value used for blending - not supported on all - backends. - """ - return self._alpha - - def get_visible(self): - """Return the visibility.""" - return self._visible - - def get_animated(self): - """Return whether the artist is animated.""" - return self._animated - - def get_in_layout(self): - """ - Return boolean flag, ``True`` if artist is included in layout - calculations. - - E.g. :doc:`/tutorials/intermediate/constrainedlayout_guide`, - `.Figure.tight_layout()`, and - ``fig.savefig(fname, bbox_inches='tight')``. - """ - return self._in_layout - - def _fully_clipped_to_axes(self): - """ - Return a boolean flag, ``True`` if the artist is clipped to the Axes - and can thus be skipped in layout calculations. Requires `get_clip_on` - is True, one of `clip_box` or `clip_path` is set, ``clip_box.extents`` - is equivalent to ``ax.bbox.extents`` (if set), and ``clip_path._patch`` - is equivalent to ``ax.patch`` (if set). - """ - # Note that ``clip_path.get_fully_transformed_path().get_extents()`` - # cannot be directly compared to ``axes.bbox.extents`` because the - # extents may be undefined (i.e. 
equivalent to ``Bbox.null()``) - # before the associated artist is drawn, and this method is meant - # to determine whether ``axes.get_tightbbox()`` may bypass drawing - clip_box = self.get_clip_box() - clip_path = self.get_clip_path() - return (self.axes is not None - and self.get_clip_on() - and (clip_box is not None or clip_path is not None) - and (clip_box is None - or np.all(clip_box.extents == self.axes.bbox.extents)) - and (clip_path is None - or isinstance(clip_path, TransformedPatchPath) - and clip_path._patch is self.axes.patch)) - - def get_clip_on(self): - """Return whether the artist uses clipping.""" - return self._clipon - - def get_clip_box(self): - """Return the clipbox.""" - return self.clipbox - - def get_clip_path(self): - """Return the clip path.""" - return self._clippath - - def get_transformed_clip_path_and_affine(self): - """ - Return the clip path with the non-affine part of its - transformation applied, and the remaining affine part of its - transformation. - """ - if self._clippath is not None: - return self._clippath.get_transformed_path_and_affine() - return None, None - - def set_clip_on(self, b): - """ - Set whether the artist uses clipping. - - When False, artists will be visible outside the Axes which - can lead to unexpected results. - - Parameters - ---------- - b : bool - """ - self._clipon = b - # This may result in the callbacks being hit twice, but ensures they - # are hit at least once - self.pchanged() - self.stale = True - - def _set_gc_clip(self, gc): - """Set the clip properly for the gc.""" - if self._clipon: - if self.clipbox is not None: - gc.set_clip_rectangle(self.clipbox) - gc.set_clip_path(self._clippath) - else: - gc.set_clip_rectangle(None) - gc.set_clip_path(None) - - def get_rasterized(self): - """Return whether the artist is to be rasterized.""" - return self._rasterized - - def set_rasterized(self, rasterized): - """ - Force rasterized (bitmap) drawing for vector graphics output. - - Rasterized drawing is not supported by all artists. If you try to - enable this on an artist that does not support it, the command has no - effect and a warning will be issued. - - This setting is ignored for pixel-based output. - - See also :doc:`/gallery/misc/rasterization_demo`. - - Parameters - ---------- - rasterized : bool - """ - supports_rasterization = getattr(self.draw, - "_supports_rasterization", False) - if rasterized and not supports_rasterization: - _api.warn_external(f"Rasterization of '{self}' will be ignored") - - self._rasterized = rasterized - - def get_agg_filter(self): - """Return filter function to be used for agg filter.""" - return self._agg_filter - - def set_agg_filter(self, filter_func): - """ - Set the agg filter. - - Parameters - ---------- - filter_func : callable - A filter function, which takes a (m, n, depth) float array - and a dpi value, and returns a (m, n, depth) array and two - offsets from the bottom left corner of the image - - .. ACCEPTS: a filter function, which takes a (m, n, 3) float array - and a dpi value, and returns a (m, n, 3) array and two offsets - from the bottom left corner of the image - """ - self._agg_filter = filter_func - self.stale = True - - def draw(self, renderer): - """ - Draw the Artist (and its children) using the given renderer. - - This has no effect if the artist is not visible (`.Artist.get_visible` - returns False). - - Parameters - ---------- - renderer : `.RendererBase` subclass. - - Notes - ----- - This method is overridden in the Artist subclasses. 
- """ - if not self.get_visible(): - return - self.stale = False - - def set_alpha(self, alpha): - """ - Set the alpha value used for blending - not supported on all backends. - - Parameters - ---------- - alpha : scalar or None - *alpha* must be within the 0-1 range, inclusive. - """ - if alpha is not None and not isinstance(alpha, Number): - raise TypeError( - f'alpha must be numeric or None, not {type(alpha)}') - if alpha is not None and not (0 <= alpha <= 1): - raise ValueError(f'alpha ({alpha}) is outside 0-1 range') - self._alpha = alpha - self.pchanged() - self.stale = True - - def _set_alpha_for_array(self, alpha): - """ - Set the alpha value used for blending - not supported on all backends. - - Parameters - ---------- - alpha : array-like or scalar or None - All values must be within the 0-1 range, inclusive. - Masked values and nans are not supported. - """ - if isinstance(alpha, str): - raise TypeError("alpha must be numeric or None, not a string") - if not np.iterable(alpha): - Artist.set_alpha(self, alpha) - return - alpha = np.asarray(alpha) - if not (0 <= alpha.min() and alpha.max() <= 1): - raise ValueError('alpha must be between 0 and 1, inclusive, ' - f'but min is {alpha.min()}, max is {alpha.max()}') - self._alpha = alpha - self.pchanged() - self.stale = True - - def set_visible(self, b): - """ - Set the artist's visibility. - - Parameters - ---------- - b : bool - """ - self._visible = b - self.pchanged() - self.stale = True - - def set_animated(self, b): - """ - Set whether the artist is intended to be used in an animation. - - If True, the artist is excluded from regular drawing of the figure. - You have to call `.Figure.draw_artist` / `.Axes.draw_artist` - explicitly on the artist. This approach is used to speed up animations - using blitting. - - See also `matplotlib.animation` and - :doc:`/tutorials/advanced/blitting`. - - Parameters - ---------- - b : bool - """ - if self._animated != b: - self._animated = b - self.pchanged() - - def set_in_layout(self, in_layout): - """ - Set if artist is to be included in layout calculations, - E.g. :doc:`/tutorials/intermediate/constrainedlayout_guide`, - `.Figure.tight_layout()`, and - ``fig.savefig(fname, bbox_inches='tight')``. - - Parameters - ---------- - in_layout : bool - """ - self._in_layout = in_layout - - def get_label(self): - """Return the label used for this artist in the legend.""" - return self._label - - def set_label(self, s): - """ - Set a label that will be displayed in the legend. - - Parameters - ---------- - s : object - *s* will be converted to a string by calling `str`. - """ - if s is not None: - self._label = str(s) - else: - self._label = None - self.pchanged() - self.stale = True - - def get_zorder(self): - """Return the artist's zorder.""" - return self.zorder - - def set_zorder(self, level): - """ - Set the zorder for the artist. Artists with lower zorder - values are drawn first. - - Parameters - ---------- - level : float - """ - if level is None: - level = self.__class__.zorder - self.zorder = level - self.pchanged() - self.stale = True - - @property - def sticky_edges(self): - """ - ``x`` and ``y`` sticky edge lists for autoscaling. - - When performing autoscaling, if a data limit coincides with a value in - the corresponding sticky_edges list, then no margin will be added--the - view limit "sticks" to the edge. A typical use case is histograms, - where one usually expects no margin on the bottom edge (0) of the - histogram. 
- - Moreover, margin expansion "bumps" against sticky edges and cannot - cross them. For example, if the upper data limit is 1.0, the upper - view limit computed by simple margin application is 1.2, but there is a - sticky edge at 1.1, then the actual upper view limit will be 1.1. - - This attribute cannot be assigned to; however, the ``x`` and ``y`` - lists can be modified in place as needed. - - Examples - -------- - >>> artist.sticky_edges.x[:] = (xmin, xmax) - >>> artist.sticky_edges.y[:] = (ymin, ymax) - - """ - return self._sticky_edges - - def update_from(self, other): - """Copy properties from *other* to *self*.""" - self._transform = other._transform - self._transformSet = other._transformSet - self._visible = other._visible - self._alpha = other._alpha - self.clipbox = other.clipbox - self._clipon = other._clipon - self._clippath = other._clippath - self._label = other._label - self._sketch = other._sketch - self._path_effects = other._path_effects - self.sticky_edges.x[:] = other.sticky_edges.x.copy() - self.sticky_edges.y[:] = other.sticky_edges.y.copy() - self.pchanged() - self.stale = True - - def properties(self): - """Return a dictionary of all the properties of the artist.""" - return ArtistInspector(self).properties() - - def _update_props(self, props, errfmt): - """ - Helper for `.Artist.set` and `.Artist.update`. - - *errfmt* is used to generate error messages for invalid property - names; it gets formatted with ``type(self)`` and the property name. - """ - ret = [] - with cbook._setattr_cm(self, eventson=False): - for k, v in props.items(): - # Allow attributes we want to be able to update through - # art.update, art.set, setp. - if k == "axes": - ret.append(setattr(self, k, v)) - else: - func = getattr(self, f"set_{k}", None) - if not callable(func): - raise AttributeError( - errfmt.format(cls=type(self), prop_name=k)) - ret.append(func(v)) - if ret: - self.pchanged() - self.stale = True - return ret - - def update(self, props): - """ - Update this artist's properties from the dict *props*. - - Parameters - ---------- - props : dict - """ - return self._update_props( - props, "{cls.__name__!r} object has no property {prop_name!r}") - - def _internal_update(self, kwargs): - """ - Update artist properties without prenormalizing them, but generating - errors as if calling `set`. - - The lack of prenormalization is to maintain backcompatibility. - """ - return self._update_props( - kwargs, "{cls.__name__}.set() got an unexpected keyword argument " - "{prop_name!r}") - - def set(self, **kwargs): - # docstring and signature are auto-generated via - # Artist._update_set_signature_and_docstring() at the end of the - # module. - return self._internal_update(cbook.normalize_kwargs(kwargs, self)) - - @contextlib.contextmanager - def _cm_set(self, **kwargs): - """ - `.Artist.set` context-manager that restores original values at exit. - """ - orig_vals = {k: getattr(self, f"get_{k}")() for k in kwargs} - try: - self.set(**kwargs) - yield - finally: - self.set(**orig_vals) - - def findobj(self, match=None, include_self=True): - """ - Find artist objects. - - Recursively find all `.Artist` instances contained in the artist. - - Parameters - ---------- - match - A filter criterion for the matches. This can be - - - *None*: Return all objects contained in artist. - - A function with signature ``def match(artist: Artist) -> bool``. - The result will only contain artists for which the function - returns *True*. - - A class instance: e.g., `.Line2D`. 
The result will only contain - artists of this class or its subclasses (``isinstance`` check). - - include_self : bool - Include *self* in the list to be checked for a match. - - Returns - ------- - list of `.Artist` - - """ - if match is None: # always return True - def matchfunc(x): - return True - elif isinstance(match, type) and issubclass(match, Artist): - def matchfunc(x): - return isinstance(x, match) - elif callable(match): - matchfunc = match - else: - raise ValueError('match must be None, a matplotlib.artist.Artist ' - 'subclass, or a callable') - - artists = sum([c.findobj(matchfunc) for c in self.get_children()], []) - if include_self and matchfunc(self): - artists.append(self) - return artists - - def get_cursor_data(self, event): - """ - Return the cursor data for a given event. - - .. note:: - This method is intended to be overridden by artist subclasses. - As an end-user of Matplotlib you will most likely not call this - method yourself. - - Cursor data can be used by Artists to provide additional context - information for a given event. The default implementation just returns - *None*. - - Subclasses can override the method and return arbitrary data. However, - when doing so, they must ensure that `.format_cursor_data` can convert - the data to a string representation. - - The only current use case is displaying the z-value of an `.AxesImage` - in the status bar of a plot window, while moving the mouse. - - Parameters - ---------- - event : `matplotlib.backend_bases.MouseEvent` - - See Also - -------- - format_cursor_data - - """ - return None - - def format_cursor_data(self, data): - """ - Return a string representation of *data*. - - .. note:: - This method is intended to be overridden by artist subclasses. - As an end-user of Matplotlib you will most likely not call this - method yourself. - - The default implementation converts ints and floats and arrays of ints - and floats into a comma-separated string enclosed in square brackets, - unless the artist has an associated colorbar, in which case scalar - values are formatted using the colorbar's formatter. - - See Also - -------- - get_cursor_data - """ - if np.ndim(data) == 0 and isinstance(self, ScalarMappable): - # This block logically belongs to ScalarMappable, but can't be - # implemented in it because most ScalarMappable subclasses inherit - # from Artist first and from ScalarMappable second, so - # Artist.format_cursor_data would always have precedence over - # ScalarMappable.format_cursor_data. - n = self.cmap.N - if np.ma.getmask(data): - return "[]" - normed = self.norm(data) - if np.isfinite(normed): - if isinstance(self.norm, BoundaryNorm): - # not an invertible normalization mapping - cur_idx = np.argmin(np.abs(self.norm.boundaries - data)) - neigh_idx = max(0, cur_idx - 1) - # use max diff to prevent delta == 0 - delta = np.diff( - self.norm.boundaries[neigh_idx:cur_idx + 2] - ).max() - - else: - # Midpoints of neighboring color intervals. - neighbors = self.norm.inverse( - (int(normed * n) + np.array([0, 1])) / n) - delta = abs(neighbors - data).max() - g_sig_digits = cbook._g_sig_digits(data, delta) - else: - g_sig_digits = 3 # Consistent with default below. 
- return "[{:-#.{}g}]".format(data, g_sig_digits) - else: - try: - data[0] - except (TypeError, IndexError): - data = [data] - data_str = ', '.join('{:0.3g}'.format(item) for item in data - if isinstance(item, Number)) - return "[" + data_str + "]" - - def get_mouseover(self): - """ - Return whether this artist is queried for custom context information - when the mouse cursor moves over it. - """ - return self._mouseover - - def set_mouseover(self, mouseover): - """ - Set whether this artist is queried for custom context information when - the mouse cursor moves over it. - - Parameters - ---------- - mouseover : bool - - See Also - -------- - get_cursor_data - .ToolCursorPosition - .NavigationToolbar2 - """ - self._mouseover = bool(mouseover) - ax = self.axes - if ax: - if self._mouseover: - ax._mouseover_set.add(self) - else: - ax._mouseover_set.discard(self) - - mouseover = property(get_mouseover, set_mouseover) # backcompat. - - -def _get_tightbbox_for_layout_only(obj, *args, **kwargs): - """ - Matplotlib's `.Axes.get_tightbbox` and `.Axis.get_tightbbox` support a - *for_layout_only* kwarg; this helper tries to use the kwarg but skips it - when encountering third-party subclasses that do not support it. - """ - try: - return obj.get_tightbbox(*args, **{**kwargs, "for_layout_only": True}) - except TypeError: - return obj.get_tightbbox(*args, **kwargs) - - -class ArtistInspector: - """ - A helper class to inspect an `~matplotlib.artist.Artist` and return - information about its settable properties and their current values. - """ - - def __init__(self, o): - r""" - Initialize the artist inspector with an `Artist` or an iterable of - `Artist`\s. If an iterable is used, we assume it is a homogeneous - sequence (all `Artist`\s are of the same type) and it is your - responsibility to make sure this is so. - """ - if not isinstance(o, Artist): - if np.iterable(o): - o = list(o) - if len(o): - o = o[0] - - self.oorig = o - if not isinstance(o, type): - o = type(o) - self.o = o - - self.aliasd = self.get_aliases() - - def get_aliases(self): - """ - Get a dict mapping property fullnames to sets of aliases for each alias - in the :class:`~matplotlib.artist.ArtistInspector`. - - e.g., for lines:: - - {'markerfacecolor': {'mfc'}, - 'linewidth' : {'lw'}, - } - """ - names = [name for name in dir(self.o) - if name.startswith(('set_', 'get_')) - and callable(getattr(self.o, name))] - aliases = {} - for name in names: - func = getattr(self.o, name) - if not self.is_alias(func): - continue - propname = re.search("`({}.*)`".format(name[:4]), # get_.*/set_.* - inspect.getdoc(func)).group(1) - aliases.setdefault(propname[4:], set()).add(name[4:]) - return aliases - - _get_valid_values_regex = re.compile( - r"\n\s*(?:\.\.\s+)?ACCEPTS:\s*((?:.|\n)*?)(?:$|(?:\n\n))" - ) - - def get_valid_values(self, attr): - """ - Get the legal arguments for the setter associated with *attr*. - - This is done by querying the docstring of the setter for a line that - begins with "ACCEPTS:" or ".. ACCEPTS:", and then by looking for a - numpydoc-style documentation for the setter's first argument. 
- """ - - name = 'set_%s' % attr - if not hasattr(self.o, name): - raise AttributeError('%s has no function %s' % (self.o, name)) - func = getattr(self.o, name) - - docstring = inspect.getdoc(func) - if docstring is None: - return 'unknown' - - if docstring.startswith('Alias for '): - return None - - match = self._get_valid_values_regex.search(docstring) - if match is not None: - return re.sub("\n *", " ", match.group(1)) - - # Much faster than list(inspect.signature(func).parameters)[1], - # although barely relevant wrt. matplotlib's total import time. - param_name = func.__code__.co_varnames[1] - # We could set the presence * based on whether the parameter is a - # varargs (it can't be a varkwargs) but it's not really worth it. - match = re.search(r"(?m)^ *\*?{} : (.+)".format(param_name), docstring) - if match: - return match.group(1) - - return 'unknown' - - def _replace_path(self, source_class): - """ - Changes the full path to the public API path that is used - in sphinx. This is needed for links to work. - """ - replace_dict = {'_base._AxesBase': 'Axes', - '_axes.Axes': 'Axes'} - for key, value in replace_dict.items(): - source_class = source_class.replace(key, value) - return source_class - - def get_setters(self): - """ - Get the attribute strings with setters for object. - - For example, for a line, return ``['markerfacecolor', 'linewidth', - ....]``. - """ - setters = [] - for name in dir(self.o): - if not name.startswith('set_'): - continue - func = getattr(self.o, name) - if (not callable(func) - or self.number_of_parameters(func) < 2 - or self.is_alias(func)): - continue - setters.append(name[4:]) - return setters - - @staticmethod - @lru_cache(maxsize=None) - def number_of_parameters(func): - """Return number of parameters of the callable *func*.""" - return len(inspect.signature(func).parameters) - - @staticmethod - @lru_cache(maxsize=None) - def is_alias(method): - """ - Return whether the object *method* is an alias for another method. - """ - - ds = inspect.getdoc(method) - if ds is None: - return False - - return ds.startswith('Alias for ') - - def aliased_name(self, s): - """ - Return 'PROPNAME or alias' if *s* has an alias, else return 'PROPNAME'. - - For example, for the line markerfacecolor property, which has an - alias, return 'markerfacecolor or mfc' and for the transform - property, which does not, return 'transform'. - """ - aliases = ''.join(' or %s' % x for x in sorted(self.aliasd.get(s, []))) - return s + aliases - - _NOT_LINKABLE = { - # A set of property setter methods that are not available in our - # current docs. This is a workaround used to prevent trying to link - # these setters which would lead to "target reference not found" - # warnings during doc build. - 'matplotlib.image._ImageBase.set_alpha', - 'matplotlib.image._ImageBase.set_array', - 'matplotlib.image._ImageBase.set_data', - 'matplotlib.image._ImageBase.set_filternorm', - 'matplotlib.image._ImageBase.set_filterrad', - 'matplotlib.image._ImageBase.set_interpolation', - 'matplotlib.image._ImageBase.set_interpolation_stage', - 'matplotlib.image._ImageBase.set_resample', - 'matplotlib.text._AnnotationBase.set_annotation_clip', - } - - def aliased_name_rest(self, s, target): - """ - Return 'PROPNAME or alias' if *s* has an alias, else return 'PROPNAME', - formatted for reST. - - For example, for the line markerfacecolor property, which has an - alias, return 'markerfacecolor or mfc' and for the transform - property, which does not, return 'transform'. 
- """ - # workaround to prevent "reference target not found" - if target in self._NOT_LINKABLE: - return f'``{s}``' - - aliases = ''.join(' or %s' % x for x in sorted(self.aliasd.get(s, []))) - return ':meth:`%s <%s>`%s' % (s, target, aliases) - - def pprint_setters(self, prop=None, leadingspace=2): - """ - If *prop* is *None*, return a list of strings of all settable - properties and their valid values. - - If *prop* is not *None*, it is a valid property name and that - property will be returned as a string of property : valid - values. - """ - if leadingspace: - pad = ' ' * leadingspace - else: - pad = '' - if prop is not None: - accepts = self.get_valid_values(prop) - return '%s%s: %s' % (pad, prop, accepts) - - lines = [] - for prop in sorted(self.get_setters()): - accepts = self.get_valid_values(prop) - name = self.aliased_name(prop) - lines.append('%s%s: %s' % (pad, name, accepts)) - return lines - - def pprint_setters_rest(self, prop=None, leadingspace=4): - """ - If *prop* is *None*, return a list of reST-formatted strings of all - settable properties and their valid values. - - If *prop* is not *None*, it is a valid property name and that - property will be returned as a string of "property : valid" - values. - """ - if leadingspace: - pad = ' ' * leadingspace - else: - pad = '' - if prop is not None: - accepts = self.get_valid_values(prop) - return '%s%s: %s' % (pad, prop, accepts) - - prop_and_qualnames = [] - for prop in sorted(self.get_setters()): - # Find the parent method which actually provides the docstring. - for cls in self.o.__mro__: - method = getattr(cls, f"set_{prop}", None) - if method and method.__doc__ is not None: - break - else: # No docstring available. - method = getattr(self.o, f"set_{prop}") - prop_and_qualnames.append( - (prop, f"{method.__module__}.{method.__qualname__}")) - - names = [self.aliased_name_rest(prop, target) - .replace('_base._AxesBase', 'Axes') - .replace('_axes.Axes', 'Axes') - for prop, target in prop_and_qualnames] - accepts = [self.get_valid_values(prop) - for prop, _ in prop_and_qualnames] - - col0_len = max(len(n) for n in names) - col1_len = max(len(a) for a in accepts) - table_formatstr = pad + ' ' + '=' * col0_len + ' ' + '=' * col1_len - - return [ - '', - pad + '.. table::', - pad + ' :class: property-table', - '', - table_formatstr, - pad + ' ' + 'Property'.ljust(col0_len) - + ' ' + 'Description'.ljust(col1_len), - table_formatstr, - *[pad + ' ' + n.ljust(col0_len) + ' ' + a.ljust(col1_len) - for n, a in zip(names, accepts)], - table_formatstr, - '', - ] - - def properties(self): - """Return a dictionary mapping property name -> value.""" - o = self.oorig - getters = [name for name in dir(o) - if name.startswith('get_') and callable(getattr(o, name))] - getters.sort() - d = {} - for name in getters: - func = getattr(o, name) - if self.is_alias(func): - continue - try: - with warnings.catch_warnings(): - warnings.simplefilter('ignore') - val = func() - except Exception: - continue - else: - d[name[4:]] = val - return d - - def pprint_getters(self): - """Return the getters and actual values as list of strings.""" - lines = [] - for name, val in sorted(self.properties().items()): - if getattr(val, 'shape', ()) != () and len(val) > 6: - s = str(val[:6]) + '...' - else: - s = str(val) - s = s.replace('\n', ' ') - if len(s) > 50: - s = s[:50] + '...' 
- name = self.aliased_name(name) - lines.append(' %s = %s' % (name, s)) - return lines - - -def getp(obj, property=None): - """ - Return the value of an `.Artist`'s *property*, or print all of them. - - Parameters - ---------- - obj : `.Artist` - The queried artist; e.g., a `.Line2D`, a `.Text`, or an `~.axes.Axes`. - - property : str or None, default: None - If *property* is 'somename', this function returns - ``obj.get_somename()``. - - If it's None (or unset), it *prints* all gettable properties from - *obj*. Many properties have aliases for shorter typing, e.g. 'lw' is - an alias for 'linewidth'. In the output, aliases and full property - names will be listed as: - - property or alias = value - - e.g.: - - linewidth or lw = 2 - - See Also - -------- - setp - """ - if property is None: - insp = ArtistInspector(obj) - ret = insp.pprint_getters() - print('\n'.join(ret)) - return - return getattr(obj, 'get_' + property)() - -# alias -get = getp - - -def setp(obj, *args, file=None, **kwargs): - """ - Set one or more properties on an `.Artist`, or list allowed values. - - Parameters - ---------- - obj : `.Artist` or list of `.Artist` - The artist(s) whose properties are being set or queried. When setting - properties, all artists are affected; when querying the allowed values, - only the first instance in the sequence is queried. - - For example, two lines can be made thicker and red with a single call: - - >>> x = arange(0, 1, 0.01) - >>> lines = plot(x, sin(2*pi*x), x, sin(4*pi*x)) - >>> setp(lines, linewidth=2, color='r') - - file : file-like, default: `sys.stdout` - Where `setp` writes its output when asked to list allowed values. - - >>> with open('output.log') as file: - ... setp(line, file=file) - - The default, ``None``, means `sys.stdout`. - - *args, **kwargs - The properties to set. The following combinations are supported: - - - Set the linestyle of a line to be dashed: - - >>> line, = plot([1, 2, 3]) - >>> setp(line, linestyle='--') - - - Set multiple properties at once: - - >>> setp(line, linewidth=2, color='r') - - - List allowed values for a line's linestyle: - - >>> setp(line, 'linestyle') - linestyle: {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} - - - List all properties that can be set, and their allowed values: - - >>> setp(line) - agg_filter: a filter function, ... - [long output listing omitted] - - `setp` also supports MATLAB style string/value pairs. For example, the - following are equivalent: - - >>> setp(lines, 'linewidth', 2, 'color', 'r') # MATLAB style - >>> setp(lines, linewidth=2, color='r') # Python style - - See Also - -------- - getp - """ - - if isinstance(obj, Artist): - objs = [obj] - else: - objs = list(cbook.flatten(obj)) - - if not objs: - return - - insp = ArtistInspector(objs[0]) - - if not kwargs and len(args) < 2: - if args: - print(insp.pprint_setters(prop=args[0]), file=file) - else: - print('\n'.join(insp.pprint_setters()), file=file) - return - - if len(args) % 2: - raise ValueError('The set args must be string, value pairs') - - funcvals = dict(zip(args[::2], args[1::2])) - ret = [o.update(funcvals) for o in objs] + [o.set(**kwargs) for o in objs] - return list(cbook.flatten(ret)) - - -def kwdoc(artist): - r""" - Inspect an `~matplotlib.artist.Artist` class (using `.ArtistInspector`) and - return information about its settable properties and their current values. 
- - Parameters - ---------- - artist : `~matplotlib.artist.Artist` or an iterable of `Artist`\s - - Returns - ------- - str - The settable properties of *artist*, as plain text if - :rc:`docstring.hardcopy` is False and as a rst table (intended for - use in Sphinx) if it is True. - """ - ai = ArtistInspector(artist) - return ('\n'.join(ai.pprint_setters_rest(leadingspace=4)) - if mpl.rcParams['docstring.hardcopy'] else - 'Properties:\n' + '\n'.join(ai.pprint_setters(leadingspace=4))) - -# We defer this to the end of them module, because it needs ArtistInspector -# to be defined. -Artist._update_set_signature_and_docstring() diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Anaconda 1 Le Prdateur FRENCH DVDRIP.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Anaconda 1 Le Prdateur FRENCH DVDRIP.md deleted file mode 100644 index 07db30e79e33e7b7bea139b4a2dce5c23d16ccb9..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Anaconda 1 Le Prdateur FRENCH DVDRIP.md +++ /dev/null @@ -1,48 +0,0 @@ -
        -

        Anaconda 1 Le Prdateur FRENCH DVDRIP: A horror film not to be missed

        -

        Anaconda 1 Le Prdateur FRENCH DVDRIP is the French title of the film Anaconda, released in 1997 and directed by Luis Llosa. It is a horror film about a group of documentary filmmakers who set out in search of a lost tribe in the Amazon rainforest. Instead, they come face to face with a fearsome predator: a giant anaconda that does not hesitate to attack and devour its human prey.

        -

        The film's synopsis

        -

        The film tells the story of Terri Flores (Jennifer Lopez), a director who wants to make a documentary about the Shirishamas, an indigenous tribe living in the Amazon jungle. She is accompanied by her crew: anthropologist Steven Cale (Eric Stoltz), cameraman Danny Rich (Ice Cube), producer Warren Westridge (Jonathan Hyde), guide Mateo (Vincent Castellanos) and sound technician Gary Dixon (Owen Wilson).

        -

        Anaconda 1 Le Prdateur FRENCH DVDRIP


        Download File: https://bytlly.com/2uGxMC



        -

        Along the way, they meet Paul Sarone (Jon Voight), a mysterious hunter who claims to know the location of the Shirishamas. He offers to guide them in exchange for their help repairing his boat. In reality, Sarone has another goal: to capture a giant anaconda, the largest snake in the world, which he regards as the king of the jungle.

        -

        Sarone then manipulates and betrays the group to draw them into his deadly trap. One by one, the members of the crew are attacked and swallowed by the anaconda, which is more than 12 metres long and weighs more than 200 kilos. Terri and Danny must fight to survive and escape both the snake and the crazed hunter.

        -

        Reviews of the film

        -

        Anaconda 1 Le Prdateur FRENCH DVDRIP received mixed reviews from viewers and critics. Some enjoyed the film for its entertainment value, its action scenes and its special effects. Others hated it for its implausible script, ridiculous dialogue and unconvincing actors. The film was a commercial success, grossing more than 136 million dollars at the worldwide box office on a budget of 45 million. It also spawned several sequels and spin-offs, including Anacondas : À la poursuite de l'orchidée de sang (2004), Anaconda 3 : L'Héritier (2008), Anacondas 4 : La Piste du sang (2009) and Lake Placid vs. Anaconda (2015).

        -

        Practical information about the film

        -

        If you want to watch Anaconda 1 Le Prdateur FRENCH DVDRIP, here is some practical information that might interest you:

        -
          -
        • The film is available for streaming or download on several legal platforms, such as Netflix, Amazon Prime Video, iTunes or Google Play.
        • -
        • The film is rated "forbidden to viewers under 12" in France, because of its violent and bloody scenes.
        • -
        • The film runs for about 89 minutes and is divided into 12 chapters.
        • -
        • The film was shot mainly in Brazil, Peru and the United States.
        • -
        • The film was nominated for six Razzie Awards, the prizes given to the worst films of the year, including worst picture, worst director, worst actor (Jon Voight) and worst screen couple (Jon Voight and the anaconda).
        • -
        -

        In short:
        - Anaconda 1 Le Prdateur FRENCH DVDRIP is the French title of the 1997 horror film Anaconda, directed by Luis Llosa.
        - The film follows a group of documentary filmmakers who go to the Amazon rainforest to find a lost tribe, but encounter a giant anaconda that attacks and devours them.
        - The film received mixed reviews from critics and audiences, but was a commercial success and spawned several sequels and spin-offs.
        - The film is available on various legal platforms for streaming or downloading, and is rated 12+ in France for its violent and bloody scenes.
        - The film lasts about 89 minutes and was filmed mainly in Brazil, Peru and the United States.
        - The film was nominated for six Razzie Awards, the awards that reward the worst films of the year.

        Here are a few behind-the-scenes anecdotes from the shoot:
      • Jon Voight improvised the scene in which he spits on Terri after being regurgitated by the anaconda. Jennifer Lopez was not told in advance, and her disgusted reaction was genuine.
      • -
      • Ice Cube almost did not appear in the film because he was on tour with his rap group. He accepted the role on the condition that the shooting schedule be adapted to his availability.
      • -
      • Jennifer Lopez and Jon Voight did not get along on set. They had opposing political views and argued frequently. Jennifer Lopez even asked for Jon Voight's character to be killed off earlier in the film, but her request was refused.
      • -
      -

      The film's sequels and spin-offs

      -

      Anaconda 1 Le Prdateur FRENCH DVDRIP was such a success that it gave rise to several sequels and spin-offs exploring other corners of the giant-anaconda universe. Here are some of the films that followed the first instalment:

      -
        -
      • Anacondas : À la poursuite de l'orchidée de sang (2004): an indirect sequel to the first film. It follows a group of scientists who travel to Burma to find a rare orchid said to have medicinal properties, only to run into anacondas even bigger and more dangerous than those of the first film.
      • -
      • Anaconda 3 : L'Héritier (2008): a TV movie that follows on from the second film. A billionaire funds genetic research on anacondas in a secret laboratory in Romania, until the snakes escape and spread terror across the region.
      • -
      • Anacondas 4 : La Piste du sang (2009): a TV movie that follows on from the third film. A team of mercenaries sets out to hunt the anacondas that escaped from the laboratory, only to discover that the snakes have bred and produced a new generation of hybrids.
      • -
      • Lake Placid vs. Anaconda (2015): a crossover between the Anaconda and Lake Placid franchises, which feature giant anacondas and giant crocodiles respectively. A team of hunters must face both species of monster in an isolated lake.
      • -
      -

      Conclusion

      -

      Anaconda 1 Le Prdateur FRENCH DVDRIP is a horror film that will make you shiver in front of your screen. If you like films featuring giant, terrifying animals, you will not be disappointed: it plunges you into a thrilling adventure in the heart of the Amazon jungle. But if you are looking for a credible script, intelligent dialogue and talented actors, you may be let down by a film that does not shine for its artistic quality. It is up to you whether you want to give it a try.

      -

      -
      -
      \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/EPLAN P8 MACROS SIEMENS Download ((BETTER)).md b/spaces/lincquiQcaudo/Top-20-Diffusion/EPLAN P8 MACROS SIEMENS Download ((BETTER)).md deleted file mode 100644 index 6b8468eac701a9c2aba4eeeb75e4d06a050b377e..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/EPLAN P8 MACROS SIEMENS Download ((BETTER)).md +++ /dev/null @@ -1,41 +0,0 @@ -
      -

      How to Download and Use EPLAN P8 Macros for Siemens Products

      -

      EPLAN P8 macros are files that contain information about electrical components and devices, such as symbols, part data, connection points, and documentation. They can help you design and document electrical projects faster and more efficiently.

      -

      Siemens provides EPLAN P8 macros for many of its products, such as SINAMICS converters, SIMOTION controllers, MICROMASTER frequency converters, and more. You can download these macros from the Siemens Industry Support website or the CAx Shopping Cart.

      -

      EPLAN P8 MACROS SIEMENS DOWNLOAD


      Download File: https://bytlly.com/2uGxNC



      -

      How to Download EPLAN P8 Macros from Siemens Industry Support

      -

      To download EPLAN P8 macros from Siemens Industry Support, follow these steps (a scripted download sketch follows the list):

      -
        -
      1. Go to https://support.industry.siemens.com/cs/start/en and enter the product name or number in the search box.
      2. -
      3. Select the product from the search results and scroll down to the "CAx data" section.
      4. -
      5. Click on the "EPLAN Electric P8" link and choose the desired macro format (EDZ or ZIP).
      6. -
      7. Click on the "Download" button and save the file to your computer.
      8. -
      -
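If you prefer to script that last download step, the sketch below saves a macro archive with Python. It assumes you have already copied a direct download URL from the support page and that the link is reachable without logging in; both the URL and the file name are placeholders, not real Siemens links, and downloads that sit behind authentication will not work this way.

```python
# Sketch: save an EPLAN macro archive from a direct URL (placeholder values).
import requests

# Assumption: this URL was copied by hand from the Siemens Industry Support
# page and is publicly reachable; replace it with your own link.
url = "https://example.com/path/to/sinamics_macros.edz"
target = "sinamics_macros.edz"

response = requests.get(url, timeout=60)
response.raise_for_status()  # stop on HTTP errors instead of saving an error page

with open(target, "wb") as f:
    f.write(response.content)

print(f"Saved {target} ({len(response.content)} bytes)")
```

Checking the HTTP status before writing avoids silently saving an HTML error page under a macro file name.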

      How to Download EPLAN P8 Macros from CAx Shopping Cart

      -

      To download EPLAN P8 macros from CAx Shopping Cart, follow these steps:

      -
        -
      1. Go to https://mall.industry.siemens.com/mall/en/WW/Catalog/CAx and log in with your Siemens account.
      2. -
      3. Select the product category and subcategory from the left menu.
      4. -
      5. Select the product from the list and click on the "Add to cart" button.
      6. -
      7. Click on the "Shopping cart" icon at the top right corner and review your selection.
      8. -
      9. Click on the "Download" button and save the file to your computer.
      10. -
      -

      How to Use EPLAN P8 Macros

      -

      To use EPLAN P8 macros, follow these steps (a small helper script for checking downloaded archives is sketched after the list):

      -
        -
      1. Open EPLAN Electric P8 and create a new project or open an existing one.
      2. -
      3. Go to "Utilities > Parts > Import" and select the macro file (EDZ or ZIP) that you downloaded.
      4. -
      5. Select the parts that you want to import and click on "OK". The parts will be added to your parts database.
      6. -
      7. Go to "Insert > Symbol > Device" and select the part that you want to insert in your project.
      8. -
      9. Place the symbol on your schematic page and connect it as needed.
      10. -
      11. You can also edit the properties, documentation, and representation of the part by double-clicking on it or using the context menu.
      12. -
      -
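Before importing, it can be useful to check what a downloaded ZIP archive actually contains. The sketch below uses Python's standard zipfile module to list and extract an archive; the folder and file names are placeholders, and it only applies to plain ZIP downloads, since EDZ files are EPLAN's own exchange format and are imported directly.

```python
# Sketch: inspect and extract a downloaded macro archive (placeholder paths).
import zipfile
from pathlib import Path

archive = Path("downloads/siemens_macros.zip")   # assumed download location
target_dir = Path("downloads/siemens_macros")    # extraction folder

with zipfile.ZipFile(archive) as zf:
    # List the contents so you know which macro files are inside.
    for name in zf.namelist():
        print(name)
    # Extract everything so the files can be reviewed before the EPLAN import.
    target_dir.mkdir(parents=True, exist_ok=True)
    zf.extractall(target_dir)

print(f"Extracted {archive.name} to {target_dir}")
```

Listing the archive first makes it easier to see which files to point the import dialog at.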

      You can find more information about EPLAN P8 macros for Siemens products on the Siemens Industry Support website linked above and in the EPLAN Electric P8 documentation.

      -

      -

      -
      -
      \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Manufactura De Clase Mundial Richard Schonberger Pdf 27.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Manufactura De Clase Mundial Richard Schonberger Pdf 27.md deleted file mode 100644 index f2c605165b3612b78d9c1465f594ff8ba3a98154..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Manufactura De Clase Mundial Richard Schonberger Pdf 27.md +++ /dev/null @@ -1,12 +0,0 @@ -

      Manufactura De Clase Mundial Richard Schonberger Pdf 27


      Download 🗸🗸🗸 https://bytlly.com/2uGwXN



      -
      Manufactura De Clase Mundial Richard Schonberger Pdf 27

      Acima De Culatra Negra - Vias 1 & 2 - Desde La L. A massive Roman fetish when it comes in a trickle, abundant and hot, and that is what the word conveys.

      The company was organised as a joint-stock company in Novo Mesto, in the Austro-Hungarian Empire, on 3 May 1867. The first production facilities were built at nearby Glogau in the Bohemian-Moravian borderland. The settlement became independent from Bohemia in 1871 and the factory became independent as well, being known under the name Bohemia Glassworks. This glass factory was turned into a modern glassworks, located in Nowa Huta, in 1920. In the beginning it was an independent enterprise, but from the 1970s it was incorporated into the group of UAG (Technical University of Olomouc) and is now known as Závody v oblasti techniky UAG. The first mineral water was sold in September 1876. In 1912, the company established an interconnecting track from Lomnice nad Ipľúm to Moštilffy-Hámre. The interconnecting track was completed in 1915. The company closed its mineral water plant in Lomnice nad Ipľúm in 1920.

      Spain in Turmoil and Violence - One day it was a bustling metropolis, a "paradise" on Earth. In a stone mill, only one kilometre from the front line, the first shells of the First World War fell. When the war was over, the city was "wiped off the map". To this day, this is the unforgettable story of the capital of the Spanish revolution, the old Barcelona, a lively and beautiful old-fashioned barrio, a glimpse into one of the most brutal decades of the 20th century.

      From Social Health to a Tribute to Catholicism - Institutions for the Life of Science (1906), in its 1912 edition, the Nobel Prize of ...
      -
      -
      -

      diff --git a/spaces/lixq/bingo61/src/components/voice.tsx b/spaces/lixq/bingo61/src/components/voice.tsx deleted file mode 100644 index 074d0e145229947282a472bd84f6578cf0b3c71c..0000000000000000000000000000000000000000 --- a/spaces/lixq/bingo61/src/components/voice.tsx +++ /dev/null @@ -1,52 +0,0 @@ -import React, { useEffect } from 'react' -import { useSetAtom } from 'jotai' -import { useBing } from '@/lib/hooks/use-bing' -import Image from 'next/image' -import VoiceIcon from '@/assets/images/voice.svg' -import VoiceButton from './ui/voice' -import { SR } from '@/lib/bots/bing/sr' -import { voiceListenAtom } from '@/state' - -const sr = new SR(['发送', '清空', '退出']) - -const Voice = ({ setInput, input, sendMessage, isSpeaking }: Pick, 'setInput' | 'sendMessage' | 'input' | 'isSpeaking'>) => { - const setListen = useSetAtom(voiceListenAtom) - useEffect(() => { - if (sr.listening) return - sr.transcript = !isSpeaking - }, [isSpeaking]) - - useEffect(() => { - sr.onchange = (msg: string, command?: string) => { - switch (command) { - case '退出': - sr.stop() - break; - case '发送': - sendMessage(input) - case '清空': - setInput('') - break; - default: - setInput(input + msg) - } - } - }, [input]) - - const switchSR = (enable: boolean = false) => { - setListen(enable) - if (enable) { - sr.start() - } else { - sr.stop() - } - } - - return sr.listening ? ( - switchSR(false)} /> - ) : ( - start voice switchSR(true)} /> - ) -}; - -export default Voice; diff --git a/spaces/ljjggr/bingo/tests/kblob.ts b/spaces/ljjggr/bingo/tests/kblob.ts deleted file mode 100644 index 9e15b41c1c94a690beb61b23cdb42fc78767ccd2..0000000000000000000000000000000000000000 --- a/spaces/ljjggr/bingo/tests/kblob.ts +++ /dev/null @@ -1,27 +0,0 @@ -import FormData from 'form-data' - -import { fetch } from '@/lib/isomorphic' - -const formData = new FormData() - -const knowledgeRequest = {"imageInfo":{"url":"https://www.baidu.com/img/PCfb_5bf082d29588c07f842ccde3f97243ea.png"},"knowledgeRequest":{"invokedSkills":["ImageById"],"subscriptionId":"Bing.Chat.Multimodal","invokedSkillsRequestData":{"enableFaceBlur":true},"convoData":{"convoid":"51D|BingProdUnAuthenticatedUsers|E3DCA904FF236C67C3450163BCEC64CFF3F618CC8A4AFD75FD518F5ED0ADA080","convotone":"Creative"}}} - -formData.append('knowledgeRequest', JSON.stringify(knowledgeRequest)) - - -fetch('https://bing.vcanbb.top/images/kblob', - { - method: 'POST', - body: formData.getBuffer(), - headers: { - "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"", - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-platform": "\"Windows\"", - "Referer": "https://bing.vcanbb.top/web/index.html", - "Referrer-Policy": "origin-when-cross-origin", - ...formData.getHeaders() - } - - } -).then(res => res.text()) -.then(res => console.log('res', res)) diff --git a/spaces/llmonitor/benchmarks/app/prompts/submit/page.js b/spaces/llmonitor/benchmarks/app/prompts/submit/page.js deleted file mode 100644 index 4317fe1e61699f0703f8f520a0e5433812a4771b..0000000000000000000000000000000000000000 --- a/spaces/llmonitor/benchmarks/app/prompts/submit/page.js +++ /dev/null @@ -1,113 +0,0 @@ -import db from "@/utils/db" -import { cookies } from "next/headers" - -import Link from "next/link" -import { redirect } from "next/navigation" - -import jwt from "jsonwebtoken" - -export default async function Submit() { - // Get user session token - - async function create(formData) { - "use server" - - const cookiesList = cookies() - const token = cookiesList.get("token") - - console.log("token", 
token) - - if (!token) throw new Error("not logged") - - const { userId } = jwt.verify(token.value, process.env.JWT_SECRET) - - const [userObj] = await db`SELECT * FROM users WHERE id = ${userId}` - if (!userObj) throw new Error("user not found") - - const text = formData.get("prompt") - const slug = formData.get("slug") - - if (text.length <= 20 || text.length > 1000) - throw new Error("prompt too long or too short") - - const [prompt] = - await db`INSERT INTO prompts (text, submitter, slug) VALUES (${text}, ${userObj.id}, ${slug}) RETURNING *` - - const [vote] = - await db`INSERT INTO votes ("user", prompt) VALUES (${userObj.id}, ${prompt.id}) RETURNING *` - - redirect(`/prompts`) - - // send email to user to confirm submission - } - - return ( - <> -

      Submit a new prompt to be included in the benchmark.

      -

      - Each week, the highest rated prompt will become part of the benchmark. -

      -

      What makes a good prompt:

      -
        -
      • - Can be broken down into rubrics & evaluated -
      • -
      • - Is original and not popular on the internet (unlikely to already be part of the benchmark) -
      • -
      • Is not too long (max 1000 characters)
      • -
      -
      - -
      - - - - -
      - - -